Sample records for efficiency improvement models

  1. Multiscale Methods for Accurate, Efficient, and Scale-Aware Models of the Earth System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldhaber, Steve; Holland, Marika

    The major goal of this project was to contribute improvements to the infrastructure of an Earth System Model in order to support research in the Multiscale Methods for Accurate, Efficient, and Scale-Aware models of the Earth System project. In support of this, the NCAR team accomplished two main tasks: improving input/output performance of the model and improving atmospheric model simulation quality. Improvement of the performance and scalability of data input and diagnostic output within the model required a new infrastructure which can efficiently handle the unstructured grids common in multiscale simulations. This allows for a more computationally efficient model, enabling more years of Earth System simulation. The quality of the model simulations was improved by reducing grid-point noise in the spectral element version of the Community Atmosphere Model (CAM-SE). This was achieved by running the physics of the model using grid-cell data on a finite-volume grid.

  2. Improving smoothing efficiency of rigid conformal polishing tool using time-dependent smoothing evaluation model

    NASA Astrophysics Data System (ADS)

    Song, Chi; Zhang, Xuejun; Zhang, Xin; Hu, Haifei; Zeng, Xuefeng

    2017-06-01

    A rigid conformal (RC) lap can smooth mid-spatial-frequency (MSF) errors, which are naturally smaller than the tool size, while still removing large-scale errors in a short time. However, the RC-lap smoothing efficiency is poorer than expected, and existing smoothing models cannot explicitly specify the methods to improve this efficiency. We presented an explicit time-dependent smoothing evaluation model that contained specific smoothing parameters directly derived from the parametric smoothing model and the Preston equation. Based on the time-dependent model, we proposed a strategy to improve the RC-lap smoothing efficiency, which incorporated the theoretical model, tool optimization, and efficiency limit determination. Two sets of smoothing experiments were performed to demonstrate the smoothing efficiency achieved using the time-dependent smoothing model. A high, theory-like tool influence function and a limiting tool speed of 300 RPM were obtained.
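    For context, the Preston relation that underlies such time-dependent smoothing models links local material removal to contact pressure and relative speed; a minimal statement (the paper's specific parametrization of smoothing efficiency is not reproduced here) is

    \[ \frac{dz}{dt} = k_p \, P(x,y) \, v(x,y), \]

    where $z$ is the removed depth, $k_p$ the Preston coefficient, $P$ the local contact pressure, and $v$ the relative tool-surface speed.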

  3. Rapid Optimization of External Quantum Efficiency of Thin Film Solar Cells Using Surrogate Modeling of Absorptivity.

    PubMed

    Kaya, Mine; Hajimirza, Shima

    2018-05-25

    This paper uses surrogate modeling for very fast design of thin film solar cells with improved solar-to-electricity conversion efficiency. We demonstrate that the wavelength-specific optical absorptivity of a thin film multi-layered amorphous-silicon-based solar cell can be modeled accurately with Neural Networks and can be efficiently approximated as a function of cell geometry and wavelength. Consequently, the external quantum efficiency can be computed by averaging surrogate absorption and carrier recombination contributions over the entire irradiance spectrum in an efficient way. Using this framework, we optimize a multi-layer structure consisting of ITO front coating, metallic back-reflector and oxide layers for achieving maximum efficiency. Our required computation time for an entire model fitting and optimization is 5 to 20 times less than the best previous optimization results based on direct Finite Difference Time Domain (FDTD) simulations, therefore proving the value of surrogate modeling. The resulting optimization solution suggests at least 50% improvement in the external quantum efficiency compared to bare silicon, and 25% improvement compared to a random design.
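    As an illustration of the surrogate idea only (not the authors' code; the feature layout, wavelength grid, and the simple collection-efficiency stand-in for recombination losses below are assumptions), a neural-network regressor can be fit to sampled absorptivity data and then spectrum-averaged to estimate an external quantum efficiency:

    ```python
    # Sketch: NN surrogate for wavelength-resolved absorptivity, then a
    # spectrum-weighted EQE estimate. Training data (geometry, wavelength,
    # absorptivity) would come from full-wave (e.g., FDTD) simulations.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Hypothetical training set: columns = [layer thicknesses..., wavelength_nm]
    X_train = np.random.rand(2000, 4)          # placeholder for simulated samples
    y_train = np.random.rand(2000)             # placeholder absorptivity in [0, 1]

    surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
    surrogate.fit(X_train, y_train)

    def estimate_eqe(geometry, wavelengths_nm, photon_flux, collection_eff=0.9):
        """Spectrum-weighted EQE from the surrogate absorptivity.

        collection_eff is a crude stand-in for carrier-recombination losses;
        the paper treats that contribution separately.
        """
        feats = np.column_stack([np.tile(geometry, (len(wavelengths_nm), 1)),
                                 wavelengths_nm])
        absorptivity = np.clip(surrogate.predict(feats), 0.0, 1.0)
        eqe_lambda = absorptivity * collection_eff
        # photon-flux-weighted average over the irradiance spectrum
        return np.trapz(eqe_lambda * photon_flux, wavelengths_nm) / \
               np.trapz(photon_flux, wavelengths_nm)
    ```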

  4. Measuring and Benchmarking Technical Efficiency of Public Hospitals in Tianjin, China

    PubMed Central

    Li, Hao; Dong, Siping

    2015-01-01

    China has long been stuck in applying traditional data envelopment analysis (DEA) models to measure the technical efficiency of public hospitals without bias correction of efficiency scores. In this article, we have introduced the Bootstrap-DEA approach from the international literature to analyze the technical efficiency of public hospitals in Tianjin (China) and tried to improve the application of this method for benchmarking and inter-organizational learning. It is found that the bias-corrected efficiency scores of Bootstrap-DEA differ significantly from those of the traditional Banker, Charnes, and Cooper (BCC) model, which means that Chinese researchers need to update their DEA models for more scientific calculation of hospital efficiency scores. Our research has helped narrow the gap between China and the international community in the measurement and improvement of relative hospital efficiency. It is suggested that Bootstrap-DEA be widely applied in future research to measure the relative efficiency and productivity of Chinese hospitals, so as to better support efficiency improvement and related decision making. PMID:26396090
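    A minimal sketch of the bias-correction idea behind Bootstrap-DEA (Simar-Wilson style); the `dea_scores(inputs, outputs)` helper is hypothetical, and the bandwidth choice and reflection step of the full algorithm are simplified away:

    ```python
    import numpy as np

    def bootstrap_bias_corrected(inputs, outputs, dea_scores, B=2000, h=0.05):
        """Crude bootstrap bias correction for input-oriented DEA scores.

        inputs     : (n_dmus, n_inputs) array; outputs : (n_dmus, n_outputs)
        dea_scores : callable returning efficiency scores in (0, 1]
        h          : smoothing bandwidth (a real application estimates it)
        """
        theta_hat = dea_scores(inputs, outputs)
        n = len(theta_hat)
        boot = np.empty((B, n))
        rng = np.random.default_rng(0)
        for b in range(B):
            # smoothed bootstrap resample of the efficiency scores
            theta_star = rng.choice(theta_hat, size=n) + h * rng.standard_normal(n)
            theta_star = np.clip(theta_star, 1e-6, 1.0)
            # rescale inputs so the resampled frontier generates the pseudo-data
            inputs_star = inputs * (theta_hat / theta_star)[:, None]
            boot[b] = dea_scores(inputs_star, outputs)
        bias = boot.mean(axis=0) - theta_hat
        return theta_hat - bias          # bias-corrected scores
    ```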

  5. Efficiency of endoscopy units can be improved with use of discrete event simulation modeling.

    PubMed

    Sauer, Bryan G; Singh, Kanwar P; Wagner, Barry L; Vanden Hoek, Matthew S; Twilley, Katherine; Cohn, Steven M; Shami, Vanessa M; Wang, Andrew Y

    2016-11-01

    Background and study aims: The projected increased demand for health services obligates healthcare organizations to operate efficiently. Discrete event simulation (DES) is a modeling method that allows for optimization of systems through virtual testing of different configurations before implementation. The objective of this study was to identify strategies to improve the daily efficiencies of an endoscopy center with the use of DES. Methods: We built a DES model of a five procedure room endoscopy unit at a tertiary-care university medical center. After validating the baseline model, we tested alternate configurations to run the endoscopy suite and evaluated outcomes associated with each change. The main outcome measures included adequate number of preparation and recovery rooms, blocked inflow, delay times, blocked outflows, and patient cycle time. Results: Based on a sensitivity analysis, the adequate number of preparation rooms is eight and recovery rooms is nine for a five procedure room unit (total 3.4 preparation and recovery rooms per procedure room). Simple changes to procedure scheduling and patient arrival times led to a modest improvement in efficiency. Increasing the preparation/recovery rooms based on the sensitivity analysis led to significant improvements in efficiency. Conclusions: By applying tools such as DES, we can model changes in an environment with complex interactions and find ways to improve the medical care we provide. DES is applicable to any endoscopy unit and would be particularly valuable to those who are trying to improve on the efficiency of care and patient experience.
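    A toy discrete event simulation of the prep → procedure → recovery flow, using SimPy; the room counts echo the abstract, but the arrival rate and stage durations below are illustrative assumptions, not the study's calibrated values:

    ```python
    import random
    import simpy

    PREP_ROOMS, PROC_ROOMS, RECOVERY_ROOMS = 8, 5, 9   # counts from the abstract
    PREP_MIN, PROC_MIN, RECOV_MIN = 20, 30, 45          # assumed mean durations (min)

    def patient(env, name, prep, proc, recov, cycle_times):
        arrival = env.now
        with prep.request() as req:                 # wait for a preparation room
            yield req
            yield env.timeout(random.expovariate(1.0 / PREP_MIN))
        with proc.request() as req:                 # wait for a procedure room
            yield req
            yield env.timeout(random.expovariate(1.0 / PROC_MIN))
        with recov.request() as req:                # wait for a recovery room
            yield req
            yield env.timeout(random.expovariate(1.0 / RECOV_MIN))
        cycle_times.append(env.now - arrival)

    def arrivals(env, prep, proc, recov, cycle_times, interarrival=12):
        i = 0
        while True:
            yield env.timeout(random.expovariate(1.0 / interarrival))
            i += 1
            env.process(patient(env, f"p{i}", prep, proc, recov, cycle_times))

    env = simpy.Environment()
    prep = simpy.Resource(env, capacity=PREP_ROOMS)
    proc = simpy.Resource(env, capacity=PROC_ROOMS)
    recov = simpy.Resource(env, capacity=RECOVERY_ROOMS)
    cycle_times = []
    env.process(arrivals(env, prep, proc, recov, cycle_times))
    env.run(until=8 * 60)                           # one 8-hour clinic day
    print(f"patients completed: {len(cycle_times)}, "
          f"mean cycle time: {sum(cycle_times)/max(len(cycle_times), 1):.1f} min")
    ```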

  6. Efficiency of endoscopy units can be improved with use of discrete event simulation modeling

    PubMed Central

    Sauer, Bryan G.; Singh, Kanwar P.; Wagner, Barry L.; Vanden Hoek, Matthew S.; Twilley, Katherine; Cohn, Steven M.; Shami, Vanessa M.; Wang, Andrew Y.

    2016-01-01

    Background and study aims: The projected increased demand for health services obligates healthcare organizations to operate efficiently. Discrete event simulation (DES) is a modeling method that allows for optimization of systems through virtual testing of different configurations before implementation. The objective of this study was to identify strategies to improve the daily efficiencies of an endoscopy center with the use of DES. Methods: We built a DES model of a five procedure room endoscopy unit at a tertiary-care university medical center. After validating the baseline model, we tested alternate configurations to run the endoscopy suite and evaluated outcomes associated with each change. The main outcome measures included adequate number of preparation and recovery rooms, blocked inflow, delay times, blocked outflows, and patient cycle time. Results: Based on a sensitivity analysis, the adequate number of preparation rooms is eight and recovery rooms is nine for a five procedure room unit (total 3.4 preparation and recovery rooms per procedure room). Simple changes to procedure scheduling and patient arrival times led to a modest improvement in efficiency. Increasing the preparation/recovery rooms based on the sensitivity analysis led to significant improvements in efficiency. Conclusions: By applying tools such as DES, we can model changes in an environment with complex interactions and find ways to improve the medical care we provide. DES is applicable to any endoscopy unit and would be particularly valuable to those who are trying to improve on the efficiency of care and patient experience. PMID:27853739

  7. Improving and Evaluating Nested Sampling Algorithm for Marginal Likelihood Estimation

    NASA Astrophysics Data System (ADS)

    Ye, M.; Zeng, X.; Wu, J.; Wang, D.; Liu, J.

    2016-12-01

    With the growing impacts of climate change and human activities on the cycle of water resources, an increasing number of studies focus on the quantification of modeling uncertainty. Bayesian model averaging (BMA) provides a popular framework for quantifying conceptual model and parameter uncertainty. The ensemble prediction is generated by combining each plausible model's prediction, and each model is assigned a weight determined by its prior weight and marginal likelihood. Thus, the estimation of a model's marginal likelihood is crucial for reliable and accurate BMA prediction. The nested sampling estimator (NSE) is a newly proposed method for marginal likelihood estimation. NSE works by gradually searching the parameter space from low-likelihood to high-likelihood regions, an evolution carried out iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm is often used for local sampling. However, M-H is not an efficient sampling algorithm for high-dimensional or complicated parameter spaces. To improve the efficiency of NSE, the robust and efficient DREAMzs sampling algorithm was incorporated into the local sampling of NSE. The comparison results demonstrated that the improved NSE could improve the efficiency of marginal likelihood estimation significantly. However, both the improved and original NSEs suffer from heavy instability. In addition, the heavy computational cost of the large number of model executions is overcome by using adaptive sparse grid surrogates.
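    A bare-bones nested sampling loop for a toy one-dimensional problem, to make the evidence (marginal likelihood) bookkeeping concrete; the local sampler here is a naive constrained random walk, which is exactly the component the abstract replaces with DREAMzs (everything below is an illustrative assumption, not the authors' implementation):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def log_like(theta):                 # toy Gaussian likelihood, sigma = 0.1
        return -0.5 * ((theta - 0.5) / 0.1) ** 2 - np.log(0.1 * np.sqrt(2 * np.pi))

    def nested_sampling(n_live=200, n_iter=3000):
        live = rng.uniform(0.0, 1.0, n_live)         # uniform prior on [0, 1]
        live_logl = log_like(live)
        log_z = -np.inf                              # running evidence
        log_x_prev = 0.0                             # prior volume X_0 = 1
        for i in range(1, n_iter + 1):
            worst = np.argmin(live_logl)
            log_x = -i / n_live                      # E[log X_i] = -i/N
            log_w = np.log(np.exp(log_x_prev) - np.exp(log_x))
            log_z = np.logaddexp(log_z, live_logl[worst] + log_w)
            log_x_prev = log_x
            # replace the worst point with a prior draw subject to
            # L(theta) > L_worst, found by a short constrained random walk
            threshold = live_logl[worst]
            idx = rng.integers(n_live)
            theta = live[idx if idx != worst else (idx + 1) % n_live]
            for _ in range(20):
                prop = theta + 0.05 * rng.standard_normal()
                if 0.0 <= prop <= 1.0 and log_like(prop) > threshold:
                    theta = prop
            live[worst], live_logl[worst] = theta, log_like(theta)
        return log_z                                 # final live-point term omitted

    print("estimated log marginal likelihood:", nested_sampling())
    ```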

  8. Value-based Proposition for a Dedicated Interventional Pulmonology Suite: an Adaptable Business Model.

    PubMed

    Desai, Neeraj R; French, Kim D; Diamond, Edward; Kovitz, Kevin L

    2018-05-31

    Value-based care is evolving with a focus on improving efficiency, reducing cost, and enhancing the patient experience. Interventional pulmonology has the opportunity to lead an effective value-based care model. This model is supported by the relatively low cost of pulmonary procedures and has the potential to improve efficiencies in thoracic care. We discuss key strategies to evaluate and improve efficiency in Interventional Pulmonology practice and describe our experience in developing an interventional pulmonology suite. Such a model can be adapted to other specialty areas and may encourage a more coordinated approach to specialty care. Copyright © 2018. Published by Elsevier Inc.

  9. Measuring and Benchmarking Technical Efficiency of Public Hospitals in Tianjin, China: A Bootstrap-Data Envelopment Analysis Approach.

    PubMed

    Li, Hao; Dong, Siping

    2015-01-01

    China has long been stuck in applying traditional data envelopment analysis (DEA) models to measure the technical efficiency of public hospitals without bias correction of efficiency scores. In this article, we have introduced the Bootstrap-DEA approach from the international literature to analyze the technical efficiency of public hospitals in Tianjin (China) and tried to improve the application of this method for benchmarking and inter-organizational learning. It is found that the bias-corrected efficiency scores of Bootstrap-DEA differ significantly from those of the traditional Banker, Charnes, and Cooper (BCC) model, which means that Chinese researchers need to update their DEA models for more scientific calculation of hospital efficiency scores. Our research has helped narrow the gap between China and the international community in the measurement and improvement of relative hospital efficiency. It is suggested that Bootstrap-DEA be widely applied in future research to measure the relative efficiency and productivity of Chinese hospitals, so as to better support efficiency improvement and related decision making. © The Author(s) 2015.

  10. 75 FR 30387 - Improving Market and Planning Efficiency Through Improved Software; Notice of Agenda and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-01

    ... Market and Planning Efficiency Through Improved Software; Notice of Agenda and Procedures for Staff... planning models and software. The technical conference will be held from 8 a.m. to 5:30 p.m. (EDT) on June.... Agenda for AD10-12 Staff Technical Conference on Planning Models and Software Federal Energy Regulatory...

  11. Improving operating room efficiency by applying bin-packing and portfolio techniques to surgical case scheduling.

    PubMed

    Van Houdenhoven, Mark; van Oostrum, Jeroen M; Hans, Erwin W; Wullink, Gerhard; Kazemier, Geert

    2007-09-01

    An operating room (OR) department has adopted an efficient business model and subsequently investigated how efficiency could be further improved. The aim of this study is to show the efficiency improvement of lowering organizational barriers and applying advanced mathematical techniques. We applied advanced mathematical algorithms in combination with scenarios that model relaxation of various organizational barriers using prospectively collected data. The setting is the main inpatient OR department of a university hospital, which sets its surgical case schedules 2 wk in advance using a block planning method. The main outcome measures are the number of freed OR blocks and OR utilization. Lowering organizational barriers and applying mathematical algorithms can yield a 4.5% point increase in OR utilization (95% confidence interval 4.0%-5.0%). This is obtained by reducing the total required OR time. Efficient OR departments can further improve their efficiency. The paper shows that a radical cultural change that comprises the use of mathematical algorithms and lowering organizational barriers improves OR utilization.
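    To make the bin-packing analogy concrete, here is a first-fit-decreasing heuristic that packs planned case durations (including slack) into fixed-length OR blocks; the case durations and block length are hypothetical, and the paper's actual algorithms, including the portfolio-style handling of duration uncertainty, are more sophisticated:

    ```python
    def first_fit_decreasing(case_minutes, block_minutes=480):
        """Pack surgical cases into OR blocks using first-fit decreasing.

        case_minutes  : planned duration of each case, incl. slack for variability
        block_minutes : usable OR time per block (e.g., an 8-hour day)
        Returns a list of blocks, each a list of case durations.
        """
        blocks, remaining = [], []
        for case in sorted(case_minutes, reverse=True):
            for i, free in enumerate(remaining):
                if case <= free:                 # first block it fits into
                    blocks[i].append(case)
                    remaining[i] -= case
                    break
            else:                                # open a new OR block
                blocks.append([case])
                remaining.append(block_minutes - case)
        return blocks

    cases = [190, 120, 95, 70, 240, 60, 150, 45, 330, 85]   # hypothetical minutes
    packed = first_fit_decreasing(cases)
    print(f"{len(packed)} OR blocks used; "
          f"utilization = {sum(cases) / (len(packed) * 480):.0%}")
    ```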

  12. In-Situ Optical Imaging of Carrier Transport in Multilayer Solar Cells

    DTIC Science & Technology

    2008-06-01

    Excerpts (report front matter and abstract fragments): Efficiency Considerations; Construction; ... improved efficiency solar cells. The need to move forward on these improvements is driven by the increasing price of oil and other traditional fuels ... any improvement in material in a high efficiency multi-junction cell can be difficult to mathematically model, and much effort is involved in ...

  13. Efficiency improvement by navigated safety inspection involving visual clutter based on the random search model.

    PubMed

    Sun, Xinlu; Chong, Heap-Yih; Liao, Pin-Chao

    2018-06-25

    Navigated inspection seeks to improve hazard identification (HI) accuracy. With a tight inspection schedule, HI also requires efficiency. However, lacking a quantification of HI efficiency, navigated inspection strategies cannot be comprehensively assessed. This work aims to determine inspection efficiency in navigated safety inspection while controlling for HI accuracy. Based on a cognitive method, the random search model (RSM), an experiment was conducted to observe HI efficiency in navigated inspection for a variety of visual clutter (VC) scenarios, using eye-tracking devices to record the search process and analyze search performance. The results show that the RSM is an appropriate instrument and that VC serves as a hazard classifier for navigated inspection in improving inspection efficiency. This suggests a new and effective solution for addressing the low accuracy and efficiency of manual inspection through navigated inspection involving VC and the RSM. It also provides insights into inspectors' safety inspection ability.

  14. Data Envelopment Analysis (DEA) Model in Operation Management

    NASA Astrophysics Data System (ADS)

    Malik, Meilisa; Efendi, Syahril; Zarlis, Muhammad

    2018-01-01

    Quality management is an effective system in operation management that develops, maintains, and improves quality across a company, allowing marketing, production, and service at the most economical level while ensuring customer satisfaction. Many companies practice quality management to improve their business performance. One form of performance measurement is the measurement of efficiency, and one of the tools that can be used to assess the efficiency of company performance is Data Envelopment Analysis (DEA). The aim of this paper is to use DEA models to assess the efficiency of quality management. The CCR, BCC, and SBM models for assessing the efficiency of quality management are explained.
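    For concreteness, the input-oriented CCR efficiency of one decision-making unit can be obtained from a small linear program (envelopment form); this sketch uses scipy's linprog with toy data and omits the BCC convexity constraint and the SBM slack terms mentioned in the paper:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def ccr_efficiency(X, Y, o):
        """Input-oriented CCR score of DMU o.

        X : (n_dmus, n_inputs) input matrix, Y : (n_dmus, n_outputs) output matrix.
        Decision variables: [theta, lambda_1, ..., lambda_n].
        """
        n, m = X.shape
        s = Y.shape[1]
        c = np.r_[1.0, np.zeros(n)]                      # minimize theta
        # inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
        A_in = np.hstack([-X[o].reshape(m, 1), X.T])
        b_in = np.zeros(m)
        # outputs: -sum_j lambda_j * y_rj <= -y_ro
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        b_out = -Y[o]
        res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                      bounds=[(None, None)] + [(0, None)] * n, method="highs")
        return res.x[0]

    # toy example: 4 firms, 2 inputs, 1 output
    X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0]])
    Y = np.array([[1.0], [1.0], [1.0], [1.0]])
    print([round(ccr_efficiency(X, Y, o), 3) for o in range(4)])
    ```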

  15. Improving Energy Efficiency for the Vehicle Assembly Industry: A Discrete Event Simulation Approach

    NASA Astrophysics Data System (ADS)

    Oumer, Abduaziz; Mekbib Atnaw, Samson; Kie Cheng, Jack; Singh, Lakveer

    2016-11-01

    This paper presents a Discrete Event Simulation (DES) model for investigating and improving energy efficiency in a vehicle assembly line. The car manufacturing industry is one of the highest energy consuming industries. Using the Rockwell Arena DES package, a detailed model was constructed for an actual vehicle assembly plant. The sources of energy considered in this research are electricity and fuel, the two main types of energy sources used in a typical vehicle assembly plant. The model depicts the performance measurement for process-specific energy measures of the painting, welding, and assembling processes. A sound energy efficiency model within this industry has a two-fold advantage: reducing CO2 emissions and reducing the costs associated with fuel and electricity consumption. The paper starts with an overview of challenges in energy consumption within the facilities of an automotive assembly line and highlights the parameters for energy efficiency. The results of the simulation model indicated improvements toward the energy saving objectives and reduced costs.

  16. On the next generation of reliability analysis tools

    NASA Technical Reports Server (NTRS)

    Babcock, Philip S., IV; Leong, Frank; Gai, Eli

    1987-01-01

    The current generation of reliability analysis tools concentrates on improving the efficiency of the description and solution of the fault-handling processes and providing a solution algorithm for the full system model. The tools have improved user efficiency in these areas to the extent that the problem of constructing the fault-occurrence model is now the major analysis bottleneck. For the next generation of reliability tools, it is proposed that techniques be developed to improve the efficiency of the fault-occurrence model generation and input. Further, the goal is to provide an environment permitting a user to provide a top-down design description of the system from which a Markov reliability model is automatically constructed. Thus, the user is relieved of the tedious and error-prone process of model construction, permitting an efficient exploration of the design space, and an independent validation of the system's operation is obtained. An additional benefit of automating the model construction process is the opportunity to reduce the specialized knowledge required. Hence, the user need only be an expert in the system he is analyzing; the expertise in reliability analysis techniques is supplied.

  17. Evaluation of input output efficiency of oil field considering undesirable output —A case study of sandstone reservoir in Xinjiang oilfield

    NASA Astrophysics Data System (ADS)

    Zhang, Shuying; Wu, Xuquan; Li, Deshan; Xu, Yadong; Song, Shulin

    2017-06-01

    Based on the input and output data of sandstone reservoirs in the Xinjiang oilfield, the SBM-Undesirable model is used to study the technical efficiency of each block. The results show that using the SBM-Undesirable model to evaluate efficiency avoids the defects caused by the radial and angular assumptions of traditional DEA models and improves the accuracy of the efficiency evaluation. By analyzing the projections of the oil blocks, we find that each block suffers the negative effects of input redundancy, insufficient desirable output, and undesirable output, and that there are large differences in production efficiency among the blocks. The way to improve the input-output efficiency of the oilfield is to optimize the allocation of resources, reduce the undesirable output, and increase the desired output.

  18. Optimization of single photon detection model based on GM-APD

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Yang, Yi; Hao, Peiyu

    2017-11-01

    High-precision laser ranging over one hundred kilometers requires a detector with very strong sensitivity to extremely weak light. At present, the Geiger-mode avalanche photodiode (GM-APD) is widely used; it has high sensitivity and high photoelectric conversion efficiency. Selecting and designing the detector parameters according to the system requirements is of great importance to the improvement of photon detection efficiency, and design optimization requires a good model. In this paper, we examine the existing Poisson distribution model and consider the important detector parameters of dark count rate, dead time, quantum efficiency, and so on. We improve and optimize the detection model and select the appropriate parameters to achieve optimal photon detection efficiency. The simulation is carried out using Matlab and compared with actual test results, verifying the rationality of the model. It has certain reference value in engineering applications.
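    A minimal version of the Poisson-statistics detection model the abstract builds on, folding in quantum efficiency and dark counts; the dead-time handling is reduced to a simple usable-gate factor here, and all numbers in the example are assumptions rather than the paper's values:

    ```python
    import numpy as np

    def detection_probability(mean_signal_photons, quantum_eff,
                              dark_count_rate_hz, gate_s, dead_time_s=0.0):
        """Single-gate detection probability for a Geiger-mode APD.

        Poisson model: at least one primary event (signal or dark) fires the APD.
        A crude dead-time correction scales the usable gate fraction.
        """
        usable = max(1.0 - dead_time_s / gate_s, 0.0) if gate_s > 0 else 0.0
        mean_events = (quantum_eff * mean_signal_photons
                       + dark_count_rate_hz * gate_s) * usable
        return 1.0 - np.exp(-mean_events)

    # e.g. 0.5 signal photons per gate, 30% QE, 100 kHz dark counts, 100 ns gate
    print(detection_probability(0.5, 0.30, 1e5, 100e-9))
    ```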

  19. Using Unified Modelling Language (UML) as a process-modelling technique for clinical-research process improvement.

    PubMed

    Kumarapeli, P; De Lusignan, S; Ellis, T; Jones, B

    2007-03-01

    The Primary Care Data Quality programme (PCDQ) is a quality-improvement programme which processes routinely collected general practice computer data. Patient data collected from a wide range of different brands of clinical computer systems are aggregated, processed, and fed back to practices in an educational context to improve the quality of care. Process modelling is a well-established approach used to gain understanding and systematic appraisal, and identify areas of improvement of a business process. Unified modelling language (UML) is a general purpose modelling technique used for this purpose. We used UML to appraise the PCDQ process to see if the efficiency and predictability of the process could be improved. Activity analysis and thinking-aloud sessions were used to collect data to generate UML diagrams. The UML model highlighted the sequential nature of the current process as a barrier for efficiency gains. It also identified the uneven distribution of process controls, lack of symmetric communication channels, critical dependencies among processing stages, and failure to implement all the lessons learned in the piloting phase. It also suggested that improved structured reporting at each stage - especially from the pilot phase, parallel processing of data and correctly positioned process controls - should improve the efficiency and predictability of research projects. Process modelling provided a rational basis for the critical appraisal of a clinical data processing system; its potential may be underutilized within health care.

  20. Modeling framework for representing long-term effectiveness of best management practices in addressing hydrology and water quality problems: Framework development and demonstration using a Bayesian method

    NASA Astrophysics Data System (ADS)

    Liu, Yaoze; Engel, Bernard A.; Flanagan, Dennis C.; Gitau, Margaret W.; McMillan, Sara K.; Chaubey, Indrajeet; Singh, Shweta

    2018-05-01

    Best management practices (BMPs) are popular approaches used to improve hydrology and water quality. Uncertainties in BMP effectiveness over time may result in overestimating long-term efficiency in watershed planning strategies. To represent varying long-term BMP effectiveness in hydrologic/water quality models, a high level and forward-looking modeling framework was developed. The components in the framework consist of establishment period efficiency, starting efficiency, efficiency for each storm event, efficiency between maintenance, and efficiency over the life cycle. Combined, they represent long-term efficiency for a specific type of practice and specific environmental concern (runoff/pollutant). An approach for possible implementation of the framework was discussed. The long-term impacts of grass buffer strips (agricultural BMP) and bioretention systems (urban BMP) in reducing total phosphorus were simulated to demonstrate the framework. Data gaps were captured in estimating the long-term performance of the BMPs. A Bayesian method was used to match the simulated distribution of long-term BMP efficiencies with the observed distribution with the assumption that the observed data represented long-term BMP efficiencies. The simulated distribution matched the observed distribution well with only small total predictive uncertainties. With additional data, the same method can be used to further improve the simulation results. The modeling framework and results of this study, which can be adopted in hydrologic/water quality models to better represent long-term BMP effectiveness, can help improve decision support systems for creating long-term stormwater management strategies for watershed management projects.

  1. High efficiency silicon solar cell review

    NASA Technical Reports Server (NTRS)

    Godlewski, M. P. (Editor)

    1975-01-01

    An overview is presented of the current research and development efforts to improve the performance of the silicon solar cell. The 24 papers presented reviewed experimental and analytic modeling work which emphasizes the improvement of conversion efficiency and the reduction of manufacturing costs. A summary is given of the round-table discussion, in which the near- and far-term directions of future efficiency improvements were discussed.

  2. Research and development of energy-efficient appliance motor-compressors. Volume IV. Production demonstration and field test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Middleton, M.G.; Sauber, R.S.

    Two models of a high-efficiency compressor were manufactured in a pilot production run. These compressors were for low back-pressure applications. While based on a production compressor, there were many changes that required production process changes. Some changes were performed within our company and others were made by outside vendors. The compressors were used in top mount refrigerator-freezers and sold in normal distribution channels. Forty units were placed in residences for a one-year field test. Additional compressors were built so that a life test program could be performed. The results of the field test reveal a 27.0% improvement in energy consumption for the 18 ft³ high-efficiency model and a 15.6% improvement for the 21 ft³ high-efficiency model as compared to the standard production unit.

  3. An efficient interpolation technique for jump proposals in reversible-jump Markov chain Monte Carlo calculations

    PubMed Central

    Farr, W. M.; Mandel, I.; Stevens, D.

    2015-01-01

    Selection among alternative theoretical models given an observed dataset is an important challenge in many areas of physics and astronomy. Reversible-jump Markov chain Monte Carlo (RJMCMC) is an extremely powerful technique for performing Bayesian model selection, but it suffers from a fundamental difficulty: it requires jumps between model parameter spaces, yet cannot efficiently explore both parameter spaces at once. Thus, a naive jump between parameter spaces is unlikely to be accepted in the Markov chain Monte Carlo (MCMC) algorithm and convergence is correspondingly slow. Here, we demonstrate an interpolation technique that uses samples from single-model MCMCs to propose intermodel jumps from an approximation to the single-model posterior of the target parameter space. The interpolation technique, based on a kD-tree data structure, is adaptive and efficient in modest dimensionality. We show that our technique leads to improved convergence over naive jumps in an RJMCMC, and compare it to other proposals in the literature to improve the convergence of RJMCMCs. We also demonstrate the use of the same interpolation technique as a way to construct efficient ‘global’ proposal distributions for single-model MCMCs without prior knowledge of the structure of the posterior distribution, and discuss improvements that permit the method to be used in higher dimensional spaces efficiently. PMID:26543580
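    An illustrative sketch of building a jump proposal from stored single-model MCMC samples with a kD-tree (scipy's cKDTree); the paper defines the proposal density from the tree's own cell volumes, which is only gestured at in the comments here, and the neighbourhood size `k` is an assumption:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    class KDTreeProposal:
        """Draw inter-model jump proposals from an approximation to a
        single-model posterior, built from stored MCMC samples."""

        def __init__(self, samples, k=16, seed=0):
            self.samples = np.asarray(samples)          # (n_samples, n_dims)
            self.tree = cKDTree(self.samples)
            self.k = k
            self.rng = np.random.default_rng(seed)

        def draw(self):
            # pick a stored sample, find its k nearest neighbours, and propose
            # uniformly inside their bounding box
            centre = self.samples[self.rng.integers(len(self.samples))]
            _, idx = self.tree.query(centre, k=self.k)
            box = self.samples[idx]
            lo, hi = box.min(axis=0), box.max(axis=0)
            proposal = self.rng.uniform(lo, hi)
            # the RJMCMC acceptance ratio also needs the proposal density,
            # roughly (k / n_samples) / volume(box) for this construction
            log_q = np.log(self.k / len(self.samples)) - np.sum(np.log(hi - lo))
            return proposal, log_q

    # usage: prop, log_q = KDTreeProposal(posterior_samples).draw()
    ```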

  4. Efficient Actor-Critic Algorithm with Hierarchical Model Learning and Planning

    PubMed Central

    Fu, QiMing

    2016-01-01

    To improve the convergence rate and the sample efficiency, two efficient learning methods AC-HMLP and RAC-HMLP (AC-HMLP with ℓ2-regularization) are proposed by combining actor-critic algorithm with hierarchical model learning and planning. The hierarchical models consisting of the local and the global models, which are learned at the same time during learning of the value function and the policy, are approximated by local linear regression (LLR) and linear function approximation (LFA), respectively. Both the local model and the global model are applied to generate samples for planning; the former is used only if the state-prediction error does not surpass the threshold at each time step, while the latter is utilized at the end of each episode. The purpose of taking both models is to improve the sample efficiency and accelerate the convergence rate of the whole algorithm through fully utilizing the local and global information. Experimentally, AC-HMLP and RAC-HMLP are compared with three representative algorithms on two Reinforcement Learning (RL) benchmark problems. The results demonstrate that they perform best in terms of convergence rate and sample efficiency. PMID:27795704

  5. Efficient Actor-Critic Algorithm with Hierarchical Model Learning and Planning.

    PubMed

    Zhong, Shan; Liu, Quan; Fu, QiMing

    2016-01-01

    To improve the convergence rate and the sample efficiency, two efficient learning methods AC-HMLP and RAC-HMLP (AC-HMLP with ℓ2-regularization) are proposed by combining actor-critic algorithm with hierarchical model learning and planning. The hierarchical models consisting of the local and the global models, which are learned at the same time during learning of the value function and the policy, are approximated by local linear regression (LLR) and linear function approximation (LFA), respectively. Both the local model and the global model are applied to generate samples for planning; the former is used only if the state-prediction error does not surpass the threshold at each time step, while the latter is utilized at the end of each episode. The purpose of taking both models is to improve the sample efficiency and accelerate the convergence rate of the whole algorithm through fully utilizing the local and global information. Experimentally, AC-HMLP and RAC-HMLP are compared with three representative algorithms on two Reinforcement Learning (RL) benchmark problems. The results demonstrate that they perform best in terms of convergence rate and sample efficiency.

  6. TRIENNIAL LACTATION SYMPOSIUM: Systems biology of regulatory mechanisms of nutrient metabolism in lactation.

    PubMed

    McNamara, J P

    2015-12-01

    A major role of the dairy cow is to convert low-quality plant materials into high-quality protein and other nutrients for humans. We must select and manage cows with the goal of having animals of the greatest efficiency matched to their environment. We have increased efficiency tremendously over the years, yet the variation in productive and reproductive efficiency among animals is still large. In part, this is because of a lack of full integration of genetic, nutritional, and reproductive biology into management decisions. However, integration across these disciplines is increasing as the biological research findings show specific control points at which genetics, nutrition, and reproduction interact. An ordered systems biology approach that focuses on why and how cells regulate energy and N use and on how and why organs interact through endocrine and neurocrine mechanisms will speed improvements in efficiency. More sophisticated dairy managers will demand better information to improve the efficiency of their animals. Using genetic improvement and animal management to improve milk productive and reproductive efficiency requires a deeper understanding of metabolic processes throughout the life cycle. Using existing metabolic models, we can design experiments specifically to integrate data from global transcriptional profiling into models that describe nutrient use in farm animals. A systems modeling approach can help focus our research to make faster and larger advances in efficiency and determine how this knowledge can be applied on the farms.

  7. 75 FR 33565 - Notice of Intent To Prepare an Environmental Impact Statement for New Medium- and Heavy-Duty Fuel...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-14

    ...- and Heavy-Duty Fuel Efficiency Improvement Program AGENCY: National Highway Traffic Safety... efficiency improvement program for commercial medium- and heavy-duty on-highway vehicles and work trucks... efficiency standards starting with model year (MY) 2016 commercial medium- and heavy-duty on-highway vehicles...

  8. An empirical investigation of the efficiency effects of integrated care models in Switzerland

    PubMed Central

    Reich, Oliver; Rapold, Roland; Flatscher-Thöni, Magdalena

    2012-01-01

    Introduction This study investigates the efficiency gains of integrated care models in Switzerland, since these models are regarded as cost containment options in national social health insurance. These plans generate much lower average health care expenditure than the basic insurance plan. The question is, however, to what extent these total savings are due to the effects of selection and efficiency. Methods The empirical analysis is based on data from 399,274 Swiss residents that constantly had compulsory health insurance with the Helsana Group, the largest health insurer in Switzerland, covering the years 2006–2009. In order to evaluate the efficiency of the different integrated care models, we apply an econometric approach with a mixed-effects model. Results Our estimations indicate that the efficiency effects of integrated care models on health care expenditure are significant. However, the different insurance plans vary, revealing the following efficiency gains per model: contracted capitated model 21.2%, contracted non-capitated model 15.5% and telemedicine model 3.7%. The remaining 8.5%, 5.6% and 22.5%, respectively, of the variation in total health care expenditure can be attributed to the effects of selection. Conclusions Integrated care models have the potential to improve care for patients with chronic diseases and concurrently have a positive impact on health care expenditure. We suggest policy-makers improve the incentives for patients with chronic diseases within the existing regulations providing further potential for cost-efficiency of medical care. PMID:22371691
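    As an illustration of the econometric setup only (the variable names, file, and grouping factor are hypothetical, not the Helsana dataset's actual schema), a mixed-effects model of log health care expenditure on the insurance plan type can be fit with statsmodels:

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # hypothetical person-year panel: expenditure, plan type, risk adjusters,
    # and a region identifier used as the random-effect grouping factor
    df = pd.read_csv("insured_panel.csv")        # assumed columns, see formula

    model = smf.mixedlm(
        "log_expenditure ~ C(plan_type) + age + female + chronic_conditions",
        data=df,
        groups=df["region"],
    )
    result = model.fit()
    print(result.summary())
    # the coefficients on C(plan_type) levels estimate the expenditure difference
    # of capitated / non-capitated / telemedicine plans relative to basic
    # insurance, conditional on the observed risk adjusters
    ```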

  9. Efficiency of a Care Coordination Model: A Randomized Study with Stroke Patients

    ERIC Educational Resources Information Center

    Claiborne, Nancy

    2006-01-01

    Objectives: This study investigated the efficiency of a social work care coordination model for stroke patients. Care coordination addresses patient care and treatment resources across the health care system to reduce risk, improve clinical outcomes, and maximize efficiency. Method: A randomly assigned, pre-post experimental design measured…

  10. Impact of the Local Public Hospital Reform on the Efficiency of Medium-Sized Hospitals in Japan: An Improved Slacks-Based Measure Data Envelopment Analysis Approach.

    PubMed

    Zhang, Xing; Tone, Kaoru; Lu, Yingzhe

    2018-04-01

    To assess the change in efficiency and total factor productivity (TFP) of the local public hospitals in Japan after the local public hospital reform launched in late 2007, which was aimed at improving the financial capability and operational efficiency of hospitals. Secondary data were collected from the Ministry of Internal Affairs and Communications on 213 eligible medium-sized hospitals, each operating 100-400 beds from FY2006 to FY2011. The improved slacks-based measure nonoriented data envelopment analysis models (Quasi-Max SBM nonoriented DEA models) were used to estimate dynamic efficiency score and Malmquist Index. The dynamic efficiency measure indicated an efficiency gain in the first several years of the reform and then was followed by a decrease. Malmquist Index analysis showed a significant decline in the TFP between 2006 and 2011. The financial improvement of medium-sized hospitals was not associated with enhancement of efficiency. Hospital efficiency was not significantly different among ownership structure and law-application system groups, but it was significantly affected by hospital location. The results indicate a need for region-tailored health care policies and for a more comprehensive reform to overcome the systemic constraints that might contribute to the decline of the TFP. © Health Research and Educational Trust.

  11. Efficiency of bulk-heterojunction organic solar cells

    PubMed Central

    Scharber, M.C.; Sariciftci, N.S.

    2013-01-01

    In recent years, the performance of bulk heterojunction solar cells has improved significantly. For a large-scale application of this technology, further improvements are required. This article reviews the basic working principles and the state-of-the-art device design of bulk heterojunction solar cells. The importance of high power conversion efficiencies for commercial exploitation is outlined and different efficiency models for bulk heterojunction solar cells are discussed. Assuming state-of-the-art materials and device architectures, several models predict power conversion efficiencies in the range of 10–15%. A more general approach assuming device operation close to the Shockley–Queisser limit leads to even higher efficiencies. Bulk heterojunction devices exhibiting only radiative recombination of charge carriers could be as efficient as ideal inorganic photovoltaic devices. PMID:24302787

  12. Research on the influencing factors of financing efficiency of big data industry based on panel data model--Empirical evidence from Guizhou province

    NASA Astrophysics Data System (ADS)

    Li, Chenggang; Feng, Yujia

    2018-03-01

    This paper studies the factors influencing the financing efficiency of the Guizhou big data industry, using financial and macroeconomic data on 20 Guizhou big data enterprises from 2010 to 2016. The DEA model is used to obtain the financing efficiency of the Guizhou big data enterprises, and a panel data model is constructed for six macro- and micro-level influencing factors. The results show that the external economic environment, the total asset turnover rate of the enterprises, the growth of operating income, and the growth of revenue per share have a positive impact on the financing efficiency of the big data industry in Guizhou; improving these factors is the key to improving the financing efficiency of Guizhou big data enterprises.

  13. 10 CFR 430.24 - Units to be tested.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... efficiency ratio or other measure of energy consumption of a basic model for which consumers would favor..., and (ii) Any represented value of the annual fuel utilization efficiency or other measure of energy... tested basic models by only the design of oven doors the use of which leads to improved efficiency and...

  14. The roles of carrier concentration and interface, bulk, and grain-boundary recombination for 25% efficient CdTe solar cells

    DOE PAGES

    Kanevce, A.; Reese, Matthew O.; Barnes, T. M.; ...

    2017-06-06

    CdTe devices have reached efficiencies of 22% due to continuing improvements in bulk material properties, including minority carrier lifetime. Device modeling has helped to guide these device improvements by quantifying the impacts of material properties and different device designs on device performance. One of the barriers to truly predictive device modeling is the interdependence of these material properties. For example, interfaces become more critical as bulk properties, particularly, hole density and carrier lifetime, increase. We present device-modeling analyses that describe the effects of recombination at the interfaces and grain boundaries as lifetime and doping of the CdTe layer change. The doping and lifetime should be priorities for maximizing open-circuit voltage (Voc) and efficiency improvements. However, interface and grain boundary recombination become bottlenecks for device performance at increased lifetime and doping levels. In conclusion, this work quantifies and discusses these emerging challenges for next-generation CdTe device efficiency.

  15. Emergency Preparedness in the Workplace: The Flulapalooza Model for Mass Vaccination.

    PubMed

    Swift, Melanie D; Aliyu, Muktar H; Byrne, Daniel W; Qian, Keqin; McGown, Paula; Kinman, Patricia O; Hanson, Katherine Louise; Culpepper, Demoyne; Cooley, Tamara J; Yarbrough, Mary I

    2017-09-01

    To explore whether an emergency preparedness structure is a feasible, efficient, and sustainable way for health care organizations to manage mass vaccination events. We used the Hospital Incident Command System to conduct a 1-day annual mass influenza vaccination event at Vanderbilt University Medical Center over 5 successive years (2011-2015). Using continuous quality improvement principles, we assessed whether changes in layout, supply management, staffing, and documentation systems improved efficiency. A total of 66 591 influenza vaccines were administered at 5 annual Flulapalooza events; 13 318 vaccines per event on average. Changes to the physical layout, staffing mix, and documentation processes improved vaccination efficiency 74%, from approximately 38 to 67 vaccines per hour per vaccinator, while reducing overall staffing needs by 38%. An unexpected finding was the role of social media in facilitating active engagement. Health care organizations can use a closed point-of-dispensing model and Hospital Incident Command System to conduct mass vaccination events, and can adopt the "Flulapalooza method" as a best practice model to enhance efficiency.

  16. A Traction Control Strategy with an Efficiency Model in a Distributed Driving Electric Vehicle

    PubMed Central

    Lin, Cheng

    2014-01-01

    Both active safety and fuel economy are important issues for vehicles. This paper focuses on a traction control strategy with an efficiency model in a distributed driving electric vehicle. In emergency situation, a sliding mode control algorithm was employed to achieve antislip control through keeping the wheels' slip ratios below 20%. For general longitudinal driving cases, an efficiency model aiming at improving the fuel economy was built through an offline optimization stream within the two-dimensional design space composed of the acceleration pedal signal and the vehicle speed. The sliding mode control strategy for the joint roads and the efficiency model for the typical drive cycles were simulated. Simulation results show that the proposed driving control approach has the potential to apply to different road surfaces. It keeps the wheels' slip ratios within the stable zone and improves the fuel economy on the premise of tracking the driver's intention. PMID:25197697
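    A simplified sliding-mode anti-slip law in the spirit of the abstract; the wheel radius, target slip, gain, and boundary layer below are illustrative assumptions, and the paper's efficiency-model branch for normal driving is not shown:

    ```python
    import numpy as np

    WHEEL_RADIUS = 0.3      # m, assumed
    LAMBDA_REF = 0.18       # target slip ratio, kept below the 20% threshold
    GAIN = 150.0            # sliding-mode gain (N*m), assumed
    BOUNDARY = 0.02         # boundary layer to soften chattering

    def slip_ratio(wheel_omega, vehicle_speed):
        """Driving slip ratio: (wR - v) / (wR), guarded against w ~ 0."""
        wr = wheel_omega * WHEEL_RADIUS
        return 0.0 if wr < 1e-3 else (wr - vehicle_speed) / wr

    def traction_torque(driver_torque, wheel_omega, vehicle_speed):
        """Reduce the requested torque when the wheel starts to spin."""
        lam = slip_ratio(wheel_omega, vehicle_speed)
        if lam <= LAMBDA_REF:
            return driver_torque             # efficiency-model branch would act here
        s = lam - LAMBDA_REF                 # sliding surface
        correction = GAIN * np.clip(s / BOUNDARY, -1.0, 1.0)   # saturated sign(s)
        return max(driver_torque - correction, 0.0)

    # e.g. wheel spinning on a low-friction patch
    print(traction_torque(driver_torque=250.0, wheel_omega=40.0, vehicle_speed=9.0))
    ```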

  17. A traction control strategy with an efficiency model in a distributed driving electric vehicle.

    PubMed

    Lin, Cheng; Cheng, Xingqun

    2014-01-01

    Both active safety and fuel economy are important issues for vehicles. This paper focuses on a traction control strategy with an efficiency model in a distributed driving electric vehicle. In emergency situation, a sliding mode control algorithm was employed to achieve antislip control through keeping the wheels' slip ratios below 20%. For general longitudinal driving cases, an efficiency model aiming at improving the fuel economy was built through an offline optimization stream within the two-dimensional design space composed of the acceleration pedal signal and the vehicle speed. The sliding mode control strategy for the joint roads and the efficiency model for the typical drive cycles were simulated. Simulation results show that the proposed driving control approach has the potential to apply to different road surfaces. It keeps the wheels' slip ratios within the stable zone and improves the fuel economy on the premise of tracking the driver's intention.

  18. Hyperbola-parabola primary mirror in Cassegrain optical antenna to improve transmission efficiency.

    PubMed

    Zhang, Li; Chen, Lu; Yang, HuaJun; Jiang, Ping; Mao, Shengqian; Caiyang, Weinan

    2015-08-20

    An optical model with a hyperbola-parabola primary mirror added to the Cassegrain optical antenna, which can effectively improve the transmission efficiency, is proposed in this paper. The optimum parameters of the hyperbola-parabola primary mirror and the secondary mirror for the optical antenna system have been designed and analyzed in detail. The hyperbola-parabola primary-mirror optical antenna improves the transmission efficiency by 10.60% in theory, and the simulated efficiency improves by 9.359%. The coupling efficiency curve of the optical antenna has been obtained for different deflection angles of the receiving antenna with respect to the emitting antenna.

  19. Numerical convergence improvements for porflow unsaturated flow simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flach, Greg

    2017-08-14

    Section 3.6 of SRNL (2016) discusses various PORFLOW code improvements to increase modeling efficiency, in preparation for the next E-Area Performance Assessment (WSRC 2008) revision. This memorandum documents interaction with Analytic & Computational Research, Inc. (http://www.acricfd.com/default.htm) to improve numerical convergence efficiency using PORFLOW version 6.42 for unsaturated flow simulations.

  20. Discussion of skill improvement in marine ecosystem dynamic models based on parameter optimization and skill assessment

    NASA Astrophysics Data System (ADS)

    Shen, Chengcheng; Shi, Honghua; Liu, Yongzhi; Li, Fen; Ding, Dewen

    2016-07-01

    Marine ecosystem dynamic models (MEDMs) are important tools for the simulation and prediction of marine ecosystems. This article summarizes the methods and strategies used for the improvement and assessment of MEDM skill, and it attempts to establish a technical framework to inspire further ideas concerning MEDM skill improvement. The skill of MEDMs can be improved by parameter optimization (PO), which is an important step in model calibration. An efficient approach to solve the problem of PO constrained by MEDMs is the global treatment of both sensitivity analysis and PO. Model validation is an essential step following PO, which validates the efficiency of model calibration by analyzing and estimating the goodness-of-fit of the optimized model. Additionally, by focusing on the degree of impact of various factors on model skill, model uncertainty analysis can supply model users with a quantitative assessment of model confidence. Research on MEDMs is ongoing; however, improvement in model skill still lacks global treatments and its assessment is not integrated. Thus, the predictive performance of MEDMs is not strong and model uncertainties lack quantitative descriptions, limiting their application. Therefore, a large number of case studies concerning model skill should be performed to promote the development of a scientific and normative technical framework for the improvement of MEDM skill.

  1. Reinventing Emergency Department Flow via Healthcare Delivery Science.

    PubMed

    DeFlitch, Christopher; Geeting, Glenn; Paz, Harold L

    2015-01-01

    Healthcare system flow problems resulting in emergency department (ED) crowding are a quality and access problem. This case study examines an overcrowded academic health center ED with increasing patient volumes and limited physical space for expansion. ED capacity and efficiency improved via the application of engineering principles, addressing patient and staffing flows, and reinventing the delivery model. Using operational data and staff input, patient and staff flow models were created, identifying bottlenecks (points of inefficiency). A new flow model of emergency care delivery, physician-directed queuing, was developed. Expanding upon physicians in triage, providers passively evaluate all patients upon arrival, actively manage patients requiring fewer resources, and direct patients requiring complex resources to further evaluation in ED areas. Sustained over time, ED efficiency improved as measured by the near elimination of "left without being seen" patients and reduced waiting times, with improvements in door-to-doctor time, patient satisfaction, and total length of stay. All improvements occurred in the setting of increased patient volume and no increase in physician staffing. Our experience suggests that the practical application of healthcare delivery science can be used to improve ED efficiency. © The Author(s) 2015.

  2. Process and design considerations for high-efficiency solar cells

    NASA Technical Reports Server (NTRS)

    Rohati, A.; Rai-Choudhury, P.

    1985-01-01

    This paper shows that oxide surface passivation coupled with an optimum multilayer anti-reflective (AR) coating can provide an approx. 3% (absolute) improvement in solar cell efficiency. Use of a single-layer AR coating, without passivation, gives cell efficiencies in the range of 15 to 15.5% on high-quality, 4 ohm-cm as well as 0.1 to 0.2 ohm-cm float-zone silicon. Oxide surface passivation alone raises the cell efficiency to ≥ 17%. An optimum double-layer AR coating on oxide-passivated cells provides an additional approx. 5 to 10% improvement over a single-layer AR-coated cell, resulting in cell efficiencies in excess of 18%. Experimentally observed improvements are supported by model calculations and an approach to ≥ 20% efficient cells is discussed.

  3. A comparative Thermal Analysis of conventional parabolic receiver tube and Cavity model tube in a Solar Parabolic Concentrator

    NASA Astrophysics Data System (ADS)

    Arumugam, S.; Ramakrishna, P.; Sangavi, S.

    2018-02-01

    Improvements in heating technology with solar energy are gaining focus, especially solar parabolic collectors. Solar heating in conventional parabolic collectors is done with the help of radiation concentration on receiver tubes. Conventional receiver tubes are open to the atmosphere and lose heat to ambient air currents. In order to reduce the convection losses and also to improve the aperture area, we designed a tube with a cavity. This study is a comparative performance analysis of the conventional tube and the cavity model tube. The performance formulae were derived for the cavity model based on the conventional model. A reduction in the overall heat loss coefficient was observed for the cavity model, though the collector heat removal factor and collector efficiency were nearly the same for both models. An improvement in efficiency was also observed in the cavity model's performance. The approach of designing a cavity model tube as the receiver tube in solar parabolic collectors gave improved results and proved to be a worthwhile design consideration.
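    The quantities compared in this study are tied together by the standard Hottel-Whillier form of instantaneous collector efficiency, stated here only as background (the cavity receiver acts mainly through $U_L$ and the effective aperture):

    \[ \eta = F_R\left[(\tau\alpha) - \frac{U_L\,(T_i - T_a)}{G_T}\right], \]

    where $F_R$ is the collector heat removal factor, $U_L$ the overall heat loss coefficient, $(\tau\alpha)$ the transmittance-absorptance product, $T_i$ and $T_a$ the inlet and ambient temperatures, and $G_T$ the incident irradiance.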

  4. Multi-Complementary Model for Long-Term Tracking

    PubMed Central

    Zhang, Deng; Zhang, Junchang; Xia, Chenyang

    2018-01-01

    In recent years, video target tracking algorithms have been widely used. However, many tracking algorithms do not achieve satisfactory performance, especially when dealing with problems such as object occlusions, background clutters, motion blur, low illumination color images, and sudden illumination changes in real scenes. In this paper, we incorporate an object model based on contour information into a Staple tracker that combines the correlation filter model and color model to greatly improve the tracking robustness. Since each model is responsible for tracking specific features, the three complementary models combine for more robust tracking. In addition, we propose an efficient object detection model with contour and color histogram features, which has good detection performance and better detection efficiency compared to the traditional target detection algorithm. Finally, we optimize the traditional scale calculation, which greatly improves the tracking execution speed. We evaluate our tracker on the Object Tracking Benchmarks 2013 (OTB-13) and Object Tracking Benchmarks 2015 (OTB-15) benchmark datasets. With the OTB-13 benchmark datasets, our algorithm is improved by 4.8%, 9.6%, and 10.9% on the success plots of OPE, TRE and SRE, respectively, in contrast to another classic LCT (Long-term Correlation Tracking) algorithm. On the OTB-15 benchmark datasets, when compared with the LCT algorithm, our algorithm achieves 10.4%, 12.5%, and 16.1% improvement on the success plots of OPE, TRE, and SRE, respectively. At the same time, it needs to be emphasized that, due to the high computational efficiency of the color model and the object detection model using efficient data structures, and the speed advantage of the correlation filters, our tracking algorithm could still achieve good tracking speed. PMID:29425170

  5. Hydraulic performance improvement of the bidirectional pit pump installation based on CFD

    NASA Astrophysics Data System (ADS)

    Chen, H. X.; Zhou, D. Q.

    2013-12-01

    At present, the efficiency of bidirectional pit pump installations with lifts under 2 m is still low because of a lack of research in the past. In this paper, CFD numerical methods and experimental tests were applied to study the flow characteristics of a bidirectional pit pump installation under positive and reverse conditions. By changing the airfoil type and the positions of the blades and stay vanes, the comprehensive performance of the improved model was obtained by calculating many different models. The results showed that the improved model, with a type A runner with 4 blades located 0.7D from the pit exit and an unsymmetrical guide vane located 0.25dh from the impeller outlet, has a steady flow pattern and high efficiency. Compared with the original scheme, the efficiencies at the positive and reverse design conditions reach 67.23% and 58.32%, respectively, which is 6% higher than the original model at the design condition and 5% higher at the optimum operating condition, achieving the purpose of the improvement. According to the runner blade angle of the optimized solution, the model synthetic characteristic curve was drawn and the internal flow field characteristics were analyzed under the optimal positive and reverse conditions. The numerical calculation shows that, owing to the lack of a stay vane to recover the energy in the outlet runner chamber, the flow regime is not steady enough in the outlet passage, which is the main reason the efficiency is lower at the reverse condition than at the positive condition.

  6. Application of the MacCormack scheme to overland flow routing for high-spatial resolution distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Zhang, Ling; Nan, Zhuotong; Liang, Xu; Xu, Yi; Hernández, Felipe; Li, Lianxia

    2018-03-01

    Although process-based distributed hydrological models (PDHMs) have evolved rapidly over the last few decades, their extensive application is still challenged by computational expense. This study attempted, for the first time, to apply the numerically efficient MacCormack algorithm to overland flow routing in a representative high-spatial-resolution PDHM, i.e., the distributed hydrology-soil-vegetation model (DHSVM), in order to improve its computational efficiency. The analytical verification indicates that both the semi and full versions of the MacCormack scheme exhibit robust numerical stability and are more computationally efficient than the conventional explicit linear scheme. The full version outperforms the semi version in terms of simulation accuracy when the same time step is adopted. The semi-MacCormack scheme was implemented into DHSVM (version 3.1.2) to solve the kinematic wave equations for overland flow routing. The performance and practicality of the enhanced DHSVM-MacCormack model were assessed by performing two groups of modeling experiments in the Mercer Creek watershed, a small urban catchment near Bellevue, Washington. The experiments show that DHSVM-MacCormack can considerably improve the computational efficiency without compromising the simulation accuracy of the original DHSVM model. More specifically, with the same computational environment and model settings, the computational time required by DHSVM-MacCormack can be reduced to several dozen minutes for a simulation period of three months (in contrast with one and a half days for the original DHSVM model) without noticeable sacrifice of accuracy. The MacCormack scheme proves to be applicable to overland flow routing in DHSVM, which implies that it can be coupled into other PDHMs for watershed routing, either to significantly improve their computational efficiency or to make kinematic wave routing computationally feasible for high-resolution modeling.
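
    For readers unfamiliar with the scheme, the sketch below applies the classic MacCormack predictor-corrector to a 1-D kinematic wave equation for overland flow, dh/dt + dq/dx = r with a Manning-type rating q = (sqrt(S0)/n) h^(5/3). The grid spacing, time step, roughness, slope, and rainfall rate are illustrative assumptions, not DHSVM-MacCormack settings.

```python
# Illustrative 1-D MacCormack predictor-corrector for the kinematic wave
# equation dh/dt + dq/dx = r with q = (sqrt(S0)/n) * h^(5/3).
import numpy as np

def flux(h, S0=0.01, n=0.03):
    return (np.sqrt(S0) / n) * np.maximum(h, 0.0) ** (5.0 / 3.0)

def maccormack_step(h, dt, dx, r):
    q = flux(h)
    # Predictor: forward difference in space
    h_star = h.copy()
    h_star[:-1] = h[:-1] - dt / dx * (q[1:] - q[:-1]) + dt * r
    h_star[-1] = h_star[-2]                      # simple outflow boundary
    q_star = flux(h_star)
    # Corrector: backward difference in space, then average with the old state
    h_new = h.copy()
    h_new[1:] = 0.5 * (h[1:] + h_star[1:]
                       - dt / dx * (q_star[1:] - q_star[:-1]) + dt * r)
    h_new[0] = 0.0                               # no inflow at the upstream end
    return np.maximum(h_new, 0.0)

if __name__ == "__main__":
    dx, dt, r = 10.0, 0.5, 1e-5                  # m, s, m/s of effective rainfall
    h = np.zeros(100)
    for _ in range(600):
        h = maccormack_step(h, dt, dx, r)
    print("outlet depth (m): %.4f" % h[-1])
```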

  7. Overview of Heat Addition and Efficiency Predictions for an Advanced Stirling Convertor

    NASA Technical Reports Server (NTRS)

    Wilson, Scott D.; Reid, Terry; Schifer, Nicholas; Briggs, Maxwell

    2011-01-01

    Past methods of predicting net heat input needed to be validated. The validation effort pursued several paths, including improving model inputs, using test hardware to provide validation data, and validating high-fidelity models. The validation test hardware provided direct measurement of net heat input for comparison to predicted values. The predicted net heat input was 1.7 percent less than the measured value, and initial calculations of measurement uncertainty were 2.1 percent (under review). Lessons learned during the validation effort were incorporated into the convertor modeling approach, which improved predictions of convertor efficiency.

  8. High-efficiency resonant coupled wireless power transfer via tunable impedance matching

    NASA Astrophysics Data System (ADS)

    Anowar, Tanbir Ibne; Barman, Surajit Das; Wasif Reza, Ahmed; Kumar, Narendra

    2017-10-01

    For magnetic resonant coupled wireless power transfer (WPT), axial movement of the near-field coupled coils adversely degrades the power transfer efficiency (PTE) of the system and often creates sub-resonance. This paper presents a tunable impedance matching technique based on optimum coupling tuning to enhance the efficiency of a resonant coupled WPT system. The optimum power transfer model is analysed from the equivalent circuit model via the reflected load principle, and adequate matching is achieved through optimum tuning of the coupling coefficients at both the transmitting and receiving ends of the system. Both simulations and experiments are performed to evaluate the theoretical model of the proposed matching technique, resulting in a PTE of over 80% at close coil proximity without shifting the original resonant frequency. Compared with fixed-coupling WPT, the extracted efficiency shows 15.1% and 19.9% improvements at centre-to-centre misalignments of 10 and 70 cm, respectively. Applying this technique, the extracted S21 parameter shows more than 10 dB improvement at both strong and weak coupling. Through the developed model, the optimum coupling tuning also significantly improves performance over matching techniques that use frequency tracking and tunable matching circuits.
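
    As background for the coupling-tuning idea, the textbook two-coil estimate of the maximum achievable PTE under optimal load matching is eta_max = U^2 / (1 + sqrt(1 + U^2))^2 with figure of merit U = k*sqrt(Q1*Q2). The quick sketch below evaluates it; the quality factors are assumed values, not the paper's measured parameters.

```python
# Standard two-coil estimate of the maximum power transfer efficiency under
# optimal load matching. Coil quality factors below are assumptions.
import math

def max_pte(k, q1, q2):
    u = k * math.sqrt(q1 * q2)                       # coupled-resonator figure of merit
    return u**2 / (1.0 + math.sqrt(1.0 + u**2))**2

if __name__ == "__main__":
    q1 = q2 = 300.0                                  # assumed unloaded quality factors
    for k in (0.001, 0.01, 0.05, 0.1):               # coupling falls off with separation
        print(f"k = {k:5.3f}  ->  max PTE = {max_pte(k, q1, q2):.1%}")
```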

  9. Quality and Efficiency Improvement Tools for Every Radiologist.

    PubMed

    Kudla, Alexei U; Brook, Olga R

    2018-06-01

    In an era of value-based medicine, data-driven quality improvement is more important than ever to ensure safe and efficient imaging services. Familiarity with high-value tools enables all radiologists to successfully engage in quality and efficiency improvement. In this article, we review the model for improvement, strategies for measurement, and common practical tools with real-life examples that include Run chart, Control chart (Shewhart chart), Fishbone (Cause-and-Effect or Ishikawa) diagram, Pareto chart, 5 Whys, and Root Cause Analysis. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
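
    As a small worked example of one of the tools mentioned (an individuals Shewhart control chart), the sketch below computes the conventional control limits from the average moving range; the turnaround-time data are fabricated for demonstration.

```python
# Individuals (I) control chart limits using the average moving range:
# UCL/LCL = mean +/- 2.66 * MRbar, where 2.66 = 3/d2 with d2 = 1.128.
# The report turnaround times below are fabricated illustration data.
import numpy as np

def i_chart_limits(x):
    x = np.asarray(x, dtype=float)
    mr_bar = np.mean(np.abs(np.diff(x)))            # average moving range
    center = x.mean()
    return center, center + 2.66 * mr_bar, center - 2.66 * mr_bar

if __name__ == "__main__":
    tat_minutes = [32, 41, 38, 35, 44, 39, 36, 70, 37, 40]   # hypothetical turnaround times
    center, ucl, lcl = i_chart_limits(tat_minutes)
    flags = [t for t in tat_minutes if t > ucl or t < lcl]   # special-cause signals
    print(f"center={center:.1f}  UCL={ucl:.1f}  LCL={lcl:.1f}  out-of-control: {flags}")
```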

  10. Market-Based Higher Education: Does Colorado's Voucher Model Improve Higher Education Access and Efficiency?

    ERIC Educational Resources Information Center

    Hillman, Nicholas W.; Tandberg, David A.; Gross, Jacob P. K.

    2014-01-01

    In 2004, Colorado introduced the nation's first voucher model for financing public higher education. With state appropriations now allocated to students, rather than institutions, state officials expect this model to create cost efficiencies while also expanding college access. Using difference-in-difference regression analysis, we find limited…

  11. Knowledge discovery from data and Monte-Carlo DEA to evaluate technical efficiency of mental health care in small health areas

    PubMed Central

    García-Alonso, Carlos; Pérez-Naranjo, Leonor

    2009-01-01

    Introduction: Knowledge management, based on information transfer between experts and analysts, is crucial for the validity and usability of data envelopment analysis (DEA). Aim: To design and develop a methodology i) to assess the technical efficiency of small health areas (SHA) in an uncertain environment, and ii) to transfer information between experts and operational models, in both directions, to improve expert knowledge. Method: A procedure derived from knowledge discovery from data (KDD) is used to select, interpret and weigh DEA inputs and outputs. Based on the KDD results, an expert-driven Monte-Carlo DEA model was designed to assess the technical efficiency of SHA in Andalusia. Results: In terms of probability, SHA 29 is the most efficient, whereas SHA 22 is very inefficient. 73% of the analysed SHA have a probability of being efficient (Pe) >0.9 and 18% <0.5. Conclusions: Expert knowledge is necessary to design and validate any operational model. KDD techniques make the transfer of information from experts to any operational model easy, and the results obtained from the latter improve expert knowledge.
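
    To make the Monte-Carlo DEA idea concrete, the sketch below solves an input-oriented CCR DEA linear program for each unit and wraps it in a small Monte-Carlo loop that perturbs the data and estimates the probability of being efficient. The data, perturbation level, and efficiency cut-off are illustrative assumptions, not the study's inputs, outputs, or weights.

```python
# Input-oriented CCR DEA solved as a linear program, inside a Monte-Carlo loop
# that perturbs the data and reports the probability of being efficient.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """X: (m inputs x n units), Y: (s outputs x n units). Returns theta* for unit j0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                      # minimize theta
    A_in = np.c_[-X[:, [j0]], X]                     # sum_j lam_j x_ij <= theta * x_i,j0
    b_in = np.zeros(m)
    A_out = np.c_[np.zeros((s, 1)), -Y]              # sum_j lam_j y_rj >= y_r,j0
    b_out = -Y[:, j0]
    res = linprog(c, A_ub=np.r_[A_in, A_out], b_ub=np.r_[b_in, b_out],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.uniform(5, 15, size=(2, 8))              # 2 inputs, 8 hypothetical health areas
    Y = rng.uniform(10, 30, size=(1, 8))             # 1 output
    trials, hits = 200, np.zeros(8)
    for _ in range(trials):
        Xp = X * rng.normal(1.0, 0.05, X.shape)      # 5% input uncertainty
        Yp = Y * rng.normal(1.0, 0.05, Y.shape)
        for j in range(8):
            hits[j] += ccr_efficiency(Xp, Yp, j) >= 0.999
    print("P(efficient) per unit:", np.round(hits / trials, 2))
```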

  12. Hot kinetic model as a guide to improve organic photovoltaic materials.

    PubMed

    Sosorev, Andrey Yu; Godovsky, Dmitry Yu; Paraschuk, Dmitry Yu

    2018-01-31

    The modeling of organic solar cells (OSCs) can provide a roadmap for their further improvement. Many OSC models have been proposed in recent years; however, the impact on OSC efficiency of the key intermediates between photons and electricity, namely hot charge-transfer (CT) states, remains highly ambiguous. In this study, we suggest an analytical kinetic model for OSCs that considers two-step charge generation via hot CT states. This hot kinetic model allowed us to evaluate the impact of different material parameters on OSC performance: the driving force for charge separation, optical bandgap, charge mobility, geminate recombination rate, thermalization rate, average electron-hole separation distance in the CT state, dielectric permittivity, reorganization energy, and charge delocalization. In contrast to the widespread trend of lowering the material bandgap, the model predicts that this approach is only efficient when combined with improvement of the other material properties. The most promising ways to increase OSC performance are decreasing the reorganization energy (the energy change accompanying CT from the donor molecule to the acceptor), increasing the dielectric permittivity, and increasing charge delocalization. The model suggests that there are no fundamental limitations preventing OSC efficiencies above 20%.
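
    The paper's analytical model is not reproduced in the abstract; purely as a generic illustration of two-step charge generation through hot CT states, a branching-probability estimate of the free-carrier yield could be written as below. All rate constants are invented placeholders and the branching structure is a textbook simplification, not the authors' hot kinetic model.

```python
# Generic two-step branching sketch: a hot CT state either dissociates into
# free charges (k_hot) or thermalizes (k_therm) to a relaxed CT state, which
# in turn either dissociates (k_cold) or recombines geminately (k_rec).
# This is NOT the authors' hot kinetic model; rate constants are placeholders.

def free_carrier_yield(k_hot, k_therm, k_cold, k_rec):
    p_hot = k_hot / (k_hot + k_therm)            # escape directly from the hot CT state
    p_cold = k_cold / (k_cold + k_rec)           # escape after thermalization
    return p_hot + (1.0 - p_hot) * p_cold

if __name__ == "__main__":
    # Slowing geminate recombination raises the yield in this toy picture.
    print(f"baseline yield:        {free_carrier_yield(1e12, 5e12, 1e9, 1e10):.2f}")
    print(f"slower recombination:  {free_carrier_yield(1e12, 5e12, 1e9, 1e9):.2f}")
```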

  13. Photosynthesis, light use efficiency, and yield of reduced-chlorophyll soybean mutants in field conditions

    USDA-ARS?s Scientific Manuscript database

    Reducing chlorophyll (chl) content may improve the conversion efficiency of absorbed radiation into biomass (ec) and therefore yield in dense monoculture crops by improving light penetration and distribution within the canopy. Modeling suggests that reducing chl content may also reduce leaf temperat...

  14. Optimum outlier model for potential improvement of environmental cleaning and disinfection.

    PubMed

    Rupp, Mark E; Huerta, Tomas; Cavalieri, R J; Lyden, Elizabeth; Van Schooneveld, Trevor; Carling, Philip; Smith, Philip W

    2014-06-01

    The effectiveness and efficiency of 17 housekeepers in the terminal cleaning of 292 hospital rooms were evaluated through adenosine triphosphate detection. A subgroup of housekeepers was identified who were significantly more effective and efficient than their coworkers. These optimum outliers may be used in performance improvement to optimize environmental cleaning.

  15. Emergency Preparedness in the Workplace: The Flulapalooza Model for Mass Vaccination

    PubMed Central

    Aliyu, Muktar H.; Byrne, Daniel W.; Qian, Keqin; McGown, Paula; Kinman, Patricia O.; Hanson, Katherine Louise; Culpepper, Demoyne; Cooley, Tamara J.; Yarbrough, Mary I.

    2017-01-01

    Objectives. To explore whether an emergency preparedness structure is a feasible, efficient, and sustainable way for health care organizations to manage mass vaccination events. Methods. We used the Hospital Incident Command System to conduct a 1-day annual mass influenza vaccination event at Vanderbilt University Medical Center over 5 successive years (2011–2015). Using continuous quality improvement principles, we assessed whether changes in layout, supply management, staffing, and documentation systems improved efficiency. Results. A total of 66 591 influenza vaccines were administered at 5 annual Flulapalooza events; 13 318 vaccines per event on average. Changes to the physical layout, staffing mix, and documentation processes improved vaccination efficiency 74%, from approximately 38 to 67 vaccines per hour per vaccinator, while reducing overall staffing needs by 38%. An unexpected finding was the role of social media in facilitating active engagement. Conclusions. Health care organizations can use a closed point-of-dispensing model and Hospital Incident Command System to conduct mass vaccination events, and can adopt the “Flulapalooza method” as a best practice model to enhance efficiency. PMID:28892449

  16. Reducing Vehicle Weight and Improving U.S. Energy Efficiency Using Integrated Computational Materials Engineering

    NASA Astrophysics Data System (ADS)

    Joost, William J.

    2012-09-01

    Transportation accounts for approximately 28% of U.S. energy consumption with the majority of transportation energy derived from petroleum sources. Many technologies such as vehicle electrification, advanced combustion, and advanced fuels can reduce transportation energy consumption by improving the efficiency of cars and trucks. Lightweight materials are another important technology that can improve passenger vehicle fuel efficiency by 6-8% for each 10% reduction in weight while also making electric and alternative vehicles more competitive. Despite the opportunities for improved efficiency, widespread deployment of lightweight materials for automotive structures is hampered by technology gaps most often associated with performance, manufacturability, and cost. In this report, the impact of reduced vehicle weight on energy efficiency is discussed with a particular emphasis on quantitative relationships determined by several researchers. The most promising lightweight materials systems are described along with a brief review of the most significant technical barriers to their implementation. For each material system, the development of accurate material models is critical to support simulation-intensive processing and structural design for vehicles; improved models also contribute to an integrated computational materials engineering (ICME) approach for addressing technical barriers and accelerating deployment. The value of computational techniques is described by considering recent ICME and computational materials science success stories with an emphasis on applying problem-specific methods.

  17. Electrical and Optical Enhancement in Internally Nanopatterned Organic Light-Emitting Diodes

    NASA Astrophysics Data System (ADS)

    Fina, Michael Dane

    Organic light-emitting diodes (OLEDs) have made tremendous technological progress in the past two decades and have emerged as a top competitor for next-generation light-emitting displays and lighting. State-of-the-art OLEDs have been reported in the literature to approach, and even surpass, white fluorescent tube efficiency. However, despite rapid technological progress, efficiency metrics must be improved to compete with traditional inorganic light-emitting diode (LED) technology. Organic materials possess specialized traits that permit manipulations of the light-emitting cavity. Overall, as demonstrated within, these modifications can be used to improve electrical and optical device efficiencies. This work is focused on analyzing the effect that nanopatterned geometric modifications to the organic active layers have on device efficiency. In general, OLED efficiency is complicated by the complex, coupled processes which contribute to spontaneous dipole emission. A composite of three sub-systems (electrical, exciton and optical) ultimately dictates the OLED device efficiency. OLED electrical operation is believed to take place via a low-mobility-modified Schottky injection process. In the injection-limited regime, geometric effects are expected to modify the local electric field, leading to device current enhancement. It is shown that the patterning effect can be used to enhance charge carrier parity, thereby enhancing overall recombination. Current density and luminance characteristics are shown to be improved by OLED nanopatterning, both by the model developed within and by experimental techniques. Next, the optical enhancement effects produced by the nanopatterned array are considered. Finite-difference time-domain (FDTD) simulations are used to determine the positional and spectral optical enhancement for the nanopatterned device. The results show beneficial effects on device performance. The optical enhancements are related to the reduction in internal radiative quenching (improved internal quantum efficiency) and improvement in light extraction (improved outcoupling efficiency). Furthermore, the electrical model is used to construct a positional radiative efficiency map that, when combined with the optical enhancement, reveals the overall external quantum efficiency enhancement.

  18. Release modeling and comparison of nanoarchaeosomal, nanoliposomal and pegylated nanoliposomal carriers for paclitaxel.

    PubMed

    Movahedi, Fatemeh; Ebrahimi Shahmabadi, Hasan; Alavi, Seyed Ebrahim; Koohi Moftakhari Esfahani, Maedeh

    2014-09-01

    Breast cancer is the most prevalent cancer among women. Recently, delivery by nanocarriers has led to remarkable advances in the treatment of numerous cancers. Lipid nanocarriers are an important class, with liposomes and archaeosomes among the most common. In this work, paclitaxel was formulated and characterized in nanoliposomal and nanoarchaeosomal form to improve efficiency. To increase stability, efficiency, and solubility, polyethylene glycol 2000 (PEG 2000) was added to some samples. An MTT assay confirmed the effectiveness of the nanocarriers on the MCF-7 cell line, and size measurements confirmed the nanoscale of the particles. Nanoarchaeosomal carriers demonstrated the highest encapsulation efficiency and the lowest release rate. On the other hand, the pegylated nanoliposomal carrier showed higher loading efficiency and less release compared with the nanoliposomal carrier, which confirms the effect of PEG on improving stability and efficiency. Additionally, the release pattern was modeled using an artificial neural network (ANN) and a genetic algorithm (GA). ANN modeling for release prediction resulted in R values of 0.976, 0.989, and 0.999 for nanoliposomal, pegylated nanoliposomal, and nanoarchaeosomal paclitaxel, respectively, while GA modeling led to values of 0.954, 0.951, and 0.976. ANN modeling was more successful in predicting release than the GA strategy.
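
    As a toy version of ANN-based release prediction (not the authors' network or data; the synthetic release curve and network size are assumptions), a small regressor could be fit as follows.

```python
# Fit a small neural network to a synthetic cumulative-release curve and report
# the correlation coefficient R between predicted and observed release.
# Data and architecture are illustrative assumptions only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.linspace(0, 72, 25)                           # hours
release = 100 * (1 - np.exp(-0.05 * t))              # synthetic first-order release (%)
release += rng.normal(0, 2, t.size)                  # measurement noise

model = MLPRegressor(hidden_layer_sizes=(8, 8), solver="lbfgs",
                     max_iter=5000, random_state=0)
model.fit(t.reshape(-1, 1), release)
pred = model.predict(t.reshape(-1, 1))
r = np.corrcoef(release, pred)[0, 1]
print(f"R between observed and predicted release: {r:.3f}")
```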

  19. An improvement in the calculation of the efficiency of oxidative phosphorylation and rate of energy dissipation in mitochondria

    NASA Astrophysics Data System (ADS)

    Ghafuri, Mohazabeh; Golfar, Bahareh; Nosrati, Mohsen; Hoseinkhani, Saman

    2014-12-01

    The process of ATP production is one of the most vital processes in living cells and occurs with high efficiency. Thermodynamic evaluation of this process and of the factors involved in oxidative phosphorylation can provide a valuable guide for increasing energy production efficiency in research and industry. Although energy transduction has been studied qualitatively in several works, there are only a few brief reviews based on mathematical models of this subject. In our previous work, we suggested a mathematical model for ATP production based on non-equilibrium thermodynamic principles. In the present study, based on new findings on the respiratory chain of animal mitochondria, Golfar's model has been used to generate improved results for the efficiency of oxidative phosphorylation and the rate of energy loss. The results calculated with the modified coefficients for the proton pumps of the respiratory chain enzymes are closer to the experimental results and validate the model.

  20. Numerical flow simulation and efficiency prediction for axial turbines by advanced turbulence models

    NASA Astrophysics Data System (ADS)

    Jošt, D.; Škerlavaj, A.; Lipej, A.

    2012-11-01

    Numerical prediction of the efficiency of a 6-blade Kaplan turbine is presented. First, the results of steady-state analyses performed with different turbulence models for different operating regimes are compared to measurements. For small and optimal runner blade angles the efficiency was predicted quite accurately, but for the maximal blade angle the discrepancy between calculated and measured values was quite large. With transient analysis, especially when the Scale Adaptive Simulation Shear Stress Transport (SAS SST) model with zonal Large Eddy Simulation (ZLES) in the draft tube was used, the predicted efficiency was significantly improved. The improvement was seen at all operating points, but it was largest for maximal discharge. The reason was better flow simulation in the draft tube. Details of the turbulent structure in the draft tube obtained by SST, SAS SST, and SAS SST with ZLES are illustrated in order to explain the reasons for the differences in flow energy losses obtained with the different turbulence models.

  1. Energy-efficiency program for clothes washers, clothes dryers, and dishwashers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1977-11-01

    The objectives of this study of dishwashers, clothes washers, and clothes dryers are: to evaluate existing energy efficiency test procedures and recommend the use of specific test procedures for each appliance group and to establish the maximum economically and technologically feasible energy-efficiency improvement goals for each appliance group. Specifically, the program requirements were to determine the energy efficiency of the 1972 models, to evaluate the feasibility improvements that could be implemented by 1980 to maximize energy efficiency, and to calculate the percentage efficiency improvement based on the 1972 baseline and the recommended 1980 targets. The test program was conducted using 5 dishwashers, 4 top-loading clothes washers, one front-loading clothes washer, 4 electric clothes dryers, and 4 gas clothes dryers. (MCW)

  2. On the Asymptotic Relative Efficiency of Planned Missingness Designs.

    PubMed

    Rhemtulla, Mijke; Savalei, Victoria; Little, Todd D

    2016-03-01

    In planned missingness (PM) designs, certain data are set a priori to be missing. PM designs can increase validity and reduce cost; however, little is known about the loss of efficiency that accompanies these designs. The present paper compares PM designs to reduced sample (RN) designs that have the same total number of data points concentrated in fewer participants. In 4 studies, we consider models for both observed and latent variables, designs that do or do not include an "X set" of variables with complete data, and a full range of between- and within-set correlation values. All results are obtained using asymptotic relative efficiency formulas, and thus no data are generated; this novel approach allows us to examine whether PM designs have theoretical advantages over RN designs removing the impact of sampling error. Our primary findings are that (a) in manifest variable regression models, estimates of regression coefficients have much lower relative efficiency in PM designs as compared to RN designs, (b) relative efficiency of factor correlation or latent regression coefficient estimates is maximized when the indicators of each latent variable come from different sets, and (c) the addition of an X set improves efficiency in manifest variable regression models only for the parameters that directly involve the X-set variables, but it substantially improves efficiency of most parameters in latent variable models. We conclude that PM designs can be beneficial when the model of interest is a latent variable model; recommendations are made for how to optimize such a design.

  3. A Prototype for the Support of Integrated Software Process Development and Improvement

    NASA Astrophysics Data System (ADS)

    Porrawatpreyakorn, Nalinpat; Quirchmayr, Gerald; Chutimaskul, Wichian

    An efficient software development process is one of the key success factors for quality software. Both the appropriate establishment and the continuous improvement of integrated project management and of the software development process can result in efficiency. This paper therefore proposes a software process maintenance framework which consists of two core components: an integrated PMBOK-Scrum model describing how to establish a comprehensive set of project management and software engineering processes, and a software development maturity model advocating software process improvement. In addition, a prototype tool to support the framework is introduced.

  4. Predictive control strategy of a gas turbine for improvement of combined cycle power plant dynamic performance and efficiency.

    PubMed

    Mohamed, Omar; Wang, Jihong; Khalil, Ashraf; Limhabrash, Marwan

    2016-01-01

    This paper presents a novel strategy for applying model predictive control (MPC) to a large gas turbine power plant, as part of our research progress towards improving plant thermal efficiency and load-frequency control performance. A generalized state space model of a large gas turbine covering the whole steady operational range is designed using the subspace identification method, with closed-loop data as input to the identification algorithm. The model is then used to develop an MPC that is integrated into the plant's existing control strategy. The principle of the strategy is to feed the reference signals of the pilot valve, the natural gas valve, and the compressor pressure ratio controller with the optimized decisions given by the MPC instead of applying the control signals directly. If the set points for the compressor controller and turbine valves are sent in a timely manner, more kinetic energy is available in the plant to deliver faster output responses, and the overall system efficiency is improved. Simulation results have illustrated the feasibility of the proposed application, which achieves significant improvement in frequency variations and load-following capability, translating into an improvement in overall combined cycle thermal efficiency of around 1.1 % compared with the existing strategy.
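
    To make the MPC idea concrete, the sketch below implements a generic unconstrained linear MPC on a made-up second-order model, computing the optimal input sequence in closed form and applying only the first move each step. The plant matrices, horizon, and weights are placeholders, not the identified gas turbine model from the paper.

```python
# Unconstrained linear MPC sketch: build prediction matrices for x+ = A x + B u,
# y = C x, then choose the input sequence that minimizes tracking error plus an
# input penalty. Plant matrices, horizon, and weights are illustrative only.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
N, lam = 20, 0.01                                    # prediction horizon, input weight

def mpc_first_move(x0, ref):
    """First optimal input for state x0 (column vector) and reference ref (N x 1)."""
    F = np.vstack([C @ np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            G[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B)[0, 0]
    # min ||ref - F x0 - G U||^2 + lam ||U||^2  ->  ridge-regularized least squares
    H = G.T @ G + lam * np.eye(N)
    U = np.linalg.solve(H, G.T @ (ref - F @ x0))
    return float(U[0, 0])                            # receding horizon: apply first move only

if __name__ == "__main__":
    x = np.array([0.0, 0.0])
    ref = np.ones((N, 1))                            # track a unit output setpoint
    for _ in range(30):
        u = mpc_first_move(x.reshape(-1, 1), ref)
        x = A @ x + B.ravel() * u
    print("output after 30 steps: %.3f" % (C @ x)[0])
```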

  5. Automated dynamic analytical model improvement for damped structures

    NASA Technical Reports Server (NTRS)

    Fuh, J. S.; Berman, A.

    1985-01-01

    A method is described to improve a linear, nonproportionally damped analytical model of a structure. The procedure finds the smallest changes in the analytical model such that the improved model matches the measured modal parameters. Features of the method are: (1) the ability to properly treat the complex-valued modal parameters of a damped system; (2) applicability to realistically large structural models; and (3) computational efficiency, without involving eigensolutions or the inversion of a large matrix.

  6. Parametric modeling and stagger angle optimization of an axial flow fan

    NASA Astrophysics Data System (ADS)

    Li, M. X.; Zhang, C. H.; Liu, Y.; Y Zheng, S.

    2013-12-01

    Axial flow fans are widely used in every field of industrial production. Improving their efficiency is a sustained and urgent demand of domestic industry. The optimization of stagger angle is an important method to improve fan performance. Parametric modeling and automation of the calculation process are realized in this paper to improve optimization efficiency. Geometric modeling and mesh division are parameterized based on GAMBIT. Parameter setting and flow field calculation are completed in the batch mode of FLUENT. A control program developed in Visual C++ manages the data exchange between these tools. It also extracts calculation results for the optimization algorithm module (provided by Matlab), which generates the optimization control parameters that are fed back to the modeling module. The center line of the blade airfoil, based on the CLARK Y profile, is constructed by the non-constant circulation and triangle discharge method. The stagger angles of six airfoil sections are optimized to reduce the influence of inlet shock loss as well as gas leakage in the blade tip clearance and hub resistance at the blade root. Finally, an optimal solution is obtained which meets the total pressure requirement under the given conditions and improves total pressure efficiency by about 6%.

  7. Optimization Control of the Color-Coating Production Process for Model Uncertainty

    PubMed Central

    He, Dakuo; Wang, Zhengsong; Yang, Le; Mao, Zhizhong

    2016-01-01

    Optimized control of the color-coating production process (CCPP) aims at reducing production costs and improving economic efficiency while meeting quality requirements. However, because optimization control of the CCPP is hampered by model uncertainty, a strategy that considers model uncertainty is proposed. Previous work has introduced a mechanistic model of CCPP based on process analysis to simulate the actual production process and generate process data. The partial least squares method is then applied to develop predictive models of film thickness and economic efficiency. To manage the model uncertainty, the robust optimization approach is introduced to improve the feasibility of the optimized solution. Iterative learning control is then utilized to further refine the model uncertainty. The constrained film thickness is transformed into one of the tracked targets to overcome the drawback that traditional iterative learning control cannot address constraints. The goal setting of economic efficiency is updated continuously according to the film thickness setting until this reaches its desired value. Finally, fuzzy parameter adjustment is adopted to ensure that the economic efficiency and film thickness converge rapidly to their optimized values under the constraint conditions. The effectiveness of the proposed optimization control strategy is validated by simulation results. PMID:27247563

  8. Optimization Control of the Color-Coating Production Process for Model Uncertainty.

    PubMed

    He, Dakuo; Wang, Zhengsong; Yang, Le; Mao, Zhizhong

    2016-01-01

    Optimized control of the color-coating production process (CCPP) aims at reducing production costs and improving economic efficiency while meeting quality requirements. However, because optimization control of the CCPP is hampered by model uncertainty, a strategy that considers model uncertainty is proposed. Previous work has introduced a mechanistic model of CCPP based on process analysis to simulate the actual production process and generate process data. The partial least squares method is then applied to develop predictive models of film thickness and economic efficiency. To manage the model uncertainty, the robust optimization approach is introduced to improve the feasibility of the optimized solution. Iterative learning control is then utilized to further refine the model uncertainty. The constrained film thickness is transformed into one of the tracked targets to overcome the drawback that traditional iterative learning control cannot address constraints. The goal setting of economic efficiency is updated continuously according to the film thickness setting until this reaches its desired value. Finally, fuzzy parameter adjustment is adopted to ensure that the economic efficiency and film thickness converge rapidly to their optimized values under the constraint conditions. The effectiveness of the proposed optimization control strategy is validated by simulation results.

  9. Improving Computational Efficiency of Prediction in Model-Based Prognostics Using the Unscented Transform

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew John; Goebel, Kai Frank

    2010-01-01

    Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
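
    As a compact illustration of the unscented transform itself, independent of the valve model (the nonlinear function and numbers below are placeholders), sigma points can be generated and propagated as follows.

```python
# Unscented transform: generate 2n+1 sigma points for (mean, cov), push each
# through a nonlinear function, and recombine them to approximate the output
# mean and covariance. The nonlinear map and numbers below are placeholders.
import numpy as np

def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=None):
    n = mean.size
    if kappa is None:
        kappa = 3.0 - n                                    # common heuristic
    lam = alpha**2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mean, mean + L.T, mean - L.T])      # rows are sigma points
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    y = np.array([np.atleast_1d(f(s)) for s in sigma])     # propagate each point
    y_mean = wm @ y
    d = y - y_mean
    y_cov = (wc[:, None] * d).T @ d
    return y_mean, y_cov

if __name__ == "__main__":
    mean = np.array([1.0, 0.5])
    cov = np.diag([0.04, 0.01])
    g = lambda x: np.array([x[0] * np.exp(-x[1])])         # stand-in nonlinear map
    m, P = unscented_transform(mean, cov, g)
    print("approx. output mean %.4f, variance %.6f" % (m[0], P[0, 0]))
```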

  10. Holographic heat engine within the framework of massive gravity

    NASA Astrophysics Data System (ADS)

    Mo, Jie-Xiong; Li, Gu-Qiang

    2018-05-01

    Heat engine models are constructed within the framework of massive gravity in this paper. For the four-dimensional charged black holes in massive gravity, it is shown that the existence of graviton mass improves the heat engine efficiency significantly. The situation is more complicated for the five-dimensional neutral black holes since the constant which corresponds to the third massive potential also contributes to the efficiency. It is also shown that the existence of graviton mass can improve the heat engine efficiency. Moreover, we probe how the massive gravity influences the behavior of the heat engine efficiency approaching the Carnot efficiency.

  11. Modelling of different measures for improving removal in a stormwater pond.

    PubMed

    German, J; Jansons, K; Svensson, G; Karlsson, D; Gustafsson, L G

    2005-01-01

    The effect of retrofitting an existing pond on removal efficiency and hydraulic performance was modelled using the commercial software Mike21 and compartmental modelling. The Mike21 model had previously been calibrated on the studied pond. Installation of baffles, the addition of culverts under a causeway and removal of an existing island were all studied as possible improvement measures in the pond. The subsequent effect on hydraulic performance and removal of suspended solids was then evaluated. Copper, cadmium, BOD, nitrogen and phosphorus removal were also investigated for that specific improvement measure showing the best results. Outcomes of this study reveal that all measures increase the removal efficiency of suspended solids. The hydraulic efficiency is improved for all cases, except for the case where the island is removed. Compartmental modelling was also used to evaluate hydraulic performance and facilitated a better understanding of the way each of the different measures affected the flow pattern and performance. It was concluded that the installation of baffles is the best of the studied measures resulting in a reduction in the annual load on the receiving lake by approximately 8,000 kg of suspended solids (25% reduction of the annual load), 2 kg of copper (10% reduction of the annual load) and 600 kg of BOD (10% reduction of the annual load).

  12. Options to improve energy efficiency for educational building

    NASA Astrophysics Data System (ADS)

    Jahan, Mafruha

    The cost of energy is a major factor that must be considered for educational facility budget planning purposes. The analysis of energy-related issues and options can be complex and requires significant time and detailed effort. One way to facilitate the inclusion of energy option planning in facility planning efforts is to utilize a tool that allows for quick appraisal of the facility energy profile. Once such an appraisal is accomplished, it is then possible to rank energy improvement options consistently with other facility needs and requirements. After an energy efficiency option has been determined to have meaningful value in comparison with other facility planning options, it is then possible to utilize the initial appraisal as the basis for an expanded consideration of additional facility and energy use detail using the same analytic system used for the initial appraisal. This thesis has developed a methodology and an associated analytic model to assist in these tasks and thereby improve the energy efficiency of educational facilities. A detailed energy efficiency analysis tool is described that utilizes specific university building characteristics such as size, architecture, envelope, lighting, occupancy, and thermal design, allowing the annual energy consumption to be reduced. Improving the energy efficiency of various aspects of an educational building's energy performance can be complex and can require significant time and experience to make decisions. The approach developed in this thesis initially assesses the energy design of a university building. This initial appraisal is intended to assist administrators in assessing the potential value of energy efficiency options for their particular facility. Subsequently, this scoping design can be extended, as another stage of the model, by local facility or planning personnel to add more details and engineering aspects to the initial screening model. This approach can assist university planning efforts to identify the most cost-effective combinations of energy efficiency strategies. The model analyzes and compares the payback periods of all proposed Energy Performance Measures (EPMs) to determine which has the greatest potential value.

  13. IMPROVING TACONITE PROCESSING PLANT EFFICIENCY BY COMPUTER SIMULATION, Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    William M. Bond; Salih Ersayin

    2007-03-30

    This project involved industrial scale testing of a mineral processing simulator to improve the efficiency of a taconite processing plant, namely the Minorca mine. The Concentrator Modeling Center at the Coleraine Minerals Research Laboratory, University of Minnesota Duluth, enhanced the capabilities of available software, Usim Pac, by developing mathematical models needed for accurate simulation of taconite plants. This project provided funding for this technology to prove itself in the industrial environment. As the first step, data representing existing plant conditions were collected by sampling and sample analysis. Data were then balanced and provided a basis for assessing the efficiency of individual devices and the plant, and also for performing simulations aimed at improving plant efficiency. Performance evaluation served as a guide in developing alternative process strategies for more efficient production. A large number of computer simulations were then performed to quantify the benefits and effects of implementing these alternative schemes. Modification of makeup ball size was selected as the most feasible option for the target performance improvement. This was combined with replacement of existing hydrocyclones with more efficient ones. After plant implementation of these modifications, plant sampling surveys were carried out to validate findings of the simulation-based study. Plant data showed very good agreement with the simulated data, confirming results of simulation. After the implementation of modifications in the plant, several upstream bottlenecks became visible. Despite these bottlenecks limiting full capacity, concentrator energy improvement of 7% was obtained. Further improvements in energy efficiency are expected in the near future. The success of this project demonstrated the feasibility of a simulation-based approach. Currently, the Center provides simulation-based service to all the iron ore mining companies operating in northern Minnesota, and future proposals are pending with non-taconite mineral processing applications.

  14. Blood flow dynamic improvement with aneurysm repair detected by a patient-specific model of multiple aortic aneurysms.

    PubMed

    Sughimoto, Koichi; Takahara, Yoshiharu; Mogi, Kenji; Yamazaki, Kenji; Tsubota, Ken'ichi; Liang, Fuyou; Liu, Hao

    2014-05-01

    Aortic aneurysms may cause turbulence of blood flow and result in energy loss, while grafting of the dilated aorta may ameliorate these hemodynamic disturbances and improve the energy efficiency of blood flow delivery. However, the energy efficiency of blood flow in an aortic aneurysm has been technically difficult to estimate and is not yet comprehensively understood. We devised a multiscale computational biomechanical model, introducing novel flow indices, to investigate a single male patient with multiple aortic aneurysms. Preoperative levels of wall shear stress and oscillatory shear index (OSI) were elevated but declined after staged grafting procedures: OSI decreased from 0.280 to 0.257 (first operation) and 0.221 (second operation). Grafting may strategically counter the loss of efficient blood delivery and improve the hemodynamics of the aorta. The energy efficiency of blood flow also improved postoperatively. Novel indices, the pulsatile pressure index (PPI) and the pulsatile energy loss index (PELI), were evaluated to characterize and quantify the energy loss of pulsatile blood flow. Mean PPI decreased from 0.445 to 0.423 (first operation) and 0.359 (second operation), while the preoperative PELI of 0.986 dropped to 0.820 and 0.831, respectively. Grafting contributed not only to ameliorating wall shear stress and the oscillatory shear index but also to improving the efficiency of blood flow. This patient-specific modeling will help in analyzing the mechanism of aortic aneurysm formation and may play an important role in quantifying the energy efficiency or loss in blood delivery.

  15. Tool Efficiency Analysis model research in SEMI industry

    NASA Astrophysics Data System (ADS)

    Lei, Ma; Nana, Zhang; Zhongqiu, Zhang

    2018-06-01

    One of the key goals in the semiconductor (SEMI) industry is to improve equipment throughput and to maximize equipment production efficiency. Based on SEMI standards for semiconductor equipment control, this paper defines the transition rules between different tool states and presents a TEA (Tool Efficiency Analysis) system model that analyzes tool performance automatically based on a finite state machine. The system was applied to fab tools and its effectiveness was verified successfully; the parameter values used to measure equipment performance were obtained, along with suggestions for improvement.
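
    As a rough sketch of the state-machine bookkeeping such a TEA model implies (the state names and allowed transitions below are assumptions loosely inspired by SEMI E10 categories, not the paper's definitions), tool time can be accumulated per state and turned into a utilization metric.

```python
# Minimal tool-state machine: enforce allowed transitions and accumulate time
# per state so that utilization-style metrics can be derived. State names and
# transition rules are illustrative assumptions, not SEMI-standard definitions.
ALLOWED = {
    "idle":             {"productive", "scheduled_down", "unscheduled_down"},
    "productive":       {"idle", "unscheduled_down"},
    "scheduled_down":   {"idle"},
    "unscheduled_down": {"idle"},
}

class ToolStateTracker:
    def __init__(self, state="idle"):
        self.state = state
        self.time_in_state = {s: 0.0 for s in ALLOWED}

    def advance(self, hours):
        """Accumulate dwell time in the current state."""
        self.time_in_state[self.state] += hours

    def transition(self, new_state):
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

    def utilization(self):
        total = sum(self.time_in_state.values())
        return self.time_in_state["productive"] / total if total else 0.0

if __name__ == "__main__":
    t = ToolStateTracker()
    for state, hours in [("productive", 16.0), ("idle", 2.0),
                         ("unscheduled_down", 4.0), ("idle", 1.0), ("productive", 1.0)]:
        t.transition(state)
        t.advance(hours)
    print(f"utilization: {t.utilization():.0%}")
```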

  16. Efficient model checking of network authentication protocol based on SPIN

    NASA Astrophysics Data System (ADS)

    Tan, Zhi-hua; Zhang, Da-fang; Miao, Li; Zhao, Dan

    2013-03-01

    Model checking is a very useful technique for verifying network authentication protocols. In order to improve the efficiency of modeling and verifying such protocols with model checking technology, this paper first proposes a universal formal description method for protocols. Combined with the model checker SPIN, the method can conveniently verify the properties of a protocol. Using several model simplification strategies, this paper models several protocols efficiently and reduces the state space of the models. Compared with previous literature, this paper achieves a higher degree of automation and better verification efficiency. Finally, based on the method described in the paper, we model and verify the Privacy and Key Management (PKM) authentication protocol. The experimental results show that the model checking method is effective and is useful for other authentication protocols.
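
    SPIN itself verifies Promela models; purely to illustrate the underlying explicit-state idea (the toy protocol, its states, and the invariant below are invented), a breadth-first reachability check of a safety property looks like this.

```python
# Explicit-state safety check by breadth-first search: enumerate reachable
# states of a toy two-party handshake and verify an invariant in every state.
# The protocol model and invariant are invented for illustration only.
from collections import deque

def successors(state):
    a, b = state                               # (initiator phase, responder phase)
    nxt = []
    if a == "idle":
        nxt.append(("sent_request", b))        # initiator sends a request
    if a == "sent_request" and b == "idle":
        nxt.append((a, "sent_challenge"))      # responder issues a challenge
    if b == "sent_challenge" and a == "sent_request":
        nxt.append(("authenticated", b))       # initiator answers the challenge
    if a == "authenticated":
        nxt.append((a, "authenticated"))       # responder accepts the session
    return nxt

def invariant(state):
    a, b = state
    # Safety property: the responder never authenticates before the initiator.
    return not (b == "authenticated" and a != "authenticated")

def check(initial):
    seen, queue = {initial}, deque([initial])
    while queue:
        s = queue.popleft()
        if not invariant(s):
            return False, s
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return True, len(seen)

if __name__ == "__main__":
    ok, info = check(("idle", "idle"))
    if ok:
        print("safety property holds; states explored:", info)
    else:
        print("safety property violated in state:", info)
```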

  17. Electronic and mechanical improvement of the receiving terminal of a free-space microwave power transmission system

    NASA Technical Reports Server (NTRS)

    Brown, W. C.

    1977-01-01

    Significant advancements were made in a number of areas: improved efficiency of the basic receiving element at low power density levels; improved resolution and confidence in efficiency measurements; mathematical modelling and computer simulation of the receiving element; and the design, construction, and testing of an environmentally protected two-plane construction suitable for low-cost, highly automated construction of large receiving arrays.

  18. Low-Cost CdTe/Silicon Tandem Solar Cells

    DOE PAGES

    Tamboli, Adele C.; Bobela, David C.; Kanevce, Ana; ...

    2017-09-06

    Achieving higher photovoltaic efficiency in single-junction devices is becoming increasingly difficult, but tandem modules offer the possibility of significant efficiency improvements. By device modeling we show that four-terminal CdTe/Si tandem solar modules offer the prospect of 25%-30% module efficiency, and technoeconomic analysis predicts that these efficiency gains can be realized at costs per Watt that are competitive with CdTe and Si single junction alternatives. The cost per Watt of the modeled tandems is lower than crystalline silicon, but slightly higher than CdTe alone. But, these higher power modules reduce area-related balance of system costs, providing increased value especially in area-constrained applications. This avenue for high-efficiency photovoltaics enables improved performance on a near-term timeframe, as well as a path to further reduced levelized cost of electricity as module and cell processes continue to advance.

  19. Low-Cost CdTe/Silicon Tandem Solar Cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tamboli, Adele C.; Bobela, David C.; Kanevce, Ana

    Achieving higher photovoltaic efficiency in single-junction devices is becoming increasingly difficult, but tandem modules offer the possibility of significant efficiency improvements. By device modeling we show that four-terminal CdTe/Si tandem solar modules offer the prospect of 25%-30% module efficiency, and technoeconomic analysis predicts that these efficiency gains can be realized at costs per Watt that are competitive with CdTe and Si single junction alternatives. The cost per Watt of the modeled tandems is lower than crystalline silicon, but slightly higher than CdTe alone. But, these higher power modules reduce area-related balance of system costs, providing increased value especially in area-constrained applications. This avenue for high-efficiency photovoltaics enables improved performance on a near-term timeframe, as well as a path to further reduced levelized cost of electricity as module and cell processes continue to advance.

  20. Subreflector model depending on elevation for the Tianma 65m Radio Telescope

    NASA Astrophysics Data System (ADS)

    Sun, Zheng-Xiong; Wang, Jin-Qing; Chen, Lan

    2016-08-01

    A subreflector adjustment system for the Tianma 65 m radio telescope, administered by Shanghai Astronomical Observatory, has been installed to compensate for gravitational deformation of the main reflector and the structure supporting the subreflector. The position and attitude of the subreflector are variable in order to improve the efficiency at different elevations. The subreflector model has the goal of improving the antenna's performance. A new fitting formulation which is different from the traditional formulation is proposed to reduce the fitting error in the Y direction. The only difference in the subreflector models of the 65m radio telescope is the bias of a constant term in the Z direction. We have investigated the effect of movements of the subreflector on the pointing of the antenna. The results of these performance measurements made by moving the antenna in elevation show that the subreflector model can effectively improve the efficiency of the 65 m radio telescope at each elevation. An antenna efficiency of about 60% at the Ku band is reached in the whole angular range of elevation.
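
    A common way to build such a model is to fit the measured optimal subreflector offsets against elevation with a gravity-type basis (a constant plus sine and cosine of elevation). The sketch below shows that fit on fabricated data; the basis choice and the numbers are assumptions, not the Tianma 65 m model.

```python
# Least-squares fit of subreflector offset versus elevation using the basis
# z(el) = c0 + c1*sin(el) + c2*cos(el), a form often used for gravitational
# deformation models. The "measured" offsets below are fabricated.
import numpy as np

el_deg = np.array([10, 20, 30, 40, 50, 60, 70, 80])
z_mm = np.array([3.1, 2.6, 2.0, 1.2, 0.5, -0.2, -0.8, -1.2])   # fabricated offsets

el = np.radians(el_deg)
A = np.column_stack([np.ones_like(el), np.sin(el), np.cos(el)])
coef, *_ = np.linalg.lstsq(A, z_mm, rcond=None)

def z_model(elevation_deg):
    e = np.radians(elevation_deg)
    return coef[0] + coef[1] * np.sin(e) + coef[2] * np.cos(e)

print("fit coefficients (mm):", np.round(coef, 3))
print("predicted offset at 45 deg: %.2f mm" % z_model(45.0))
```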

  1. The Position and Attitude of Sub-reflector Modeling for TM65 m Radio Telescope

    NASA Astrophysics Data System (ADS)

    Sun, Z. X.; Chen, L.; Wang, J. Q.

    2016-01-01

    In the course of astronomical observations, as the elevation angle changes, a large radio telescope undergoes different degrees of deformation in the sub-reflector support, back frame, main reflector, etc., which leads to a dramatic decline in antenna efficiency at both high and low elevations. A sub-reflector system for the Tian Ma 65 m radio telescope has been installed in order to compensate for the gravitational deformations of the sub-reflector support and the main reflector. The position and attitude of the sub-reflector are variable in order to improve the pointing performance and the efficiency at different elevations. In this paper, the influence of changes in the position and attitude of the sub-reflector on antenna efficiency in the X and Ku bands is studied. A model has been constructed, by observing radio sources, to determine the position and attitude of the sub-reflector as a function of elevation, together with a pointing compensation model. In addition, antenna efficiency was tested with the sub-reflector position adjusted and with it fixed. The results show that the sub-reflector model can effectively improve the efficiency of the 65 m radio telescope. In the X band, the aperture efficiency of the radio telescope reaches more than 60% over the entire elevation range.

  2. Improving actuation efficiency through variable recruitment hydraulic McKibben muscles: modeling, orderly recruitment control, and experiments.

    PubMed

    Meller, Michael; Chipka, Jordan; Volkov, Alexander; Bryant, Matthew; Garcia, Ephrahim

    2016-11-03

    Hydraulic control systems have become increasingly popular as the means of actuation for human-scale legged robots and assistive devices. One of the biggest limitations to these systems is their run time untethered from a power source. One way to increase endurance is by improving actuation efficiency. We investigate reducing servovalve throttling losses by using a selective recruitment artificial muscle bundle comprised of three motor units. Each motor unit is made up of a pair of hydraulic McKibben muscles connected to one servovalve. The pressure and recruitment state of the artificial muscle bundle can be adjusted to match the load in an efficient manner, much like the firing rate and total number of recruited motor units is adjusted in skeletal muscle. A volume-based effective initial braid angle is used in the model of each recruitment level. This semi-empirical model is utilized to predict the efficiency gains of the proposed variable recruitment actuation scheme versus a throttling-only approach. A real-time orderly recruitment controller with pressure-based thresholds is developed. This controller is used to experimentally validate the model-predicted efficiency gains of recruitment on a robot arm. The results show that utilizing variable recruitment allows for much higher efficiencies over a broader operating envelope.
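
    To illustrate the orderly-recruitment logic described above (the thresholds, unit capacity, and load profile below are placeholders, not the experimental controller's values), a force-threshold recruitment rule can be sketched as follows.

```python
# Threshold-based orderly recruitment: activate just enough motor units to
# carry the demanded force, then share the load within the active set.
# Unit capacity and hysteresis values are illustrative placeholders.
UNIT_CAPACITY_N = 400.0          # assumed peak force of one McKibben pair
N_UNITS = 3

def recruit(force_demand_n, hysteresis_n=40.0, active_units=1):
    """Return (active_units, per-unit force command) for a demanded force."""
    # Recruit another unit when the active set approaches saturation...
    while (active_units < N_UNITS and
           force_demand_n > active_units * UNIT_CAPACITY_N - hysteresis_n):
        active_units += 1
    # ...and derecruit when demand drops well below the smaller set's capacity.
    while (active_units > 1 and
           force_demand_n < (active_units - 1) * UNIT_CAPACITY_N - 2 * hysteresis_n):
        active_units -= 1
    return active_units, force_demand_n / active_units

if __name__ == "__main__":
    units = 1
    for demand in [150, 380, 650, 900, 500, 200]:
        units, per_unit = recruit(demand, active_units=units)
        print(f"demand {demand:4.0f} N -> {units} unit(s), {per_unit:6.1f} N each")
```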

  3. Advanced interface modelling of n-Si/HNO3 doped graphene solar cells to identify pathways to high efficiency

    NASA Astrophysics Data System (ADS)

    Zhao, Jing; Ma, Fa-Jun; Ding, Ke; Zhang, Hao; Jie, Jiansheng; Ho-Baillie, Anita; Bremner, Stephen P.

    2018-03-01

    In graphene/silicon solar cells, it is crucial to understand the transport mechanism of the graphene/silicon interface in order to further improve power conversion efficiency. Until now, the transport mechanism has predominantly been simplified as an ideal Schottky junction. However, such an ideal Schottky contact is never realised experimentally. According to the literature, doped graphene shows the properties of a semiconductor; therefore, it is physically more accurate to model the graphene/silicon junction as a heterojunction. In this work, HNO3-doped graphene/silicon solar cells were fabricated with a power conversion efficiency of 9.45%. Extensive characterization and first-principles calculations were carried out to establish an advanced technology computer-aided design (TCAD) model, in which p-doped graphene forms a straddling heterojunction with the n-type silicon. In comparison with simple Schottky junction models, our TCAD model paves the way for thorough investigation of the sensitivity of solar cell performance to graphene properties such as electron affinity. According to the TCAD heterojunction model, the cell performance can be improved up to 22.5% after optimization of the antireflection coatings and the rear structure, highlighting the great potential for fabricating high-efficiency graphene/silicon solar cells and other optoelectronic devices.

  4. Sequence determinants of improved CRISPR sgRNA design.

    PubMed

    Xu, Han; Xiao, Tengfei; Chen, Chen-Hao; Li, Wei; Meyer, Clifford A; Wu, Qiu; Wu, Di; Cong, Le; Zhang, Feng; Liu, Jun S; Brown, Myles; Liu, X Shirley

    2015-08-01

    The CRISPR/Cas9 system has revolutionized mammalian somatic cell genetics. Genome-wide functional screens using CRISPR/Cas9-mediated knockout or dCas9 fusion-mediated inhibition/activation (CRISPRi/a) are powerful techniques for discovering phenotype-associated gene function. We systematically assessed the DNA sequence features that contribute to single guide RNA (sgRNA) efficiency in CRISPR-based screens. Leveraging the information from multiple designs, we derived a new sequence model for predicting sgRNA efficiency in CRISPR/Cas9 knockout experiments. Our model confirmed known features and suggested new features including a preference for cytosine at the cleavage site. The model was experimentally validated for sgRNA-mediated mutation rate and protein knockout efficiency. Tested on independent data sets, the model achieved significant results in both positive and negative selection conditions and outperformed existing models. We also found that the sequence preference for CRISPRi/a is substantially different from that for CRISPR/Cas9 knockout and propose a new model for predicting sgRNA efficiency in CRISPRi/a experiments. These results facilitate the genome-wide design of improved sgRNA for both knockout and CRISPRi/a studies. © 2015 Xu et al.; Published by Cold Spring Harbor Laboratory Press.
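
    As a toy version of sequence-feature modeling (not the published model; the sequences and labels are synthetic and the features are simple one-hot position encodings), a predictor of sgRNA efficiency could be trained like this.

```python
# Train a logistic-regression classifier on one-hot encoded 20-nt spacer
# sequences to predict "efficient" vs "inefficient" guides. The synthetic data
# plant a preference for C near the cleavage-proximal position, echoing the
# kind of feature the paper reports; nothing here reproduces their model.
import numpy as np
from sklearn.linear_model import LogisticRegression

BASES = "ACGT"
rng = np.random.default_rng(0)

def one_hot(seq):
    x = np.zeros((len(seq), 4))
    for i, b in enumerate(seq):
        x[i, BASES.index(b)] = 1.0
    return x.ravel()

# Synthetic guides: label 1 is more likely when position 17 (0-based) is 'C'.
seqs, labels = [], []
for _ in range(2000):
    s = "".join(rng.choice(list(BASES), size=20))
    p = 0.8 if s[17] == "C" else 0.3
    seqs.append(s)
    labels.append(rng.random() < p)

X = np.array([one_hot(s) for s in seqs])
y = np.array(labels, dtype=int)
clf = LogisticRegression(max_iter=2000).fit(X[:1500], y[:1500])
print("held-out accuracy: %.2f" % clf.score(X[1500:], y[1500:]))
```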

  5. Improving Environmental Model Calibration and Prediction

    DTIC Science & Technology

    2011-01-18

    Final Report: Improving Environmental Model Calibration and Prediction. First, we have continued to ... develop tools for efficient global optimization of environmental models. Our algorithms are hybrid algorithms that combine evolutionary strategies ... toward practical hybrid optimization tools for environmental models.

  6. Biomechanics of forearm rotation: force and efficiency of pronator teres.

    PubMed

    Ibáñez-Gimeno, Pere; Galtés, Ignasi; Jordana, Xavier; Malgosa, Assumpció; Manyosa, Joan

    2014-01-01

    Biomechanical models are useful to assess the effect of muscular forces on bone structure. Using skeletal remains, we analyze pronator teres rotational efficiency and its force components throughout the entire flexion-extension and pronation-supination ranges by means of a new biomechanical model and 3D imaging techniques, and we explore the relationship between these parameters and skeletal structure. The results show that maximal efficiency is the highest in full elbow flexion and is close to forearm neutral position for each elbow angle. The vertical component of pronator teres force is the highest among all components and is greater in pronation and elbow extension. The radial component becomes negative in pronation and reaches lower values as the elbow flexes. Both components could enhance radial curvature, especially in pronation. The model also enables to calculate efficiency and force components simulating changes in osteometric parameters. An increase of radial curvature improves efficiency and displaces the position where the radial component becomes negative towards the end of pronation. A more proximal location of pronator teres radial enthesis and a larger humeral medial epicondyle increase efficiency and displace the position where this component becomes negative towards forearm neutral position, which enhances radial curvature. Efficiency is also affected by medial epicondylar orientation and carrying angle. Moreover, reaching an object and bringing it close to the face in a close-to-neutral position improve efficiency and entail an equilibrium between the forces affecting the elbow joint stability. When the upper-limb skeleton is used in positions of low efficiency, implying unbalanced force components, it undergoes plastic changes, which improve these parameters. These findings are useful for studies on ergonomics and orthopaedics, and the model could also be applied to fossil primates in order to infer their locomotor form. Moreover, activity patterns in human ancient populations could be deduced from parameters reported here.

  7. An Improved Perturb and Observe Algorithm for Photovoltaic Motion Carriers

    NASA Astrophysics Data System (ADS)

    Peng, Lele; Xu, Wei; Li, Liming; Zheng, Shubin

    2018-03-01

    An improved perturbation and observation algorithm for photovoltaic motion carriers is proposed in this paper. The model of the proposed algorithm is given using the Lambert W function and the tangent error method. Moreover, the tracking performance of the proposed algorithm is tested using Matlab simulations and experiments on a photovoltaic system. The results demonstrate that the improved algorithm has fast tracking speed and high efficiency. Furthermore, the energy conversion efficiency of the improved method is increased by nearly 8.2%.
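
    For context, the baseline perturb-and-observe loop that the paper improves upon works as sketched below; the synthetic PV curve and step size are illustrative assumptions, not the paper's Lambert-W-based model.

```python
# Classic perturb-and-observe MPPT on a synthetic PV power-voltage curve:
# perturb the operating voltage, keep the direction if power rose, reverse it
# otherwise. The PV curve parameters and step size are for demonstration only.
import math

def pv_power(v, v_oc=40.0, i_sc=8.0):
    """Very rough P-V curve used only for demonstration."""
    if v <= 0 or v >= v_oc:
        return 0.0
    i = i_sc * (1.0 - math.exp((v - v_oc) / 3.0))
    return v * i

def perturb_and_observe(v0=20.0, step=0.5, iterations=60):
    v, p_prev, direction = v0, pv_power(v0), +1
    for _ in range(iterations):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:              # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v, p_prev

if __name__ == "__main__":
    v_mpp, p_mpp = perturb_and_observe()
    print(f"operating point ~{v_mpp:.1f} V, ~{p_mpp:.1f} W")
```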

  8. A Secure and Efficient Handover Authentication Protocol for Wireless Networks

    PubMed Central

    Wang, Weijia; Hu, Lei

    2014-01-01

    The handover authentication protocol is a promising access control technology in the fields of WLANs and mobile wireless sensor networks. In this paper, we first review an efficient handover authentication protocol, named PairHand, together with its known security attacks and improvements. Then, we present an improved key recovery attack using the linearly combining method and reanalyze its feasibility on the improved PairHand protocol. Finally, we present a new handover authentication protocol, which not only achieves the same desirable efficiency features as PairHand, but also enjoys provable security in the random oracle model. PMID:24971471

  9. Simplified Floor-Area-Based Energy-Moisture-Economic Model for Residential Buildings

    ERIC Educational Resources Information Center

    Martinez, Luis A.

    2009-01-01

    In the United States, 21% of all energy is used in residential buildings (40% of which is for heating and cooling homes). Promising improvements in residential building energy efficiency are underway such as the Building America Program and the Passive House Concept. The ability of improving energy efficiency in buildings is enhanced by building…

  10. Wind tunnel investigation of a high lift system with pneumatic flow control

    NASA Astrophysics Data System (ADS)

    Victor, Pricop Mihai; Mircea, Boscoianu; Daniel-Eugeniu, Crunteanu

    2016-06-01

    Next-generation passenger aircraft require more efficient high-lift systems under size and mass constraints in order to achieve better fuel efficiency. This can be obtained in various ways: improving or maintaining aerodynamic performance while simplifying the mechanical design of the high-lift system by moving to a single-slotted flap, maintaining complexity and improving the aerodynamics even more, etc. Laminar wings have less efficient leading-edge high-lift systems, if any, requiring more performance from the trailing-edge flap. Pulsed-blowing active flow control (AFC) in the gap of a single-element flap is investigated for a relatively large model. The wind tunnel model, the test campaign, the results, and the conclusions are presented.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simpson, L.

    ITN Energy Systems, Inc., and Global Solar Energy, Inc., with the assistance of NREL's PV Manufacturing R&D program, have continued the advancement of CIGS production technology through the development of trajectory-oriented predictive/control models, fault-tolerance control, control-platform development, in-situ sensors, and process improvements. Modeling activities to date include the development of physics-based and empirical models for CIGS and sputter-deposition processing, implementation of model-based control, and application of predictive models to the construction of new evaporation sources and for control. Model-based control is enabled through implementation of reduced or empirical models into a control platform. Reliability improvement activities include implementation of preventive maintenance schedules; detection of failed sensors/equipment and reconfiguration to continue processing; and systematic development of fault prevention and reconfiguration strategies for the full range of CIGS PV production deposition processes. In-situ sensor development activities have resulted in improved control and indicated the potential for enhanced process status monitoring and control of the deposition processes. Substantial process improvements have been made, including significant improvement in CIGS uniformity, thickness control, efficiency, yield, and throughput. In large measure, these gains have been driven by process optimization, which, in turn, has been enabled by control and reliability improvements due to this PV Manufacturing R&D program. This has resulted in substantial improvements of flexible CIGS PV module performance and efficiency.

  12. Determinants of eco-efficiency in the Chinese industrial sector.

    PubMed

    Fujii, Hidemichi; Managi, Shunsuke

    2013-12-01

    This study measures productive inefficiency within the context of multi-environmental pollution (eco-efficiency) in the Chinese industrial sector. The weighted Russell directional distance model is applied to measure eco-efficiency using production technology. The objective is to clarify how external factors affect eco-efficiency. The major findings are that both foreign direct investment and investment for pollution abatement improve eco-efficiency as measured by air pollutant substances. A levy system for wastewater discharge improves eco-efficiency as measured by wastewater pollutant substances. However, an air pollutant levy does not significantly affect eco-efficiency as measured by air pollutants. Copyright © 2013 The Research Centre for Eco-Environmental Sciences, Chinese Academy of Sciences. Published by Elsevier B.V. All rights reserved.

  13. Developing the Mathematics Learning Management Model for Improving Creative Thinking in Thailand

    ERIC Educational Resources Information Center

    Sriwongchai, Arunee; Jantharajit, Nirat; Chookhampaeng, Sumalee

    2015-01-01

    The study purposes were: 1) To study current states and problems of relevant secondary students in developing mathematics learning management model for improving creative thinking, 2) To evaluate the effectiveness of model about: a) efficiency of learning process, b) comparisons of pretest and posttest on creative thinking and achievement of…

  14. Organizational climate, occupational stress, and employee mental health: mediating effects of organizational efficiency.

    PubMed

    Arnetz, Bengt B; Lucas, Todd; Arnetz, Judith E

    2011-01-01

    To determine whether the relationship between organizational climate and employee mental health is consistent (ie, invariant) or differs across four large hospitals, and whether organizational efficiency mediates this relationship. Participants (total N = 5316) completed validated measures of organizational climate variables (social climate, participatory management, goal clarity, and performance feedback), organizational efficiency, occupational stress, and mental health. Path analysis best supported a model in which organizational efficiency partially mediated relationships between organizational climate, occupational stress, and mental health. Focusing on improving both the psychosocial work environment and organizational efficiency might contribute to decreased employee stress, improved mental well-being, and organizational performance.

  15. Software Surface Modeling and Grid Generation Steering Committee

    NASA Technical Reports Server (NTRS)

    Smith, Robert E. (Editor)

    1992-01-01

    It is a NASA objective to promote improvements in the capability and efficiency of computational fluid dynamics. Grid generation, the creation of a discrete representation of the solution domain, is an essential part of computational fluid dynamics. However, grid generation about complex boundaries requires sophisticated surface-model descriptions of the boundaries. The surface modeling and the associated computation of surface grids consume an extremely large percentage of the total time required for volume grid generation. Efficient and user friendly software systems for surface modeling and grid generation are critical for computational fluid dynamics to reach its potential. The papers presented here represent the state-of-the-art in software systems for surface modeling and grid generation. Several papers describe improved techniques for grid generation.

  16. Final Report: Utilizing Alternative Fuel Ignition Properties to Improve SI and CI Engine Efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wooldridge, Margaret; Boehman, Andre; Lavoie, George

    Experimental and modeling studies were completed to explore leveraging physical and chemical fuel properties for improved thermal efficiency of internal combustion engines. Fundamental studies of the ignition chemistry of ethanol and iso-octane blends and constant volume spray chamber studies of gasoline and diesel sprays supported the core research effort, which used several reciprocating engine platforms. Single-cylinder spark ignition (SI) engine studies were carried out to characterize the impact of ethanol/gasoline, syngas (H2 and CO)/gasoline and other oxygenate/gasoline blends on engine performance. The results of the single-cylinder engine experiments and other data from the literature were used to train a GT Power model and to develop a knock criterion based on reaction chemistry. The models were used to interpret the experimental results and project future performance. Studies were also carried out using a state-of-the-art direct-injection (DI) turbocharged multi-cylinder engine with piezo-actuated fuel injectors to demonstrate the promising spray and spark timing strategies from single-cylinder engine studies on the multi-cylinder engine. Key outcomes and conclusions of the studies were: 1. Efficiency benefits of ethanol and gasoline fuel blends were consistent and substantial (e.g. 5-8% absolute improvement in gross indicated thermal efficiency (GITE)). 2. The best ethanol/gasoline blend (based on maximum thermal efficiency) was determined by the engine hardware and limits based on component protection (e.g. peak in-cylinder pressure or maximum turbocharger inlet temperature), and not by knock limits. Blends with <50% ethanol delivered significant thermal efficiency gains with conventional SI hardware while maintaining good safety integrity of the engine hardware. 3. Other compositions of fuel blends including syngas (H2 and CO) and other dilution strategies provided significant efficiency gains as well (e.g. 5% absolute improvement in ITE). 4. When the combination of engine and fuel system is not knock limited, multiple fuel injection events maintain thermal efficiency while improving engine-out emissions (e.g. CO, UHC, and particulate number).

  17. Advanced Hydrogen Liquefaction Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwartz, Joseph; Kromer, Brian; Neu, Ben

    2011-09-28

    The project identified and quantified ways to reduce the cost of hydrogen liquefaction, and reduce the cost of hydrogen distribution. The goal was to reduce the power consumption by 20% and then to reduce the capital cost. Optimizing the process, improving process equipment, and improving ortho-para conversion significantly reduced the power consumption of liquefaction, but by less than 20%. Because the efficiency improvement was less than the target, the program was stopped before the capital cost was addressed. These efficiency improvements could provide a benefit to the public to improve the design of future hydrogen liquefiers. The project increased the understanding of hydrogen liquefaction by modeling different processes and thoroughly examining ortho-para separation and conversion. The process modeling provided a benefit to the public because the project incorporated para hydrogen into the process modeling software, so liquefaction processes can be modeled more accurately than using only normal hydrogen. Adding catalyst to the first heat exchanger, a simple method to reduce liquefaction power, was identified, analyzed, and quantified. The demonstrated performance of ortho-para separation is sufficient for at least one identified process concept to show reduced power cost when compared to hydrogen liquefaction processes using conventional ortho-para conversion. The impact of improved ortho-para conversion can be significant because ortho-para conversion uses about 20-25% of the total liquefaction power, but performance improvement is necessary to realize a substantial benefit. Most of the energy used in liquefaction is for gas compression. Improvements in hydrogen compression will have a significant impact on overall liquefier efficiency. Improvements to turbines, heat exchangers, and other process equipment will have less impact.

  18. Cost efficiency of the non-associative flow rule simulation of an industrial component

    NASA Astrophysics Data System (ADS)

    Galdos, Lander; de Argandoña, Eneko Saenz; Mendiguren, Joseba

    2017-10-01

    In the last decade, the metal forming industry has become increasingly competitive. In this context, FEM modeling has become a primary source of information for component and process design. Numerous researchers have focused on improving the accuracy of the material models implemented in FEM codes in order to improve the efficiency of the simulations. Aimed at increasing the efficiency of anisotropic behavior modelling, the use of non-associative flow rule (NAFR) models has in recent years been presented as an alternative to the classic associative flow rule (AFR) models. In this work, the cost efficiency of the chosen flow rule has been numerically analyzed by simulating an industrial drawing operation with two models of the same degree of flexibility: one AFR model and one NAFR model. From the present study, it is concluded that the flow rule has a negligible influence on the final drawing prediction, which is mainly driven by the model parameter identification procedure. Even though the NAFR formulation is more complex than the AFR one, the present study shows that the total simulation time with explicit FE solvers is reduced without loss of accuracy. Furthermore, NAFR formulations have an advantage over AFR formulations in parameter identification because the formulation decouples the yield stress and the Lankford coefficients.

  19. Hybrid surrogate-model-based multi-fidelity efficient global optimization applied to helicopter blade design

    NASA Astrophysics Data System (ADS)

    Ariyarit, Atthaphon; Sugiura, Masahiko; Tanabe, Yasutada; Kanazaki, Masahiro

    2018-06-01

    A multi-fidelity optimization technique based on an efficient global optimization process with a hybrid surrogate model is investigated for solving real-world design problems. The model constructs the local deviation using the kriging method and the global model using a radial basis function. The expected improvement is computed to select additional samples that can improve the model. The approach was first investigated by solving mathematical test problems; the results were compared with optimization results from an ordinary kriging method and a co-kriging method, and the proposed method produced the best solution. The proposed method was also applied to the aerodynamic design optimization of helicopter blades to obtain the maximum blade efficiency. The optimal shape obtained by the proposed method achieved performance almost equivalent to that obtained by single-fidelity optimization based on high-fidelity evaluations. Comparing all three methods, the proposed method required the lowest total number of high-fidelity evaluation runs to reach a converged solution.
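
    The expected-improvement step mentioned above can be sketched as follows, assuming scikit-learn and SciPy are available. An ordinary Gaussian-process (kriging) surrogate stands in for the paper's hybrid kriging/RBF model, and the test function is hypothetical.

```python
# Minimal sketch of the expected-improvement (EI) step in efficient global
# optimization with a kriging (Gaussian process) surrogate. The paper's
# hybrid surrogate and multi-fidelity handling are not reproduced here.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x) + 0.3 * x**2          # expensive-function stand-in
X = rng.uniform(-2, 2, size=(8, 1))               # initial samples
y = f(X).ravel()

gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)

def expected_improvement(x_cand, y_best, xi=0.01):
    mu, sigma = gp.predict(x_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)
    z = (y_best - mu - xi) / sigma                 # minimization convention
    return (y_best - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)

x_grid = np.linspace(-2, 2, 401).reshape(-1, 1)
ei = expected_improvement(x_grid, y.min())
x_next = x_grid[np.argmax(ei)]                     # next infill point to evaluate
print("next infill point:", x_next)
```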

  20. The admissible portfolio selection problem with transaction costs and an improved PSO algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Wei; Zhang, Wei-Guo

    2010-05-01

    In this paper, we discuss the portfolio selection problem with transaction costs under the assumption that there exist admissible errors on expected returns and risks of assets. We propose a new admissible efficient portfolio selection model and design an improved particle swarm optimization (PSO) algorithm because traditional optimization algorithms fail to work efficiently for our proposed problem. Finally, we offer a numerical example to illustrate the proposed effective approaches and compare the admissible portfolio efficient frontiers under different constraints.
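
    As a rough illustration of how a PSO can be applied to a constrained portfolio problem, the sketch below minimizes portfolio variance with a long-only budget constraint handled by normalization. It does not reproduce the paper's admissible-error model, transaction costs, or algorithmic improvements; the covariance matrix and PSO coefficients are hypothetical.

```python
# Bare-bones particle swarm optimization (PSO) sketch for a long-only
# minimum-variance portfolio; all numbers are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
cov = np.array([[0.10, 0.02, 0.01],
                [0.02, 0.08, 0.03],
                [0.01, 0.03, 0.12]])      # hypothetical asset covariance matrix

def project(w):
    """Long-only weights that sum to one (simple feasibility repair)."""
    w = np.clip(w, 0.0, None)
    s = w.sum()
    return w / s if s > 0 else np.full_like(w, 1.0 / len(w))

def risk(w):
    return float(w @ cov @ w)             # portfolio variance

n_particles, n_assets, iters = 30, 3, 200
x = np.apply_along_axis(project, 1, rng.random((n_particles, n_assets)))
v = np.zeros_like(x)
pbest = x.copy()
pbest_val = np.array([risk(w) for w in x])
gbest = pbest[pbest_val.argmin()]

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, n_assets))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.apply_along_axis(project, 1, x + v)
    vals = np.array([risk(w) for w in x])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = x[better], vals[better]
    gbest = pbest[pbest_val.argmin()]

print("min-variance weights:", gbest.round(3), "variance:", round(risk(gbest), 4))
```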

  1. An Initial Multi-Domain Modeling of an Actively Cooled Structure

    NASA Technical Reports Server (NTRS)

    Steinthorsson, Erlendur

    1997-01-01

    A methodology for the simulation of turbine cooling flows is being developed. The methodology seeks to combine numerical techniques that optimize both accuracy and computational efficiency. Key components of the methodology include the use of multiblock grid systems for modeling complex geometries, and multigrid convergence acceleration for enhancing computational efficiency in highly resolved fluid flow simulations. The use of the methodology has been demonstrated in several turbomachinery flow and heat transfer studies. Ongoing and future work involves implementing additional turbulence models, improving computational efficiency, and adding adaptive mesh refinement (AMR).

  2. Drive Cycle Powertrain Efficiencies and Trends Derived from EPA Vehicle Dynamometer Results

    DOE PAGES

    Thomas, John

    2014-10-13

    Vehicle manufacturers among others are putting great emphasis on improving the fuel economy (FE) of light-duty vehicles in the U.S. market, with significant FE gains being realized in recent years. U.S. Environmental Protection Agency (EPA) data indicate that the aggregate FE of vehicles produced for the U.S. market has improved by over 20% from model year (MY) 2005 to 2013. This steep climb in FE includes changes in vehicle choice, improvements in engine and transmission technology, and reductions in aerodynamic drag, rolling resistance, and parasitic losses. The powertrain-related improvements focus on optimizing the in-use efficiency of the transmission and engine as a system, and may make use of what is termed downsizing and/or downspeeding. This study explores quantifying recent improvements in powertrain efficiency, viewed separately from other vehicle alterations and attributes (noting that most vehicle changes are not completely independent). A methodology is outlined to estimate powertrain efficiency for the U.S. city and highway cycle tests using data from the EPA vehicle database. Comparisons of common conventional gasoline powertrains for similar MY 2005 and 2013 vehicles are presented, along with results for late-model hybrid electric vehicles, the Nissan Leaf, the Chevy Volt, and other selected vehicles.

  3. Efficiency evaluation with feedback for regional water use and wastewater treatment

    NASA Astrophysics Data System (ADS)

    Hu, Zhineng; Yan, Shiyu; Yao, Liming; Moudi, Mahdi

    2018-07-01

    Clean water is crucial for sustainable economic and social development; however, around the world, low water use efficiency and increasing water pollution have become serious problems. To comprehensively evaluate water use and wastewater treatment, this paper integrated bi-level programming (BLP) and data envelopment analysis (DEA) with a feedback variable to handle the undesirable output, and ranked DMUs using a super-efficiency DEA. The proposed model was applied to a case study of 10 cities in the Minjiang River Basin to demonstrate its applicability and effectiveness, from which it was found that a water system can only be cost-efficient when both the water use and the wastewater treatment subsystems are cost-efficient. The comparison analysis demonstrated that the proposed model was more discriminating and stable than traditional DEA models and was better able to improve total water system cost efficiency than a BLP-DEA model.

  4. Can Community Colleges Afford to Improve Completion? Measuring the Cost and Efficiency Consequences of Reform

    ERIC Educational Resources Information Center

    Belfield, Clive; Crosta, Peter; Jenkins, Davis

    2014-01-01

    Community colleges are under pressure to improve completion rates and efficiency despite limited economic evidence on how to do so and the consequences of different reform strategies. Here, we set out an economic model of student course pathways linked to college expenditures and revenues. Using detailed data from a single college, we calculate…

  5. A hybrid fuzzy logic and extreme learning machine for improving efficiency of circulating water systems in power generation plant

    NASA Astrophysics Data System (ADS)

    Aziz, Nur Liyana Afiqah Abdul; Siah Yap, Keem; Afif Bunyamin, Muhammad

    2013-06-01

    This paper presents a new approach to fault detection for improving the efficiency of the circulating water system (CWS) in a power generation plant using a hybrid Fuzzy Logic System (FLS) and Extreme Learning Machine (ELM) neural network. The FLS is a mathematical tool for handling the uncertainty and imprecision encountered in the real world; it is based on natural language and has the ability of "computing with words". The ELM is an extremely fast learning algorithm for neural networks that can complete the training cycle in a very short time. By combining the FLS and the ELM, a new hybrid model, i.e., FLS-ELM, is developed. The applicability of this proposed hybrid model is validated on fault detection in the CWS, which may help to improve the overall efficiency of the power generation plant, hence consuming fewer natural resources and producing less pollution.

  6. [Efficiency of industrial energy conservation and carbon emission reduction in Liaoning Pro-vince based on data envelopment analysis (DEA)method.

    PubMed

    Wang, Li; Xi, Feng Ming; Li, Jin Xin; Liu, Li Li

    2016-09-01

    Taking 39 industries in Liaoning Province from 2003 to 2012 as independent decision-making units and considering the benefits of energy, economy, and environment, we combined the directional distance function and the radial DEA method to estimate and decompose the energy conservation and carbon emission reduction efficiency of the industries. The carbon emission of each industry was calculated and included as an undesirable output in the model of energy saving and carbon emission reduction efficiency. The results showed that the energy saving and carbon emission reduction efficiency of industries in Liaoning Province had obvious heterogeneity. The overall energy conservation and carbon emission reduction efficiency in each industry of Liaoning Province was not high, but it presented a rising trend. Improvements in pure technical efficiency and scale efficiency were the main measures to enhance energy saving and carbon emission reduction efficiency, especially scale efficiency improvement. In order to improve the energy saving and carbon emission reduction efficiency of each industry in Liaoning Province, we propose that Liaoning Province should adjust its industrial structure, encourage the development of low-carbon, high-benefit industries, improve the scientific and technological level, adjust industry scale reasonably, and meanwhile optimize the energy structure and develop renewable and clean energy.

  7. Cost drivers and resource allocation in military health care systems.

    PubMed

    Fulton, Larry; Lasdon, Leon S; McDaniel, Reuben R

    2007-03-01

    This study illustrates the feasibility of incorporating technical efficiency considerations in the funding of military hospitals and identifies the primary drivers for hospital costs. Secondary data collected for 24 U.S.-based Army hospitals and medical centers for the years 2001 to 2003 are the basis for this analysis. Technical efficiency was measured by using data envelopment analysis; subsequently, efficiency estimates were included in logarithmic-linear cost models that specified cost as a function of volume, complexity, efficiency, time, and facility type. These logarithmic-linear models were compared against stochastic frontier analysis models. A parsimonious, three-variable, logarithmic-linear model composed of volume, complexity, and efficiency variables exhibited a strong linear relationship with observed costs (R(2) = 0.98). This model also proved reliable in forecasting (R(2) = 0.96). Based on our analysis, as much as $120 million might be reallocated to improve the United States-based Army hospital performance evaluated in this study.
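
    The three-variable logarithmic-linear cost model described above can be illustrated with a short regression sketch. The following is a minimal example on simulated data, assuming the statsmodels package; the variable names (volume, casemix, eff) and all coefficients are hypothetical stand-ins, not the study's estimates.

```python
# Minimal log-linear hospital cost regression sketch (OLS on simulated data)
# of the form ln(cost) ~ ln(volume) + ln(casemix) + efficiency. The study's
# data envelopment analysis step and stochastic frontier comparison are not
# reproduced; every number below is simulated for illustration only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 72                                    # e.g. 24 hospitals x 3 years
volume  = rng.uniform(5e3, 5e4, n)
casemix = rng.uniform(0.8, 1.6, n)
eff     = rng.uniform(0.5, 1.0, n)        # technical-efficiency score

ln_cost = (2.0 + 0.9 * np.log(volume) + 0.6 * np.log(casemix)
           - 0.8 * eff + rng.normal(0, 0.1, n))

X = sm.add_constant(np.column_stack([np.log(volume), np.log(casemix), eff]))
fit = sm.OLS(ln_cost, X).fit()
print(fit.params.round(3))                # intercept, ln(volume), ln(casemix), eff
print("R^2 =", round(fit.rsquared, 3))
```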

  8. An adaptive sparse-grid high-order stochastic collocation method for Bayesian inference in groundwater reactive transport modeling

    NASA Astrophysics Data System (ADS)

    Zhang, Guannan; Lu, Dan; Ye, Ming; Gunzburger, Max; Webster, Clayton

    2013-10-01

    Bayesian analysis has become vital to uncertainty quantification in groundwater modeling, but its application has been hindered by the computational cost associated with the numerous model executions required to explore the posterior probability density function (PPDF) of model parameters. This is particularly the case when the PPDF is estimated using Markov Chain Monte Carlo (MCMC) sampling. In this study, a new approach is developed to improve the computational efficiency of Bayesian inference by constructing a surrogate of the PPDF, using an adaptive sparse-grid high-order stochastic collocation (aSG-hSC) method. Unlike previous works using a first-order hierarchical basis, this paper utilizes a compactly supported higher-order hierarchical basis to construct the surrogate system, resulting in a significant reduction in the number of required model executions. In addition, using the hierarchical surplus as an error indicator allows locally adaptive refinement of sparse grids in the parameter space, which further improves computational efficiency. To efficiently build the surrogate system for the PPDF with multiple significant modes, optimization techniques are used to identify the modes, for which high-probability regions are defined and components of the aSG-hSC approximation are constructed. After the surrogate is determined, the PPDF can be evaluated by sampling the surrogate system directly without model execution, resulting in improved efficiency of the surrogate-based MCMC compared with conventional MCMC. The developed method is evaluated using two synthetic groundwater reactive transport models. The first example involves coupled linear reactions and demonstrates the accuracy of our high-order hierarchical basis approach in approximating high-dimensional posterior distributions. The second example is highly nonlinear because of the reactions of uranium surface complexation, and demonstrates how the iterative aSG-hSC method is able to capture multimodal and non-Gaussian features of the PPDF caused by model nonlinearity. Both experiments show that aSG-hSC is an effective and efficient tool for Bayesian inference.
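
    The core idea of sampling a cheap surrogate of the posterior instead of the forward model can be sketched as below. A simple polynomial fit stands in for the adaptive sparse-grid high-order collocation surrogate, and a random-walk Metropolis sampler stands in for the full MCMC machinery; the forward model and all numbers are hypothetical.

```python
# Minimal sketch of surrogate-accelerated Bayesian inference: a cheap
# surrogate (here an ordinary polynomial fit, standing in for the paper's
# aSG-hSC surrogate) replaces the expensive forward model inside a
# random-walk Metropolis sampler.
import numpy as np

rng = np.random.default_rng(2)

def forward_model(k):                      # "expensive" model stand-in
    return np.array([np.exp(-k * t) for t in (0.5, 1.0, 2.0)])

k_true, sigma = 1.3, 0.02
data = forward_model(k_true) + rng.normal(0, sigma, 3)

# Build the surrogate from a small number of forward-model runs.
k_train = np.linspace(0.2, 3.0, 15)
y_train = np.array([forward_model(k) for k in k_train])      # shape (15, 3)
coeffs = [np.polyfit(k_train, y_train[:, j], deg=6) for j in range(3)]
surrogate = lambda k: np.array([np.polyval(c, k) for c in coeffs])

def log_post(k):                           # flat prior on [0.2, 3.0]
    if not 0.2 <= k <= 3.0:
        return -np.inf
    r = data - surrogate(k)                # no forward-model call here
    return -0.5 * np.sum(r**2) / sigma**2

k, lp, chain = 1.0, log_post(1.0), []
for _ in range(5000):
    k_prop = k + rng.normal(0, 0.1)
    lp_prop = log_post(k_prop)
    if np.log(rng.random()) < lp_prop - lp:
        k, lp = k_prop, lp_prop
    chain.append(k)

print("posterior mean of k (surrogate MCMC):", round(np.mean(chain[1000:]), 3))
```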

  9. Compound Synthesis or Growth and Development of Roots/Stomata Regulate Plant Drought Tolerance or Water Use Efficiency/Water Uptake Efficiency.

    PubMed

    Meng, Lai-Sheng

    2018-04-11

    Water is crucial to plant growth and development because it serves as a medium for all cellular functions. Thus, the improvement of plant drought tolerance or water use efficiency/water uptake efficiency is important in modern agriculture. In this review, we mainly focus on new genetic factors for ameliorating the drought tolerance or water use efficiency/water uptake efficiency of plants and explore how these genetic factors regulate the improvement of plant drought tolerance or water use efficiency/water uptake efficiency through altered stomatal density, improved root systems (primary root length, root hair growth, and lateral root number), and enhanced production of osmotic protectants, changes driven by transcription factors, proteinases, phosphatases, and protein kinases. These results will help guide the synthesis of a model for predicting how the signals of genetic and environmental stress are integrated at a few genetic determinants to control the establishment of either water use efficiency or water uptake efficiency. Collectively, these insights into the molecular mechanism underpinning the control of plant drought tolerance or water use efficiency/water uptake efficiency may aid future breeding or design strategies to increase crop yield.

  10. Research of an emergency medical system for mass casualty incidents in Shanghai, China: a system dynamics model.

    PubMed

    Yu, Wenya; Lv, Yipeng; Hu, Chaoqun; Liu, Xu; Chen, Haiping; Xue, Chen; Zhang, Lulu

    2018-01-01

    Emergency medical systems for mass casualty incidents (EMS-MCIs) are a global issue; however, such studies are extremely scarce in China, which cannot meet the requirement for rapid decision support. This study aims to model EMS-MCIs in Shanghai, to improve mass casualty incident (MCI) rescue efficiency in China, and to provide a possible method for making rapid rescue decisions during MCIs. This study established a system dynamics (SD) model of EMS-MCIs using the Vensim DSS program. Intervention scenarios were designed by adjusting the scale of MCIs, the allocation of ambulances, the allocation of emergency medical staff, and the efficiency of organization and command. Mortality increased with the increasing scale of MCIs; the medical rescue capability of hospitals was relatively good, but the efficiency of organization and command was poor and the prehospital time was too long. Mortality declined significantly when ambulances were increased and the efficiency of organization and command was improved; triage and on-site first-aid times were shortened when the availability of emergency medical staff was increased. The effect was most evident when 2,000 people were involved in MCIs; however, the influence was very small at the scale of 5,000 people. The keys to decreasing the mortality of MCIs were shortening the prehospital time and improving the efficiency of organization and command. For small-scale MCIs, improving the utilization rate of health resources was important in decreasing mortality. For large-scale MCIs, increasing the number of ambulances and emergency medical professionals was the core measure to decrease prehospital time and mortality. For super-large-scale MCIs, increasing health resources was the premise.
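
    A minimal stock-and-flow sketch in the same spirit as the SD model is shown below, using SciPy's ODE integrator. It is not the Vensim model described in the study; the compartments, rates, and ambulance parameters are all hypothetical and serve only to illustrate how ambulance capacity feeds into prehospital time and mortality.

```python
# Tiny stock-and-flow sketch of an EMS-MCI system: casualties wait on site,
# are transported at a rate limited by ambulance capacity, and waiting
# casualties die at a fixed hazard. Every rate below is hypothetical.
import numpy as np
from scipy.integrate import odeint

def emsmci(y, t, n_ambulances, trips_per_hour, death_hazard):
    on_site, in_hospital, deaths = y
    on_site = max(on_site, 0.0)
    transport = min(on_site, n_ambulances * trips_per_hour)   # outflow to hospital
    dying = death_hazard * on_site                            # on-site mortality
    return [-transport - dying, transport, dying]

t = np.linspace(0, 12, 121)                       # 12 hours
y0 = [2000.0, 0.0, 0.0]                           # 2,000-casualty scenario
for n_amb in (20, 40):
    sol = odeint(emsmci, y0, t, args=(n_amb, 2.0, 0.02))
    print(f"{n_amb} ambulances -> deaths after 12 h ~ {sol[-1, 2]:.0f}")
```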

  11. Analysis on trust influencing factors and trust model from multiple perspectives of online Auction

    NASA Astrophysics Data System (ADS)

    Yu, Wang

    2017-10-01

    Current reputation models largely neglect online auction trading, so they cannot fully reflect the reputation status of users and may cause operability problems. To evaluate user trust in online auctions correctly, a trust computing model based on multiple influencing factors is established. It aims at overcoming the efficiency limitations of current trust computing methods and the limitations of traditional theoretical trust models. The improved model comprehensively considers the trust evaluation factors of three types of participants according to the different participation modes of online auctioneers, to improve the accuracy, effectiveness, and robustness of the trust degree. The experiments test the efficiency and performance of our model under different proportions of malicious users, in environments like eBay and the Sporas model. Analysis of the experimental results shows that the proposed model makes up for the deficiencies of existing models and has better feasibility.

  12. An efficient surrogate-based simulation-optimization method for calibrating a regional MODFLOW model

    NASA Astrophysics Data System (ADS)

    Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.

    2017-01-01

    The simulation-optimization method entails a large number of model simulations, which is computationally intensive or even prohibitive if the model simulation is extremely time-consuming. Statistical models have been examined as surrogates of the high-fidelity physical model during the simulation-optimization process to tackle this problem. Among them, Multivariate Adaptive Regression Splines (MARS), a non-parametric adaptive regression method, is superior in overcoming problems of high dimensions and discontinuities in the data. Furthermore, the stability and accuracy of the MARS model can be improved by bootstrap aggregating methods, namely bagging. In this paper, the Bagging MARS (BMARS) method is integrated into a surrogate-based simulation-optimization framework to calibrate a three-dimensional MODFLOW model, which is developed to simulate the groundwater flow in an arid hardrock-alluvium region in northwestern Oman. The physical MODFLOW model is surrogated by the statistical model developed using the BMARS algorithm. The surrogate model, which is fitted and validated using a training dataset generated by the physical model, can approximate solutions rapidly. An efficient Sobol' method is employed to calculate global sensitivities of head outputs to input parameters, which are used to analyze their importance for the model outputs spatiotemporally. Only sensitive parameters are included in the calibration process to further improve the computational efficiency. The normalized root mean square error (NRMSE) between measured and simulated heads at observation wells is used as the objective function to be minimized during optimization. The reasonable history match between the simulated and observed heads demonstrates the feasibility of this highly efficient calibration framework.
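
    The surrogate-based calibration loop can be sketched as follows, assuming scikit-learn and SciPy. A bagged tree ensemble stands in for the BMARS surrogate, a toy two-parameter response stands in for the MODFLOW model, and the NRMSE objective is minimized with Nelder-Mead; the Sobol' screening step is omitted.

```python
# Sketch of surrogate-based calibration: bagged regressors (stand-ins for
# Bagging MARS) are trained on a few runs of a toy "physical model", then an
# NRMSE objective over the surrogates is minimized. The MODFLOW model and
# all numbers are illustrative only.
import numpy as np
from scipy.optimize import minimize
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)

def physical_model(params):                 # toy 2-parameter head response
    k, r = params
    x = np.linspace(0, 1, 20)
    return 10 - k * x + r * np.sin(3 * x)

theta_true = np.array([2.0, 0.5])
obs = physical_model(theta_true) + rng.normal(0, 0.05, 20)

# Training set: a modest number of "expensive" model runs.
thetas = rng.uniform([0.5, 0.0], [4.0, 1.0], size=(200, 2))
heads = np.array([physical_model(t) for t in thetas])        # (200, 20)
surrogates = [BaggingRegressor(DecisionTreeRegressor(), n_estimators=30,
                               random_state=0).fit(thetas, heads[:, j])
              for j in range(heads.shape[1])]                # one per obs. point

def nrmse(theta):
    sim = np.array([s.predict(theta.reshape(1, -1))[0] for s in surrogates])
    return np.sqrt(np.mean((sim - obs) ** 2)) / (obs.max() - obs.min())

res = minimize(nrmse, x0=np.array([1.0, 0.2]), method="Nelder-Mead")
print("calibrated parameters (toy):", res.x.round(2))
```

    A smoother surrogate such as MARS would generally behave better under a local optimizer than the piecewise-constant tree ensemble used here; the ensemble is only a readily available stand-in.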

  13. A strategy for improved computational efficiency of the method of anchored distributions

    NASA Astrophysics Data System (ADS)

    Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram

    2013-06-01

    This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability of a set of similar model parametrizations "bundle" replicating field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.

  14. Efficient preloading of the ventricles by a properly timed atrial contraction underlies stroke work improvement in the acute response to cardiac resynchronization therapy

    PubMed Central

    Hu, Yuxuan; Gurev, Viatcheslav; Constantino, Jason; Trayanova, Natalia

    2013-01-01

    Background The acute response to cardiac resynchronization therapy (CRT) has been shown to be due to three mechanisms: resynchronization of ventricular contraction, efficient preloading of the ventricles by a properly timed atrial contraction, and mitral regurgitation reduction. However, the contribution of each of the three mechanisms to the acute response of CRT, specifically stroke work improvement, has not been quantified. Objective The goal of this study was to use an MRI-based anatomically accurate 3D model of failing canine ventricular electromechanics to quantify the contribution of each of the three mechanisms to stroke work improvement and identify the predominant mechanisms. Methods An MRI-based electromechanical model of the failing canine ventricles assembled previously by our group was further developed and modified. Three different protocols were used to dissect the contribution of each of the three mechanisms to stroke work improvement. Results Resynchronization of ventricular contraction did not lead to significant stroke work improvement. Efficient preloading of the ventricles by a properly timed atrial contraction was the predominant mechanism underlying stroke work improvement. Stroke work improvement peaked at an intermediate AV delay, as it allowed ventricular filling by atrial contraction to occur at a low diastolic LV pressure but also provided adequate time for ventricular filling before ventricular contraction. Diminution of mitral regurgitation by CRT led to stroke work worsening instead of improvement. Conclusion Efficient preloading of the ventricles by a properly timed atrial contraction is responsible for significant stroke work improvement in the acute CRT response. PMID:23928177

  15. Characteristics of locomotion efficiency of an expanding-extending robotic endoscope in the intestinal environment.

    PubMed

    He, Shu; Yan, Guozheng; Wang, Zhiwu; Gao, Jinyang; Yang, Kai

    2015-07-01

    Robotic endoscopes with locomotion ability are among the most promising alternatives to traditional endoscopes; locomotion ability is an important factor when evaluating the performance of the robot. This article describes research on the locomotion efficiency of an expanding-extending robotic endoscope in the real intestine and explores an approach to improving its locomotion ability in this environment. In the article, the robot's locomotion efficiency was first calculated according to its gait in the gut, and the reasons for step losses were analyzed. Next, dynamical models of the robot and the intestine were built to calculate the step losses caused by failed anchoring and intestinal compression/extension. Based on the models and the calculation results, methods for reducing step losses were proposed. Finally, a series of ex vivo experiments was carried out, and the actual locomotion efficiency of the robot was analyzed on the basis of the theoretical models. In the experiments, on a level platform, the locomotion efficiency of the robot varied between 34.2% and 63.7%, and the speed of the robot varied between 0.62 and 1.29 mm/s. The robot's efficiency when climbing a sloping intestine was also tested and analyzed. The proposed theoretical models and experimental results provide a good reference for improving the design of robotic endoscopes. © IMechE 2015.

  16. Measuring the efficiency of zakat collection process using data envelopment analysis

    NASA Astrophysics Data System (ADS)

    Hamzah, Ahmad Aizuddin; Krishnan, Anath Rau

    2016-10-01

    It is necessary for each zakat institution in the nation to measure and understand its efficiency in collecting zakat in a timely manner, for the sake of continuous improvement. Pusat Zakat Sabah, Malaysia, which began operating in early 2007, is not excused from this obligation either. However, measuring collection efficiency is not an easy task, as it usually requires the consideration of multiple inputs and/or outputs. This paper sequentially employed three data envelopment analysis models, namely the Charnes-Cooper-Rhodes (CCR) primal model, the CCR dual model, and the slack-based model, to quantitatively evaluate the efficiency of zakat collection in Sabah from 2007 to 2015, treating each year as a decision-making unit. The three models were developed based on two inputs (i.e., number of zakat branches and number of staff) and one output (i.e., total collection). The causes of inefficiency and suggestions on how the efficiency in each year could have been improved are discussed.
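
    An input-oriented CCR efficiency score of the kind computed above can be reproduced with a small linear program, for example with SciPy as sketched below. The two inputs and one output mirror the setup in the abstract, but all figures are hypothetical, not Pusat Zakat Sabah data.

```python
# Minimal input-oriented CCR (envelopment form) DEA sketch using
# scipy.optimize.linprog; each column is one DMU (here, one year).
import numpy as np
from scipy.optimize import linprog

X = np.array([[10, 12, 13, 15, 15],        # input 1: branches (hypothetical)
              [80, 85, 90, 98, 99]])       # input 2: staff (hypothetical)
Y = np.array([[5.0, 6.2, 6.0, 7.5, 8.1]])  # output: collection (hypothetical)

m, n = X.shape            # number of inputs, number of DMUs
s = Y.shape[0]            # number of outputs

def ccr_efficiency(o):
    # decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.zeros(1 + n)
    c[0] = 1.0                                   # minimize theta
    A_in  = np.hstack([-X[:, [o]], X])           # sum_j lam_j x_ij <= theta x_io
    A_out = np.hstack([np.zeros((s, 1)), -Y])    # sum_j lam_j y_rj >= y_ro
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[:, o]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

for o in range(n):
    print(f"DMU {o}: CCR efficiency = {ccr_efficiency(o):.3f}")
```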

  17. Airport security inspection process model and optimization based on GSPN

    NASA Astrophysics Data System (ADS)

    Mao, Shuainan

    2018-04-01

    To improve the efficiency of the airport security inspection process, a Generalized Stochastic Petri Net (GSPN) is used to model the process. The model is then used to analyze the bottleneck of the airport security inspection process, and a solution to the bottleneck is given: adding a place for people to remove their clothes and an additional X-ray detector significantly improves efficiency and reduces waiting time.
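
    As a much simpler stand-in for the GSPN analysis, the waiting-time effect of adding a screening lane can be illustrated with an M/M/c (Erlang C) queue, as sketched below; the arrival and service rates are hypothetical.

```python
# Erlang C sketch: mean queueing time in an M/M/c queue before and after
# adding a screening lane. This is a simplified queueing stand-in, not the
# paper's GSPN model; all rates are hypothetical.
import math

def erlang_c_wait(lam, mu, c):
    """Mean waiting time Wq for an M/M/c queue (requires lam < c*mu)."""
    a = lam / mu                          # offered load
    rho = a / c
    if rho >= 1:
        return float("inf")
    summ = sum(a**k / math.factorial(k) for k in range(c))
    p_wait = (a**c / (math.factorial(c) * (1 - rho))) / \
             (summ + a**c / (math.factorial(c) * (1 - rho)))
    return p_wait / (c * mu - lam)

lam = 3.0        # passenger arrivals per minute (hypothetical)
mu = 0.8         # passengers per minute per screening lane (hypothetical)
for c in (4, 5):
    print(f"{c} lanes: mean queueing time ~ {erlang_c_wait(lam, mu, c):.2f} min")
```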

  18. Designing novel cellulase systems through agent-based modeling and global sensitivity analysis.

    PubMed

    Apte, Advait A; Senger, Ryan S; Fong, Stephen S

    2014-01-01

    Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement.

  19. Designing novel cellulase systems through agent-based modeling and global sensitivity analysis

    PubMed Central

    Apte, Advait A; Senger, Ryan S; Fong, Stephen S

    2014-01-01

    Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement. PMID:24830736

  20. A one-model approach based on relaxed combinations of inputs for evaluating input congestion in DEA

    NASA Astrophysics Data System (ADS)

    Khodabakhshi, Mohammad

    2009-08-01

    This paper provides a one-model approach to input congestion based on the input relaxation model developed in data envelopment analysis (e.g. [G.R. Jahanshahloo, M. Khodabakhshi, Suitable combination of inputs for improving outputs in DEA with determining input congestion -- Considering textile industry of China, Applied Mathematics and Computation (1) (2004) 263-273; G.R. Jahanshahloo, M. Khodabakhshi, Determining assurance interval for non-Archimedean element in improving outputs model in DEA, Applied Mathematics and Computation 151 (2) (2004) 501-506; M. Khodabakhshi, A super-efficiency model based on improved outputs in data envelopment analysis, Applied Mathematics and Computation 184 (2) (2007) 695-703; M. Khodabakhshi, M. Asgharian, An input relaxation measure of efficiency in stochastic data analysis, Applied Mathematical Modelling 33 (2009) 2010-2023]). This approach reduces the three problems that must be solved under the two-model approach introduced in the first of the above-mentioned references to two problems, which is certainly important from a computational point of view. The model is applied to a set of data extracted from the ISI database to estimate the input congestion of 12 Canadian business schools.

  1. Multi-GPU hybrid programming accelerated three-dimensional phase-field model in binary alloy

    NASA Astrophysics Data System (ADS)

    Zhu, Changsheng; Liu, Jieqiong; Zhu, Mingfang; Feng, Li

    2018-03-01

    In dendritic growth simulation, computational efficiency and problem scale have an extremely important influence on the simulation efficiency of the three-dimensional phase-field model. Thus, seeking a high-performance calculation method to improve computational efficiency and expand problem scales is of great significance for research on material microstructure. A high-performance calculation method based on an MPI+CUDA hybrid programming model is introduced. Multiple GPUs are used to implement quantitative numerical simulations of the three-dimensional phase-field model in a binary alloy under coupled multi-physical processes. The acceleration effect of different numbers of GPU nodes at different calculation scales is explored. On the foundation of the introduced multi-GPU calculation model, two optimization schemes are proposed: non-blocking communication optimization and overlapping MPI communication with GPU computing. The results of the two optimization schemes and the basic multi-GPU model are compared. The results show that the multi-GPU calculation model clearly improves the computational efficiency of the three-dimensional phase-field simulation, reaching 13 times the speed of a single GPU, and the problem scale has been expanded to 8193. The feasibility of the two optimization schemes is shown, and the overlap of MPI communication and GPU computing has the better performance, reaching 1.7 times the speed of the basic multi-GPU model when 21 GPUs are used.
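
    The overlap-of-communication-and-computation scheme can be sketched with mpi4py and NumPy as below. It shows the pattern only: a non-blocking halo exchange is posted, interior cells are updated while messages are in flight, and boundary cells are updated after completion. In the paper's multi-GPU setting the buffers would be device arrays (e.g., CuPy with CUDA-aware MPI), which is not reproduced here.

```python
# Sketch of overlapping non-blocking MPI halo exchange with local stencil
# computation (1D slab decomposition, periodic ring of ranks).
# Run with e.g.:  mpiexec -n 4 python overlap_sketch.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size

n = 1024
field = np.full(n + 2, float(rank))        # local slab with 2 ghost cells
recv_l, recv_r = np.empty(1), np.empty(1)

# 1) post the non-blocking halo exchange
reqs = [comm.Isend(field[1:2],   dest=left,  tag=0),
        comm.Isend(field[-2:-1], dest=right, tag=1),
        comm.Irecv(recv_r, source=right, tag=0),
        comm.Irecv(recv_l, source=left,  tag=1)]

# 2) update interior cells that do not depend on the ghost layer
interior = field[2:-2]
interior += 0.1 * (field[3:-1] - 2 * interior + field[1:-3])

# 3) complete communication, then update the boundary-adjacent cells
MPI.Request.Waitall(reqs)
field[0], field[-1] = recv_l[0], recv_r[0]
field[1]  += 0.1 * (field[2]  - 2 * field[1]  + field[0])
field[-2] += 0.1 * (field[-3] - 2 * field[-2] + field[-1])

if rank == 0:
    print("halo exchange overlapped with interior update on", size, "ranks")
```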

  2. Design of a Kaplan turbine for a wide range of operating head -Curved draft tube design and model test verification-

    NASA Astrophysics Data System (ADS)

    KO, Pohan; MATSUMOTO, Kiyoshi; OHTAKE, Norio; DING, Hua

    2016-11-01

    For turbomachines, off-design performance improvement is challenging but critical for maximising the operating envelope. In this paper, a curved draft tube for a medium-head Kaplan-type hydro turbine is introduced and discussed for its significant effect on expanding the operating head range. Without adding any extra structure or working fluid for swirl destruction and damping, a carefully designed draft tube outline with selected placement of center-piers successfully suppresses the growth of turbulent eddies and the transport of swirl to the outlet. More kinetic energy is also recovered and the head loss is reduced. Finally, the model test results are presented. A clear performance improvement was found in the lower-net-head area, where an efficiency improvement of up to 20% was measured without compromising the best efficiency point. Additionally, this design results in a new draft tube that is more compact in size and thus leads to better construction and manufacturing cost performance for the prototype. The draft tube geometry design process considered the best efficiency point together with off-design points covering various net heads and discharges. The hydraulic performance and flow behavior were numerically previewed and visualized by solving the Reynolds-Averaged Navier-Stokes equations with the Shear Stress Transport turbulence model. The simulation assumed steady-state incompressible turbulent flow inside the flow passage, with the inlet boundary condition taken from a carefully simulated runner-outlet flow pattern. For confirmation, the turbine efficiency over the entire operating area was verified by model tests.

  3. Optoelectronic engineering of colloidal quantum-dot solar cells beyond the efficiency black hole: a modeling approach

    NASA Astrophysics Data System (ADS)

    Mahpeykar, Seyed Milad; Wang, Xihua

    2017-02-01

    Colloidal quantum dot (CQD) solar cells have been under the spotlight in recent years mainly due to their potential for low-cost solution-processed fabrication and efficient light harvesting through multiple exciton generation (MEG) and tunable absorption spectrum via the quantum size effect. Despite the impressive advances achieved in charge carrier mobility of quantum dot solids and the cells' light trapping capabilities, the recent progress in CQD solar cell efficiencies has been slow, leaving them behind other competing solar cell technologies. In this work, using comprehensive optoelectronic modeling and simulation, we demonstrate the presence of a strong efficiency loss mechanism, here called the "efficiency black hole", that can significantly hold back the improvements achieved by any efficiency enhancement strategy. We prove that this efficiency black hole is the result of sole focus on enhancement of either light absorption or charge extraction capabilities of CQD solar cells. This means that for a given thickness of CQD layer, improvements accomplished exclusively in optic or electronic aspect of CQD solar cells do not necessarily translate into tangible enhancement in their efficiency. The results suggest that in order for CQD solar cells to come out of the mentioned black hole, incorporation of an effective light trapping strategy and a high quality CQD film at the same time is an essential necessity. Using the developed optoelectronic model, the requirements for this incorporation approach and the expected efficiencies after its implementation are predicted as a roadmap for CQD solar cell research community.
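
    The absorption-versus-extraction trade-off behind the "efficiency black hole" argument can be illustrated with a toy calculation: Beer-Lambert absorption rises with CQD film thickness while a diffusion-length-limited collection probability falls, so a proxy EQE peaks at an intermediate thickness unless both aspects are improved together. All parameter values below are hypothetical and not taken from the paper.

```python
# Toy absorption/collection trade-off versus CQD film thickness.
# Beer-Lambert absorption improves with thickness, while the mean carrier
# collection probability (uniform generation, collection length L_d) degrades.
import numpy as np

alpha = 1.0e4            # absorption coefficient, 1/cm (hypothetical)
L_d   = 200e-7           # effective collection length, cm (~200 nm, hypothetical)
thickness = np.linspace(50e-7, 1000e-7, 8)      # 50-1000 nm film

absorption = 1.0 - np.exp(-alpha * thickness)               # light absorbed
collection = (L_d / thickness) * (1 - np.exp(-thickness / L_d))  # mean collection prob.
eqe_proxy = absorption * collection

for t, a, c, e in zip(thickness, absorption, collection, eqe_proxy):
    print(f"{t*1e7:6.0f} nm  absorbed={a:.2f}  collected={c:.2f}  EQE~{e:.2f}")
```

    The proxy peaks at an intermediate thickness, which is the qualitative point of the "black hole" argument: thickening the film for optics alone, or thinning it for extraction alone, eventually stops paying off.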

  4. Does integration of HIV and sexual and reproductive health services improve technical efficiency in Kenya and Swaziland? An application of a two-stage semi parametric approach incorporating quality measures

    PubMed Central

    Obure, Carol Dayo; Jacobs, Rowena; Guinness, Lorna; Mayhew, Susannah; Vassall, Anna

    2016-01-01

    Theoretically, integration of vertically organized services is seen as an important approach to improving the efficiency of health service delivery. However, there is a dearth of evidence on the effect of integration on the technical efficiency of health service delivery. Furthermore, where technical efficiency has been assessed, there have been few attempts to incorporate quality measures within efficiency measurement models particularly in sub-Saharan African settings. This paper investigates the technical efficiency and the determinants of technical efficiency of integrated HIV and sexual and reproductive health (SRH) services using data collected from 40 health facilities in Kenya and Swaziland for 2008/2009 and 2010/2011. Incorporating a measure of quality, we estimate the technical efficiency of health facilities and explore the effect of integration and other environmental factors on technical efficiency using a two-stage semi-parametric double bootstrap approach. The empirical results reveal a high degree of inefficiency in the health facilities studied. The mean bias corrected technical efficiency scores taking quality into consideration varied between 22% and 65% depending on the data envelopment analysis (DEA) model specification. The number of additional HIV services in the maternal and child health unit, public ownership and facility type, have a positive and significant effect on technical efficiency. However, number of additional HIV and STI services provided in the same clinical room, proportion of clinical staff to overall staff, proportion of HIV services provided, and rural location had a negative and significant effect on technical efficiency. The low estimates of technical efficiency and mixed effects of the measures of integration on efficiency challenge the notion that integration of HIV and SRH services may substantially improve the technical efficiency of health facilities. The analysis of quality and efficiency as separate dimensions of performance suggest that efficiency may be achieved without sacrificing quality. PMID:26803655

  5. Does integration of HIV and sexual and reproductive health services improve technical efficiency in Kenya and Swaziland? An application of a two-stage semi parametric approach incorporating quality measures.

    PubMed

    Obure, Carol Dayo; Jacobs, Rowena; Guinness, Lorna; Mayhew, Susannah; Vassall, Anna

    2016-02-01

    Theoretically, integration of vertically organized services is seen as an important approach to improving the efficiency of health service delivery. However, there is a dearth of evidence on the effect of integration on the technical efficiency of health service delivery. Furthermore, where technical efficiency has been assessed, there have been few attempts to incorporate quality measures within efficiency measurement models particularly in sub-Saharan African settings. This paper investigates the technical efficiency and the determinants of technical efficiency of integrated HIV and sexual and reproductive health (SRH) services using data collected from 40 health facilities in Kenya and Swaziland for 2008/2009 and 2010/2011. Incorporating a measure of quality, we estimate the technical efficiency of health facilities and explore the effect of integration and other environmental factors on technical efficiency using a two-stage semi-parametric double bootstrap approach. The empirical results reveal a high degree of inefficiency in the health facilities studied. The mean bias corrected technical efficiency scores taking quality into consideration varied between 22% and 65% depending on the data envelopment analysis (DEA) model specification. The number of additional HIV services in the maternal and child health unit, public ownership and facility type, have a positive and significant effect on technical efficiency. However, number of additional HIV and STI services provided in the same clinical room, proportion of clinical staff to overall staff, proportion of HIV services provided, and rural location had a negative and significant effect on technical efficiency. The low estimates of technical efficiency and mixed effects of the measures of integration on efficiency challenge the notion that integration of HIV and SRH services may substantially improve the technical efficiency of health facilities. The analysis of quality and efficiency as separate dimensions of performance suggest that efficiency may be achieved without sacrificing quality. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.

  6. Number and location of drainage catheter side holes: in vitro evaluation.

    PubMed

    Ballard, D H; Alexander, J S; Weisman, J A; Orchard, M A; Williams, J T; D'Agostino, H B

    2015-09-01

    To evaluate the influence of number and location of catheter shaft side holes regarding drainage efficiency in an in vitro model. Three different drainage catheter models were constructed: open-ended model with no side holes (one catheter), unilateral side hole model (six catheters with one to six unilateral side holes), and bilateral side hole model (six catheters with one to six bilateral side holes). Catheters were inserted into a drainage output-measuring device with a constant-pressure reservoir of water. The volume of water evacuated by each of the catheters at 10-second intervals was measured. A total of five trials were performed for each catheter. Data were analysed using one-way analysis of variance. The open-ended catheter had a mean drainage volume comparable to the unilateral model catheters with three, four, and five side holes. Unilateral model catheters had significant drainage volume increases up to three side holes; unilateral model catheters with more than three side holes had no significant improvement in drainage volume. All bilateral model catheters had significantly higher mean drainage volumes than their unilateral counterparts. There was no significant difference between the mean drainage volume with one, two, or three pairs of bilateral side holes. Further, there was no drainage improvement by adding additional bilateral side holes. The present in vitro study suggests that beyond a critical side hole number threshold, adding more distal side holes does not improve catheter drainage efficiency. These results may be used to enhance catheter design towards improving their drainage efficiency. Copyright © 2015 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  7. Cost inefficiency in Washington hospitals: a stochastic frontier approach using panel data.

    PubMed

    Li, T; Rosenman, R

    2001-06-01

    We analyze a sample of Washington State hospitals with a stochastic frontier panel data model, specifying the cost function as a generalized Leontief function which, according to a Hausman test, performs better in this case than the translog form. A one-stage FGLS estimation procedure, which directly models the inefficiency effects, improves the efficiency of our estimates. We find that hospitals with higher casemix indices or more beds are less efficient, while for-profit hospitals and those with a higher proportion of Medicare patient days are more efficient. Relative to the most efficient hospital, the average hospital is only about 67% efficient.

  8. Domestic refrigeration appliances in Poland: Potential for improving energy efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyers, S.; Schipper, L.; Lebot, B.

    1993-08-01

    This report is based on information collected from the main Polish manufacturer of refrigeration appliances. We describe their production facilities, and show that the energy consumption of their models for domestic sale is substantially higher than the average for similar models made in W. Europe. Lack of data and uncertainty about future production costs in Poland limits our evaluation of the cost-effective potential to increase energy efficiency, but it appears likely that considerable improvement would be economic from a societal perspective. Many design options are likely to have a simple payback of less than five years. We found that the production facilities are in need of substantial modernization in order to produce higher quality and more efficient appliances. We discuss policy options that could help to build a market for more efficient appliances in Poland and thereby encourage investment to produce such equipment.

  9. Distribution system model calibration with big data from AMI and PV inverters

    DOE PAGES

    Peppanen, Jouni; Reno, Matthew J.; Broderick, Robert J.; ...

    2016-03-03

    Efficient management and coordination of distributed energy resources with advanced automation schemes requires accurate distribution system modeling and monitoring. Big data from smart meters and photovoltaic (PV) micro-inverters can be leveraged to calibrate existing utility models. This paper presents computationally efficient distribution system parameter estimation algorithms to improve the accuracy of existing utility feeder radial secondary circuit model parameters. The method is demonstrated using a real utility feeder model with advanced metering infrastructure (AMI) and PV micro-inverters, along with alternative parameter estimation approaches that can be used to improve secondary circuit models when limited measurement data is available. Lastly, the parameter estimation accuracy is demonstrated for both a three-phase test circuit with typical secondary circuit topologies and single-phase secondary circuits in a real mixed-phase test system.

  10. Distribution system model calibration with big data from AMI and PV inverters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peppanen, Jouni; Reno, Matthew J.; Broderick, Robert J.

    Efficient management and coordination of distributed energy resources with advanced automation schemes requires accurate distribution system modeling and monitoring. Big data from smart meters and photovoltaic (PV) micro-inverters can be leveraged to calibrate existing utility models. This paper presents computationally efficient distribution system parameter estimation algorithms to improve the accuracy of existing utility feeder radial secondary circuit model parameters. The method is demonstrated using a real utility feeder model with advanced metering infrastructure (AMI) and PV micro-inverters, along with alternative parameter estimation approaches that can be used to improve secondary circuit models when limited measurement data is available. Lastly, the parameter estimation accuracy is demonstrated for both a three-phase test circuit with typical secondary circuit topologies and single-phase secondary circuits in a real mixed-phase test system.

  11. Enhanced analysis of real-time PCR data by using a variable efficiency model: FPK-PCR

    PubMed Central

    Lievens, Antoon; Van Aelst, S.; Van den Bulcke, M.; Goetghebeur, E.

    2012-01-01

    Current methodology in real-time polymerase chain reaction (PCR) analysis performs well provided PCR efficiency remains constant over reactions. Yet, small changes in efficiency can lead to large quantification errors. Particularly in biological samples, the possible presence of inhibitors poses a challenge. We present a new approach to single reaction efficiency calculation, called Full Process Kinetics-PCR (FPK-PCR). It combines a kinetically more realistic model with flexible adaptation to the full range of data. By reconstructing the entire chain of cycle efficiencies, rather than restricting the focus to a ‘window of application’, one extracts additional information and removes a level of arbitrariness. The maximal efficiency estimates returned by the model are comparable in accuracy and precision to both the gold standard of serial dilution and other single reaction efficiency methods. The cycle-to-cycle changes in efficiency, as described by the FPK-PCR procedure, stay considerably closer to the data than those from other S-shaped models. The assessment of individual cycle efficiencies returns more information than other single efficiency methods. It allows in-depth interpretation of real-time PCR data and reconstruction of the fluorescence data, providing quality control. Finally, by implementing a global efficiency model, reproducibility is improved as the selection of a window of application is avoided. PMID:22102586

  12. The development of furrower model blade to paddlewheel aerator for improving aeration efficiency

    NASA Astrophysics Data System (ADS)

    Bahri, Samsul; Praeko Agus Setiawan, Radite; Hermawan, Wawan; Zairin Junior, Muhammad

    2018-05-01

    The success of intensive aquaculture is strongly influenced by the ability of farmers to overcome the deterioration of water quality, in particular low dissolved oxygen, which is addressed through aeration. The paddlewheel aerator is the device most widely used in pond farming because of its effective aeration mechanism and usable driving power. However, its aeration performance is still low, so the operational cost of aeration in aquaculture remains high. Until now, efforts to improve aeration performance have focused on two-dimensional blade designs, which do not provide optimum results because the power required for aeration rises in direct proportion to the aeration rate. The aim of this research is to develop a three-dimensional furrowed blade model. The furrower model blades were designed with 1.6 cm diameter holes, a 45° vertical blade angle, and a 30° horizontal blade angle. The furrowed model blades performed best at a submerged blade depth of 9 cm, with an electrical power consumption of 567.54 W and a splash coverage volume of 4.322 m3. The standard aeration efficiency is 2.72 kg O2 kWh-1. The furrowed model blades can therefore improve the aeration efficiency of paddlewheel aerators.
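
    For context, the standard aeration efficiency quoted above (kg O2 kWh-1) is conventionally defined as the standard oxygen transfer rate per unit power input; the generic definition below is assumed standard and is not the authors' specific test protocol.

```latex
% Conventional definition (assumed), not the paper's specific procedure:
\mathrm{SAE} = \frac{\mathrm{SOTR}}{P} \qquad \left[\mathrm{kg\,O_2\,kWh^{-1}}\right]
```

    Here SOTR is the standard oxygen transfer rate (kg O2 per hour, determined under standard conditions) and P is the electrical power drawn by the aerator (kW).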

  13. An improved Burgers cellular automaton model for bicycle flow

    NASA Astrophysics Data System (ADS)

    Xue, Shuqi; Jia, Bin; Jiang, Rui; Li, Xingang; Shan, Jingjing

    2017-12-01

    As an energy-efficient and healthy transport mode, bicycling has recently attracted the attention of governments, transport planners, and researchers. The dynamic characteristics of bicycle flow must be investigated to improve the facility design and traffic operation of bicycling. We model the bicycle flow using an improved Burgers cellular automaton model. Through a following-move mechanism, the modified model enables bicycles to move smoothly and increases the critical density to a more rational level than in the original model. The model is calibrated and validated using experimental data and field data. The results show that the improved model can effectively simulate bicycle flow. The performance of the model under different parameters is investigated and discussed. Strengths and limitations of the improved model are identified for future work.
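
    The paper's specific following-move rule is not reproduced here, but the Burgers cellular automaton it modifies is commonly written as the ultradiscrete update u_j(t+1) = u_j(t) + min(u_{j-1}, L - u_j) - min(u_j, L - u_{j+1}), where L is the cell capacity. A minimal sketch of that baseline rule (assumed form, illustrative parameters):

```python
import numpy as np

def burgers_ca_step(u, capacity):
    """One update of the baseline Burgers cellular automaton.
    Inflow from the left and outflow to the right are both limited by the
    space available in the receiving cell, which produces jam-like behaviour
    at high density while conserving the total number of bicycles."""
    left = np.roll(u, 1)      # u_{j-1}, periodic boundary
    right = np.roll(u, -1)    # u_{j+1}
    inflow = np.minimum(left, capacity - u)
    outflow = np.minimum(u, capacity - right)
    return u + inflow - outflow

rng = np.random.default_rng(0)
capacity = 2                                   # max bicycles per cell (illustrative)
u = rng.integers(0, capacity + 1, size=50)     # random initial occupancy
total = u.sum()
for _ in range(200):
    u = burgers_ca_step(u, capacity)
assert u.sum() == total                        # bicycles are conserved
print(u)
```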

  14. Engagement with the auditory processing system during targeted auditory cognitive training mediates changes in cognitive outcomes in individuals with schizophrenia

    PubMed Central

    Biagianti, Bruno; Fisher, Melissa; Neilands, Torsten B.; Loewy, Rachel; Vinogradov, Sophia

    2016-01-01

    BACKGROUND Individuals with schizophrenia who engage in targeted cognitive training (TCT) of the auditory system show generalized cognitive improvements. The high degree of variability in cognitive gains may be due to individual differences in the level of engagement of the underlying neural system target. METHODS 131 individuals with schizophrenia underwent 40 hours of TCT. We identified target engagement of auditory system processing efficiency by modeling subject-specific trajectories of auditory processing speed (APS) over time. Lowess analysis, mixed models repeated measures analysis, and latent growth curve modeling were used to examine whether APS trajectories were moderated by age and illness duration, and mediated improvements in cognitive outcome measures. RESULTS We observed significant improvements in APS from baseline to 20 hours of training (initial change), followed by a flat APS trajectory (plateau) at subsequent time-points. Participants showed inter-individual variability in the steepness of the initial APS change and in the APS plateau achieved and sustained between 20–40 hours. We found that participants who achieved the fastest APS plateau showed the greatest transfer effects to untrained cognitive domains. CONCLUSIONS There is a significant association between an individual's ability to generate and sustain auditory processing efficiency and their degree of cognitive improvement after TCT, independent of baseline neurocognition. The APS plateau may therefore represent a behavioral measure of target engagement mediating treatment response. Future studies should examine the optimal plateau of auditory processing efficiency required to induce significant cognitive improvements, in the context of inter-individual differences in neural plasticity and sensory system efficiency that characterize schizophrenia. PMID:27617637

  15. Is high-intensity interval training a time-efficient exercise strategy to improve health and fitness?

    PubMed

    Gillen, Jenna B; Gibala, Martin J

    2014-03-01

    Growing research suggests that high-intensity interval training (HIIT) is a time-efficient exercise strategy to improve cardiorespiratory and metabolic health. "All out" HIIT models such as Wingate-type exercise are particularly effective, but this type of training may not be safe, tolerable or practical for many individuals. Recent studies, however, have revealed the potential for other models of HIIT, which may be more feasible but are still time-efficient, to stimulate adaptations similar to more demanding low-volume HIIT models and high-volume endurance-type training. As few as three HIIT sessions per week, involving ≤10 min of intense exercise within a time commitment of ≤30 min per session, including warm-up, recovery between intervals and cool down, have been shown to improve aerobic capacity, skeletal muscle oxidative capacity, exercise tolerance and markers of disease risk after only a few weeks in both healthy individuals and people with cardiometabolic disorders. Additional research is warranted, as the studies conducted have been relatively short-term, with a limited number of measurements performed on small groups of subjects. However, given that "lack of time" remains one of the most commonly cited barriers to regular exercise participation, low-volume HIIT is a time-efficient exercise strategy that warrants consideration by health practitioners and fitness professionals.

  16. Evaluating the efficiency of a zakat institution over a period of time using data envelopment analysis

    NASA Astrophysics Data System (ADS)

    Krishnan, Anath Rau; Hamzah, Ahmad Aizuddin

    2017-08-01

    It is crucial for a zakat institution to evaluate and understand how efficiently it has operated in the past, so that ideal strategies can be developed for future improvement. However, evaluating the efficiency of a zakat institution is a challenging process as it involves multiple inputs and/or outputs. This paper proposes a step-by-step procedure comprising two data envelopment analysis models, namely the dual Charnes-Cooper-Rhodes model and the slack-based model, to quantitatively measure the overall efficiency of a zakat institution over a period of time. The applicability of the proposed procedure was demonstrated by evaluating the efficiency of Pusat Zakat Sabah, Malaysia from 2007 to 2015, treating each year as a decision making unit. Two inputs (i.e. number of staff and number of branches) and two outputs (i.e. total collection and total distribution) were used to measure the overall efficiency achieved each year. The causes of inefficiency and strategies for future improvement were discussed based on the results.
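
    For orientation, the input-oriented CCR efficiency of a single decision making unit (DMU) can be computed from a small linear program (envelopment form). The sketch below uses SciPy with made-up inputs (staff, branches) and outputs (collection, distribution); it illustrates only the plain CCR score, not the dual formulation or the slack-based model used in the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data: one row per DMU (year); all values are hypothetical.
X = np.array([[30, 5], [32, 6], [35, 6], [40, 7]], dtype=float)     # inputs: staff, branches
Y = np.array([[10, 9], [12, 10], [12, 11], [15, 14]], dtype=float)  # outputs: collection, distribution

def ccr_input_efficiency(o, X, Y):
    """Input-oriented CCR efficiency of DMU `o`; variables are [theta, lambda_1..lambda_n]."""
    n, m = X.shape              # n DMUs, m inputs
    s = Y.shape[1]              # s outputs
    c = np.r_[1.0, np.zeros(n)]                    # minimise theta
    A_in = np.c_[-X[o], X.T]                       # sum_j lam_j * x_ij <= theta * x_io
    A_out = np.c_[np.zeros(s), -Y.T]               # sum_j lam_j * y_rj >= y_ro
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[o]],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

for o in range(len(X)):
    print(f"DMU {o}: CCR efficiency = {ccr_input_efficiency(o, X, Y):.3f}")
```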

  17. A two-stage DEA approach for environmental efficiency measurement.

    PubMed

    Song, Malin; Wang, Shuhong; Liu, Wei

    2014-05-01

    The slacks-based measure (SBM) model based on constant returns to scale has achieved some good results in addressing undesirable outputs, such as waste water and waste gas, in measuring environmental efficiency. However, the traditional SBM model cannot deal with the scenario in which desirable outputs are constant. Based on the axiomatic theory of productivity, this paper carries out a systematic study of the SBM model considering undesirable outputs, and further expands the SBM model from the perspective of network analysis. The new model can not only perform efficiency evaluation considering undesirable outputs, but also calculate desirable and undesirable outputs separately. The latter advantage solves the "dependence" problem of outputs, that is, that desirable outputs cannot be increased without producing undesirable outputs. The following illustration shows that the efficiency values obtained by the two-stage approach are smaller than those obtained by the traditional SBM model. Our approach provides a more profound analysis of how to improve the environmental efficiency of the decision making units.

  18. Second harmonic generation efficiency affected by radiation force of a high-energy laser beam through stress within a mounted potassium dihydrogen phosphate crystal

    NASA Astrophysics Data System (ADS)

    Su, Ruifeng; Zhu, Mingzhi; Huang, Zhan; Wang, Baoxu; Wu, Wenkai

    2018-01-01

    The influence of the radiation force of a high-energy laser beam on the second harmonic generation (SHG) efficiency through stress within a mounted potassium dihydrogen phosphate (KDP) crystal is studied, and an active method of improving the SHG efficiency by controlling the stress is proposed. First, a model for studying the influence of the radiation force on the SHG efficiency is established: the radiation force is analyzed theoretically, the stress caused by the radiation force is analyzed theoretically and calculated numerically using the finite-element method, and the influence of the stress on the SHG efficiency is analyzed theoretically. Then, a method of improving the SHG efficiency by controlling the stress through adjustment of the structural parameters of the KDP crystal's mounting set is examined. The results demonstrate that the radiation force induces stress within the KDP crystal and thereby degrades the SHG efficiency; however, the SHG efficiency can be improved by controlling the stress through adjustment of the structural parameters of the mounting set.
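
    The physical mechanism is the usual phase-matching sensitivity of SHG: in the undepleted-pump, plane-wave limit the conversion efficiency scales as

```latex
% Standard textbook scaling (context only), for crystal length L and phase mismatch \Delta k:
\eta_{\mathrm{SHG}} \;\propto\; L^{2}\,\mathrm{sinc}^{2}\!\left(\frac{\Delta k\, L}{2}\right)
```

    so any stress-induced change of the refractive indices (through the photoelastic effect) perturbs Δk and lowers the conversion efficiency, which is why controlling the mounting stress can recover part of it.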

  19. Bayesian logistic regression approaches to predict incorrect DRG assignment.

    PubMed

    Suleiman, Mani; Demirhan, Haydar; Boyd, Leanne; Girosi, Federico; Aksakalli, Vural

    2018-05-07

    Episodes of care involving similar diagnoses and treatments and requiring similar levels of resource utilisation are grouped to the same Diagnosis-Related Group (DRG). In jurisdictions which implement DRG-based payment systems, DRGs are a major determinant of funding for inpatient care. Hence, service providers often dedicate auditing staff to the task of checking that episodes have been coded to the correct DRG. The use of statistical models to estimate an episode's probability of DRG error can significantly improve the efficiency of clinical coding audits. This study implements Bayesian logistic regression models with weakly informative prior distributions to estimate the likelihood that episodes require a DRG revision, comparing these models with each other and with classical maximum likelihood estimates. All Bayesian approaches had more stable model parameters than maximum likelihood. The best performing Bayesian model improved overall classification performance by 6% compared to maximum likelihood and by 34% compared to random classification. We found that the original DRG, the coder and the day of coding all have a significant effect on the likelihood of DRG error. Use of Bayesian approaches has improved model parameter stability and classification accuracy. This method has already led to improved audit efficiency in an operational capacity.
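
    The paper's models are fitted by full Bayesian computation; as a much smaller illustration of how a weakly informative Gaussian prior stabilises logistic-regression coefficients, the sketch below computes the corresponding MAP (penalised maximum likelihood) estimate on simulated data. The predictors and prior scale are hypothetical, not the study's variables.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, p = 500, 4
X = np.c_[np.ones(n), rng.normal(size=(n, p - 1))]          # intercept + 3 hypothetical predictors
beta_true = np.array([-2.0, 1.0, 0.5, 0.0])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))   # 1 = episode needs DRG revision

def neg_log_posterior(beta, X, y, prior_sd=2.5):
    z = X @ beta
    loglik = np.sum(y * z - np.logaddexp(0.0, z))        # Bernoulli log-likelihood (stable form)
    log_prior = -0.5 * np.sum(beta**2) / prior_sd**2     # weakly informative N(0, 2.5^2) prior
    return -(loglik + log_prior)

fit = minimize(neg_log_posterior, np.zeros(p), args=(X, y), method="BFGS")
print("MAP coefficients:", np.round(fit.x, 3))
```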

  20. Update on Bayesian Blocks: Segmented Models for Sequential Data

    NASA Technical Reports Server (NTRS)

    Scargle, Jeff

    2017-01-01

    The Bayesian Blocks algorithm, in wide use in astronomy and other areas, has been improved in several ways. The model for block shape has been generalized to include forms other than a constant signal rate, e.g., linear, exponential, or other parametric models. In addition, the computational efficiency has been improved, so that instead of O(N^2) the basic algorithm is O(N) in most cases. Other improvements in the theory and application of segmented representations will be described.
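
    For readers who want to experiment with segmented representations, an implementation of the basic Bayesian Blocks algorithm is available in Astropy (usage assumed below; the generalized block shapes described in this update are not part of that routine):

```python
import numpy as np
from astropy.stats import bayesian_blocks

rng = np.random.default_rng(2)
# Hypothetical photon arrival times: a quiet stretch followed by a burst.
t = np.sort(np.concatenate([rng.uniform(0, 50, 200), rng.uniform(50, 60, 400)]))

edges = bayesian_blocks(t, fitness="events", p0=0.01)   # optimal change points for event data
print("block edges:", np.round(edges, 2))
```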

  1. Strained layer relaxation effect on current crowding and efficiency improvement of GaN based LED

    NASA Astrophysics Data System (ADS)

    Aurongzeb, Deeder

    2012-02-01

    The efficiency droop of GaN-based LEDs at high power and high temperature has been addressed by several groups in terms of carrier delocalization and the photon recycling effect (radiative recombination). We extend the previous droop models to include optical loss parameters. We correlate strained layer relaxation at high temperature and high current density with carrier delocalization. We propose a third-order model and show that Shockley-Read-Hall and Auger recombination effects are not enough to account for the efficiency loss. Several strained layer modification schemes are proposed based on the model.
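
    The "third order model" extends the standard ABC recombination picture; for context, the baseline ABC form (not the authors' extended loss terms) writes the internal quantum efficiency at carrier density n as

```latex
% Baseline ABC recombination model (context only; the paper adds optical loss terms):
\eta_{\mathrm{IQE}}(n) = \frac{B n^{2}}{A n + B n^{2} + C n^{3}}
```

    where A is the Shockley-Read-Hall coefficient, B the radiative coefficient, and C the Auger coefficient; the abstract argues that these recombination terms alone cannot account for the observed efficiency loss once strained layer relaxation is considered.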

  2. High efficiency solar cell processing

    NASA Technical Reports Server (NTRS)

    Ho, F.; Iles, P. A.

    1985-01-01

    At the time of writing, cells made by several groups are approaching 19% efficiency. General aspects of the processing required for such cells are discussed. Most processing used for high efficiency cells is derived from space-cell or concentrator cell technology, and recent advances have been obtained from improved techniques rather than from better understanding of the limiting mechanisms. Theory and modeling are fairly well developed, and adequate to guide further asymptotic increases in the performance of near-conventional cells. There are several competitive cell designs with promise of higher performance (20%), but for these designs further improvements are required. The available cell processing technology to fabricate high efficiency cells is examined.

  3. Hybrid Hydro Renewable Energy Storage Model

    NASA Astrophysics Data System (ADS)

    Dey, Asit Kr

    2018-01-01

    This paper presents wind and tidal turbine pumped-storage solutions for improving the energy efficiency and economic sustainability of renewable energy systems. Pumped storage is indicated as a viable option for solving problems of energy production and for integrating intermittent renewable energies, as it provides system flexibility against load fluctuations as well as storage of energy from intermittent sources. Seawater pumped storage is one of the best and most efficient options among renewable resources: as an integrated solution it improves the elasticity of the energy system and the overall system efficiency.

  4. Development of Efficient Real-Fluid Model in Simulating Liquid Rocket Injector Flows

    NASA Technical Reports Server (NTRS)

    Cheng, Gary; Farmer, Richard

    2003-01-01

    The characteristics of propellant mixing near the injector have a profound effect on liquid rocket engine performance. However, the flow features near the injector of liquid rocket engines are extremely complicated; for example, supercritical-pressure spray, turbulent mixing, and chemical reactions are all present. Previously, a homogeneous spray approach with a real-fluid property model was developed to account for compressibility and evaporation effects such that the thermodynamic properties of a mixture over a wide range of pressures and temperatures can be properly calculated, including liquid-phase, gas-phase, two-phase, and dense fluid regions. The developed homogeneous spray model demonstrated good success in simulating uni-element shear coaxial injector spray combustion flows. However, the real-fluid model suffered a computational deficiency when applied to a pressure-based computational fluid dynamics (CFD) code. The deficiency is caused by pressure and enthalpy being the independent variables in the solution procedure of a pressure-based code, whereas the real-fluid model utilizes density and temperature as independent variables. The objective of the present research work is to improve the computational efficiency of the real-fluid property model in computing thermal properties. The proposed approach is called an efficient real-fluid model, and the improvement of computational efficiency is achieved by using a combination of a liquid species and a gaseous species to represent a real-fluid species.

  5. Experimental feasibility of the airborne measurement of absolute oil fluorescence spectral conversion efficiency

    NASA Technical Reports Server (NTRS)

    Hoge, F. E.; Swift, R. N.

    1983-01-01

    Airborne lidar oil spill experiments carried out to determine the practicability of the AOFSCE (absolute oil fluorescence spectral conversion efficiency) computational model are described. The results reveal that the model is suitable over a considerable range of oil film thicknesses provided the fluorescence efficiency of the oil does not approach the minimum detection sensitivity limitations of the lidar system. Separate airborne lidar experiments to demonstrate measurement of the water column Raman conversion efficiency are also conducted to ascertain the ultimate feasibility of converting such relative oil fluorescence to absolute values. Whereas the AOFSCE model is seen as highly promising, further airborne water column Raman conversion efficiency experiments with improved temporal or depth-resolved waveform calibration and software deconvolution techniques are thought necessary for a final determination of suitability.

  6. A new framework to increase the efficiency of large-scale solar power plants.

    NASA Astrophysics Data System (ADS)

    Alimohammadi, Shahrouz; Kleissl, Jan P.

    2015-11-01

    A new framework to estimate the spatio-temporal behavior of solar power is introduced, which predicts the statistical behavior of power output at utility-scale photovoltaic (PV) power plants. The framework is based on spatio-temporal Gaussian process regression (Kriging) models, which incorporate satellite data with the UCSD version of the Weather Research and Forecasting model. This framework is designed to improve the efficiency of large-scale solar power plants. The results are validated against measurements from local pyranometer sensors, and improvements are observed in different scenarios.
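
    As a generic illustration of the Kriging (Gaussian process regression) building block of such a framework, and not of the UCSD/WRF-coupled model itself, scikit-learn can fit a spatio-temporal GP to irradiance-related observations. The coordinates, kernel, and values below are hypothetical.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
# Hypothetical training data: columns = (x_km, y_km, hour); target = clear-sky index.
X_train = rng.uniform([0, 0, 8], [10, 10, 16], size=(80, 3))
y_train = (0.8
           + 0.1 * np.sin(X_train[:, 2] / 2.0)      # diurnal trend
           + 0.05 * rng.normal(size=80))            # measurement noise

kernel = 1.0 * RBF(length_scale=[3.0, 3.0, 2.0]) + WhiteKernel(noise_level=0.05**2)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

X_query = np.array([[5.0, 5.0, 12.0]])              # plant centre at solar noon
mean, std = gpr.predict(X_query, return_std=True)
print(f"predicted clear-sky index: {mean[0]:.3f} +/- {std[0]:.3f}")
```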

  7. Application of RFID in the area of agricultural products quality traceability and tracking and the anti-collision algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Zu-liang; Zhang, Ting; Xie, Shi-yang

    2017-01-01

    In order to improve agricultural tracing efficiency and reduce tracking and monitoring costs, quality tracking and tracing of agricultural products based on Radio-Frequency Identification (RFID) technology is studied, and a tracing and tracking model is set up. A three-layer structure model is established to realize high-quality traceability and tracking of agricultural products. To solve the collision problems between multiple RFID tags and improve identification efficiency, a new reservation slot allocation mechanism is proposed, and its parameters are analyzed and optimized by numerical simulation.
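
    The reservation slot allocation mechanism itself is defined in the paper; the baseline it improves upon is framed slotted ALOHA, whose identification efficiency is easy to simulate. A minimal sketch (assumed baseline, illustrative parameters):

```python
import numpy as np

def slotted_aloha_efficiency(n_tags, frame_size, rounds=2000, seed=0):
    """Fraction of slots containing exactly one tag reply (singleton slots),
    i.e. the share of slots that yield a successful identification."""
    rng = np.random.default_rng(seed)
    successes = 0
    for _ in range(rounds):
        slots = rng.integers(0, frame_size, size=n_tags)      # each tag picks a slot
        counts = np.bincount(slots, minlength=frame_size)
        successes += np.count_nonzero(counts == 1)
    return successes / (rounds * frame_size)

for frame in (32, 64, 128, 256):
    print(frame, round(slotted_aloha_efficiency(n_tags=64, frame_size=frame), 3))
# Efficiency peaks near frame_size ~ number of tags (about 1/e ~ 0.368),
# which is why dynamic/reservation schemes try to match the two.
```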

  8. Local deformation for soft tissue simulation

    PubMed Central

    Omar, Nadzeri; Zhong, Yongmin; Smith, Julian; Gu, Chengfan

    2016-01-01

    This paper presents a new methodology to localize the deformation range to improve the computational efficiency for soft tissue simulation. This methodology identifies the local deformation range from the stress distribution in soft tissues due to an external force. A stress estimation method is used based on elastic theory to estimate the stress in soft tissues according to a depth from the contact surface. The proposed methodology can be used with both mass-spring and finite element modeling approaches for soft tissue deformation. Experimental results show that the proposed methodology can improve the computational efficiency while maintaining the modeling realism. PMID:27286482

  9. Efficiency and Productivity of County-level Public Hospitals Based on the Data Envelopment Analysis Model and Malmquist Index in Anhui, China

    PubMed Central

    Li, Nian-Nian; Wang, Cun-Hui; Ni, Hong; Wang, Heng

    2017-01-01

    Background: China began to implement the national medical and health system and public hospital reforms in 2009 and 2012, respectively. Anhui Province is one of the four pilot provinces, and its medical reform measures received wide attention nationwide. The effectiveness of these reforms needs attention. This study aimed to assess the efficiency and productivity of county-level public hospitals based on the data envelopment analysis (DEA) model and Malmquist index in Anhui, China, and then provide improvement measures for future hospital development. Methods: We chose 12 county-level hospitals based on geographical distribution and the economic development level in Anhui Province. Relevant data that were collected in the field and then sorted were provided by the administrative departments of the hospitals. DEA models were used to calculate the dynamic efficiency and Malmquist index factors for the 12 institutions. Results: During 2010–2015, the overall average relative service efficiency of the 12 county-level public hospitals was 0.926, and the number of hospitals achieving an effective DEA score in each year from 2010 to 2015 was 4, 6, 7, 7, 6, and 8, respectively. During this same period, the average overall production efficiency was 0.983, and total factor productivity had declined. The overall production efficiency of five hospitals was >1, and that of the rest was <1, between 2010 and 2015. Conclusions: In 2010–2015, the relative service efficiency of the 12 county-level public hospitals in Anhui Province showed a decreasing trend, and the service efficiency of each hospital changed. In the past 6 years, although some hospitals have been effective, the efficiency of county-level public hospitals in Anhui Province has not improved significantly, and total factor productivity has not been effectively improved. County-level public hospitals need to consider their own circumstances to identify their own deficiencies. PMID:29176142

  10. Efficiency and Productivity of County-level Public Hospitals Based on the Data Envelopment Analysis Model and Malmquist Index in Anhui, China.

    PubMed

    Li, Nian-Nian; Wang, Cun-Hui; Ni, Hong; Wang, Heng

    2017-12-05

    China began to implement the national medical and health system and public hospital reforms in 2009 and 2012, respectively. Anhui Province is one of the four pilot provinces, and its medical reform measures received wide attention nationwide. The effectiveness of these reforms needs attention. This study aimed to assess the efficiency and productivity of county-level public hospitals based on the data envelopment analysis (DEA) model and Malmquist index in Anhui, China, and then provide improvement measures for future hospital development. We chose 12 county-level hospitals based on geographical distribution and the economic development level in Anhui Province. Relevant data that were collected in the field and then sorted were provided by the administrative departments of the hospitals. DEA models were used to calculate the dynamic efficiency and Malmquist index factors for the 12 institutions. During 2010-2015, the overall average relative service efficiency of the 12 county-level public hospitals was 0.926, and the number of hospitals achieving an effective DEA score in each year from 2010 to 2015 was 4, 6, 7, 7, 6, and 8, respectively. During this same period, the average overall production efficiency was 0.983, and total factor productivity had declined. The overall production efficiency of five hospitals was >1, and that of the rest was <1, between 2010 and 2015. In 2010-2015, the relative service efficiency of the 12 county-level public hospitals in Anhui Province showed a decreasing trend, and the service efficiency of each hospital changed. In the past 6 years, although some hospitals have been effective, the efficiency of county-level public hospitals in Anhui Province has not improved significantly, and total factor productivity has not been effectively improved. County-level public hospitals need to consider their own circumstances to identify their own deficiencies.

  11. Efficient implementation of the Metropolis-Hastings algorithm, with application to the Cormack-Jolly-Seber model

    USGS Publications Warehouse

    Link, W.A.; Barker, R.J.

    2008-01-01

    Judicious choice of candidate-generating distributions improves the efficiency of the Metropolis-Hastings algorithm. In Bayesian applications, it is sometimes possible to identify an approximation to the target posterior distribution; this approximate posterior distribution is a good choice for candidate generation. These observations are applied to analysis of the Cormack-Jolly-Seber model and its extensions.
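
    Using an approximation to the posterior as the candidate-generating distribution amounts to an independence Metropolis-Hastings sampler. A generic sketch on a toy one-parameter target (a Beta posterior for a survival probability, not the Cormack-Jolly-Seber likelihood itself):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Toy target: posterior of a survival probability phi given 70 successes in 100 trials
# with a uniform prior, i.e. Beta(71, 31).
def log_target(phi):
    return stats.beta.logpdf(phi, 71, 31) if 0 < phi < 1 else -np.inf

# Candidate generator: a normal approximation to the target (the "approximate posterior").
approx = stats.norm(loc=0.7, scale=0.05)

phi, samples = 0.5, []
for _ in range(5000):
    prop = approx.rvs(random_state=rng)
    # Independence-MH acceptance ratio: pi(prop) q(phi) / (pi(phi) q(prop))
    log_alpha = (log_target(prop) + approx.logpdf(phi)) - (log_target(phi) + approx.logpdf(prop))
    if np.log(rng.uniform()) < log_alpha:
        phi = prop
    samples.append(phi)

print("posterior mean ~", round(float(np.mean(samples[1000:])), 3))   # close to 71/102 ~ 0.696
```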

  12. Predicting Statistical Response and Extreme Events in Uncertainty Quantification through Reduced-Order Models

    NASA Astrophysics Data System (ADS)

    Qi, D.; Majda, A.

    2017-12-01

    A low-dimensional reduced-order statistical closure model is developed for quantifying the uncertainty in statistical sensitivity and intermittency in the principal model directions with the largest variability in high-dimensional turbulent systems and turbulent transport models. Imperfect model sensitivity is improved through a recent mathematical strategy for calibrating model errors in a training phase, where information theory and linear statistical response theory are combined in a systematic fashion to achieve optimal model performance. The idea in the reduced-order method comes from a self-consistent mathematical framework for general systems with quadratic nonlinearity, where crucial high-order statistics are approximated by a systematic model calibration procedure. Model efficiency is improved through additional damping and noise corrections to replace the expensive energy-conserving nonlinear interactions. Model errors due to the imperfect nonlinear approximation are corrected by tuning the model parameters using linear response theory with an information metric in a training phase before prediction. A statistical energy principle is adopted to introduce a global scaling factor in characterizing the higher-order moments in a consistent way to improve model sensitivity. Stringent models of barotropic and baroclinic turbulence are used to display the feasibility of the reduced-order methods. Principal statistical responses in mean and variance can be captured by the reduced-order models with accuracy and efficiency. In addition, the reduced-order models are used to capture the crucial passive tracer field that is advected by the baroclinic turbulent flow. It is demonstrated that crucial principal statistical quantities, such as the tracer spectrum and fat tails in the tracer probability density functions at the most important large scales, can be captured efficiently and accurately using the reduced-order tracer model in various dynamical regimes of the flow field with distinct statistical structures.

  13. Risk Classification with an Adaptive Naive Bayes Kernel Machine Model.

    PubMed

    Minnier, Jessica; Yuan, Ming; Liu, Jun S; Cai, Tianxi

    2015-04-22

    Genetic studies of complex traits have uncovered only a small number of risk markers explaining a small fraction of heritability and adding little improvement to disease risk prediction. Standard single marker methods may lack power in selecting informative markers or estimating effects. Most existing methods also typically do not account for non-linearity. Identifying markers with weak signals and estimating their joint effects among many non-informative markers remains challenging. One potential approach is to group markers based on biological knowledge such as gene structure. If markers in a group tend to have similar effects, proper usage of the group structure could improve power and efficiency in estimation. We propose a two-stage method relating markers to disease risk by taking advantage of known gene-set structures. Imposing a naive Bayes kernel machine (KM) model, we estimate gene-set specific risk models that relate each gene-set to the outcome in stage I. The KM framework efficiently models potentially non-linear effects of predictors without requiring explicit specification of functional forms. In stage II, we aggregate information across gene-sets via a regularization procedure. Estimation and computational efficiency are further improved with kernel principal component analysis. Asymptotic results for model estimation and gene-set selection are derived, and numerical studies suggest that the proposed procedure could outperform existing procedures for constructing genetic risk models.
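
    The exact estimator is specific to the paper, but the two-stage shape (per-gene-set kernel models in stage I, a regularised combination in stage II) can be sketched with generic tools. The gene-set boundaries, kernel choice, and penalty below are placeholders, not the authors' specification.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n, p = 300, 30
G = rng.normal(size=(n, p))                                  # genotype-like features (hypothetical)
y = (G[:, :5].sum(axis=1) + 0.5 * rng.normal(size=n) > 0).astype(int)
gene_sets = [list(range(0, 10)), list(range(10, 20)), list(range(20, 30))]  # placeholder structure

# Stage I: one kernel machine per gene set, producing a gene-set risk score.
scores = np.column_stack([
    KernelRidge(kernel="rbf", alpha=1.0).fit(G[:, s], y).predict(G[:, s])
    for s in gene_sets
])

# Stage II: regularised aggregation of gene-set scores (L1 penalty selects gene sets).
final = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(scores, y)
print("gene-set weights:", np.round(final.coef_, 3))
```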

  14. ImSET: Impact of Sector Energy Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roop, Joseph M.; Scott, Michael J.; Schultz, Robert W.

    2005-07-19

    This version of the Impact of Sector Energy Technologies (ImSET) model represents the "next generation" of the previously developed Visual Basic model (ImBUILD 2.0) that was developed in 2003 to estimate the macroeconomic impacts of energy-efficient technology in buildings. More specifically, a special-purpose version of the 1997 benchmark national Input-Output (I-O) model was designed specifically to estimate the national employment and income effects of the deployment of Office of Energy Efficiency and Renewable Energy (EERE)-developed energy-saving technologies. In comparison with the previous versions of the model, this version allows for more complete and automated analysis of the essential features of energy efficiency investments in buildings, industry, transportation, and the electric power sectors. This version also incorporates improvements in the treatment of operations and maintenance costs, and improves the treatment of financing of investment options. ImSET is also easier to use than extant macroeconomic simulation models and incorporates information developed by each of the EERE offices as part of the requirements of the Government Performance and Results Act.

  15. Monitoring Crop Productivity over the U.S. Corn Belt using an Improved Light Use Efficiency Model

    NASA Astrophysics Data System (ADS)

    Wu, X.; Xiao, X.; Zhang, Y.; Qin, Y.; Doughty, R.

    2017-12-01

    Large-scale monitoring of crop yield is of great significance for forecasting food production and prices and for ensuring food security. Satellite data provide temporally and spatially continuous information that, by itself or in combination with other data or models, makes it possible to monitor and understand agricultural productivity regionally. In this study, we first used an improved light use efficiency model, the Vegetation Photosynthesis Model (VPM), to simulate gross primary production (GPP). Model evaluation showed that the simulated GPP (GPPVPM) captured well the spatio-temporal variation of GPP derived from FLUXNET sites. We then applied GPPVPM to monitor crop productivity for corn and soybean over the U.S. Corn Belt and benchmarked it against county-level crop yield statistics. We found that the VPM-based approach provides good estimates (R2 = 0.88, slope = 1.03). We further show the impacts of climate extremes on crop productivity and carbon use efficiency. The study indicates the great potential of VPM for estimating crop yield and for understanding crop yield responses to climate variability and change.
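
    For context, the VPM family of light use efficiency models generally takes the form below; the exact scalar definitions used in the improved version are not reproduced here.

```latex
% Generic VPM / light-use-efficiency structure (scalar definitions vary by implementation):
\mathrm{GPP} = \varepsilon_{g} \times \mathrm{FPAR}_{chl} \times \mathrm{PAR},
\qquad
\varepsilon_{g} = \varepsilon_{0} \times T_{scalar} \times W_{scalar}
```

    Here ε0 is the maximum light use efficiency, FPARchl the fraction of photosynthetically active radiation absorbed by chlorophyll, and Tscalar and Wscalar are temperature and water down-regulation scalars (some versions add a phenology scalar).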

  16. Use of empirical likelihood to calibrate auxiliary information in partly linear monotone regression models.

    PubMed

    Chen, Baojiang; Qin, Jing

    2014-05-10

    In statistical analysis, a regression model is needed if one is interested in finding the relationship between a response variable and covariates. The response may depend on a covariate through some function of that covariate. If one has no knowledge of this functional form but expects it to be monotonically increasing or decreasing, then the isotonic regression model is preferable. Estimation of parameters for isotonic regression models is based on the pool-adjacent-violators algorithm (PAVA), where the monotonicity constraints are built in. With missing data, people often employ the augmented estimating method to improve estimation efficiency by incorporating auxiliary information through a working regression model. However, under the framework of the isotonic regression model, the PAVA does not work directly because the monotonicity constraints are violated. In this paper, we develop an empirical likelihood-based method for the isotonic regression model to incorporate the auxiliary information. Because the monotonicity constraints still hold, the PAVA can be used for parameter estimation. Simulation studies demonstrate that the proposed method can yield more efficient estimates, and in some situations the efficiency improvement is substantial. We apply this method to a dementia study. Copyright © 2013 John Wiley & Sons, Ltd.
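
    The PAVA step referred to above is available off the shelf; as a minimal sketch of fitting a monotone (isotonic) regression with scikit-learn (toy data, not the dementia study):

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(6)
x = np.sort(rng.uniform(0, 10, 60))
y = np.log1p(x) + 0.2 * rng.normal(size=60)        # noisy but monotone trend

iso = IsotonicRegression(increasing=True)           # PAVA under the hood
y_fit = iso.fit_transform(x, y)                     # non-decreasing step function
print(bool(np.all(np.diff(y_fit) >= 0)))            # True: monotonicity constraint holds
```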

  17. Growing Chlorella sp. on meat processing wastewater for nutrient removal and biomass production.

    PubMed

    Lu, Qian; Zhou, Wenguang; Min, Min; Ma, Xiaochen; Chandra, Ceria; Doan, Yen T T; Ma, Yiwei; Zheng, Hongli; Cheng, Sibo; Griffith, Richard; Chen, Paul; Chen, Chi; Urriola, Pedro E; Shurson, Gerald C; Gislerød, Hans R; Ruan, Roger

    2015-12-01

    In this work, Chlorella sp. (UM6151) was selected to treat meat processing wastewater for nutrient removal and biomass production. To balance the nutrient profile and improve biomass yield at low cost, an innovative algae cultivation model based on wastewater mixing was developed. The results showed that the biomass yield (0.675-1.538 g/L) of algae grown on mixed wastewater was much higher than that on individual wastewaters or artificial medium. Wastewater mixing eased the bottleneck for algae growth and contributed to the improved biomass yield. Furthermore, in mixed wastewater with sufficient nitrogen, ammonia nitrogen removal efficiencies (68.75-90.38%) and total nitrogen removal efficiencies (30.06-50.94%) were improved. Wastewater mixing also promoted the synthesis of protein in algal cells. The protein content of algae growing on mixed wastewater reached 60.87-68.65%, which is much higher than that of traditional protein sources. The algae cultivation model based on wastewater mixing is thus an efficient and economical way to improve biomass yield. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Research of an emergency medical system for mass casualty incidents in Shanghai, China: a system dynamics model

    PubMed Central

    Liu, Xu; Chen, Haiping; Xue, Chen

    2018-01-01

    Objectives Emergency medical systems for mass casualty incidents (EMS-MCIs) are a global issue. However, such studies are extremely scarce in China, which cannot meet the requirement for a rapid decision-support system. This study aims to model EMS-MCIs in Shanghai, to improve mass casualty incident (MCI) rescue efficiency in China, and to provide a possible method for making rapid rescue decisions during MCIs. Methods This study established a system dynamics (SD) model of EMS-MCIs using the Vensim DSS program. Intervention scenarios were designed by adjusting the scale of MCIs, the allocation of ambulances, the allocation of emergency medical staff, and the efficiency of organization and command. Results Mortality increased with the scale of the MCI; the medical rescue capability of hospitals was relatively good, but the efficiency of organization and command was poor and the prehospital time was too long. Mortality declined significantly when ambulances were increased and the efficiency of organization and command was improved; triage and on-site first-aid times were shortened when the availability of emergency medical staff was increased. The effect was most evident when 2,000 people were involved in MCIs, but the influence was very small at a scale of 5,000 people. Conclusion The keys to decreasing the mortality of MCIs were shortening the prehospital time and improving the efficiency of organization and command. For small-scale MCIs, improving the utilization rate of health resources was important in decreasing mortality. For large-scale MCIs, increasing the number of ambulances and emergency medical professionals was the core means of decreasing prehospital time and mortality. For super-large-scale MCIs, increasing health resources was the prerequisite. PMID:29440876

  19. Reduced-order modeling of piezoelectric energy harvesters with nonlinear circuits under complex conditions

    NASA Astrophysics Data System (ADS)

    Xiang, Hong-Jun; Zhang, Zhi-Wei; Shi, Zhi-Fei; Li, Hong

    2018-04-01

    A fully coupled modeling approach is developed for piezoelectric energy harvesters in this work, based on the use of available robust finite element packages and efficient reduced-order modeling techniques. First, the harvester is modeled using a finite element package. The dynamic equilibrium equations of the harvester are rebuilt by extracting the system matrices from the finite element model using built-in commands, without any additional tools. A Krylov subspace-based scheme is then applied to obtain a reduced-order model that improves simulation efficiency while preserving the key features of the harvester. Co-simulation of the reduced-order model with nonlinear energy harvesting circuits is achieved at the system level. Several examples covering both harmonic response and transient response analysis are conducted to validate the present approach. The proposed approach improves simulation efficiency by several orders of magnitude. Moreover, the parameters used in the equivalent circuit model can be conveniently obtained by the proposed eigenvector-based model order reduction technique. More importantly, this work establishes a methodology for modeling piezoelectric energy harvesters with complicated mechanical geometries, nonlinear circuits, and complex input loads. The method can be employed by harvester designers to optimize mechanical structures or by circuit designers to develop novel energy harvesting circuits.
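
    The reduction in the paper operates on system matrices exported from a finite element package; the core Krylov projection step can be illustrated on a generic first-order state-space model (Arnoldi-based toy example, not a piezoelectric harvester model):

```python
import numpy as np

def arnoldi_basis(A, b, k):
    """Orthonormal basis V of the Krylov subspace span{b, Ab, ..., A^(k-1) b}."""
    n = len(b)
    V = np.zeros((n, k))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(1, k):
        w = A @ V[:, j - 1]
        w -= V[:, :j] @ (V[:, :j].T @ w)     # orthogonalise against previous vectors
        V[:, j] = w / np.linalg.norm(w)
    return V

rng = np.random.default_rng(7)
n, k = 200, 10
A = -np.eye(n) + 0.1 * rng.normal(size=(n, n))    # toy stable system matrix
b = rng.normal(size=n)                            # input vector

V = arnoldi_basis(A, b, k)
A_r, b_r = V.T @ A @ V, V.T @ b                   # projected, reduced-order system
print(A_r.shape)   # (10, 10): a small model approximating the dominant input-to-state dynamics
```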

  20. Improved Ant Algorithms for Software Testing Cases Generation

    PubMed Central

    Yang, Shunkun; Xu, Jiaqi

    2014-01-01

    Ant colony optimization (ACO) for software test case generation is a very popular topic in software testing engineering. However, traditional ACO has flaws: pheromone is relatively scarce in the early search, search efficiency is low, the search model is too simple, and the positive feedback mechanism easily produces stagnation and premature convergence. This paper introduces improved ACO variants for software test case generation: an improved local pheromone update strategy, an improved pheromone volatilization coefficient (IPVACO), and an improved global path pheromone update strategy (IGPACO). Finally, we put forward a comprehensively improved ant colony optimization (ACIACO) that combines all three methods. The proposed technique is compared with a random algorithm (RND) and a genetic algorithm (GA) in terms of both efficiency and coverage. The results indicate that the improved method can effectively improve search efficiency, restrain premature convergence, promote case coverage, and reduce the number of iterations. PMID:24883391
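
    The specific improved strategies are defined in the paper; for orientation, the ACS-style local and global pheromone updates that such modifications build on look roughly like the following (generic sketch, illustrative parameters, not the paper's IPVACO/IGPACO rules):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 8                                   # nodes of the search graph (e.g. branch decisions)
tau0, xi, rho = 0.1, 0.1, 0.2           # initial pheromone, local and global evaporation rates
tau = np.full((n, n), tau0)             # pheromone matrix

def local_update(tau, path):
    """Applied while an ant walks: decay toward tau0 to encourage exploration."""
    for i, j in zip(path[:-1], path[1:]):
        tau[i, j] = (1 - xi) * tau[i, j] + xi * tau0
    return tau

def global_update(tau, best_path, best_quality):
    """Applied once per iteration to reinforce the best path (e.g. coverage achieved)."""
    for i, j in zip(best_path[:-1], best_path[1:]):
        tau[i, j] = (1 - rho) * tau[i, j] + rho * best_quality
    return tau

path = list(rng.permutation(n))
tau = local_update(tau, path)
tau = global_update(tau, path, best_quality=1.0)
print(tau.round(3))
```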

  1. Strengthening and Improving Yield Asymmetry of Magnesium Alloys by Second Phase Particle Refinement Under the Guidance of Integrated Computational Materials Engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Dongsheng; Lavender, Curt

    2015-05-08

    Improving yield strength and asymmetry is critical to expand applications of magnesium alloys in industry for higher fuel efficiency and lower CO2 production. Grain refinement is an efficient method for strengthening low-symmetry magnesium alloys, achievable by precipitate refinement. This study provides guidance on how precipitate engineering will improve mechanical properties through grain refinement. Precipitate refinement for improving yield strengths and asymmetry is simulated quantitatively by coupling a stochastic second phase grain refinement model and a modified polycrystalline crystal viscoplasticity φ-model. Using the stochastic second phase grain refinement model, grain size is quantitatively determined from the precipitate size and volume fraction. Yield strengths, yield asymmetry, and deformation behavior are calculated from the modified φ-model. If the precipitate shape and size remain constant, grain size decreases with increasing precipitate volume fraction. If the precipitate volume fraction is kept constant, grain size decreases with decreasing precipitate size during precipitate refinement. Yield strengths increase and the asymmetry approaches one with decreasing grain size, contributed by increasing precipitate volume fraction or decreasing precipitate size.

  2. Improving Distributed Diagnosis Through Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Bregon, Anibal; Daigle, Matthew John; Roychoudhury, Indranil; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino

    2011-01-01

    Complex engineering systems require efficient fault diagnosis methodologies, but centralized approaches do not scale well, and this motivates the development of distributed solutions. This work presents an event-based approach for distributed diagnosis of abrupt parametric faults in continuous systems, by using the structural model decomposition capabilities provided by Possible Conflicts. We develop a distributed diagnosis algorithm that uses residuals computed by extending Possible Conflicts to build local event-based diagnosers based on global diagnosability analysis. The proposed approach is applied to a multitank system, and results demonstrate an improvement in the design of local diagnosers. Since local diagnosers use only a subset of the residuals, and use subsystem models to compute residuals (instead of the global system model), the local diagnosers are more efficient than previously developed distributed approaches.

  3. Developing an Energy Performance Modeling Startup Kit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    none,

    2012-10-01

    In 2011, the NAHB Research Center began assessing the needs and motivations of residential remodelers regarding energy performance remodeling. This report outlines: the current remodeling industry and the role of energy efficiency; gaps and barriers to adding energy efficiency into remodeling; and support needs of professional remodelers to increase sales and projects involving improving home energy efficiency.

  4. Final Report Collaborative Project. Improving the Representation of Coastal and Estuarine Processes in Earth System Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bryan, Frank; Dennis, John; MacCready, Parker

    This project aimed to improve long term global climate simulations by resolving and enhancing the representation of the processes involved in the cycling of freshwater through estuaries and coastal regions. This was a collaborative multi-institution project consisting of physical oceanographers, climate model developers, and computational scientists. It specifically targeted the DOE objectives of advancing simulation and predictive capability of climate models through improvements in resolution and physical process representation. The main computational objectives were: 1. To develop computationally efficient, but physically based, parameterizations of estuary and continental shelf mixing processes for use in an Earth System Model (CESM). 2. To develop a two-way nested regional modeling framework in order to dynamically downscale the climate response of particular coastal ocean regions and to upscale the impact of the regional coastal processes to the global climate in an Earth System Model (CESM). 3. To develop computational infrastructure to enhance the efficiency of data transfer between specific sources and destinations, i.e., a point-to-point communication capability, (used in objective 1) within POP, the ocean component of CESM.

  5. Numerical analysis of an entire ceramic kiln under actual operating conditions for the energy efficiency improvement.

    PubMed

    Milani, Massimo; Montorsi, Luca; Stefani, Matteo; Saponelli, Roberto; Lizzano, Maurizio

    2017-12-01

    The paper focuses on the analysis of an industrial ceramic kiln in order to improve its energy efficiency and thus the fuel consumption and the corresponding carbon dioxide emissions. A lumped and distributed parameter model of the entire system is constructed to simulate the performance of the kiln under actual operating conditions. The model is able to predict accurately the temperature distribution along the different modules of the kiln and the operation of the many natural gas burners employed to provide the required thermal power. Furthermore, the temperature of the tiles is also simulated so that the quality of the final product can be addressed by the modelling. Numerical results are validated against experimental measurements carried out on a real ceramic kiln during regular production operations. The developed numerical model proves to be an efficient tool for investigating different design solutions for the kiln's components. In addition, a number of control strategies for the system working conditions can be simulated and compared in order to define the best trade-off in terms of fuel consumption and product quality. In particular, the paper analyzes the effect of a new burner type characterized by an internal heat recovery capability aimed at improving the energy efficiency of the ceramic kiln. The fuel saving and the related reduction of carbon dioxide emissions were on the order of 10% when compared to the standard burner. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Improved design method of a rotating spool compressor using a comprehensive model and comparison to experimental results

    NASA Astrophysics Data System (ADS)

    Bradshaw, Craig R.; Kemp, Greg; Orosz, Joe; Groll, Eckhard A.

    2017-08-01

    An improvement to the design process of the rotating spool compressor is presented. This improvement utilizes a comprehensive model to explore two working fluids (R410A and R134a) and various displaced volumes over a variety of geometric parameters. The geometric parameters explored consist of the eccentricity ratio and the length-to-diameter ratio. The eccentricity ratio is varied between 0.81 and 0.92 and the length-to-diameter ratio is varied between 0.4 and 3. The key tradeoffs are evaluated, and the results show that there is an optimum eccentricity ratio and length-to-diameter ratio, unique to a particular fluid and displaced volume, that maximizes the model-predicted performance. For R410A, the modeling tool predicts that the overall isentropic efficiency will be maximized at a length-to-diameter ratio that is lower than for R134a. Additionally, the tool predicts that as the displaced volume increases, the overall isentropic efficiency will increase and the ideal length-to-diameter ratio will shift. The results from this study are utilized to develop a basic design for a 141 kW (40 tonsR) capacity prototype spool compressor for light-commercial air-conditioning applications. Results from a prototype compressor constructed based on these efforts are presented. The volumetric efficiency predictions are found to be very accurate, with the overall isentropic efficiency predictions shown to be slightly over-predicted.
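
    For reference, the two figures of merit reported for the prototype are conventionally defined as follows (standard definitions assumed, not specific to the comprehensive spool compressor model):

```latex
% Conventional compressor performance metrics (assumed standard definitions):
\eta_{isen} = \frac{\dot m\,\bigl(h_{2s} - h_{1}\bigr)}{\dot W_{input}},
\qquad
\eta_{vol} = \frac{\dot m}{\rho_{suction}\, V_{disp}\, N}
```

    Here h2s is the enthalpy after isentropic compression to the discharge pressure, h1 the suction enthalpy, ṁ the measured mass flow rate, Vdisp the displaced volume per revolution, and N the shaft speed.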

  7. Optimization on the impeller of a low-specific-speed centrifugal pump for hydraulic performance improvement

    NASA Astrophysics Data System (ADS)

    Pei, Ji; Wang, Wenjie; Yuan, Shouqi; Zhang, Jinfeng

    2016-09-01

    In order to widen the high-efficiency operating range of a low-specific-speed centrifugal pump, an optimization process considering efficiencies under 1.0Qd and 1.4Qd is proposed. Three parameters, namely, the blade outlet width b2, blade outlet angle β2, and blade wrap angle φ, are selected as design variables. Impellers are generated using the optimal Latin hypercube sampling method. The pump efficiencies are calculated using the software CFX 14.5 at the two operating points selected as objectives. Surrogate models are also constructed to analyze the relationship between the objectives and the design variables. Finally, the particle swarm optimization algorithm is applied to the surrogate model to determine the best combination of the impeller parameters. The results show that the performance curve predicted by numerical simulation is in good agreement with the experimental results. Compared with the efficiencies of the original impeller, the hydraulic efficiencies of the optimized impeller are increased by 4.18% and 0.62% under 1.0Qd and 1.4Qd, respectively. The comparison of the inner flow between the original pump and the optimized one illustrates the improvement in performance. The optimization process can provide a useful reference for performance improvement of other pumps, and even for the reduction of pressure fluctuations.

  8. Peer Review of March 2013 LDV Rebound Report By Small ...

    EPA Pesticide Factsheets

    The regulatory option of encouraging the adoption of advanced technologies for improving vehicle efficiency can result in significant fuel savings and GHG emissions benefits. At the same time, it is possible that some of these benefits might be offset by additional driving that is encouraged by the reduced costs of operating more efficient vehicles. This so-called “rebound effect”, the increased driving that results from an improvement in the energy efficiency of a vehicle, must be determined in order to reliably estimate the overall benefits of GHG regulations for light-duty vehicles. Dr. Ken Small, an economist in the Department of Economics, University of California at Irvine, with contributions by Dr. Kent Hymel, Department of Economics, California State University at Northridge, has developed a methodology to estimate the rebound effect for light-duty vehicles in the U.S. Specifically, rebound is estimated as the change in vehicle miles traveled (VMT) with respect to the change in per-mile fuel costs that can occur, for example, when vehicle operating efficiency is improved. The model analyzes aggregate personal motor-vehicle travel within a simultaneous model of aggregate VMT, fleet size, fuel efficiency, and congestion formation. The purpose of the peer review is to help assure that the methodologies considered by the U.S. EPA for estimating VMT rebound have been thoroughly examined.
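
    In this framework the rebound effect is typically quantified as an elasticity of travel with respect to per-mile fuel cost (stated here in its generic form, not as the exact specification of the Small/Hymel simultaneous-equations model):

```latex
% Generic definition of the VMT rebound effect (assumed form):
\text{rebound} \;\approx\; -\,\frac{\partial \ln(\mathrm{VMT})}{\partial \ln(c)},
\qquad
c = \frac{\text{fuel price}}{\text{fuel economy}} \;\; [\$/\text{mile}]
```

    A positive rebound therefore means that part of the fuel saving from higher efficiency is offset by additional miles driven.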

  9. Hierarchical Probabilistic Inference of the Color-Magnitude Diagram and Shrinkage of Stellar Distance Uncertainties

    NASA Astrophysics Data System (ADS)

    Leistedt, Boris; Hogg, David W.

    2017-12-01

    We present a hierarchical probabilistic model for improving geometric stellar distance estimates using color-magnitude information. This is achieved with a data-driven model of the color-magnitude diagram, not relying on stellar models but instead on the relative abundances of stars in color-magnitude cells, which are inferred from very noisy magnitudes and parallaxes. While the resulting noise-deconvolved color-magnitude diagram can be useful for a range of applications, we focus on deriving improved stellar distance estimates relying on both parallax and photometric information. We demonstrate the efficiency of this approach on the 1.4 million stars of the Gaia TGAS sample that also have AAVSO Photometric All Sky Survey magnitudes. Our hierarchical model has 4 million parameters in total, most of which are marginalized out numerically or analytically. We find that distance estimates are significantly improved for the noisiest parallaxes and densest regions of the color-magnitude diagram. In particular, the average distance signal-to-noise ratio (S/N) and uncertainty improve by 19% and 36%, respectively, with 8% of the objects improving in S/N by a factor greater than 2. This computationally efficient approach fully accounts for both parallax and photometric noise and is a first step toward a full hierarchical probabilistic model of the Gaia data.
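
    The key idea, combining a noisy parallax likelihood with a photometric prior derived from the color-magnitude diagram to shrink the distance uncertainty, can be illustrated for a single star on a 1D grid. This toy calculation is not the paper's hierarchical model: the prior on absolute magnitude is a fixed Gaussian here, and all numbers are invented.

      # Toy 1D illustration of parallax + photometric shrinkage for a single star (NOT
      # the paper's full hierarchical model): a noisy parallax likelihood is combined
      # with a Gaussian prior on absolute magnitude taken from a color-magnitude cell,
      # and the posterior over distance is evaluated on a grid. All numbers are invented.
      import numpy as np

      parallax_obs, parallax_err = 1.0, 0.5        # mas (very noisy parallax)
      m_obs, m_err = 14.0, 0.02                    # apparent magnitude
      M_mean, M_sigma = 4.5, 0.3                   # absolute-magnitude prior from the CMD cell

      d_grid = np.linspace(10.0, 5000.0, 20000)    # distance grid in pc

      # Parallax likelihood: parallax_obs ~ N(1000/d, parallax_err)
      loglike_plx = -0.5 * ((parallax_obs - 1000.0 / d_grid) / parallax_err) ** 2

      # Photometric term: predicted apparent magnitude m = M + 5*log10(d) - 5
      m_pred = M_mean + 5.0 * np.log10(d_grid) - 5.0
      loglike_phot = -0.5 * ((m_obs - m_pred) / np.hypot(m_err, M_sigma)) ** 2

      def summarize(logp):
          p = np.exp(logp - logp.max())
          p /= p.sum()                             # uniform grid, so a plain sum normalizes
          mean = np.sum(d_grid * p)
          sd = np.sqrt(np.sum((d_grid - mean) ** 2 * p))
          return mean, sd

      print("parallax only       : d = %.0f +/- %.0f pc" % summarize(loglike_plx))
      print("parallax + CMD prior: d = %.0f +/- %.0f pc" % summarize(loglike_plx + loglike_phot))
      # The combined posterior is narrower, illustrating the shrinkage of distance
      # uncertainty reported in the abstract.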

  10. Template-Directed Instrumentation Reduces Cost and Improves Efficiency for Total Knee Arthroplasty: An Economic Decision Analysis and Pilot Study.

    PubMed

    McLawhorn, Alexander S; Carroll, Kaitlin M; Blevins, Jason L; DeNegre, Scott T; Mayman, David J; Jerabek, Seth A

    2015-10-01

    Template-directed instrumentation (TDI) for total knee arthroplasty (TKA) may streamline operating room (OR) workflow and reduce costs by preselecting implants and minimizing instrument tray burden. A decision model simulated the economics of TDI. Sensitivity analyses determined thresholds for model variables to ensure TDI success. A clinical pilot was reviewed. The accuracy of preoperative templates was validated, and 20 consecutive primary TKAs were performed using TDI. The model determined that preoperative component size estimation should be accurate to ±1 implant size for 50% of TKAs to implement TDI. The pilot showed that preoperative template accuracy exceeded 97%. There were statistically significant improvements in OR turnover time and in-room time for TDI compared to an historical cohort of TKAs. TDI reduces costs and improves OR efficiency. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Building mental models by dissecting physical models.

    PubMed

    Srivastava, Anveshna

    2016-01-01

    When students build physical models from prefabricated components to learn about model systems, there is an implicit trade-off between the physical degrees of freedom in building the model and the intensity of instructor supervision needed. Models that are too flexible, permitting multiple possible constructions, require greater supervision to ensure focused learning; models that are too constrained require less supervision, but can be constructed mechanically, with little to no conceptual engagement. We propose "model-dissection" as an alternative to "model-building," whereby instructors could make efficient use of supervisory resources, while simultaneously promoting focused learning. We report empirical results from a study conducted with biology undergraduate students, where we demonstrate that asking them to "dissect" out specific conceptual structures from an already built 3D physical model leads to a greater improvement in performance than asking them to build the 3D model from simpler components. Using questionnaires to measure understanding both before and after model-based interventions for two cohorts of students, we find that both the "builders" and the "dissectors" improve in the post-test, but it is the latter group who show statistically significant improvement. These results, in addition to the intrinsic time-efficiency of "model dissection," suggest that it could be a valuable pedagogical tool. © 2015 The International Union of Biochemistry and Molecular Biology.

  12. Efficient wireless power charging of electric vehicle by modifying the magnetic characteristics of the medium

    NASA Astrophysics Data System (ADS)

    Mahmud, Mohammad Hazzaz

    There is growing enthusiasm for electric vehicle (EV) technologies because of their reduced fuel consumption and greenhouse emissions, and particularly for wireless power transfer (WPT) because of its convenience and potential for continuous charging. Numerous research initiatives have targeted WPT systems over the last few decades in an attempt to improve transportation, but WPT systems have long been hampered by problems such as low efficiency, high operating frequency, and long-distance energy transfer. Two ideas are developed in this research to address the two main problems of WPT for electric vehicles: low efficiency due to the large distance between the two coils, and slow charging. In the first phase of the study, a proper model of the coils and cores was required; finite element (FE) modeling was selected. Another part of this study was to create a modified cement that acts as a semi-conductive material covering the transmitting antenna area. A high-frequency wide-band-gap switch is used for transferring a large amount of power in a very short time. Moreover, this research also shows that if cores are added to the transmitter and receiver coils, the output efficiency increases dramatically compared with the core-less transmitter and receiver model. Wireless charging is not restricted to parking lots, since it is planned to be embedded into parking-space or roadway concrete or asphalt; it can therefore also be installed at junctions (behind red lights), stop signs, or any spot where the vehicle might stop for several moments. This technology will become more feasible if the charging time decreases. Therefore, a new model for wireless power transfer is proposed in this study that shows significant improvement. Another aim of this study was to improve the conductivity and permeability of the medium on top of the transmitting antenna so that it can transfer power efficiently to the receiving antenna. A best efficiency of 83% was achieved using this model and medium.

  13. Increasing operating room productivity by duration categories and a newsvendor model.

    PubMed

    Lehtonen, Juha-Matti; Torkki, Paulus; Peltokorpi, Antti; Moilanen, Teemu

    2013-01-01

    Previous studies approach surgery scheduling mainly from a mathematical modeling perspective, which is often hard to apply in a practical environment. The aim of this study is to develop a practical scheduling system that combines the advantages of surgery categorization and the newsvendor model for surgery scheduling. The research was carried out in a Finnish orthopaedic specialist centre that performs only joint replacement surgery. Four surgery categorization scenarios were defined and their productivity analyzed by simulation and the newsvendor model. Detailed analyses of surgery durations and the use of more accurate case categories and their combinations in scheduling improved OR productivity by 11.3 percent compared to the base case. Planning for one OR team to work longer led to a remarkable decrease in scheduling inefficiency. In surgical services, productivity and cost-efficiency can be improved by utilizing historical data in case scheduling and by increasing flexibility in personnel management. The study increases the understanding of practical scheduling methods used to improve efficiency in surgical services.
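
    The newsvendor logic behind this kind of scheduling can be written down in a few lines: the planned OR session length is set at the critical-ratio quantile of the case-duration distribution, trading the cost of overtime against the cost of idle staffed time. The cost figures and the normal-duration assumption below are illustrative, not the study's values.

      # Newsvendor logic for planning a reserved OR session: choose the planned length
      # q* at the critical-ratio quantile of the day's total case-duration distribution,
      # balancing overtime cost against idle staffed time. The cost figures and the
      # normal-duration assumption are illustrative, not the study's values.
      from scipy.stats import norm

      mu, sigma = 420.0, 45.0    # total duration of one OR day's cases (minutes), assumed normal

      c_under = 20.0             # cost per minute of overtime (too little planned capacity)
      c_over = 6.0               # cost per minute of idle staffed OR time (too much capacity)

      critical_ratio = c_under / (c_under + c_over)
      q_star = norm.ppf(critical_ratio, loc=mu, scale=sigma)
      print(f"critical ratio = {critical_ratio:.2f}, plan {q_star:.0f} minutes of OR time")
      # Tighter duration categories shrink sigma within each category, moving q* closer
      # to mu, which is the mechanism behind the productivity gain reported above.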

  14. a Quadtree Organization Construction and Scheduling Method for Urban 3d Model Based on Weight

    NASA Astrophysics Data System (ADS)

    Yao, C.; Peng, G.; Song, Y.; Duan, M.

    2017-09-01

    The increasing precision and data volume of urban 3D models place higher demands on the real-time rendering of digital city models. Improving the organization, management and scheduling of 3D model data in a 3D digital city can improve rendering effectiveness and efficiency. Taking the complexity of urban models into account, this paper proposes a weight-based quadtree construction and scheduled-rendering method for urban 3D models. The urban 3D model is divided into different rendering weights according to certain rules, and quadtree construction and scheduled rendering are performed according to these weights. An algorithm that extracts bounding boxes from the model drawing primitives is also proposed to generate LOD models automatically. Using the proposed algorithm, a 3D urban planning and management software package was developed; in practice, the algorithm has proved efficient and feasible, with the rendering frame rate of both large and small scenes stable at around 25 frames per second.
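
    A minimal sketch of what a weight-driven quadtree for scene scheduling might look like is given below; the node capacity, weight values and query logic are assumptions for illustration, not the authors' implementation.

      # Minimal sketch of a weight-based quadtree for scheduling urban-model rendering:
      # objects carry a rendering weight, crowded cells are subdivided, and a view query
      # returns the visible objects sorted by weight so important models render first.
      # Node capacity, weights and query logic are illustrative assumptions.
      import random
      from dataclasses import dataclass, field

      @dataclass
      class Model3D:
          x: float
          y: float
          weight: float                      # rendering weight (e.g. landmark > ordinary block)

      @dataclass
      class QuadNode:
          x0: float
          y0: float
          x1: float
          y1: float
          capacity: int = 8
          items: list = field(default_factory=list)
          children: list = field(default_factory=list)

          def insert(self, m):
              if not (self.x0 <= m.x < self.x1 and self.y0 <= m.y < self.y1):
                  return False
              if not self.children and len(self.items) < self.capacity:
                  self.items.append(m)
                  return True
              if not self.children:
                  self._subdivide()
              return any(child.insert(m) for child in self.children)

          def _subdivide(self):
              mx, my = (self.x0 + self.x1) / 2, (self.y0 + self.y1) / 2
              self.children = [QuadNode(self.x0, self.y0, mx, my), QuadNode(mx, self.y0, self.x1, my),
                               QuadNode(self.x0, my, mx, self.y1), QuadNode(mx, my, self.x1, self.y1)]
              for m in self.items:                       # push existing items down to the children
                  any(child.insert(m) for child in self.children)
              self.items = []

          def query(self, vx0, vy0, vx1, vy1, out=None):
              """Collect models inside the view rectangle for weight-ordered rendering."""
              out = [] if out is None else out
              if self.x1 < vx0 or self.x0 > vx1 or self.y1 < vy0 or self.y0 > vy1:
                  return out                             # node does not overlap the view
              out.extend(m for m in self.items if vx0 <= m.x <= vx1 and vy0 <= m.y <= vy1)
              for child in self.children:
                  child.query(vx0, vy0, vx1, vy1, out)
              return out

      random.seed(0)
      root = QuadNode(0.0, 0.0, 1000.0, 1000.0)
      for _ in range(200):
          root.insert(Model3D(random.uniform(0, 1000), random.uniform(0, 1000), random.random()))
      visible = sorted(root.query(100, 100, 400, 400), key=lambda m: m.weight, reverse=True)
      print(f"{len(visible)} visible models scheduled, heaviest weights rendered first")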

  15. Risk Profiling May Improve Lung Cancer Screening

    Cancer.gov

    A new modeling study suggests that individualized, risk-based selection of ever-smokers for lung cancer screening may prevent more lung cancer deaths and improve the effectiveness and efficiency of screening compared with current screening recommendations.

  16. Engagement with the auditory processing system during targeted auditory cognitive training mediates changes in cognitive outcomes in individuals with schizophrenia.

    PubMed

    Biagianti, Bruno; Fisher, Melissa; Neilands, Torsten B; Loewy, Rachel; Vinogradov, Sophia

    2016-11-01

    Individuals with schizophrenia who engage in targeted cognitive training (TCT) of the auditory system show generalized cognitive improvements. The high degree of variability in cognitive gains may be due to individual differences in the level of engagement of the underlying neural system target. 131 individuals with schizophrenia underwent 40 hours of TCT. We identified target engagement of auditory system processing efficiency by modeling subject-specific trajectories of auditory processing speed (APS) over time. Lowess analysis, mixed-models repeated-measures analysis, and latent growth curve modeling were used to examine whether APS trajectories were moderated by age and illness duration, and mediated improvements in cognitive outcome measures. We observed significant improvements in APS from baseline to 20 hours of training (initial change), followed by a flat APS trajectory (plateau) at subsequent time-points. Participants showed interindividual variability in the steepness of the initial APS change and in the APS plateau achieved and sustained between 20 and 40 hours. We found that participants who achieved the fastest APS plateau showed the greatest transfer effects to untrained cognitive domains. There is a significant association between an individual's ability to generate and sustain auditory processing efficiency and their degree of cognitive improvement after TCT, independent of baseline neurocognition. APS plateau may therefore represent a behavioral measure of target engagement mediating treatment response. Future studies should examine the optimal plateau of auditory processing efficiency required to induce significant cognitive improvements, in the context of interindividual differences in neural plasticity and sensory system efficiency that characterize schizophrenia. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  17. A new cooperative MIMO scheme based on SM for energy-efficiency improvement in wireless sensor network.

    PubMed

    Peng, Yuyang; Choi, Jaeho

    2014-01-01

    Improving the energy efficiency of wireless sensor networks (WSN) has attracted considerable attention. The multiple-input multiple-output (MIMO) technique has been proven to be a good candidate for improving energy efficiency, but it may not be feasible in WSNs because of the size limitation of the sensor node. As a solution, the cooperative multiple-input multiple-output (CMIMO) technique overcomes this constraint and shows dramatically improved performance. In this paper, a new CMIMO scheme based on the spatial modulation (SM) technique, named CMIMO-SM, is proposed for energy-efficiency improvement. We first establish the system model of CMIMO-SM. Based on this model, the transmission approach is introduced graphically. In order to evaluate the performance of the proposed scheme, a detailed analysis of the energy consumption per bit of the proposed scheme compared with conventional CMIMO is presented. Guided by this new scheme, we then extend the proposed CMIMO-SM to a multihop clustered WSN to further improve energy efficiency by finding an optimal hop-length; the traditional equidistant-hop scheme is used for comparison. Results from the simulations and numerical experiments indicate that significant savings in total energy consumption can be achieved by using the proposed scheme. Combining the proposed scheme with monitoring sensor nodes can provide good performance in arbitrarily deployed WSNs, such as forest fire detection systems.

  18. Analyzing whether countries are equally efficient at improving longevity for men and women.

    PubMed

    Barthold, Douglas; Nandi, Arijit; Mendoza Rodríguez, José M; Heymann, Jody

    2014-11-01

    We examined the efficiency of country-specific health care spending in improving life expectancies for men and women. We estimated efficiencies of health care spending for 27 Organisation for Economic Co-operation and Development (OECD) countries during the period 1991 to 2007 using multivariable regression models, including country fixed-effects and controlling for time-varying levels of national social expenditures, economic development, and health behaviors. Findings indicated robust differences in health-spending efficiency. A 1% annual increase in health expenditures was associated with percent changes in life expectancy ranging from 0.020 in the United States (95% confidence interval [CI] = 0.008, 0.032) to 0.121 in Germany (95% CI = 0.099, 0.143). Health-spending increases were associated with greater life expectancy improvements for men than for women in nearly every OECD country. This is the first study to our knowledge to estimate the effect of country-specific health expenditures on life expectancies of men and women. Future work understanding the determinants of these differences has the potential to improve the overall efficiency and equity of national health systems.
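
    The estimation strategy described here, a country fixed-effects regression in which the coefficient on log health spending gives the percent change in life expectancy per 1% spending increase, can be reduced to a small worked example. The panel below is synthetic and omits the social-spending, economic and behavioral controls used in the study.

      # Stripped-down illustration of the estimation strategy: a country fixed-effects,
      # log-log panel regression of life expectancy on health expenditure, so the slope
      # is the percent change in life expectancy per 1% spending increase. The data are
      # synthetic, and the real study adds social-spending, economic and behavioral controls.
      import numpy as np

      rng = np.random.default_rng(2)
      n_countries, n_years = 27, 17
      country = np.repeat(np.arange(n_countries), n_years)

      # Synthetic panel: country-specific spending levels plus a common upward trend.
      log_spend = (rng.normal(8.0, 0.3, n_countries)[country]
                   + 0.03 * np.tile(np.arange(n_years), n_countries)
                   + rng.normal(0, 0.05, country.size))
      true_elasticity = 0.06
      log_life_exp = (np.log(78) + rng.normal(0, 0.02, n_countries)[country]
                      + true_elasticity * log_spend + rng.normal(0, 0.002, country.size))

      def demean_by(group, x):
          """Within transformation: subtract each country's own mean (the fixed effect)."""
          out = x.astype(float).copy()
          for g in np.unique(group):
              out[group == g] -= x[group == g].mean()
          return out

      y_w = demean_by(country, log_life_exp)
      x_w = demean_by(country, log_spend)
      beta_hat = (x_w @ y_w) / (x_w @ x_w)
      print(f"estimated elasticity of life expectancy w.r.t. health spending: {beta_hat:.3f}")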

  19. Thin film solar cells grown by organic vapor phase deposition

    NASA Astrophysics Data System (ADS)

    Yang, Fan

    Organic solar cells have the potential to provide low-cost photovoltaic devices as a clean and renewable energy resource. In this thesis, we focus on understanding the energy conversion process in organic solar cells, and improving the power conversion efficiencies via controlled growth of organic nanostructures. First, we explain the unique optical and electrical properties of organic materials used for photovoltaics, and the excitonic energy conversion process in donor-acceptor heterojunction solar cells, which places several limits on their power conversion efficiency. Then, strategies for improving exciton diffusion and carrier collection are analyzed using dynamical Monte Carlo models for several nanostructure morphologies. Organic vapor phase deposition is used for controlling materials crystallization and film morphology. We improve the exciton diffusion efficiency while maintaining good carrier conduction in a bulk heterojunction solar cell. Further efficiency improvement is obtained in a novel nanocrystalline network structure with a thick absorbing layer, leading to the demonstration of an organic solar cell with 4.6% efficiency. In addition, solar cells using simultaneously active heterojunctions with broad spectral response are presented. We also analyze the efficiency limits of single and multiple junction organic solar cells, and discuss the challenges facing their practical implementation.

  20. Scaling production and improving efficiency in DEA: an interactive approach

    NASA Astrophysics Data System (ADS)

    Rödder, Wilhelm; Kleine, Andreas; Dellnitz, Andreas

    2017-10-01

    DEA models help a DMU to detect its (in-)efficiency and to improve its activities, if necessary. Efficiency is only one economic aim for a decision-maker, however; up- or downsizing might be a second one. Improving efficiency is the main topic in DEA; the long-term strategy towards the right production size should attract our attention as well. The management of a DMU does not always focus primarily on technical efficiency; it may instead be interested in gaining scale effects. In this paper, a formula for returns to scale (RTS) is developed, and this formula is applicable even to interior points of the technology. Technically and scale-inefficient DMUs in particular need sophisticated instruments to improve their situation. Considering RTS as well as efficiency, we give advice for each DMU on finding an economically reliable path from its actual situation to better activities and finally, perhaps, to its most productive scale size (mpss). For realizing this path, we propose an interactive algorithm, thus harmonizing the scientific findings and the interests of the management. Small numerical examples illustrate such paths for selected DMUs; an empirical application in theatre management completes the contribution.
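
    For readers unfamiliar with DEA, the efficiency score of a single DMU is obtained from a small linear program. The sketch below solves the textbook input-oriented CCR envelopment model with scipy; it is background for the abstract above, not the RTS formula or interactive algorithm that the paper itself develops, and the input/output data are made up.

      # Standard input-oriented CCR DEA model solved as a linear program for each DMU
      # (the textbook envelopment form, not this paper's returns-to-scale formula).
      # Inputs X and outputs Y are small synthetic examples.
      import numpy as np
      from scipy.optimize import linprog

      X = np.array([[4.0, 140], [7.0, 210], [8.0, 160], [4.0, 110], [2.0, 80]])   # inputs  (n_dmu, m)
      Y = np.array([[2.0, 28],  [3.0, 22],  [4.0, 25],  [3.0, 15],  [1.0, 10]])   # outputs (n_dmu, s)
      n, m = X.shape
      s = Y.shape[1]

      def ccr_efficiency(k):
          """min theta  s.t.  X^T lam <= theta * x_k,  Y^T lam >= y_k,  lam >= 0."""
          # decision vector z = [theta, lam_1..lam_n]
          c = np.r_[1.0, np.zeros(n)]
          A_in = np.c_[-X[k], X.T]                 # X^T lam - theta * x_k <= 0
          A_out = np.c_[np.zeros(s), -Y.T]         # -Y^T lam <= -y_k
          A_ub = np.vstack([A_in, A_out])
          b_ub = np.r_[np.zeros(m), -Y[k]]
          res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1), method="highs")
          return res.x[0]

      for k in range(n):
          print(f"DMU {k}: CCR efficiency = {ccr_efficiency(k):.3f}")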

  1. Enhancing population pharmacokinetic modeling efficiency and quality using an integrated workflow.

    PubMed

    Schmidt, Henning; Radivojevic, Andrijana

    2014-08-01

    Population pharmacokinetic (popPK) analyses are at the core of Pharmacometrics and need to be performed regularly. Although these analyses are relatively standard, a large variability can be observed in both the time (efficiency) and the way they are performed (quality). Main reasons for this variability include the level of experience of a modeler, personal preferences and tools. This paper aims to examine how the process of popPK model building can be supported in order to increase its efficiency and quality. The presented approach to the conduct of popPK analyses is centered around three key components: (1) identification of the most common and important popPK model features, (2) required information content and formatting of the data for modeling, and (3) methodology, workflow and workflow-supporting tools. This approach has been used in several popPK modeling projects and a documented example is provided in the supplementary material. Efficiency of model building is improved by avoiding repetitive coding and other labor-intensive tasks and by putting the emphasis on a fit-for-purpose model. Quality is improved by ensuring that the workflow and tools are in alignment with a popPK modeling guidance established within an organization. The main conclusion of this paper is that workflow-based approaches to popPK modeling are feasible and have significant potential to ameliorate its various aspects. However, the implementation of such an approach in a pharmacometric organization requires openness towards innovation and change, the key ingredient for the evolution of integrative and quantitative drug development in the pharmaceutical industry.

  2. Design and in vivo evaluation of more efficient and selective deep brain stimulation electrodes

    NASA Astrophysics Data System (ADS)

    Howell, Bryan; Huynh, Brian; Grill, Warren M.

    2015-08-01

    Objective. Deep brain stimulation (DBS) is an effective treatment for movement disorders and a promising therapy for treating epilepsy and psychiatric disorders. Despite its clinical success, the efficiency and selectivity of DBS can be improved. Our objective was to design electrode geometries that increased the efficiency and selectivity of DBS. Approach. We coupled computational models of electrodes in brain tissue with cable models of axons of passage (AOPs), terminating axons (TAs), and local neurons (LNs); we used engineering optimization to design electrodes for stimulating these neural elements; and the model predictions were tested in vivo. Main results. Compared with the standard electrode used in the Medtronic Model 3387 and 3389 arrays, model-optimized electrodes consumed 45-84% less power. Similar gains in selectivity were evident with the optimized electrodes: 50% of parallel AOPs could be activated while reducing activation of perpendicular AOPs from 44 to 48% with the standard electrode to 0-14% with bipolar designs; 50% of perpendicular AOPs could be activated while reducing activation of parallel AOPs from 53 to 55% with the standard electrode to 1-5% with an array of cathodes; and, 50% of TAs could be activated while reducing activation of AOPs from 43 to 100% with the standard electrode to 2-15% with a distal anode. In vivo, both the geometry and polarity of the electrode had a profound impact on the efficiency and selectivity of stimulation. Significance. Model-based design is a powerful tool that can be used to improve the efficiency and selectivity of DBS electrodes.

  3. On the assimilation set-up of ASCAT soil moisture data for improving streamflow catchment simulation

    NASA Astrophysics Data System (ADS)

    Loizu, Javier; Massari, Christian; Álvarez-Mozos, Jesús; Tarpanelli, Angelica; Brocca, Luca; Casalí, Javier

    2018-01-01

    Assimilation of remotely sensed surface soil moisture (SSM) data into hydrological catchment models has been identified as a means to improve streamflow simulations, but reported results vary markedly depending on the particular model, catchment and assimilation procedure used. In this study, the influence of key aspects, such as the type of model, re-scaling technique and SSM observation error considered, was evaluated. To this aim, Advanced SCATterometer (ASCAT) SSM observations were assimilated through the ensemble Kalman filter into two hydrological models of different complexity (namely MISDc and TOPLATS) run on two Mediterranean catchments of similar size (750 km2). Three different re-scaling techniques were evaluated (linear re-scaling, variance matching and cumulative distribution function matching), and SSM observation error values ranging from 0.01% to 20% were considered. Four different efficiency measures were used for evaluating the results. Increases in Nash-Sutcliffe efficiency (0.03-0.15) and efficiency indices (10-45%) were obtained, especially when linear re-scaling and observation errors within 4-6% were considered. This study found that there is potential to improve streamflow prediction through data assimilation of remotely sensed SSM in catchments of different characteristics and with hydrological models of different conceptualization schemes, but a careful evaluation of the observation error and of the re-scaling technique set-up utilized is required.
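
    The three re-scaling options compared above can be written compactly; the sketch below uses simple empirical estimates on synthetic soil-moisture series (operationally, the mapping would be fitted over a calibration period before being applied to the assimilated observations).

      # Simple empirical implementations of the three re-scaling options compared above,
      # applied to a synthetic satellite soil-moisture series so that its statistics match
      # the model climatology before assimilation. In practice the mapping is fitted on a
      # calibration period; the series here are placeholders.
      import numpy as np

      rng = np.random.default_rng(3)
      model_sm = np.clip(rng.beta(2, 5, 1000) * 0.5, 0, 1)        # model surface soil moisture
      sat_sm = np.clip(0.2 + 0.6 * rng.beta(2, 3, 1000), 0, 1)    # ASCAT-like retrievals, different climatology

      def linear_rescale(obs, ref):       # regression-based matching of mean and slope
          slope = np.cov(obs, ref, bias=True)[0, 1] / np.var(obs)
          return ref.mean() + slope * (obs - obs.mean())

      def variance_match(obs, ref):       # match mean and standard deviation
          return ref.mean() + (obs - obs.mean()) * ref.std() / obs.std()

      def cdf_match(obs, ref):            # map each observation quantile onto the model quantile
          ranks = np.searchsorted(np.sort(obs), obs, side="right") / obs.size
          return np.quantile(ref, ranks)

      for name, rescale in [("linear", linear_rescale), ("variance", variance_match), ("CDF", cdf_match)]:
          r = rescale(sat_sm, model_sm)
          print(f"{name:>8}: mean {r.mean():.3f} (model {model_sm.mean():.3f}), "
                f"std {r.std():.3f} (model {model_sm.std():.3f})")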

  4. Feature-based Approach in Product Design with Energy Efficiency Consideration

    NASA Astrophysics Data System (ADS)

    Li, D. D.; Zhang, Y. J.

    2017-10-01

    In this paper, a method to measure the energy efficiency and ecological footprint metrics of features is proposed for product design. First, the energy consumption models of various manufacturing features, such as cutting and welding features, are studied. Then, the total energy consumption of a product is modeled and estimated according to its features. Next, feature chains, which combine several sequential features according to the production operation order, are defined and analyzed to calculate a globally optimal solution. The corresponding assessment model is also proposed to estimate their energy efficiency and ecological footprint. Finally, an example is given to validate the proposed approach for the improvement of sustainability.

  5. Efficient multiscale magnetic-domain analysis of iron-core material under mechanical stress

    NASA Astrophysics Data System (ADS)

    Nishikubo, Atsushi; Ito, Shumpei; Mifune, Takeshi; Matsuo, Tetsuji; Kaido, Chikara; Takahashi, Yasuhito; Fujiwara, Koji

    2018-05-01

    For an efficient analysis of magnetization, a partial-implicit solution method is improved using an assembled domain structure model with six-domain mesoscopic particles exhibiting pinning-type hysteresis. The quantitative analysis of non-oriented silicon steel succeeds in predicting the stress dependence of hysteresis loss with computation times greatly reduced by using the improved partial-implicit method. The effect of cell division along the thickness direction is also evaluated.

  6. egs_brachy: a versatile and fast Monte Carlo code for brachytherapy

    NASA Astrophysics Data System (ADS)

    Chamberland, Marc J. P.; Taylor, Randle E. P.; Rogers, D. W. O.; Thomson, Rowan M.

    2016-12-01

    egs_brachy is a versatile and fast Monte Carlo (MC) code for brachytherapy applications. It is based on the EGSnrc code system, enabling simulation of photons and electrons. Complex geometries are modelled using the EGSnrc C++ class library and egs_brachy includes a library of geometry models for many brachytherapy sources, in addition to eye plaques and applicators. Several simulation efficiency enhancing features are implemented in the code. egs_brachy is benchmarked by comparing TG-43 source parameters of three source models to previously published values. 3D dose distributions calculated with egs_brachy are also compared to ones obtained with the BrachyDose code. Well-defined simulations are used to characterize the effectiveness of many efficiency improving techniques, both as an indication of the usefulness of each technique and to find optimal strategies. Efficiencies and calculation times are characterized through single source simulations and simulations of idealized and typical treatments using various efficiency improving techniques. In general, egs_brachy shows agreement within uncertainties with previously published TG-43 source parameter values. 3D dose distributions from egs_brachy and BrachyDose agree at the sub-percent level. Efficiencies vary with radionuclide and source type, number of sources, phantom media, and voxel size. The combined effects of efficiency-improving techniques in egs_brachy lead to short calculation times: simulations approximating prostate and breast permanent implant (both with (2 mm)³ voxels) and eye plaque (with (1 mm)³ voxels) treatments take between 13 and 39 s, on a single 2.5 GHz Intel Xeon E5-2680 v3 processor core, to achieve 2% average statistical uncertainty on doses within the PTV. egs_brachy will be released as free and open source software to the research community.

  7. egs_brachy: a versatile and fast Monte Carlo code for brachytherapy.

    PubMed

    Chamberland, Marc J P; Taylor, Randle E P; Rogers, D W O; Thomson, Rowan M

    2016-12-07

    egs_brachy is a versatile and fast Monte Carlo (MC) code for brachytherapy applications. It is based on the EGSnrc code system, enabling simulation of photons and electrons. Complex geometries are modelled using the EGSnrc C++ class library and egs_brachy includes a library of geometry models for many brachytherapy sources, in addition to eye plaques and applicators. Several simulation efficiency enhancing features are implemented in the code. egs_brachy is benchmarked by comparing TG-43 source parameters of three source models to previously published values. 3D dose distributions calculated with egs_brachy are also compared to ones obtained with the BrachyDose code. Well-defined simulations are used to characterize the effectiveness of many efficiency improving techniques, both as an indication of the usefulness of each technique and to find optimal strategies. Efficiencies and calculation times are characterized through single source simulations and simulations of idealized and typical treatments using various efficiency improving techniques. In general, egs_brachy shows agreement within uncertainties with previously published TG-43 source parameter values. 3D dose distributions from egs_brachy and BrachyDose agree at the sub-percent level. Efficiencies vary with radionuclide and source type, number of sources, phantom media, and voxel size. The combined effects of efficiency-improving techniques in egs_brachy lead to short calculation times: simulations approximating prostate and breast permanent implant (both with (2 mm)³ voxels) and eye plaque (with (1 mm)³ voxels) treatments take between 13 and 39 s, on a single 2.5 GHz Intel Xeon E5-2680 v3 processor core, to achieve 2% average statistical uncertainty on doses within the PTV. egs_brachy will be released as free and open source software to the research community.

  8. Improved system integration for integrated gasification combined cycle (IGCC) systems.

    PubMed

    Frey, H Christopher; Zhu, Yunhua

    2006-03-01

    Integrated gasification combined cycle (IGCC) systems are a promising technology for power generation. They include an air separation unit (ASU), a gasification system, and a gas turbine combined cycle power block, and feature competitive efficiency and lower emissions compared to conventional power generation technology. IGCC systems are not yet in widespread commercial use and opportunities remain to improve system feasibility via improved process integration. A process simulation model was developed for IGCC systems with alternative types of ASU and gas turbine integration. The model is applied to evaluate integration schemes involving nitrogen injection, air extraction, and combinations of both, as well as different ASU pressure levels. The optimal nitrogen injection only case in combination with an elevated pressure ASU had the highest efficiency and power output and approximately the lowest emissions per unit output of all cases considered, and thus is a recommended design option. The optimal combination of air extraction coupled with nitrogen injection had slightly worse efficiency, power output, and emissions than the optimal nitrogen injection only case. Air extraction alone typically produced lower efficiency, lower power output, and higher emissions than all other cases. The recommended nitrogen injection only case is estimated to provide annualized cost savings compared to a nonintegrated design. Process simulation modeling is shown to be a useful tool for evaluation and screening of technology options.

  9. Alternative Loglinear Smoothing Models and Their Effect on Equating Function Accuracy. Research Report. ETS RR-09-48

    ERIC Educational Resources Information Center

    Moses, Tim; Holland, Paul

    2009-01-01

    This simulation study evaluated the potential of alternative loglinear smoothing strategies for improving equipercentile equating function accuracy. These alternative strategies use cues from the sample data to make automatable and efficient improvements to model fit, either through the use of indicator functions for fitting large residuals or by…

  10. Applying knowledge compilation techniques to model-based reasoning

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.

    1991-01-01

    Researchers in the area of knowledge compilation are developing general purpose techniques for improving the efficiency of knowledge-based systems. In this article, an attempt is made to define knowledge compilation, to characterize several classes of knowledge compilation techniques, and to illustrate how some of these techniques can be applied to improve the performance of model-based reasoning systems.

  11. Computationally efficient method for Fourier transform of highly chirped pulses for laser and parametric amplifier modeling.

    PubMed

    Andrianov, Alexey; Szabo, Aron; Sergeev, Alexander; Kim, Arkady; Chvykov, Vladimir; Kalashnikov, Mikhail

    2016-11-14

    We developed an improved approach to calculate the Fourier transform of signals with arbitrarily large quadratic phase, which can be efficiently implemented in numerical simulations utilizing the fast Fourier transform. The proposed algorithm significantly reduces the computational cost of the Fourier transform of a highly chirped and stretched pulse by splitting it into two separate transforms of almost transform-limited pulses, thereby reducing the required grid size roughly by the pulse stretching factor. The application of our improved Fourier transform algorithm in the split-step method for numerical modeling of CPA and OPCPA shows excellent agreement with standard algorithms.

  12. Theoretical analysis of improved efficiency of silicon-wafer solar cells with textured nanotriangular grating structure

    NASA Astrophysics Data System (ADS)

    Zhang, Yaoju; Zheng, Jun; Zhao, Xuesong; Ruan, Xiukai; Cui, Guihua; Zhu, Haiyong; Dai, Yuxing

    2018-03-01

    A practical model of crystalline silicon-wafer solar cells is proposed in order to enhance light absorption and improve the conversion efficiency of silicon solar cells. In the model, the front surface of the silicon photovoltaic film is designed to be a textured-triangular-grating (TTG) structure, and the ITO contact film and the antireflection coating (ARC) of glass are coated on the TTG surface of the silicon solar cells. The optical absorption spectra of the solar cells are simulated by applying the finite difference time domain method. Electrical parameters of the solar cells are calculated using two models, with and without carrier loss. The effect of the structure parameters on the performance of the TTG cell is discussed in detail. It is found that the thickness (tg) of the ARC, the period (p) of the grating, and the base angle (θ) of the triangle have a crucial influence on the conversion efficiency. The optimal structure of the TTG cell is designed. The TTG solar cell produces higher efficiency over a wide range of solar incidence angles, and the average efficiency of the optimal TTG cell over the 7:30-16:30 period of the day is 8% higher than that of the optimal planar solar cell. In addition, the study shows that bulk carrier recombination influences the conversion efficiency of the cell: the conversion efficiency of an actual solar cell with carrier recombination is reduced by 20.0% relative to that of an ideal cell without carrier recombination.

  13. Hong Kong Hospital Authority resource efficiency evaluation: Via a novel DEA-Malmquist model and Tobit regression model.

    PubMed

    Guo, Hainan; Zhao, Yang; Niu, Tie; Tsui, Kwok-Leung

    2017-01-01

    The Hospital Authority (HA) is a statutory body managing all the public hospitals and institutes in Hong Kong (HK). In recent decades, the Hong Kong Hospital Authority (HKHA) has been making efforts to improve its healthcare services, but there still exist some problems, such as unfair resource allocation and poor management, as reported by the Hong Kong medical legislative committee. One critical consequence of these problems is the low healthcare efficiency of hospitals, leading to low satisfaction among patients. Moreover, HKHA also suffers from the conflict between limited resources and growing demand. An effective evaluation of HA is important for resource planning and healthcare decision making. In this paper, we propose a two-phase method to evaluate HA efficiency for reducing healthcare expenditure and improving healthcare service. Specifically, in Phase I, we measure the HKHA efficiency changes from 2000 to 2013 by applying a novel DEA-Malmquist index with undesirable factors. In Phase II, we further explore the impact of some exogenous factors (e.g., population density) on HKHA efficiency by a Tobit regression model. Empirical results show that there are significant differences between the efficiencies of different hospitals and clusters. In particular, it is found that public hospitals serving richer districts have relatively lower efficiency. To a certain extent, this reflects the socioeconomic reality in HK that people with better economic conditions prefer to receive higher-quality service from private hospitals.

  14. Hong Kong Hospital Authority resource efficiency evaluation: Via a novel DEA-Malmquist model and Tobit regression model

    PubMed Central

    Guo, Hainan; Zhao, Yang; Niu, Tie; Tsui, Kwok-Leung

    2017-01-01

    The Hospital Authority (HA) is a statutory body managing all the public hospitals and institutes in Hong Kong (HK). In recent decades, the Hong Kong Hospital Authority (HKHA) has been making efforts to improve its healthcare services, but there still exist some problems, such as unfair resource allocation and poor management, as reported by the Hong Kong medical legislative committee. One critical consequence of these problems is the low healthcare efficiency of hospitals, leading to low satisfaction among patients. Moreover, HKHA also suffers from the conflict between limited resources and growing demand. An effective evaluation of HA is important for resource planning and healthcare decision making. In this paper, we propose a two-phase method to evaluate HA efficiency for reducing healthcare expenditure and improving healthcare service. Specifically, in Phase I, we measure the HKHA efficiency changes from 2000 to 2013 by applying a novel DEA-Malmquist index with undesirable factors. In Phase II, we further explore the impact of some exogenous factors (e.g., population density) on HKHA efficiency by a Tobit regression model. Empirical results show that there are significant differences between the efficiencies of different hospitals and clusters. In particular, it is found that public hospitals serving richer districts have relatively lower efficiency. To a certain extent, this reflects the socioeconomic reality in HK that people with better economic conditions prefer to receive higher-quality service from private hospitals. PMID:28886087

  15. Process-aware EHR BPM systems: two prototypes and a conceptual framework.

    PubMed

    Webster, Charles; Copenhaver, Mark

    2010-01-01

    Systematic methods to improve the effectiveness and efficiency of electronic health record-mediated processes will be key to EHRs playing an important role in the positive transformation of healthcare. Business process management (BPM) systematically optimizes process effectiveness, efficiency, and flexibility. Therefore BPM offers relevant ideas and technologies. We provide a conceptual model based on EHR productivity and negative feedback control that links EHR and BPM domains, describe two EHR BPM prototype modules, and close with the argument that typical EHRs must become more process-aware if they are to take full advantage of BPM ideas and technology. A prediction: Future extensible clinical groupware will coordinate delivery of EHR functionality to teams of users by combining modular components with executable process models whose usability (effectiveness, efficiency, and user satisfaction) will be systematically improved using business process management techniques.

  16. Improved community model for social networks based on social mobility

    NASA Astrophysics Data System (ADS)

    Lu, Zhe-Ming; Wu, Zhen; Luo, Hao; Wang, Hao-Xian

    2015-07-01

    This paper proposes an improved community model for social networks based on social mobility. The relationship between the group distribution and the community size is investigated in terms of communication rate and turnover rate. The degree distributions, clustering coefficients, average distances and diameters of networks are analyzed. Experimental results demonstrate that the proposed model possesses the small-world property and can reproduce social networks effectively and efficiently.

  17. QML-AiNet: An immune network approach to learning qualitative differential equation models

    PubMed Central

    Pang, Wei; Coghill, George M.

    2015-01-01

    In this paper, we explore the application of Opt-AiNet, an immune network approach for search and optimisation problems, to learning qualitative models in the form of qualitative differential equations. The Opt-AiNet algorithm is adapted to qualitative model learning problems, resulting in the proposed system QML-AiNet. The potential of QML-AiNet to address the scalability and multimodal search space issues of qualitative model learning has been investigated. More importantly, to further improve the efficiency of QML-AiNet, we also modify the mutation operator according to the features of discrete qualitative model space. Experimental results show that the performance of QML-AiNet is comparable to QML-CLONALG, a QML system using the clonal selection algorithm (CLONALG). More importantly, QML-AiNet with the modified mutation operator can significantly improve the scalability of QML and is much more efficient than QML-CLONALG. PMID:25648212

  18. QML-AiNet: An immune network approach to learning qualitative differential equation models.

    PubMed

    Pang, Wei; Coghill, George M

    2015-02-01

    In this paper, we explore the application of Opt-AiNet, an immune network approach for search and optimisation problems, to learning qualitative models in the form of qualitative differential equations. The Opt-AiNet algorithm is adapted to qualitative model learning problems, resulting in the proposed system QML-AiNet. The potential of QML-AiNet to address the scalability and multimodal search space issues of qualitative model learning has been investigated. More importantly, to further improve the efficiency of QML-AiNet, we also modify the mutation operator according to the features of discrete qualitative model space. Experimental results show that the performance of QML-AiNet is comparable to QML-CLONALG, a QML system using the clonal selection algorithm (CLONALG). More importantly, QML-AiNet with the modified mutation operator can significantly improve the scalability of QML and is much more efficient than QML-CLONALG.

  19. Theoretical and software considerations for nonlinear dynamic analysis

    NASA Technical Reports Server (NTRS)

    Schmidt, R. J.; Dodds, R. H., Jr.

    1983-01-01

    In the finite element method for structural analysis, it is generally necessary to discretize the structural model into a very large number of elements to accurately evaluate displacements, strains, and stresses. As the complexity of the model increases, the number of degrees of freedom can easily exceed the capacity of present-day software systems. Improvements to structural analysis software, including more efficient use of existing hardware and improved structural modeling techniques, are discussed. One modeling technique that is used successfully in static linear and nonlinear analysis is multilevel substructuring. This research extends the use of multilevel substructure modeling to include dynamic analysis and defines the requirements for a general purpose software system capable of efficient nonlinear dynamic analysis. The multilevel substructuring technique is presented, the analytical formulations and computational procedures for dynamic analysis and nonlinear mechanics are reviewed, and an approach to the design and implementation of a general purpose structural software system is presented.

  20. Research on the Integration of Bionic Geometry Modeling and Simulation of Robot Foot Based on Characteristic Curve

    NASA Astrophysics Data System (ADS)

    He, G.; Zhu, H.; Xu, J.; Gao, K.; Zhu, D.

    2017-09-01

    Shape bionics is an important aspect of research on bionic robots, and its implementation cannot be separated from the shape modeling and numerical simulation of the bionic object, which are tedious and time-consuming. In order to improve the efficiency of bionic shape design, the feet of animals living in soft soil and swamp environments are taken as bionic objects, and characteristic skeleton curves, section curves, joint rotation variables, position and other parameters are used to describe the shape and position information of the bionic object's sole, toes and flipper. The geometric model of the bionic object is established by parameterizing the characteristic curves and variables. Based on this, an integration framework covering parametric modeling, finite element modeling, dynamic analysis and post-processing of the sinking process in soil is proposed in this paper. Examples of a bionic ostrich foot and a bionic duck foot are also given. The parametric modeling and integration technique enables rapid design improvement based on the bionic object, greatly improves the efficiency and quality of robot foot bionic design, and has important practical significance for improving the bionic design of the robot foot's shape and structure.

  1. Improvement of productivity in low volume production industry layout by using witness simulation software

    NASA Astrophysics Data System (ADS)

    Jaffrey, V.; Mohamed, N. M. Z. N.; Rose, A. N. M.

    2017-10-01

    In almost all manufacturing industries, increased productivity and better efficiency of the production line are the most important goals. Most factories, especially small-scale ones, have little awareness or knowledge of manufacturing system optimization and rely on traditional management methods. Commonly identified problems in such factories are high labour idle time and low production output. This study was carried out in a Small and Medium Enterprise (SME) low-volume production company. Data were collected and problems affecting productivity and efficiency were identified. Witness simulation software is used to simulate the layout, with the output focused on improving the layout in terms of productivity and efficiency. The layout is rearranged to reduce the travel time between workstations. The improved layout is then modelled, and machine and labour statistics are collected for both the original and improved layouts. Productivity and efficiency are calculated for both layouts and then compared.

  2. Increasing radiology capacity within the lung cancer pathway: centralised work-based support for trainee chest X-ray reporting radiographers.

    PubMed

    Woznitza, Nick; Steele, Rebecca; Piper, Keith; Burke, Stephen; Rowe, Susan; Bhowmik, Angshu; Maughn, Sue; Springett, Kate

    2018-05-27

    Diagnostic capacity and time to diagnosis are frequently identified as a barrier to improving cancer patient outcomes. Maximising the contribution of the medical imaging workforce, including reporting radiographers, is one way to improve service delivery. An efficient and effective centralised model of workplace training support was designed for a cohort of trainee chest X-ray (CXR) reporting radiographers. A comprehensive schedule of tutorials was planned and aligned with the curriculum of a post-graduate certificate in CXR reporting. Trainees were supported via a hub and spoke model (centralised training model), with the majority of education provided by a core group of experienced CXR reporting radiographers. Trainee and departmental feedback on the model was obtained using an online survey. Fourteen trainees were recruited from eight National Health Service Trusts across London. Significant efficiencies of scale were possible with centralised support (48 h) compared to traditional workplace support (348 h). Trainee and manager feedback overall was positive. Trainees and managers both reported good trainee support, translation of learning to practice and increased confidence. Logistics, including trainee travel and release, were identified as areas for improvement. Centralised workplace training support is an effective and efficient method to create sustainable diagnostic capacity and support improvements in the lung cancer pathway. © 2018 The Authors. Journal of Medical Radiation Sciences published by John Wiley & Sons Australia, Ltd on behalf of Australian Society of Medical Imaging and Radiation Therapy and New Zealand Institute of Medical Radiation Technology.

  3. An Improved Nested Sampling Algorithm for Model Selection and Assessment

    NASA Astrophysics Data System (ADS)

    Zeng, X.; Ye, M.; Wu, J.; WANG, D.

    2017-12-01

    The multimodel strategy is a general approach for treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models, and each alternative conceptual model is assigned a weight that represents its plausibility. In the Bayesian framework, the posterior model weight is computed as the product of the model prior weight and the marginal likelihood (also termed the model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. The NSE searches the parameter space gradually from low-likelihood to high-likelihood regions, and this evolution is carried out iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampling algorithm for high-dimensional or complex likelihood functions. To improve the performance of NSE, a more efficient and elaborate sampling algorithm, DREAMzs, can be integrated into the local sampling step. In addition, to overcome the computational burden of the large number of repeated model executions required for marginal likelihood estimation, an adaptive sparse-grid stochastic collocation method is used to build surrogates for the original groundwater model.
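
    To make the nested sampling estimator concrete, the toy loop below computes the evidence of a 2D Gaussian likelihood under a uniform prior. The constrained local step is a naive random-walk move, standing in for the M-H or DREAMzs samplers discussed above; for this problem the analytic evidence is known, so the estimate can be checked.

      # Toy nested-sampling estimate of the marginal likelihood (evidence) for a 2D
      # Gaussian likelihood under a uniform prior on [-5, 5]^2. The constrained local
      # step is a naive random walk, standing in for the M-H / DREAMzs samplers
      # discussed above. The analytic evidence here is 1/(2*box)^2, for comparison.
      import numpy as np

      rng = np.random.default_rng(4)
      sigma, box = 1.0, 5.0

      def loglike(theta):
          return -0.5 * np.sum(theta**2, axis=-1) / sigma**2 - np.log(2 * np.pi * sigma**2)

      n_live, n_iter = 400, 4000
      live = rng.uniform(-box, box, size=(n_live, 2))
      live_logL = loglike(live)

      logZ = -np.inf
      log_width = np.log(1.0 - np.exp(-1.0 / n_live))      # prior mass of the first shell
      for i in range(n_iter):
          worst = np.argmin(live_logL)
          logL_star = live_logL[worst]
          logZ = np.logaddexp(logZ, log_width + logL_star)  # accumulate evidence
          # Replace the worst point by a prior sample constrained to L > L*:
          # a short random walk started from another surviving live point.
          start = (worst + 1 + rng.integers(n_live - 1)) % n_live
          theta = live[start].copy()
          for _ in range(30):
              prop = np.clip(theta + rng.normal(0, 0.5, 2), -box, box)
              if loglike(prop) > logL_star:
                  theta = prop
          live[worst], live_logL[worst] = theta, loglike(theta)
          log_width -= 1.0 / n_live                         # remaining prior volume shrinks

      # Add the contribution of the remaining live points, then compare to the truth.
      logZ = np.logaddexp(logZ, -n_iter / n_live + np.log(np.mean(np.exp(live_logL))))
      print(f"nested sampling logZ = {logZ:.2f}, analytic = {-np.log((2 * box) ** 2):.2f}")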

  4. Daily water level forecasting using wavelet decomposition and artificial intelligence techniques

    NASA Astrophysics Data System (ADS)

    Seo, Youngmin; Kim, Sungwon; Kisi, Ozgur; Singh, Vijay P.

    2015-01-01

    Reliable water level forecasting for reservoir inflow is essential for reservoir operation. The objective of this paper is to develop and apply two hybrid models for daily water level forecasting and investigate their accuracy. These two hybrid models are the wavelet-based artificial neural network (WANN) and the wavelet-based adaptive neuro-fuzzy inference system (WANFIS). Wavelet decomposition is employed to decompose an input time series into approximation and detail components. The decomposed time series are used as inputs to artificial neural networks (ANN) and the adaptive neuro-fuzzy inference system (ANFIS) for the WANN and WANFIS models, respectively. Based on statistical performance indexes, the WANN and WANFIS models are found to produce better efficiency than the ANN and ANFIS models, with WANFIS7-sym10 yielding the best performance among all models. It is found that wavelet decomposition improves the accuracy of ANN and ANFIS. This study evaluates the accuracy of the WANN and WANFIS models for different mother wavelets, including Daubechies, Symmlet and Coiflet wavelets. It is found that model performance depends on the input sets and mother wavelets, and that wavelet decomposition using the mother wavelet db10 can further improve the efficiency of the ANN and ANFIS models. Results obtained from this study indicate that the conjunction of wavelet decomposition and artificial intelligence models can be a useful tool for accurately forecasting daily water levels and can yield better efficiency than conventional forecasting models.
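
    The WANN construction, decompose the series with a discrete wavelet transform and feed the lagged sub-series to a neural network, can be sketched with PyWavelets and scikit-learn. The wavelet, lag depth and network size below are arbitrary choices on a synthetic series, not the tuned configuration of the study.

      # Sketch of the WANN idea: decompose a (synthetic) daily water-level series with
      # a discrete wavelet transform, use lagged wavelet components as ANN inputs, and
      # forecast one day ahead. Wavelet, lag depth and network size are arbitrary
      # choices here, not the tuned configuration of the study.
      import numpy as np
      import pywt
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(5)
      t = np.arange(1500)
      series = (10 + 2 * np.sin(2 * np.pi * t / 365) + 0.5 * np.sin(2 * np.pi * t / 30)
                + 0.2 * rng.standard_normal(t.size))

      # Multilevel DWT, then reconstruct each approximation/detail component to full length.
      coeffs = pywt.wavedec(series, "db10", level=3)
      components = []
      for i in range(len(coeffs)):
          kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
          components.append(pywt.waverec(kept, "db10")[: series.size])
      components = np.array(components)                 # shape (n_components, n_samples)

      # Inputs: wavelet components at lags t-3..t-1; target: water level at time t.
      lags = 3
      X = np.array([components[:, i - lags:i].ravel() for i in range(lags, series.size)])
      y = series[lags:]

      split = 1200
      ann = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0)
      ann.fit(X[:split], y[:split])
      pred = ann.predict(X[split:])
      nse = 1 - np.sum((y[split:] - pred) ** 2) / np.sum((y[split:] - y[split:].mean()) ** 2)
      print(f"Nash-Sutcliffe efficiency on the hold-out period: {nse:.3f}")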

  5. Evaluation of performance and impacts of maternal and child health hospital services using Data Envelopment Analysis in Guangxi Zhuang Autonomous Region, China: a comparison study among poverty and non-poverty county level hospitals.

    PubMed

    Wang, Xuan; Luo, Hongye; Qin, Xianjin; Feng, Jun; Gao, Hongda; Feng, Qiming

    2016-08-23

    As the core of rural healthcare in China, the service efficiency of county-level Maternal and Child Health Hospitals (MCHH) affects the fairness and availability of healthcare services. This study aims to identify the determinants of hospital efficiency and explore how to improve the performance of MCHHs in terms of productivity and efficiency. Data were collected from a sample of 32 county-level MCHHs of Guangxi in 2014. Firstly, we specified and measured input and output indicators, which represent the hospital resources expended and the services delivered, respectively. We then estimated efficiency scores for each hospital using Data Envelopment Analysis (DEA). Efficiency scores were decomposed into technical, scale and congestion components, and the potential output increases and/or input reductions that would make relatively inefficient hospitals more efficient were also estimated in this model. In the second stage, the estimated efficiency scores were regressed against hospital external and internal environment factors using a Tobit model. We used DEAP (V2.1) and R for data analysis. The average scores of technical efficiency, net technical efficiency (managerial efficiency) and scale efficiency of the hospitals were 0.875, 0.922 and 0.945, respectively. Half of the hospitals were efficient, and 9.4% and 40.6% were weakly efficient and inefficient, respectively. Among the low-productivity hospitals, 61.1% came from poor counties (poor counties in this article are those in the list of key poverty-stricken counties at the national level published by The State Council Leading Group Office of Poverty Alleviation and Development, 2012). The total input indicated that redundant medical resources in poverty areas were significantly higher than those in non-poverty areas. The Tobit regression model showed that technical efficiency was proportional to the total annual income, the number of discharged patients, and the number of outpatient and emergency visits, while it was inversely proportional to total expenditure and the actual number of open beds. Technical efficiency was not associated with the number of health care workers. The overall operational efficiency of the county-level MCHHs in Guangxi was low and needs to be improved. Regional economic differences affect the performance of hospitals. Health administrations should adjust and optimize resource investments for the different areas. For hospitals in poverty areas, policy-makers should consider not only investment in hardware facilities, but also the introduction of advanced techniques and high-level medical personnel to improve their technical efficiency.

  6. The effect of state-level funding on energy efficiency outcomes

    NASA Astrophysics Data System (ADS)

    Downs, Anna

    Increasingly, states are formalizing energy efficiency policies. In 2010, states required utilities to budget $5.5 billion through ratepayer-funded energy efficiency programs, investing in both electricity and natural gas programs. However, the size and spread of energy efficiency programs were strikingly different from state to state. This paper examines how far each dollar of state-level energy efficiency funding goes in producing efficiency gains. Many states have also pursued innovative policy actions to conserve electricity. Measures of policy effort are also included in this study, along with average electricity prices. The only variable that is consistently correlated with energy usage intensity across all models is electricity price. As politicians at the local, state, and Federal levels continue to push for improved energy efficiency, the models in this paper provide a convincing impetus for focusing on strategies that raise energy prices.

  7. Remote sensing image ship target detection method based on visual attention model

    NASA Astrophysics Data System (ADS)

    Sun, Yuejiao; Lei, Wuhu; Ren, Xiaodong

    2017-11-01

    Traditional methods of detecting ship targets in remote sensing images mostly use a sliding window to search the whole image exhaustively. However, the target usually occupies only a small fraction of the image, so this approach has high computational complexity for large-format visible image data. The bottom-up selective attention mechanism can selectively allocate computing resources according to visual stimuli, thus improving computational efficiency and reducing the difficulty of analysis. In view of this, a method for ship target detection in remote sensing images based on a visual attention model is proposed in this paper. The experimental results show that the proposed method can reduce the computational complexity while improving the detection accuracy, and thus improves the efficiency of ship target detection in remote sensing images.
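
    One classical bottom-up saliency detector, the spectral-residual method, illustrates the attend-then-analyze strategy described above: compute a cheap saliency map, keep only the salient regions, and hand those small windows to the expensive ship classifier. It is used here only as a generic stand-in, not as the specific attention model of the paper.

      # A classical bottom-up saliency detector (the spectral-residual method) used as
      # a generic stand-in for the visual-attention stage: compute a cheap saliency
      # map, threshold it, and pass only the salient candidate windows to the more
      # expensive ship detector. Illustrative pipeline on a synthetic "sea" image.
      import numpy as np
      from scipy.ndimage import uniform_filter, gaussian_filter, label, find_objects

      def spectral_residual_saliency(img):
          """Saliency map of a 2D grayscale image from the spectral residual of its log spectrum."""
          spectrum = np.fft.fft2(img)
          log_amp = np.log(np.abs(spectrum) + 1e-9)
          phase = np.angle(spectrum)
          residual = log_amp - uniform_filter(log_amp, size=3)   # remove the smooth part of the spectrum
          saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
          return gaussian_filter(saliency, sigma=3)

      rng = np.random.default_rng(6)
      img = rng.normal(0.3, 0.02, (256, 256))                    # sea background
      img[100:106, 60:80] += 0.5                                 # ship-like bright blob
      img[180:185, 200:215] += 0.5                               # second ship-like blob

      sal = spectral_residual_saliency(img)
      mask = sal > sal.mean() + 3 * sal.std()                    # keep strongly salient pixels only
      regions, n_regions = label(mask)
      print(f"{n_regions} candidate regions for the detailed ship detector:")
      print(find_objects(regions))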

  8. 7 CFR 272.10 - ADP/CIS Model Plan.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... those which result in effective programs or in cost effective reductions in errors and improvements in management efficiency, such as decreases in program administrative costs. Thus, for those State agencies which operate exceptionally efficient and effective programs, a lesser degree of automation may be...

  9. Fiber-coupling efficiency of Gaussian-Schell model beams through an ocean to fiber optical communication link

    NASA Astrophysics Data System (ADS)

    Hu, Beibei; Shi, Haifeng; Zhang, Yixin

    2018-06-01

    We theoretically study the fiber-coupling efficiency of Gaussian-Schell model beams propagating through oceanic turbulence. The expression for the fiber-coupling efficiency is derived based on the spatial power spectrum of oceanic turbulence and the cross-spectral density function. Our work shows that the salinity fluctuation has a greater impact on the fiber-coupling efficiency than the temperature fluctuation does. We can select a longer wavelength λ in the "ocean window" and a light source with higher spatial coherence to improve the fiber-coupling efficiency of the communication link. We can also achieve the maximum fiber-coupling efficiency by choosing the design parameter according to the specific oceanic turbulence conditions. Our results can help in the design of optical communication links through oceanic turbulence to fiber sensors.
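
    For background, the fiber-coupling efficiency is commonly defined as a normalized overlap integral between the incident field and the back-propagated fiber mode; for a partially coherent (Gaussian-Schell model) beam it is evaluated through the cross-spectral density W. The generic definitions below are standard textbook forms, not the paper's specific turbulence-dependent expression, and E_i, F_m and the aperture A are the usual symbols.

      \[
      \eta_c \;=\; \frac{\left|\int_A E_i(\mathbf r)\,F_m^{*}(\mathbf r)\,\mathrm d^2 r\right|^{2}}
      {\int_A |E_i(\mathbf r)|^{2}\,\mathrm d^2 r \;\int_A |F_m(\mathbf r)|^{2}\,\mathrm d^2 r},
      \qquad
      \eta_c \;=\; \frac{\int_A\!\int_A F_m^{*}(\mathbf r_1)\,W(\mathbf r_1,\mathbf r_2)\,F_m(\mathbf r_2)\,\mathrm d^2 r_1\,\mathrm d^2 r_2}
      {\int_A W(\mathbf r,\mathbf r)\,\mathrm d^2 r \;\int_A |F_m(\mathbf r)|^{2}\,\mathrm d^2 r}.
      \]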

  10. Rolling bearing fault feature learning using improved convolutional deep belief network with compressed sensing

    NASA Astrophysics Data System (ADS)

    Shao, Haidong; Jiang, Hongkai; Zhang, Haizhou; Duan, Wenjing; Liang, Tianchen; Wu, Shuaipeng

    2018-02-01

    The vibration signals collected from rolling bearing are usually complex and non-stationary with heavy background noise. Therefore, it is a great challenge to efficiently learn the representative fault features of the collected vibration signals. In this paper, a novel method called improved convolutional deep belief network (CDBN) with compressed sensing (CS) is developed for feature learning and fault diagnosis of rolling bearing. Firstly, CS is adopted for reducing the vibration data amount to improve analysis efficiency. Secondly, a new CDBN model is constructed with Gaussian visible units to enhance the feature learning ability for the compressed data. Finally, exponential moving average (EMA) technique is employed to improve the generalization performance of the constructed deep model. The developed method is applied to analyze the experimental rolling bearing vibration signals. The results confirm that the developed method is more effective than the traditional methods.
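
    The sketch below illustrates, with invented data and parameter names, the two auxiliary steps named in the abstract: a random Gaussian compressed-sensing measurement that shrinks the vibration record before feature learning, and an exponential moving average of model parameters. The convolutional deep belief network itself is not reproduced.

      # Sketch of two steps from the abstract: compressed-sensing measurement of a
      # vibration signal and an exponential moving average (EMA) of model weights.
      import numpy as np

      rng = np.random.default_rng(0)
      signal = rng.standard_normal(4096)            # stand-in vibration record

      # Compressed sensing: y = Phi @ x with a random Gaussian measurement matrix,
      # keeping ~25% of the original dimension to speed up later feature learning.
      m = len(signal) // 4
      Phi = rng.standard_normal((m, len(signal))) / np.sqrt(m)
      compressed = Phi @ signal
      print("compression ratio:", m / len(signal))

      # EMA of (hypothetical) model parameters, as used to smooth training updates.
      def ema_update(shadow, params, decay=0.99):
          return {k: decay * shadow[k] + (1 - decay) * params[k] for k in params}

      params = {"W": rng.standard_normal((8, 8)), "b": np.zeros(8)}
      shadow = {k: v.copy() for k, v in params.items()}
      for step in range(100):                       # pretend training loop
          params = {k: v + 0.01 * rng.standard_normal(v.shape) for k, v in params.items()}
          shadow = ema_update(shadow, params)
      print("EMA drift:", float(np.abs(shadow["W"] - params["W"]).mean()))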

  11. Failsafe modes in incomplete minority game

    NASA Astrophysics Data System (ADS)

    Yao, Xiaobo; Wan, Shaolong; Chen, Wen

    2009-09-01

    We make a failsafe extension to the incomplete minority game (IMMG) model and give a brief analysis of how incompleteness affects system efficiency. Simulations show that limited incompleteness in strategies can improve system efficiency. Among the three failsafe modes, the “Back-to-Best” mode brings the most significant improvement and maintains system efficiency over a long range of incompleteness. A simple analytic formula shows a trend that matches the simulation results. The IMMG model is used to study the effect of the distribution of incompleteness, and we find one junction point in each series of curves, at which system efficiency is not influenced by that distribution. Above the junction point, concentrating the incompleteness weakens the effect; below it, concentration is helpful. When the incompleteness probability pI is close to zero, agents using incomplete strategies have on average better profits than those using standard strategies, and the “Back-to-Best” agents win over a wider range of pI.

  12. Multiscale Modeling of Plasmon-Enhanced Power Conversion Efficiency in Nanostructured Solar Cells.

    PubMed

    Meng, Lingyi; Yam, ChiYung; Zhang, Yu; Wang, Rulin; Chen, GuanHua

    2015-11-05

    The unique optical properties of nanometallic structures can be exploited to confine light at subwavelength scales. This excellent light trapping is critical to improve light absorption efficiency in nanoscale photovoltaic devices. Here, we apply a multiscale quantum mechanics/electromagnetics (QM/EM) method to model the current-voltage characteristics and optical properties of plasmonic nanowire-based solar cells. The QM/EM method features a combination of first-principles quantum mechanical treatment of the photoactive component and classical description of electromagnetic environment. The coupled optical-electrical QM/EM simulations demonstrate a dramatic enhancement for power conversion efficiency of nanowire solar cells due to the surface plasmon effect of nanometallic structures. The improvement is attributed to the enhanced scattering of light into the photoactive layer. We further investigate the optimal configuration of the nanostructured solar cell. Our QM/EM simulation result demonstrates that a further increase of internal quantum efficiency can be achieved by scattering light into the n-doped region of the device.

  13. Health economics, equity, and efficiency: are we almost there?

    PubMed

    Ferraz, Marcos Bosi

    2015-01-01

    Health care is a highly complex, dynamic, and creative sector of the economy. While health economics has to continue its efforts to improve its methods and tools to better inform decisions, the application needs to be aligned with the insights and models of other social sciences disciplines. Decisions may be guided by four concept models based on ethical and distributive justice: libertarian, communitarian, egalitarian, and utilitarian. The societal agreement on one model or a defined mix of models is critical to avoid inequity and unfair decisions in a public and/or private insurance-based health care system. The excess use of methods and tools without fully defining the basic goals and philosophical principles of the health care system and without evaluating the fitness of these measures to reaching these goals may not contribute to an efficient improvement of population health.

  14. Health economics, equity, and efficiency: are we almost there?

    PubMed Central

    Ferraz, Marcos Bosi

    2015-01-01

    Health care is a highly complex, dynamic, and creative sector of the economy. While health economics has to continue its efforts to improve its methods and tools to better inform decisions, the application needs to be aligned with the insights and models of other social sciences disciplines. Decisions may be guided by four concept models based on ethical and distributive justice: libertarian, communitarian, egalitarian, and utilitarian. The societal agreement on one model or a defined mix of models is critical to avoid inequity and unfair decisions in a public and/or private insurance-based health care system. The excess use of methods and tools without fully defining the basic goals and philosophical principles of the health care system and without evaluating the fitness of these measures to reaching these goals may not contribute to an efficient improvement of population health. PMID:25709481

  15. Research on optimization of combustion efficiency of thermal power unit based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Qiongyang

    2018-04-01

    In order to improve the economic performance and reduce the pollutant emissions of thermal power units, the main factors affecting boiler efficiency are analyzed with an orthogonal method and a neural network is used to establish a boiler combustion model. On the basis of this model, a genetic algorithm with real-number encoding and roulette-wheel selection is then used to find the best control settings for furnace combustion under a given working condition. The optimal control settings found in this way can in turn be fed back into the neural-network combustion model for further training, which further improves the precision of the neural network model and lays the groundwork for research on a complete boiler combustion optimization system.
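
    A minimal sketch of such a genetic algorithm with real-number encoding and roulette-wheel selection follows; the quadratic surrogate stands in for the trained neural-network combustion model, and all bounds and settings are hypothetical.

      # Minimal genetic algorithm with real-number encoding and roulette selection.
      # The quadratic surrogate below stands in for the neural-network combustion
      # model mentioned in the abstract; bounds and settings are illustrative.
      import numpy as np

      rng = np.random.default_rng(1)
      LOW, HIGH = np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 1.0])

      def efficiency(x):                      # surrogate boiler-efficiency model
          target = np.array([0.3, 0.7, 0.5])  # pretend optimal control settings
          return 1.0 - np.sum((x - target) ** 2)

      def roulette(fitness):
          p = fitness - fitness.min() + 1e-9
          p = p / p.sum()
          return rng.choice(len(fitness), size=len(fitness), p=p)

      pop = rng.uniform(LOW, HIGH, size=(40, 3))
      for gen in range(100):
          fit = np.array([efficiency(ind) for ind in pop])
          parents = pop[roulette(fit)]
          # arithmetic crossover between consecutive parents
          alpha = rng.uniform(size=(40, 1))
          children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
          # Gaussian mutation, clipped to the control bounds
          children += rng.normal(scale=0.02, size=children.shape)
          pop = np.clip(children, LOW, HIGH)
      best = pop[np.argmax([efficiency(ind) for ind in pop])]
      print("best control vector:", np.round(best, 3))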

  16. MOSAIC : Model Of Sustainability And Integrated Corridors, phase 3 : comprehensive model calibration and validation and additional model enhancement.

    DOT National Transportation Integrated Search

    2015-02-01

    The Maryland State Highway Administration (SHA) has initiated major planning efforts to improve transportation : efficiency, safety, and sustainability on critical highway corridors through its Comprehensive Highway Corridor : (CHC) program. This pro...

  17. Optimizing efficiency and operations at a California safety-net endoscopy center: a modeling and simulation approach.

    PubMed

    Day, Lukejohn W; Belson, David; Dessouky, Maged; Hawkins, Caitlin; Hogan, Michael

    2014-11-01

    Improvements in endoscopy center efficiency are needed, but scant data are available. To identify opportunities to improve patient throughput while balancing resource use and patient wait times in a safety-net endoscopy center. Safety-net endoscopy center. Outpatients undergoing endoscopy. A time and motion study was performed and a discrete event simulation model constructed to evaluate multiple scenarios aimed at improving endoscopy center efficiency. Procedure volume and patient wait time. Data were collected on 278 patients. Time and motion study revealed that 53.8 procedures were performed per week, with patients spending 2.3 hours at the endoscopy center. By using discrete event simulation modeling, a number of proposed changes to the endoscopy center were assessed. Decreasing scheduled endoscopy appointment times from 60 to 45 minutes led to a 26.4% increase in the number of procedures performed per week, but also increased patient wait time. Increasing the number of endoscopists by 1 each half day resulted in increased procedure volume, but there was a concomitant increase in patient wait time and nurse utilization exceeding capacity. By combining several proposed scenarios together in the simulation model, the greatest improvement in performance metrics was created by moving patient endoscopy appointments from the afternoon to the morning. In this simulation at 45- and 40-minute appointment times, procedure volume increased by 30.5% and 52.0% and patient time spent in the endoscopy center decreased by 17.4% and 13.0%, respectively. The predictions of the simulation model were found to be accurate when compared with actual changes implemented in the endoscopy center. Findings may not be generalizable to non-safety-net endoscopy centers. The combination of minor, cost-effective changes such as reducing appointment times, minimizing and standardizing recovery time, and making small increases in preprocedure ancillary staff maximized endoscopy center efficiency across a number of performance metrics. Copyright © 2014 American Society for Gastrointestinal Endoscopy. Published by Elsevier Inc. All rights reserved.
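
    The toy discrete-event simulation below conveys the flavor of such a model, comparing 60- and 45-minute scheduled slots for an endoscopy suite; room counts, service times and other parameters are invented rather than the study's calibrated values.

      # Toy discrete-event simulation of an endoscopy suite using a single event
      # queue: patients arrive at scheduled slot times, wait for one of a few
      # procedure rooms, and occupy it for a random service time.
      import heapq, random

      def simulate(slot_minutes, rooms=3, patients=30, seed=7):
          random.seed(seed)
          arrivals = [i * slot_minutes for i in range(patients)]
          free_at = [0.0] * rooms             # time each room becomes free (a heap)
          waits, finish = [], 0.0
          for t in arrivals:
              room_free = heapq.heappop(free_at)
              start = max(t, room_free)
              waits.append(start - t)
              service = random.gauss(45, 10)  # procedure + turnover time (min)
              end = start + max(service, 15)
              heapq.heappush(free_at, end)
              finish = max(finish, end)
          return sum(waits) / len(waits), patients / (finish / 60)

      for slot in (60, 45):
          wait, per_hour = simulate(slot)
          print(f"{slot}-min slots: mean wait {wait:5.1f} min, {per_hour:4.2f} patients/hour")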

  18. An adaptive grid to improve the efficiency and accuracy of modelling underwater noise from shipping

    NASA Astrophysics Data System (ADS)

    Trigg, Leah; Chen, Feng; Shapiro, Georgy; Ingram, Simon; Embling, Clare

    2017-04-01

    Underwater noise from shipping is becoming a significant concern and has been listed as a pollutant under Descriptor 11 of the Marine Strategy Framework Directive. Underwater noise models are an essential tool to assess and predict noise levels for regulatory procedures such as environmental impact assessments and ship noise monitoring. There are generally two approaches to noise modelling. The first is based on simplified energy flux models, assuming either spherical or cylindrical propagation of sound energy. These models are very quick but they ignore important water column and seabed properties, and produce significant errors in the areas subject to temperature stratification (Shapiro et al., 2014). The second type of model (e.g. ray-tracing and parabolic equation) is based on an advanced physical representation of sound propagation. However, these acoustic propagation models are computationally expensive to execute. Shipping noise modelling requires spatial discretization in order to group noise sources together using a grid. A uniform grid size is often selected to achieve either the greatest efficiency (i.e. speed of computations) or the greatest accuracy. In contrast, this work aims to produce efficient and accurate noise level predictions by presenting an adaptive grid where cell size varies with distance from the receiver. The spatial range over which a certain cell size is suitable was determined by calculating the distance from the receiver at which propagation loss becomes uniform across a grid cell. The computational efficiency and accuracy of the resulting adaptive grid was tested by comparing it to uniform 1 km and 5 km grids. These represent an accurate and computationally efficient grid respectively. For a case study of the Celtic Sea, an application of the adaptive grid over an area of 160×160 km reduced the number of model executions required from 25600 for a 1 km grid to 5356 in December and to between 5056 and 13132 in August, which represents a 2 to 5-fold increase in efficiency. The 5 km grid reduces the number of model executions further to 1024. However, over the first 25 km the 5 km grid produces errors of up to 13.8 dB when compared to the highly accurate but inefficient 1 km grid. The newly developed adaptive grid generates much smaller errors of less than 0.5 dB while demonstrating high computational efficiency. Our results show that the adaptive grid provides the ability to retain the accuracy of noise level predictions and improve the efficiency of the modelling process. This can help safeguard sensitive marine ecosystems from noise pollution by improving the underwater noise predictions that inform management activities. References Shapiro, G., Chen, F., Thain, R., 2014. The Effect of Ocean Fronts on Acoustic Wave Propagation in a Shallow Sea, Journal of Marine System, 139: 217 - 226. http://dx.doi.org/10.1016/j.jmarsys.2014.06.007.
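
    The sketch below illustrates the underlying principle with a simple spherical-spreading loss law: at each range, the largest admissible cell is the one across which the loss changes by less than a chosen tolerance. The published grid is derived from full propagation-model output, so this is only a simplified, assumed version of the rule.

      # Sketch of the adaptive-grid idea: pick the largest cell size at each range
      # such that the change in spherical-spreading loss (20*log10 r) across the
      # cell stays below a tolerance in dB.
      def max_cell_size(range_km, tol_db=0.5):
          """Largest cell (km) with <= tol_db loss variation at this range."""
          return range_km * (10 ** (tol_db / 20.0) - 1.0)

      allowed = [1, 2, 5, 10, 20]             # candidate cell sizes in km
      for r in (1, 5, 25, 80, 160):
          s = max_cell_size(r)
          cell = max([c for c in allowed if c <= s] or [allowed[0]])
          print(f"range {r:3d} km: cell size {cell:2d} km (limit {s:5.2f} km)")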

  19. Analytical modeling of relative luminescence efficiency of Al2O3:C optically stimulated luminescence detectors exposed to high-energy heavy charged particles.

    PubMed

    Sawakuchi, Gabriel O; Yukihara, Eduardo G

    2012-01-21

    The objective of this work is to test analytical models to calculate the luminescence efficiency of Al2O3:C optically stimulated luminescence detectors (OSLDs) exposed to heavy charged particles with energies relevant to space dosimetry and particle therapy. We used the track structure model to obtain an analytical expression for the relative luminescence efficiency based on the average radial dose distribution produced by the heavy charged particle. We compared the relative luminescence efficiency calculated using seven different radial dose distribution models, including a modified model introduced in this work, with experimental data. The results obtained using the modified radial dose distribution function agreed within 20% with experimental relative luminescence efficiency data from Al2O3:C OSLDs for particles with atomic number ranging from 1 to 54 and linear energy transfer in water from 0.2 up to 1368 keV/µm. In spite of the significant improvement over other radial dose distribution models, understanding of the underlying physical processes associated with these radial dose distribution models remains elusive and may represent a limitation of the track structure model.

  20. Reducing Bottlenecks to Improve the Efficiency of the Lung Cancer Care Delivery Process: A Process Engineering Modeling Approach to Patient-Centered Care.

    PubMed

    Ju, Feng; Lee, Hyo Kyung; Yu, Xinhua; Faris, Nicholas R; Rugless, Fedoria; Jiang, Shan; Li, Jingshan; Osarogiagbon, Raymond U

    2017-12-01

    The process of lung cancer care from initial lesion detection to treatment is complex, involving multiple steps, each introducing the potential for substantial delays. Identifying the steps with the greatest delays enables a focused effort to improve the timeliness of care-delivery, without sacrificing quality. We retrospectively reviewed clinical events from initial detection, through histologic diagnosis, radiologic and invasive staging, and medical clearance, to surgery for all patients who had an attempted resection of a suspected lung cancer in a community healthcare system. We used a computer process modeling approach to evaluate delays in care delivery, in order to identify potential 'bottlenecks' in waiting time, the reduction of which could produce greater care efficiency. We also conducted 'what-if' analyses to predict the relative impact of simulated changes in the care delivery process to determine the most efficient pathways to surgery. The waiting time between radiologic lesion detection and diagnostic biopsy, and the waiting time from radiologic staging to surgery were the two most critical bottlenecks impeding efficient care delivery (more than 3 times larger compared to reducing other waiting times). Additionally, instituting surgical consultation prior to cardiac consultation for medical clearance and decreasing the waiting time between CT scans and diagnostic biopsies, were potentially the most impactful measures to reduce care delays before surgery. Rigorous computer simulation modeling, using clinical data, can provide useful information to identify areas for improving the efficiency of care delivery by process engineering, for patients who receive surgery for lung cancer.

  1. Mean-field and linear regime approach to magnetic hyperthermia of core-shell nanoparticles: can tiny nanostructures fight cancer?

    NASA Astrophysics Data System (ADS)

    Carrião, Marcus S.; Bakuzis, Andris F.

    2016-04-01

    The phenomenon of heat dissipation by magnetic materials interacting with an alternating magnetic field, known as magnetic hyperthermia, is an emergent and promising therapy for many diseases, mainly cancer. Here, a magnetic hyperthermia model for core-shell nanoparticles is developed. The theoretical calculation, different from previous models, highlights the importance of heterogeneity by identifying the role of surface and core spins on nanoparticle heat generation. We found that the most efficient nanoparticles should be obtained by selecting materials to reduce the surface to core damping factor ratio, increasing the interface exchange parameter and tuning the surface to core anisotropy ratio for each material combination. From our results we propose a novel heat-based hyperthermia strategy with the focus on improving the heating efficiency of small sized nanoparticles instead of larger ones. This approach might have important implications for cancer treatment and could help improving clinical efficacy.

  2. Improving the sampling efficiency of Monte Carlo molecular simulations: an evolutionary approach

    NASA Astrophysics Data System (ADS)

    Leblanc, Benoit; Braunschweig, Bertrand; Toulhoat, Hervé; Lutton, Evelyne

    We present a new approach in order to improve the convergence of Monte Carlo (MC) simulations of molecular systems with complex energy landscapes: the problem is redefined in terms of the dynamic allocation of MC move frequencies depending on their past efficiency, measured with respect to a relevant sampling criterion. We introduce various empirical criteria with the aim of accounting for the proper convergence in phase space sampling. The dynamic allocation is performed over parallel simulations by means of a new evolutionary algorithm involving 'immortal' individuals. The method is benchmarked against conventional procedures on a model of a linear polyethylene melt. We record significant improvements in sampling efficiency, and thus in computational load, while the optimal sets of move frequencies are liable to allow interesting physical insights into the particular systems simulated. This last aspect should provide a new tool for designing more efficient new MC moves.
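
    A toy version of the core idea, reallocating move frequencies according to their recent sampling efficiency within a single Metropolis run, is sketched below; the evolutionary, parallel-population machinery of the paper is not reproduced and all settings are illustrative.

      # Toy Metropolis sampler on a 1-D double well that reallocates the frequency
      # of "small" vs "large" trial moves according to their recent efficiency
      # (running mean of accepted squared displacement).
      import numpy as np

      rng = np.random.default_rng(2)

      def energy(x):
          return (x ** 2 - 1.0) ** 2            # double-well potential

      steps = {"small": 0.1, "large": 1.0}
      weights = {k: 0.5 for k in steps}         # move frequencies
      score = {k: 1e-3 for k in steps}          # running efficiency estimates

      x, beta = -1.0, 3.0
      for sweep in range(200):
          for _ in range(50):
              move = rng.choice(list(steps), p=[weights[k] for k in steps])
              prop = x + rng.normal(scale=steps[move])
              dE = energy(prop) - energy(x)
              if dE <= 0 or rng.random() < np.exp(-beta * dE):
                  score[move] = 0.95 * score[move] + 0.05 * (prop - x) ** 2
                  x = prop
              else:
                  score[move] = 0.95 * score[move]
          total = sum(score.values())
          weights = {k: 0.1 + 0.8 * score[k] / total for k in steps}  # keep >= 10%
      print({k: round(w, 2) for k, w in weights.items()})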

  3. Improving evapotranspiration processes in distributed hydrological models using Remote Sensing derived ET products.

    NASA Astrophysics Data System (ADS)

    Abitew, T. A.; van Griensven, A.; Bauwens, W.

    2015-12-01

    Evapotranspiration is the main process in hydrology (on average around 60%), though it has not received as much attention in the evaluation and calibration of hydrological models. In this study, Remote Sensing (RS) derived Evapotranspiration (ET) is used to improve the spatially distributed representation of ET in SWAT model applications in the upper Mara basin (Kenya) and the Blue Nile basin (Ethiopia). The RS derived ET data are obtained from recently compiled global datasets (continuous monthly data at 1 km resolution from the MOD16NBI, SSEBop, ALEXI and CMRSET models) and from regionally applied Energy Balance Models (for several cloud-free days). The RS ET data are used in different ways: Method 1) to evaluate spatially distributed evapotranspiration model results; Method 2) to calibrate the evapotranspiration processes in the hydrological model; Method 3) to bias-correct the evapotranspiration in the hydrological model during simulation after changing the SWAT code. An inter-comparison of the RS-ET products shows that at present there is a significant bias, but at the same time an agreement on the spatial variability of ET. The ensemble mean of the different ET products seems the most realistic estimate and was further used in this study. The results show that: Method 1) the spatially mapped evapotranspiration of the hydrological models shows clear differences when compared to RS derived evapotranspiration (low correlations); in particular, evapotranspiration in forested areas is strongly underestimated compared to other land covers. Method 2) Calibration improves the correlations between the RS and hydrological model results to some extent. Method 3) Bias corrections are efficient in producing (seasonal or annual) evapotranspiration maps from hydrological models which are very similar to the patterns obtained from RS data. Though the bias correction is very efficient, it is advised to improve the model results by better representing the ET processes through improved plant/crop computations, improved agricultural management practices or improved meteorological data.

  4. Predicting the oral uptake efficiency of chemicals in mammals: Combining the hydrophilic and lipophilic range

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Connor, Isabel A., E-mail: i.oconnor@science.ru.nl; Huijbregts, Mark A.J., E-mail: m.huijbregts@science.ru.nl; Ragas, Ad M.J., E-mail: a.ragas@science.ru.nl

    Environmental risk assessment requires models for estimating the bioaccumulation of untested compounds. So far, bioaccumulation models have focused on lipophilic compounds, and only a few have included hydrophilic compounds. Our aim was to extend an existing bioaccumulation model to estimate the oral uptake efficiency of pollutants in mammals for compounds over a wide Kow range with an emphasis on hydrophilic compounds, i.e. compounds in the lower Kow range. Usually, most models use octanol as a single surrogate for the membrane and thus neglect the bilayer structure of the membrane. However, compounds with polar groups can have different affinities for the different membrane regions. Therefore, an existing bioaccumulation model was extended by dividing the diffusion resistance through the membrane into an outer and inner membrane resistance, where the solvents octanol and heptane were used as surrogates for these membrane regions, respectively. The model was calibrated with uptake efficiencies of environmental pollutants measured in different mammals during feeding studies combined with human oral uptake efficiencies of pharmaceuticals. The new model estimated the uptake efficiency of neutral (RMSE = 14.6) and dissociating (RMSE = 19.5) compounds with log Kow ranging from -10 to +8. The inclusion of Khw improved uptake estimation for 33% of the hydrophilic compounds (log Kow < 0) (r² = 0.51, RMSE = 22.8) compared with the model based on Kow only (r² = 0.05, RMSE = 34.9), while hydrophobic compounds (log Kow > 0) were estimated equally by both model versions with RMSE = 15.2 (Kow and Khw) and RMSE = 15.7 (Kow only). The model can be used to estimate the oral uptake efficiency for both hydrophilic and hydrophobic compounds. -- Highlights: ► A mechanistic model was developed to estimate oral uptake efficiency. ► The model covers a wide log Kow range (-10 to +8) and several mammalian species. ► Kow and the heptane-water partition coefficient Khw were combined. ► Kow and Khw reflect the inner and the outer membrane diffusion resistance. ► Combining Kow and Khw improved uptake estimation for hydrophilic compounds.

  5. An efficient descriptor model for designing materials for solar cells

    NASA Astrophysics Data System (ADS)

    Alharbi, Fahhad H.; Rashkeev, Sergey N.; El-Mellouhi, Fedwa; Lüthi, Hans P.; Tabet, Nouar; Kais, Sabre

    2015-11-01

    An efficient descriptor model for fast screening of potential materials for solar cell applications is presented. It works for both excitonic and non-excitonic solar cells materials, and in addition to the energy gap it includes the absorption spectrum (α(E)) of the material. The charge transport properties of the explored materials are modelled using the characteristic diffusion length (Ld) determined for the respective family of compounds. The presented model surpasses the widely used Scharber model developed for bulk heterojunction solar cells. Using published experimental data, we show that the presented model is more accurate in predicting the achievable efficiencies. To model both excitonic and non-excitonic systems, two different sets of parameters are used to account for the different modes of operation. The analysis of the presented descriptor model clearly shows the benefit of including α(E) and Ld in view of improved screening results.

  6. Microfluidic high-throughput selection of microalgal strains with superior photosynthetic productivity using competitive phototaxis

    PubMed Central

    Kim, Jaoon Young Hwan; Kwak, Ho Seok; Sung, Young Joon; Choi, Hong Il; Hong, Min Eui; Lim, Hyun Seok; Lee, Jae-Hyeok; Lee, Sang Yup; Sim, Sang Jun

    2016-01-01

    Microalgae possess great potential as a source of sustainable energy, but the intrinsic inefficiency of photosynthesis is a major challenge to realize this potential. Photosynthetic organisms evolved phototaxis to find optimal light condition for photosynthesis. Here we report a microfluidic screening using competitive phototaxis of the model alga, Chlamydomonas reinhardtii, for rapid isolation of strains with improved photosynthetic efficiencies. We demonstrated strong relationship between phototaxis and photosynthetic efficiency by quantitative analysis of phototactic response at the single-cell level using a microfluidic system. Based on this positive relationship, we enriched the strains with improved photosynthetic efficiency by isolating cells showing fast phototactic responses from a mixture of 10,000 mutants, thereby greatly improving selection efficiency over 8 fold. Among 147 strains isolated after screening, 94.6% showed improved photoautotrophic growth over the parental strain. Two mutants showed much improved performances with up to 1.9- and 8.1-fold increases in photoautotrophic cell growth and lipid production, respectively, a substantial improvement over previous approaches. We identified candidate genes that might be responsible for fast phototactic response and improved photosynthesis, which can be useful target for further strain engineering. Our approach provides a powerful screening tool for rapid improvement of microalgal strains to enhance photosynthetic productivity. PMID:26852806

  7. The effect of economic factors and energy efficiency programs on residential electricity consumption

    NASA Astrophysics Data System (ADS)

    Sakai, Mihoko

    Many countries have implemented policies to correct market and behavioral failures that lead to inefficient energy use. It is important to know what factors and policies can effectively overcome such failures and improve energy efficiency; however, a comprehensive analysis has been difficult because of data limitations. Using state scores compiled by American organizations recently, and adopting fixed-effects regression models, I analyze the joint impacts of relevant factors and policy programs on residential electricity consumption in each U.S. state. The empirical results reveal that increases in electricity price have small and negative effects, and increases in personal income have positive effects on residential electricity sales per capita (a measure of energy efficiency). The results suggest that it may take time for economic factors to affect electricity sales. The effects of personal income suggest the difficulty of controlling residential electricity consumption; however, they also imply that there is some room in households to reduce electricity use. The study also finds that programs and budgets of several policies seem to be associated with electricity sales. The estimates from a model including interaction terms suggest the importance of including multiple policies when analyzing and designing policies to address electricity efficiency. The results also imply the possibility of rebound effects of some policies, whereby improvements in energy efficiency lead to increases in energy consumption due to the associated lower per unit cost. Future studies should analyze both short-term and long-term effects of economic factors and policies, based on improved and accumulated time series and panel data, in order to design more effective policies for improving residential electricity efficiency.
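
    A minimal sketch of the estimation approach, a state fixed-effects (within) regression on synthetic panel data, is shown below; variable names and coefficients are invented and only the method is illustrated.

      # Minimal state fixed-effects ("within") regression sketch on synthetic panel
      # data: demean each variable by state, then run OLS on the demeaned values.
      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(3)
      states = np.repeat(np.arange(50), 10)            # 50 states x 10 years
      df = pd.DataFrame({
          "state": states,
          "price": rng.normal(10, 2, states.size),     # cents/kWh
          "income": rng.normal(45, 8, states.size),    # $1000s per capita
      })
      state_fx = rng.normal(0, 1, 50)[states]          # unobserved state effects
      df["use_pc"] = (12 - 0.3 * df["price"] + 0.05 * df["income"]
                      + state_fx + rng.normal(0, 0.5, states.size))

      demeaned = df.groupby("state").transform(lambda s: s - s.mean())
      X = np.column_stack([demeaned["price"], demeaned["income"]])
      y = demeaned["use_pc"].to_numpy()
      beta, *_ = np.linalg.lstsq(X, y, rcond=None)
      print("price effect:", round(beta[0], 3), " income effect:", round(beta[1], 3))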

  8. Satellite-based terrestrial production efficiency modeling

    PubMed Central

    McCallum, Ian; Wagner, Wolfgang; Schmullius, Christiane; Shvidenko, Anatoly; Obersteiner, Michael; Fritz, Steffen; Nilsson, Sten

    2009-01-01

    Production efficiency models (PEMs) are based on the theory of light use efficiency (LUE) which states that a relatively constant relationship exists between photosynthetic carbon uptake and radiation receipt at the canopy level. Challenges remain however in the application of the PEM methodology to global net primary productivity (NPP) monitoring. The objectives of this review are as follows: 1) to describe the general functioning of six PEMs (CASA; GLO-PEM; TURC; C-Fix; MOD17; and BEAMS) identified in the literature; 2) to review each model to determine potential improvements to the general PEM methodology; 3) to review the related literature on satellite-based gross primary productivity (GPP) and NPP modeling for additional possibilities for improvement; and 4) based on this review, propose items for coordinated research. This review noted a number of possibilities for improvement to the general PEM architecture - ranging from LUE to meteorological and satellite-based inputs. Current PEMs tend to treat the globe similarly in terms of physiological and meteorological factors, often ignoring unique regional aspects. Each of the existing PEMs has developed unique methods to estimate NPP and the combination of the most successful of these could lead to improvements. It may be beneficial to develop regional PEMs that can be combined under a global framework. The results of this review suggest the creation of a hybrid PEM could bring about a significant enhancement to the PEM methodology and thus terrestrial carbon flux modeling. Key items topping the PEM research agenda identified in this review include the following: LUE should not be assumed constant, but should vary by plant functional type (PFT) or photosynthetic pathway; evidence is mounting that PEMs should consider incorporating diffuse radiation; continue to pursue relationships between satellite-derived variables and LUE, GPP and autotrophic respiration (Ra); there is an urgent need for satellite-based biomass measurements to improve Ra estimation; and satellite-based soil moisture data could improve determination of soil water stress. PMID:19765285
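
    For reference, the light-use-efficiency relation that the reviewed PEMs share can be written in its common form as below; the environmental scalars f(T) and f(VPD) and the treatment of autotrophic respiration R_a differ between the six models, so this is only the generic skeleton.

      \[
      \mathrm{GPP} \;=\; \varepsilon_{\max}\, f(T)\, f(\mathrm{VPD})\,\mathrm{fAPAR}\times\mathrm{PAR},
      \qquad
      \mathrm{NPP} \;=\; \mathrm{GPP} - R_a .
      \]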

  9. Analysis of regional total factor energy efficiency in China under environmental constraints: based on Undesirable-MinDS and DEA window models

    NASA Astrophysics Data System (ADS)

    Zhang, Shuying; Li, Deshan; Li, Shuangqiang; Jiang, Hanyu; Shen, Yuqing

    2017-06-01

    With China’s entrance into the new economy, the improvement of energy efficiency has become an important indicator of the quality of ecological civilization construction and economic development. Using panel data for Chinese regions over 1996-2014, an Undesirable-MinDS model based on the nearest distance to the efficient frontier and a DEA window model were used to calculate the total factor energy efficiency of China’s regions. The study found that, under environmental constraints, China’s total factor energy efficiency first declined and then increased over 1996-2014, and that the differences between regions are very large, following the pattern of “highest in the east, lower in the west, and lowest in the central region”. Finally, this paper puts forward relevant policy suggestions.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Hyeokjin; Chen, Hua; Maksimovic, Dragan

    An experimental 30 kW boost composite converter is described in this paper. The composite converter architecture, which consists of a buck module, a boost module, and a dual active bridge module that operates as a DC transformer (DCX), leads to substantial reductions in losses at partial power points, and to significant improvements in weighted efficiency in applications that require wide variations in power and conversion ratio. A comprehensive loss model is developed, accounting for semiconductor conduction and switching losses, capacitor losses, as well as dc and ac losses in magnetic components. Based on the developed loss model, the module and system designs are optimized to maximize efficiency at the 50% power point. Experimental results for the 30 kW prototype demonstrate 98.5% peak efficiency, very high efficiency over wide ranges of power and voltage conversion ratios, as well as excellent agreement between model predictions and measured efficiency curves.

  11. Improving production efficiency through genetic selection

    USDA-ARS?s Scientific Manuscript database

    The goal of dairy cattle breeding is to increase productivity and efficiency by means of genetic selection. This is possible because related animals share some of their DNA in common, and we can use statistical models to predict the genetic merit of animals based on the performance of their relatives. ...

  12. Efficient statistical mapping of avian count data

    USGS Publications Warehouse

    Royle, J. Andrew; Wikle, C.K.

    2005-01-01

    We develop a spatial modeling framework for count data that is efficient to implement in high-dimensional prediction problems. We consider spectral parameterizations for the spatially varying mean of a Poisson model. The spectral parameterization of the spatial process is very computationally efficient, enabling effective estimation and prediction in large problems using Markov chain Monte Carlo techniques. We apply this model to creating avian relative abundance maps from North American Breeding Bird Survey (BBS) data. Variation in the ability of observers to count birds is modeled as spatially independent noise, resulting in over-dispersion relative to the Poisson assumption. This approach represents an improvement over existing approaches used for spatial modeling of BBS data which are either inefficient for continental scale modeling and prediction or fail to accommodate important distributional features of count data thus leading to inaccurate accounting of prediction uncertainty.

  13. Revealing driving factors of China's PM2.5 pollution

    NASA Astrophysics Data System (ADS)

    Zheng, Y.; Zhao, H.; Zhang, Q.; Geng, G.; Tong, D.; Peng, L.; He, K.

    2017-12-01

    China's rapid economic development and intensive energy consumption are significantly deteriorating air quality. Understanding the key driving factors behind China's growing emissions of air pollutants and the accompanying PM2.5 pollution is critical for the development of China's clean air policies and also provides insight into how other emerging economies may develop a clear sky future. Here we reveal the socioeconomic drivers of the variations of China's PM2.5 concentrations during 2002-2012 by using an interdisciplinary framework that integrates an emission inventory model, an index decomposition analysis model, and a regional air quality model. The decomposition results demonstrate that the improvements in emission efficiency and energy efficiency failed to offset the increased emissions of both primary PM2.5 and gaseous PM2.5 precursors (including SO2, NOx, and volatile organic compounds) triggered by the surging economic growth during 2002-2012. Over the same period, the effects of energy structure, production structure and population growth were relatively less significant for all pollutants, which indicates the potential of large emission abatements through energy structure and production structure adjustment. Sensitivity simulations by the air quality model based on the provincial decomposition results also show that economic growth has outpaced efficiency improvements in the increments of PM2.5 concentrations during the study years. As China continues to develop rapidly, future policies should promote further improvements in efficiency and accelerate the adjustments toward clean energy and production structures, which are critical for reducing China's emissions and alleviating the severe PM2.5 pollution.

  14. Avoided electricity subsidy payments can finance substantial appliance efficiency incentive programs: Case study of Mexico

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leventis, Greg; Gopal, Anand; Rue du Can, Stephane de la

    Numerous countries use taxpayer funds to subsidize residential electricity for a variety of socioeconomic objectives. These subsidies lower the value of energy efficiency to the consumer while raising it for the government. Further, while it would be especially helpful to have stringent Minimum Energy Performance Standards (MEPS) for appliances and buildings in this environment, they are hard to strengthen without imposing a cost on ratepayers. In this second-best world, where the presence of subsidies limits the government’s ability to strengthen standards, we find that avoided subsidies are a readily available source of financing for energy efficiency incentive programs. Here, we introduce the LBNL Energy Efficiency Revenue Analysis (LEERA) model to estimate the appliance efficiency improvements that can be achieved in Mexico by the revenue-neutral financing of incentive programs from avoided subsidy payments. LEERA uses the detailed techno-economic analysis developed by LBNL for the Super-efficient Equipment and Appliance Deployment (SEAD) Initiative to calculate the incremental costs of appliance efficiency improvements. We analyze Mexico’s tariff structures and the long-run marginal cost of supply to calculate the marginal savings for the government from appliance efficiency. We find that avoided subsidy payments alone can finance incentive programs that cover the full incremental cost of refrigerators that are 27% more efficient and TVs that are 32% more efficient than baseline models. We find less substantial market transformation potential for room ACs primarily because AC energy savings occur at less subsidized tariffs.

  15. Object-Oriented Modeling of an Energy Harvesting System Based on Thermoelectric Generators

    NASA Astrophysics Data System (ADS)

    Nesarajah, Marco; Frey, Georg

    This paper deals with the modeling of an energy harvesting system based on thermoelectric generators (TEG), and the validation of the model by means of a test bench. TEGs are capable of improving the overall energy efficiency of energy systems, e.g. combustion engines or heating systems, by using the remaining waste heat to generate electrical power. Previously, a component-oriented model of the TEG itself was developed in the Modelica® language. With this model any TEG can be described and simulated given the material properties and the physical dimensions. Now, this model has been extended with the surrounding components into a complete model of a thermoelectric energy harvesting system. In addition to the TEG, the model contains the cooling system, the heat source, and the power electronics. To validate the simulation model, a test bench was built and installed on an oil-fired household heating system. The paper reports results of the measurements and discusses the validity of the developed simulation models. Furthermore, the efficiency of the proposed energy harvesting system is derived and possible improvements based on design variations tested in the simulation model are proposed.

  16. Exploring SWAp's contribution to the efficient allocation and use of resources in the health sector in Zambia.

    PubMed

    Chansa, Collins; Sundewall, Jesper; McIntyre, Di; Tomson, Göran; Forsberg, Birger C

    2008-07-01

    Zambia introduced a sector-wide approach (SWAp) in the health sector in 1993. The goal was to improve efficiency in the use of domestic funds and externally sourced development assistance by integrating these into a joint sectoral framework. Over a decade into its existence, however, the SWAp remains largely unevaluated. This study explores whether the envisaged improvements have been achieved by studying developments in administrative, technical and allocative efficiency in the Zambian health sector from 1990-2006. A case study was conducted using interviews and analysis of secondary data. Respondents represented a cross-section of stakeholders in the Zambian health sector. Secondary data from 1990-2006 were collected for six indicators related to administrative, technical and allocative efficiency. The results showed small improvements in administrative efficiency. Transaction costs still appeared to be high despite the introduction of the SWAp. Indicators for technical efficiency showed a drop in hospital bed utilization rates and government share of funding for drugs. As for allocative efficiency, budget execution did not improve with the SWAp, although there were large variations across donors and years. Funding levels had apparently improved at district level but declined for hospitals. Finally, the SWAp had not succeeded in bringing all external assistance together under a common framework. Despite strong commitment to implement the SWAp in Zambia, the envisaged efficiency improvements do not seem to have been attained. Possible explanations could be that the SWAp has not been fully developed or that not all parties have completely embraced it. SWAp is not ruled out as a coordination model, but the current setup in Zambia has not proved to be fully effective.

  17. The productivity and cost-efficiency of models for involving nurse practitioners in primary care: a perspective from queueing analysis.

    PubMed

    Liu, Nan; D'Aunno, Thomas

    2012-04-01

    To develop simple stylized models for evaluating the productivity and cost-efficiencies of different practice models to involve nurse practitioners (NPs) in primary care, and in particular to generate insights on what affects the performance of these models and how. The productivity of a practice model is defined as the maximum number of patients that can be accounted for by the model under a given timeliness-to-care requirement; cost-efficiency is measured by the corresponding annual cost per patient in that model. Appropriate queueing analysis is conducted to generate formulas and values for these two performance measures. Model parameters for the analysis are extracted from the previous literature and survey reports. Sensitivity analysis is conducted to investigate the model performance under different scenarios and to verify the robustness of findings. Employing an NP, whose salary is usually lower than a primary care physician, may not be cost-efficient, in particular when the NP's capacity is underutilized. Besides provider service rates, workload allocation among providers is one of the most important determinants for the cost-efficiency of a practice model involving NPs. Capacity pooling among providers could be a helpful strategy to improve efficiency in care delivery. The productivity and cost-efficiency of a practice model depend heavily on how providers organize their work and a variety of other factors related to the practice environment. Queueing theory provides useful tools to take into account these factors in making strategic decisions on staffing and panel size selection for a practice model. © Health Research and Educational Trust.
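
    The kind of calculation such stylized queueing models formalize can be sketched with the Erlang-C delay formula: given provider service rates and a panel-driven arrival rate, compute the expected wait and hence the largest panel that meets a timeliness target. All parameter values below are illustrative assumptions, not figures from the paper.

      # Erlang-C sketch of the queueing logic behind panel-size decisions: for a
      # team of c providers with service rate mu and a panel generating visit
      # demand lam, compute the expected wait and the largest panel that keeps
      # the wait under a timeliness target.
      from math import factorial

      def erlang_c_wait(lam, mu, c):
          """Expected queueing delay Wq (same time units as 1/mu); inf if unstable."""
          a = lam / mu
          rho = a / c
          if rho >= 1:
              return float("inf")
          p_wait = (a ** c / factorial(c)) / (
              sum(a ** k / factorial(k) for k in range(c)) * (1 - rho)
              + a ** c / factorial(c))
          return p_wait / (c * mu - lam)

      mu = 3.0                         # visits per provider per hour
      visits_per_patient = 0.004       # expected visits per panel patient per hour
      target_wait = 0.5                # hours
      for c in (1, 2, 3):              # e.g. MD only, MD + NP, MD + 2 NPs
          panel = 0
          while erlang_c_wait((panel + 50) * visits_per_patient, mu, c) <= target_wait:
              panel += 50
          print(f"{c} provider(s): max panel ~{panel} patients")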

  18. Strategies for Energy Efficient Resource Management of Hybrid Programming Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Dong; Supinski, Bronis de; Schulz, Martin

    2013-01-01

    Many scientific applications are programmed using hybrid programming models that use both message-passing and shared-memory, due to the increasing prevalence of large-scale systems with multicore, multisocket nodes. Previous work has shown that energy efficiency can be improved using software-controlled execution schemes that consider both the programming model and the power-aware execution capabilities of the system. However, such approaches have focused on identifying optimal resource utilization for one programming model, either shared-memory or message-passing, in isolation. The potential solution space, thus the challenge, increases substantially when optimizing hybrid models since the possible resource configurations increase exponentially. Nonetheless, with the accelerating adoption of hybrid programming models, we increasingly need improved energy efficiency in hybrid parallel applications on large-scale systems. In this work, we present new software-controlled execution schemes that consider the effects of dynamic concurrency throttling (DCT) and dynamic voltage and frequency scaling (DVFS) in the context of hybrid programming models. Specifically, we present predictive models and novel algorithms based on statistical analysis that anticipate application power and time requirements under different concurrency and frequency configurations. We apply our models and methods to the NPB MZ benchmarks and selected applications from the ASC Sequoia codes. Overall, we achieve substantial energy savings (8.74% on average and up to 13.8%) with some performance gain (up to 7.5%) or negligible performance loss.
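
    A toy version of the runtime decision this enables is sketched below: among candidate (concurrency, frequency) configurations with predicted time and power, pick the one that minimizes energy subject to a bound on performance loss. The predictions here are made-up placeholders rather than the paper's statistical models.

      # Toy DCT/DVFS configuration selection: minimize predicted energy while
      # capping the slowdown relative to the fastest configuration.
      predictions = {
          # (threads, GHz): (predicted time in s, predicted power in W)
          (16, 2.4): (10.0, 180.0),
          (16, 1.8): (11.5, 140.0),
          (8, 2.4):  (12.0, 120.0),
          (8, 1.8):  (14.5,  95.0),
      }

      max_slowdown = 1.20                      # allow up to 20% performance loss
      fastest_cfg, (t_fast, p_fast) = min(predictions.items(), key=lambda kv: kv[1][0])
      feasible = {cfg: tp for cfg, tp in predictions.items()
                  if tp[0] <= max_slowdown * t_fast}
      cfg, (t, p) = min(feasible.items(), key=lambda kv: kv[1][0] * kv[1][1])
      print(f"chosen config {cfg}: {t:.1f} s, {t * p:.0f} J "
            f"(fastest config {fastest_cfg} would use {t_fast * p_fast:.0f} J)")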

  19. Improving cardiovascular care through outpatient cardiac rehabilitation: an analysis of payment models that would improve quality and promote use.

    PubMed

    Mead, Holly; Grantham, Sarah; Siegel, Bruce

    2014-01-01

    Much attention has been paid to improving the care of patients with cardiovascular disease by focusing attention on delivery system redesign and payment reforms that encompass the healthcare spectrum, from an acute episode to maintenance of care. However, 1 area of cardiovascular disease care that has received little attention in the advancement of quality is cardiac rehabilitation (CR), a comprehensive secondary prevention program that is significantly underused despite evidence-based guidelines recommending its use. The purpose of this article was to analyze the applicability of 2 payment and reimbursement models, pay-for-performance and bundled payments for episodes of care, that can promote the use of CR. We conclude that a payment model combining elements of both pay-for-performance and episodes of care would increase the use of CR, which would both improve quality and increase efficiency in cardiac care. Specific elements would need to be clearly defined, however, including: (a) how an episode is defined, (b) how to hold providers accountable for the care they provide, (c) how to encourage participation among CR providers, and (d) how to determine an equitable distribution of payment. Demonstrations testing new payment models must be implemented to generate empirical evidence that a melded pay-for-performance and episode-based care payment model will improve quality and efficiency.

  20. Business Model Innovation: A Blueprint for Higher Education

    ERIC Educational Resources Information Center

    Flanagan, Christine

    2012-01-01

    Business model innovation is one of the most challenging components of 21st-century leadership. Making incremental improvements to a business model--creating new efficiencies, expanding into adjacent markets--is hard enough. Developing and experimenting with new business models that truly transform how an institution delivers value (while…

  1. A practical model for the train-set utilization: The case of Beijing-Tianjin passenger dedicated line in China

    PubMed Central

    Li, Xiaomeng; Yang, Zhuo

    2017-01-01

    As a sustainable transportation mode, high-speed railway (HSR) has become an efficient way to meet the huge travel demand. However, due to the high acquisition and maintenance cost, it is impossible to build enough infrastructure and purchase enough train-sets. Great efforts are required to improve the transport capability of HSR. The utilization efficiency of train-sets (the carrying tools of HSR) is one of the most important factors of the transport capacity of HSR. In order to enhance the utilization efficiency of the train-sets, this paper proposed a train-set circulation optimization model to minimize the total connection time. An innovative two-stage approach which contains segments generation and segments combination was designed to solve this model. In order to verify the feasibility of the proposed approach, an experiment was carried out on the Beijing-Tianjin passenger dedicated line to fulfill a train diagram of 174 trips. The model results showed that, compared with the traditional Ant Colony Algorithm (ACA), the utilization efficiency of train-sets can be increased from 43.4% (ACA) to 46.9% (Two-Stage), and 1 train-set can be saved while fulfilling the same transportation tasks. The approach proposed in the study is faster and more stable than the traditional ones; with it, HSR staff can draw up the train-set circulation plan more quickly, and the utilization efficiency of the HSR system is also improved.

  2. The first of a series of high efficiency, high bmep, turbocharged two-stroke cycle diesel engines; the general motors EMD 645FB engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kotlin, J.J.; Dunteman, N.R.; Scott, D.I.

    1983-01-01

    The current Electro-Motive Division 645 Series turbocharged engines are the Model FB and EC. The FB engine combines the highest thermal efficiency with the highest specific output of any EMD engine to date. The FB Series incorporates 16:1 compression ratio with a fire ring piston and an improved turbocharger design. Engine components included in the FB engine provide very high output levels with exceptional reliability. This paper also describes the performance of the lower rated Model EC engine series which feature high thermal efficiency and utilize many engine components well proven in service and basic to the Model FB Series.

  3. Function modeling improves the efficiency of spatial modeling using big data from remote sensing

    Treesearch

    John Hogland; Nathaniel Anderson

    2017-01-01

    Spatial modeling is an integral component of most geographic information systems (GISs). However, conventional GIS modeling techniques can require substantial processing time and storage space and have limited statistical and machine learning functionality. To address these limitations, many have parallelized spatial models using multiple coding libraries and have...

  4. Advances in Applications of Hierarchical Bayesian Methods with Hydrological Models

    NASA Astrophysics Data System (ADS)

    Alexander, R. B.; Schwarz, G. E.; Boyer, E. W.

    2017-12-01

    Mechanistic and empirical watershed models are increasingly used to inform water resource decisions. Growing access to historical stream measurements and data from in-situ sensor technologies has increased the need for improved techniques for coupling models with hydrological measurements. Techniques that account for the intrinsic uncertainties of both models and measurements are especially needed. Hierarchical Bayesian methods provide an efficient modeling tool for quantifying model and prediction uncertainties, including those associated with measurements. Hierarchical methods can also be used to explore spatial and temporal variations in model parameters and uncertainties that are informed by hydrological measurements. We used hierarchical Bayesian methods to develop a hybrid (statistical-mechanistic) SPARROW (SPAtially Referenced Regression On Watershed attributes) model of long-term mean annual streamflow across diverse environmental and climatic drainages in 18 U.S. hydrological regions. Our application illustrates the use of a new generation of Bayesian methods that offer more advanced computational efficiencies than the prior generation. Evaluations of the effects of hierarchical (regional) variations in model coefficients and uncertainties on model accuracy indicate improved prediction accuracies (median of 10-50%), but primarily in humid eastern regions, where model uncertainties are one-third of those in arid western regions. Generally moderate regional variability is observed for most hierarchical coefficients. Accounting for measurement and structural uncertainties, using hierarchical state-space techniques, revealed the effects of spatially-heterogeneous, latent hydrological processes in the "localized" drainages between calibration sites; this improved model precision, with only minor changes in regional coefficients. Our study can inform advances in the use of hierarchical methods with hydrological models to improve their integration with stream measurements.

  5. Integrating empowerment evaluation and quality improvement to achieve healthcare improvement outcomes

    PubMed Central

    Wandersman, Abraham; Alia, Kassandra Ann; Cook, Brittany; Ramaswamy, Rohit

    2015-01-01

    While the body of evidence-based healthcare interventions grows, the ability of health systems to deliver these interventions effectively and efficiently lags behind. Quality improvement approaches, such as the model for improvement, have demonstrated some success in healthcare but their impact has been lessened by implementation challenges. To help address these challenges, we describe the empowerment evaluation approach that has been developed by programme evaluators and a method for its application (Getting To Outcomes (GTO)). We then describe how GTO can be used to implement healthcare interventions. An illustrative healthcare quality improvement example that compares the model for improvement and the GTO method for reducing hospital admissions through improved diabetes care is described. We conclude with suggestions for integrating GTO and the model for improvement. PMID:26178332

  6. The evaluation model of the enterprise energy efficiency based on DPSR.

    PubMed

    Wei, Jin-Yu; Zhao, Xiao-Yu; Sun, Xue-Shan

    2017-05-08

    The reasonable evaluation of enterprise energy efficiency is important for reducing energy consumption. In this paper, an effective energy efficiency evaluation index system is proposed based on DPSR (Driving forces-Pressure-State-Response), taking the actual situation of enterprises into account. This index system, which covers multi-dimensional indexes of enterprise energy efficiency, reveals the complete causal chain: the "driving forces" and "pressure" behind the enterprise energy efficiency "state" caused by the internal and external environment, and the ultimate enterprise energy-saving "response" measures. Furthermore, the ANP (Analytic Network Process) and a cloud model are used to calculate the weight of each index and to evaluate the energy efficiency level. The analysis of BL Company verifies the feasibility of this index system and also provides an effective way to improve energy efficiency.
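
    The ANP weighting and cloud-model evaluation in this record are involved; the snippet below only sketches the final aggregation step, combining normalized DPSR index scores with weights into an overall efficiency level. The indicator names, weights, scores, and grade cutoffs are hypothetical placeholders, not values from the paper:

        # Hypothetical composite scoring step: combine normalized index scores
        # (0-1 scale) with weights (e.g., as produced by an ANP analysis) into a
        # single energy-efficiency level. All values below are illustrative only.
        weights = {"driving_forces": 0.20, "pressure": 0.25, "state": 0.35, "response": 0.20}
        scores  = {"driving_forces": 0.62, "pressure": 0.48, "state": 0.71, "response": 0.55}

        overall = sum(weights[k] * scores[k] for k in weights)

        grades = [(0.8, "high"), (0.6, "medium"), (0.0, "low")]
        level = next(label for cutoff, label in grades if overall >= cutoff)
        print(f"overall score = {overall:.3f} -> efficiency level: {level}")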

  7. Genome-wide association study for feed efficiency traits using SNP and haplotype models

    USDA-ARS?s Scientific Manuscript database

    Feed costs comprise the majority of variable expenses in beef cattle systems making feed efficiency an important economic consideration within the beef industry. Due to the expense of recording individual feed intake phenotypes, a genomic-enabled approach could be advantageous towards improving this...

  8. Consumption Taxes and Economic Efficiency with Idiosyncratic Wage Shocks

    ERIC Educational Resources Information Center

    Nishiyama, Shinichi; Smetters, Kent

    2005-01-01

    Fundamental tax reform is examined in an overlapping-generations model in which heterogeneous agents face idiosyncratic wage shocks and longevity uncertainty. A progressive income tax is replaced with a flat consumption tax. If idiosyncratic wage shocks are insurable (i.e., no risk), this reform improves (interim) efficiency, a result consistent…

  9. Evaluating the Effectiveness of Reference Models in Federating Enterprise Architectures

    ERIC Educational Resources Information Center

    Wilson, Jeffery A.

    2012-01-01

    Agencies need to collaborate with each other to perform missions, improve mission performance, and find efficiencies. The ability of individual government agencies to collaborate with each other for mission and business success and efficiency is complicated by the different techniques used to describe their Enterprise Architectures (EAs).…

  10. Evaluation of the performance of national health systems in 2004-2011: An analysis of 173 countries.

    PubMed

    Sun, Daxin; Ahn, Haksoon; Lievens, Tomas; Zeng, Wu

    2017-01-01

    In an effort to improve health service delivery and achieve better health outcomes, the World Health Organization (WHO) has called for improved efficiency of health care systems to better use the available funding. This study aims to examine the efficiency of national health systems using longitudinal country-level data. Data on health spending per capita, infant mortality rate (IMR), under 5 mortality rate (U5MR), and life expectancy (LE) were collected from or imputed for 173 countries from 2004 through 2011. Data envelopment analyses were used to evaluate the efficiency and regression models were constructed to examine the determinants of efficiency. The average efficiency of the national health system, when examined yearly, was 78.9%, indicating a potential saving of 21.1% of health spending per capita to achieve the same level of health status for children and the entire population, if all countries performed as well as their peers. Additionally, the efficiency of the national health system varied widely among countries. On average, Africa had the lowest efficiency of 67%, while West Pacific countries had the highest efficiency of 86%. National economic status, HIV/AIDS prevalence, health financing mechanisms and governance were found to be statistically associated with the efficiency of national health systems. Taking health financing as an example, a 1% point increase of social security expenses as a percentage of total health expenditure correlated to a 1.9% increase in national health system efficiency. The study underscores the need to enhance efficiency of national health systems to meet population health needs, and highlights the importance of health financing and governance in improving the efficiency of health systems, to ultimately improve health outcomes.
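
    As a rough sketch of the kind of input-oriented, constant-returns-to-scale DEA calculation used in such studies (not the paper's multi-output specification), the snippet below scores synthetic countries with one input and one output using scipy's linear-programming solver; all numbers are made up:

        import numpy as np
        from scipy.optimize import linprog

        # Input-oriented, constant-returns-to-scale DEA sketch: for each country
        # (DMU), find the smallest factor theta by which its inputs could be scaled
        # while a nonnegative combination of peers still matches its outputs.
        # Synthetic data: one input (spending per capita), one output (life expectancy).
        X = np.array([[400, 900, 1500, 2500, 4000]], dtype=float)   # inputs, shape (m, n)
        Y = np.array([[68,   72,   75,   78,   80]], dtype=float)   # outputs, shape (s, n)
        m, n = X.shape
        s = Y.shape[0]

        for j in range(n):
            # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
            c = np.zeros(1 + n)
            c[0] = 1.0
            # Input constraints:  X @ lam - theta * x_j <= 0
            A_in = np.hstack([-X[:, [j]], X])
            # Output constraints: -Y @ lam <= -y_j  (i.e., Y @ lam >= y_j)
            A_out = np.hstack([np.zeros((s, 1)), -Y])
            A_ub = np.vstack([A_in, A_out])
            b_ub = np.concatenate([np.zeros(m), -Y[:, j]])
            res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                          bounds=[(0, None)] * (1 + n), method="highs")
            print(f"DMU {j}: efficiency = {res.x[0]:.3f}")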

  11. Enhancing thermoelectric properties through a three-terminal benzene molecule

    NASA Astrophysics Data System (ADS)

    Sartipi, Z.; Vahedi, J.

    2018-05-01

    Thermoelectric transport through a benzene molecule with three metallic terminals is discussed. Using general local and non-local transport coefficients, we investigated different conductance and thermopower coefficients within the linear-response regime. Based on the Onsager coefficients, which depend on the number of terminals, the efficiency at maximum power is also studied. In the three-terminal setup, tuning the temperature differences yields a large enhancement of the figure of merit. Results also show that the third-terminal model can be useful in improving the efficiency at maximum output power compared with the two-terminal model.

  12. Efficient Unsteady Flow Visualization with High-Order Access Dependencies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jiang; Guo, Hanqi; Yuan, Xiaoru

    We present a novel high-order access dependencies based model for efficient pathline computation in unsteady flow visualization. By taking longer access sequences into account to model more sophisticated data access patterns in particle tracing, our method greatly improves the accuracy and reliability in data access prediction. In our work, high-order access dependencies are calculated by tracing uniformly-seeded pathlines in both forward and backward directions in a preprocessing stage. The effectiveness of our proposed approach is demonstrated through a parallel particle tracing framework with high-order data prefetching. Results show that our method achieves higher data locality and hence improves the efficiency of pathline computation.
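
    A toy illustration of the idea of high-order access dependencies (not the paper's parallel prefetching framework) is to build a k-th-order table of which data block tends to be requested next given the last k blocks accessed along traced pathlines; the block names and the order k below are hypothetical:

        from collections import Counter, defaultdict

        # Toy illustration of "high-order access dependencies": learn, from observed
        # block-access sequences of traced pathlines, which data block tends to be
        # requested next given the last k blocks, then use the table for prefetching.
        K = 2
        training_sequences = [
            ["b1", "b2", "b5", "b6", "b9"],
            ["b1", "b2", "b5", "b7", "b9"],
            ["b3", "b2", "b5", "b6", "b8"],
        ]

        table = defaultdict(Counter)
        for seq in training_sequences:
            for i in range(len(seq) - K):
                history = tuple(seq[i:i + K])
                table[history][seq[i + K]] += 1

        def predict_next(history):
            """Return the most likely next block for a k-length access history."""
            counts = table.get(tuple(history))
            return counts.most_common(1)[0][0] if counts else None

        print(predict_next(["b2", "b5"]))   # -> 'b6' (observed twice vs 'b7' once)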

  13. SCM: A method to improve network service layout efficiency with network evolution.

    PubMed

    Zhao, Qi; Zhang, Chuanhao; Zhao, Zheng

    2017-01-01

    Network services are an important component of the Internet and are used to extend network functions for third-party developers. Network function virtualization (NFV) can improve the speed and flexibility of network service deployment. However, as the network evolves, the network service layout may become inefficient. To address this problem, this paper proposes a service chain migration (SCM) method within the "software defined network + network function virtualization" (SDN+NFV) framework, which migrates service chains to adapt to network evolution and improves the efficiency of the network service layout. SCM is modeled as an integer linear programming problem and solved via particle swarm optimization. An SCM prototype system is designed based on an SDN controller. Experiments demonstrate that SCM can efficiently reduce network traffic cost and energy consumption.

  14. Bayesian hierarchical models for smoothing in two-phase studies, with application to small area estimation.

    PubMed

    Ross, Michelle; Wakefield, Jon

    2015-10-01

    Two-phase study designs are appealing since they allow for the oversampling of rare sub-populations which improves efficiency. In this paper we describe a Bayesian hierarchical model for the analysis of two-phase data. Such a model is particularly appealing in a spatial setting in which random effects are introduced to model between-area variability. In such a situation, one may be interested in estimating regression coefficients or, in the context of small area estimation, in reconstructing the population totals by strata. The efficiency gains of the two-phase sampling scheme are compared to standard approaches using 2011 birth data from the research triangle area of North Carolina. We show that the proposed method can overcome small sample difficulties and improve on existing techniques. We conclude that the two-phase design is an attractive approach for small area estimation.

  15. Clinic Workflow Simulations using Secondary EHR Data

    PubMed Central

    Hribar, Michelle R.; Biermann, David; Read-Brown, Sarah; Reznick, Leah; Lombardi, Lorinna; Parikh, Mansi; Chamberlain, Winston; Yackel, Thomas R.; Chiang, Michael F.

    2016-01-01

    Clinicians today face increased patient loads, decreased reimbursements and potential negative productivity impacts of using electronic health records (EHR), but have little guidance on how to improve clinic efficiency. Discrete event simulation models are powerful tools for evaluating clinical workflow and improving efficiency, particularly when they are built from secondary EHR timing data. The purpose of this study is to demonstrate that these simulation models can be used for resource allocation decision making as well as for evaluating novel scheduling strategies in outpatient ophthalmology clinics. Key findings from this study are that: 1) secondary use of EHR timestamp data in simulation models represents clinic workflow, 2) simulations provide insight into the best allocation of resources in a clinic, 3) simulations provide critical information for schedule creation and decision making by clinic managers, and 4) simulation models built from EHR data are potentially generalizable. PMID:28269861
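
    A minimal discrete-event simulation of a single-provider clinic, in the spirit of the models described here, can be sketched with the simpy package; the arrival and exam-time parameters below are made up, whereas the study derives its timings from EHR timestamp data:

        import random
        import simpy

        # Minimal discrete-event clinic sketch: patients arrive at random intervals,
        # wait for a single provider, and are examined. Timing parameters are made
        # up; in the study above they would come from EHR timestamp data.
        RANDOM_SEED, N_PATIENTS = 42, 20
        MEAN_INTERARRIVAL, MEAN_EXAM = 12.0, 15.0   # minutes (hypothetical)

        waits = []

        def patient(env, provider):
            arrival = env.now
            with provider.request() as req:        # join the queue for the provider
                yield req
                waits.append(env.now - arrival)    # time spent waiting
                yield env.timeout(random.expovariate(1.0 / MEAN_EXAM))

        def arrivals(env, provider):
            for _ in range(N_PATIENTS):
                env.process(patient(env, provider))
                yield env.timeout(random.expovariate(1.0 / MEAN_INTERARRIVAL))

        random.seed(RANDOM_SEED)
        env = simpy.Environment()
        provider = simpy.Resource(env, capacity=1)
        env.process(arrivals(env, provider))
        env.run()

        print(f"mean wait: {sum(waits) / len(waits):.1f} min over {len(waits)} patients")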

  16. Aerothermal modeling program, phase 2. Element B: Flow interaction experiment

    NASA Technical Reports Server (NTRS)

    Nikjooy, M.; Mongia, H. C.; Murthy, S. N. B.; Sullivan, J. P.

    1986-01-01

    The design process was improved and the efficiency, life, and maintenance costs of the turbine engine hot section were enhanced. Recently, there has been much emphasis on the need for improved numerical codes for the design of efficient combustors. The development of improved computational codes requires an experimentally obtained data base to be used as test cases for assessing the accuracy of the computations. The purpose of Element B is to establish benchmark-quality velocity and scalar measurements of the flow interaction of circular jets with swirling flow typical of that in the dome region of an annular combustor. In addition to the detailed experimental effort, extensive computations of the swirling flows are to be compared with the measurements for the purpose of assessing the accuracy of current and advanced turbulence and scalar transport models.

  17. [Analysis of the stability and adaptability of near infrared spectra qualitative analysis model].

    PubMed

    Cao, Wu; Li, Wei-jun; Wang, Ping; Zhang, Li-ping

    2014-06-01

    The stability and adaptability of near infrared spectra qualitative analysis models were studied. Separate modeling can significantly improve the stability and adaptability of a model, but its ability to improve adaptability is limited. Joint modeling can improve not only the adaptability but also the stability of the model; at the same time, compared with separate modeling, it can shorten the modeling time, reduce the modeling workload, extend the term of validity of the model, and improve modeling efficiency. The model adaptability experiment shows that the correct recognition rate of the separate modeling method is relatively low and cannot meet the requirements of application, whereas the joint modeling method reaches a correct recognition rate of 90% and significantly enhances the recognition effect. The model stability experiment shows that the identification results of the jointly built model are better than those of the separately built model, demonstrating good application value.

  18. Analyzing Whether Countries Are Equally Efficient at Improving Longevity for Men and Women

    PubMed Central

    Nandi, Arijit; Mendoza Rodríguez, José M.; Heymann, Jody

    2014-01-01

    Objectives. We examined the efficiency of country-specific health care spending in improving life expectancies for men and women. Methods. We estimated efficiencies of health care spending for 27 Organisation for Economic Co-operation and Development (OECD) countries during the period 1991 to 2007 using multivariable regression models, including country fixed-effects and controlling for time-varying levels of national social expenditures, economic development, and health behaviors. Results. Findings indicated robust differences in health-spending efficiency. A 1% annual increase in health expenditures was associated with percent changes in life expectancy ranging from 0.020 in the United States (95% confidence interval [CI] = 0.008, 0.032) to 0.121 in Germany (95% CI = 0.099, 0.143). Health-spending increases were associated with greater life expectancy improvements for men than for women in nearly every OECD country. Conclusions. This is the first study to our knowledge to estimate the effect of country-specific health expenditures on life expectancies of men and women. Future work understanding the determinants of these differences has the potential to improve the overall efficiency and equity of national health systems. PMID:24328639

  19. Design Guidelines for High-Performance Particle-Based Photoanodes for Water Splitting: Lanthanum Titanium Oxynitride as a Model.

    PubMed

    Landsmann, Steve; Maegli, Alexandra E; Trottmann, Matthias; Battaglia, Corsin; Weidenkaff, Anke; Pokrant, Simone

    2015-10-26

    Semiconductor powders are perfectly suited for the scalable fabrication of particle-based photoelectrodes, which can be used to split water using the sun as a renewable energy source. This systematic study focuses on variation of the electrode design using LaTiO2N as a model system. We present the influence of particle morphology on charge separation and transport properties combined with post-treatment procedures, such as necking and size-dependent co-catalyst loading. Five rules are proposed to guide the design of high-performance particle-based photoanodes by adding or varying several process steps. We also specify how much efficiency improvement can be achieved by each of the steps. For example, implementation of a connectivity network and surface-area enhancement leads to a thirty-fold improvement in efficiency, and co-catalyst loading improves efficiency by a factor of seven. Some of these guidelines can be adapted to non-particle-based photoelectrodes. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Microclimate landscape design at southern integrated terminal Bandar Tasik Selatan, Kuala Lumpur

    NASA Astrophysics Data System (ADS)

    Phin, L. H.; Krisantia, I.

    2018-01-01

    Bandar Tasik Selatan is an integrated transport terminal with high energy consumption, high carbon emissions, and poor linkage; however, its microclimate can be improved through landscape design. This paper studies how to achieve energy efficiency and improve the microclimate in an urban area. The research area is the southern integrated terminal at Bandar Tasik Selatan, Kuala Lumpur, Malaysia. The work is carried out as a case study, and the microclimate is analyzed using a system modeling method in which the energy budget of the microclimate at a site is treated as a balance between the radiant energy supplied and the energy removed by all consumers. The findings indicate that the microclimatic components that can be modified through landscape design are solar radiation, wind, and precipitation, which can create thermal comfort, energy efficiency, and other benefits. The research recommends providing more green space to achieve energy efficiency and improve the site microclimate, introducing vertical landscaping and proper planting selection to improve air quality, introducing green energy as part of the power supply, and promoting the integration of the terminal building and rail systems by unifying them with softscape.

  1. Optimizing NEURON Simulation Environment Using Remote Memory Access with Recursive Doubling on Distributed Memory Systems.

    PubMed

    Shehzad, Danish; Bozkuş, Zeki

    2016-01-01

    Increasing complexity of neuronal network models has escalated efforts to make the NEURON simulation environment efficient. Computational neuroscientists divide the equations into subnets among multiple processors to achieve better hardware performance. On parallel machines, interprocessor spike exchange consumes a large share of the overall simulation time for neuronal networks. NEURON uses the Message Passing Interface (MPI) for communication between processors, and the MPI_Allgather collective is exercised for spike exchange after each interval across distributed memory systems. Although increasing the number of processors achieves greater concurrency and better performance, it adversely affects MPI_Allgather, which increases the communication time between processors. This necessitates improving the communication methodology to decrease the spike-exchange time over distributed memory systems. This work improves the MPI_Allgather method using Remote Memory Access (RMA) by moving from two-sided to one-sided communication, and the use of a recursive-doubling mechanism achieves efficient communication between the processors in precise steps. This approach enhances communication concurrency and improves overall runtime, making NEURON more efficient for simulation of large neuronal network models.
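
    The recursive-doubling pattern itself can be illustrated without any MPI or RMA calls: with P processes, each rank exchanges its accumulated data with the partner whose rank differs in one bit per step, so a full gather completes in log2(P) steps. The sketch below is a pure-Python simulation of that exchange pattern, not NEURON's actual implementation:

        import math

        # Pure-Python illustration of recursive doubling (no MPI/RMA calls): each
        # rank starts with its own spike data; at step k, rank r exchanges everything
        # it has accumulated with partner r XOR 2**k, so all ranks hold all data
        # after log2(P) steps. P must be a power of two for this simple form.
        P = 8
        local = {rank: {rank: f"spikes-from-{rank}"} for rank in range(P)}

        for k in range(int(math.log2(P))):
            snapshot = {rank: dict(data) for rank, data in local.items()}
            for rank in range(P):
                partner = rank ^ (1 << k)          # pairwise partner at this step
                local[rank].update(snapshot[partner])

        assert all(len(local[rank]) == P for rank in range(P))
        print(f"all {P} ranks hold all spike buffers after {int(math.log2(P))} steps")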

  2. Optimizing NEURON Simulation Environment Using Remote Memory Access with Recursive Doubling on Distributed Memory Systems

    PubMed Central

    Bozkuş, Zeki

    2016-01-01

    Increasing complexity of neuronal network models has escalated efforts to make the NEURON simulation environment efficient. Computational neuroscientists divide the equations into subnets among multiple processors to achieve better hardware performance. On parallel machines, interprocessor spike exchange consumes a large share of the overall simulation time for neuronal networks. NEURON uses the Message Passing Interface (MPI) for communication between processors, and the MPI_Allgather collective is exercised for spike exchange after each interval across distributed memory systems. Although increasing the number of processors achieves greater concurrency and better performance, it adversely affects MPI_Allgather, which increases the communication time between processors. This necessitates improving the communication methodology to decrease the spike-exchange time over distributed memory systems. This work improves the MPI_Allgather method using Remote Memory Access (RMA) by moving from two-sided to one-sided communication, and the use of a recursive-doubling mechanism achieves efficient communication between the processors in precise steps. This approach enhances communication concurrency and improves overall runtime, making NEURON more efficient for simulation of large neuronal network models. PMID:27413363

  3. Teaching Improvement Model Designed with DEA Method and Management Matrix

    ERIC Educational Resources Information Center

    Montoneri, Bernard

    2014-01-01

    This study uses student evaluation of teachers to design a teaching improvement matrix based on teaching efficiency and performance by combining management matrix and data envelopment analysis. This matrix is designed to formulate suggestions to improve teaching. The research sample consists of 42 classes of freshmen following a course of English…

  4. Nonparametric Transfer Function Models

    PubMed Central

    Liu, Jun M.; Chen, Rong; Yao, Qiwei

    2009-01-01

    In this paper a class of nonparametric transfer function models is proposed to model nonlinear relationships between ‘input’ and ‘output’ time series. The transfer function is smooth with unknown functional forms, and the noise is assumed to be a stationary autoregressive-moving average (ARMA) process. The nonparametric transfer function is estimated jointly with the ARMA parameters. By modeling the correlation in the noise, the transfer function can be estimated more efficiently. The parsimonious ARMA structure improves the estimation efficiency in finite samples. The asymptotic properties of the estimators are investigated. The finite-sample properties are illustrated through simulations and one empirical example. PMID:20628584

  5. Trans-dimensional matched-field geoacoustic inversion with hierarchical error models and interacting Markov chains.

    PubMed

    Dettmer, Jan; Dosso, Stan E

    2012-10-01

    This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allows inversion without assuming any particular parametrization by relaxing model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling statistical error-distribution parameters, including correlated errors (covariance), by applying a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for uncertainty due to prior model choice (layering and data error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.

  6. Supernovae Discovery Efficiency

    NASA Astrophysics Data System (ADS)

    John, Colin

    2018-01-01

    Abstract: We present supernova (SN) search efficiency measurements for recent Hubble Space Telescope (HST) surveys. Efficiency is a key component of any search and is an important parameter as a correction factor for SN rates. To achieve an accurate value for efficiency, many supernovae need to be discoverable in surveys. This cannot be achieved from real SN alone, due to their scarcity, so fake SN are planted. These fake supernovae, built with realism as the goal, yield an understanding of efficiency based on brightness and on position relative to other celestial objects. To improve realism, we built a more accurate model of supernovae using a point-spread function. A further improvement to realism is planting these objects close to galaxies with varied brightness, magnitude, local galactic brightness, and redshift. Once these are planted, a very accurate SN is visible and discoverable by the searcher. It is important to identify the factors that affect this discovery efficiency, since exploring them yields a more accurate correction factor. Further inquiries into efficiency give us a better understanding of image processing, searching techniques, and survey strategies, and result in an overall higher likelihood of finding these events in future surveys with the Hubble, James Webb, and WFIRST telescopes. Once efficiency is measured and refined with many unique surveys, it factors into measurements of SN rates versus redshift. By comparing SN rates versus redshift against the star formation rate, we can test models to determine how long star systems take from the point of inception to explosion (the delay time distribution). This delay time distribution is compared to SN progenitor models to get an accurate idea of what these stars were like before their deaths.

  7. Essays in energy, environment and technological change

    NASA Astrophysics Data System (ADS)

    Zhou, Yichen Christy

    This dissertation studies technological change in the context of energy and environmental economics. Technology plays a key role in reducing greenhouse gas emissions from the transportation sector. Chapter 1 estimates a structural model of the car industry that allows for endogenous product characteristics to investigate how gasoline taxes, R&D subsidies and competition affect fuel efficiency and vehicle prices in the medium-run, both through car-makers' decisions to adopt technologies and through their investments in knowledge capital. I use technology adoption and automotive patents data for 1986-2006 to estimate this model. I show that 92% of fuel efficiency improvements between 1986 and 2006 were driven by technology adoption, while the role of knowledge capital is largely to reduce the marginal production costs of fuel-efficient cars. A counterfactual predicts that an additional $1/gallon gasoline tax in 2006 would have increased the technology adoption rate, and raised average fuel efficiency by 0.47 miles/gallon, twice the annual fuel efficiency improvement in 2003-2006. An R&D subsidy that would reduce the marginal cost of knowledge capital by 25% in 2006 would have raised investment in knowledge capital. This subsidy would have raised fuel efficiency only by 0.06 miles/gallon in 2006, but would have increased variable profits by $2.3 billion over all firms that year. Passenger vehicle fuel economy standards in the United States will require substantial improvements in new vehicle fuel economy over the next decade. Economic theory suggests that vehicle manufacturers adopt greater fuel-saving technologies for vehicles with larger market size. Chapter 2 documents a strong connection between market size, measured by sales, and technology adoption. Using variation in consumer demographics and purchasing patterns to account for the endogeneity of market size, we find that a 10 percent increase in market size raises vehicle fuel efficiency by 0.3 percent, as compared to a mean improvement of 1.4 percent per year over 1997-2013. Historically, fuel price and demographic-driven market size changes have had large effects on technology adoption. Furthermore, fuel taxes would induce firms to adopt fuel-saving technologies on their most efficient cars, thereby polarizing the fuel efficiency distribution of the new vehicle fleet.

  8. Experience with a vectorized general circulation weather model on Star-100

    NASA Technical Reports Server (NTRS)

    Soll, D. B.; Habra, N. R.; Russell, G. L.

    1977-01-01

    A version of an atmospheric general circulation model was vectorized to run on a CDC STAR 100. The numerical model was coded and run in two different vector languages, CDC and LRLTRAN. A factor of 10 speed improvement over an IBM 360/95 was realized. Efficient use of the STAR machine required some redesigning of algorithms and logic. This precludes the application of vectorizing compilers on the original scalar code to achieve the same results. Vector languages permit a more natural and efficient formulation for such numerical codes.

  9. Measuring the performance of Internet companies using a two-stage data envelopment analysis model

    NASA Astrophysics Data System (ADS)

    Cao, Xiongfei; Yang, Feng

    2011-05-01

    In exploring the business operation of Internet companies, few researchers have used data envelopment analysis (DEA) to evaluate their performance. Since the Internet companies have a two-stage production process: marketability and profitability, this study employs a relational two-stage DEA model to assess the efficiency of the 40 dot com firms. The results show that our model performs better in measuring efficiency, and is able to discriminate the causes of inefficiency, thus helping business management to be more effective through providing more guidance to business performance improvement.

  10. Polyglutamine Disease Modeling: Epitope Based Screen for Homologous Recombination using CRISPR/Cas9 System.

    PubMed

    An, Mahru C; O'Brien, Robert N; Zhang, Ningzhe; Patra, Biranchi N; De La Cruz, Michael; Ray, Animesh; Ellerby, Lisa M

    2014-04-15

    We have previously reported the genetic correction of Huntington's disease (HD) patient-derived induced pluripotent stem cells using traditional homologous recombination (HR) approaches. To extend this work, we have adopted a CRISPR-based genome editing approach to improve the efficiency of recombination in order to generate allelic isogenic HD models in human cells. Incorporation of a rapid antibody-based screening approach to measure recombination provides a powerful method to determine relative efficiency of genome editing for modeling polyglutamine diseases or understanding factors that modulate CRISPR/Cas9 HR.

  11. The origin of consistent protein structure refinement from structural averaging.

    PubMed

    Park, Hahnbeom; DiMaio, Frank; Baker, David

    2015-06-02

    Recent studies have shown that explicit solvent molecular dynamics (MD) simulation followed by structural averaging can consistently improve protein structure models. We find that improvement upon averaging is not limited to explicit water MD simulation, as consistent improvements are also observed for more efficient implicit solvent MD or Monte Carlo minimization simulations. To determine the origin of these improvements, we examine the changes in model accuracy brought about by averaging at the individual residue level. We find that the improvement in model quality from averaging results from the superposition of two effects: a dampening of deviations from the correct structure in the least well modeled regions, and a reinforcement of consistent movements towards the correct structure in better modeled regions. These observations are consistent with an energy landscape model in which the magnitude of the energy gradient toward the native structure decreases with increasing distance from the native state. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. HOOPER BAY HOUSING ANALYSIS AND ENERGY FEASIBILITY REPORT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    SEA LION CORPORATION; COLD CLIMATE HOUSING RESEARCH CENTER; SOLUTIONS FOR HEALTHY BREATHING

    2012-12-30

    Sea Lion applied for and received a grant from the Department of Energy (DOE) towards this end titled Energy Efficiency Development and Deployment in Indian Country. The initial objectives of the Hooper Bay Energy Efficiency Feasibility Study were to demonstrate a 30% reduction in residential/commercial energy usage and identify the economic benefits of implementing energy efficiency measures to the Tribe through: (1) partnering with Whitney Construction and Solutions for Healthy Breathing in the training and hire of 2 local energy assessors to conduct energy audits of 9 representative housing models and 2 commercial units in the community; these homes are representative of 52 homes constructed across different eras; (2) partnering with Cold Climate Housing Research Center to document current electrical and heating energy consumption and analyze data for a final feasibility report; (3) assessing the economics of electricity & heating fuel usage; (4) projecting energy savings or fossil fuel reduction by modeling of improvement scenarios and cost feasibility. The following two objectives will be completed after the publication of this report: (5) the development of materials lists for energy efficiency improvements; and (6) identifying financing options for the follow-up energy efficiency implementation phase.

  13. Relating ranging ecology, limb length, and locomotor economy in terrestrial animals.

    PubMed

    Pontzer, Herman

    2012-03-07

    Ecomorphological analyses have identified a number of important evolutionary trends in vertebrate limb design, but the relationships between daily travel distance, locomotor ecology, and limb length in terrestrial animals remain poorly understood. In this paper I model the net rate of energy intake as a function of foraging efficiency, and thus of locomotor economy; improved economy leads to greater net energy intake. However, the relationship between locomotor economy and net intake is highly dependent on foraging efficiency; only species with low foraging efficiencies experience strong selection pressure for improved locomotor economy and increased limb length. Examining 237 terrestrial species, I find that nearly all taxa obtain sufficiently high foraging efficiencies that selection for further increases in economy is weak. Thus selection pressures for increased economy and limb length among living terrestrial animals may be relatively weak and similar in magnitude across ecologically diverse species. The Economy Selection Pressure model for locomotor economy may be useful in investigating the evolution of limb design in early terrestrial taxa and the coevolution of foraging ecology and locomotor anatomy in lineages with low foraging efficiencies. Copyright © 2011 Elsevier Ltd. All rights reserved.

  14. Multi-focused microlens array optimization and light field imaging study based on Monte Carlo method.

    PubMed

    Li, Tian-Jiao; Li, Sai; Yuan, Yuan; Liu, Yu-Dong; Xu, Chuan-Long; Shuai, Yong; Tan, He-Ping

    2017-04-03

    Plenoptic cameras are used for capturing flames in studies of high-temperature phenomena. However, simulations of plenoptic camera models can be used prior to the experiment to improve experimental efficiency and reduce cost. In this work, microlens arrays, which are based on the established light field camera model, are optimized into a hexagonal structure with three types of microlenses. With this improved plenoptic camera model, light field imaging of static objects and flames is simulated using the calibrated parameters of the Raytrix camera (R29). The optimized models improve the image resolution, imaging screen utilization, and shooting range of depth of field.

  15. Connecting Biochemical Photosynthesis Models with Crop Models to Support Crop Improvement

    PubMed Central

    Wu, Alex; Song, Youhong; van Oosterom, Erik J.; Hammer, Graeme L.

    2016-01-01

    The next advance in field crop productivity will likely need to come from improving crop use efficiency of resources (e.g., light, water, and nitrogen), aspects of which are closely linked with overall crop photosynthetic efficiency. Progress in genetic manipulation of photosynthesis is confounded by uncertainties of consequences at crop level because of difficulties connecting across scales. Crop growth and development simulation models that integrate across biological levels of organization and use a gene-to-phenotype modeling approach may present a way forward. There has been a long history of development of crop models capable of simulating dynamics of crop physiological attributes. Many crop models incorporate canopy photosynthesis (source) as a key driver for crop growth, while others derive crop growth from the balance between source- and sink-limitations. Modeling leaf photosynthesis has progressed from empirical modeling via light response curves to a more mechanistic basis, having clearer links to the underlying biochemical processes of photosynthesis. Cross-scale modeling that connects models at the biochemical and crop levels and utilizes developments in upscaling leaf-level models to canopy models has the potential to bridge the gap between photosynthetic manipulation at the biochemical level and its consequences on crop productivity. Here we review approaches to this emerging cross-scale modeling framework and reinforce the need for connections across levels of modeling. Further, we propose strategies for connecting biochemical models of photosynthesis into the cross-scale modeling framework to support crop improvement through photosynthetic manipulation. PMID:27790232

  16. Connecting Biochemical Photosynthesis Models with Crop Models to Support Crop Improvement.

    PubMed

    Wu, Alex; Song, Youhong; van Oosterom, Erik J; Hammer, Graeme L

    2016-01-01

    The next advance in field crop productivity will likely need to come from improving crop use efficiency of resources (e.g., light, water, and nitrogen), aspects of which are closely linked with overall crop photosynthetic efficiency. Progress in genetic manipulation of photosynthesis is confounded by uncertainties of consequences at crop level because of difficulties connecting across scales. Crop growth and development simulation models that integrate across biological levels of organization and use a gene-to-phenotype modeling approach may present a way forward. There has been a long history of development of crop models capable of simulating dynamics of crop physiological attributes. Many crop models incorporate canopy photosynthesis (source) as a key driver for crop growth, while others derive crop growth from the balance between source- and sink-limitations. Modeling leaf photosynthesis has progressed from empirical modeling via light response curves to a more mechanistic basis, having clearer links to the underlying biochemical processes of photosynthesis. Cross-scale modeling that connects models at the biochemical and crop levels and utilizes developments in upscaling leaf-level models to canopy models has the potential to bridge the gap between photosynthetic manipulation at the biochemical level and its consequences on crop productivity. Here we review approaches to this emerging cross-scale modeling framework and reinforce the need for connections across levels of modeling. Further, we propose strategies for connecting biochemical models of photosynthesis into the cross-scale modeling framework to support crop improvement through photosynthetic manipulation.

  17. Advanced Control Considerations for Turbofan Engine Design

    NASA Technical Reports Server (NTRS)

    Connolly, Joseph W.; Csank, Jeffrey T.; Chicatelli, Amy

    2016-01-01

    This paper covers the application of a model-based engine control (MBEC) methodology featuring a self tuning on-board model for an aircraft turbofan engine simulation. The nonlinear engine model is capable of modeling realistic engine performance, allowing for a verification of the advanced control methodology over a wide range of operating points and life cycle conditions. The on-board model is a piece-wise linear model derived from the nonlinear engine model and updated using an optimal tuner Kalman Filter estimation routine, which enables the on-board model to self-tune to account for engine performance variations. MBEC is used here to show how advanced control architectures can improve efficiency during the design phase of a turbofan engine by reducing conservative operability margins. The operability margins that can be reduced, such as stall margin, can expand the engine design space and offer potential for efficiency improvements. Application of MBEC architecture to a nonlinear engine simulation is shown to reduce the thrust specific fuel consumption by approximately 1% over the baseline design, while maintaining safe operation of the engine across the flight envelope.
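
    The optimal tuner Kalman filter in this record adapts an on-board engine model; the snippet below is only a generic linear Kalman filter predict/update loop in NumPy with hypothetical system matrices, meant to show the estimation step such a tuner builds on:

        import numpy as np

        # Generic linear Kalman filter sketch (not the engine-specific tuner): the
        # state could represent tuning parameters of an on-board model that are
        # updated from noisy sensor measurements. All matrices below are made up.
        A = np.array([[1.0, 0.1], [0.0, 1.0]])     # state transition
        H = np.array([[1.0, 0.0]])                 # measurement model
        Q = 1e-4 * np.eye(2)                       # process noise covariance
        R = np.array([[0.05]])                     # measurement noise covariance

        x = np.zeros((2, 1))                       # state estimate
        P = np.eye(2)                              # estimate covariance

        rng = np.random.default_rng(1)
        true_x = np.array([[0.5], [0.02]])

        for _ in range(50):
            true_x = A @ true_x
            z = H @ true_x + rng.normal(0, np.sqrt(R[0, 0]), (1, 1))   # noisy sensor

            # Predict
            x = A @ x
            P = A @ P @ A.T + Q
            # Update
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
            x = x + K @ (z - H @ x)
            P = (np.eye(2) - K @ H) @ P

        print("estimated state:", x.ravel(), " true state:", true_x.ravel())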

  18. Improved representations of coupled soil–canopy processes in the CABLE land surface model (Subversion revision 3432)

    DOE PAGES

    Haverd, Vanessa; Cuntz, Matthias; Nieradzik, Lars P.; ...

    2016-09-07

    CABLE is a global land surface model, which has been used extensively in offline and coupled simulations. While CABLE performs well in comparison with other land surface models, results are impacted by decoupling of transpiration and photosynthesis fluxes under drying soil conditions, often leading to implausibly high water use efficiencies. Here, we present a solution to this problem, ensuring that modelled transpiration is always consistent with modelled photosynthesis, while introducing a parsimonious single-parameter drought response function which is coupled to root water uptake. We further improve CABLE's simulation of coupled soil–canopy processes by introducing an alternative hydrology model with a physically accurate representation of coupled energy and water fluxes at the soil–air interface, including a more realistic formulation of transfer under atmospherically stable conditions within the canopy and in the presence of leaf litter. The effects of these model developments are assessed using data from 18 stations from the global eddy covariance FLUXNET database, selected to span a large climatic range. Here, marked improvements are demonstrated, with root mean squared errors for monthly latent heat fluxes and water use efficiencies being reduced by 40 %. Results highlight the important roles of deep soil moisture in mediating drought response and litter in dampening soil evaporation.

  19. Risk Prediction Score for HIV Infection: Development and Internal Validation with Cross-Sectional Data from Men Who Have Sex with Men in China.

    PubMed

    Yin, Lu; Zhao, Yuejuan; Peratikos, Meridith Blevins; Song, Liang; Zhang, Xiangjun; Xin, Ruolei; Sun, Zheya; Xu, Yunan; Zhang, Li; Hu, Yifei; Hao, Chun; Ruan, Yuhua; Shao, Yiming; Vermund, Sten H; Qian, Han-Zhu

    2018-05-21

    Receptive anal intercourse, multiple partners, condomless sex, sexually transmitted infections (STIs), and drug/alcohol addiction are familiar factors that correlate with increased human immunodeficiency virus (HIV) risk among men who have sex with men (MSM). To improve estimation of HIV acquisition risk, we created a composite score using questions from a routine survey of 3588 MSM in Beijing, China. The HIV prevalence was 13.4%. A risk scoring tool using penalized maximum likelihood multivariable logistic regression modeling was developed, deploying backward step-down variable selection to obtain a reduced-form model. The full penalized model included 19 sexual predictors, while the reduced-form model had 12 predictors. Both models calibrated well; bootstrap-corrected c-indices were 0.70 (full model) and 0.71 (reduced-form model). Non-Beijing residence, short-term living in Beijing, illegal drug use, multiple male sexual partners, receptive anal sex, inconsistent condom use, alcohol consumption before sex, and syphilis infection were the strongest predictors of HIV infection. Discriminating higher-risk MSM for targeted HIV prevention programming using a validated risk score could improve the efficiency of resource deployment for educational and risk reduction programs. A valid risk score can also help identify and enroll higher-risk persons into prevention and vaccine clinical trials, which would improve trial cost-efficiency.
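
    A rough scikit-learn sketch of a penalized logistic risk score with a held-out c-index check is shown below; the predictors, effect sizes, and sample are synthetic stand-ins, not the Beijing survey variables, and L2 shrinkage stands in for the penalized maximum likelihood used in the study:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        # Sketch of a penalized logistic risk score on synthetic binary predictors
        # (e.g., residence, drug use, condom use, syphilis status); variable names,
        # effect sizes, and prevalence are hypothetical, not the study's data.
        rng = np.random.default_rng(0)
        n, p = 3000, 12
        X = rng.integers(0, 2, size=(n, p)).astype(float)
        true_beta = rng.normal(0, 0.6, p)
        logit = -2.0 + X @ true_beta
        y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        # The L2 penalty plays the role of shrinkage in penalized maximum likelihood
        # estimation; C controls the penalty strength.
        model = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
        model.fit(X_tr, y_tr)

        # For a binary outcome, the c-index equals the area under the ROC curve.
        c_index = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print(f"held-out c-index: {c_index:.2f}")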

  20. Improved representations of coupled soil–canopy processes in the CABLE land surface model (Subversion revision 3432)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haverd, Vanessa; Cuntz, Matthias; Nieradzik, Lars P.

    CABLE is a global land surface model, which has been used extensively in offline and coupled simulations. While CABLE performs well in comparison with other land surface models, results are impacted by decoupling of transpiration and photosynthesis fluxes under drying soil conditions, often leading to implausibly high water use efficiencies. Here, we present a solution to this problem, ensuring that modelled transpiration is always consistent with modelled photosynthesis, while introducing a parsimonious single-parameter drought response function which is coupled to root water uptake. We further improve CABLE's simulation of coupled soil–canopy processes by introducing an alternative hydrology model with a physically accurate representation of coupled energy and water fluxes at the soil–air interface, including a more realistic formulation of transfer under atmospherically stable conditions within the canopy and in the presence of leaf litter. The effects of these model developments are assessed using data from 18 stations from the global eddy covariance FLUXNET database, selected to span a large climatic range. Here, marked improvements are demonstrated, with root mean squared errors for monthly latent heat fluxes and water use efficiencies being reduced by 40 %. Results highlight the important roles of deep soil moisture in mediating drought response and litter in dampening soil evaporation.

  1. Improved representations of coupled soil-canopy processes in the CABLE land surface model (Subversion revision 3432)

    NASA Astrophysics Data System (ADS)

    Haverd, Vanessa; Cuntz, Matthias; Nieradzik, Lars P.; Harman, Ian N.

    2016-09-01

    CABLE is a global land surface model, which has been used extensively in offline and coupled simulations. While CABLE performs well in comparison with other land surface models, results are impacted by decoupling of transpiration and photosynthesis fluxes under drying soil conditions, often leading to implausibly high water use efficiencies. Here, we present a solution to this problem, ensuring that modelled transpiration is always consistent with modelled photosynthesis, while introducing a parsimonious single-parameter drought response function which is coupled to root water uptake. We further improve CABLE's simulation of coupled soil-canopy processes by introducing an alternative hydrology model with a physically accurate representation of coupled energy and water fluxes at the soil-air interface, including a more realistic formulation of transfer under atmospherically stable conditions within the canopy and in the presence of leaf litter. The effects of these model developments are assessed using data from 18 stations from the global eddy covariance FLUXNET database, selected to span a large climatic range. Marked improvements are demonstrated, with root mean squared errors for monthly latent heat fluxes and water use efficiencies being reduced by 40 %. Results highlight the important roles of deep soil moisture in mediating drought response and litter in dampening soil evaporation.

  2. Improving the efficiency of the cardiac catheterization laboratories through understanding the stochastic behavior of the scheduled procedures.

    PubMed

    Stepaniak, Pieter S; Soliman Hamad, Mohamed A; Dekker, Lukas R C; Koolen, Jacques J

    2014-01-01

    In this study, we sought to analyze the stochastic behavior of Catheterization Laboratory (Cath Lab) procedures in our institution. Statistical models may help to improve estimated case durations to support management in the cost-effective use of expensive surgical resources. We retrospectively analyzed all the procedures performed in the Cath Labs in 2012. The duration of procedures is strictly positive (larger than zero) and typically has a large minimum duration. Because of the strictly positive character of Cath Lab procedures, a lognormal model may be a desirable fit. Having a minimum duration requires an estimate of the threshold (shift) parameter of the lognormal model; therefore, the 3-parameter lognormal model is of interest. To avoid heterogeneous groups of observations, we tested every group-cardiologist-procedure combination against the normal, 2-parameter, and 3-parameter lognormal distributions. The total number of elective and emergency procedures performed was 6,393 (8,186 h). The final analysis included 6,135 procedures (7,779 h). Electrophysiology (intervention) procedures fit the 3-parameter lognormal model in 86.1% (80.1%) of cases. Using Friedman test statistics, we conclude that the 3-parameter lognormal model is superior to the 2-parameter lognormal model, and the 2-parameter lognormal model is superior to the normal model. Cath Lab procedures are well modelled by lognormal models. This information helps to improve and refine Cath Lab schedules and hence their efficient use.
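
    Because scipy's lognormal distribution carries a location (shift) parameter, fitting it with all three parameters free corresponds to the 3-parameter model discussed here; the sketch below uses simulated durations, not actual Cath Lab records:

        import numpy as np
        from scipy import stats

        # Sketch of fitting a 3-parameter (shifted) lognormal to procedure durations.
        # scipy's lognorm has shape (sigma), loc (the threshold/shift), and scale
        # (exp(mu)); fitting all three corresponds to the 3-parameter model.
        rng = np.random.default_rng(7)
        threshold = 20.0                                   # assumed minimum duration (min)
        durations = threshold + rng.lognormal(mean=3.0, sigma=0.5, size=500)

        shape, loc, scale = stats.lognorm.fit(durations)   # 3-parameter fit
        print(f"estimated shift (minimum duration) ~ {loc:.1f} min")
        print(f"median duration ~ {loc + scale:.1f} min")  # median of the shifted lognormal

        # For comparison, a 2-parameter fit fixes the shift at zero:
        shape2, loc2, scale2 = stats.lognorm.fit(durations, floc=0)
        print(f"2-parameter fit median ~ {scale2:.1f} min")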

  3. Multi-Scale Experiments to Evaluate Mobility Control Methods for Enhancing the Sweep Efficiency of Injected Subsurface Remediation Amendments

    DTIC Science & Technology

    2010-08-01

    petroleum industry. Moreover, heterogeneity control strategies can be applied to improve the efficiency of a variety of in situ remediation technologies...conditions that differ significantly from those found in environmental systems. Therefore, many of the design criteria used by the petroleum industry for...were helpful in constructing numerical models in up-scaled systems (2-D tanks). The UTCHEM model was able to successfully simulate 2-D experimental

  4. The Productivity and Cost-Efficiency of Models for Involving Nurse Practitioners in Primary Care: A Perspective from Queueing Analysis

    PubMed Central

    Liu, Nan; D'Aunno, Thomas

    2012-01-01

    Objective To develop simple stylized models for evaluating the productivity and cost-efficiencies of different practice models to involve nurse practitioners (NPs) in primary care, and in particular to generate insights on what affects the performance of these models and how. Data Sources and Study Design The productivity of a practice model is defined as the maximum number of patients that can be accounted for by the model under a given timeliness-to-care requirement; cost-efficiency is measured by the corresponding annual cost per patient in that model. Appropriate queueing analysis is conducted to generate formulas and values for these two performance measures. Model parameters for the analysis are extracted from the previous literature and survey reports. Sensitivity analysis is conducted to investigate the model performance under different scenarios and to verify the robustness of findings. Principal Findings Employing an NP, whose salary is usually lower than a primary care physician, may not be cost-efficient, in particular when the NP's capacity is underutilized. Besides provider service rates, workload allocation among providers is one of the most important determinants for the cost-efficiency of a practice model involving NPs. Capacity pooling among providers could be a helpful strategy to improve efficiency in care delivery. Conclusions The productivity and cost-efficiency of a practice model depend heavily on how providers organize their work and a variety of other factors related to the practice environment. Queueing theory provides useful tools to take into account these factors in making strategic decisions on staffing and panel size selection for a practice model. PMID:22092009
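
    A minimal M/M/c calculation in the spirit of these stylized queueing models is sketched below: given c providers and a service rate, it searches for the largest arrival rate (a proxy for panel size) whose expected queueing delay stays under a timeliness target. The Erlang C formula gives the waiting probability; all parameter values are hypothetical:

        import math

        # Minimal M/M/c sketch: given c providers each serving mu patients/day, find
        # the largest arrival rate whose expected wait stays under a target.

        def erlang_c(c, a):
            """Probability an arrival must wait in an M/M/c queue (a = lambda/mu)."""
            summation = sum(a**k / math.factorial(k) for k in range(c))
            top = a**c / math.factorial(c) * c / (c - a)
            return top / (summation + top)

        def mean_wait(lam, mu, c):
            """Expected wait in queue (Wq) for an M/M/c system; requires lam < c*mu."""
            a = lam / mu
            return erlang_c(c, a) / (c * mu - lam)

        mu = 16.0            # visits per provider per day (hypothetical)
        c = 2                # e.g., one physician plus one nurse practitioner
        target_wait = 0.05   # days of waiting allowed (hypothetical timeliness target)

        lam = 0.1
        while lam < c * mu and mean_wait(lam, mu, c) <= target_wait:
            lam += 0.1
        print(f"max sustainable demand ~ {lam - 0.1:.1f} visits/day for {c} providers")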

  5. Short communication: Evaluation of the PREP10 energy-, protein-, and amino acid-allowable milk equations in comparison with the National Research Council model.

    PubMed

    White, Robin R; McGill, Tyler; Garnett, Rebecca; Patterson, Robert J; Hanigan, Mark D

    2017-04-01

    The objective of this work was to evaluate the precision and accuracy of the milk yield predictions made by the PREP10 model in comparison to those from the National Research Council (NRC) Nutrient Requirements of Dairy Cattle. The PREP10 model is a ration-balancing system that allows protein use efficiency to vary with production level. The model also has advanced AA supply and requirement calculations that enable estimation of AA-allowable milk (Milk_AA) based on 10 essential AA. A literature data set of 374 treatment means was collected and used to quantitatively evaluate the estimates of protein-allowable milk (Milk_MP) and energy-allowable milk yields from the NRC and PREP10 models. The PREP10 Milk_AA prediction was also evaluated, as were both models' estimates of milk based on the most-limiting nutrient or the mean of the estimated milk yields. For most milk estimates compared, the PREP10 model had reduced root mean squared prediction error (RMSPE), improved concordance correlation coefficient, and reduced mean and slope bias in comparison to the NRC model. In particular, utilizing the variable protein use efficiency for milk production notably improved the estimate of Milk_MP when compared with NRC. The PREP10 Milk_MP estimate had an RMSPE of 18.2% (NRC = 25.7%), concordance correlation coefficient of 0.82 (NRC = 0.64), slope bias of -0.14 kg/kg of predicted milk (NRC = -0.34 kg/kg), and mean bias of -0.63 kg (NRC = -2.85 kg). The PREP10 estimate of Milk_AA had slightly elevated RMSPE and mean and slope bias when compared with Milk_MP. The PREP10 estimate of Milk_AA was not advantageous when compared with Milk_MP, likely because AA use efficiency for milk was constant whereas MP use was variable. Future work evaluating variable AA use efficiencies for milk production is likely to improve accuracy and precision of models of allowable milk. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  6. Optimizing Chemical Reactions with Deep Reinforcement Learning.

    PubMed

    Zhou, Zhenpeng; Li, Xiaocheng; Zare, Richard N

    2017-12-27

    Deep reinforcement learning was employed to optimize chemical reactions. Our model iteratively records the results of a chemical reaction and chooses new experimental conditions to improve the reaction outcome. This model outperformed a state-of-the-art blackbox optimization algorithm by using 71% fewer steps on both simulations and real reactions. Furthermore, we introduced an efficient exploration strategy by drawing the reaction conditions from certain probability distributions, which resulted in an improvement on regret from 0.062 to 0.039 compared with a deterministic policy. Combining the efficient exploration policy with accelerated microdroplet reactions, optimal reaction conditions were determined in 30 min for the four reactions considered, and a better understanding of the factors that control microdroplet reactions was reached. Moreover, our model showed a better performance after training on reactions with similar or even dissimilar underlying mechanisms, which demonstrates its learning ability.

  7. Energy-efficient container handling using hybrid model predictive control

    NASA Astrophysics Data System (ADS)

    Xin, Jianbin; Negenborn, Rudy R.; Lodewijks, Gabriel

    2015-11-01

    The performance of container terminals needs to be improved to accommodate the growth in container volumes while maintaining sustainability. This paper provides a methodology for determining the trajectory of three key interacting machines for carrying out the so-called bay handling task, involving transporting containers between a vessel and the stacking area in an automated container terminal. The behaviours of the interacting machines are modelled as a collection of interconnected hybrid systems. Hybrid model predictive control (MPC) is proposed to achieve optimal performance, balancing handling capacity and energy consumption. The underlying control problem is hereby formulated as a mixed-integer linear programming problem. Simulation studies illustrate that a higher penalty on energy consumption indeed leads to improved sustainability using less energy. Moreover, simulations illustrate how the proposed energy-efficient hybrid MPC controller performs under different types of uncertainties.

  8. Analysis and Modeling of Ground Operations at Hub Airports

    NASA Technical Reports Server (NTRS)

    Atkins, Stephen (Technical Monitor); Andersson, Kari; Carr, Francis; Feron, Eric; Hall, William D.

    2000-01-01

    Building simple and accurate models of hub airports can considerably help one understand airport dynamics, and may provide quantitative estimates of operational airport improvements. In this paper, three models are proposed to capture the dynamics of busy hub airport operations. Two simple queuing models are introduced to capture the taxi-out and taxi-in processes. An integer programming model aimed at representing airline decision-making attempts to capture the dynamics of the aircraft turnaround process. These models can be applied for predictive purposes. They may also be used to evaluate control strategies for improving overall airport efficiency.

  9. Efficiency of Health Care Production in Low-Resource Settings: A Monte-Carlo Simulation to Compare the Performance of Data Envelopment Analysis, Stochastic Distance Functions, and an Ensemble Model

    PubMed Central

    Giorgio, Laura Di; Flaxman, Abraham D.; Moses, Mark W.; Fullman, Nancy; Hanlon, Michael; Conner, Ruben O.; Wollum, Alexandra; Murray, Christopher J. L.

    2016-01-01

    Low-resource countries can greatly benefit from even small increases in efficiency of health service provision, supporting a strong case to measure and pursue efficiency improvement in low- and middle-income countries (LMICs). However, the knowledge base concerning efficiency measurement remains scarce for these contexts. This study shows that current estimation approaches may not be well suited to measure technical efficiency in LMICs and offers an alternative approach for efficiency measurement in these settings. We developed a simulation environment which reproduces the characteristics of health service production in LMICs, and evaluated the performance of Data Envelopment Analysis (DEA) and Stochastic Distance Function (SDF) for assessing efficiency. We found that an ensemble approach (ENS) combining efficiency estimates from a restricted version of DEA (rDEA) and restricted SDF (rSDF) is the preferable method across a range of scenarios. This is the first study to analyze efficiency measurement in a simulation setting for LMICs. Our findings aim to heighten the validity and reliability of efficiency analyses in LMICs, and thus inform policy dialogues about improving the efficiency of health service production in these settings. PMID:26812685

  10. Efficient use of historical data for genomic selection: a case study of rust resistance in wheat

    USDA-ARS?s Scientific Manuscript database

    Genomic selection (GS) is a new methodology that can improve wheat breeding efficiency. To implement GS, a training population (TP) with both phenotypic and genotypic data is required to train a statistical model used to predict genotyped selection candidates (SCs). Several factors impact prediction...

  11. CFD modeling to improve safe and efficient distribution of chlorine dioxide gas for packaging fresh produce

    USDA-ARS?s Scientific Manuscript database

    The efficiency of the packaging system in inactivating food borne pathogens and prolonging the shelf life of fresh-cut produce is influenced by the design of the package apart from material and atmospheric conditions. Three different designs were considered to determine a specific package design ens...

  12. How much can we gain from improved efficiency? An examination of performance of national HIV/AIDS programs and its determinants in low- and middle-income countries

    PubMed Central

    2012-01-01

    Background The economic downturn exacerbates the inadequacy of resources for combating the worldwide HIV/AIDS pandemic and amplifies the need to improve the efficiency of HIV/AIDS programs. Methods We used data envelopment analysis (DEA) to evaluate efficiency of national HIV/AIDS programs in transforming funding into services and implemented a Tobit model to identify determinants of the efficiency in 68 low- and middle-income countries. We considered the change from the lowest quartile to the average value of a variable a "notable" increase. Results Overall, the average efficiency in implementing HIV/AIDS programs was moderate (49.8%). Program efficiency varied enormously among countries with means by quartile of efficiency of 13.0%, 36.4%, 54.4% and 96.5%. A country's governance, financing mechanisms, and economic and demographic characteristics influence the program efficiency. For example, if countries achieved a notable increase in "voice and accountability" (e.g., greater participation of civil society in policy making), the efficiency of their HIV/AIDS programs would increase by 40.8%. For countries in the lowest quartile of per capita gross national income (GNI), a notable increase in per capita GNI would increase the efficiency of AIDS programs by 45.0%. Conclusions There may be substantial opportunity for improving the efficiency of AIDS services, by providing more services with existing resources. Actions beyond the health sector could be important factors affecting HIV/AIDS service delivery. PMID:22443135

  13. Empirical Study on Total Factor Productive Energy Efficiency in Beijing-Tianjin-Hebei Region-Analysis based on Malmquist Index and Window Model

    NASA Astrophysics Data System (ADS)

    Xu, Qiang; Ding, Shuai; An, Jingwen

    2017-12-01

    This paper studies the energy efficiency of the Beijing-Tianjin-Hebei region and identifies trends in energy efficiency in order to improve the quality of economic development in the region. Based on the Malmquist index and a window analysis model, the paper empirically estimates total factor energy efficiency in the Beijing-Tianjin-Hebei region using panel data from 1991 to 2014 and provides corresponding policy recommendations. The empirical results show that total factor energy efficiency in the Beijing-Tianjin-Hebei region increased from 1991 to 2014, mainly owing to advances in energy technology and innovation, and that obvious regional differences in energy efficiency persist. Over the 24-year window period, the regional differences in energy efficiency within the Beijing-Tianjin-Hebei region shrank. A significant convergent trend in energy efficiency has appeared since 2000, driven mainly by the diffusion and spillover of energy technologies.
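
    For reference, the Malmquist productivity index between periods $t$ and $t+1$ is commonly decomposed (following Färe et al.) into an efficiency-change term and a technical-change term, which correspond to the "catch-up" and "frontier-shift" effects discussed above:

    \[
    M_o^{t,t+1} \;=\;
    \underbrace{\frac{D^{t+1}\!\left(x^{t+1},\,y^{t+1}\right)}{D^{t}\!\left(x^{t},\,y^{t}\right)}}_{\text{efficiency change (catch-up)}}
    \;\times\;
    \underbrace{\left[\frac{D^{t}\!\left(x^{t+1},\,y^{t+1}\right)}{D^{t+1}\!\left(x^{t+1},\,y^{t+1}\right)}\cdot
    \frac{D^{t}\!\left(x^{t},\,y^{t}\right)}{D^{t+1}\!\left(x^{t},\,y^{t}\right)}\right]^{1/2}}_{\text{technical change (frontier shift)}}
    \]

    where $D^{s}(x, y)$ is the distance function evaluated against the period-$s$ frontier.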

  14. Perceptual learning improves visual performance in juvenile amblyopia.

    PubMed

    Li, Roger W; Young, Karen G; Hoenig, Pia; Levi, Dennis M

    2005-09-01

    To determine whether practicing a position-discrimination task improves visual performance in children with amblyopia and to determine the mechanism(s) of improvement. Five children (age range, 7-10 years) with amblyopia practiced a positional acuity task in which they had to judge which of three pairs of lines was misaligned. Positional noise was produced by distributing the individual patches of each line segment according to a Gaussian probability function. Observers were trained at three noise levels (including 0), with each observer performing between 3000 and 4000 responses in 7 to 10 sessions. Trial-by-trial feedback was provided. Four of the five observers showed significant improvement in positional acuity. In those four observers, on average, positional acuity with no noise improved by approximately 32% and with high noise by approximately 26%. A position-averaging model was used to parse the improvement into an increase in efficiency or a decrease in equivalent input noise. Two observers showed increased efficiency (51% and 117% improvements) with no significant change in equivalent input noise across sessions. The other two observers showed both a decrease in equivalent input noise (18% and 29%) and an increase in efficiency (17% and 71%). All five observers showed substantial improvement in Snellen acuity (approximately 26%) after practice. Perceptual learning can improve visual performance in amblyopic children. The improvement can be parsed into two important factors: decreased equivalent input noise and increased efficiency. Perceptual learning techniques may add an effective new method to the armamentarium of amblyopia treatments.
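
    The parsing into equivalent input noise and efficiency follows the general logic of an equivalent-noise (linear amplifier) model. One generic form, written here only to make the two factors explicit (the paper's position-averaging model is specified in the original article), is

    \[
    \sigma_{\text{th}}^{2} \;=\; \frac{\sigma_{\text{ext}}^{2} + N_{\text{eq}}}{\eta},
    \]

    so that, at a fixed external noise level $\sigma_{\text{ext}}^{2}$, thresholds can fall either because the equivalent input noise $N_{\text{eq}}$ decreases or because the calculation efficiency $\eta$ increases.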

  15. Using Simulation to Examine the Effect of Physician Heterogeneity on the Operational Efficiency of an Overcrowded Hospital Emergency Department

    NASA Astrophysics Data System (ADS)

    Kuo, Y.-H.; Leung, J. M. Y.; Graham, C. A.

    2015-05-01

    In this paper, we present a case study of modelling and analyzing the patient flow of a hospital emergency department in Hong Kong. The emergency department is facing the challenge of overcrowding, and patients there usually experience a long waiting time. Our project team was asked by a senior consultant of the emergency department to analyze the patient flow and provide a decision support tool to help improve their operations. We adopt a simulation approach to mimic the daily operations. With the simulation model, we conduct a computational study to examine the effect of physician heterogeneity on emergency department performance. We found that physician heterogeneity has a great impact on operational efficiency and thus should be considered when developing simulation models. Our computational results show that, with the same average service rate among the physicians, variation in individual rates can improve the overcrowding situation. This suggests that emergency departments may consider deploying some highly efficient physicians to speed up the overall service rate, freeing more time for patients who need extra medical care.
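
    A very small routing-to-the-first-free-physician simulation, sketched below, lets one compare waiting times under homogeneous and heterogeneous service rates with the same mean; the arrival rate, service rates and load are arbitrary assumptions, not the paper's calibrated emergency department model.

    ```python
    import numpy as np

    def mean_wait(service_rates, lam=1.8, n_patients=100000, seed=0):
        """Shared FIFO queue with heterogeneous servers: each arriving patient is
        served by the physician who becomes free soonest."""
        rng = np.random.default_rng(seed)
        arrivals = np.cumsum(rng.exponential(1.0 / lam, n_patients))
        free_at = np.zeros(len(service_rates))   # time each physician becomes free
        waits = np.empty(n_patients)
        for i, t in enumerate(arrivals):
            k = np.argmin(free_at)               # physician available soonest
            begin = max(t, free_at[k])
            waits[i] = begin - t
            free_at[k] = begin + rng.exponential(1.0 / service_rates[k])
        return waits.mean()

    # same average service rate (1.0 patients per time unit per physician), different spread
    print("homogeneous  :", round(mean_wait([1.0, 1.0]), 2))
    print("heterogeneous:", round(mean_wait([1.4, 0.6]), 2))
    ```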

  16. Modeling complexity in engineered infrastructure system: Water distribution network as an example

    NASA Astrophysics Data System (ADS)

    Zeng, Fang; Li, Xiang; Li, Ke

    2017-02-01

    The complex topology and adaptive behavior of infrastructure systems are driven by both self-organization of the demand and rigid engineering solutions. Therefore, engineering complex systems requires a method balancing holism and reductionism. To model the growth of water distribution networks, a complex network model was developed by combining local optimization rules and engineering considerations. The demand node generation is dynamic and follows the scaling law of urban growth. The proposed model can generate a water distribution network (WDN) similar to reported real-world WDNs in several structural properties. Comparison with different modeling approaches indicates that a realistic demand node distribution and the co-evolution of demand nodes and the network are important for the simulation of real complex networks. The simulation results indicate that the efficiency of water distribution networks is exponentially affected by the urban growth pattern. In contrast, the improvement in efficiency achievable through engineering optimization is limited and relatively insignificant. Redundancy and robustness, on the other hand, can be significantly improved through engineering methods.

  17. A Hybrid Approach for Efficient Modeling of Medium-Frequency Propagation in Coal Mines

    PubMed Central

    Brocker, Donovan E.; Sieber, Peter E.; Waynert, Joseph A.; Li, Jingcheng; Werner, Pingjuan L.; Werner, Douglas H.

    2015-01-01

    An efficient procedure for modeling medium frequency (MF) communications in coal mines is introduced. In particular, a hybrid approach is formulated and demonstrated utilizing ideal transmission line equations to model MF propagation in combination with full-wave sections used for accurate simulation of local antenna-line coupling and other near-field effects. This work confirms that the hybrid method accurately models signal propagation from a source to a load for various system geometries and material compositions, while significantly reducing computation time. With such dramatic improvement to solution times, it becomes feasible to perform large-scale optimizations with the primary motivation of improving communications in coal mines both for daily operations and emergency response. Furthermore, it is demonstrated that the hybrid approach is suitable for modeling and optimizing large communication networks in coal mines that may otherwise be intractable to simulate using traditional full-wave techniques such as moment methods or finite-element analysis. PMID:26478686

  18. Mapping a Careflow Network to assess the connectedness of Connected Health.

    PubMed

    Carroll, Noel; Richardson, Ita

    2017-04-01

    Connected Health is an emerging and rapidly developing field which has the potential to transform healthcare service systems by increasing their safety, quality and overall efficiency. From a healthcare perspective, process improvement models have mainly focused on the static workflow viewpoint. The objective of this article is to study and model the dynamic nature of healthcare delivery, allowing us to identify where potential issues exist within the service system and to examine how Connected Health technological solutions may support service efficiencies. We explore the application of social network analysis (SNA) as a modelling technique which captures the dynamic nature of a healthcare service. We demonstrate how it can be used to map the 'Careflow Network' and guide Connected Health innovators to examine specific opportunities within the healthcare service. Our results indicate that healthcare technology must be correctly identified and implemented within the Careflow Network to realize improvements in service delivery. Oftentimes, prior to making the transformation to Connected Health, researchers use various modelling techniques that fail to identify where Connected Health innovation is best placed in a healthcare service network. Using SNA allows us to develop an understanding of the current operation of the healthcare system within which innovators can effect change. It is important to identify and model the resource exchanges to ensure that the quality and safety of care are enhanced, efficiencies are increased and the overall healthcare service system is improved. We have shown that dynamic models allow us to study the exchange of resources. These exchanges are often intertwined within a socio-technical context in an informal manner and not accounted for in static models, yet they capture a truer insight into the operations of a Careflow Network.
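
    As a flavour of how SNA can surface broker roles in a Careflow Network, the sketch below builds a small, entirely hypothetical hand-off network with networkx and ranks roles by betweenness centrality; the roles and edges are illustrative, not data from the study.

    ```python
    import networkx as nx

    # Hypothetical careflow network: nodes are care roles/units, edges are patient or
    # information hand-offs (illustrative only).
    G = nx.DiGraph()
    handoffs = [
        ("GP", "ED triage"), ("ED triage", "ED physician"),
        ("ED physician", "Radiology"), ("Radiology", "ED physician"),
        ("ED physician", "Ward"), ("Ward", "Community nurse"),
        ("Community nurse", "GP"),
    ]
    G.add_edges_from(handoffs)

    # Betweenness centrality (computed on the hand-off topology) highlights roles that
    # broker most resource exchanges, i.e. where a Connected Health intervention or a
    # bottleneck has the largest reach.
    bc = nx.betweenness_centrality(G)
    for role, score in sorted(bc.items(), key=lambda kv: -kv[1]):
        print(f"{role:16s} {score:.3f}")
    ```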

  19. Economic Modeling as a Component of Academic Strategic Planning.

    ERIC Educational Resources Information Center

    MacKinnon, Joyce; Sothmann, Mark; Johnson, James

    2001-01-01

    Computer-based economic modeling was used to enable a school of allied health to define outcomes, identify associated costs, develop cost and revenue models, and create a financial planning system. As a strategic planning tool, it assisted realistic budgeting and improved efficiency and effectiveness. (Contains 18 references.) (SK)

  20. Industrial Sector Energy Efficiency Modeling (ISEEM) Framework Documentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karali, Nihan; Xu, Tengfang; Sathaye, Jayant

    2012-12-12

    The goal of this study is to develop a new bottom-up industry-sector energy-modeling framework for assessing least-cost regional and global carbon reduction strategies, improving on the capabilities and addressing the limitations of existing models by allowing trading across regions and countries as an alternative.

  1. Measuring Efficiency of Health Systems of the Middle East and North Africa (MENA) Region Using Stochastic Frontier Analysis.

    PubMed

    Hamidi, Samer; Akinci, Fevzi

    2016-06-01

    The main purpose of this study is to measure the technical efficiency of twenty health systems in the Middle East and North Africa (MENA) region to inform evidence-based health policy decisions. In addition, the effects of alternative stochastic frontier model specifications on the empirical results are examined. We conducted a stochastic frontier analysis to estimate country-level technical efficiencies using secondary panel data for 20 MENA countries for the period 1995-2012 from the World Bank database. We also tested the effect of alternative frontier model specifications using three random-effects approaches: a time-invariant model, in which efficiency effects are assumed to be static over time; a time-varying model, in which efficiency effects vary over time; and a model that accounts for heterogeneity. The average estimated technical inefficiency of health systems in the MENA region was 6.9 %, with a range of 5.7-7.9 % across the three models. Among the top performers, Lebanon, Qatar, and Morocco rank consistently high across the three inefficiency model specifications. On the opposite side, Sudan, Yemen and Djibouti ranked among the worst performers. On average, the two most technically efficient countries were Qatar and Lebanon. We found that the estimated technical efficiency scores vary substantially across alternative parametric models. Based on the findings reported in this study, most MENA countries appear to be operating, on average, with a reasonably high degree of technical efficiency compared with other countries in the region. However, there is evidence to suggest that considerable efficiency gains are yet to be made by some MENA countries. Additional empirical research is needed to inform future health policies aimed at improving both the efficiency and sustainability of health systems in the MENA region.
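
    The panel stochastic frontier models referred to here share the general form

    \[
    \ln y_{it} \;=\; \mathbf{x}_{it}^{\prime}\boldsymbol{\beta} \;+\; v_{it} \;-\; u_{it},
    \qquad v_{it} \sim N\!\left(0, \sigma_v^{2}\right),\; u_{it} \ge 0,
    \]

    where $y_{it}$ denotes the output (e.g. a health outcome) of country $i$ in year $t$, $v_{it}$ is random noise and $u_{it}$ is the one-sided inefficiency term, with technical efficiency given by $TE_{it} = \exp(-u_{it})$. The three specifications differ in whether $u_{it}$ is constant over time, varies over time, or is estimated while accounting for country-level heterogeneity.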

  2. 28 percent efficient GaAs concentrator solar cells

    NASA Technical Reports Server (NTRS)

    Macmillan, H. F.; Hamaker, H. C.; Kaminar, N. R.; Kuryla, M. S.; Ladle Ristow, M.

    1988-01-01

    AlGaAs/GaAs heteroface solar concentrator cells which exhibit efficiencies in excess of 27 percent at high solar concentrations (over 400 suns, AM1.5D, 100 mW/sq cm) have been fabricated with both n/p and p/n configurations. The best n/p cell achieved an efficiency of 28.1 percent around 400 suns, and the best p/n cell achieved an efficiency of 27.5 percent around 1000 suns. The high performance of these GaAs concentrator cells compared to earlier high-efficiency cells was due to improved control of the metal-organic chemical vapor deposition growth conditions and improved cell fabrication procedures (gridline definition and edge passivation). The design parameters of the solar cell structures and optimized grid pattern were determined with a realistic computer modeling program. An evaluation of the device characteristics and a discussion of future GaAs concentrator cell development are presented.

  3. The Chronic Care Model as vehicle for the development of disease management in Europe

    PubMed Central

    Spreeuwenberg, Cor

    2008-01-01

    The Chronic Care Model (Wagner, WHO) aims to improve the functioning and clinical situation of chronic patients by focusing on the patient, the practice team and the conditions that determine the functioning of the team. The patient is the most important actor and must be stimulated proactively by a competent, integrated practice team. Six interdependent conditional components are essential: health care organisation, delivery system design, community resources and policies, self-management support systems, decision support and clinical information systems. While the Chronic Care Model focuses on the quality and effectiveness of care, disease management programmes place more emphasis on the efficiency of care. These programmes apply industrial management principles in health care: information about process, structure and outcome is gathered and used systematically, and human and material resources are used efficiently. There is evidence that the approaches of the Chronic Care Model and disease management can be integrated. Both approaches underline the need for information, focus on the patient as the main actor, and suggest that a balance can be found between effectiveness and efficiency. Ideas are given on how the Chronic Care Model can be used as a framework for the development of a European approach to disease management for people with a chronic condition.

  4. Dynamic modeling and verification of an energy-efficient greenhouse with an aquaponic system using TRNSYS

    NASA Astrophysics Data System (ADS)

    Amin, Majdi Talal

    Currently, there is no integrated dynamic simulation program for an energy efficient greenhouse coupled with an aquaponic system. This research is intended to promote the thermal management of greenhouses in order to provide sustainable food production with the lowest possible energy use and material waste. A brief introduction of greenhouses, passive houses, energy efficiency, renewable energy systems, and their applications are included for ready reference. An experimental working scaled-down energy-efficient greenhouse was built to verify and calibrate the results of a dynamic simulation model made using TRNSYS software. However, TRNSYS requires the aid of Google SketchUp to develop 3D building geometry. The simulation model was built following the passive house standard as closely as possible. The new simulation model was then utilized to design an actual greenhouse with Aquaponics. It was demonstrated that the passive house standard can be applied to improve upon conventional greenhouse performance, and that it is adaptable to different climates. The energy-efficient greenhouse provides the required thermal environment for fish and plant growth, while eliminating the need for conventional cooling and heating systems.

  5. The comparative impact of the market penetration of energy-efficient measures: A sensitivity analysis of its impact on minority households

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bozinovich, L.V.; Poyer, D.A.; Anderson, J.L.

    1993-12-01

    A sensitivity study was made of the potential market penetration of residential energy efficiency as energy service ratio (ESR) improvements occurred in minority households, by age of house. The study followed a Minority Energy Assessment Model analysis of the National Energy Strategy projections of household energy consumption and prices, with majority, black, and Hispanic subgroup divisions. Electricity and total energy consumption and expenditure patterns were evaluated when the households' ESR improvement followed a logistic negative growth (i.e., market penetration) path. Earlier occurrence of ESR improvements meant greater discounted savings over the 22-year period.

  6. Streamflow characteristics from modelled runoff time series: Importance of calibration criteria selection

    USGS Publications Warehouse

    Poole, Sandra; Vis, Marc; Knight, Rodney; Seibert, Jan

    2017-01-01

    Ecologically relevant streamflow characteristics (SFCs) of ungauged catchments are often estimated from simulated runoff of hydrologic models that were originally calibrated on gauged catchments. However, SFC estimates of the gauged donor catchments and subsequently the ungauged catchments can be substantially uncertain when models are calibrated using traditional approaches based on optimization of statistical performance metrics (e.g., Nash–Sutcliffe model efficiency). An improved calibration strategy for gauged catchments is therefore crucial to help reduce the uncertainties of estimated SFCs for ungauged catchments. The aim of this study was to improve SFC estimates from modeled runoff time series in gauged catchments by explicitly including one or several SFCs in the calibration process. Different types of objective functions were defined consisting of the Nash–Sutcliffe model efficiency, single SFCs, or combinations thereof. We calibrated a bucket-type runoff model (HBV – Hydrologiska Byråns Vattenavdelning – model) for 25 catchments in the Tennessee River basin and evaluated the proposed calibration approach on 13 ecologically relevant SFCs representing major flow regime components and different flow conditions. While the model generally tended to underestimate the tested SFCs related to mean and high-flow conditions, SFCs related to low flow were generally overestimated. The highest estimation accuracies were achieved by a SFC-specific model calibration. Estimates of SFCs not included in the calibration process were of similar quality when comparing a multi-SFC calibration approach to a traditional model efficiency calibration. For practical applications, this implies that SFCs should preferably be estimated from targeted runoff model calibration, and modeled estimates need to be carefully interpreted.
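
    A minimal sketch of the idea of mixing a statistical metric with an SFC in the objective function is given below; the chosen SFC (a high-flow percentile), the weighting and the sign convention are assumptions for illustration, not the study's actual objective functions.

    ```python
    import numpy as np

    def nse(obs, sim):
        """Nash-Sutcliffe model efficiency."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def q95(flow):
        """Example streamflow characteristic: high-flow magnitude (95th percentile)."""
        return np.percentile(flow, 95)

    def objective(obs, sim, w=0.5):
        """Hypothetical combined objective: weight NSE against the relative error in one SFC
        (to be maximised by the calibration routine)."""
        sfc_err = abs(q95(sim) - q95(obs)) / q95(obs)
        return w * nse(obs, sim) - (1.0 - w) * sfc_err
    ```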

  7. GPU-accelerated element-free reverse-time migration with Gauss points partition

    NASA Astrophysics Data System (ADS)

    Zhou, Zhen; Jia, Xiaofeng; Qiang, Xiaodong

    2018-06-01

    An element-free method (EFM) has been demonstrated successfully in elasticity, heat conduction and fatigue crack growth problems. We present the theory of EFM and its numerical applications in seismic modelling and reverse time migration (RTM). Compared with the finite difference method and the finite element method, the EFM has unique advantages: (1) independence of grids in computation and (2) lower expense and more flexibility (because only the information of the nodes and the boundary of the concerned area is required). However, in EFM, due to improper computation and storage of some large sparse matrices, such as the mass matrix and the stiffness matrix, the method is difficult to apply to seismic modelling and RTM for a large velocity model. To solve the problem of storage and computation efficiency, we propose a concept of Gauss points partition and utilise the graphics processing unit to improve the computational efficiency. We employ the compressed sparse row format to compress the intermediate large sparse matrices and attempt to simplify the operations by solving the linear equations with CULA solver. To improve the computation efficiency further, we introduce the concept of the lumped mass matrix. Numerical experiments indicate that the proposed method is accurate and more efficient than the regular EFM.
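
    The storage saving from the compressed sparse row format mentioned above is easy to see on a toy system; the sketch below (scipy on the CPU, not the authors' GPU/CULA implementation) builds a tridiagonal matrix in CSR form, compares its memory footprint with a dense copy, and solves the linear system.

    ```python
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import spsolve

    n = 2000
    # Toy tridiagonal "stiffness-like" system assembled directly in CSR format
    A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    dense_mb = n * n * 8 / 1e6
    csr_mb = (A.data.nbytes + A.indices.nbytes + A.indptr.nbytes) / 1e6
    print(f"dense storage: {dense_mb:.1f} MB, CSR storage: {csr_mb:.2f} MB")

    u = spsolve(A, b)                       # direct solve on the compressed matrix
    print("residual norm:", np.linalg.norm(A @ u - b))
    ```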

  8. Improving Visualization and Interpretation of Metabolome-Wide Association Studies: An Application in a Population-Based Cohort Using Untargeted 1H NMR Metabolic Profiling.

    PubMed

    Castagné, Raphaële; Boulangé, Claire Laurence; Karaman, Ibrahim; Campanella, Gianluca; Santos Ferreira, Diana L; Kaluarachchi, Manuja R; Lehne, Benjamin; Moayyeri, Alireza; Lewis, Matthew R; Spagou, Konstantina; Dona, Anthony C; Evangelos, Vangelis; Tracy, Russell; Greenland, Philip; Lindon, John C; Herrington, David; Ebbels, Timothy M D; Elliott, Paul; Tzoulaki, Ioanna; Chadeau-Hyam, Marc

    2017-10-06

    1 H NMR spectroscopy of biofluids generates reproducible data allowing detection and quantification of small molecules in large population cohorts. Statistical models to analyze such data are now well-established, and the use of univariate metabolome wide association studies (MWAS) investigating the spectral features separately has emerged as a computationally efficient and interpretable alternative to multivariate models. The MWAS rely on the accurate estimation of a metabolome wide significance level (MWSL) to be applied to control the family wise error rate. Subsequent interpretation requires efficient visualization and formal feature annotation, which, in-turn, call for efficient prioritization of spectral variables of interest. Using human serum 1 H NMR spectroscopic profiles from 3948 participants from the Multi-Ethnic Study of Atherosclerosis (MESA), we have performed a series of MWAS for serum levels of glucose. We first propose an extension of the conventional MWSL that yields stable estimates of the MWSL across the different model parameterizations and distributional features of the outcome. We propose both efficient visualization methods and a strategy based on subsampling and internal validation to prioritize the associations. Our work proposes and illustrates practical and scalable solutions to facilitate the implementation of the MWAS approach and improve interpretation in large cohort studies.
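
    One common way to estimate an MWSL is by permutation: permute the outcome, record the minimum univariate p-value across all spectral variables, and take a low quantile of that null distribution as the per-feature significance threshold. The sketch below illustrates this generic procedure on random data; it is not the authors' extended MWSL estimator.

    ```python
    import numpy as np
    from scipy import stats

    def estimate_mwsl(X, y, n_perm=200, alpha=0.05, seed=0):
        """Permutation-based estimate of the metabolome-wide significance level (MWSL):
        alpha-quantile of the null distribution of minimum univariate p-values."""
        rng = np.random.default_rng(seed)
        min_p = np.empty(n_perm)
        for b in range(n_perm):
            y_perm = rng.permutation(y)
            # univariate Pearson test of each spectral variable against the permuted outcome
            pvals = np.array([stats.pearsonr(X[:, j], y_perm)[1] for j in range(X.shape[1])])
            min_p[b] = pvals.min()
        return np.quantile(min_p, alpha)

    # toy data: 150 samples x 300 spectral variables
    rng = np.random.default_rng(1)
    X = rng.normal(size=(150, 300))
    y = rng.normal(size=150)
    print("estimated MWSL:", estimate_mwsl(X, y, n_perm=50))
    ```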

  9. Improving Visualization and Interpretation of Metabolome-Wide Association Studies: An Application in a Population-Based Cohort Using Untargeted 1H NMR Metabolic Profiling

    PubMed Central

    2017-01-01

    1H NMR spectroscopy of biofluids generates reproducible data allowing detection and quantification of small molecules in large population cohorts. Statistical models to analyze such data are now well-established, and the use of univariate metabolome wide association studies (MWAS) investigating the spectral features separately has emerged as a computationally efficient and interpretable alternative to multivariate models. The MWAS rely on the accurate estimation of a metabolome wide significance level (MWSL) to be applied to control the family wise error rate. Subsequent interpretation requires efficient visualization and formal feature annotation, which, in-turn, call for efficient prioritization of spectral variables of interest. Using human serum 1H NMR spectroscopic profiles from 3948 participants from the Multi-Ethnic Study of Atherosclerosis (MESA), we have performed a series of MWAS for serum levels of glucose. We first propose an extension of the conventional MWSL that yields stable estimates of the MWSL across the different model parameterizations and distributional features of the outcome. We propose both efficient visualization methods and a strategy based on subsampling and internal validation to prioritize the associations. Our work proposes and illustrates practical and scalable solutions to facilitate the implementation of the MWAS approach and improve interpretation in large cohort studies. PMID:28823158

  10. SCM: A method to improve network service layout efficiency with network evolution

    PubMed Central

    Zhao, Qi; Zhang, Chuanhao

    2017-01-01

    Network services are an important component of the Internet and are used to expand network functions for third-party developers. Network function virtualization (NFV) can improve the speed and flexibility of network service deployment. However, as the network evolves, the network service layout may become inefficient. To address this problem, this paper proposes a service chain migration (SCM) method within the "software defined network + network function virtualization" (SDN+NFV) framework, which migrates service chains to adapt to network evolution and improves the efficiency of the network service layout. SCM is modeled as an integer linear programming problem and solved via particle swarm optimization. An SCM prototype system is designed based on an SDN controller. Experiments demonstrate that SCM can reduce network traffic cost and energy consumption efficiently. PMID:29267299

  11. Project Management Life Cycle Models to Improve Management in High-rise Construction

    NASA Astrophysics Data System (ADS)

    Burmistrov, Andrey; Siniavina, Maria; Iliashenko, Oksana

    2018-03-01

    The paper describes how project management in high-rise building construction can be improved through the use of various Project Management Life Cycle (PMLC) models based on traditional and agile project management approaches. Moreover, it describes how splitting a large-scale project into a "project chain" makes such building projects more manageable and increases the efficiency of all participants in these projects.

  12. Research on Generating Method of Embedded Software Test Document Based on Dynamic Model

    NASA Astrophysics Data System (ADS)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying

    2018-03-01

    This paper presents a dynamic model-based test document generation method for embedded software that automatically generates two documents: the test requirements specification and the configuration item test documentation. The method implements dynamic test requirements in dynamic models, so that dynamic test demand tracking can easily be generated. It automatically produces standardized test requirements and test documentation, alleviates inconsistencies and gaps in document content, and improves documentation efficiency.

  13. An intelligent healthcare management system: a new approach in work-order prioritization for medical equipment maintenance requests.

    PubMed

    Hamdi, Naser; Oweis, Rami; Abu Zraiq, Hamzeh; Abu Sammour, Denis

    2012-04-01

    The effective maintenance management of medical technology influences the quality of care delivered and the profitability of healthcare facilities. Medical equipment maintenance in Jordan lacks an objective prioritization system; consequently, the system is not sensitive to the impact of equipment downtime on patient morbidity and mortality. The current work presents a novel software system (EQUIMEDCOMP) that is designed to achieve valuable improvements in the maintenance management of medical technology. This work-order prioritization model sorts medical maintenance requests by calculating a priority index for each request. Model performance was assessed by utilizing maintenance requests from several Jordanian hospitals. The system proved highly efficient in minimizing equipment downtime based on healthcare delivery capacity, and, consequently, patient outcome. Additionally, a preventive maintenance optimization module and an equipment quality control system are incorporated. The system is, therefore, expected to improve the reliability of medical equipment and significantly improve safety and cost-efficiency.
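
    The abstract does not give the criteria or weights behind EQUIMEDCOMP's priority index, so the sketch below is only a generic illustration of a weighted work-order priority index; all factor names and weights are hypothetical.

    ```python
    def priority_index(risk, utilization, downtime_hours, backup_available,
                       w_risk=0.4, w_util=0.3, w_down=0.2, w_backup=0.1):
        """Hypothetical weighted priority index for a maintenance work order.
        risk and utilization are assumed to be normalised to [0, 1]."""
        down_score = min(downtime_hours / 72.0, 1.0)        # saturate after 3 days down
        backup_score = 0.0 if backup_available else 1.0     # no backup unit -> more urgent
        return (w_risk * risk + w_util * utilization
                + w_down * down_score + w_backup * backup_score)

    # e.g. a ventilator (high risk, no backup) vs. an infusion pump with spares on hand
    print(priority_index(risk=0.9, utilization=0.8, downtime_hours=24, backup_available=False))
    print(priority_index(risk=0.5, utilization=0.6, downtime_hours=24, backup_available=True))
    ```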

  14. An improved shuffled frog leaping algorithm based evolutionary framework for currency exchange rate prediction

    NASA Astrophysics Data System (ADS)

    Dash, Rajashree

    2017-11-01

    Forecasting the purchasing power of one currency with respect to another is always an interesting topic in the field of financial time series prediction. Despite the existence of several traditional and computational models for currency exchange rate forecasting, there is always a need for simpler and more efficient models with better prediction capability. In this paper, an evolutionary framework is proposed that uses an improved shuffled frog leaping (ISFL) algorithm with a computationally efficient functional link artificial neural network (CEFLANN) for prediction of currency exchange rates. The model is validated by observing the monthly prediction measures obtained for three currency exchange data sets (USD/CAD, USD/CHF, and USD/JPY) covering the same period of time. The model performance is also compared with two other evolutionary learning techniques, the shuffled frog leaping algorithm and particle swarm optimization. Practical analysis of the results suggests that the proposed model, developed using the ISFL algorithm with the CEFLANN network, is a promising predictor for currency exchange rates compared to the other models included in the study.

  15. Performance improvement of optical CDMA networks with stochastic artificial bee colony optimization technique

    NASA Astrophysics Data System (ADS)

    Panda, Satyasen

    2018-05-01

    This paper proposes a modified artificial bee colony (ABC) optimization algorithm based on levy flight swarm intelligence, referred to as artificial bee colony levy flight stochastic walk (ABC-LFSW) optimization, for optical code division multiple access (OCDMA) networks. The ABC-LFSW algorithm is used to solve the asset assignment problem based on signal to noise ratio (SNR) optimization in OCDMA networks with quality of service constraints. The proposed optimization using the ABC-LFSW algorithm provides methods for minimizing various noises and interferences, regulating the transmitted power and optimizing the network design to improve the power efficiency of the optical code path (OCP) from source node to destination node. In this regard, an optical system model is proposed for improving network performance with optimized input parameters. A detailed discussion and simulation results on transmitted power allocation and the power efficiency of OCPs are included. The experimental results prove the superiority of the proposed network in terms of power efficiency and spectral efficiency in comparison to networks without any power allocation approach.

  16. Implementation of time-efficient adaptive sampling function design for improved undersampled MRI reconstruction

    NASA Astrophysics Data System (ADS)

    Choi, Jinhyeok; Kim, Hyeonjin

    2016-12-01

    To improve the efficacy of undersampled MRI, a method of designing adaptive sampling functions is proposed that is simple to implement on an MR scanner and yet effectively improves the performance of the sampling functions. An approximation of the energy distribution of an image (E-map) is estimated from highly undersampled k-space data acquired in a prescan and efficiently recycled in the main scan. An adaptive probability density function (PDF) is generated by combining the E-map with a modeled PDF. A set of candidate sampling functions are then prepared from the adaptive PDF, among which the one with maximum energy is selected as the final sampling function. To validate its computational efficiency, the proposed method was implemented on an MR scanner, and its robust performance in Fourier-transform (FT) MRI and compressed sensing (CS) MRI was tested by simulations and in a cherry tomato. The proposed method consistently outperforms the conventional modeled PDF approach for undersampling ratios of 0.2 or higher in both FT-MRI and CS-MRI. To fully benefit from undersampled MRI, it is preferable that the design of adaptive sampling functions be performed online immediately before the main scan. In this way, the proposed method may further improve the efficacy of the undersampled MRI.
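
    The sketch below illustrates the overall recipe described above: blend a prescan energy map with a modeled variable-density PDF, draw several candidate undersampling masks from the blended PDF, and keep the one that captures the most energy. The blending rule, mask count and toy k-space are assumptions, not the paper's exact algorithm.

    ```python
    import numpy as np

    def adaptive_mask(e_map, modeled_pdf, n_keep, n_candidates=20, seed=0):
        """Blend an energy map (from a prescan) with a modeled PDF, draw candidate
        undersampling masks, and return the candidate capturing the most energy."""
        rng = np.random.default_rng(seed)
        pdf = e_map / e_map.sum() * 0.5 + modeled_pdf / modeled_pdf.sum() * 0.5
        pdf = pdf.ravel() / pdf.ravel().sum()
        best_idx, best_energy = None, -np.inf
        for _ in range(n_candidates):
            idx = rng.choice(pdf.size, size=n_keep, replace=False, p=pdf)
            energy = e_map.ravel()[idx].sum()
            if energy > best_energy:
                best_idx, best_energy = idx, energy
        mask = np.zeros(pdf.size, bool)
        mask[best_idx] = True
        return mask.reshape(e_map.shape)

    # toy 64x64 k-space energy map concentrated at the centre + a Gaussian modeled PDF
    ky, kx = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64), indexing="ij")
    e_map = np.exp(-8 * (kx**2 + ky**2))
    modeled_pdf = np.exp(-4 * (kx**2 + ky**2))
    mask = adaptive_mask(e_map, modeled_pdf, n_keep=64 * 64 // 5)
    print("sampling ratio:", mask.mean())
    ```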

  17. Gas dynamic design of the pipe line compressor with 90% efficiency. Model test approval

    NASA Astrophysics Data System (ADS)

    Galerkin, Y.; Rekstin, A.; Soldatova, K.

    2015-08-01

    Gas dynamic design of a 32 MW pipeline compressor was carried out for PAO SMPO (Sumy, Ukraine). The technical specification requires a compressor efficiency of 90%. The customer proposed a favourable scheme: a single-stage design with a console impeller and an axial inlet. The authors used the standard optimization methodology for 2D impellers. An original methodology for internal scroll profiling was used to minimize efficiency losses. The radically improved 5th version of the Universal Modeling Method computer programs was used for precise calculation of the expected performance. The customer performed model tests at a 1:2 scale. The tests confirmed the calculated parameters at the design point (maximum efficiency of 90%) and over the whole range of flow rates. As far as the authors know, no compressor of this type has achieved such efficiency. The principles and methods of the gas-dynamic design are presented below. The data for the 32 MW compressor were presented by the customer in their report at the 16th International Compressor Conference (September 2014, Saint Petersburg) and later transferred to the authors.

  18. Defining and Assessing Quality Improvement Outcomes: A Framework for Public Health

    PubMed Central

    Nawaz, Saira; Thomas, Craig; Young, Andrea

    2015-01-01

    We describe an evidence-based framework to define and assess the impact of quality improvement (QI) in public health. Developed to address programmatic and research-identified needs for articulating the value of public health QI in aggregate, this framework proposes a standardized set of measures to monitor and improve the efficiency and effectiveness of public health programs and operations. We reviewed the scientific literature and analyzed QI initiatives implemented through the Centers for Disease Control and Prevention’s National Public Health Improvement Initiative to inform the selection of 5 efficiency and 8 effectiveness measures. This framework provides a model for identifying the types of improvement outcomes targeted by public health QI efforts and a means to understand QI’s impact on the practice of public health. PMID:25689185

  19. A performance improvement case study in aircraft maintenance and its implications for hazard identification.

    PubMed

    Ward, Marie; McDonald, Nick; Morrison, Rabea; Gaynor, Des; Nugent, Tony

    2010-02-01

    Aircraft maintenance is a highly regulated, safety critical, complex and competitive industry. There is a need to develop innovative solutions to address process efficiency without compromising safety and quality. This paper presents the case that in order to improve a highly complex system such as aircraft maintenance, it is necessary to develop a comprehensive and ecologically valid model of the operational system, which represents not just what is meant to happen, but what normally happens. This model then provides the backdrop against which to change or improve the system. A performance report, the Blocker Report, specific to aircraft maintenance and related to the model was developed gathering data on anything that 'blocks' task or check performance. A Blocker Resolution Process was designed to resolve blockers and improve the current check system. Significant results were obtained for the company in the first trial and implications for safety management systems and hazard identification are discussed. Statement of Relevance: Aircraft maintenance is a safety critical, complex, competitive industry with a need to develop innovative solutions to address process and safety efficiency. This research addresses this through the development of a comprehensive and ecologically valid model of the system linked with a performance reporting and resolution system.

  20. The Research and Evaluation of Road Environment in the Block of City Based on 3-D Streetscape Data

    NASA Astrophysics Data System (ADS)

    Guan, L.; Ding, Y.; Ge, J.; Yang, H.; Feng, X.; Chen, P.

    2018-04-01

    This paper focuses on the street environment of city block units. After clarifying the acquisition mode and characteristics of 3D streetscape data, the paper designs an assessment model for block units based on 3D streetscape data. 3D streetscape data acquired with oblique photogrammetry and mobile mapping equipment greatly improve the efficiency and accuracy of urban regional assessment and expand its scope. Based on the latest urban regional assessment model, combined with an assessment model of the current street environment, this paper analyzes the street form and the current street environment in a typical area of Beijing. Through the street environment assessment of block units, we found that in megacities an assessment model of block units based on 3D streetscape data greatly helps to improve assessment efficiency and accuracy. At the same time, problems such as motor vehicle lane encroachment, lack of green shade, damaged railings and loss of street space remain serious in Beijing, so improving the street environment of block units is still a demanding task. The research results will provide data support for refined urban management and urban design, and a solid foundation for improving the city's image.

  1. Virtual optical network mapping and core allocation in elastic optical networks using multi-core fibers

    NASA Astrophysics Data System (ADS)

    Xuan, Hejun; Wang, Yuping; Xu, Zhanqi; Hao, Shanshan; Wang, Xiaoli

    2017-11-01

    Virtualization technology can greatly improve the efficiency of networks by allowing virtual optical networks to share the resources of the physical networks. However, it faces several challenges, such as finding efficient strategies for virtual node mapping, virtual link mapping and spectrum assignment. The problem is even more complex and challenging when the physical elastic optical networks use multi-core fibers. To tackle these challenges, we establish a constrained optimization model to determine optimal schemes for virtual optical network mapping, core allocation and spectrum assignment. To solve the model efficiently, a tailor-made encoding scheme and crossover and mutation operators are designed. Based on these, an efficient genetic algorithm is proposed to obtain optimal schemes for virtual node mapping, virtual link mapping and core allocation. Simulation experiments are conducted on three widely used networks, and the results show the effectiveness of the proposed model and algorithm.

  2. Fundamental Challenges for Modeling Electrochemical Energy Storage Systems at the Atomic Scale.

    PubMed

    Groß, Axel

    2018-04-23

    There is a strong need to improve the efficiency of electrochemical energy storage, but progress is hampered by significant technological and scientific challenges. This review describes the potential contribution of atomic-scale modeling to the development of more efficient batteries, with a particular focus on first-principles electronic structure calculations. Numerical and theoretical obstacles are discussed, along with ways to overcome them, and some recent examples are presented illustrating the insights into electrochemical energy storage that can be gained from quantum chemical studies.

  3. From Push to Pull: Barriers to MALSP Modernization

    DTIC Science & Technology

    2013-03-01

    gain efficiencies and/or effectiveness in the MALSP II processes” (p. 1). CPI is the English term for the Japanese business model Kaizen. According to Hudgik (n.d.), “Kaizen was created in Japan following World War II. The word Kaizen means ‘continuous improvement.’ It comes from the Japanese words 改 (‘kai’), which means ‘change’ or ‘to correct,’ and 善 (‘zen’), which means ‘good.’” The Kaizen business model seeks to maximize efficiencies of …

  4. Search for Directed Networks by Different Random Walk Strategies

    NASA Astrophysics Data System (ADS)

    Zhu, Zi-Qi; Jin, Xiao-Ling; Huang, Zhi-Long

    2012-03-01

    A comparative study is carried out on the efficiency of five different random walk strategies searching on directed networks constructed from several typical complex networks. Because the differences in search efficiency among the strategies are rooted in network clustering, the clustering coefficient as seen by a random walker on directed networks is defined and computed to be half that of the corresponding undirected networks. The search processes are performed on directed networks based on the Erdős-Rényi model, the Watts-Strogatz model, the Barabási-Albert model and a clustered scale-free network model. It is found that the self-avoiding random walk strategy is the best search strategy for such directed networks. Compared to the unrestricted random walk strategy, path-iteration-avoiding random walks can also make the search process much more efficient. However, no-triangle-loop and no-quadrangle-loop random walks do not improve the search efficiency as expected, unlike on undirected networks, since the clustering coefficient of directed networks is smaller than that of undirected networks.
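
    A minimal self-avoiding random walk search on a directed Erdős-Rényi graph is sketched below (networkx); the graph size, density and success metric are arbitrary choices for illustration, not the configurations studied in the paper.

    ```python
    import random
    import networkx as nx

    def self_avoiding_walk(G, source, target, max_steps=10000, rng=None):
        """Self-avoiding random walk search on a directed graph: at each node, step
        uniformly to an unvisited successor; stop when the target is found or the
        walker is trapped. Returns the number of steps taken, or None on failure."""
        rng = rng or random.Random()
        visited = {source}
        node = source
        for step in range(1, max_steps + 1):
            if node == target:
                return step - 1
            nbrs = [v for v in G.successors(node) if v not in visited]
            if not nbrs:
                return None                  # trapped: no unvisited successor
            node = rng.choice(nbrs)
            visited.add(node)
        return None

    G = nx.gnp_random_graph(500, 0.02, directed=True, seed=1)
    steps = [self_avoiding_walk(G, 0, 499, rng=random.Random(s)) for s in range(200)]
    ok = [s for s in steps if s is not None]
    print(f"success rate: {len(ok)/len(steps):.2f}, mean search length: {sum(ok)/max(len(ok), 1):.1f}")
    ```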

  5. Enhancing Efficiency of Perovskite Solar Cells via Surface Passivation with Graphene Oxide Interlayer.

    PubMed

    Li, Hao; Tao, Leiming; Huang, Feihong; Sun, Qiang; Zhao, Xiaojuan; Han, Junbo; Shen, Yan; Wang, Mingkui

    2017-11-08

    Perovskite solar cells have been demonstrated as promising low-cost and highly efficient next-generation solar cells. Enhancing V OC by minimizing the interfacial recombination kinetics can further improve device performance. In this work, we report for the first time on surface passivation of perovskite layers with chemically modified graphene oxides, which act as an efficient interlayer to reduce interfacial recombination and enhance hole extraction. Our modeling indicates that the passivation effect mainly comes from the interaction between the functional group (4-fluorophenyl) and under-coordinated Pb ions. The resulting perovskite solar cells achieved a high power conversion efficiency of 18.75% with an enhanced open-circuit voltage (V OC) of 1.11 V. Ultrafast spectroscopy, photovoltage/photocurrent transient decay, and electronic impedance spectroscopy characterizations reveal the effective passivation and the energy loss mechanism. This work sheds light on the importance of interfacial engineering on the surface of perovskite layers and suggests possible ways to improve device efficiency.

  6. [Impact of the funding reform of teaching hospitals in Brazil].

    PubMed

    Lobo, M S C; Silva, A C M; Lins, M P E; Fiszman, R

    2009-06-01

    To assess the impact of a funding reform on the productivity of teaching hospitals. Based on the Information System of Federal University Hospitals of Brazil, efficiency and productivity in 2003 and 2006 were measured using frontier methods with a linear programming technique, data envelopment analysis, and an input-oriented variable-returns-to-scale model. The Malmquist index was calculated to detect changes during the study period: 'technical efficiency change', the relative variation of the efficiency of each unit, and 'technological change', the shift of the frontier. There was a 51% mean budget increase and an improvement in the technical efficiency of the teaching hospitals (17 hospitals reached the empirical efficiency frontier, up from 11), but the same was not seen for the technology frontier. Data envelopment analysis set benchmark scores for each inefficient unit (before and after the reform), and there was a positive correlation between technical efficiency and teaching intensity and dedication. The reform promoted management improvements, but further follow-up is needed to assess the effectiveness of the funding changes.

  7. Efficiency and productivity of hospitals in Vietnam.

    PubMed

    Pham, Thuy Linh

    2011-01-01

    The purpose of this paper is to examine the relative efficiency and productivity of hospitals during the health reform process. The data envelopment analysis (DEA) method with an input-oriented variable-returns-to-scale model was used to calculate efficiency scores. The Malmquist total factor productivity index approach was then employed to calculate the productivity of hospitals. Data for 101 hospitals were extracted from databases of the Ministry of Health, Vietnam, for the years 1998 to 2006. There was evidence of improvement in overall technical efficiency from 65 per cent in 1998 to 76 per cent in 2006. Hospitals' productivity progressed by around 1.4 per cent per year, mainly due to the improvement in technical efficiency. Furthermore, provincial hospitals were more technically efficient than their central counterparts, and hospitals located in different regions performed differently. The paper provides an insight into the performance of Vietnamese public hospitals, which has rarely been examined before, and contributes to the existing literature on hospital performance in developing countries.

  8. Real-time simulation of large-scale floods

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    According to the complex real-time water situation, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposed a robust, two-dimensional, shallow water model based on the unstructured Godunov- type finite volume method. A robust wet/dry front method is used to enhance the numerical stability. An adaptive method is proposed to improve the running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.

  9. Implementing a trustworthy cost-accounting model.

    PubMed

    Spence, Jay; Seargeant, Dan

    2015-03-01

    Hospitals and health systems can develop an effective cost-accounting model and maximize the effectiveness of their cost-accounting teams by focusing on six key areas: Implementing an enhanced data model. Reconciling data efficiently. Accommodating multiple cost-modeling techniques. Improving transparency of cost allocations. Securing department manager participation. Providing essential education and training to staff members and stakeholders.

  10. Kaizen: a process improvement model for the business of health care and perioperative nursing professionals.

    PubMed

    Tetteh, Hassan A

    2012-01-01

    Kaizen is a proven management technique that has a practical application for health care in the context of health care reform and the 2010 Institute of Medicine landmark report on the future of nursing. Compounded productivity is the unique benefit of kaizen, and its principles are change, efficiency, performance of key essential steps, and the elimination of waste through small and continuous process improvements. The kaizen model offers specific instruction for perioperative nurses to achieve process improvement in a five-step framework that includes teamwork, personal discipline, improved morale, quality circles, and suggestions for improvement. Published by Elsevier Inc.

  11. Heat transfer analysis of a lab scale solar receiver using the discrete ordinates model

    NASA Astrophysics Data System (ADS)

    Dordevich, Milorad C. W.

    This thesis documents the development, implementation and simulation outcomes of the Discrete Ordinates Radiation Model in ANSYS FLUENT simulating the radiative heat transfer occurring in the San Diego State University lab-scale Small Particle Heat Exchange Receiver. In tandem, it also serves to document how well the Discrete Ordinates Radiation Model results compared with those from the in-house developed Monte Carlo Ray Trace Method in a number of simplified geometries. The secondary goal of this study was the inclusion of new physics, specifically buoyancy. Implementation of an additional Monte Carlo Ray Trace Method software package known as VEGAS, which was specifically developed to model lab scale solar simulators and provide directional, flux and beam spread information for the aperture boundary condition, was also a goal of this study. Upon establishment of the model, test cases were run to understand the predictive capabilities of the model. It was shown that agreement within 15% was obtained against laboratory measurements made in the San Diego State University Combustion and Solar Energy Laboratory with the metrics of comparison being the thermal efficiency and outlet, wall and aperture quartz temperatures. Parametric testing additionally showed that the thermal efficiency of the system was very dependent on the mass flow rate and particle loading. It was also shown that the orientation of the small particle heat exchange receiver was important in attaining optimal efficiency due to the fact that buoyancy induced effects could not be neglected. The analyses presented in this work were all performed on the lab-scale small particle heat exchange receiver. The lab-scale small particle heat exchange receiver is 0.38 m in diameter by 0.51 m tall and operated with an input irradiation flux of 3 kWth and a nominal mass flow rate of 2 g/s with a suspended particle mass loading of 2 g/m3. Finally, based on acumen gained during the implementation and development of the model, a new and improved design was simulated to predict how the efficiency within the small particle heat exchange receiver could be improved through a few simple internal geometry design modifications. It was shown that the theoretical calculated efficiency of the small particle heat exchange receiver could be improved from 64% to 87% with adjustments to the internal geometry, mass flow rate, and mass loading.

  12. Modelling oxygen transfer using dynamic alpha factors.

    PubMed

    Jiang, Lu-Man; Garrido-Baserba, Manel; Nolasco, Daniel; Al-Omari, Ahmed; DeClippeleir, Haydee; Murthy, Sudhir; Rosso, Diego

    2017-11-01

    Due to the importance of wastewater aeration in meeting treatment requirements and due to its elevated energy intensity, it is important to describe the real nature of an aeration system to improve design and specification, performance prediction, energy consumption, and process sustainability. Because organic loadings drive aeration efficiency to its lowest value when the oxygen demand (energy) is the highest, the implications of considering their dynamic nature on energy costs are of utmost importance. A dynamic model aimed at identifying conservation opportunities is presented. The model developed describes the correlation between the COD concentration and the α factor in activated sludge. Using the proposed model, the aeration efficiency is calculated as a function of the organic loading (i.e. COD). This results in predictions of oxygen transfer values that are more realistic than the traditional method of assuming constant α values. The model was applied to two water resource recovery facilities, and was calibrated and validated with time-sensitive databases. Our improved aeration model structure increases the quality of prediction of field data through the recognition of the dynamic nature of the alpha factor (α) as a function of the applied oxygen demand. For the cases presented herein, the model prediction of airflow improved by 20-35% when dynamic α is used. The proposed model offers a quantitative tool for the prediction of energy demand and for minimizing aeration design uncertainty. Copyright © 2017 Elsevier Ltd. All rights reserved.
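
    To make the idea concrete, the sketch below assumes a simple exponential decline of the alpha factor with applied COD and compares the predicted air demand over a diurnal loading cycle with that obtained from a constant alpha; the functional form, coefficients and loading profiles are placeholders, not the calibrated correlation in the paper.

    ```python
    import numpy as np

    def alpha_from_cod(cod_mg_l, alpha_min=0.3, alpha_max=0.7, k=0.004):
        """Assumed exponential decline of the alpha factor with applied COD (placeholder)."""
        return alpha_min + (alpha_max - alpha_min) * np.exp(-k * cod_mg_l)

    def required_airflow(ot_demand_kg_h, cod_mg_l, sote_clean_water=0.35):
        """Air demand scales inversely with alpha times the clean-water transfer efficiency."""
        alpha = alpha_from_cod(cod_mg_l)
        kg_o2_per_m3_air = 0.28                      # approx. O2 mass in 1 m3 of air
        return ot_demand_kg_h / (alpha * sote_clean_water * kg_o2_per_m3_air)

    # diurnal COD loading: airflow predicted with dynamic alpha vs. a constant alpha of 0.5
    hours = np.arange(24)
    cod = 300 + 150 * np.sin((hours - 14) / 24 * 2 * np.pi)     # mg/L, assumed profile
    demand = 80 + 40 * np.sin((hours - 14) / 24 * 2 * np.pi)    # kg O2/h, assumed profile
    dynamic = required_airflow(demand, cod)
    constant = demand / (0.5 * 0.35 * 0.28)
    print(f"peak airflow, dynamic alpha: {dynamic.max():.0f} m3/h vs constant alpha: {constant.max():.0f} m3/h")
    ```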

  13. Applying Process Improvement Methods to Clinical and Translational Research: Conceptual Framework and Case Examples

    PubMed Central

    Selker, Harry P.; Leslie, Laurel K.

    2015-01-01

    Abstract There is growing appreciation that process improvement holds promise for improving quality and efficiency across the translational research continuum but frameworks for such programs are not often described. The purpose of this paper is to present a framework and case examples of a Research Process Improvement Program implemented at Tufts CTSI. To promote research process improvement, we developed online training seminars, workshops, and in‐person consultation models to describe core process improvement principles and methods, demonstrate the use of improvement tools, and illustrate the application of these methods in case examples. We implemented these methods, as well as relational coordination theory, with junior researchers, pilot funding awardees, our CTRC, and CTSI resource and service providers. The program focuses on capacity building to address common process problems and quality gaps that threaten the efficient, timely and successful completion of clinical and translational studies. PMID:26332869

  14. The application of a "6S Lean" initiative to improve workflow for emergency eye examination rooms.

    PubMed

    Nazarali, Samir; Rayat, Jaspreet; Salmonson, Hilary; Moss, Theodora; Mathura, Pamela; Damji, Karim F

    2017-10-01

    Ophthalmology residents on call at the Royal Alexandra Hospital identified workplace disorganization and lack of standardization in emergency eye examination rooms as an impediment to efficient patient treatment. The aim of the study was to use the "6S Lean" model to improve workflow in eye examination rooms at the Royal Alexandra Hospital. With the assistance of quality improvement consultants, the "6S Lean" model was applied to the current operation of the emergency eye clinic examination rooms. This model, considering 8 waste categories, was then used to recommend and implement changes to the examination rooms and to workplace protocols to enhance efficiency and safety. Eye examination rooms were improved with regards to setup, organization of supplies, inventory control, and maintenance. All targets were achieved, and the 5S audit checklist score increased by 33 points from 44 to 77. Implementation of the 6S methodology is a simple approach that removes inefficiencies from the workplace. The ophthalmology clinic removed waste from all 8 waste categories, increased audit results, mitigated patient and resident safety risks, and ultimately redirected resident time back to patient care delivery. Copyright © 2017 Canadian Ophthalmological Society. Published by Elsevier Inc. All rights reserved.

  15. AIR QUALITY MODELING AT NEIGHBORHOOD SCALES TO IMPROVE HUMAN EXPOSURE ASSESSMENT

    EPA Science Inventory

    Air quality modeling is an integral component of risk assessment and of subsequent development of effective and efficient management of air quality. Urban areas introduce of fresh sources of pollutants into regional background producing significant spatial variability of the co...

  16. Computational Modeling in Concert with Laboratory Studies: Application to B Cell Differentiation

    EPA Science Inventory

    Remediation is expensive, so accurate prediction of dose-response is important to help control costs. Dose response is a function of biological mechanisms. Computational models of these mechanisms improve the efficiency of research and provide the capability for prediction.

  17. Development and Implementation of Efficiency-Improving Analysis Methods for the SAGE III on ISS Thermal Model Originating

    NASA Technical Reports Server (NTRS)

    Liles, Kaitlin; Amundsen, Ruth; Davis, Warren; Scola, Salvatore; Tobin, Steven; McLeod, Shawn; Mannu, Sergio; Guglielmo, Corrado; Moeller, Timothy

    2013-01-01

    The Stratospheric Aerosol and Gas Experiment III (SAGE III) instrument is the fifth in a series of instruments developed for monitoring aerosols and gaseous constituents in the stratosphere and troposphere. SAGE III will be delivered to the International Space Station (ISS) via the SpaceX Dragon vehicle in 2015. A detailed thermal model of the SAGE III payload has been developed in Thermal Desktop (TD). Several novel methods have been implemented to facilitate efficient payload-level thermal analysis, including the use of a design of experiments (DOE) methodology to determine the worst-case orbits for SAGE III while on ISS, use of TD assemblies to move payloads from the Dragon trunk to the Enhanced Operational Transfer Platform (EOTP) to its final home on the Expedite the Processing of Experiments to Space Station (ExPRESS) Logistics Carrier (ELC)-4, incorporation of older models in varying unit sets, ability to change units easily (including hardcoded logic blocks), case-based logic to facilitate activating heaters and active elements for varying scenarios within a single model, incorporation of several coordinate frames to easily map to structural models with differing geometries and locations, and streamlined results processing using an Excel-based text file plotter developed in-house at LaRC. This document presents an overview of the SAGE III thermal model and describes the development and implementation of these efficiency-improving analysis methods.

  18. Patient Populations, Clinical Associations, and System Efficiency in Healthcare Delivery System

    NASA Astrophysics Data System (ADS)

    Liu, Yazhuo

    Efforts to improve health care delivery usually involve studies and analysis of patient populations and healthcare systems. In this dissertation, I present research conducted in the following areas: identifying patient groups, improving treatments for specific conditions using statistical and data mining techniques, and developing new operations research models to increase system efficiency from the health institutions' perspective. The results provide a better understanding of high-risk patient groups, more accuracy in detecting disease correlations, and practical scheduling tools that account for uncertain operation durations and real-life constraints.

  19. Smart glass as the method of improving the energy efficiency of high-rise buildings

    NASA Astrophysics Data System (ADS)

    Gamayunova, Olga; Gumerova, Eliza; Miloradova, Nadezda

    2018-03-01

    A key question for high-rise buildings is the choice of glazing and its service-life conditions. The contemporary market offers several types of window units, for instance wooden, aluminum, PVC and combined models. Wooden and PVC windows have become the most widespread and compete directly with each other. More recently, design engineers have begun to choose smart glass. In this article, the advantages and drawbacks of all types of windows are reviewed, and recommendations are given on the choice of window type in order to improve the energy efficiency of buildings.

  20. An improved task-role-based access control model for G-CSCW applications

    NASA Astrophysics Data System (ADS)

    He, Chaoying; Chen, Jun; Jiang, Jie; Han, Gang

    2005-10-01

    Access control is an important and popular security mechanism for multi-user applications. GIS-based Computer Supported Cooperative Work (G-CSCW) application is one of such applications. This paper presents an improved Task-Role-Based Access Control (X-TRBAC) model for G-CSCW applications. The new model inherits the basic concepts of the old ones, such as role and task. Moreover, it has introduced two concepts, i.e. object hierarchy and operation hierarchy, and the corresponding rules to improve the efficiency of permission definition in access control models. The experiments show that the method can simplify the definition of permissions, and it is more applicable for G-CSCW applications.
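
    As an illustration of how object and operation hierarchies can shrink the number of explicit permissions, the sketch below grants a permission on a parent object/operation and lets the check walk each hierarchy upward. The class names, the four-tuple permission format, and the GIS-flavoured example data are assumptions, not the X-TRBAC specification.

```python
# Hedged sketch of the hierarchy idea: a permission granted on a parent
# object/operation implicitly covers its descendants, so fewer explicit
# permissions need to be defined. Names and data are illustrative.

class Hierarchy:
    def __init__(self):
        self.parent = {}

    def add(self, child, parent=None):
        self.parent[child] = parent

    def ancestors(self, node):
        while node is not None:
            yield node
            node = self.parent.get(node)

class XTRBAC:
    def __init__(self, objects, operations):
        self.objects, self.operations = objects, operations
        self.permissions = set()          # {(role, task, object, operation)}

    def grant(self, role, task, obj, op):
        self.permissions.add((role, task, obj, op))

    def allowed(self, role, task, obj, op):
        # A grant on any ancestor object/operation covers the request.
        return any((role, task, o, p) in self.permissions
                   for o in self.objects.ancestors(obj)
                   for p in self.operations.ancestors(op))

objects, operations = Hierarchy(), Hierarchy()
objects.add("map")
objects.add("road_layer", "map")
objects.add("road_42", "road_layer")
operations.add("edit")
operations.add("move_vertex", "edit")

ac = XTRBAC(objects, operations)
ac.grant("editor", "update_roads", "road_layer", "edit")
print(ac.allowed("editor", "update_roads", "road_42", "move_vertex"))  # True
```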

  1. Coupled Neutron Transport for HZETRN

    NASA Technical Reports Server (NTRS)

    Slaba, Tony C.; Blattnig, Steve R.

    2009-01-01

    Exposure estimates inside space vehicles, surface habitats, and high-altitude aircraft exposed to space radiation are highly influenced by secondary neutron production. The deterministic transport code HZETRN has been identified as a reliable and efficient tool for such studies, but improvements to the underlying transport models and numerical methods are still necessary. In this paper, the forward-backward (FB) and directionally coupled forward-backward (DC) neutron transport models are derived, numerical methods for the FB model are reviewed, and a computationally efficient numerical solution is presented for the DC model. Both models are compared to the Monte Carlo codes HETC-HEDS, FLUKA, and MCNPX, and the DC model is shown to agree closely with the Monte Carlo results. Finally, it is found in the development of either model that the decoupling of low-energy neutrons from the light-particle transport procedure adversely affects low-energy light ion fluence spectra and exposure quantities. A first-order correction is presented to resolve the problem, and it is shown to be both accurate and efficient.

  2. Analysis of financing efficiency of big data industry in Guizhou province based on DEA models

    NASA Astrophysics Data System (ADS)

    Li, Chenggang; Pan, Kang; Luo, Cong

    2018-03-01

    Taking 20 listed enterprises of the big data industry in Guizhou province as samples, this paper uses the DEA method to evaluate the financing efficiency of the big data industry in Guizhou province. The results show that the pure technical efficiency of big data enterprises in Guizhou province is high, with a mean value of 0.925. The mean value of scale efficiency is 0.749 and the mean value of comprehensive efficiency is 0.693; the comprehensive financing efficiency is therefore low. Based on these results, this paper puts forward policy recommendations to improve the financing efficiency of the big data industry in Guizhou.
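
    To make the method concrete, the sketch below sets up an input-oriented CCR data envelopment analysis as one linear program per decision-making unit. The toy data, the choice of the CCR (constant returns to scale) formulation rather than the paper's exact DEA variant, and the scipy-based solver are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency score (theta) for each DMU.

    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs). Illustrative sketch; the
    paper's exact DEA variant is not specified here.
    """
    n, m = X.shape
    s = Y.shape[1]
    scores = np.empty(n)
    for o in range(n):
        c = np.zeros(1 + n)
        c[0] = 1.0                                   # minimise theta
        A_ub = np.zeros((m + s, 1 + n))
        b_ub = np.zeros(m + s)
        A_ub[:m, 0] = -X[o]                          # sum_j lam_j x_j <= theta * x_o
        A_ub[:m, 1:] = X.T
        A_ub[m:, 1:] = -Y.T                          # sum_j lam_j y_j >= y_o
        b_ub[m:] = -Y[o]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * (1 + n), method="highs")
        scores[o] = res.x[0]
    return scores

# Toy data: 5 hypothetical firms, inputs = (assets, costs), output = revenue
X = np.array([[5., 3.], [8., 1.], [7., 4.], [4., 2.], [10., 5.]])
Y = np.array([[4.], [5.], [3.], [2.], [6.]])
print(np.round(dea_ccr_input(X, Y), 3))   # 1.0 marks an efficient firm
```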

  3. Customized "In-Office" Three-Dimensional Printing for Virtual Surgical Planning in Craniofacial Surgery.

    PubMed

    Mendez, Bernardino M; Chiodo, Michael V; Patel, Parit A

    2015-07-01

    Virtual surgical planning using three-dimensional (3D) printing technology has improved surgical efficiency and precision. A limitation of this technology is that production of 3D surgical models requires a third-party source, leading to increased costs (up to $4000) and prolonged assembly times (averaging 2-3 weeks). The purpose of this study is to evaluate the feasibility, cost, and production time of customized skull models created by an "in-office" 3D printer for craniofacial reconstruction. Two patients underwent craniofacial reconstruction with the assistance of "in-office" 3D printing technology. Three-dimensional skull models were created from a bioplastic filament with a 3D printer using computed tomography (CT) image data. The cost and production time for each model were measured. For both patients, a customized 3D surgical model was used preoperatively to plan split calvarial bone grafting and intraoperatively to more efficiently and precisely perform the craniofacial reconstruction. The average cost for surgical model production with the "in-office" 3D printer was $25 (cost of bioplastic materials used to create the surgical model) and the average production time was 14 hours. Virtual surgical planning using "in-office" 3D printing is feasible and allows for a more cost-effective and less time-consuming method for creating surgical models and guides. By bringing 3D printing to the office setting, we hope to improve intraoperative efficiency, surgical precision, and overall cost for various types of craniofacial and reconstructive surgery.

  4. Efficient hierarchical trans-dimensional Bayesian inversion of magnetotelluric data

    NASA Astrophysics Data System (ADS)

    Xiang, Enming; Guo, Rongwen; Dosso, Stan E.; Liu, Jianxin; Dong, Hao; Ren, Zhengyong

    2018-06-01

    This paper develops an efficient hierarchical trans-dimensional (trans-D) Bayesian algorithm to invert magnetotelluric (MT) data for subsurface geoelectrical structure, with unknown geophysical model parameterization (the number of conductivity-layer interfaces) and data-error models parameterized by an auto-regressive (AR) process to account for potential error correlations. The reversible-jump Markov-chain Monte Carlo algorithm, which adds/removes interfaces and AR parameters in birth/death steps, is applied to sample the trans-D posterior probability density for model parameterization, model parameters, error variance and AR parameters, accounting for the uncertainties of model dimension and data-error statistics in the uncertainty estimates of the conductivity profile. To provide efficient sampling over the multiple subspaces of different dimensions, advanced proposal schemes are applied. Parameter perturbations are carried out in principal-component space, defined by eigen-decomposition of the unit-lag model covariance matrix, to minimize the effect of inter-parameter correlations and provide effective perturbation directions and length scales. Parameters of new layers in birth steps are proposed from the prior, instead of focused distributions centred at existing values, to improve birth acceptance rates. Parallel tempering, based on a series of parallel interacting Markov chains with successively relaxed likelihoods, is applied to improve chain mixing over model dimensions. The trans-D inversion is applied in a simulation study to examine the resolution of model structure according to the data information content. The inversion is also applied to a measured MT data set from south-central Australia.

  5. 3D modeling based on CityEngine

    NASA Astrophysics Data System (ADS)

    Jia, Guangyin; Liao, Kaiju

    2017-03-01

    Currently, there are many 3D modeling software packages, such as 3DMAX and AUTOCAD, and the more popular BIM software represented by REVIT. The CityEngine modeling software introduced in this paper can fully utilize existing GIS data and combine it with other built models to perform 3D modeling of the internal and external parts of buildings in a rapid and batch manner, so as to improve 3D modeling efficiency.

  6. Interface Design Optimization by an Improved Operating Model for College Students

    ERIC Educational Resources Information Center

    Ko, Ya-Chuan; Lo, Chi-Hung; Hsiao, Shih-Wen

    2017-01-01

    A method was proposed in this study for assessing the interface operating efficiency of a remote control. The operating efficiency of a product interface can be determined by the proposed approach in which the related dimensions of human palms were measured. The reachable range (blue zone) and the most comfortable range (green zone) were…

  7. Improving operating room productivity via parallel anesthesia processing.

    PubMed

    Brown, Michael J; Subramanian, Arun; Curry, Timothy B; Kor, Daryl J; Moran, Steven L; Rohleder, Thomas R

    2014-01-01

    Parallel processing of regional anesthesia may improve operating room (OR) efficiency in patients undergoing upper extremity surgical procedures. The purpose of this paper is to evaluate whether performing regional anesthesia outside the OR in parallel increases total cases per day and improves efficiency and productivity. Data from all adult patients who underwent regional anesthesia as their primary anesthetic for upper extremity surgery over a one-year period were used to develop a simulation model. The model evaluated pure operating modes of regional anesthesia performed within the OR and outside the OR in a parallel manner. The scenarios were used to evaluate how many surgeries could be completed in a standard work day (555 minutes) and, assuming a standard three cases per day, what the predicted end-of-day overtime would be. Modeling results show that parallel processing of regional anesthesia increases the average cases per day for all surgeons included in the study; the average increase was 0.42 surgeries per day. Where it was assumed that three cases per day would be performed by all surgeons, the number of days going into overtime was reduced by 43 percent with the parallel block. The overtime with parallel anesthesia was also projected to be 40 minutes less per day per surgeon. Key limitations include the assumption that all cases used regional anesthesia in the comparisons; many days may have both regional and general anesthesia. Also, as a single-center case study, generalizability may be limited. Perioperative care providers should consider parallel administration of regional anesthesia where there is a desire to increase daily upper extremity surgical case capacity. Where there are sufficient resources for parallel anesthesia processing, efficiency and productivity can be significantly improved. Simulation modeling can be an effective tool to show practice change effects at a system-wide level.
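
    A toy Monte Carlo flavour of this comparison is sketched below: case durations are drawn from assumed normal distributions, and a day is run either with the block performed sequentially in the OR or overlapped with the room turnover. The distributions, the 555-minute day handling, and all numbers are illustrative assumptions, not the fitted model from the paper.

```python
import random

# Toy Monte Carlo sketch of the sequential vs. parallel block comparison.
random.seed(0)
DAY_MIN, N_CASES, N_DAYS = 555, 3, 10_000

def day_length(parallel):
    total = 0.0
    for _ in range(N_CASES):
        block = max(0.0, random.gauss(25, 8))      # regional block placement, min
        surgery = max(0.0, random.gauss(120, 30))  # surgical + emergence time
        turnover = max(0.0, random.gauss(20, 5))   # room cleanup / setup
        if parallel:
            # Block is performed outside the OR, overlapped with the turnover.
            total += surgery + max(turnover, block)
        else:
            total += block + surgery + turnover
    return total

for mode in (False, True):
    days = [day_length(mode) for _ in range(N_DAYS)]
    overtime = sum(d > DAY_MIN for d in days) / N_DAYS
    print(f"parallel={mode}: mean day {sum(days)/N_DAYS:.0f} min, "
          f"overtime days {overtime:.0%}")
```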

  8. Automatic domain updating technique for improving computational efficiency of 2-D flood-inundation simulation

    NASA Astrophysics Data System (ADS)

    Tanaka, T.; Tachikawa, Y.; Ichikawa, Y.; Yorozu, K.

    2017-12-01

    Flooding is one of the most hazardous disasters and causes serious damage to people and property around the world. To prevent or mitigate flood damage through early warning systems and/or river management planning, numerical modelling of flood-inundation processes is essential. In the literature, flood-inundation models have been extensively developed and improved to achieve flood flow simulation with complex topography at high resolution. With increasing demands on flood-inundation modelling, its computational burden is now one of the key issues. Improvements to the computational efficiency of the full shallow water equations have been made from various perspectives, such as approximations of the momentum equations, parallelization techniques, and coarsening approaches. To complement these techniques and further improve the computational efficiency of flood-inundation simulations, this study proposes an Automatic Domain Updating (ADU) method for 2-D flood-inundation simulation. The ADU method traces the wet and dry interface and automatically updates the simulation domain in response to the progress and recession of flood propagation. The updating algorithm is as follows: first, register the simulation cells potentially flooded at the initial stage (such as floodplains near river channels); then, whenever a registered cell is flooded, register its surrounding cells. The overhead of this additional process is kept small by checking only cells at the wet-dry interface, and computation time is reduced by skipping the processing of non-flooded areas. This algorithm is easily applied to any type of 2-D flood-inundation model. The proposed ADU method is implemented in a 2-D local inertial equation model for the Yodo River basin, Japan. Case studies for two flood events show that the simulation finishes in one-half to one-tenth of the time while producing the same results as the simulation without the ADU method.
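
    The sketch below illustrates the registration idea on a raster grid: only registered cells are advanced, and a registered cell that becomes wet registers its neighbours. The wet-depth threshold, the grid, and the stub standing in for the local inertial update are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

# Minimal sketch of the Automatic Domain Updating idea: only "active" cells are
# updated each step, and any registered cell that becomes wet registers its
# neighbours. The flow solver itself is replaced by a stub.
WET_THRESHOLD = 1e-3   # m, depth above which a cell counts as wet (assumed)

def neighbours(i, j, shape):
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < shape[0] and 0 <= nj < shape[1]:
            yield ni, nj

def adu_step(depth, active, update_cell):
    """Advance one time step, updating only the registered (active) cells."""
    for i, j in list(active):
        depth[i, j] = update_cell(depth, i, j)   # e.g. local inertial update (stub)
        if depth[i, j] > WET_THRESHOLD:
            active.update(neighbours(i, j, depth.shape))

def dummy_update(depth, i, j):
    # Placeholder dynamics: take a fraction of the wettest neighbour's depth.
    return max(depth[i, j], 0.5 * max(depth[n] for n in neighbours(i, j, depth.shape)))

# Initial registration: wet river channel (column 0) and its adjacent floodplain.
depth = np.zeros((50, 50))
depth[:, 0] = 0.5
active = {(i, 0) for i in range(50)} | {(i, 1) for i in range(50)}

for _ in range(10):
    adu_step(depth, active, dummy_update)
print(len(active), "cells registered after 10 steps")
```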

  9. Performance analysis and comparison of a minimum interconnections direct storage model with traditional neural bidirectional memories.

    PubMed

    Bhatti, A Aziz

    2009-12-01

    This study proposes an efficient and improved model of a direct-storage bidirectional memory, the improved bidirectional associative memory (IBAM), and emphasises the use of nanotechnology for efficient implementation of such large-scale neural network structures at considerably lower cost, reduced complexity, and smaller implementation area. This memory model directly stores the X and Y associated sets of M bipolar binary vectors in the form of (MxN(x)) and (MxN(y)) memory matrices, requires O(N), or about 30%, of the interconnections with weight strengths ranging between +/-1, and is computationally very efficient compared with sequential, intraconnected and other bidirectional associative memory (BAM) models of the outer-product type, which require O(N(2)) complex interconnections with weight strengths ranging between +/-M. It is shown that the proposed model is functionally equivalent to, and possesses all attributes of, a BAM of the outer-product type, yet it is a simple and robust, very large scale integration (VLSI)-, optically and nanotechnology-realisable, modular and expandable bidirectional associative memory model in which the addition or deletion of a pair of vectors does not require changes in the strength of interconnections of the entire memory matrix. The analysis of the retrieval process, signal-to-noise ratio, storage capacity and stability of the proposed model, as well as of the traditional BAM, has been carried out. Constraints on and characteristics of unipolar and bipolar binaries for improved storage and retrieval are discussed. The simulation results show that it has log(e) N times higher storage capacity, superior performance, and faster convergence and retrieval time when compared to traditional sequential and intraconnected bidirectional memories.
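
    A hedged reading of the direct-storage idea is sketched below: the pattern matrices X and Y are kept as-is and recall propagates through them, which reproduces the outer-product BAM recall sgn(x·XᵀY) while storing only O(M(Nx+Ny)) weights. The sizes, noise level, and recall functions are illustrative and are not the IBAM formulation itself.

```python
import numpy as np

# Hedged sketch of direct-storage bidirectional recall: X (M x Nx) and Y (M x Ny)
# are stored directly, and recall propagates through them instead of through an
# Nx x Ny outer-product weight matrix. Illustration only, not the IBAM model.
rng = np.random.default_rng(1)
M, Nx, Ny = 4, 64, 32
X = rng.choice([-1, 1], size=(M, Nx))
Y = rng.choice([-1, 1], size=(M, Ny))

def recall_forward(x):
    return np.sign(x @ X.T @ Y)      # x -> y direction

def recall_backward(y):
    return np.sign(y @ Y.T @ X)      # y -> x direction

# Probe with a noisy version of a stored pattern (5 flipped bits).
probe = X[2].copy()
flip = rng.choice(Nx, 5, replace=False)
probe[flip] *= -1
y_hat = recall_forward(probe)
print("recovered stored Y pattern:", np.array_equal(y_hat, Y[2]))
```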

  10. Transportation economics and energy

    NASA Astrophysics Data System (ADS)

    Soltani Sobh, Ali

    The overall objective of this research is to study the impacts of technology improvements, including fuel-efficiency gains and the extended use of natural gas and electric vehicles, on key transportation parameters. In the first chapter, a simple economic analysis is used to estimate the adoption rate of natural gas vehicles as an alternative-fuel vehicle, and the effect of different factors on commuters' adoption rates is examined in a sensitivity analysis. In the second chapter, VMT is modeled and forecast under the influence of CNG vehicles in different scenarios; the VMT modeling is based on time series data for Washington State, and a per capita model is also developed to investigate the effect of population growth on VMT. In the third chapter, the effect of fuel-efficiency improvement on fuel tax revenue and greenhouse gas emissions is examined. The model is developed based on time series data for Washington State. The rebound effect resulting from fuel-efficiency improvement is estimated and considered in the fuel consumption forecasting. The reduction in fuel tax revenue and greenhouse gas (GHG) emissions, as two outcomes of lower fuel consumption, are computed, and an appropriate fuel tax rate to restore the revenue is suggested. In the fourth chapter, factors affecting electric vehicle (EV) adoption are discussed. The constructed model is an aggregated binomial logit share model that estimates the modal split between EVs and conventional vehicles for different states over time. Various factors are incorporated in the utility function as explanatory variables in order to quantify their effect on EV adoption choices; these include income, VMT, electricity price, gasoline price, urban area, and the number of EV stations.
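
    For concreteness, the sketch below shows the shape of an aggregated binomial logit share calculation of this kind: the EV share is exp(V_EV)/(exp(V_EV)+exp(V_CV)) with utilities linear in the explanatory variables. All coefficient values and the exact utility specification are made-up placeholders, not the estimates from the dissertation.

```python
import numpy as np

# Sketch of an aggregated binomial logit share model. Coefficients below are
# made-up placeholders, not estimated values.
beta = {
    "asc_ev": -5.0,              # alternative-specific constant for EV
    "income": 0.02,              # per $1,000 of median income
    "electricity_price": -0.05,  # per cent/kWh
    "gasoline_price": 0.30,      # per $/gallon (raises EV utility relative to CV)
    "stations": 0.001,           # per public charging station
}

def ev_share(income, elec_price, gas_price, stations):
    v_ev = (beta["asc_ev"] + beta["income"] * income
            + beta["electricity_price"] * elec_price
            + beta["stations"] * stations)
    v_cv = -beta["gasoline_price"] * gas_price   # higher gas price lowers CV utility
    return np.exp(v_ev) / (np.exp(v_ev) + np.exp(v_cv))

# One hypothetical state-year: income 65 ($1,000s), 12 c/kWh, $3.5/gal, 800 stations
print(f"predicted EV share: {ev_share(65, 12.0, 3.5, 800):.1%}")
```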

  11. Integrating empowerment evaluation and quality improvement to achieve healthcare improvement outcomes.

    PubMed

    Wandersman, Abraham; Alia, Kassandra Ann; Cook, Brittany; Ramaswamy, Rohit

    2015-10-01

    While the body of evidence-based healthcare interventions grows, the ability of health systems to deliver these interventions effectively and efficiently lags behind. Quality improvement approaches, such as the model for improvement, have demonstrated some success in healthcare but their impact has been lessened by implementation challenges. To help address these challenges, we describe the empowerment evaluation approach that has been developed by programme evaluators and a method for its application (Getting To Outcomes (GTO)). We then describe how GTO can be used to implement healthcare interventions. An illustrative healthcare quality improvement example that compares the model for improvement and the GTO method for reducing hospital admissions through improved diabetes care is described. We conclude with suggestions for integrating GTO and the model for improvement. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  12. Efficiency Analysis of Waveform Shape for Electrical Excitation of Nerve Fibers

    PubMed Central

    Wongsarnpigoon, Amorn; Woock, John P.; Grill, Warren M.

    2011-01-01

    Stimulation efficiency is an important consideration in the stimulation parameters of implantable neural stimulators. The objective of this study was to analyze the effects of waveform shape and duration on the charge, power, and energy efficiency of neural stimulation. Using a population model of mammalian axons and in vivo experiments on cat sciatic nerve, we analyzed the stimulation efficiency of four waveform shapes: square, rising exponential, decaying exponential, and rising ramp. No waveform was simultaneously energy-, charge-, and power-optimal, and differences in efficiency among waveform shapes varied with pulse width (PW). For short PWs (≤0.1 ms), square waveforms were no less energy-efficient than exponential waveforms, and the most charge-efficient shape was the ramp. For long PWs (≥0.5 ms), the square was the least energy-efficient and charge-efficient shape, but across most PWs, the square was the most power-efficient shape. Rising exponentials provided no practical gains in efficiency over the other shapes, and our results refute previous claims that the rising exponential is the energy-optimal shape. An improved understanding of how stimulation parameters affect stimulation efficiency will help improve the design and programming of implantable stimulators to minimize tissue damage and extend battery life. PMID:20388602
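
    The bookkeeping behind such comparisons can be sketched as below: for a given waveform i(t) and pulse width, charge is the integral of i(t) and energy the integral of i(t)²R. In the study each shape's amplitude is set at its own excitation threshold, which requires an axon model; here the amplitudes, the load resistance, and the exponential time constant are arbitrary illustrative inputs.

```python
import numpy as np

# Sketch of charge/energy bookkeeping per pulse for the four waveform shapes.
# Amplitudes, resistance, and time constant are illustrative, not thresholds.
def pulse(shape, amplitude, pw, n=1000, tau_factor=0.2):
    t = np.linspace(0.0, pw, n)
    tau = tau_factor * pw
    if shape == "square":
        i = np.full_like(t, amplitude)
    elif shape == "rising_exp":
        i = amplitude * np.exp((t - pw) / tau)
    elif shape == "decaying_exp":
        i = amplitude * np.exp(-t / tau)
    elif shape == "ramp":
        i = amplitude * t / pw
    else:
        raise ValueError(shape)
    return t, i

def charge_and_energy(t, i, R=1.0e3):          # R: load resistance, ohms (assumed)
    dt = t[1] - t[0]
    q = np.sum(i) * dt                         # coulombs
    e = np.sum(i**2) * dt * R                  # joules
    return q, e

for shape in ("square", "rising_exp", "decaying_exp", "ramp"):
    t, i = pulse(shape, amplitude=1e-3, pw=0.5e-3)   # 1 mA, 0.5 ms
    q, e = charge_and_energy(t, i)
    print(f"{shape:>12}: charge {q*1e9:.1f} nC, energy {e*1e9:.1f} nJ")
```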

  13. Testing particle filters on convective scale dynamics

    NASA Astrophysics Data System (ADS)

    Haslehner, Mylene; Craig, George. C.; Janjic, Tijana

    2014-05-01

    Particle filters have been developed in recent years to deal with the highly nonlinear dynamics and non-Gaussian error statistics that also characterize data assimilation on convective scales. In this work we explore the use of the efficient particle filter (van Leeuwen, 2011) for convective-scale data assimilation applications. The method is tested in an idealized setting on two stochastic models designed to reproduce some of the properties of convection, for example the rapid development and decay of convective clouds. The first model is a simple one-dimensional, discrete-state birth-death model of clouds (Craig and Würsch, 2012). For this model, the efficient particle filter that includes nudging of the variables shows significant improvement compared to the Ensemble Kalman Filter and the Sequential Importance Resampling (SIR) particle filter. The success of the combination of nudging and resampling, measured as RMS error with respect to the 'true state', is proportional to the nudging intensity; significantly, even a very weak nudging intensity brings notable improvement over SIR. The second model is a modified version of a stochastic shallow water model (Würsch and Craig, 2013), which contains more realistic dynamical characteristics of convective-scale phenomena. Using the efficient particle filter and different combinations of observations of the three field variables (wind, water 'height' and rain) allows the particle filter to be evaluated in comparison to a regime where only nudging is used. Sensitivity to the properties of the model error covariance is also considered. Finally, criteria are identified under which the efficient particle filter outperforms nudging alone. References: Craig, G. C. and M. Würsch, 2012: The impact of localization and observation averaging for convective-scale data assimilation in a simple stochastic model. Q. J. R. Meteorol. Soc., 139, 515-523. Van Leeuwen, P. J., 2011: Efficient non-linear data assimilation in geophysical fluid dynamics. Computers and Fluids, doi:10.1016/j.compfluid.2010.11.011. Würsch, M. and G. C. Craig, 2013: A simple dynamical model of cumulus convection for data assimilation research, submitted to Met. Zeitschrift.
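
    As a minimal illustration of the nudging-plus-resampling idea only (not the full efficient particle filter proposal density), the sketch below runs an SIR filter on a toy scalar stochastic model and adds a relaxation term that pulls particles toward each observation before weighting. The toy model, noise levels, and nudging strength are assumptions.

```python
import numpy as np

# Minimal SIR particle filter with a simple nudging (relaxation) step.
# Illustration of the idea only; not van Leeuwen's efficient proposal scheme.
rng = np.random.default_rng(0)
N_PART, N_STEPS, OBS_ERR, NUDGE = 200, 50, 0.5, 0.3

def model_step(x):
    """Toy nonlinear stochastic model standing in for the convection models."""
    return 0.9 * x + 2.0 * np.sin(x) + rng.normal(0.0, 0.3, size=x.shape)

truth = np.zeros(1)
particles = rng.normal(0.0, 1.0, N_PART)
errors = []
for _ in range(N_STEPS):
    truth = model_step(truth)
    obs = truth[0] + rng.normal(0.0, OBS_ERR)
    particles = model_step(particles)
    particles += NUDGE * (obs - particles)              # nudge toward the obs
    logw = -0.5 * ((obs - particles) / OBS_ERR) ** 2    # Gaussian log-weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(N_PART, size=N_PART, p=w)          # SIR resampling
    particles = particles[idx]
    errors.append(abs(particles.mean() - truth[0]))
print(f"mean analysis error: {np.mean(errors):.2f}")
```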

  14. Detailed assessment of global transport-energy models’ structures and projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yeh, Sonia; Mishra, Gouri Shankar; Fulton, Lew

    This paper focuses on comparing the frameworks and projections from four major global transportation models with considerable transportation technology and behavioral detail. We analyze and compare the modeling frameworks, underlying data, assumptions, intermediate parameters, and projections to identify the sources of divergence or consistency, as well as key knowledge gaps. We find that there are significant differences in the base-year data and key parameters for future projections, especially for developing countries. These include passenger and freight activity, mode shares, vehicle ownership rates, and even energy consumption by mode, particularly for shipping, aviation and trucking. This may be due in part to a lack of previous efforts to do such consistency-checking and “bench-marking.” We find that the four models differ in terms of the relative roles of various mitigation strategies to achieve a 2°C / 450 ppm CO2e target: the economics-based integrated assessment models favor the use of low carbon fuels as the primary mitigation option followed by efficiency improvements, whereas transport-only and expert-based models favor efficiency improvements of vehicles followed by mode shifts. We offer recommendations for future modeling improvements focusing on (1) reducing data gaps; (2) translating the findings from this study into relevant policy implications such as feasibility of current policy goals, additional policy targets needed, regional vs. global reductions, etc.; (3) modeling strata of demographic groups to improve understanding of vehicle ownership levels, travel behavior, and urban vs. rural considerations; and (4) conducting coordinated efforts in aligning input assumptions and historical data, policy analysis, and modeling insights.

  15. Nanoscale Light Manipulation for Improved Organic Solar Cells

    NASA Astrophysics Data System (ADS)

    Fisher, Brett

    Organic solar cells can be made flexible, semi-transparent, and low-cost, making them ideal for novel energy harvesting applications such as greenhouses. However, the main disadvantage of this technology is its low energy conversion efficiency (<15%), mostly due to high recombination rates compared with higher-performing technologies such as thin-film GaAs (>30% efficiency) and Si-based (>20% efficiency) solar cells, in which recombination is much lower. There are still many challenges to overcome to improve the efficiency of organic solar cells, including maximising the absorption of the solar spectrum, improving the charge dynamics, and increasing the lifetime of the devices. One method to address some of these challenges is to include plasmonic nanoparticles in the devices, which has been shown to increase absorption through scattering and to improve the charge dynamics through localised surface plasmon resonance effects. However, including nanoparticles in organic solar cells has also been shown to adversely affect device performance in other ways, such as increasing exciton recombination. To address this, an additional (insulating) coating around the nanoparticles suppresses this increase and has been shown to increase the performance of the solar cells. In this work, we demonstrate the use of our all-inclusive optical model in the design and optimisation of bespoke colour-specific windows (i.e. red, green, and blue), where the solar cells can be made to have a specific transparency and colour whilst maximising their efficiency. For example, we could specify that we wish the colour to be red with 50% transmissivity; the model will then maximise the power conversion efficiency. We also demonstrate how our extension to Mie theory can simulate nanoparticle systems and can be used to tune the plasmon resonance utilising different coatings and configurations thereof.

  16. A novel energy recovery system for parallel hybrid hydraulic excavator.

    PubMed

    Li, Wei; Cao, Baoyu; Zhu, Zhencai; Chen, Guoan

    2014-01-01

    Hydraulic excavator energy saving is important for relieving energy source shortages and protecting the environment. This paper mainly discusses energy saving for the hybrid hydraulic excavator. By analyzing the excess energy of the three hydraulic cylinders in a conventional hydraulic excavator, a new boom potential energy recovery system is proposed. The mathematical models of the main components, including the boom cylinder, hydraulic motor, and hydraulic accumulator, are built, and the natural frequency of the proposed energy recovery system is calculated based on these models. Meanwhile, simulation models of the proposed system and a conventional energy recovery system are built in AMESim software. The results show that the proposed system is more effective than the conventional energy saving system. Finally, the main components of the proposed energy recovery system, including the accumulator and hydraulic motor, are analyzed for improving the energy recovery efficiency, and measures to improve the energy recovery efficiency of the proposed system are presented.

  17. The Lindsay Leg Club: supporting the NHS to provide leg ulcer care.

    PubMed

    McKenzie, Morag

    2013-06-01

    Public health services will need to cope with additional demands due to an ageing society and the increasing prevalence of chronic conditions. Lower-limb ulceration is a long-term, life-changing condition and leg ulcer management can be challenging for nursing staff. The Lindsay Leg Club model is a unique partnership between community nurses, members and the local community, which provides quality of care and empowerment for patients with leg ulcers, while also supporting and educating nursing staff. The Leg Club model works in accord with core themes of Government and NHS policy. Patient feedback on the Leg Club model is positive and the Leg Clubs provide a service to members which is well accepted by patients, yet is more economically efficient than the traditional district nursing practice of home visits. Lindsay Leg Clubs provide a valuable support service to the NHS in delivering improved quality of care while improving efficiency.

  18. Multilayer shallow water models with locally variable number of layers and semi-implicit time discretization

    NASA Astrophysics Data System (ADS)

    Bonaventura, Luca; Fernández-Nieto, Enrique D.; Garres-Díaz, José; Narbona-Reina, Gladys

    2018-07-01

    We propose an extension of the discretization approaches for multilayer shallow water models, aimed at making them more flexible and efficient for realistic applications to coastal flows. A novel discretization approach is proposed, in which the number of vertical layers and their distribution are allowed to change in different regions of the computational domain. Furthermore, semi-implicit schemes are employed for the time discretization, leading to a significant efficiency improvement for subcritical regimes. We show that, in the typical regimes in which the application of multilayer shallow water models is justified, the resulting discretization does not introduce any major spurious features and again allows the computational cost to be reduced substantially in areas with complex bathymetry. As an example of the potential of the proposed technique, an application to a sediment transport problem is presented, showing a remarkable improvement with respect to standard discretization approaches.

  19. A Novel Energy Recovery System for Parallel Hybrid Hydraulic Excavator

    PubMed Central

    Li, Wei; Cao, Baoyu; Zhu, Zhencai; Chen, Guoan

    2014-01-01

    Hydraulic excavator energy saving is important for relieving energy source shortages and protecting the environment. This paper mainly discusses energy saving for the hybrid hydraulic excavator. By analyzing the excess energy of the three hydraulic cylinders in a conventional hydraulic excavator, a new boom potential energy recovery system is proposed. The mathematical models of the main components, including the boom cylinder, hydraulic motor, and hydraulic accumulator, are built, and the natural frequency of the proposed energy recovery system is calculated based on these models. Meanwhile, simulation models of the proposed system and a conventional energy recovery system are built in AMESim software. The results show that the proposed system is more effective than the conventional energy saving system. Finally, the main components of the proposed energy recovery system, including the accumulator and hydraulic motor, are analyzed for improving the energy recovery efficiency, and measures to improve the energy recovery efficiency of the proposed system are presented. PMID:25405215

  20. Optimizing Chemical Reactions with Deep Reinforcement Learning

    PubMed Central

    2017-01-01

    Deep reinforcement learning was employed to optimize chemical reactions. Our model iteratively records the results of a chemical reaction and chooses new experimental conditions to improve the reaction outcome. This model outperformed a state-of-the-art blackbox optimization algorithm by using 71% fewer steps on both simulations and real reactions. Furthermore, we introduced an efficient exploration strategy by drawing the reaction conditions from certain probability distributions, which resulted in an improvement on regret from 0.062 to 0.039 compared with a deterministic policy. Combining the efficient exploration policy with accelerated microdroplet reactions, optimal reaction conditions were determined in 30 min for the four reactions considered, and a better understanding of the factors that control microdroplet reactions was reached. Moreover, our model showed a better performance after training on reactions with similar or even dissimilar underlying mechanisms, which demonstrates its learning ability. PMID:29296675

  1. Experiment and modeling of paired effect on evacuation from a three-dimensional space

    NASA Astrophysics Data System (ADS)

    Jun, Hu; Huijun, Sun; Juan, Wei; Xiaodan, Chen; Lei, You; Musong, Gu

    2014-10-01

    A novel three-dimensional cellular automata evacuation model is proposed that incorporates a stairs factor, the paired effect, and varying velocities in pedestrian evacuation. In the model, a pedestrian's probability of moving to a target position at the next moment is defined based on a distance profit and a repulsive-force profit, and the evacuation strategy is elaborated in detail by analyzing the varying velocities and repulsion phenomena in the moving process. Finally, experiments on the simulation platform were conducted to study the relationships between evacuation time, average velocity and pedestrian velocity. The results showed that when the ratio of single pedestrians in the system was higher, the shortest-route strategy improved evacuation efficiency; in turn, when the ratio of paired pedestrians was higher, a strategy that avoided conflicts improved evacuation efficiency, and priority should be given to scattered evacuation.
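
    A hedged sketch of the move-probability idea follows: each candidate neighbouring cell is scored by a distance profit (approach to the exit) and a repulsive-force profit (avoidance of occupied cells), and the scores are turned into move probabilities. The weighting constants and scoring functions are illustrative placeholders, not the calibrated rules of the paper.

```python
import numpy as np

# Hedged sketch of distance-profit / repulsion-profit move probabilities on a
# 2-D grid slice. Weights and scoring functions are illustrative only.
K_D, K_R = 2.0, 1.0

def move_probabilities(pos, exit_pos, occupied, grid_shape):
    candidates, scores = [], []
    for d in ((1, 0), (-1, 0), (0, 1), (0, -1), (0, 0)):
        cell = (pos[0] + d[0], pos[1] + d[1])
        if not (0 <= cell[0] < grid_shape[0] and 0 <= cell[1] < grid_shape[1]):
            continue
        # Distance profit: closer to the exit is better (less negative).
        dist_profit = -np.hypot(cell[0] - exit_pos[0], cell[1] - exit_pos[1])
        # Repulsion profit: penalise occupied cells in the immediate neighbourhood.
        repulse_profit = -sum(1.0 for o in occupied
                              if abs(o[0] - cell[0]) + abs(o[1] - cell[1]) <= 1)
        candidates.append(cell)
        scores.append(np.exp(K_D * dist_profit + K_R * repulse_profit))
    p = np.array(scores) / np.sum(scores)
    return candidates, p

cells, p = move_probabilities(pos=(5, 5), exit_pos=(0, 5),
                              occupied={(4, 5)}, grid_shape=(10, 10))
for c, pi in zip(cells, p):
    print(c, round(float(pi), 3))
```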

  2. Specialized nursing practice for chronic disease management in the primary care setting: an evidence-based analysis.

    PubMed

    2013-01-01

    In response to the increasing demand for better chronic disease management and improved health care efficiency in Ontario, nursing roles have expanded in the primary health care setting. To determine the effectiveness of specialized nurses who have a clinical role in patient care in optimizing chronic disease management among adults in the primary health care setting. A literature search was performed using OVID MEDLINE, OVID MEDLINE In-Process and Other Non-Indexed Citations, OVID EMBASE, EBSCO Cumulative Index to Nursing & Allied Health Literature (CINAHL), the Wiley Cochrane Library, and the Centre for Reviews and Dissemination database. Results were limited to randomized controlled trials and systematic reviews and were divided into 2 models: Model 1 (nurse alone versus physician alone) and Model 2 (nurse and physician versus physician alone). Effectiveness was determined by comparable outcomes between groups in Model 1, or improved outcomes or efficiency in Model 2. Six studies were included. In Model 1, there were no significant differences in health resource use, disease-specific measures, quality of life, or patient satisfaction. In Model 2, there was a reduction in hospitalizations and improved management of blood pressure and lipids among patients with coronary artery disease. Among patients with diabetes, there was a reduction in hemoglobin A1c but no difference in other disease-specific measures. There was a trend toward improved process measures, including medication prescribing and clinical assessments. Results related to quality of life were inconsistent, but patient satisfaction with the nurse-physician team was improved. Overall, there were more and longer visits to the nurse, and physician workload did not change. There was heterogeneity across patient populations, and in the titles, roles, and scope of practice of the specialized nurses. Specialized nurses with an autonomous role in patient care had comparable outcomes to physicians alone (Model 1) based on moderate quality evidence, with consistent results among a subgroup analysis of patients with diabetes based on low quality evidence. Model 2 showed an overall improvement in appropriate process measures, disease-specific measures, and patient satisfaction based on low to moderate quality evidence. There was low quality evidence that nurses working under Model 2 may reduce hospitalizations for patients with coronary artery disease. The specific role of the nurse in supplementing or substituting physician care was unclear, making it difficult to determine the impact on efficiency. Nurses with additional skills, training, or scope of practice may help improve the primary care of patients with chronic diseases. This review found that specialized nurses working on their own could achieve health outcomes that were similar to those of doctors. It also found that specialized nurses who worked with doctors could reduce hospital visits and improve certain patient outcomes related to diabetes, coronary artery disease, or heart failure. Patients who had nurse-led care were more satisfied and tended to receive more tests and medications. It is unclear whether specialized nurses improve quality of life or doctor workload.

  3. [New model of doctor-nurse communication based on electronic medical advice platform].

    PubMed

    Cao, Yang; Ding, Aimin; Wang, Yan

    2012-01-01

    This article introduces a new model of communication between doctors and nurses, supported by an electronic medical advice platform. The model has achieved good results in improving doctors' and nurses' collaborative working efficiency, treating patients safely, preventing medical accidents, and reducing medical errors, among other benefits.

  4. Investigating Approaches to Estimating Covariate Effects in Growth Mixture Modeling: A Simulation Study

    ERIC Educational Resources Information Center

    Li, Ming; Harring, Jeffrey R.

    2017-01-01

    Researchers continue to be interested in efficient, accurate methods of estimating coefficients of covariates in mixture modeling. Including covariates related to the latent class analysis not only may improve the ability of the mixture model to clearly differentiate between subjects but also makes interpretation of latent group membership more…

  5. Aerodynamical Probation Of Semi-Industrial Production Plant For Centrifugal Dust Collectors’ Efficiency Research

    NASA Astrophysics Data System (ADS)

    Buligin, Y. I.; Zharkova, M. G.; Alexeenko, L. N.

    2017-01-01

    In previous studies, experiments were carried out on small-scale models of cyclonic units; the semi-industrial pilot plant «Cyclone» has now been completed, allowing comparative testing of full-scale centrifugal dust collectors of different shapes and comparison of their efficiency. This original research plant is patented by the authors. The aim of the study is to improve the efficiency of exhaust-gas dust collection by creating improved designs of centrifugal dust collectors that allow their structural parameters to be adjusted depending on the properties and characteristics of the air-fuel flow. The objectives of the study include identifying and studying the relationship between the structural parameters of cyclonic apparatus and their aerodynamic characteristics and dust-collecting efficiency. The work is highly relevant, especially for the future practical application of its results in dust removal technology.

  6. Wave propagation in equivalent continuums representing truss lattice materials

    DOE PAGES

    Messner, Mark C.; Barham, Matthew I.; Kumar, Mukul; ...

    2015-07-29

    Stiffness scales linearly with density in stretch-dominated lattice meta-materials, offering the possibility of very light yet very stiff structures. Current additive manufacturing techniques can assemble structures from lattice materials, but the design of such structures will require accurate, efficient simulation methods. Equivalent continuum models have several advantages over discrete truss models of stretch-dominated lattices, including computational efficiency and ease of model construction. However, the development of an equivalent model suitable for representing the dynamic response of a periodic truss in the small-deformation regime is complicated by microinertial effects. This study derives a dynamic equivalent continuum model for periodic truss structures suitable for representing long-wavelength wave propagation and verifies it against the full Bloch wave theory and detailed finite element simulations. The model must incorporate microinertial effects to accurately reproduce long-wavelength characteristics of the response such as anisotropic elastic soundspeeds. Finally, the formulation presented here also improves upon previous work by preserving equilibrium at truss joints for simple lattices and by improving numerical stability by eliminating vertices in the effective yield surface.

  7. Removal of bio-aerosols by water flow on surfaces in health-care settings

    NASA Astrophysics Data System (ADS)

    Yu, Han; Li, Yuguo

    2016-11-01

    Hand hygiene is one of the most important and efficient measures to prevent infections; however, compliance with hand hygiene remains poor, especially among health-care workers. To improve this situation, the mechanisms of hand cleansing need to be explored, and a detailed study of the adhesion interactions of bio-aerosols on hand surfaces and of the process of particle removal by flow is important for developing more efficient methods to decrease infections. The first part of the presentation focuses on modelling the adhesion interactions between particles, such as bacteria and viruses, and hand surfaces with roughness in a water environment; the model presented is based on the DLVO theory and its extensions. The removal process comes next, for which a new model is put forward to describe the removal of particles by water flow. In this model, molecular dynamics is combined with particle motion, and the results from the model are compared with experimental results and existing models (RnR, Rock & Roll). Finally, possible improvements of the study and the future design of experiments are discussed.

  8. Eco-efficiency and life cycle cost analysis: Development of a design support tool for the aeronautical industry

    NASA Astrophysics Data System (ADS)

    Mami, Fares

    The aeronautical sector, responsible for about 3% of global greenhouse gas emissions, projects 70% growth in its emissions by 2025 and 300% to 500% by 2050 compared with 2005 levels. Decision-makers must therefore be supported in their design choices so that environmental aspects are integrated into decision-making. Our industrial partner in the aeronautical sector has developed expertise in Life Cycle Assessment (LCA) and seeks to integrate costs and environmental impacts systematically into the ecodesign of products. Based on the literature review and the objectives of this research, we propose an eco-efficiency model that integrates LCA with Life Cycle Costing (LCC). This model is consistent with defined cost-reduction and environmental-impact-reduction targets and allows a simple interpretation of the results while minimizing the effort of data collection. The model is applied to 3D printing as an alternative production process in the manufacturing of an aircraft blocker door. 3D printing is a new additive production technology that presents interesting opportunities for reducing costs and environmental impacts, particularly in the aeronautical domain. The results showed that 3D printing, when combined with improvement of the part topology, improves both the costs and the environmental impacts of the part's life cycle. Nevertheless, the results are sensitive to the productivity of the 3D printing machine, particularly on the cost side when printer productivity is reduced. This eco-efficiency model presents several opportunities for improvement. A more elaborate definition of the environmental-impact-reduction objectives would make it possible to steer design choices toward eco-efficiency considerations at a macro level. Moreover, the integration of the social dimension into the model constitutes an important step toward operationalizing the company's environmental and social responsibility commitments.

  9. Performance of a two-leaf light use efficiency model for mapping gross primary productivity against remotely sensed sun-induced chlorophyll fluorescence data.

    PubMed

    Zan, Mei; Zhou, Yanlian; Ju, Weimin; Zhang, Yongguang; Zhang, Leiming; Liu, Yibo

    2018-02-01

    Estimating terrestrial gross primary production is an important task when studying the carbon cycle. In this study, the ability of a two-leaf light use efficiency model to simulate regional gross primary production in China was validated using satellite Global Ozone Monitoring Experiment-2 sun-induced chlorophyll fluorescence data. The two-leaf light use efficiency model was used to estimate daily gross primary production in China's terrestrial ecosystems at 500-m resolution for the period from 2007 to 2014. Gross primary production simulated with the two-leaf light use efficiency model was resampled to a spatial resolution of 0.5° and then compared with sun-induced chlorophyll fluorescence. During the study period, sun-induced chlorophyll fluorescence and gross primary production simulated by the two-leaf light use efficiency model exhibited similar spatial and temporal patterns in China. The correlation coefficient between sun-induced chlorophyll fluorescence and monthly gross primary production simulated by the two-leaf light use efficiency model was significant (p<0.05, n=96) in 88.9% of vegetated areas in China (average value 0.78) and varied among vegetation types. The interannual variations in monthly sun-induced chlorophyll fluorescence and gross primary production simulated by the two-leaf light use efficiency model were similar in spring and autumn in most vegetated regions, but dissimilar in winter and summer. The spatial variability of sun-induced chlorophyll fluorescence and gross primary production simulated by the two-leaf light use efficiency model was similar in spring, summer, and autumn. The proportion of the spatial variation in annual gross primary production simulated by the two-leaf light use efficiency model that was explained by sun-induced chlorophyll fluorescence ranged from 0.76 (2011) to 0.80 (2013) during the study period. Overall, the two-leaf light use efficiency model was capable of capturing spatial and temporal variations in gross primary production in China. However, the model needs further improvement to better simulate gross primary production in summer. Copyright © 2017 Elsevier B.V. All rights reserved.
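
    To make the model structure concrete, the sketch below shows a generic two-leaf light use efficiency calculation: GPP is the sum of sunlit and shaded contributions, each the product of a maximum light use efficiency, absorbed PAR, and environmental stress scalars. The parameter values, the stress functions, and the canopy partitioning are illustrative placeholders rather than the calibrated model used in the paper.

```python
# Hedged sketch of a generic two-leaf light use efficiency GPP calculation.
# Parameter values and stress functions are illustrative placeholders.
EPS_SUNLIT, EPS_SHADED = 1.6, 3.4   # gC/MJ APAR (shaded leaves use light more efficiently)

def stress_scalar(t_air, vpd, t_opt=20.0, vpd_max=3.0):
    """Combined temperature and water (VPD) down-regulation, both in [0, 1]."""
    f_t = max(0.0, 1.0 - ((t_air - t_opt) / 20.0) ** 2)
    f_w = max(0.0, 1.0 - vpd / vpd_max)
    return f_t * f_w

def two_leaf_gpp(apar_sunlit, apar_shaded, t_air, vpd):
    """GPP (gC/m2/day) = sunlit + shaded contributions, each eps * APAR * stress."""
    s = stress_scalar(t_air, vpd)
    return (EPS_SUNLIT * apar_sunlit + EPS_SHADED * apar_shaded) * s

# One grid cell, one day: APAR split between sunlit and shaded leaves (MJ/m2/day)
print(round(two_leaf_gpp(apar_sunlit=5.0, apar_shaded=2.0, t_air=24.0, vpd=1.2), 2))
```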

  10. SModelS v1.1 user manual: Improving simplified model constraints with efficiency maps

    NASA Astrophysics Data System (ADS)

    Ambrogi, Federico; Kraml, Sabine; Kulkarni, Suchita; Laa, Ursula; Lessa, Andre; Magerl, Veronika; Sonneveld, Jory; Traub, Michael; Waltenberger, Wolfgang

    2018-06-01

    SModelS is an automated tool for the interpretation of simplified-model results from the LHC. It decomposes models of new physics obeying a Z2 symmetry into simplified-model components and compares these against a large database of experimental results. The first release of SModelS, v1.0, used only the cross section upper limit maps provided by the experimental collaborations. In this new release, v1.1, we extend the functionality of SModelS to efficiency maps. This increases the constraining power of the software, as efficiency maps allow contributions to the same signal region from different simplified models to be combined. Other new features of version 1.1 include likelihood and χ2 calculations, extended information on the topology coverage, an extended database of experimental results, as well as major speed upgrades for both the code and the database. We describe in detail the concepts and procedures used in SModelS v1.1, explaining in particular how upper limits and efficiency map results are handled in parallel. Detailed instructions for code usage are also provided.

  11. Rapid and simple half-quantitative measurement alpha-fetoprotein by poly(dimethylsiloxane) microfluidic chip immunochromatographic assay

    NASA Astrophysics Data System (ADS)

    Tong, Chao; Jin, Qinghui; Zhao, Jianlong

    2008-03-01

    In this article, a microfluidic method based on MEMS technology combined with the gold immunochromatographic assay (GICA) is developed and discussed. Compared with traditional GICA, this method provides a convenient, multi-channel, parallel, low-cost approach with comparable efficiency for alpha-fetoprotein (AFP) detection. First, we improved the adhesion between the SU-8 mold material and the silicon wafer, systematically optimized the fabrication of the SU-8 mold, and successfully fabricated PDMS microfluidic chips with good replication. Second, surface modification and antibody immobilization methods for the GICA on the PDMS microfluidic analysis chip were studied; after comparing the antibody immobilization efficiency of the different materials used in chip fabrication, PDMS was chosen and the GICA was successfully transferred to the PDMS microfluidic chip. To improve the reaction efficiency of the immobilized antibody, we studied the characteristics of the microfluidic flow without gas driving and the fluid velocity control in our design; we also designed a groove structure to strengthen antibody immobilization. Simulation of the structure shows a substantial improvement, and experiments confirm that the design is feasible.

  12. Design of an optical system for interrogation of implanted luminescent sensors and verification with silicone skin phantoms.

    PubMed

    Long, Ruiqi; McShane, Mike

    2012-09-01

    Implantable luminescent sensors are being developed for on-demand monitoring of blood glucose levels. For these sensors to be deployed in vivo, a matched external hardware system is needed. In this paper, we designed a compact, low-cost optical system with highly efficient photon delivery and collection using advanced optical modeling software. Compared to interrogation with a fiber bundle, the new system was predicted to improve interrogation efficiency by a factor of 200 for native sensors; an improvement of 37 times was predicted for sensors implanted at a depth of 1 mm in a skin-simulating phantom. A physical prototype was tested using silicone-based skin phantoms developed specifically to mimic the scattering and absorbing properties of human skin. The experimental evaluations revealed that the prototype device performed in agreement with expectations from simulation results, resulting in an overall improvement of over 2000 times. This efficient system enables use of a low-cost commercial spectrometer for recording sensor emission, which was not possible using only fiber optic delivery and collection, and will be used as a tool for in vivo studies with animal models or human subjects.

  13. Efficient utilization of greenhouse gases in a gas-to-liquids process combined with CO2/steam-mixed reforming and Fe-based Fischer-Tropsch synthesis.

    PubMed

    Zhang, Chundong; Jun, Ki-Won; Ha, Kyoung-Su; Lee, Yun-Jo; Kang, Seok Chang

    2014-07-15

    Two process models for carbon dioxide utilized gas-to-liquids (GTL) process (CUGP) mainly producing light olefins and Fischer-Tropsch (F-T) synthetic oils were developed by Aspen Plus software. Both models are mainly composed of a reforming unit, an F-T synthesis unit and a recycle unit, while the main difference is the feeding point of fresh CO2. In the reforming unit, CO2 reforming and steam reforming of methane are combined together to produce syngas in flexible composition. Meanwhile, CO2 hydrogenation is conducted via reverse water gas shift on the Fe-based catalysts in the F-T synthesis unit to produce hydrocarbons. After F-T synthesis, the unreacted syngas is recycled to F-T synthesis and reforming units to enhance process efficiency. From the simulation results, it was found that the carbon efficiencies of both CUGP options were successfully improved, and total CO2 emissions were significantly reduced, compared with the conventional GTL processes. The process efficiency was sensitive to recycle ratio and more recycle seemed to be beneficial for improving process efficiency and reducing CO2 emission. However, the process efficiency was rather insensitive to split ratio (recycle to reforming unit/total recycle), and the optimum split ratio was determined to be zero.

  14. Correction of mid-spatial-frequency errors by smoothing in spin motion for CCOS

    NASA Astrophysics Data System (ADS)

    Zhang, Yizhong; Wei, Chaoyang; Shao, Jianda; Xu, Xueke; Liu, Shijie; Hu, Chen; Zhang, Haichao; Gu, Haojin

    2015-08-01

    Smoothing is a convenient and efficient way to correct mid-spatial-frequency errors, and quantifying the smoothing effect allows the finishing of precision optics to be made more efficient. A series of experiments in spin motion was performed to study the smoothing of mid-spatial-frequency errors: some used the same pitch tool at different spinning speeds, and others used the same spinning speed with different tools. Shu's model was introduced and improved to describe and compare the smoothing efficiency at different spinning speeds and with different tools. The experimental results show that the mid-spatial-frequency errors on the initial surface were nearly smoothed out after the spin-motion process, and the number of smoothing passes required can be estimated with the model before processing. The method was also applied to smooth an aspherical component with an obvious mid-spatial-frequency error left by Magnetorheological Finishing. As a result, a high-precision aspheric optical component was obtained with PV = 0.1λ and RMS = 0.01λ.
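
    The abstract does not reproduce the smoothing model itself. As a purely illustrative sketch (not the improved Shu model of the paper), the number of smoothing passes needed to reach a target mid-spatial-frequency error level is often estimated from an exponential decay of the residual ripple per pass; the decay constant and error amplitudes below are hypothetical.

    ```python
    import math

    def passes_to_target(e0_rms, target_rms, k_pass):
        """Estimate smoothing passes needed if each pass removes a fixed
        fraction of the residual MSF error: e_n = e0 * exp(-k_pass * n).
        This is a generic stand-in, not the paper's model."""
        if target_rms >= e0_rms:
            return 0
        return math.ceil(math.log(e0_rms / target_rms) / k_pass)

    # Hypothetical numbers: 8 nm RMS initial ripple, 1 nm target, ~30% removed per pass
    k = -math.log(1.0 - 0.30)
    print(passes_to_target(8.0, 1.0, k))   # -> 6
    ```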

  15. Energy efficiency technologies in cement and steel industry

    NASA Astrophysics Data System (ADS)

    Zanoli, Silvia Maria; Cocchioni, Francesco; Pepe, Crescenzo

    2018-02-01

    In this paper, Advanced Process Control strategies aimed at achieving and improving energy efficiency in the cement and steel industries are proposed. A flexible and smart control structure consisting of several functional modules and blocks has been developed. The designed control strategy is based on Model Predictive Control techniques formulated on linear models. Two industrial control solutions have been developed, oriented to energy efficiency and process control improvement in cement-industry clinker rotary kilns (clinker production phase) and in steel-industry billet reheating furnaces. Tailored customization procedures for the design of ad hoc control systems have been executed, based on the specific needs and specifications of the analysed processes. The installation of the developed controllers on cement and steel plants produced significant benefits in terms of process control, allowing operation closer to the imposed operating limits. With respect to the previous control systems, based on local controllers and/or manual operation by plant operators, more profitable configurations of the crucial process variables have been achieved.
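
    The record does not give the controller formulation; the following is a minimal sketch of an unconstrained linear Model Predictive Control step of the general kind the entry describes. The first-order "process temperature" model, horizon, and weights are hypothetical, and the input/output constraints a real APC installation would enforce are omitted.

    ```python
    import numpy as np

    def lmpc_step(A, B, C, x0, r, N=20, q=1.0, rho=0.1):
        """One receding-horizon step of unconstrained linear MPC:
        predict y over N steps, minimise sum q*(y - r)^2 + rho*u^2,
        and return only the first input move."""
        nx, nu = B.shape
        ny = C.shape[0]
        # Stacked prediction matrices: Y = F x0 + G U
        F = np.vstack([C @ np.linalg.matrix_power(A, i + 1) for i in range(N)])
        G = np.zeros((N * ny, N * nu))
        for i in range(N):
            for j in range(i + 1):
                G[i*ny:(i+1)*ny, j*nu:(j+1)*nu] = C @ np.linalg.matrix_power(A, i - j) @ B
        Q = q * np.eye(N * ny)
        R = rho * np.eye(N * nu)
        ref = np.tile(r, N).reshape(-1, 1)
        U = np.linalg.solve(G.T @ Q @ G + R, G.T @ Q @ (ref - F @ x0))
        return U[:nu]

    # Hypothetical first-order process (illustrative numbers only)
    A = np.array([[0.95]]); B = np.array([[0.05]]); C = np.array([[1.0]])
    print(lmpc_step(A, B, C, x0=np.array([[800.0]]), r=np.array([850.0])))
    ```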

  16. Evaluating the Impacts of Health, Social Network and Capital on Craft Efficiency and Productivity: A Case Study of Construction Workers in China

    PubMed Central

    Yi, Wen; Miao, Mengyi; Zhang, Lei

    2018-01-01

    The construction industry has been recognized, for many years, as among those having a high likelihood of accidents, injuries and occupational illnesses. Such risks of construction workers can lead to low productivity and social problems. As a result, close attention should be paid to construction workers’ well-being in order to improve their efficiency and productivity. Meanwhile, the social support from a social network and capital (SNC) of construction workers has been considered as an effective approach to promote construction workers’ physical and mental health (P&M health), as well as their work efficiency and productivity. Based on a comprehensive literature review, a conceptual model, which aims to improve construction workers’ efficiency and productivity from the perspective of health and SNC, was proposed. A questionnaire survey was conducted to investigate the construction workers’ health, SNC and work efficiency and productivity in Nanjing, China. A structural equation model (SEM) was employed to test the three hypothetical relationships among construction workers’ P&M health, SNC and work efficiency and productivity. The results indicated that the direct impacts from construction workers’ P&M health on work efficiency and productivity were more significant than those from the SNC. In addition, the construction workers’ social capital and network can indirectly influence work efficiency and productivity by affecting the construction workers’ P&M health. Therefore, strategies for enhancing construction workers’ efficiency and productivity were proposed. Furthermore, many useful suggestions can be drawn from the research findings from the perspective of government. The identified indicators and relationships would contribute to construction work efficiency and productivity assessment and health management from the perspective of the construction workers. PMID:29462861

  17. Evaluating the Impacts of Health, Social Network and Capital on Craft Efficiency and Productivity: A Case Study of Construction Workers in China.

    PubMed

    Yuan, Jingfeng; Yi, Wen; Miao, Mengyi; Zhang, Lei

    2018-02-15

    The construction industry has been recognized, for many years, as among those having a high likelihood of accidents, injuries and occupational illnesses. Such risks of construction workers can lead to low productivity and social problems. As a result, close attention should be paid to construction workers' well-being in order to improve their efficiency and productivity. Meanwhile, the social support from a social network and capital (SNC) of construction workers has been considered as an effective approach to promote construction workers' physical and mental health (P&M health), as well as their work efficiency and productivity. Based on a comprehensive literature review, a conceptual model, which aims to improve construction workers' efficiency and productivity from the perspective of health and SNC, was proposed. A questionnaire survey was conducted to investigate the construction workers' health, SNC and work efficiency and productivity in Nanjing, China. A structural equation model (SEM) was employed to test the three hypothetical relationships among construction workers' P&M health, SNC and work efficiency and productivity. The results indicated that the direct impacts from construction workers' P&M health on work efficiency and productivity were more significant than those from the SNC. In addition, the construction workers' social capital and network can indirectly influence work efficiency and productivity by affecting the construction workers' P&M health. Therefore, strategies for enhancing construction workers' efficiency and productivity were proposed. Furthermore, many useful suggestions can be drawn from the research findings from the perspective of government. The identified indicators and relationships would contribute to construction work efficiency and productivity assessment and health management from the perspective of the construction workers.

  18. Green Energy Options for Consumer-Owned Business

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Co-opPlus of Western Massachusetts

    2006-05-01

    The goal of this project was to define, test, and prototype a replicable business model for consumer-owned cooperatives. The result is a replicable consumer-owned cooperative business model for the generation, interconnection, and distribution of renewable energy that incorporates energy conservation and efficiency improvements.

  19. Artificial intelligence-based computer modeling tools for controlling slag foaming in electric arc furnaces

    NASA Astrophysics Data System (ADS)

    Wilson, Eric Lee

    Due to increased competition in a world economy, steel companies are currently interested in developing techniques that will allow for the improvement of the steelmaking process, either by increasing output efficiency or by improving the quality of their product, or both. Slag foaming is one practice that has been shown to contribute to both these goals. However, slag foaming is highly dynamic and difficult to model or control. This dissertation describes an effort to use artificial intelligence-based tools (genetic algorithms, fuzzy logic, and neural networks) to both model and control the slag foaming process. Specifically, a neural network is trained and tested on slag foaming data provided by a steel plant. This neural network model is then controlled by a fuzzy logic controller, which in turn is optimized by a genetic algorithm. The tuned controller was then installed at a steel plant and, once given control, proved to be a more efficient slag foaming controller than the one previously used by the plant.

  20. Kinetic model of water disinfection using peracetic acid including synergistic effects.

    PubMed

    Flores, Marina J; Brandi, Rodolfo J; Cassano, Alberto E; Labas, Marisol D

    2016-01-01

    The disinfection efficiencies of a commercial mixture of peracetic acid against Escherichia coli were studied in laboratory scale experiments. The joint and separate action of two disinfectant agents, hydrogen peroxide and peracetic acid, were evaluated in order to observe synergistic effects. A kinetic model for each component of the mixture and for the commercial mixture was proposed. Through simple mathematical equations, the model describes different stages of attack by disinfectants during the inactivation process. Based on the experiments and the kinetic parameters obtained, it could be established that the efficiency of hydrogen peroxide was much lower than that of peracetic acid alone. However, the contribution of hydrogen peroxide was very important in the commercial mixture. It should be noted that this improvement occurred only after peracetic acid had initiated the attack on the cell. This synergistic effect was successfully explained by the proposed scheme and was verified by experimental results. Besides providing a clearer mechanistic understanding of water disinfection, such models may improve our ability to design reactors.
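
    The abstract does not list the model equations. For orientation only, a Chick-Watson-style first-order inactivation law with an added synergy term is sketched below; the functional form and rate constants are assumptions, not the authors' fitted model.

    ```python
    import numpy as np

    def log_inactivation(t_min, c_paa, c_h2o2, k_paa=0.8, k_h2o2=0.05, k_syn=0.15):
        """log10(N/N0) for E. coli exposed to a PAA/H2O2 mixture:
        independent first-order terms for each oxidant plus a synergy term.
        Units: min, mg/L; rate constants are illustrative placeholders."""
        rate = k_paa * c_paa + k_h2o2 * c_h2o2 + k_syn * np.sqrt(c_paa * c_h2o2)
        return -rate * t_min / np.log(10.0)

    # Hypothetical exposure: 2 mg/L PAA + 3 mg/L H2O2 for 10 minutes
    print(log_inactivation(10.0, 2.0, 3.0))
    ```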

  1. Development Of A Parallel Performance Model For The THOR Neutral Particle Transport Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yessayan, Raffi; Azmy, Yousry; Schunert, Sebastian

    The THOR neutral particle transport code enables simulation of complex geometries for various problems from reactor simulations to nuclear non-proliferation. It is undergoing a thorough V&V requiring computational efficiency. This has motivated various improvements including angular parallelization, outer iteration acceleration, and development of peripheral tools. For guiding future improvements to the code’s efficiency, better characterization of its parallel performance is useful. A parallel performance model (PPM) can be used to evaluate the benefits of modifications and to identify performance bottlenecks. Using INL’s Falcon HPC, the PPM development incorporates an evaluation of network communication behavior over heterogeneous links and a functional characterization of the per-cell/angle/group runtime of each major code component. After evaluating several possible sources of variability, this resulted in a communication model and a parallel portion model. The former’s accuracy is bounded by the variability of communication on Falcon while the latter has an error on the order of 1%.
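
    The record describes the PPM only qualitatively. The toy model below combines an Amdahl-style parallel-portion term with a latency-bandwidth communication term, which is the general shape of such models; none of the parameters are THOR or Falcon measurements.

    ```python
    def parallel_runtime(n_cells, n_angles, n_groups, t_cell, n_ranks,
                         msgs_per_iter, msg_bytes, latency, bandwidth,
                         serial_fraction=0.01):
        """Toy sweep-transport performance model: per-cell/angle/group compute
        cost with a non-parallelisable fraction, plus message-passing cost.
        All inputs are hypothetical."""
        work = n_cells * n_angles * n_groups * t_cell
        t_compute = serial_fraction * work + (1.0 - serial_fraction) * work / n_ranks
        t_comm = msgs_per_iter * (latency + msg_bytes / bandwidth)
        return t_compute + t_comm

    # Hypothetical strong-scaling sweep
    for p in (16, 64, 256, 1024):
        t = parallel_runtime(2e6, 80, 47, 2e-7, p,
                             msgs_per_iter=4 * p**0.5, msg_bytes=64e3,
                             latency=2e-6, bandwidth=5e9)
        print(p, round(t, 3))
    ```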

  2. Energy-aware Thread and Data Management in Heterogeneous Multi-core, Multi-memory Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Su, Chun-Yi

    By 2004, microprocessor design focused on multicore scaling—increasing the number of cores per die in each generation—as the primary strategy for improving performance. These multicore processors typically equip multiple memory subsystems to improve data throughput. In addition, these systems employ heterogeneous processors such as GPUs and heterogeneous memories like non-volatile memory to improve performance, capacity, and energy efficiency. With the increasing volume of hardware resources and system complexity caused by heterogeneity, future systems will require intelligent ways to manage hardware resources. Early research to improve performance and energy efficiency on heterogeneous, multi-core, multi-memory systems focused on tuning a single primitive or at best a few primitives in the systems. The key limitation of past efforts is their lack of a holistic approach to resource management that balances the tradeoff between performance and energy consumption. In addition, the shift from simple, homogeneous systems to these heterogeneous, multicore, multi-memory systems requires in-depth understanding of efficient resource management for scalable execution, including new models that capture the interchange between performance and energy, smarter resource management strategies, and novel low-level performance/energy tuning primitives and runtime systems. Tuning an application to control available resources efficiently has become a daunting challenge; managing resources in automation is still a dark art since the tradeoffs among programming, energy, and performance remain insufficiently understood. In this dissertation, I have developed theories, models, and resource management techniques to enable energy-efficient execution of parallel applications through thread and data management in these heterogeneous multi-core, multi-memory systems. I study the effect of dynamic concurrent throttling on the performance and energy of multi-core, non-uniform memory access (NUMA) systems. I use critical path analysis to quantify memory contention in the NUMA memory system and determine thread mappings. In addition, I implement a runtime system that combines concurrent throttling and a novel thread mapping algorithm to manage thread resources and improve energy efficient execution in multi-core, NUMA systems.

  3. Optimising the efficiency of pulsed diode pumped Yb:YAG laser amplifiers for ns pulse generation.

    PubMed

    Ertel, K; Banerjee, S; Mason, P D; Phillips, P J; Siebold, M; Hernandez-Gomez, C; Collier, J C

    2011-12-19

    We present a numerical model of a pulsed, diode-pumped Yb:YAG laser amplifier for the generation of high energy ns-pulses. This model is used to explore how optical-to-optical efficiency depends on factors such as pump duration, pump spectrum, pump intensity, doping concentration, and operating temperature. We put special emphasis on finding ways to achieve high efficiency within the practical limitations imposed by real-world laser systems, such as limited pump brightness and limited damage fluence. We show that a particularly advantageous way of improving efficiency within those constraints is operation at cryogenic temperature. Based on the numerical findings we present a concept for a scalable amplifier based on an end-pumped, cryogenic, gas-cooled multi-slab architecture.
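
    The paper's full model includes pump dynamics, spectra and temperature; as a much smaller illustration of the energy-extraction side, the standard single-pass Frantz-Nodvik relation is sketched below with assumed (not published) parameter values.

    ```python
    import numpy as np

    def frantz_nodvik(f_in, g0_l, f_sat):
        """Single-pass Frantz-Nodvik extraction: f_in and f_sat in J/cm^2,
        g0_l is the small-signal gain-length product. Returns the output
        fluence and the fraction of stored energy extracted."""
        f_out = f_sat * np.log(1.0 + np.exp(g0_l) * (np.exp(f_in / f_sat) - 1.0))
        eta_extraction = (f_out - f_in) / (g0_l * f_sat)
        return f_out, eta_extraction

    # Hypothetical Yb:YAG-like numbers: 1 J/cm^2 seed, g0L = 1.2, Fsat ~ 9.6 J/cm^2
    print(frantz_nodvik(1.0, 1.2, 9.6))
    ```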

  4. Spectral diffraction efficiency characterization of broadband diffractive optical elements.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Junoh; Cruz-Cabrera, Alvaro Augusto; Tanbakuchi, Anthony

    Diffractive optical elements, with their thin profile and unique dispersion properties, have been studied and utilized in a number of optical systems, often yielding smaller and lighter systems. Despite the interest in and study of diffractive elements, the application has been limited to narrow spectral bands. This is due to the etch depths, which are optimized for optical path differences of only a single wavelength, consequently leading to rapid decline in efficiency as the working wavelength shifts away from the design wavelength. Various broadband diffractive design methodologies have recently been developed that improve spectral diffraction efficiency and expand the working bandwidth of diffractive elements. We have developed diffraction efficiency models and utilized the models to design, fabricate, and test two such extended bandwidth diffractive designs.
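
    The entry does not reproduce the authors' efficiency models. The textbook scalar estimate below shows why a single-layer diffractive element loses efficiency away from its design wavelength, which is the limitation the broadband designs address; the refractive indices used are rough, hypothetical values.

    ```python
    import numpy as np

    def kinoform_efficiency(wavelength, design_wavelength, n, n_design, order=1):
        """Scalar diffraction efficiency of a single-layer kinoform etched for
        2*pi of phase at the design wavelength:
            alpha = lambda0*(n(lambda)-1) / (lambda*(n(lambda0)-1))
            eta_m = sinc^2(alpha - m)
        np.sinc(x) is the normalised sinc, sin(pi*x)/(pi*x)."""
        alpha = design_wavelength * (n - 1.0) / (wavelength * (n_design - 1.0))
        return np.sinc(alpha - order) ** 2

    # Hypothetical fused-silica-like element designed for 550 nm
    for lam, n_lam in [(450e-9, 1.465), (550e-9, 1.460), (650e-9, 1.456)]:
        print(int(lam * 1e9), "nm:", round(kinoform_efficiency(lam, 550e-9, n_lam, 1.460), 3))
    ```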

  5. Research on Evacuation Based on Social Force Model

    NASA Astrophysics Data System (ADS)

    Liu, W.; Deng, Z.; Li, W.; Lin, J.

    2017-09-01

    Crowded centers often cause casualties during evacuation operations: stampedes occur through hitting, squeezing and crushing driven by panic, and it is vitally important to alleviate such situations. As research on personnel evacuation has deepened, more and more researchers have studied individual behaviors and self-organization phenomena in the evacuation process. This work mainly includes: (1) enriching the social force model from different facets, such as vision, psychology and external forces, to describe evacuation more realistically; and (2) studying the causes and effects of self-organization phenomena. In this paper, we focus on the disordered motion that occurs in crowded indoor public spaces, especially narrow channels, safety exits and other critical passageways. We put forward an improved social force model to describe pedestrians' behavior, an orderly speed-stratification evacuation method to solve the disorder problem, and a shape-changed exit to alleviate congestion. The results of this work show an improvement in evacuation efficiency of 19.5%. Guiding pedestrians' direction so as to weaken the influence of social forces helps improve the efficiency of indoor emergency evacuation.
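
    The improved model of the paper is not given in the record; for reference, the basic Helbing-style social force on a single pedestrian (a driving term toward the desired velocity plus exponential repulsion from neighbours) can be sketched as follows, with typical literature parameter values rather than the authors' calibration.

    ```python
    import numpy as np

    def social_force(pos, vel, goal, others, v0=1.3, tau=0.5, A=2.0, B=0.3, radius=0.3):
        """Acceleration on one pedestrian: relaxation toward the desired
        speed v0 along the goal direction, plus pairwise exponential
        repulsion from other pedestrians. Parameters are illustrative."""
        e = (goal - pos) / np.linalg.norm(goal - pos)     # desired direction
        acc = (v0 * e - vel) / tau                        # driving term
        for p in others:                                  # repulsion terms
            d_vec = pos - p
            d = np.linalg.norm(d_vec)
            acc += A * np.exp((2.0 * radius - d) / B) * d_vec / d
        return acc

    # Hypothetical pedestrian heading toward an exit with two neighbours nearby
    print(social_force(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                       np.array([10.0, 0.0]),
                       [np.array([1.0, 0.5]), np.array([0.5, -0.6])]))
    ```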

  6. Quality management benchmarking: FDA compliance in pharmaceutical industry.

    PubMed

    Jochem, Roland; Landgraf, Katja

    2010-01-01

    By analyzing and comparing industry and business best practice, processes can be optimized and become more successful mainly because efficiency and competitiveness increase. This paper aims to focus on some examples. Case studies are used to show knowledge exchange in the pharmaceutical industry. Best practice solutions were identified in two companies using a benchmarking method and five-stage model. Despite large administrations, there is much potential regarding business process organization. This project makes it possible for participants to fully understand their business processes. The benchmarking method gives an opportunity to critically analyze value chains (a string of companies or players working together to satisfy market demands for a special product). Knowledge exchange is interesting for companies that like to be global players. Benchmarking supports information exchange and improves competitive ability between different enterprises. Findings suggest that the five-stage model improves efficiency and effectiveness. Furthermore, the model increases the chances for reaching targets. The method gives security to partners that did not have benchmarking experience. The study identifies new quality management procedures. Process management and especially benchmarking is shown to support pharmaceutical industry improvements.

  7. Development and Implementation of CFD-Informed Models for the Advanced Subchannel Code CTF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blyth, Taylor S.; Avramova, Maria

    The research described in this PhD thesis contributes to the development of efficient methods for utilization of high-fidelity models and codes to inform low-fidelity models and codes in the area of nuclear reactor core thermal-hydraulics. The objective is to increase the accuracy of predictions of quantities of interests using high-fidelity CFD models while preserving the efficiency of low-fidelity subchannel core calculations. An original methodology named Physics-based Approach for High-to-Low Model Information has been further developed and tested. The overall physical phenomena and corresponding localized effects, which are introduced by the presence of spacer grids in light water reactor (LWR) cores, are dissected into four corresponding basic building processes, and corresponding models are informed using high-fidelity CFD codes. These models are a spacer grid-directed cross-flow model, a grid-enhanced turbulent mixing model, a heat transfer enhancement model, and a spacer grid pressure loss model. The localized CFD-models are developed and tested using the CFD code STAR-CCM+, and the corresponding global model development and testing in sub-channel formulation is performed in the thermal-hydraulic subchannel code CTF. The improved CTF simulations utilize data-files derived from CFD STAR-CCM+ simulation results covering the spacer grid design desired for inclusion in the CTF calculation. The current implementation of these models is examined and possibilities for improvement and further development are suggested. The validation experimental database is extended by including the OECD/NRC PSBT benchmark data. The outcome is an enhanced accuracy of CTF predictions while preserving the computational efficiency of a low-fidelity subchannel code.

  8. Development and Implementation of CFD-Informed Models for the Advanced Subchannel Code CTF

    NASA Astrophysics Data System (ADS)

    Blyth, Taylor S.

    The research described in this PhD thesis contributes to the development of efficient methods for utilization of high-fidelity models and codes to inform low-fidelity models and codes in the area of nuclear reactor core thermal-hydraulics. The objective is to increase the accuracy of predictions of quantities of interests using high-fidelity CFD models while preserving the efficiency of low-fidelity subchannel core calculations. An original methodology named Physics-based Approach for High-to-Low Model Information has been further developed and tested. The overall physical phenomena and corresponding localized effects, which are introduced by the presence of spacer grids in light water reactor (LWR) cores, are dissected into four corresponding basic building processes, and corresponding models are informed using high-fidelity CFD codes. These models are a spacer grid-directed cross-flow model, a grid-enhanced turbulent mixing model, a heat transfer enhancement model, and a spacer grid pressure loss model. The localized CFD-models are developed and tested using the CFD code STAR-CCM+, and the corresponding global model development and testing in sub-channel formulation is performed in the thermal-hydraulic subchannel code CTF. The improved CTF simulations utilize data-files derived from CFD STAR-CCM+ simulation results covering the spacer grid design desired for inclusion in the CTF calculation. The current implementation of these models is examined and possibilities for improvement and further development are suggested. The validation experimental database is extended by including the OECD/NRC PSBT benchmark data. The outcome is an enhanced accuracy of CTF predictions while preserving the computational efficiency of a low-fidelity subchannel code.

  9. Shifting and power sharing control of a novel dual input clutchless transmission for electric vehicles

    NASA Astrophysics Data System (ADS)

    Liang, Jiejunyi; Yang, Haitao; Wu, Jinglai; Zhang, Nong; Walker, Paul D.

    2018-05-01

    To improve the overall efficiency of electric vehicles and guarantee driving comfort and vehicle drivability, under the concept of simplifying mechanism complexity and minimizing manufacturing cost, this paper proposes a novel clutchless power-shifting transmission system with a shifting control strategy and a power sharing control strategy. The proposed shifting strategy takes advantage of the transmission architecture to achieve power-on shifting, which greatly improves driving comfort compared with conventional automated manual transmissions, using a bump-function-based shifting control method. To maximize the overall efficiency, a real-time power sharing control strategy is designed to solve the power distribution problem between the two motors. A detailed mathematical model is built to verify the effectiveness of the proposed methods. The results demonstrate that the proposed strategies considerably improve the overall efficiency while achieving non-interrupted power-on shifting and keeping vehicle jerk during shifting below an acceptable threshold.
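
    The record does not detail the real-time power sharing law. A minimal sketch of the underlying idea, choosing the torque split between the two motors that minimises total electrical input power given their efficiency maps, is shown below; the efficiency maps are invented placeholders.

    ```python
    import numpy as np

    def best_torque_split(t_demand, speed, eff_m1, eff_m2, n_points=101):
        """Grid-search the split ratio s in [0, 1] so that motor 1 supplies
        s*t_demand and motor 2 the rest, minimising total electrical power.
        eff_m1/eff_m2 map (torque, speed) -> efficiency."""
        def p_elec(s):
            t1, t2 = s * t_demand, (1.0 - s) * t_demand
            return t1 * speed / eff_m1(t1, speed) + t2 * speed / eff_m2(t2, speed)
        splits = np.linspace(0.0, 1.0, n_points)
        costs = [p_elec(s) for s in splits]
        i = int(np.argmin(costs))
        return splits[i], costs[i]

    # Hypothetical efficiency maps (motor 2 slightly better at high torque)
    eff1 = lambda t, w: 0.90 - 0.0008 * t
    eff2 = lambda t, w: 0.88 + 0.0002 * t
    print(best_torque_split(100.0, 300.0, eff1, eff2))
    ```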

  10. Polymer optical fiber compound parabolic concentrator tip for enhanced coupling efficiency for fluorescence based glucose sensors

    PubMed Central

    Hassan, Hafeez Ul; Nielsen, Kristian; Aasmul, Soren; Bang, Ole

    2015-01-01

    We demonstrate that the light excitation and capturing efficiency of fluorescence based fiber-optical sensors can be significantly increased by using a CPC (Compound Parabolic Concentrator) tip instead of the standard plane-cut tip. We use Zemax modelling to find the optimum CPC tip profile and fiber length of a polymer optical fiber diabetes sensor for continuous monitoring of glucose levels. We experimentally verify the improved performance of the CPC tipped sensor and the predicted production tolerances. Due to physical size requirements when the sensor has to be inserted into the body a non-optimal fiber length of 35 mm is chosen. For this length an average improvement in efficiency of a factor of 1.7 is experimentally demonstrated and critically compared to the predicted ideal factor of 3 in terms of parameters that should be improved through production optimization. PMID:26713213

  11. Study on convection improvement of standard vacuum tube

    NASA Astrophysics Data System (ADS)

    He, J. H.; Du, W. P.; Qi, R. R.; He, J. X.

    2017-11-01

    For the standard all-glass vacuum tube collector, enhancing axial natural convection inside the vacuum tube can improve its thermal efficiency. Based on a study of the standard all-glass vacuum tube, three kinds of guide plates that inhibit radial convection and increase axial natural convection were designed, and a theoretical model was established. Experiments were carried out on vacuum tubes fitted with the three types of baffles and on unmodified standard vacuum tubes. The results show that the T-type guide plate is better than the Y-type at restraining radial convection and increasing axial convection, the Y-type is better than the flat-plate type, and all guide plates outperform the unmodified tube; the thermal efficiency of the modified tube was 2.6% higher than that of the unmodified standard vacuum tube. The efficiency of the system in the experiment was increased by 3.1%.

  12. Polymer optical fiber compound parabolic concentrator tip for enhanced coupling efficiency for fluorescence based glucose sensors.

    PubMed

    Hassan, Hafeez Ul; Nielsen, Kristian; Aasmul, Soren; Bang, Ole

    2015-12-01

    We demonstrate that the light excitation and capturing efficiency of fluorescence based fiber-optical sensors can be significantly increased by using a CPC (Compound Parabolic Concentrator) tip instead of the standard plane-cut tip. We use Zemax modelling to find the optimum CPC tip profile and fiber length of a polymer optical fiber diabetes sensor for continuous monitoring of glucose levels. We experimentally verify the improved performance of the CPC tipped sensor and the predicted production tolerances. Due to physical size requirements when the sensor has to be inserted into the body a non-optimal fiber length of 35 mm is chosen. For this length an average improvement in efficiency of a factor of 1.7 is experimentally demonstrated and critically compared to the predicted ideal factor of 3 in terms of parameters that should be improved through production optimization.

  13. Effect of year-to-year variability of leaf area index on variable infiltration capacity model performance and simulation of streamflow during drought

    NASA Astrophysics Data System (ADS)

    Tesemma, Z. K.; Wei, Y.; Peel, M. C.; Western, A. W.

    2014-09-01

    This study assessed the effect of using observed monthly leaf area index (LAI) on hydrologic model performance and the simulation of streamflow during drought using the variable infiltration capacity (VIC) hydrological model in the Goulburn-Broken catchment of Australia, which has heterogeneous vegetation, soil and climate zones. VIC was calibrated with both observed monthly LAI and long-term mean monthly LAI, which were derived from the Global Land Surface Satellite (GLASS) observed monthly LAI dataset covering the period from 1982 to 2012. The model performance under wet and dry climates for the two different LAI inputs was assessed using three criteria, the classical Nash-Sutcliffe efficiency, the logarithm transformed flow Nash-Sutcliffe efficiency and the percentage bias. Finally, the percentage deviation of the simulated monthly streamflow using the observed monthly LAI from simulated streamflow using long-term mean monthly LAI was computed. The VIC model predicted monthly streamflow in the selected sub-catchments with model efficiencies ranging from 61.5 to 95.9% during calibration (1982-1997) and 59 to 92.4% during validation (1998-2012). Our results suggest systematic improvements from 4 to 25% in the Nash-Sutcliffe efficiency in pasture dominated catchments when the VIC model was calibrated with the observed monthly LAI instead of the long-term mean monthly LAI. There was limited systematic improvement in tree dominated catchments. The results also suggest that the model overestimation or underestimation of streamflow during wet and dry periods can be reduced to some extent by including the year-to-year variability of LAI in the model, thus reflecting the responses of vegetation to fluctuations in climate and other factors. Hence, the year-to-year variability in LAI should not be neglected; rather it should be included in model calibration as well as simulation of monthly water balance.
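
    The three performance criteria named in the abstract are standard and easy to compute; the helper functions below implement them on made-up monthly flows (the data are illustrative only).

    ```python
    import numpy as np

    def nse(obs, sim):
        """Classical Nash-Sutcliffe efficiency."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def log_nse(obs, sim, eps=0.01):
        """NSE of log-transformed flows (emphasises low-flow periods)."""
        return nse(np.log(np.asarray(obs, float) + eps), np.log(np.asarray(sim, float) + eps))

    def pbias(obs, sim):
        """Percentage bias; with this sign convention, positive values mean
        the simulation underestimates the observed volume."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 100.0 * np.sum(obs - sim) / np.sum(obs)

    # Hypothetical monthly flows (mm)
    obs = [12, 30, 55, 40, 22, 10]
    sim = [10, 33, 50, 44, 20, 12]
    print(round(nse(obs, sim), 3), round(log_nse(obs, sim), 3), round(pbias(obs, sim), 2))
    ```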

  14. The effect of year-to-year variability of leaf area index on Variable Infiltration Capacity model performance and simulation of runoff

    NASA Astrophysics Data System (ADS)

    Tesemma, Z. K.; Wei, Y.; Peel, M. C.; Western, A. W.

    2015-09-01

    This study assessed the effect of using observed monthly leaf area index (LAI) on hydrological model performance and the simulation of runoff using the Variable Infiltration Capacity (VIC) hydrological model in the Goulburn-Broken catchment of Australia, which has heterogeneous vegetation, soil and climate zones. VIC was calibrated with both observed monthly LAI and long-term mean monthly LAI, which were derived from the Global Land Surface Satellite (GLASS) leaf area index dataset covering the period from 1982 to 2012. The model performance under wet and dry climates for the two different LAI inputs was assessed using three criteria, the classical Nash-Sutcliffe efficiency, the logarithm transformed flow Nash-Sutcliffe efficiency and the percentage bias. Finally, the deviation of the simulated monthly runoff using the observed monthly LAI from simulated runoff using long-term mean monthly LAI was computed. The VIC model predicted monthly runoff in the selected sub-catchments with model efficiencies ranging from 61.5% to 95.9% during calibration (1982-1997) and 59% to 92.4% during validation (1998-2012). Our results suggest systematic improvements, from 4% to 25% in Nash-Sutcliffe efficiency, in sparsely forested sub-catchments when the VIC model was calibrated with observed monthly LAI instead of long-term mean monthly LAI. There was limited systematic improvement in tree dominated sub-catchments. The results also suggest that the model overestimation or underestimation of runoff during wet and dry periods can be reduced to 25 mm and 35 mm respectively by including the year-to-year variability of LAI in the model, thus reflecting the responses of vegetation to fluctuations in climate and other factors. Hence, the year-to-year variability in LAI should not be neglected; rather it should be included in model calibration as well as simulation of monthly water balance.

  15. Transmission and Distribution Efficiency Improvement Research and Development Survey.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brooks, C.L.; Westinghouse Electric Corporation. Advanced Systems Technology.

    Purpose of this study was to identify and quantify those technologies for improving transmission and distribution (T and D) system efficiency that could provide the greatest benefits for utility customers in the Pacific Northwest. Improving the efficiency of transmission and distribution systems offers a potential source of conservation within the utility sector. An extensive review of this field resulted in a list of 49 state-of-the-art technologies and 39 future technologies. Of these, 15 from the former list and 7 from the latter were chosen as the most promising and then submitted to an evaluative test - a modeled sample system for Benton County PUD, a utility with characteristics typical of a BPA customer system. Reducing end-use voltage on secondary distribution systems to decrease the energy consumption of electrical users when possible, called ''Conservation Voltage Reduction,'' was found to be the most cost effective state-of-the-art technology. Voltampere reactive (var) optimization is a similarly cost effective alternative. The most significant reduction in losses on the transmission and distribution system would be achieved through the replacement of standard transformers with high efficiency transformers, such as amorphous steel transformers. Of the future technologies assessed, the ''Distribution Static VAR Generator'' appears to have the greatest potential for technological breakthroughs and, therefore in time, commercialization. ''Improved Dielectric Materials,'' with a relatively low cost and high potential for efficiency improvement, warrant R and D consideration. ''Extruded Three-Conductor Cable'' and ''Six- and Twelve-Phase Transmission'' programs provide only limited gains in efficiency and applicability and are therefore the least cost effective.

  16. Processing mechanics of alternate twist ply (ATP) yarn technology

    NASA Astrophysics Data System (ADS)

    Elkhamy, Donia Said

    Ply yarns are important in many textile manufacturing processes and various applications. The primary process used for producing ply yarns is cabling. The speed of cabling is limited to about 35 m/min. With the world's increasing demand for ply yarn, cabling is incompatible with today's demand-activated manufacturing strategies. The Alternate Twist Ply (ATP) yarn technology is a relatively new process for producing ply yarns with improved productivity and flexibility. This technology involves self plying of twisted singles yarn to produce ply yarn. The ATP process can run more than ten times faster than cabling. To implement the ATP process for producing ply yarns, two major quality issues must be addressed: uniform Twist Profile and yarn Twist Efficiency. The goal of this thesis is to address these issues through process modeling based on understanding the physics and processing mechanics of the ATP yarn system. In our study we determine the main parameters that control the yarn twist profile. Process modeling of the yarn twist across different process zones was done. A computational model was designed to predict the process parameters required to achieve a square wave twist profile. Twist efficiency, a measure of yarn torsional stability and bulk, is determined by the ratio of ply yarn twist to singles yarn twist. Response Surface Methodology was used to develop the processing window that can reproduce ATP yarns with high twist efficiency. Equilibrium conditions of tensions and torques acting on the yarns at the self ply point were analyzed to determine the pathway for achieving higher twist efficiency. Mechanistic modeling relating equilibrium conditions to the twist efficiency was developed. A static tester was designed to zoom into the self ply zone of the ATP yarn. A computer-controlled prototype ATP machine was constructed and confirmed the mechanistic model results. Optimum parameters achieving maximum twist efficiency were determined in this study. The successful results of this work have led to the filing of a US patent disclosing the method for producing ATP yarns with high yarn twist efficiency using a high convergence angle at the self ply point together with applying ply torque.

  17. Targeting glioblastoma-derived pericytes improves chemotherapeutic outcome.

    PubMed

    Guerra, Daniel A P; Paiva, Ana E; Sena, Isadora F G; Azevedo, Patrick O; Silva, Walison N; Mintz, Akiva; Birbrair, Alexander

    2018-05-14

    Glioblastoma is the most common malignant brain cancer in adults, with poor prognosis. The blood-brain barrier limits the arrival of several promising anti-glioblastoma drugs, and restricts the design of efficient therapies. Recently, by using state-of-the-art technologies, including thymidine kinase targeting system in combination with glioblastoma xenograft mouse models, it was revealed that targeting glioblastoma-derived pericytes improves chemotherapy efficiency. Strikingly, ibrutinib treatment enhances chemotherapeutic effectiveness, by targeting pericytes, improving blood-brain barrier permeability, and prolonging survival. This study identifies glioblastoma-derived pericyte as a novel target in the brain tumor microenvironment during carcinogenesis. Here, we summarize and evaluate recent advances in the understanding of pericyte's role in the glioblastoma microenvironment.

  18. Quantifying variability in removal efficiencies of chemicals in activated sludge wastewater treatment plants - a meta-analytical approach.

    PubMed

    Douziech, Mélanie; Conesa, Irene Rosique; Benítez-López, Ana; Franco, Antonio; Huijbregts, Mark; van Zelm, Rosalie

    2018-01-24

    Large variations in removal efficiencies (REs) of chemicals have been reported for monitoring studies of activated sludge wastewater treatment plants (WWTPs). In this work, we conducted a meta-analysis on REs (1539 data points) for a set of 209 chemicals consisting of fragrances, surfactants, and pharmaceuticals in order to assess the drivers of the variability relating to inherent properties of the chemicals and operational parameters of activated sludge WWTPs. For a reduced dataset (n = 542), we developed a mixed-effect model (meta-regression) to explore the observed variability in REs for the chemicals using three chemical specific factors and four WWTP-related parameters. The overall removal efficiency of the set of chemicals was 82.1% (95% CI 75.2-87.1%, N = 1539). Our model accounted for 17% of the total variability in REs, while the process-based model SimpleTreat did not perform better than the average of the measured REs. We identified that, after accounting for other factors potentially influencing RE, readily biodegradable compounds were better removed than non-readily biodegradable ones. Further, we showed that REs increased with increasing sludge retention times (SRTs), especially for non-readily biodegradable compounds. Finally, our model highlighted a decrease in RE with increasing KOC. The counterintuitive relationship to KOC stresses the need for a better understanding of electrochemical interactions influencing the RE of ionisable chemicals. In addition, we highlighted the need to improve the modelling of chemicals that undergo deconjugation when predicting RE. Our meta-analysis represents a first step in better explaining the observed variability in measured REs of chemicals. It can be of particular help to prioritize the improvements required in existing process-based models to predict removal efficiencies of chemicals in WWTPs.
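
    One way to reproduce the general shape of such a mixed-effect meta-regression (fixed effects for chemical and plant covariates, a random intercept per study) is sketched below with statsmodels; the column names and values are invented, not the study's dataset.

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical removal-efficiency records (the real analysis used 542 records)
    df = pd.DataFrame({
        "re":           [85, 40, 95, 60, 78, 99, 55, 90, 70, 88, 45, 93],
        "log_koc":      [2.1, 3.5, 1.8, 3.0, 2.4, 1.5, 3.2, 2.0, 2.8, 1.9, 3.6, 1.7],
        "srt_days":     [8, 8, 20, 12, 15, 25, 6, 18, 10, 22, 7, 16],
        "ready_biodeg": [1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1],
        "study_id":     ["a", "a", "a", "b", "b", "b", "c", "c", "c", "d", "d", "d"],
    })

    # Random intercept per monitoring study absorbs between-study heterogeneity
    model = smf.mixedlm("re ~ log_koc + srt_days + ready_biodeg", df, groups=df["study_id"])
    print(model.fit().summary())
    ```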

  19. An Efficient Neural-Network-Based Microseismic Monitoring Platform for Hydraulic Fracture on an Edge Computing Architecture.

    PubMed

    Zhang, Xiaopu; Lin, Jun; Chen, Zubin; Sun, Feng; Zhu, Xi; Fang, Gengfa

    2018-06-05

    Microseismic monitoring is one of the most critical technologies for hydraulic fracturing in oil and gas production. To detect events in an accurate and efficient way, there are two major challenges. One challenge is how to achieve high accuracy due to a poor signal-to-noise ratio (SNR). The other one is concerned with real-time data transmission. Taking these challenges into consideration, an edge-computing-based platform, namely Edge-to-Center LearnReduce, is presented in this work. The platform consists of a data center with many edge components. At the data center, a neural network model combined with convolutional neural network (CNN) and long short-term memory (LSTM) is designed and this model is trained by using previously obtained data. Once the model is fully trained, it is sent to edge components for events detection and data reduction. At each edge component, a probabilistic inference is added to the neural network model to improve its accuracy. Finally, the reduced data is delivered to the data center. Based on experiment results, a high detection accuracy (over 96%) with less transmitted data (about 90%) was achieved by using the proposed approach on a microseismic monitoring system. These results show that the platform can simultaneously improve the accuracy and efficiency of microseismic monitoring.
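
    The paper's network architecture is only outlined in the abstract; a compact PyTorch sketch of a CNN+LSTM detector of the same general shape (1-D convolutions feeding an LSTM and a binary event/noise head) is given below, with illustrative layer sizes.

    ```python
    import torch
    import torch.nn as nn

    class CnnLstmDetector(nn.Module):
        """1-D CNN extracts local waveform features, an LSTM models their
        temporal context, and a linear head outputs an event/noise logit."""
        def __init__(self, n_filters=16, hidden=32):
            super().__init__()
            self.cnn = nn.Sequential(
                nn.Conv1d(1, n_filters, kernel_size=7, padding=3), nn.ReLU(),
                nn.MaxPool1d(4),
                nn.Conv1d(n_filters, n_filters, kernel_size=5, padding=2), nn.ReLU(),
                nn.MaxPool1d(4),
            )
            self.lstm = nn.LSTM(n_filters, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):                     # x: (batch, 1, samples)
            feats = self.cnn(x).transpose(1, 2)   # (batch, time, n_filters)
            out, _ = self.lstm(feats)
            return self.head(out[:, -1, :])       # one logit per trace

    # Hypothetical batch of 8 single-channel traces, 4096 samples each
    print(CnnLstmDetector()(torch.randn(8, 1, 4096)).shape)   # torch.Size([8, 1])
    ```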

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoeschele, Marc; Weitzel, Elizabeth; Backman, Christine

    This project completed a modeling evaluation of a hybrid gas water heater that combines a reduced capacity tankless unit with a downsized storage tank. This product would meet a significant market need by providing a higher efficiency gas water heater solution for retrofit applications while maintaining compatibility with the 1/2 inch gas lines and standard B vents found in most homes. The TRNSYS simulation tool was used to model a base case 0.60 EF atmospheric gas storage water heater, a 0.82 EF non-condensing gas tankless water heater, an existing (high capacity) hybrid unit on the market, and an alternative hybrid unit with lower storage volume and reduced gas input requirements. Simulations were completed under a 'peak day' sizing scenario with 183 gpd hot water loads in a Minnesota winter climate case. Full-year simulations were then completed in three climates (ranging from Phoenix to Minneapolis) for three hot water load scenarios (36, 57, and 96 gpd). Model projections indicate that the alternative hybrid offers an average 4.5% efficiency improvement relative to the 0.60 EF gas storage unit across all scenarios modeled. The alternative hybrid water heater evaluated does show promise, but the current low cost of natural gas across much of the country and the relatively small incremental efficiency improvement poses challenges in initially building a market demand for the product.

  1. Evaluation of flow hydrodynamics in a pilot-scale dissolved air flotation tank: a comparison between CFD and experimental measurements.

    PubMed

    Lakghomi, B; Lawryshyn, Y; Hofmann, R

    2015-01-01

    Computational fluid dynamics (CFD) models of dissolved air flotation (DAF) have shown formation of stratified flow (back and forth horizontal flow layers at the top of the separation zone) and its impact on improved DAF efficiency. However, there has been a lack of experimental validation of CFD predictions, especially in the presence of solid particles. In this work, for the first time, both two-phase (air-water) and three-phase (air-water-solid particles) CFD models were evaluated at pilot scale using measurements of residence time distribution, bubble layer position and bubble-particle contact efficiency. The pilot-scale results confirmed the accuracy of the CFD model for both two-phase and three-phase flows, but showed that the accuracy of the three-phase CFD model would partly depend on the estimation of bubble-particle attachment efficiency.

  2. Simulation, measurement, and emulation of photovoltaic modules using high frequency and high power density power electronic circuits

    NASA Astrophysics Data System (ADS)

    Erkaya, Yunus

    The number of solar photovoltaic (PV) installations is growing exponentially, and to improve the energy yield and the efficiency of PV systems, it is necessary to have correct methods for simulation, measurement, and emulation. PV systems can be simulated using PV models for different configurations and technologies of PV modules. Additionally, different environmental conditions of solar irradiance, temperature, and partial shading can be incorporated in the model to accurately simulate PV systems for any given condition. The electrical measurement of PV systems both prior to and after making electrical connections is important for attaining high efficiency and reliability. Measuring PV modules using a current-voltage (I-V) curve tracer allows the installer to know whether the PV modules are 100% operational. The installed modules can be properly matched to maximize performance. Once installed, the whole system needs to be characterized similarly to detect mismatches, partial shading, or installation damage before energizing the system. This will prevent any reliability issues from the onset and ensure the system efficiency will remain high. A capacitive load is implemented in making I-V curve measurements with the goal of minimizing the curve tracer volume and cost. Additionally, the increase of measurement resolution and accuracy is possible via the use of accurate voltage and current measurement methods and accurate PV models to translate the curves to standard testing conditions. A move from mechanical relays to solid-state MOSFETs improved system reliability while significantly reducing device volume and costs. Finally, emulating PV modules is necessary for testing electrical components of a PV system. PV emulation simplifies and standardizes the tests allowing for different irradiance, temperature and partial shading levels to be easily tested. Proper emulation of PV modules requires an accurate and mathematically simple PV model that incorporates all known system variables so that any PV module can be emulated as the design requires. A non-synchronous buck converter is proposed for the emulation of a single, high-power PV module using traditional silicon devices. With the proof-of-concept working and improvements in efficiency, power density and steady-state errors made, dynamic tests were performed using an inverter connected to the PV emulator. In order to improve the dynamic characteristics, a synchronous buck converter topology is proposed along with the use of advanced GaNFET devices which resulted in very high power efficiency and improved dynamic response characteristics when emulating PV modules.
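
    As a small, self-contained illustration of the kind of PV model an emulator reproduces, the single-diode equation can be solved for module current at a given terminal voltage as below; every parameter value is a generic placeholder for a 60-cell module, not data from this work.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def pv_current(v, i_ph=8.0, i_0=6e-8, n=1.3, n_s=60, r_s=0.3, r_sh=300.0, t_cell=298.15):
        """Module current from the implicit single-diode model
        I = Iph - I0*(exp((V + I*Rs)/Vt) - 1) - (V + I*Rs)/Rsh,
        solved with a bracketing root finder."""
        vt = n * n_s * 1.380649e-23 * t_cell / 1.602176634e-19   # string thermal voltage
        def f(i):
            return i_ph - i_0 * (np.exp((v + i * r_s) / vt) - 1.0) - (v + i * r_s) / r_sh - i
        return brentq(f, -i_ph, 2.0 * i_ph)

    # Hypothetical I-V sweep and maximum power point
    vs = np.linspace(0.0, 38.0, 200)
    ps = [v * pv_current(v) for v in vs]
    print(round(max(ps), 1), "W near", round(vs[int(np.argmax(ps))], 1), "V")
    ```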

  3. Improving building energy efficiency in India: State-level analysis of building energy efficiency policies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Sha; Tan, Qing; Evans, Meredydd

    India is expected to add 40 billion m2 of new buildings by 2050. Buildings are responsible for one third of India’s total energy consumption today and building energy use is expected to continue growing driven by rapid income and population growth. The implementation of the Energy Conservation Building Code (ECBC) is one of the measures to improve building energy efficiency. Using the Global Change Assessment Model, this study assesses growth in the buildings sector and impacts of building energy policies in Gujarat, which would help the state adopt ECBC and expand building energy efficiency programs. Without building energy policies, building energy use in Gujarat would grow by 15 times in commercial buildings and 4 times in urban residential buildings between 2010 and 2050. ECBC improves energy efficiency in commercial buildings and could reduce building electricity use in Gujarat by 20% in 2050, compared to the no-policy scenario. Having energy codes for both commercial and residential buildings could result in an additional 10% savings in electricity use. To achieve these intended savings, it is critical to build capacity and institutions for robust code implementation.

  4. ENVIRONMENTAL TECHNOLOGY VERIFICATION REPORT LASER TOUCH AND TECHNOLOGIES, LLC LASER TOUCH MODEL LT-B512

    EPA Science Inventory

    The Environmental Technology Verification report discusses the technology and performance of Laser Touch model LT-B512 targeting device manufactured by Laser Touch and Technologies, LLC, for manual spray painting operations. The relative transfer efficiency (TE) improved an avera...

  5. Can Community Colleges Afford to Improve Completion? Measuring the Costs and Efficiency Effects of College Reforms. CCRC Working Paper No. 55

    ERIC Educational Resources Information Center

    Belfield, Clive; Crosta, Peter; Jenkins, Davis

    2013-01-01

    Community colleges are under pressure to increase completion rates and efficiency despite limited evidence of the economic consequences of different reform strategies. We introduce an economic model of student course pathways linked to college expenditures and revenues. Using detailed data from a single college, we calculate baseline efficiency…

  6. Fuel Economy Regulations and Efficiency Technology Improvements in U.S. Cars Since 1975

    NASA Astrophysics Data System (ADS)

    MacKenzie, Donald Warren

    Light-duty vehicles account for 43% of petroleum consumption and 23% of greenhouse gas emissions in the United States. Corporate Average Fuel Economy (CAFE) standards are the primary policy tool addressing petroleum consumption in the U.S., and are set to tighten substantially through 2025. In this dissertation, I address several interconnected questions on the technical, policy, and market aspects of fuel consumption reduction. I begin by quantifying historic improvements in fuel efficiency technologies since the 1970s. First, I develop a linear regression model of acceleration performance conditional on power, weight, powertrain, and body characteristics, showing that vehicles today accelerate 20-30% faster than vehicles with similar specifications in the 1970s. Second, I find that growing use of alternative materials and a switch to more weight-efficient vehicle architectures since 1975 have cut the weight of today's new cars by approximately 790 kg (46%). Integrating these results with model-level specification data, I estimate that the average fuel economy of new cars could have tripled from 1975-2009, if not for changes in performance, size, and features over this period. The pace of improvements was not uniform, averaging 5% annually from 1975-1990, but only 2% annually since then. I conclude that the 2025 standards can be met through improvements in efficiency technology, if we can return to 1980s rates of improvement, and growth in acceleration performance and feature content is curtailed. I next test the hypotheses that higher fuel prices and more stringent CAFE standards cause automotive firms to deploy efficiency technologies more rapidly. I find some evidence that higher fuel prices cause more rapid changes in technology, but little to no evidence that tighter CAFE standards increase rates of technology change. I conclude that standards alone, without continued high gasoline prices, may not drive technology improvements at the rates needed to meet the 2025 CAFE standards. The dissertation closes by examining the factors determining industry support for nationwide fuel economy regulations. (Copies available exclusively from MIT Libraries, libraries.mit.edu/docs - docs@mit.edu)

  7. Geometric and material determinants of patterning efficiency by dielectrophoresis.

    PubMed

    Albrecht, Dirk R; Sah, Robert L; Bhatia, Sangeeta N

    2004-10-01

    Dielectrophoretic (DEP) forces have been used extensively to manipulate, separate, and localize biological cells and bioparticles via high-gradient electric fields. However, minimization of DEP exposure time is desirable, because of possible untoward effects on cell behavior. Toward this goal, this article investigates the geometric and material determinants of particle patterning kinetics and efficiency. In particular, the time required to achieve a steady-state pattern is theoretically modeled and experimentally validated for a planar, interdigitated bar electrode array energized in a standing-wave configuration. This measure of patterning efficiency is calculated from an improved Fourier series solution of DEP force, in which realistic boundary conditions and a finite chamber height are imposed to reflect typical microfluidic applications. The chamber height, electrode spacing, and fluid viscosity and conductivity are parameters that profoundly affect patterning efficiency, and optimization can reduce electric field exposure by orders of magnitude. Modeling strategies are generalizable to arbitrary electrode design as well as to conditions where DEP force may not act alone to cause particle motion. This improved understanding of DEP patterning kinetics provides a framework for new advances in the development of DEP-based biological devices and assays with minimal perturbation of cell behavior. Copyright 2004 Biophysical Society

  8. Effects of recent energy system changes on CO2 projections for the United States.

    PubMed

    Lenox, Carol S; Loughlin, Daniel H

    2017-09-21

    Recent projections of future United States carbon dioxide (CO2) emissions are considerably lower than projections made just a decade ago. A myriad of factors have contributed to lower forecasts, including reductions in end-use energy service demands, improvements in energy efficiency, and technological innovations. Policies that have encouraged these changes include renewable portfolio standards, corporate vehicle efficiency standards, smart growth initiatives, revisions to building codes, and air and climate regulations. Understanding the effects of these and other factors can be advantageous as society evaluates opportunities for achieving additional CO2 reductions. Energy system models provide a means to develop such insights. In this analysis, the MARKet ALlocation (MARKAL) model was applied to estimate the relative effects of various energy system changes that have happened since the year 2005 on CO2 projections for the year 2025. The results indicate that transformations in the transportation and buildings sectors have played major roles in lowering projections. Particularly influential changes include improved vehicle efficiencies, reductions in projected travel demand, reductions in miscellaneous commercial electricity loads, and higher efficiency lighting. Electric sector changes have also contributed significantly to the lowered forecasts, driven by demand reductions, renewable portfolio standards, and air quality regulations.

  9. Geometric and Material Determinants of Patterning Efficiency by Dielectrophoresis

    PubMed Central

    Albrecht, Dirk R.; Sah, Robert L.; Bhatia, Sangeeta N.

    2004-01-01

    Dielectrophoretic (DEP) forces have been used extensively to manipulate, separate, and localize biological cells and bioparticles via high-gradient electric fields. However, minimization of DEP exposure time is desirable, because of possible untoward effects on cell behavior. Toward this goal, this article investigates the geometric and material determinants of particle patterning kinetics and efficiency. In particular, the time required to achieve a steady-state pattern is theoretically modeled and experimentally validated for a planar, interdigitated bar electrode array energized in a standing-wave configuration. This measure of patterning efficiency is calculated from an improved Fourier series solution of DEP force, in which realistic boundary conditions and a finite chamber height are imposed to reflect typical microfluidic applications. The chamber height, electrode spacing, and fluid viscosity and conductivity are parameters that profoundly affect patterning efficiency, and optimization can reduce electric field exposure by orders of magnitude. Modeling strategies are generalizable to arbitrary electrode design as well as to conditions where DEP force may not act alone to cause particle motion. This improved understanding of DEP patterning kinetics provides a framework for new advances in the development of DEP-based biological devices and assays with minimal perturbation of cell behavior. PMID:15454417

  10. Improved CO sub 2 enhanced oil recovery -- Mobility control by in-situ chemical precipitation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ameri, S.; Aminian, K.; Wasson, J.A.

    1991-06-01

    The overall objective of this study has been to evaluate the feasibility of chemical precipitation to improve CO2 sweep efficiency and mobility control. The laboratory experiments have indicated that carbonate precipitation can alter the permeability of the core samples under reservoir conditions. Furthermore, the relative permeability measurements have revealed that precipitation reduces the gas permeability in favor of liquid permeability. This indicates that precipitation is occurring preferentially in the larger pores. Additional experimental work with a series of connected cores has indicated that the permeability profile can be successfully modified. However, pH control plays a critical role in the propagation of the chemical precipitation reaction. A numerical reservoir model has been utilized to evaluate the effects of permeability heterogeneity and permeability modification on the CO2 sweep efficiency. The computer simulation results indicate that the permeability profile modification can significantly enhance CO2 vertical and horizontal sweep efficiencies. The scoping studies with the model have further revealed that only a fraction of high permeability zones need to be altered to achieve sweep efficiency enhancement. 64 refs., 30 figs., 16 tabs.

  11. Model Based Mission Assurance: Emerging Opportunities for Robotic Systems

    NASA Technical Reports Server (NTRS)

    Evans, John W.; DiVenti, Tony

    2016-01-01

    The emergence of Model Based Systems Engineering (MBSE) in a Model Based Engineering framework has created new opportunities to improve effectiveness and efficiency across the assurance functions. The MBSE environment not only supports system architecture development, but also supports Systems Safety, Reliability, and Risk Analysis concurrently in the same framework. Linking to detailed design will further improve assurance capabilities to support failure avoidance and mitigation in flight systems. This is also leading to new assurance functions, including model assurance and management of uncertainty in the modeling environment. Further, assurance cases, structured hierarchical arguments or models, are emerging as a basis for a comprehensive viewpoint from which to support Model Based Mission Assurance (MBMA).

  12. Measuring the labeling efficiency of pseudocontinuous arterial spin labeling.

    PubMed

    Chen, Zhensen; Zhang, Xingxing; Yuan, Chun; Zhao, Xihai; van Osch, Matthias J P

    2017-05-01

    To optimize and validate a sequence for measuring the labeling efficiency of pseudocontinuous arterial spin labeling (pCASL) perfusion MRI. The proposed sequence consists of a labeling module and a single-slice Look-Locker echo planar imaging readout. A model-based algorithm was used to calculate labeling efficiency from the signal acquired from the main brain-feeding arteries. Stability of the labeling efficiency measurement was evaluated with regard to the use of cardiac triggering, flow compensation, and vein signal suppression. Accuracy of the measurement was assessed by comparing the measured labeling efficiency to mean brain pCASL signal intensity over a wide range of flip angles applied in the pCASL labeling. Simulations show that the proposed algorithm can effectively calculate labeling efficiency when correcting for T1 relaxation of the blood spins. Use of cardiac triggering and vein signal suppression improved stability of the labeling efficiency measurement, while flow compensation resulted in little improvement. The measured labeling efficiency was found to be linearly (R = 0.973; P < 0.001) related to brain pCASL signal intensity over a wide range of pCASL flip angles. The optimized labeling efficiency sequence provides robust artery-specific labeling efficiency measurement within a short acquisition time (∼30 s), thereby enabling improved accuracy of pCASL CBF quantification. Magn Reson Med 77:1841-1852, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  13. Improved Mapping of Carbon, Water and Energy Land-Surface Fluxes Using Remotely Sensed Indicators of Canopy Light Use Efficiency

    NASA Astrophysics Data System (ADS)

    Schull, M. A.; Anderson, M. C.; Kustas, W.; Cammalleri, C.; Houborg, R.

    2012-12-01

    A light-use-efficiency (LUE) based model of canopy resistance has been embedded into a thermal-based Two-Source Energy Balance (TSEB) model to facilitate coupled simulations of transpiration and carbon assimilation. The model assumes that deviations of the observed canopy LUE from a nominal stand-level value (LUEn - typically indexed by vegetation class) are due to varying conditions of light, humidity, CO2 concentration and leaf temperature. The deviations are accommodated by adjusting an effective LUE that responds to the varying conditions. The challenge in monitoring fluxes at larger scales is to capture the physiological responses due to changing conditions. This challenge can be met using remotely sensed leaf chlorophyll (Cab). Since Cab is a vital pigment for absorbing light for use in photosynthesis, it has been recognized as a key parameter for quantifying photosynthetic functioning that is sensitive to these conditions. Recent studies have shown that it is sensitive to changes in LUE, which defines how efficiently a plant can assimilate carbon dioxide (CO2) given the absorbed Photosynthetically Active Radiation (PAR) and is therefore useful for monitoring carbon fluxes. We investigate the feasibility of leaf chlorophyll to capture these variations in LUEn using remotely sensed data. To retrieve Cab from remotely sensed data we use REGFLEC, a physically based tool that translates at-sensor radiances in the green, red and NIR spectral regions from multiple satellite sensors into realistic maps of LAI and Cab. Initial results show that Cab is exponentially correlated to light use efficiency. Incorporating nominal light use efficiency estimated from Cab is shown to improve fluxes of carbon, water and energy, most notably in times of stressed vegetation. The result illustrates that Cab is sensitive to changes in plant physiology and can capture plant stress needed for improved estimation of fluxes. The observed relationship and initial results demonstrate the need for integrating remotely sensed Cab to facilitate improved mapping of coupled carbon, water, and energy fluxes across vegetated landscapes.

  14. Desktop microsimulation: a tool to improve efficiency in the medical office practice.

    PubMed

    Montgomery, James B; Linville, Beth A; Slonim, Anthony D

    2013-01-01

    Because the economic crisis in the United States continues to have an impact on healthcare organizations, industry leaders must optimize their decision making. Discrete-event computer simulation is a quality tool with a demonstrated track record of improving the precision of analysis for process redesign. However, the use of simulation to consolidate practices and design efficiencies into an unfinished medical office building was a unique task. A discrete-event computer simulation package was used to model the operations and forecast future results for four orthopedic surgery practices. The scenarios were created to allow an evaluation of the impact of process change on the output variables of exam room utilization, patient queue size, and staff utilization. The model helped with decisions regarding space allocation and efficient exam room use by demonstrating the impact of process changes in patient queues at check-in/out, x-ray, and cast room locations when compared to the status quo model. The analysis impacted decisions on facility layout, patient flow, and staff functions in this newly consolidated practice. Simulation was found to be a useful tool for process redesign and decision making even prior to building occupancy. © 2011 National Association for Healthcare Quality.
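
    Not the commercial simulation package used in the study; as a minimal sketch of the discrete-event idea it describes, the pure-Python model below queues patients for a limited number of exam rooms and reports mean wait and room utilization under assumed (hypothetical) arrival and visit-time distributions.

```python
import heapq
import random

def simulate_clinic(n_rooms=4, n_patients=200, mean_interarrival=6.0,
                    mean_visit=20.0, seed=1):
    """Minimal discrete-event sketch: patients arrive, wait for a free exam
    room, occupy it for a random visit time. Returns mean wait and utilization."""
    random.seed(seed)
    t = 0.0
    room_free = [0.0] * n_rooms            # earliest time each room is free
    heapq.heapify(room_free)
    waits, busy_time = [], 0.0
    for _ in range(n_patients):
        t += random.expovariate(1.0 / mean_interarrival)   # next arrival time
        free_at = heapq.heappop(room_free)                  # soonest-free room
        start = max(t, free_at)
        visit = random.expovariate(1.0 / mean_visit)
        waits.append(start - t)
        busy_time += visit
        heapq.heappush(room_free, start + visit)
    makespan = max(max(room_free), t)
    return sum(waits) / len(waits), busy_time / (n_rooms * makespan)

for rooms in (3, 4, 5):
    wait, util = simulate_clinic(n_rooms=rooms)
    print(f"{rooms} rooms: mean wait {wait:5.1f} min, utilization {util:.0%}")
```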

  15. Determinant Factors of Long-Term Performance Development in Young Swimmers.

    PubMed

    Morais, Jorge E; Silva, António J; Marinho, Daniel A; Lopes, Vítor P; Barbosa, Tiago M

    2017-02-01

    To develop a performance predictor model based on swimmers' biomechanical profile, relate the partial contribution of the main predictors to the training program, and analyze the time effect, sex effect, and time × sex interaction. Ninety-one swimmers (44 boys, 12.04 ± 0.81 y; 47 girls, 11.22 ± 0.98 y) were evaluated over a 3-y period. Decimal age and anthropometric, kinematic, and efficiency features were collected at 10 different time points over 3 seasons (ie, longitudinal research). Hierarchical linear modeling was the procedure used to estimate the performance predictors. Performance improved between season 1 early and season 3 late for both sexes (boys 26.9% [20.88;32.96], girls 16.1% [10.34;22.54]). Decimal age (estimate [EST] -2.05, P < .001), arm span (EST -0.59, P < .001), stroke length (EST 3.82; P = .002), and propelling efficiency (EST -0.17, P = .001) were entered in the final model. Over 3 consecutive seasons, young swimmers' performance improved. Performance is a multifactorial phenomenon in which anthropometrics, kinematics, and efficiency were the main determinants. The change in these factors over time was coupled with the training plans of this talent identification and development program.
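
    The hierarchical linear modeling step can be approximated, under assumptions, with an off-the-shelf linear mixed-effects model; the sketch below uses statsmodels with a random intercept per swimmer on synthetic longitudinal data. Variable names and the data-generating values are placeholders, not the study's measurements.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic longitudinal data: one row per swimmer per evaluation moment.
n_swimmers, n_waves = 40, 10
sid = np.repeat(np.arange(n_swimmers), n_waves)
age = 11.0 + np.tile(np.linspace(0, 2.5, n_waves), n_swimmers) + rng.normal(0, 0.3, sid.size)
arm_span = 140 + 2.0 * (age - 11) + rng.normal(0, 5, sid.size)
stroke_len = 1.5 + 0.05 * (age - 11) + rng.normal(0, 0.1, sid.size)
prop_eff = 30 + 2.0 * (age - 11) + rng.normal(0, 2, sid.size)
perf = (90 - 2.0 * age - 0.3 * (arm_span - 140) - 0.2 * prop_eff
        + rng.normal(0, 1.5, sid.size)
        + np.repeat(rng.normal(0, 2, n_swimmers), n_waves))   # swimmer-level shift

df = pd.DataFrame(dict(swimmer=sid, perf=perf, age=age, arm_span=arm_span,
                       stroke_len=stroke_len, prop_eff=prop_eff))

# Random intercept per swimmer; fixed effects mirror the predictors retained
# in the abstract (decimal age, arm span, stroke length, propelling efficiency).
model = smf.mixedlm("perf ~ age + arm_span + stroke_len + prop_eff",
                    data=df, groups=df["swimmer"])
print(model.fit().summary())
```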

  16. Energy efficient model based algorithm for control of building HVAC systems.

    PubMed

    Kirubakaran, V; Sahu, Chinmay; Radhakrishnan, T K; Sivakumaran, N

    2015-11-01

    Energy efficient designs are receiving increasing attention in various fields of engineering. Heating, ventilation and air conditioning (HVAC) control system designs involve improved energy usage with an acceptable relaxation in thermal comfort. In this paper, real time data from a building HVAC system provided by BuildingLAB is considered. A resistor-capacitor (RC) framework for representing the thermal dynamics of the building is estimated using a particle swarm optimization (PSO) algorithm. With thermal comfort (deviation of room temperature from the required temperature) and an energy measure (Ecm) as objective costs, an explicit MPC design for this building model is executed based on a state space representation of the supply water temperature (input)/room temperature (output) dynamics. The controllers are subjected to servo tracking, and an external disturbance (ambient temperature) is provided from the real time data during closed loop control. The control strategies are ported on a PIC32mx series microcontroller platform. The building model is implemented in MATLAB and hardware-in-the-loop (HIL) testing of the strategies is executed over a USB port. Results indicate that, compared to traditional proportional integral (PI) controllers, the explicit MPCs improve both energy efficiency and thermal comfort significantly. Copyright © 2015 Elsevier Inc. All rights reserved.
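
    Not the BuildingLAB model or the paper's explicit MPC; a minimal sketch, assuming illustrative parameter values, of the kind of first-order resistor-capacitor (RC) zone model such controllers are built around, stepped with forward Euler and driven by a crude proportional heater.

```python
import numpy as np

# First-order RC thermal model of a single zone:
#   C * dT/dt = (T_amb - T) / R + Q_hvac
# discretized with forward Euler. R, C, and inputs are illustrative values.
R = 0.05      # K/W   thermal resistance to ambient
C = 2.0e6     # J/K   lumped thermal capacitance of the zone
dt = 60.0     # s     time step

def step(T, T_amb, Q_hvac):
    """Advance the zone temperature by one time step."""
    return T + dt / C * ((T_amb - T) / R + Q_hvac)

# Simple simulation: proportional heating toward a 21 C setpoint.
T, T_set = 15.0, 21.0
for k in range(240):                                         # 4 hours
    T_amb = 5.0 + 3.0 * np.sin(2 * np.pi * k / 1440)         # crude ambient profile
    Q_hvac = np.clip(4000.0 * (T_set - T), 0.0, 8000.0)      # W, saturated heater
    T = step(T, T_amb, Q_hvac)
print(f"Zone temperature after 4 h: {T:.2f} C")
```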

  17. Modeling of a Ne/Xe dielectric barrier discharge excilamp for improvement of VUV radiation production

    NASA Astrophysics Data System (ADS)

    Khodja, K.; Belasri, A.; Loukil, H.

    2017-08-01

    This work is devoted to excimer lamp efficiency optimization using a homogeneous discharge model of a dielectric barrier discharge in a Ne-Xe mixture. The model includes the plasma chemistry, the electrical circuit, and the Boltzmann equation. In this paper, we are particularly interested in the electrical and kinetic properties and the light output generated by the DBD. Xenon is chosen for its high luminescence in the range of vacuum UV radiation around 173 nm. Our study is motivated by interest in this type of discharge in many industrial applications, including the achievement of high light output lamps. In this work, we used an applied sinusoidal voltage, frequency, gas pressure, and xenon concentration in the ranges of 2-8 kV, 10-200 kHz, 100-800 Torr, and 10-50%, respectively. The analyzed results concern the voltage Vp across the gap, the dielectric voltage Vd, the discharge current I, and the particle densities. We also investigated the effect of the electrical parameters and xenon concentration on the lamp efficiency. This investigation will allow one to identify the appropriate parameters for Ne/Xe DBD excilamps to improve their efficiency.

  18. Efficient implementation of the many-body Reactive Bond Order (REBO) potential on GPU

    NASA Astrophysics Data System (ADS)

    Trędak, Przemysław; Rudnicki, Witold R.; Majewski, Jacek A.

    2016-09-01

    The second generation Reactive Bond Order (REBO) empirical potential is commonly used to accurately model a wide range of hydrocarbon materials. It is also extensible to other atom types and interactions. The REBO potential assumes a complex multi-body interaction model that is difficult to represent efficiently in the SIMD or SIMT programming model. Hence, despite its importance, no efficient GPGPU implementation has been developed for this potential. Here we present a detailed description of a highly efficient GPGPU implementation of a molecular dynamics algorithm using the REBO potential. The presented algorithm takes advantage of rarely used properties of the SIMT architecture of a modern GPU to solve difficult synchronization issues that arise in computations of the multi-body potential. Techniques developed for this problem may also be used to achieve efficient solutions of different problems. The performance of the proposed algorithm is assessed using a range of model systems and compared to a highly optimized CPU implementation (both single core and OpenMP) available in the LAMMPS package. These experiments show up to a 6x improvement in forces computation time using a single processor of the NVIDIA Tesla K80 compared to a high-end 16-core Intel Xeon processor.

  19. Regional hospital improves efficiency with co-generation retrofit.

    PubMed

    Knutson, D; Anderson, L

    1999-11-01

    Feasibility analysis of the co-generation retrofit of the Red Deer Regional Hospital pointed to a reasonable payback of capital cost and increased efficiency in operation of the facility. Budget restrictions nearly stopped the project from proceeding. Innovative construction procedures proposed by the Facility Management Group, in particular, Mr Keith Metcalfe, Director of Maintenance, allowed a worthwhile project to reach successful completion. We feel that this model can perhaps be used by similar facilities in the future to achieve their energy efficiency goals.

  20. Model-assisted survey regression estimation with the lasso

    Treesearch

    Kelly S. McConville; F. Jay Breidt; Thomas C. M. Lee; Gretchen G. Moisen

    2017-01-01

    In the U.S. Forest Service’s Forest Inventory and Analysis (FIA) program, as in other natural resource surveys, many auxiliary variables are available for use in model-assisted inference about finite population parameters. Some of this auxiliary information may be extraneous, and therefore model selection is appropriate to improve the efficiency of the survey...
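
    The abstract is truncated, but the technique it names is standard; as a hedged, generic illustration (not the FIA implementation), the sketch below fits a lasso working model on a probability sample and plugs it into the model-assisted difference estimator of a population total. The synthetic population, equal inclusion probabilities, and variable roles are assumptions.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)

# Hypothetical finite population with many auxiliary variables, few of which matter.
N, p = 20_000, 30
X = rng.normal(size=(N, p))
y = 3.0 + 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=1.0, size=N)

# Simple random sample with equal inclusion probabilities pi = n/N.
n = 500
s = rng.choice(N, size=n, replace=False)
pi = n / N

# Fit the working model on the sample; the lasso drops extraneous auxiliaries.
m = LassoCV(cv=5).fit(X[s], y[s])

# Model-assisted (difference) estimator of the population total:
#   t_hat = sum_U m(x_k) + sum_s (y_k - m(x_k)) / pi_k
t_hat = m.predict(X).sum() + ((y[s] - m.predict(X[s])) / pi).sum()
t_ht = (y[s] / pi).sum()                 # plain Horvitz-Thompson, for contrast
print(f"true total {y.sum():.0f}  HT {t_ht:.0f}  lasso-assisted {t_hat:.0f}")
```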

  1. Some safe and sensible shortcuts for efficiently upscaled updates of existing elevation models.

    NASA Astrophysics Data System (ADS)

    Knudsen, Thomas; Aasbjerg Nielsen, Allan

    2013-04-01

    The Danish national elevation model, DK-DEM, was introduced in 2009 and is based on LiDAR data collected in the time frame 2005-2007. Hence, DK-DEM is aging, and it is time to consider how to integrate new data with the current model in a way that improves the representation of new landscape features, while still preserving the overall (very high) quality of the model. In LiDAR terms, 2005 is equivalent to some time between the palaeolithic and the neolithic. So evidently, when (and if) an update project is launched, we may expect some notable improvements due to the technical and scientific developments from the last half decade. To estimate the magnitude of these potential improvements, and to devise efficient and effective ways of integrating the new and old data, we currently carry out a number of case studies based on comparisons between the current terrain model (with a ground sample distance, GSD, of 1.6 m) and a number of new high resolution point clouds (10-70 points/m2). Not knowing anything about the terms of a potential update project, we consider multiple scenarios ranging from business as usual (a new model with the same GSD, but improved precision) to aggressive upscaling (a new model with 4 times better GSD, i.e. a 16-fold increase in the amount of data). Especially in the latter case, speeding up the gridding process is important. Luckily, recent results from one of our case studies reveal that for very high resolution data in smooth terrain (which is the common case in Denmark), using the local mean (LM) as the grid value estimator is only negligibly worse than using the theoretically "best" estimator, i.e. ordinary kriging (OK) with rigorous modelling of the semivariogram. The bias in a leave-one-out cross-validation differs on the micrometer level, while the RMSE differs on the 0.1 mm level. This is fortunate, since an LM estimator can be implemented in plain stream mode, letting the points from the unstructured point cloud (i.e. no TIN generation) stream through the processor, individually contributing to the nearest grid posts in a memory-mapped grid file. Algorithmically this is very efficient, but it would be even more efficient if we did not have to handle so much data. Another of our recent case studies focuses on this. The basic idea is to ignore data that does not tell us anything new. We do this by looking at anomalies between the current height model and the new point cloud, then computing a correction grid for the current model. Points with insignificant anomalies are simply removed from the point cloud, and the correction grid is computed using the remaining point anomalies only. Hence, we only compute updates in areas of significant change, speeding up the process, and giving us new insight into the precision of the current model, which in turn results in improved metadata for both the current and the new model. Currently we focus on simple approaches for creating a smooth update process for integration of heterogeneous data sets. On the other hand, as years go by and multiple generations of data become available, more advanced approaches will probably become necessary (e.g. a multi-campaign bundle adjustment, improving the oldest data using cross-over adjustment with newer campaigns). But to prepare for such approaches, it is important already now to organize and evaluate the ancillary (GPS, INS) and engineering-level data for the current data sets.
This is essential if future generations of DEM users should be able to benefit from future conceptions of "some safe and sensible shortcuts for efficiently upscaled updates of existing elevation models".
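
    A minimal sketch of the stream-mode local-mean gridding described above: each point from the unstructured cloud updates the sum and count of the grid cell it falls in, with no TIN generation. The grid extent, cell size, and synthetic point cloud are illustrative assumptions; a production version would write into a memory-mapped grid file as the abstract notes.

```python
import numpy as np

def local_mean_grid(points, x0, y0, cell, nx, ny):
    """Stream points (x, y, z) into a regular grid, accumulating per-cell
    sums and counts; the gridded value is the local mean of each cell."""
    zsum = np.zeros((ny, nx))
    cnt = np.zeros((ny, nx), dtype=np.int64)
    for x, y, z in points:                    # plain stream, no TIN
        i = int((x - x0) / cell)
        j = int((y - y0) / cell)
        if 0 <= i < nx and 0 <= j < ny:
            zsum[j, i] += z
            cnt[j, i] += 1
    return np.where(cnt > 0, zsum / np.maximum(cnt, 1), np.nan)

# Tiny synthetic point cloud: a gently sloping surface sampled at ~20 pts/m^2.
rng = np.random.default_rng(42)
xy = rng.uniform(0, 100, size=(200_000, 2))
z = 0.01 * xy[:, 0] + 0.005 * xy[:, 1] + rng.normal(scale=0.03, size=len(xy))
dem = local_mean_grid(zip(xy[:, 0], xy[:, 1], z), 0.0, 0.0, cell=0.4,
                      nx=250, ny=250)
print("grid cells filled:", np.count_nonzero(~np.isnan(dem)))
```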

  2. Experimental evaluation of a mathematical model for predicting transfer efficiency of a high volume-low pressure air spray gun.

    PubMed

    Tan, Y M; Flynn, M R

    2000-10-01

    The transfer efficiency of a spray-painting gun is defined as the amount of coating applied to the workpiece divided by the amount sprayed. Characterizing this transfer process allows for accurate estimation of the overspray generation rate, which is important for determining a spray painter's exposure to airborne contaminants. This study presents an experimental evaluation of a mathematical model for predicting the transfer efficiency of a high volume-low pressure spray gun. The effects of gun-to-surface distance and nozzle pressure on the agreement between the transfer efficiency measurement and prediction were examined. Wind tunnel studies and non-volatile vacuum pump oil in place of commercial paint were used to determine transfer efficiency at nine gun-to-surface distances and four nozzle pressure levels. The mathematical model successfully predicts transfer efficiency within the uncertainty limits. The least squares regression between measured and predicted transfer efficiency has a slope of 0.83 and an intercept of 0.12 (R2 = 0.98). Two correction factors were determined to improve the mathematical model. At higher nozzle pressure settings, 6.5 psig and 5.5 psig, the correction factor is a function of both gun-to-surface distance and nozzle pressure level. At lower nozzle pressures, 4 psig and 2.75 psig, gun-to-surface distance slightly influences the correction factor, while nozzle pressure has no discernible effect.

  3. Global-scale combustion sources of organic aerosols: sensitivity to formation and removal mechanisms

    NASA Astrophysics Data System (ADS)

    Tsimpidi, Alexandra P.; Karydis, Vlassis A.; Pandis, Spyros N.; Lelieveld, Jos

    2017-06-01

    Organic compounds from combustion sources such as biomass burning and fossil fuel use are major contributors to the global atmospheric load of aerosols. We analyzed the sensitivity of model-predicted global-scale organic aerosols (OA) to parameters that control primary emissions, photochemical aging, and the scavenging efficiency of organic vapors. We used a computationally efficient module for the description of OA composition and evolution in the atmosphere (ORACLE) of the global chemistry-climate model EMAC (ECHAM/MESSy Atmospheric Chemistry). A global dataset of aerosol mass spectrometer (AMS) measurements was used to evaluate simulated primary (POA) and secondary (SOA) OA concentrations. Model results are sensitive to the emission rates of intermediate-volatility organic compounds (IVOCs) and POA. Assuming enhanced reactivity of semi-volatile organic compounds (SVOCs) and IVOCs with OH substantially improved the model performance for SOA. The use of a hybrid approach for the parameterization of the aging of IVOCs had a small effect on predicted SOA levels. The model performance improved by assuming that freshly emitted organic compounds are relatively hydrophobic and become increasingly hygroscopic due to oxidation.

  4. Stability and the Evolvability of Function in a Model Protein

    PubMed Central

    Bloom, Jesse D.; Wilke, Claus O.; Arnold, Frances H.; Adami, Christoph

    2004-01-01

    Functional proteins must fold with some minimal stability to a structure that can perform a biochemical task. Here we use a simple model to investigate the relationship between the stability requirement and the capacity of a protein to evolve the function of binding to a ligand. Although our model contains no built-in tradeoff between stability and function, proteins evolved function more efficiently when the stability requirement was relaxed. Proteins with both high stability and high function evolved more efficiently when the stability requirement was gradually increased than when there was constant selection for high stability. These results show that in our model, the evolution of function is enhanced by allowing proteins to explore sequences corresponding to marginally stable structures, and that it is easier to improve stability while maintaining high function than to improve function while maintaining high stability. Our model also demonstrates that even in the absence of a fundamental biophysical tradeoff between stability and function, the speed with which function can evolve is limited by the stability requirement imposed on the protein. PMID:15111394

  5. A Case Study Using Modeling and Simulation to Predict Logistics Supply Chain Issues

    NASA Technical Reports Server (NTRS)

    Tucker, David A.

    2007-01-01

    Optimization of critical supply chains to deliver thousands of parts, materials, sub-assemblies, and vehicle structures as needed is vital to the success of the Constellation Program. Thorough analysis needs to be performed on the integrated supply chain processes to plan, source, make, deliver, and return critical items efficiently. Process modeling provides simulation technology-based, predictive solutions for supply chain problems which enable decision makers to reduce costs, accelerate cycle time and improve business performance. For example, United Space Alliance, LLC utilized this approach in late 2006 to build simulation models that recreated shuttle orbiter thruster failures and predicted the potential impact of thruster removals on logistics spare assets. The main objective was the early identification of possible problems in providing thruster spares for the remainder of the Shuttle Flight Manifest. After extensive analysis the model results were used to quantify potential problems and led to improvement actions in the supply chain. Similarly the proper modeling and analysis of Constellation parts, materials, operations, and information flows will help ensure the efficiency of the critical logistics supply chains and the overall success of the program.

  6. A Taylor Expansion-Based Adaptive Design Strategy for Global Surrogate Modeling With Applications in Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; Zhang, Guannan; Ye, Ming; Wu, Jianfeng; Wu, Jichun

    2017-12-01

    Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency in the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search for informative samples, and a robust stopping criterion to terminate the sample search that guarantees the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions with different dimensionality and complexity in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.

  7. Directional Slack-Based Measure for the Inverse Data Envelopment Analysis

    PubMed Central

    Abu Bakar, Mohd Rizam; Lee, Lai Soon; Jaafar, Azmi B.; Heydar, Maryam

    2014-01-01

    This research introduces a novel technique based on the Directional Slack-Based Measure for inverse Data Envelopment Analysis. In practice, it develops an inverse directional slack-based measure model within a new production possibility set for the case in which the output (input) quantities of an efficient decision making unit are modified. In this method, the efficient decision making unit is removed from the current production possibility set and replaced by the same unit with its input and output quantities modified accordingly. The efficiency scores of all DMUs are retained in this approach, and the efficiency score may also improve. The proposed approach is investigated with reference to a resource allocation problem, and it can simultaneously accommodate increases (declines) in certain outputs associated with the efficient decision making unit. The significance of the presented model is illustrated with numerical examples. PMID:24883350
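
    Not the paper's inverse directional slack-based model; for orientation only, a hedged sketch of the standard input-oriented CCR envelopment LP that such extensions build on, solved with scipy.optimize.linprog on hypothetical data.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR (envelopment form) efficiency of DMU j0.
    X: (m inputs x n DMUs), Y: (s outputs x n DMUs). Decision vars: [theta, lambda]."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                      # minimize theta
    # inputs:  sum_j lambda_j * x_ij - theta * x_i0 <= 0
    A_in = np.c_[-X[:, [j0]], X]
    # outputs: -sum_j lambda_j * y_rj <= -y_r0
    A_out = np.c_[np.zeros((s, 1)), -Y]
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    bounds = [(None, None)] + [(0, None)] * n        # theta free, lambda >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# Hypothetical port-like data: 2 inputs (berth length, cranes), 1 output (TEU).
X = np.array([[300, 450, 500, 700],
              [  4,   6,   8,  10]], dtype=float)
Y = np.array([[200, 360, 380, 700]], dtype=float)
for j in range(X.shape[1]):
    print(f"DMU {j}: theta = {ccr_efficiency(X, Y, j):.3f}")
```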

  8. Transient analysis of a superconducting AC generator using the compensated 2-D model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chun, Y.D.; Lee, H.W.; Lee, J.

    1999-09-01

    An SCG has many advantages over conventional generators, such as reduction in width and size, improvement in efficiency, and better steady-state stability. The paper presents a 2-D transient analysis of a superconducting AC generator (SCG) using the finite element method (FEM). The compensated 2-D model, obtained by lengthening the airgap of the original 2-D model, is proposed for accurate and efficient transient analysis. The accuracy of the compensated 2-D model is verified by a small error of 6.4% compared to experimental data. The transient characteristics of the 30 kVA SCG model have been investigated in detail, and the damper performance for various design parameters is examined.

  9. Knowing requires data

    USGS Publications Warehouse

    Naranjo, Ramon C.

    2017-01-01

    Groundwater-flow models are often calibrated using a limited number of observations relative to the unknown inputs required for the model. This is especially true for models that simulate groundwater/surface-water interactions. In this case, subsurface temperature sensors can be an efficient means of collecting long-term data that capture the transient nature of physical processes such as seepage losses. A continuous and spatially dense network of diverse observation data can be used to improve knowledge of important physical drivers and to conceptualize and calibrate variably saturated groundwater flow models. An example is presented in which the results of such an analysis were used to help guide irrigation districts and water-management decisions on costly upgrades to conveyance systems to improve water usage and farm productivity, and restoration efforts to improve downstream water quality and ecosystems.

  10. Multibody dynamics simulation of an all-wheel-drive motorcycle for handling and energy efficiency investigations

    NASA Astrophysics Data System (ADS)

    Griffin, J. W.; Popov, A. A.

    2018-07-01

    It is now possible, through electrical, hydraulic or mechanical means, to power the front wheel of a motorcycle. The aim of this is often to improve performance in limit-handling scenarios, including off-road low-traction conditions and on-road high-speed cornering. Following on from research into active torque distribution in 4-wheeled vehicles, the possibility exists for efficiency improvements to be realised by reducing the total amount of energy dissipated as slip at the wheel-road contact. This paper presents the results of an investigation into the effect that varying the torque distribution ratio has on the energy consumption of the two-wheeled vehicle. A 13-degree-of-freedom multibody model was created, which includes the effects of suspension, aerodynamics and gyroscopic bodies. SimMechanics, from The MathWorks, is used for automatic generation of equations of motion and time-domain simulation, in conjunction with MATLAB and Simulink. A simple driver model is used to control the speed and yaw rate of the motorcycle. The handling characteristics of the motorcycle are quantitatively analysed, and the impact of torque distribution on energy consumption is considered during straight-line and cornering situations. The investigation has shown that only a small improvement in efficiency can be made by transferring a portion of the drive torque to the front wheel. Tyre longevity could be improved by reduced slip energy dissipation.

  11. Process Correlation Analysis Model for Process Improvement Identification

    PubMed Central

    Park, Sooyong

    2014-01-01

    Software process improvement aims at improving the development process of software systems. It is initiated by a process assessment that identifies strengths and weaknesses, and based on the findings, improvement plans are developed. In general, a process reference model (e.g., CMMI) is used throughout the process of software process improvement as the base. CMMI defines a set of process areas involved in software development and what is to be carried out in the process areas in terms of goals and practices. Process areas and their elements (goals and practices) are often correlated due to the iterative nature of the software development process. However, in current practice, correlations of process elements are often overlooked in the development of an improvement plan, which diminishes the efficiency of the plan. This is mainly attributed to the significant effort required and the lack of required expertise. In this paper, we present a process correlation analysis model that helps identify correlations of process elements from the results of process assessment. This model is defined based on CMMI and empirical data of improvement practices. We evaluate the model using industrial data. PMID:24977170

  12. Process correlation analysis model for process improvement identification.

    PubMed

    Choi, Su-jin; Kim, Dae-Kyoo; Park, Sooyong

    2014-01-01

    Software process improvement aims at improving the development process of software systems. It is initiated by a process assessment that identifies strengths and weaknesses, and based on the findings, improvement plans are developed. In general, a process reference model (e.g., CMMI) is used throughout the process of software process improvement as the base. CMMI defines a set of process areas involved in software development and what is to be carried out in the process areas in terms of goals and practices. Process areas and their elements (goals and practices) are often correlated due to the iterative nature of the software development process. However, in current practice, correlations of process elements are often overlooked in the development of an improvement plan, which diminishes the efficiency of the plan. This is mainly attributed to the significant effort required and the lack of required expertise. In this paper, we present a process correlation analysis model that helps identify correlations of process elements from the results of process assessment. This model is defined based on CMMI and empirical data of improvement practices. We evaluate the model using industrial data.

  13. Neutron Transport Models and Methods for HZETRN and Coupling to Low Energy Light Ion Transport

    NASA Technical Reports Server (NTRS)

    Blattnig, S.R.; Slaba, T.C.; Heinbockel, J.H.

    2008-01-01

    Exposure estimates inside space vehicles, surface habitats, and high altitude aircraft exposed to space radiation are highly influenced by secondary neutron production. The deterministic transport code HZETRN has been identified as a reliable and efficient tool for such studies, but improvements to the underlying transport models and numerical methods are still necessary. In this paper, the forward-backward (FB) and directionally coupled forward-backward (DC) neutron transport models are derived, numerical methods for the FB model are reviewed, and a computationally efficient numerical solution is presented for the DC model. Both models are compared to the Monte Carlo codes HETC-HEDS and FLUKA, and the DC model is shown to agree closely with the Monte Carlo results. Finally, it is found in the development of either model that the decoupling of low energy neutrons from the light ion (A<4) transport procedure adversely affects low energy light ion fluence spectra and exposure quantities. A first order correction is presented to resolve the problem, and it is shown to be both accurate and efficient.

  14. Optimization of blade motion of vertical axis turbine

    NASA Astrophysics Data System (ADS)

    Ma, Yong; Zhang, Liang; Zhang, Zhi-yang; Han, Duan-feng

    2016-04-01

    In this paper, a method is proposed to improve the energy efficiency of the vertical axis turbine. First, a single-disk multiple-stream-tube model is used to calculate individual fitness. A genetic algorithm is adopted to optimize the blade pitch motion of the vertical axis turbine, with the maximum energy efficiency selected as the optimization objective. Then, a particular data processing method is proposed, fitting the result data into a cosine-like curve. After that, a general formula for calculating the blade motion is developed. Finally, CFD simulation is used to validate the blade pitch motion formula. The results show that the turbine's energy efficiency becomes higher after the optimization of blade pitch motion; compared with the fixed-pitch turbine, the efficiency of the variable-pitch turbine is significantly improved by the active blade pitch control; the energy efficiency declines gradually with the growth of the speed ratio; besides, compactness has a larger effect on the blade motion, while the number of blades has little effect on it.

  15. 75 FR 20111 - Energy Conservation Program: Energy Conservation Standards for Residential Water Heaters, Direct...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-16

    ... the ``three heating products'') must be designed to ``achieve the maximum improvement in energy... and CO2 savings are performed with different computer models, leading to different time frames for... of EPCA sets forth a variety of provisions designed to improve energy efficiency. Part A\\1\\ of Title...

  16. Field Tests of In-Service Modifications to Improve Performance of An Icebreaker Main Diesel Engine

    DOT National Transportation Integrated Search

    1977-08-01

    Field tests of in-service modifications to improve engine efficiency and lower the emissions were performed on the no. 3 main diesel engine of the USCGC Mackinaw (WAGB-83). This engine is a model 38D8-1/8 manufactured by Colt Industries, Fairbanks Mo...

  17. Evaluate and Analysis Efficiency of Safaga Port Using DEA-CCR, BCC and SBM Models-Comparison with DP World Sokhna

    NASA Astrophysics Data System (ADS)

    Elsayed, Ayman; Shabaan Khalil, Nabil

    2017-10-01

    The competition among maritime ports is increasing continuously; the main purpose of Safaga port is to become the best option for companies to carry out their trading activities, particularly importing and exporting. The main objective of this research is to evaluate and analyze factors that may significantly affect the levels of Safaga port efficiency in Egypt (particularly the infrastructural capacity). The assessment of such efficiency must play an important role in the management of Safaga port in order to improve the possibility of development and success in commercial activities. Drawing on Data Envelopment Analysis (DEA) models, this paper develops a manner of assessing the comparative efficiency of Safaga port in Egypt during the study period 2004-2013. Previous research on port efficiency measurement has usually used radial DEA models (DEA-CCR, DEA-BCC) but not non-radial DEA models. This research applies radial, output-oriented (DEA-CCR, DEA-BCC) and non-radial (DEA-SBM) models with ten inputs and four outputs. The results were obtained from the analysis of the input and output variables based on the DEA-CCR, DEA-BCC, and SBM models, using the software Max DEA Pro 6.3. DP World Sokhna port showed higher efficiency than Safaga port for all outputs. DP World Sokhna's position below the southern entrance to the Suez Canal, on the Red Sea in Egypt, makes it strategically located to handle cargo transiting through one of the world's busiest commercial waterways.

  18. Hamiltonian Monte Carlo acceleration using surrogate functions with random bases.

    PubMed

    Zhang, Cheng; Shahbaba, Babak; Zhao, Hongkai

    2017-11-01

    For big data analysis, the high computational cost of Bayesian methods often limits their application in practice. In recent years, there have been many attempts to improve the computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo method, namely, Hamiltonian Monte Carlo. The key idea is to explore and exploit the structure and regularity in parameter space for the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm, which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.
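
    Not the authors' surrogate-HMC code; a hedged sketch of the random-bases idea it rests on: approximate an expensive log-density with random Fourier features plus ridge regression, so that surrogate values and analytic gradients (the quantities HMC needs at every leapfrog step) become cheap. The target function, feature count, and length scale are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_log_density(x):
    """Stand-in for a costly target: a mildly banana-shaped 2-D log-density."""
    return -0.5 * (x[..., 0] ** 2 + 2.0 * (x[..., 1] - 0.5 * x[..., 0] ** 2) ** 2)

# Random Fourier features: phi(x) = sqrt(2/D) * cos(x W^T + b)
d, D, lengthscale, ridge = 2, 300, 1.0, 1e-6
W = rng.normal(scale=1.0 / lengthscale, size=(D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def features(X):
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

# Fit the surrogate weights by ridge regression on a modest training set.
X_train = rng.uniform(-2, 2, size=(500, d))
y_train = expensive_log_density(X_train)
Phi = features(X_train)
w = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(D), Phi.T @ y_train)

def surrogate(x):
    return features(x) @ w

def surrogate_grad(x):
    # d/dx of sqrt(2/D)*cos(x.Wk + bk): -sqrt(2/D)*sin(x.Wk + bk)*Wk, weighted by w
    return -np.sqrt(2.0 / D) * (np.sin(x @ W.T + b) * w) @ W

X_test = rng.uniform(-2, 2, size=(5, d))
print("true:", np.round(expensive_log_density(X_test), 3))
print("surr:", np.round(surrogate(X_test), 3))
print("grad at origin:", np.round(surrogate_grad(np.zeros((1, d)))[0], 3))
```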

  19. Using Chebyshev polynomial interpolation to improve the computational efficiency of gravity models near an irregularly-shaped asteroid

    NASA Astrophysics Data System (ADS)

    Hu, Shou-Cun; Ji, Jiang-Hui

    2017-12-01

    In asteroid rendezvous missions, the dynamical environment near an asteroid’s surface should be made clear prior to launch of the mission. However, most asteroids have irregular shapes, which lowers the efficiency of calculating their gravitational fields with the traditional polyhedral method. In this work, we propose a method to partition the space near an asteroid adaptively along three spherical coordinates and use Chebyshev polynomial interpolation to represent the gravitational acceleration in each cell. Moreover, we compare four different interpolation schemes to obtain the best precision with identical initial parameters. An error-adaptive octree division is incorporated to improve the interpolation precision near the surface. As an example, we take the typical irregularly-shaped near-Earth asteroid 4179 Toutatis to demonstrate the advantage of this method; as a result, we show that the efficiency can be increased by hundreds to thousands of times with our method. Our results indicate that this method can be applicable to other irregularly-shaped asteroids and can greatly improve the evaluation efficiency.
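
    Not the paper's 3-D adaptive octree scheme; a 1-D sketch, under stated assumptions, of its core principle: fit a Chebyshev interpolant to an expensive function once at Chebyshev nodes, then evaluate the cheap polynomial many times. The stand-in "gravity" profile and interval are hypothetical.

```python
import numpy as np
from numpy.polynomial import Chebyshev

def expensive_accel(r):
    """Stand-in for a costly gravitational-acceleration evaluation along one
    radial line (a smooth point-mass-like profile, not a polyhedral model)."""
    mu = 1.0e5
    return -mu / r**2 * (1.0 + 0.05 * np.sin(3.0 * r))

# Fit once: sample the expensive function at Chebyshev nodes on [a, b].
a, b, deg = 1.0, 5.0, 30
k = np.arange(deg + 1)
nodes = 0.5 * (a + b) + 0.5 * (b - a) * np.cos(np.pi * (k + 0.5) / (deg + 1))
cheb = Chebyshev.fit(nodes, expensive_accel(nodes), deg, domain=[a, b])

# Evaluate the cheap interpolant many times; compare against the direct model.
r = np.linspace(a, b, 100_000)
err = np.max(np.abs(cheb(r) - expensive_accel(r)))
print(f"max interpolation error on [{a}, {b}]: {err:.2e}")
```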

  20. A generative, probabilistic model of local protein structure.

    PubMed

    Boomsma, Wouter; Mardia, Kanti V; Taylor, Charles C; Ferkinghoff-Borg, Jesper; Krogh, Anders; Hamelryck, Thomas

    2008-07-01

    Despite significant progress in recent years, protein structure prediction maintains its status as one of the prime unsolved problems in computational biology. One of the key remaining challenges is an efficient probabilistic exploration of the structural space that correctly reflects the relative conformational stabilities. Here, we present a fully probabilistic, continuous model of local protein structure in atomic detail. The generative model makes efficient conformational sampling possible and provides a framework for the rigorous analysis of local sequence-structure correlations in the native state. Our method represents a significant theoretical and practical improvement over the widely used fragment assembly technique by avoiding the drawbacks associated with a discrete and nonprobabilistic approach.

  1. Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization

    DOE PAGES

    Xi, Maolong; Lu, Dan; Gui, Dongwei; ...

    2016-11-27

    Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and for making reasonable agricultural management decisions. However, calibration of agricultural-hydrological system models is challenging because of model complexity, the existence of strong parameter correlation, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate system of the actual RZWQM2, and then we calibrated the surrogate model using the global optimization algorithm, Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial with fast evaluation, it can be efficiently evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrated seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method can achieve a smaller objective function value and better calibration performance using fewer expensive RZWQM2 executions, which greatly improves computational efficiency.

  2. Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization

    NASA Astrophysics Data System (ADS)

    Xi, Maolong; Lu, Dan; Gui, Dongwei; Qi, Zhiming; Zhang, Guannan

    2017-01-01

    Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and for making reasonable agricultural management decisions. However, calibration of agricultural-hydrological system models is challenging because of model complexity, the existence of strong parameter correlation, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate system of the actual RZWQM2, and then we calibrated the surrogate model using the global optimization algorithm, Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial with fast evaluation, it can be efficiently evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrated seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method can achieve a smaller objective function value and better calibration performance using fewer expensive RZWQM2 executions, which greatly improves computational efficiency.

  3. Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xi, Maolong; Lu, Dan; Gui, Dongwei

    Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and for making reasonable agricultural management decisions. However, calibration of agricultural-hydrological system models is challenging because of model complexity, the existence of strong parameter correlation, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate system of the actual RZWQM2, and then we calibrated the surrogate model using the global optimization algorithm, Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial with fast evaluation, it can be efficiently evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrated seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method can achieve a smaller objective function value and better calibration performance using fewer expensive RZWQM2 executions, which greatly improves computational efficiency.
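
    Not RZWQM2 or the sparse-grid/QPSO implementation above; a sketch of the general calibration pattern the three records describe: run the expensive model on a small design, fit a cheap polynomial surrogate, and hand the surrogate (not the model) to a global optimizer. The "expensive model", design size, and surrogate basis here are toy assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(3)

def expensive_model(p):
    """Toy stand-in for an expensive simulator with two parameters."""
    x, y = p
    return (x - 1.2) ** 2 + 2.0 * (y + 0.5) ** 2 + 0.3 * np.sin(3 * x) * np.cos(2 * y)

# 1) A small design of "simulator" runs (the only expensive evaluations).
bounds = [(-3, 3), (-3, 3)]
X = rng.uniform([lo for lo, _ in bounds], [hi for _, hi in bounds], size=(60, 2))
y = np.array([expensive_model(p) for p in X])

# 2) Cheap quadratic-with-interaction surrogate fit by least squares.
def basis(P):
    x1, x2 = P[:, 0], P[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)
surrogate = lambda p: float(basis(np.atleast_2d(p)) @ coef)

# 3) Global optimization on the surrogate only (thousands of cheap evaluations).
res = differential_evolution(surrogate, bounds, seed=0)
print("surrogate optimum:", np.round(res.x, 3),
      "expensive model there:", round(expensive_model(res.x), 4))
```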

  4. Assessing the relationship between computational speed and precision: a case study comparing an interpreted versus compiled programming language using a stochastic simulation model in diabetes care.

    PubMed

    McEwan, Phil; Bergenheim, Klas; Yuan, Yong; Tetlow, Anthony P; Gordon, Jason P

    2010-01-01

    Simulation techniques are well suited to modelling diseases yet can be computationally intensive. This study explores the relationship between modelled effect size, statistical precision, and efficiency gains achieved using variance reduction and an executable programming language. A published simulation model designed to model a population with type 2 diabetes mellitus based on the UKPDS 68 outcomes equations was coded in both Visual Basic for Applications (VBA) and C++. Efficiency gains due to the programming language were evaluated, as was the impact of antithetic variates to reduce variance, using predicted QALYs over a 40-year time horizon. The use of C++ provided a 75- and 90-fold reduction in simulation run time when using mean and sampled input values, respectively. For a series of 50 one-way sensitivity analyses, this would yield a total run time of 2 minutes when using C++, compared with 155 minutes for VBA when using mean input values. The use of antithetic variates typically resulted in a 53% reduction in the number of simulation replications and run time required. When drawing all input values to the model from distributions, the use of C++ and variance reduction resulted in a 246-fold improvement in computation time compared with VBA - for which the evaluation of 50 scenarios would correspondingly require 3.8 hours (C++) and approximately 14.5 days (VBA). The choice of programming language used in an economic model, as well as the methods for improving precision of model output can have profound effects on computation time. When constructing complex models, more computationally efficient approaches such as C++ and variance reduction should be considered; concerns regarding model transparency using compiled languages are best addressed via thorough documentation and model validation.
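
    A minimal sketch of the antithetic-variates idea reported above (not the UKPDS 68 model): each standard-normal draw Z is paired with -Z, so the paired outcomes are negatively correlated for a monotone response and their mean has a smaller standard error than plain Monte Carlo with the same number of evaluations. The outcome function is a toy stand-in.

```python
import numpy as np

rng = np.random.default_rng(7)

def toy_outcome(z):
    """Toy stand-in for a simulated outcome driven by one standard-normal draw."""
    return 10.0 / (1.0 + np.exp(-0.8 * z))   # monotone in z, as antithetics require

n = 20_000
z = rng.standard_normal(n)

plain = toy_outcome(rng.standard_normal(2 * n))        # 2n independent draws
anti = 0.5 * (toy_outcome(z) + toy_outcome(-z))        # n antithetic pairs (2n evals)

print(f"plain MC   : mean {plain.mean():.4f}, "
      f"std err {plain.std(ddof=1) / np.sqrt(2 * n):.5f}")
print(f"antithetic : mean {anti.mean():.4f}, "
      f"std err {anti.std(ddof=1) / np.sqrt(n):.5f}")
```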

  5. Numerical Modeling and Testing of an Inductively-Driven and High-Energy Pulsed Plasma Thrusters

    NASA Technical Reports Server (NTRS)

    Parma, Brian

    2004-01-01

    Pulsed Plasma Thrusters (PPTs) are advanced electric space propulsion devices that are characterized by simplicity and robustness. They suffer, however, from low thrust efficiencies. This summer, two approaches to improve the thrust efficiency of PPTs will be investigated through both numerical modeling and experimental testing. The first approach, an inductively-driven PPT, uses a double-ignition circuit to fire two PPTs in succession. This effectively changes the PPT's configuration from an LRC circuit to an LR circuit. The LR circuit is expected to provide better impedance matching and to improve the efficiency of the energy transfer to the plasma. An added benefit of the LR circuit is an exponential decay of the current, whereas a traditional PPT's underdamped LRC circuit experiences the characteristic "ringing" of its current. The exponential decay may provide improved lifetime and sustained electromagnetic acceleration. The second approach, a high-energy PPT, is a traditional PPT with a variable size capacitor bank. This PPT will be simulated and tested at energy levels between 100 and 450 joules in order to investigate the relationship between efficiency and energy level. The Arbitrary Coordinate Hydromagnetic (MACH2) code is used. The MACH2 code, designed by the Center for Plasma Theory and Computation at the Air Force Research Laboratory, has been used to gain insight into a variety of plasma problems, including electric plasma thrusters. The goals for this summer include numerical predictions of performance for both the inductively-driven PPT and the high-energy PPT, experimental validation of the numerical models, and numerical optimization of the designs. These goals will be met through numerical and experimental investigation of the PPTs' current waveforms, mass loss (or ablation), and impulse bit characteristics.
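
    Not the MACH2 simulations described above; a small analytic sketch, with assumed circuit values, contrasting the oscillatory ("ringing") current of a conventional underdamped LRC pulse circuit with the monotone exponential decay of an LR discharge.

```python
import numpy as np

# Illustrative circuit values (assumptions, not measured thruster parameters).
L = 100e-9      # H    total loop inductance
R = 30e-3       # ohm  total loop resistance
C = 20e-6       # F    capacitor bank
V0 = 1500.0     # V    initial capacitor voltage
t = np.linspace(0.0, 20e-6, 6)

# Conventional PPT, underdamped series LRC: oscillatory current with reversals.
a = R / (2.0 * L)
w_d = np.sqrt(1.0 / (L * C) - a**2)
i_lrc = V0 / (w_d * L) * np.exp(-a * t) * np.sin(w_d * t)

# LR discharge starting from a comparable peak current: monotone exponential
# decay with no current reversal.
I0 = V0 * np.sqrt(C / L)
i_lr = I0 * np.exp(-R * t / L)

for ti, ia, ib in zip(t * 1e6, i_lrc / 1e3, i_lr / 1e3):
    print(f"t = {ti:5.1f} us   LRC: {ia:8.1f} kA   LR: {ib:8.1f} kA")
```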

  6. Synthetic biology as it relates to CAM photosynthesis: challenges and opportunities.

    PubMed

    DePaoli, Henrique C; Borland, Anne M; Tuskan, Gerald A; Cushman, John C; Yang, Xiaohan

    2014-07-01

    To meet future food and energy security needs, which are amplified by increasing population growth and reduced natural resource availability, metabolic engineering efforts have moved from manipulating single genes/proteins to introducing multiple genes and novel pathways to improve photosynthetic efficiency in a more comprehensive manner. Biochemical carbon-concentrating mechanisms such as crassulacean acid metabolism (CAM), which improves photosynthetic, water-use, and possibly nutrient-use efficiency, represent a strategic target for synthetic biology to engineer more productive C3 crops for a warmer and drier world. One key challenge for introducing multigene traits like CAM onto a background of C3 photosynthesis is to gain a better understanding of the dynamic spatial and temporal regulatory events that underpin photosynthetic metabolism. With the aid of systems and computational biology, vast amounts of experimental data encompassing transcriptomics, proteomics, and metabolomics can be related in a network to create dynamic models. Such models can undergo simulations to discover key regulatory elements in metabolism and suggest strategic substitution or augmentation by synthetic components to improve photosynthetic performance and water-use efficiency in C3 crops. Another key challenge in the application of synthetic biology to photosynthesis research is to develop efficient systems for multigene assembly and stacking. Here, we review recent progress in computational modelling as applied to plant photosynthesis, with attention to the requirements for CAM, and recent advances in synthetic biology tool development. Lastly, we discuss possible options for multigene pathway construction in plants with an emphasis on CAM-into-C3 engineering. © The Author 2014. Published by Oxford University Press on behalf of the Society for Experimental Biology. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  7. Learning classification models with soft-label information.

    PubMed

    Nguyen, Quang; Valizadegan, Hamed; Hauskrecht, Milos

    2014-01-01

    Learning of classification models in medicine often relies on data labeled by a human expert. Since labeling of clinical data may be time-consuming, finding ways of alleviating the labeling costs is critical for our ability to automatically learn such models. In this paper we propose a new machine learning approach that is able to learn improved binary classification models more efficiently by refining the binary class information in the training phase with soft labels that reflect how strongly the human expert feels about the original class labels. Two types of methods that can learn improved binary classification models from soft labels are proposed. The first relies on probabilistic/numeric labels, the other on ordinal categorical labels. We study and demonstrate the benefits of these methods for learning an alerting model for heparin induced thrombocytopenia. The experiments are conducted on the data of 377 patient instances labeled by three different human experts. The methods are compared using the area under the receiver operating characteristic curve (AUC) score. Our AUC results show that the new approach is capable of learning classification models more efficiently compared to traditional learning methods. The improvement in AUC is most remarkable when the number of examples we learn from is small. A new classification learning framework that lets us learn from auxiliary soft-label information provided by a human expert is a promising new direction for learning classification models from expert labels, reducing the time and cost needed to label data.
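
    Not the authors' method or clinical data; a sketch of one simple way to use soft labels: minimize cross-entropy against probabilistic targets, which reduces to ordinary logistic regression when the labels are hard 0/1 values. The synthetic data and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic data: true risk depends on two features; the "expert" supplies a
# noisy probabilistic (soft) label instead of a hard 0/1 class.
n, d = 300, 2
X = rng.normal(size=(n, d))
true_p = sigmoid(X @ np.array([1.5, -2.0]) + 0.3)
soft_y = np.clip(true_p + rng.normal(scale=0.1, size=n), 0.01, 0.99)
hard_y = rng.binomial(1, true_p)

def fit_logistic(X, y, lr=0.1, epochs=500):
    """Gradient descent on cross-entropy; y may be hard {0,1} or soft [0,1]."""
    Xb = np.c_[np.ones(len(X)), X]
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        grad = Xb.T @ (sigmoid(Xb @ w) - y) / len(y)
        w -= lr * grad
    return w

print("weights from soft labels:", np.round(fit_logistic(X, soft_y), 2))
print("weights from hard labels:", np.round(fit_logistic(X, hard_y), 2))
```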

  8. Improved numerical methods for turbulent viscous flows aerothermal modeling program, phase 2

    NASA Technical Reports Server (NTRS)

    Karki, K. C.; Patankar, S. V.; Runchal, A. K.; Mongia, H. C.

    1988-01-01

    The details of a study to develop accurate and efficient numerical schemes to predict complex flows are described. In this program, several discretization schemes were evaluated using simple test cases. This assessment led to the selection of three schemes for an in-depth evaluation based on two-dimensional flows. The scheme with the superior overall performance was incorporated in a computer program for three-dimensional flows. To improve the computational efficiency, the selected discretization scheme was combined with a direct solution approach in which the fluid flow equations are solved simultaneously rather than sequentially.
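
    The abstract contrasts the usual sequential (segregated) solution of the flow equations with a direct, simultaneous solve. The toy sketch below is only an illustration of that distinction on a small block-coupled linear system, not the program described in the report; the matrices and sizes are arbitrary.

```python
import numpy as np

# Toy coupled system [[A, B], [C, D]] [u, p]^T = [f, g]^T standing in for
# the discretized coupling between flow variables (illustrative only).
rng = np.random.default_rng(1)
n = 4
A = np.eye(n) * 4 + rng.normal(scale=0.1, size=(n, n))
D = np.eye(n) * 4 + rng.normal(scale=0.1, size=(n, n))
B = rng.normal(scale=0.5, size=(n, n))
C = rng.normal(scale=0.5, size=(n, n))
f, g = rng.normal(size=n), rng.normal(size=n)

# Direct (simultaneous) solve of the full block system.
K = np.block([[A, B], [C, D]])
direct = np.linalg.solve(K, np.concatenate([f, g]))

# Sequential (segregated) solve: iterate between the two sub-systems.
u, p = np.zeros(n), np.zeros(n)
for it in range(200):
    u_new = np.linalg.solve(A, f - B @ p)
    p_new = np.linalg.solve(D, g - C @ u_new)
    if max(np.max(np.abs(u_new - u)), np.max(np.abs(p_new - p))) < 1e-10:
        break
    u, p = u_new, p_new

print("iterations needed by the segregated solve:", it + 1)
print("max difference from direct solve:", np.max(np.abs(np.concatenate([u, p]) - direct)))
```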

  9. Care coordination and the essential role of the nurse.

    PubMed

    Cropley, Stacey; Sandrs, Ellare Duis

    2013-01-01

    Quality improvement and cost control rely on effective coordination of patient care. Registered nurses (RNs) across the continuum of care play an essential role in care coordination. Greater health care efficiencies can be realized through coordination of care centered on the needs and preferences of patients and their families. Professional nursing links these approaches, promoting quality, safety, and efficiency in care, resulting in improved health care outcomes that are consistent with nursing's holistic, patient-centered framework of care. This model for RN care coordination provides a guideline for nurses in direct care as well as those in highly specialized care coordination positions.

  10. Passage relevance models for genomics search.

    PubMed

    Urbain, Jay; Frieder, Ophir; Goharian, Nazli

    2009-03-19

    We present a passage relevance model for integrating syntactic and semantic evidence of biomedical concepts and topics using a probabilistic graphical model. Component models of topics, concepts, terms, and document are represented as potential functions within a Markov Random Field. The probability of a passage being relevant to a biologist's information need is represented as the joint distribution across all potential functions. Relevance model feedback of top ranked passages is used to improve distributional estimates of query concepts and topics in context, and a dimensional indexing strategy is used for efficient aggregation of concept and term statistics. By integrating multiple sources of evidence including dependencies between topics, concepts, and terms, we seek to improve genomics literature passage retrieval precision. Using this model, we are able to demonstrate statistically significant improvements in retrieval precision using a large genomics literature corpus.
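
    In a Markov Random Field of this kind, the joint relevance of a passage is proportional to the product of the component potentials, i.e. the sum of their logarithms. The tiny sketch below illustrates that combination with placeholder potential values; the weights and values are assumptions, not the paper's learned parameters.

```python
import math

def passage_relevance(potentials, weights=None):
    """Combine component potentials (term, concept, topic, document) into a
    single passage score: the unnormalized MRF joint is the product of the
    potentials, so the score is the (weighted) sum of log-potentials."""
    weights = weights or {name: 1.0 for name in potentials}
    return sum(weights[name] * math.log(value) for name, value in potentials.items())

# Hypothetical potential values for one candidate passage.
score = passage_relevance({
    "term": 0.8,       # term-level match with the query
    "concept": 0.6,    # biomedical concept overlap
    "topic": 0.7,      # topic-model agreement
    "document": 0.9,   # document-level prior
})
print(round(score, 3))
```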

  11. Prospect of the high efficiency for the VEST (Via-hole Etching for the Separation of Thin films) cell

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deguchi, M.; Kawama, Y.; Matsuno, Y.

    1994-12-31

    The optimum design of the via-holes for the VEST cell was studied. Using a simple model, fill factors of the VEST cell were calculated. For the via-hole distribution pattern, a square grid was found to be the most suitable in terms of cell performance and ease of electrode design. It was found that a fill factor large enough (> 0.79) for high efficiency can be obtained. A fabricated test cell showed an efficiency of 14.4%, and further improvement to efficiencies above 18% is expected to be achievable.

  12. Determinants of healthcare system's efficiency in OECD countries.

    PubMed

    Hadad, Sharon; Hadad, Yossi; Simon-Tuval, Tzahit

    2013-04-01

    Firstly, to compare healthcare systems' efficiency (HSE) using two models: one incorporating mostly inputs that are considered to be within the discretionary control of the healthcare system (i.e., physicians' density, inpatient bed density, and health expenditure), and another including mostly inputs beyond healthcare systems' control (i.e., GDP, fruit and vegetables consumption, and health expenditure). Secondly, to analyze whether institutional arrangements, population behavior, and socioeconomic or environmental determinants are associated with HSE. Data envelopment analysis (DEA) was utilized to calculate OECD countries' HSE. Life expectancy and infant survival rate were considered as outputs in both models. Healthcare systems' rankings according to the super-efficiency and the cross-efficiency ranking methods were used to analyze determinants associated with efficiency. (1) Healthcare systems in nine countries with large and stable economies were defined as efficient in model I, but were found to be inefficient in model II; (2) gatekeeping and the presence of multiple insurers were associated with lower efficiency; and (3) the association between socioeconomic and environmental indicators and HSE was found to be ambiguous. Countries striving to improve their HSE should aim to impact population behavior and welfare rather than only ensure adequate medical care. In addition, they may consider avoiding specific institutional arrangements, namely gatekeeping and the presence of multiple insurers. Finally, the ambiguous association found between socioeconomic and environmental indicators and a country's HSE necessitates caution when interpreting different ranking techniques in a cross-country efficiency evaluation and needs further exploration.
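
    As background for the method, DEA scores each decision-making unit by the ratio of weighted outputs to weighted inputs, which reduces to a small linear program per unit. A minimal input-oriented CCR sketch could look like the following; the country numbers are purely illustrative and are not the study's data.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency via the envelopment LP.
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs). Returns theta per DMU."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        # Decision variables: [theta, lambda_1, ..., lambda_n]
        c = np.r_[1.0, np.zeros(n)]
        # Inputs: sum_j lambda_j * x_ij - theta * x_io <= 0
        A_in = np.hstack([-X[o].reshape(m, 1), X.T])
        # Outputs: -sum_j lambda_j * y_rj <= -y_ro
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[o]],
                      bounds=[(None, None)] + [(0, None)] * n,
                      method="highs")
        scores.append(res.x[0])
    return np.array(scores)

# Hypothetical country data: inputs (physician density, bed density, expenditure)
# and outputs (life expectancy, infant survival rate) -- illustrative only.
X = np.array([[3.2, 4.1, 9.0], [2.5, 3.0, 8.0], [4.0, 6.5, 11.0], [2.2, 2.8, 6.5]])
Y = np.array([[81.0, 0.996], [80.5, 0.995], [82.0, 0.997], [79.0, 0.993]])
print(np.round(dea_ccr_input(X, Y), 3))
```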

  13. Hidden Efficiencies: Making Completion of the Pediatric Vaccine Schedule More Efficient for Physicians

    PubMed Central

    Ciarametaro, Mike; Bradshaw, Steven E.; Guiglotto, Jillian; Hahn, Beth; Meier, Genevieve

    2015-01-01

    The objective of this work is to demonstrate the potential time and labor savings that may result from increased use of combination vaccinations. In the study (GSK study identifier: HO-12-4735), a model was developed to evaluate the efficiency of the pediatric vaccine schedule using time-and-motion studies. The model considered vaccination time and the associated labor costs; vaccine acquisition costs and any efficacy or safety differences between formulations were not considered. The model inputs were supported by a targeted literature review, and the reference year for the model was 2012. The most efficient vaccination program using currently available vaccines was predicted to reduce costs through a combination of fewer injections (62%) and less time per vaccination (38%). The most versus the least efficient vaccine program was predicted to result in a 47% reduction in vaccination time and a 42% reduction in labor and supply costs. The administration cost saving with the most versus the least efficient program was estimated to be nearly US $45 million. If hypothetical 6- or 7-valent vaccines are developed by adding additional antigens (pneumococcal conjugate vaccine and Haemophilus influenzae type b) to the most efficient 5-valent vaccine, the savings are predicted to be even greater. Combination vaccinations reduce the time burden of the childhood immunization schedule and could create the potential to improve vaccination uptake and compliance as a result of fewer required injections. PMID:25634165
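
    The arithmetic behind such a time-and-motion cost model is simple: number of injections times per-injection time and labor rate, plus supplies. The sketch below uses purely illustrative inputs, not the study's values.

```python
def schedule_cost(n_injections, minutes_per_injection, labor_rate_per_hour,
                  supply_cost_per_injection):
    """Administration cost of a vaccination schedule: labor time plus supplies.
    Vaccine acquisition cost is deliberately excluded, as in the abstract."""
    labor = n_injections * minutes_per_injection / 60.0 * labor_rate_per_hour
    supplies = n_injections * supply_cost_per_injection
    return labor + supplies

# Illustrative (non-study) inputs: a combination-vaccine schedule needs fewer
# injections and slightly less time per injection.
least_efficient = schedule_cost(n_injections=26, minutes_per_injection=3.0,
                                labor_rate_per_hour=35.0, supply_cost_per_injection=0.75)
most_efficient = schedule_cost(n_injections=15, minutes_per_injection=2.5,
                               labor_rate_per_hour=35.0, supply_cost_per_injection=0.75)
print(f"saving per child: ${least_efficient - most_efficient:.2f}")
```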

  14. A Decision Mixture Model-Based Method for Inshore Ship Detection Using High-Resolution Remote Sensing Images

    PubMed Central

    Bi, Fukun; Chen, Jing; Zhuang, Yin; Bian, Mingming; Zhang, Qingjun

    2017-01-01

    With the rapid development of optical remote sensing satellites, ship detection and identification based on large-scale remote sensing images has become a significant maritime research topic. Compared with traditional ocean-going vessel detection, inshore ship detection has received increasing attention in harbor dynamic surveillance and maritime management. However, because the harbor environment is complex and the gray-level and texture features of docked ships are difficult to distinguish from those of the adjoining dock regions, most popular detection methods are limited in both computational efficiency and detection accuracy. In this paper, a novel hierarchical method that combines an efficient candidate scanning strategy and an accurate candidate identification mixture model is presented for inshore ship detection in complex harbor areas. First, in the candidate region extraction phase, an omnidirectional intersected two-dimension scanning (OITDS) strategy is designed to rapidly extract candidate regions from the land-water segmented images. In the candidate region identification phase, a decision mixture model (DMM) is proposed to identify real ships from candidate objects. Specifically, to improve the robustness regarding the diversity of ships, a deformable part model (DPM) was employed to train a key part sub-model and a whole ship sub-model. Furthermore, to improve the identification accuracy, a surrounding correlation context sub-model is built. Finally, to increase the accuracy of candidate region identification, these three sub-models are integrated into the proposed DMM. Experiments were performed on numerous large-scale harbor remote sensing images, and the results showed that the proposed method has high detection accuracy and rapid computational efficiency. PMID:28640236
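
    The abstract does not state the rule by which the three sub-models are combined; the sketch below assumes a simple weighted fusion of the whole-ship, key-part, and context scores purely for illustration.

```python
def dmm_score(whole_ship, key_part, context, weights=(0.5, 0.3, 0.2)):
    """Fuse the three sub-model scores of the decision mixture model (DMM).
    The fusion rule is not given in the abstract; a weighted sum of the
    whole-ship DPM score, key-part DPM score, and surrounding-correlation
    context score is assumed here for illustration."""
    w1, w2, w3 = weights
    return w1 * whole_ship + w2 * key_part + w3 * context

def classify_candidates(candidates, threshold=0.5):
    """candidates: list of (id, whole_ship, key_part, context) score tuples."""
    return [cid for cid, ws, kp, ctx in candidates
            if dmm_score(ws, kp, ctx) >= threshold]

# Hypothetical candidate regions produced by the OITDS scanning stage.
candidates = [("c1", 0.9, 0.8, 0.7), ("c2", 0.4, 0.3, 0.9), ("c3", 0.2, 0.1, 0.2)]
print(classify_candidates(candidates))   # -> ['c1']
```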

  16. Medium- and Long-term Prediction of LOD Change by the Leap-step Autoregressive Model

    NASA Astrophysics Data System (ADS)

    Wang, Qijie

    2015-08-01

    The accuracy of medium- and long-term prediction of length-of-day (LOD) change based on the combined least-squares and autoregressive (LS+AR) model deteriorates gradually. The leap-step autoregressive (LSAR) model can significantly reduce the edge effect of the observation sequence. In particular, the LSAR model greatly improves the resolution of the signal's low-frequency components and can therefore improve prediction efficiency. In this work, the LSAR model is used to forecast LOD change. The LOD series from EOP 08 C04 provided by the IERS is modeled by both the LSAR and AR models, and the results of the two models are analyzed and compared. When the prediction length is between 10 and 30 days, the accuracy improvement is less than 10%. When the prediction length exceeds 30 days, the accuracy improves markedly, with the maximum gain being around 19%. The results show that the LSAR model has higher prediction accuracy and stability in medium- and long-term prediction.
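
    A hedged sketch of the idea, assuming the leap-step AR simply replaces consecutive lags with lags at multiples of a step s and fits the coefficients by least squares; the series below is synthetic, not the EOP 08 C04 data.

```python
import numpy as np

def fit_ar(series, order, step=1):
    """Least-squares fit of an autoregressive model using lags at multiples
    of `step`; step=1 is the ordinary AR model, step>1 a leap-step variant."""
    lags = [step * (k + 1) for k in range(order)]
    X = np.array([series[t - np.array(lags)] for t in range(max(lags), len(series))])
    y = series[max(lags):]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, lags

def predict_next(series, coef, lags):
    return float(series[-np.array(lags)] @ coef)

# Synthetic stand-in for an LOD-like series (annual + semi-annual terms + noise).
t = np.arange(2000)
rng = np.random.default_rng(3)
lod = (0.5 * np.sin(2 * np.pi * t / 365.25) + 0.2 * np.sin(4 * np.pi * t / 365.25)
       + 0.05 * rng.normal(size=t.size))

for step in (1, 5):  # ordinary AR vs a leap-step AR with a 5-day step
    coef, lags = fit_ar(lod[:-1], order=10, step=step)
    pred = predict_next(lod[:-1], coef, lags)
    print(f"step={step}: one-step prediction error = {abs(pred - lod[-1]):.4f}")
```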

  17. An Information System Development Method Connecting Business Process Modeling and its Experimental Evaluation

    NASA Astrophysics Data System (ADS)

    Okawa, Tsutomu; Kaminishi, Tsukasa; Kojima, Yoshiyuki; Hirabayashi, Syuichi; Koizumi, Hisao

    Business process modeling (BPM) is gaining attention as a means of analyzing and improving business processes. BPM analyzes the current business process as an AS-IS model and solves problems to improve the current business; moreover, it aims to create a value-producing business process as a TO-BE model. However, techniques that seamlessly connect the business process improvements obtained through BPM to the implementation of the information system are rarely reported. If the business model obtained by BPM is converted into UML and the implementation is carried out with UML techniques, an improvement in the efficiency of information system implementation can be expected. In this paper, we describe a system development method that converts the process model obtained by BPM into UML; the method is evaluated by modeling a prototype of a parts procurement system. In the evaluation, it is compared with the case where the system is implemented by the conventional UML technique without going via BPM.

  18. Building energy modeling for green architecture and intelligent dashboard applications

    NASA Astrophysics Data System (ADS)

    DeBlois, Justin

    Buildings are responsible for 40% of the carbon emissions in the United States. Energy efficiency in this sector is key to reducing overall greenhouse gas emissions. This work studied the passive technique called the roof solar chimney for reducing the cooling load in homes architecturally. Three models of the chimney were created: a zonal building energy model, a computational fluid dynamics model, and a numerical analytic model. The study estimated the error introduced to the building energy model (BEM) through key assumptions, and then used a sensitivity analysis to examine the impact on the model outputs. The conclusion was that the error in the building energy model is small enough to use it for building simulation reliably. Further studies simulated the roof solar chimney in a whole building, integrated into one side of the roof. Comparisons were made between high and low efficiency constructions, and three ventilation strategies. The results showed that in four US climates, the roof solar chimney results in significant cooling load energy savings of up to 90%. After developing this new method for the small-scale representation of a passive architecture technique in BEM, the study expanded the scope to address a fundamental issue in modeling: the representation of uncertainty in, and improvement of, occupant behavior. This is believed to be one of the weakest links in both accurate modeling and proper, energy-efficient building operation. A calibrated model of the Mascaro Center for Sustainable Innovation's LEED Gold, 3,400 m² building was created. Then algorithms were developed for integration into the building's dashboard application that show the occupant the energy savings for a variety of behaviors in real time. An approach using neural networks to act on real-time building automation system data was found to be the most accurate and efficient way to predict the current energy savings for each scenario. A stochastic study examined the impact of the representation of unpredictable occupancy patterns on model results. Combined, these studies inform modelers and researchers on frameworks for simulating holistically designed architecture and improving the interaction between models and building occupants, in residential and commercial settings.

  19. Elimination of trait blocks from multiple trait mixed model equations with singular (Co)variance parameter matrices

    USDA-ARS?s Scientific Manuscript database

    Transformations to multiple trait mixed model equations (MME) which are intended to improve computational efficiency in best linear unbiased prediction (BLUP) and restricted maximum likelihood (REML) are described. It is shown that traits that are expected or estimated to have zero residual variance...

  20. Hydrothermal germination models: Improving experimental efficiency by limiting data collection to the relevant hydrothermal range

    USDA-ARS?s Scientific Manuscript database

    Hydrothermal models used to predict germination response in the field are usually parameterized with data from laboratory experiments that examine the full range of germination response to temperature and water potential. Inclusion of low water potential and high and low-temperature treatments, how...

  1. Development and Optimized Design of Propeller Pump System & Structure with VFD in Low-head Pumping Station

    NASA Astrophysics Data System (ADS)

    Rentian, Zhang; Honggeng, Zhu; Arnold, Jaap; Linbi, Yao

    2010-06-01

    Compared with vertically installed pumps, propeller (bulb tubular) pump systems can achieve higher hydraulic efficiencies, which makes them particularly suitable for low-head pumping stations. More than four propeller pumping stations are being, or will be, built in the first stage of the South-to-North Water Diversion Project in China, diverting water from the Yangtze River to the northern part of China to alleviate water-shortage problems and develop the economy. New propeller pump structures have been developed for specified pumping stations in Jiangsu and Shandong Provinces, and Variable Frequency Drives (VFDs) are used in those pumping stations to regulate operating conditions. Based on the Navier-Stokes equations and the standard k-ε turbulence model, numerical simulations of the flow field and performance predictions for the propeller pump system were conducted on the platform of the commercial software CFX using the SIMPLEC algorithm. Through optimal design of the bulb dimensions and diffuser channel shape, the hydraulic system efficiency was improved markedly. Furthermore, the propeller pump structures have been optimized for the introduction of conventional as well as permanent-magnet motors. To improve the hydraulic efficiency of the pumping systems, both the pump discharge and the motor diameter were optimized. If a conventional motor is used, the diameter of the pump casing has to be increased to accommodate the motor installed inside. If a permanent-magnet motor is used, the diameter of the motor casing can be decreased without reducing its output power; the flow cross-sectional area is thereby enlarged and the water velocity decreased, which reduces the hydraulic loss of the discharge channel and raises the pumping system efficiency. Witness model tests conducted after the numerical optimization on specific propeller pump systems indicate that the model system hydraulic efficiencies can be improved by 0.5%-3.7% under different specified operating conditions.
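
    For context on the efficiency figures quoted above, the sketch below computes a pumping-system hydraulic efficiency from the standard definition (useful hydraulic power over shaft power); the operating-point numbers are illustrative and are not taken from the study.

```python
RHO_WATER = 1000.0   # kg/m^3
G = 9.81             # m/s^2

def pumping_system_efficiency(discharge_m3s, net_head_m, shaft_power_w):
    """Hydraulic efficiency = useful hydraulic power (rho * g * Q * H)
    divided by the shaft power supplied to the pump."""
    hydraulic_power = RHO_WATER * G * discharge_m3s * net_head_m
    return hydraulic_power / shaft_power_w

# Illustrative low-head bulb-tubular operating point (not the study's data).
eta = pumping_system_efficiency(discharge_m3s=30.0, net_head_m=2.5, shaft_power_w=1.0e6)
print(f"system efficiency: {eta:.1%}")   # about 73.6%
```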

  2. Satellite SAR geocoding with refined RPC model

    NASA Astrophysics Data System (ADS)

    Zhang, Lu; Balz, Timo; Liao, Mingsheng

    2012-04-01

    Recent studies have proved that the Rational Polynomial Camera (RPC) model is able to act as a reliable replacement of the rigorous Range-Doppler (RD) model for the geometric processing of satellite SAR datasets. But its capability in absolute geolocation of SAR images has not been evaluated quantitatively. Therefore, in this article the problems of error analysis and refinement of SAR RPC model are primarily investigated to improve the absolute accuracy of SAR geolocation. Range propagation delay and azimuth timing error are identified as two major error sources for SAR geolocation. An approach based on SAR image simulation and real-to-simulated image matching is developed to estimate and correct these two errors. Afterwards a refined RPC model can be built from the error-corrected RD model and then used in satellite SAR geocoding. Three experiments with different settings are designed and conducted to comprehensively evaluate the accuracies of SAR geolocation with both ordinary and refined RPC models. All the experimental results demonstrate that with RPC model refinement the absolute location accuracies of geocoded SAR images can be improved significantly, particularly in Easting direction. In another experiment the computation efficiencies of SAR geocoding with both RD and RPC models are compared quantitatively. The results show that by using the RPC model such efficiency can be remarkably improved by at least 16 times. In addition the problem of DEM data selection for SAR image simulation in RPC model refinement is studied by a comparative experiment. The results reveal that the best choice should be using the proper DEM datasets of spatial resolution comparable to that of the SAR images.
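
    As an illustration of the kind of model being refined, the sketch below evaluates a rational-polynomial mapping from ground to image coordinates and applies a constant bias correction of the sort estimated by real-to-simulated image matching. The monomial basis is reduced and the coefficients are placeholders, not real sensor RPCs.

```python
import numpy as np

def rpc_coordinate(num_coef, den_coef, P, L, H):
    """Evaluate one normalized image coordinate of a Rational Polynomial
    Camera model as a ratio of two polynomials in the normalized ground
    coordinates (P=lat, L=lon, H=height). A reduced monomial basis is used
    here for illustration; operational RPCs use 20-term cubic polynomials."""
    basis = np.array([1.0, L, P, H, L * P, L * H, P * H, L * L, P * P, H * H])
    return float(num_coef @ basis) / float(den_coef @ basis)

# Placeholder coefficients (not taken from any real sensor metadata).
num = np.array([0.01, 1.02, -0.98, 0.002, 1e-4, -2e-4, 5e-5, 3e-4, -1e-4, 2e-5])
den = np.array([1.0, 1e-3, -5e-4, 2e-5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])

# Refinement in the spirit of the abstract: estimate a constant bias
# (e.g. a range delay / azimuth timing correction) and subtract it.
row_raw = rpc_coordinate(num, den, P=0.2, L=-0.1, H=0.05)
row_bias = 0.004   # hypothetical offset found by real-to-simulated matching
print(row_raw, row_raw - row_bias)
```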

  3. Electromagnetomechanical elastodynamic model for Lamb wave damage quantification in composites

    NASA Astrophysics Data System (ADS)

    Borkowski, Luke; Chattopadhyay, Aditi

    2014-03-01

    Physics-based wave propagation computational models play a key role in structural health monitoring (SHM) and the development of improved damage quantification methodologies. Guided waves (GWs), such as Lamb waves, provide the capability to monitor large plate-like aerospace structures with limited actuators and sensors and are sensitive to small scale damage; however due to the complex nature of GWs, accurate and efficient computation tools are necessary to investigate the mechanisms responsible for dispersion, coupling, and interaction with damage. In this paper, the local interaction simulation approach (LISA) coupled with the sharp interface model (SIM) solution methodology is used to solve the fully coupled electro-magneto-mechanical elastodynamic equations for the piezoelectric and piezomagnetic actuation and sensing of GWs in fiber reinforced composite material systems. The final framework provides the full three-dimensional displacement as well as electrical and magnetic potential fields for arbitrary plate and transducer geometries and excitation waveform and frequency. The model is validated experimentally and proven computationally efficient for a laminated composite plate. Studies are performed with surface bonded piezoelectric and embedded piezomagnetic sensors to gain insight into the physics of experimental techniques used for SHM. The symmetric collocation of piezoelectric actuators is modeled to demonstrate mode suppression in laminated composites for the purpose of damage detection. The effect of delamination and damage (i.e., matrix cracking) on the GW propagation is demonstrated and quantified. The developed model provides a valuable tool for the improvement of SHM techniques due to its proven accuracy and computational efficiency.

  4. Research Needs and Impacts in Predictive Simulation for Internal Combustion Engines (PreSICE)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eckerle, Wayne; Rutland, Chris; Rohlfing, Eric

    This report is based on a SC/EERE Workshop to Identify Research Needs and Impacts in Predictive Simulation for Internal Combustion Engines (PreSICE), held March 3, 2011, to determine strategic focus areas that will accelerate innovation in engine design to meet national goals in transportation efficiency. The U.S. has reached a pivotal moment when pressures of energy security, climate change, and economic competitiveness converge. Oil prices remain volatile and have exceeded $100 per barrel twice in five years. At these prices, the U.S. spends $1 billion per day on imported oil to meet our energy demands. Because the transportation sector accounts for two-thirds of our petroleum use, energy security is deeply entangled with our transportation needs. At the same time, transportation produces one-quarter of the nation’s carbon dioxide output. Increasing the efficiency of internal combustion engines is a technologically proven and cost-effective approach to dramatically improving the fuel economy of the nation’s fleet of vehicles in the near- to mid-term, with the corresponding benefits of reducing our dependence on foreign oil and reducing carbon emissions. Because of their relatively low cost, high performance, and ability to utilize renewable fuels, internal combustion engines—including those in hybrid vehicles—will continue to be critical to our transportation infrastructure for decades. Achievable advances in engine technology can improve the fuel economy of automobiles by over 50% and trucks by over 30%. Achieving these goals will require the transportation sector to compress its product development cycle for cleaner, more efficient engine technologies by 50% while simultaneously exploring innovative design space. Concurrently, fuels will also be evolving, adding another layer of complexity and further highlighting the need for efficient product development cycles. Current design processes, using “build and test” prototype engineering, will not suffice. Current market penetration of new engine technologies is simply too slow—it must be dramatically accelerated. These challenges present a unique opportunity to marshal U.S. leadership in science-based simulation to develop predictive computational design tools for use by the transportation industry. The use of predictive simulation tools for enhancing combustion engine performance will shrink engine development timescales, accelerate time to market, and reduce development costs, while ensuring the timely achievement of energy security and emissions targets and enhancing U.S. industrial competitiveness. In 2007 Cummins achieved a milestone in engine design by bringing a diesel engine to market solely with computer modeling and analysis tools. The only testing was after the fact to confirm performance. Cummins achieved a reduction in development time and cost. As important, they realized a more robust design, improved fuel economy, and met all environmental and customer constraints. This important first step demonstrates the potential for computational engine design. But, the daunting complexity of engine combustion and the revolutionary increases in efficiency needed require the development of simulation codes and computation platforms far more advanced than those available today. Based on these needs, a Workshop to Identify Research Needs and Impacts in Predictive Simulation for Internal Combustion Engines (PreSICE) convened over 60 U.S. leaders in the engine combustion field from industry, academia, and national laboratories to focus on two critical areas of advanced simulation, as identified by the U.S. automotive and engine industries. First, modern engines require precise control of the injection of a broad variety of fuels that is far more subtle than achievable to date and that can be obtained only through predictive modeling and simulation. Second, the simulation, understanding, and control of these stochastic in-cylinder combustion processes lie on the critical path to realizing more efficient engines with greater power density. Fuel sprays set the initial conditions for combustion in essentially all future transportation engines; yet today designers primarily use empirical methods that limit the efficiency achievable. Three primary spray topics were identified as focus areas in the workshop: the fuel delivery system, which includes fuel manifolds and internal injector flow; the multi-phase fuel–air mixing in the combustion chamber of the engine; and the heat transfer and fluid interactions with cylinder walls. Current understanding and modeling capability of stochastic processes in engines remains limited and prevents designers from achieving significantly higher fuel economy. To improve this situation, the workshop participants identified three focus areas for stochastic processes: improve fundamental understanding that will help to establish and characterize the physical causes of stochastic events; develop physics-based simulation models that are accurate and sensitive enough to capture performance-limiting variability; and quantify and manage uncertainty in model parameters and boundary conditions. Improved models and understanding in these areas will allow designers to develop engines with reduced design margins and that operate reliably in more efficient regimes. All of these areas require improved basic understanding, high-fidelity model development, and rigorous model validation. These advances will greatly reduce the uncertainties in current models and improve understanding of sprays and fuel–air mixture preparation that limit the investigation and development of advanced combustion technologies. The two strategic focus areas have distinctive characteristics but are inherently coupled. Coordinated activities in basic experiments, fundamental simulations, and engineering-level model development and validation can be used to successfully address all of the topics identified in the PreSICE workshop. The outcome will be: new and deeper understanding of the relevant fundamental physical and chemical processes in advanced combustion technologies; implementation of this understanding into models and simulation tools appropriate for both exploration and design; and sufficient validation with uncertainty quantification to provide confidence in the simulation results. These outcomes will provide the design tools for industry to reduce development time by up to 30% and improve engine efficiencies by 30% to 50%. The improved efficiencies applied to the national mix of transportation applications have the potential to save over 5 million barrels of oil per day, a current cost savings of $500 million per day.

  5. Evaluation on the efficiency of the construction sector companies in Malaysia with data envelopment analysis model

    NASA Astrophysics Data System (ADS)

    Weng Hoe, Lam; Jinn, Lim Shun; Weng Siew, Lam; Hai, Tey Kim

    2018-04-01

    In Malaysia, the construction sector is an essential part of driving the development of the Malaysian economy. The construction industry is an economic investment, and its relationship with economic development is well established. However, the evaluation of the efficiency of the construction sector companies listed on the Kuala Lumpur Stock Exchange (KLSE) with the Data Envelopment Analysis (DEA) model has not been actively studied by past researchers. Hence, the purpose of this study is to examine the financial performance of the listed construction sector companies in Malaysia in the year 2015. The results of this study show that the efficiency of construction sector companies can be obtained using the DEA model through ratio analysis, in which efficiency is defined as the ratio of total outputs to total inputs. This study is significant because the inefficient companies are identified for potential improvement.

  6. Energy-Efficiency Retrofits in Small-Scale Multifamily Rental Housing: A Business Model

    NASA Astrophysics Data System (ADS)

    DeChambeau, Brian

    The goal of this thesis is to develop a real estate investment model that creates a financial incentive for property owners to perform energy-efficiency retrofits in small multifamily rental housing in southern New England. The medium for this argument is a business plan that is backed by a review of the literature and input from industry experts. In addition to industry expertise, the research covers four main areas: the context of green building, efficient building technologies, precedent programs, and the Providence, RI real estate market for the business plan. The thesis concludes that the proposed model can improve the profitability of real estate investment in small multifamily rental properties, though the extent to which this is possible depends partly on utility-run incentive programs and the capital available to invest in retrofit measures.

  7. Enzyme clustering accelerates processing of intermediates through metabolic channeling

    PubMed Central

    Castellana, Michele; Wilson, Maxwell Z.; Xu, Yifan; Joshi, Preeti; Cristea, Ileana M.; Rabinowitz, Joshua D.; Gitai, Zemer; Wingreen, Ned S.

    2015-01-01

    We present a quantitative model to demonstrate that coclustering multiple enzymes into compact agglomerates accelerates the processing of intermediates, yielding the same efficiency benefits as direct channeling, a well-known mechanism in which enzymes are funneled between enzyme active sites through a physical tunnel. The model predicts the separation and size of coclusters that maximize metabolic efficiency, and this prediction is in agreement with previously reported spacings between coclusters in mammalian cells. For direct validation, we study a metabolic branch point in Escherichia coli and experimentally confirm the model prediction that enzyme agglomerates can accelerate the processing of a shared intermediate by one branch, and thus regulate steady-state flux division. Our studies establish a quantitative framework to understand coclustering-mediated metabolic channeling and its application to both efficiency improvement and metabolic regulation. PMID:25262299

  8. Measurement and modeling of intrinsic transcription terminators

    PubMed Central

    Cambray, Guillaume; Guimaraes, Joao C.; Mutalik, Vivek K.; Lam, Colin; Mai, Quynh-Anh; Thimmaiah, Tim; Carothers, James M.; Arkin, Adam P.; Endy, Drew

    2013-01-01

    The reliable forward engineering of genetic systems remains limited by the ad hoc reuse of many types of basic genetic elements. Although a few intrinsic prokaryotic transcription terminators are used routinely, termination efficiencies have not been studied systematically. Here, we developed and validated a genetic architecture that enables reliable measurement of termination efficiencies. We then assembled a collection of 61 natural and synthetic terminators that collectively encode termination efficiencies across an ∼800-fold dynamic range within Escherichia coli. We simulated co-transcriptional RNA folding dynamics to identify competing secondary structures that might interfere with terminator folding kinetics or impact termination activity. We found that structures extending beyond the core terminator stem are likely to increase terminator activity. By excluding terminators encoding such context-confounding elements, we were able to develop a linear sequence-function model that can be used to estimate termination efficiencies (r = 0.9, n = 31) better than models trained on all terminators (r = 0.67, n = 54). The resulting systematically measured collection of terminators should improve the engineering of synthetic genetic systems and also advance quantitative modeling of transcription termination. PMID:23511967
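
    The linear sequence-function model described above can be sketched as an ordinary least-squares fit from sequence-derived features to (log) termination efficiency. The features and data below are hypothetical stand-ins; the paper's actual feature set and measurements are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from scipy.stats import pearsonr

rng = np.random.default_rng(7)

# Hypothetical terminator features: hairpin stem free energy (kcal/mol),
# U-tract length (nt), and loop length (nt), standing in for the features
# used in the paper's sequence-function model.
n = 61
features = np.column_stack([
    rng.uniform(-30, -5, n),   # stem delta-G
    rng.integers(4, 10, n),    # U-tract length
    rng.integers(3, 9, n),     # loop length
])
# Synthetic log-scale termination efficiency, for illustration only.
log_te = (-0.08 * features[:, 0] + 0.3 * features[:, 1] - 0.1 * features[:, 2]
          + rng.normal(scale=0.5, size=n))

model = LinearRegression().fit(features, log_te)
r, _ = pearsonr(model.predict(features), log_te)
print("fitted coefficients:", np.round(model.coef_, 3), "Pearson r =", round(r, 2))
```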

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trędak, Przemysław, E-mail: przemyslaw.tredak@fuw.edu.pl; Rudnicki, Witold R.; Interdisciplinary Centre for Mathematical and Computational Modelling, University of Warsaw, ul. Pawińskiego 5a, 02-106 Warsaw

    The second-generation Reactive Bond Order (REBO) empirical potential is commonly used to accurately model a wide range of hydrocarbon materials. It is also extensible to other atom types and interactions. The REBO potential assumes a complex multi-body interaction model that is difficult to represent efficiently in the SIMD or SIMT programming model. Hence, despite its importance, no efficient GPGPU implementation has been developed for this potential. Here we present a detailed description of a highly efficient GPGPU implementation of a molecular dynamics algorithm using the REBO potential. The presented algorithm takes advantage of rarely used properties of the SIMT architecture of a modern GPU to solve difficult synchronization issues that arise in computations of the multi-body potential. Techniques developed for this problem may also be used to achieve efficient solutions to other problems. The performance of the proposed algorithm is assessed using a range of model systems and compared to the highly optimized CPU implementation (both single-core and OpenMP) available in the LAMMPS package. These experiments show up to a 6x improvement in force computation time using a single processor of the NVIDIA Tesla K80 compared to a high-end 16-core Intel Xeon processor.

  10. Modeling and analysis of a meso-hydraulic climbing robot with artificial muscle actuation.

    PubMed

    Chapman, Edward M; Jenkins, Tyler E; Bryant, Matthew

    2017-11-08

    This paper presents a fully coupled electro-hydraulic model of a bio-inspired climbing robot actuated by fluidic artificial muscles (FAMs). This analysis expands upon previous FAM literature by considering not only the force and contraction characteristics of the actuator, but the complete hydraulic and electromechanical circuits as well as the dynamics of the climbing robot. This analysis allows modeling of the time-varying applied pressure, electrical current, and actuator contraction for accurate prediction of the robot motion, energy consumption, and mechanical work output. The developed model is first validated against mechanical and electrical data collected from a proof-of-concept prototype robot. The model is then employed to study the system-level sensitivities of the robot locomotion efficiency and average climbing speed to several design and operating parameters. The results of this analysis demonstrate that considering only the transduction efficiency of the FAM actuators is insufficient to maximize the efficiency of the complete robot, and that a holistic approach can lead to significant improvements in performance.

  11. Uncertainty quantification-based robust aerodynamic optimization of laminar flow nacelle

    NASA Astrophysics Data System (ADS)

    Xiong, Neng; Tao, Yang; Liu, Zhiyong; Lin, Jun

    2018-05-01

    The aerodynamic performance of a laminar flow nacelle is highly sensitive to uncertain working conditions, especially surface roughness. An efficient robust aerodynamic optimization method based on non-deterministic computational fluid dynamics (CFD) simulation and the Efficient Global Optimization (EGO) algorithm was employed. A non-intrusive polynomial chaos method is used in conjunction with an existing, well-verified CFD module to quantify the uncertainty propagation in the flow field. This paper investigates the roughness modeling behavior with the γ-Reθ shear stress transport model, which includes flow transition and surface roughness effects. The roughness effects are modeled as sand-grain roughness. A Class-Shape Transformation-based parametric description of the nacelle contour is presented as part of an automatic design evaluation process. A Design of Experiments (DoE) was performed and a surrogate model was built by the Kriging method. The new nacelle design process demonstrates that significant improvements in both the mean and the variance of the efficiency are achieved and that the proposed method can be applied successfully to laminar flow nacelle design.
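
    A minimal sketch of the surrogate-based robust step described above: fit a Kriging (Gaussian process) model to noisy DoE samples and pick the design that maximizes a mean-minus-variance style objective. The one-parameter "efficiency" function and the weighting are assumptions for illustration, not the paper's CFD chain.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(11)

# Hypothetical DoE: one shape parameter x in [0, 1]; the "CFD" response is a
# noisy efficiency stand-in (noise mimicking uncertain surface roughness).
def noisy_efficiency(x):
    return 0.9 - 0.3 * (x - 0.6) ** 2 + 0.02 * rng.normal(size=np.shape(x))

X_doe = rng.uniform(0, 1, size=(20, 1))
y_doe = noisy_efficiency(X_doe[:, 0])

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(0.2), alpha=1e-3)
gp.fit(X_doe, y_doe)

# Robust design objective: maximize predicted mean minus k * predicted std.
x_grid = np.linspace(0, 1, 501).reshape(-1, 1)
mean, std = gp.predict(x_grid, return_std=True)
k = 1.0
best = x_grid[np.argmax(mean - k * std), 0]
print(f"robust optimum near x = {best:.3f}")
```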

  12. The importance of radiation for semiempirical water-use efficiency models

    NASA Astrophysics Data System (ADS)

    Boese, Sven; Jung, Martin; Carvalhais, Nuno; Reichstein, Markus

    2017-06-01

    Water-use efficiency (WUE) is a fundamental property for the coupling of carbon and water cycles in plants and ecosystems. Existing model formulations predicting this variable differ in the type of response of WUE to the atmospheric vapor pressure deficit of water (VPD). We tested a representative WUE model on the ecosystem scale at 110 eddy covariance sites of the FLUXNET initiative by predicting evapotranspiration (ET) based on gross primary productivity (GPP) and VPD. We found that introducing an intercept term in the formulation increases model performance considerably, indicating that an additional factor needs to be considered. We demonstrate that this intercept term varies seasonally and we subsequently associate it with radiation. Replacing the constant intercept term with a linear function of global radiation was found to further improve model predictions of ET. Our new semiempirical ecosystem WUE formulation indicates that, averaged over all sites, this radiation term accounts for up to half (39-47 %) of transpiration. These empirical findings challenge the current understanding of water-use efficiency on the ecosystem scale.
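
    A hedged sketch of the modelling idea: predict ET from GPP and VPD and add a radiation-dependent intercept, with parameters estimated by nonlinear least squares. The functional form and the synthetic data are illustrative simplifications, not the paper's exact formulation or FLUXNET data.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)

# Synthetic drivers (illustrative only, not FLUXNET data).
n = 500
gpp = rng.uniform(1, 15, n)          # gross primary productivity
vpd = rng.uniform(0.3, 3.0, n)       # vapor pressure deficit (kPa)
rg = rng.uniform(50, 900, n)         # global radiation (W m-2)

def et_model(X, a, b, c):
    """ET predicted from GPP and VPD plus a radiation-dependent intercept:
    a * GPP * sqrt(VPD) + b + c * Rg (a simplified stand-in for the
    semiempirical WUE formulation discussed in the abstract)."""
    gpp, vpd, rg = X
    return a * gpp * np.sqrt(vpd) + b + c * rg

et_obs = et_model((gpp, vpd, rg), 0.08, 0.1, 0.0015) + 0.05 * rng.normal(size=n)

params, _ = curve_fit(et_model, (gpp, vpd, rg), et_obs, p0=(0.1, 0.0, 0.001))
a, b, c = params
radiation_share = np.mean(c * rg / et_model((gpp, vpd, rg), *params))
print(f"fitted a={a:.3f}, b={b:.3f}, c={c:.5f}; radiation term share ~ {radiation_share:.0%}")
```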

  13. Modeling photovoltaic performance in periodic patterned colloidal quantum dot solar cells.

    PubMed

    Fu, Yulan; Dinku, Abay G; Hara, Yukihiro; Miller, Christopher W; Vrouwenvelder, Kristina T; Lopez, Rene

    2015-07-27

    Colloidal quantum dot (CQD) solar cells have attracted tremendous attention, mostly due to their wide absorption window and potentially low-cost processing. The ultimate efficiency of CQD solar cells is strongly limited by their high trap state density. Here we show that the overall device power conversion efficiency could be improved by employing photonic structures that enhance both charge generation and collection efficiencies. By employing a two-dimensional numerical model, we have calculated the characteristics of patterned CQD solar cells based on a simple grating structure. Our calculation predicts a power conversion efficiency as high as 11.2%, with a short-circuit current density of 35.2 mA/cm², a value nearly 1.5 times larger than that of the conventional flat design, showing the great potential value of patterned quantum dot solar cells.

  14. Hidden Markov induced Dynamic Bayesian Network for recovering time evolving gene regulatory networks

    NASA Astrophysics Data System (ADS)

    Zhu, Shijia; Wang, Yadong

    2015-12-01

    Dynamic Bayesian Networks (DBNs) have been widely used to recover gene regulatory relationships from time-series data in computational systems biology. Their standard assumption is ‘stationarity’, and therefore several research efforts have recently been made to relax this restriction. However, those methods suffer from three challenges: long running time, low accuracy, and reliance on parameter settings. To address these problems, we propose a novel non-stationary DBN model by extending each hidden node of a Hidden Markov Model into a DBN (called HMDBN), which properly handles the underlying time-evolving networks. Correspondingly, an improved structural EM algorithm is proposed to learn the HMDBN. It dramatically reduces the search space, thereby substantially improving computational efficiency. Additionally, we derived a novel generalized Bayesian Information Criterion under the non-stationary assumption (called BWBIC), which helps to significantly improve reconstruction accuracy and largely reduce over-fitting. Moreover, the re-estimation formulas for all parameters of our model are derived, enabling us to avoid reliance on parameter settings. Compared to state-of-the-art methods, the experimental evaluation of our proposed method on both synthetic and real biological data demonstrates consistently high prediction accuracy and significantly improved computational efficiency, even with no prior knowledge or parameter tuning.

  15. Reduced and Validated Kinetic Mechanisms for Hydrogen-CO-Air Combustion in Gas Turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yiguang Ju; Frederick Dryer

    2009-02-07

    Rigorous experimental, theoretical, and numerical investigation of various issues relevant to the development of reduced, validated kinetic mechanisms for synthetic gas combustion in gas turbines was carried out - including the construction of new radiation models for combusting flows, improvement of flame speed measurement techniques, measurements and chemical kinetic analysis of H{sub 2}/CO/CO{sub 2}/O{sub 2}/diluent mixtures, revision of the H{sub 2}/O{sub 2} kinetic model to improve flame speed prediction capabilities, and development of a multi-time scale algorithm to improve computational efficiency in reacting flow simulations.

  16. Advances in Projection Moire Interferometry Development for Large Wind Tunnel Applications

    NASA Technical Reports Server (NTRS)

    Fleming, Gary A.; Soto, Hector L.; South, Bruce W.; Bartram, Scott M.

    1999-01-01

    An instrument development program aimed at using Projection Moire Interferometry (PMI) for acquiring model deformation measurements in large wind tunnels was begun at NASA Langley Research Center in 1996. Various improvements to the initial prototype PMI systems have been made throughout this development effort. This paper documents several of the most significant improvements to the optical hardware and image processing software, and addresses system implementation issues for large wind tunnel applications. The improvements have increased both measurement accuracy and instrument efficiency, promoting the routine use of PMI for model deformation measurements in production wind tunnel tests.

  17. Weighted Global Artificial Bee Colony Algorithm Makes Gas Sensor Deployment Efficient

    PubMed Central

    Jiang, Ye; He, Ziqing; Li, Yanhai; Xu, Zhengyi; Wei, Jianming

    2016-01-01

    This paper proposes an improved artificial bee colony algorithm named the Weighted Global ABC (WGABC) algorithm, which is designed to improve the convergence speed of the solution search equation in the search stage. The new method not only considers the effect of the global factor on convergence speed in the search phase but also provides an expression for the global factor weight. Experiments on benchmark functions prove that the algorithm can greatly improve the convergence speed. The gas diffusion concentration field is obtained based on CFD theory, and the gas diffusion model, including the influence of buildings, is then simulated with the algorithm. Simulations verified the effectiveness of the WGABC algorithm in improving the convergence speed for the optimal deployment scheme of gas sensors. Finally, it is verified that the optimal deployment method based on the WGABC algorithm can greatly improve the monitoring efficiency of the sensors compared with conventional deployment methods. PMID:27322262
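
    The core of the approach is an ABC-style solution search equation augmented with a weighted global-best term. The sketch below shows one plausible form on a benchmark function; the specific weight expression of the WGABC paper is not reproduced, so `w_global` is treated as a tunable constant.

```python
import numpy as np

rng = np.random.default_rng(13)

def sphere(x):
    return float(np.sum(x ** 2))

def wgabc_search_step(x_i, x_k, g_best, w_global):
    """ABC-style solution search with an added, weighted global-best term:
    v = x_i + phi * (x_i - x_k) + w_global * psi * (g_best - x_i).
    The paper's weight expression is not reproduced; w_global is a scalar."""
    phi = rng.uniform(-1, 1, size=x_i.shape)
    psi = rng.uniform(0, 1.5, size=x_i.shape)
    return x_i + phi * (x_i - x_k) + w_global * psi * (g_best - x_i)

# Minimal employed-bee loop on a benchmark function.
dim, n_food = 5, 10
foods = rng.uniform(-5, 5, size=(n_food, dim))
fitness = np.array([sphere(f) for f in foods])
for _ in range(200):
    g_best = foods[np.argmin(fitness)]
    for i in range(n_food):
        k = rng.choice([j for j in range(n_food) if j != i])
        cand = wgabc_search_step(foods[i], foods[k], g_best, w_global=0.8)
        if sphere(cand) < fitness[i]:            # greedy selection
            foods[i], fitness[i] = cand, sphere(cand)
print("best value found:", round(float(fitness.min()), 6))
```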

  18. Specialized Nursing Practice for Chronic Disease Management in the Primary Care Setting

    PubMed Central

    2013-01-01

    Background In response to the increasing demand for better chronic disease management and improved health care efficiency in Ontario, nursing roles have expanded in the primary health care setting. Objectives To determine the effectiveness of specialized nurses who have a clinical role in patient care in optimizing chronic disease management among adults in the primary health care setting. Data Sources and Review Methods A literature search was performed using OVID MEDLINE, OVID MEDLINE In-Process and Other Non-Indexed Citations, OVID EMBASE, EBSCO Cumulative Index to Nursing & Allied Health Literature (CINAHL), the Wiley Cochrane Library, and the Centre for Reviews and Dissemination database. Results were limited to randomized controlled trials and systematic reviews and were divided into 2 models: Model 1 (nurse alone versus physician alone) and Model 2 (nurse and physician versus physician alone). Effectiveness was determined by comparable outcomes between groups in Model 1, or improved outcomes or efficiency in Model 2. Results Six studies were included. In Model 1, there were no significant differences in health resource use, disease-specific measures, quality of life, or patient satisfaction. In Model 2, there was a reduction in hospitalizations and improved management of blood pressure and lipids among patients with coronary artery disease. Among patients with diabetes, there was a reduction in hemoglobin A1c but no difference in other disease-specific measures. There was a trend toward improved process measures, including medication prescribing and clinical assessments. Results related to quality of life were inconsistent, but patient satisfaction with the nurse-physician team was improved. Overall, there were more and longer visits to the nurse, and physician workload did not change. Limitations There was heterogeneity across patient populations, and in the titles, roles, and scope of practice of the specialized nurses. Conclusions Specialized nurses with an autonomous role in patient care had comparable outcomes to physicians alone (Model 1) based on moderate quality evidence, with consistent results among a subgroup analysis of patients with diabetes based on low quality evidence. Model 2 showed an overall improvement in appropriate process measures, disease-specific measures, and patient satisfaction based on low to moderate quality evidence. There was low quality evidence that nurses working under Model 2 may reduce hospitalizations for patients with coronary artery disease. The specific role of the nurse in supplementing or substituting physician care was unclear, making it difficult to determine the impact on efficiency. Plain Language Summary Nurses with additional skills, training, or scope of practice may help improve the primary care of patients with chronic diseases. This review found that specialized nurses working on their own could achieve health outcomes that were similar to those of doctors. It also found that specialized nurses who worked with doctors could reduce hospital visits and improve certain patient outcomes related to diabetes, coronary artery disease, or heart failure. Patients who had nurse-led care were more satisfied and tended to receive more tests and medications. It is unclear whether specialized nurses improve quality of life or doctor workload. PMID:24194798

  19. Resilience and efficiency in transportation networks

    PubMed Central

    Ganin, Alexander A.; Kitsak, Maksim; Marchese, Dayton; Keisler, Jeffrey M.; Seager, Thomas; Linkov, Igor

    2017-01-01

    Urban transportation systems are vulnerable to congestion, accidents, weather, special events, and other costly delays. Whereas typical policy responses prioritize reduction of delays under normal conditions to improve the efficiency of urban road systems, analytic support for investments that improve resilience (defined as system recovery from additional disruptions) is still scarce. In this effort, we represent paved roads as a transportation network by mapping intersections to nodes and road segments between the intersections to links. We built road networks for 40 of the urban areas defined by the U.S. Census Bureau. We developed and calibrated a model to evaluate traffic delays using link loads. The loads may be regarded as traffic-based centrality measures, estimating the number of individuals using corresponding road segments. Efficiency was estimated as the average annual delay per peak-period auto commuter, and modeled results were found to be close to observed data, with the notable exception of New York City. Resilience was estimated as the change in efficiency resulting from roadway disruptions and was found to vary between cities, with increased delays due to a 5% random loss of road linkages ranging from 9.5% in Los Angeles to 56.0% in San Francisco. The results demonstrate that many urban road systems that operate inefficiently under normal conditions are nevertheless resilient to disruption, whereas some more efficient cities are more fragile. The implication is that resilience, not just efficiency, should be considered explicitly in roadway project selection and justify investment opportunities related to disaster and other disruptions. PMID:29291243
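
    A minimal sketch of the resilience probe described above, using a grid graph as a stand-in road network and graph efficiency as a proxy for the study's calibrated delay model (which is not reproduced here):

```python
import random
import networkx as nx

random.seed(42)

# Stand-in road network: a 20 x 20 grid of intersections (nodes) and
# road segments (edges). Graph efficiency serves as a simple proxy for
# the study's traffic-delay measure.
G = nx.grid_2d_graph(20, 20)
base_eff = nx.global_efficiency(G)

# Resilience probe: remove 5% of road segments at random and re-measure.
n_remove = int(0.05 * G.number_of_edges())
G_disrupted = G.copy()
G_disrupted.remove_edges_from(random.sample(list(G.edges()), n_remove))
disrupted_eff = nx.global_efficiency(G_disrupted)

print(f"efficiency drop after 5% edge loss: "
      f"{(base_eff - disrupted_eff) / base_eff:.1%}")
```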

  20. Applying Process Improvement Methods to Clinical and Translational Research: Conceptual Framework and Case Examples.

    PubMed

    Daudelin, Denise H; Selker, Harry P; Leslie, Laurel K

    2015-12-01

    There is growing appreciation that process improvement holds promise for improving quality and efficiency across the translational research continuum but frameworks for such programs are not often described. The purpose of this paper is to present a framework and case examples of a Research Process Improvement Program implemented at Tufts CTSI. To promote research process improvement, we developed online training seminars, workshops, and in-person consultation models to describe core process improvement principles and methods, demonstrate the use of improvement tools, and illustrate the application of these methods in case examples. We implemented these methods, as well as relational coordination theory, with junior researchers, pilot funding awardees, our CTRC, and CTSI resource and service providers. The program focuses on capacity building to address common process problems and quality gaps that threaten the efficient, timely and successful completion of clinical and translational studies. © 2015 The Authors. Clinical and Translational Science published by Wiley Periodicals, Inc.

  1. Assessment and Reduction of Model Parametric Uncertainties: A Case Study with A Distributed Hydrological Model

    NASA Astrophysics Data System (ADS)

    Gan, Y.; Liang, X. Z.; Duan, Q.; Xu, J.; Zhao, P.; Hong, Y.

    2017-12-01

    The uncertainties associated with the parameters of a hydrological model need to be quantified and reduced for it to be useful for operational hydrological forecasting and decision support. An uncertainty quantification framework is presented to facilitate practical assessment and reduction of model parametric uncertainties. A case study, using the distributed hydrological model CREST for daily streamflow simulation during the period 2008-2010 over ten watersheds, was used to demonstrate the performance of this new framework. Model behaviors across watersheds were analyzed by a two-stage stepwise sensitivity analysis procedure, using the LH-OAT method for screening out insensitive parameters, followed by MARS-based Sobol' sensitivity indices for quantifying each parameter's contribution to the response variance due to its first-order and higher-order effects. Pareto optimal sets of the influential parameters were then found by the adaptive surrogate-based multi-objective optimization procedure, using a MARS model for approximating the parameter-response relationship and the SCE-UA algorithm for searching the optimal parameter sets of the adaptively updated surrogate model. The final optimal parameter sets were validated against the daily streamflow simulation of the same watersheds during the period 2011-2012. The stepwise sensitivity analysis procedure efficiently reduced the number of parameters that need to be calibrated from twelve to seven, which helps to limit the dimensionality of the calibration problem and serves to enhance the efficiency of parameter calibration. The adaptive MARS-based multi-objective calibration exercise provided satisfactory solutions to the reproduction of the observed streamflow for all watersheds. The final optimal solutions showed significant improvement when compared to the default solutions, with about 65-90% reduction in 1-NSE and 60-95% reduction in |RB|. The validation exercise indicated a large improvement in model performance, with about 40-85% reduction in 1-NSE and 35-90% reduction in |RB|. Overall, this uncertainty quantification framework is robust, effective, and efficient for parametric uncertainty analysis, and its results provide useful information that helps to understand model behaviors and improve model simulations.
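
    The variance-based screening step can be sketched with a standard Sobol' analysis; the toy response function and parameter names below are stand-ins, not the CREST model or its parameters.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Toy stand-in for a hydrological model response (e.g. an error metric),
# not the CREST model itself; parameter names are illustrative.
problem = {
    "num_vars": 3,
    "names": ["infiltration_k", "routing_coef", "soil_depth"],
    "bounds": [[0.0, 1.0], [0.1, 5.0], [50.0, 500.0]],
}

def response(p):
    k, r, d = p
    return np.sin(2 * np.pi * k) + 0.3 * r ** 2 + 0.001 * d

X = saltelli.sample(problem, 1024)          # Saltelli sampling design
Y = np.array([response(p) for p in X])
Si = sobol.analyze(problem, Y)              # first-order and total-order indices

for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: S1={s1:.2f}, ST={st:.2f}")
```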

  2. Validation of numerical model for cook stove using Reynolds averaged Navier-Stokes based solver

    NASA Astrophysics Data System (ADS)

    Islam, Md. Moinul; Hasan, Md. Abdullah Al; Rahman, Md. Mominur; Rahaman, Md. Mashiur

    2017-12-01

    Biomass-fired cook stoves have for many years been the main cooking appliance for the rural people of developing countries. Several studies have been carried out to find efficient stoves. In the present study, a numerical model of an improved household cook stove is developed to analyze the heat transfer and the flow behavior of the gas during operation. The numerical model is validated against the experimental results. Computations with the numerical model are executed using a non-premixed combustion model. The Reynolds-averaged Navier-Stokes (RANS) equations, along with the κ-ε model, govern the turbulent flow within the computed domain. The computational results are in good agreement with the experiment. The developed numerical model can be used to predict the effect of different biomasses on the efficiency of the cook stove.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simpson, L.; Britt, J.; Birkmire, R.

    ITN Energy Systems, Inc., and Global Solar Energy, Inc., assisted by NREL's PV Manufacturing R&D program, have continued to advance CIGS production technology by developing trajectory-oriented predictive/control models, fault-tolerance control, control platform development, in-situ sensors, and process improvements. Modeling activities included developing physics-based and empirical models for CIGS and sputter-deposition processing, implementing model-based control, and applying predictive models to the construction of new evaporation sources and for control. Model-based control is enabled by implementing reduced or empirical models into a control platform. Reliability improvement activities include implementing preventive maintenance schedules; detecting failed sensors/equipment and reconfiguring to continue processing; and systematic development of fault prevention and reconfiguration strategies for the full range of CIGS PV production deposition processes. In-situ sensor development activities have resulted in improved control and indicated the potential for enhanced process status monitoring and control of the deposition processes. Substantial process improvements have been made, including significant improvement in CIGS uniformity, thickness control, efficiency, yield, and throughput. In large measure, these gains have been driven by process optimization, which in turn has been enabled by the control and reliability improvements due to this PV Manufacturing R&D program.

  4. Experimental modeling of swirl flows in power plants

    NASA Astrophysics Data System (ADS)

    Shtork, S. I.; Litvinov, I. V.; Gesheva, E. S.; Tsoy, M. A.; Skripkin, S. G.

    2018-03-01

    The article presents an overview of methods and approaches to experimental modeling of various thermal and hydropower units - furnaces of pulverized-coal boilers and flow-through elements of hydro turbines. The presented modeling approaches, based on a combination of experimentation and rapid prototyping of working parts, may be useful for optimizing energy equipment to improve the safety and efficiency of industrial energy systems.

  5. Genetically engineered livestock: ethical use for food and medical models.

    PubMed

    Garas, Lydia C; Murray, James D; Maga, Elizabeth A

    2015-01-01

    Recent advances in the production of genetically engineered (GE) livestock have resulted in a variety of new transgenic animals with desirable production and composition changes. GE animals have been generated to improve growth efficiency, food composition, and disease resistance in domesticated livestock species. GE animals are also used to produce pharmaceuticals and to serve as medical models for human diseases. The potential use of these food animals for human consumption has prompted an intense debate about food safety and animal welfare concerns with the GE approach. Additionally, public perception and ethical concerns about their use have delayed the establishment of a clear and efficient regulatory approval process. Ethically, there are far-reaching implications of not using genetically engineered livestock, to the detriment of both producers and consumers, as this technology can improve both human and animal health and welfare.

  6. Nonlinear predictive control for durability enhancement and efficiency improvement in a fuel cell power system

    NASA Astrophysics Data System (ADS)

    Luna, Julio; Jemei, Samir; Yousfi-Steiner, Nadia; Husar, Attila; Serra, Maria; Hissel, Daniel

    2016-10-01

    In this work, a nonlinear model predictive control (NMPC) strategy is proposed to improve the efficiency and enhance the durability of a proton exchange membrane fuel cell (PEMFC) power system. The PEMFC controller is based on a distributed-parameter model that describes the nonlinear dynamics of the system, considering spatial variations along the gas channels. Parasitic power drawn by the system auxiliaries is considered, with the compressor accounting for the main parasitic losses. A nonlinear observer based on the discretised PEMFC model is implemented to estimate the internal states. This information is included in the cost function of the controller to enhance the durability of the system by avoiding local starvation and inappropriate water vapour concentrations. Simulation results show the performance of the proposed controller on a case study from an automotive application (the New European Driving Cycle). To represent the most relevant phenomena that affect the PEMFC voltage, the simulation model includes a two-phase water model and the effects of liquid water on the catalyst active area. The control model is a simplified version that does not consider two-phase water dynamics.
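
    The central idea - penalising parasitic power, tracking error and starvation risk inside a receding-horizon cost - can be illustrated on a toy problem. The sketch below assumes an invented one-state plant (an oxygen excess ratio driven by a compressor input) and arbitrary weights; it is not the paper's distributed-parameter model:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    dt, horizon = 0.1, 10
    lam_eff, lam_starv = 1.0, 50.0
    target_o2 = 2.0          # desired oxygen excess ratio (illustrative)
    starvation_limit = 1.5   # soft lower bound to penalise local starvation

    def plant_step(x, u):
        # Toy first-order response of the oxygen excess ratio to control input u.
        return x + dt * (-0.5 * x + 0.8 * u)

    def cost(u_seq, x0):
        x, J = x0, 0.0
        for u in u_seq:
            x = plant_step(x, u)
            J += lam_eff * u ** 2                                   # parasitic (compressor) power proxy
            J += (x - target_o2) ** 2                               # tracking term
            J += lam_starv * max(0.0, starvation_limit - x) ** 2    # starvation penalty
        return J

    x0 = 1.2
    u0 = np.full(horizon, 1.0)
    res = minimize(cost, u0, args=(x0,), bounds=[(0.0, 3.0)] * horizon)
    print("first control move to apply:", res.x[0])
    ```

    In a receding-horizon loop, only the first optimised move would be applied before re-solving with the newly estimated state.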

  7. Improving Global Gross Primary Productivity Estimates by Computing Optimum Light Use Efficiencies Using Flux Tower Data

    NASA Astrophysics Data System (ADS)

    Madani, Nima; Kimball, John S.; Running, Steven W.

    2017-11-01

    In the light use efficiency (LUE) approach to estimating gross primary productivity (GPP), plant productivity is linearly related to absorbed photosynthetically active radiation, assuming that plants absorb and convert solar energy into biomass at a maximum LUE (LUEmax) rate that varies conservatively within a given biome type. However, it has been shown that photosynthetic efficiency can vary within biomes. In this study, we used 149 global CO2 flux towers to derive the optimum LUE (LUEopt) under prevailing climate conditions for each tower location, stratified into model training and test sites. Unlike LUEmax, LUEopt varies according to heterogeneous landscape characteristics and species traits. The LUEopt data showed large spatial variability within and between biome types, so that a simple biome classification explained only 29% of LUEopt variability over 95 global tower training sites. The use of explanatory variables in a mixed-effects regression model explained 62.2% of the spatial variability in tower LUEopt data. The resulting regression model was used for global extrapolation of the LUEopt data and GPP estimation. The GPP estimated using the new LUEopt map showed significant improvement relative to global tower data, including a 15% R2 increase and a 34% root-mean-square error reduction relative to baseline GPP calculations derived from biome-specific LUEmax constants. The new global LUEopt map is expected to improve the performance of LUE-based GPP algorithms for better assessment and monitoring of global terrestrial productivity and carbon dynamics.
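
    The LUE model itself is a simple product, GPP = LUE x APAR, so the gain from replacing a biome-wide LUEmax with per-site LUEopt can be sketched with a few lines of code. The tower values below are invented for illustration and are not the 149-site dataset used in the study:

    ```python
    import numpy as np

    def gpp_lue(apar, lue):
        """LUE model: GPP (g C m-2 d-1) = LUE (g C MJ-1) * APAR (MJ m-2 d-1)."""
        return lue * apar

    # Illustrative tower data (placeholders only).
    apar      = np.array([8.0, 12.5, 6.3, 10.1, 14.2])    # MJ m-2 d-1
    tower_gpp = np.array([9.6, 17.5, 5.0, 13.1, 19.9])    # g C m-2 d-1
    lue_max   = 1.1                                       # single biome-wide constant
    lue_opt   = np.array([1.20, 1.40, 0.80, 1.30, 1.40])  # per-site optimum

    def r2_rmse(obs, pred):
        ss_res = np.sum((obs - pred) ** 2)
        ss_tot = np.sum((obs - obs.mean()) ** 2)
        return 1 - ss_res / ss_tot, np.sqrt(np.mean((obs - pred) ** 2))

    print("biome LUEmax:", r2_rmse(tower_gpp, gpp_lue(apar, lue_max)))
    print("site LUEopt :", r2_rmse(tower_gpp, gpp_lue(apar, lue_opt)))
    ```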

  8. In vitro generation of three-dimensional substrate-adherent embryonic stem cell-derived neural aggregates for application in animal models of neurological disorders.

    PubMed

    Hargus, Gunnar; Cui, Yi-Fang; Dihné, Marcel; Bernreuther, Christian; Schachner, Melitta

    2012-05-01

    In vitro-differentiated embryonic stem (ES) cells comprise a useful source for cell replacement therapy, but the efficiency and safety of a translational approach are highly dependent on optimized protocols for directed differentiation of ES cells into the desired cell types in vitro. Furthermore, the transplantation of three-dimensional ES cell-derived structures instead of a single-cell suspension may improve graft survival and function by providing a beneficial microenvironment for implanted cells. To this end, we have developed a new method to efficiently differentiate mouse ES cells into neural aggregates that consist predominantly (>90%) of postmitotic neurons, neural progenitor cells, and radial glia-like cells. When transplanted into the excitotoxically lesioned striatum of adult mice, these substrate-adherent embryonic stem cell-derived neural aggregates (SENAs) showed significant advantages over transplanted single-cell suspensions of ES cell-derived neural cells, including improved survival of GABAergic neurons, increased cell migration, and significantly decreased risk of teratoma formation. Furthermore, SENAs mediated functional improvement after transplantation into animal models of Parkinson's disease and spinal cord injury. This unit describes in detail how SENAs are efficiently derived from mouse ES cells in vitro and how SENAs are isolated for transplantation. Furthermore, methods are presented for successful implantation of SENAs into animal models of Huntington's disease, Parkinson's disease, and spinal cord injury to study the effects of stem cell-derived neural aggregates in a disease context in vivo.

  9. Processing time tolerance-based ACO algorithm for solving job-shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Luo, Yabo; Waden, Yongo P.

    2017-06-01

    The Job Shop Scheduling Problem (JSSP) is a well-known NP-hard problem whose uncertainty and complexity cannot be handled by linear methods. Current studies on the JSSP therefore concentrate mainly on improving heuristics for its optimization. However, efficient optimization of the JSSP still suffers from low efficiency and poor reliability, which can easily trap the optimization process in local optima. To address this, an Ant Colony Optimization (ACO) algorithm combined with constraint-handling tactics is studied in this paper. The problem is subdivided into three parts: (1) analysis of processing time tolerance-based constraint features in the JSSP, performed with a constraint-satisfaction model; (2) satisfaction of the constraints using consistency technology and a constraint-spreading algorithm to improve the performance of the ACO algorithm, from which the JSSP model based on the improved ACO algorithm is constructed; and (3) demonstration of the effectiveness of the proposed method, in terms of reliability and efficiency, through comparative experiments on benchmark problems. The results obtained by the proposed method are better, and the applied technique can be used to optimize the JSSP.
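
    To make the ACO mechanics concrete, the sketch below runs a pheromone-guided construction of job sequences for a tiny invented 3-job, 2-machine instance, decoded with a standard job-repetition schedule builder. The constraint-handling tactics described in the paper are not reproduced; this is only a baseline ACO illustration:

    ```python
    import random

    # Each job is a list of (machine, processing_time) operations in fixed order.
    jobs = [
        [(0, 3), (1, 2)],
        [(1, 4), (0, 1)],
        [(0, 2), (1, 3)],
    ]
    n_jobs, n_machines = len(jobs), 2
    seq_len = sum(len(ops) for ops in jobs)

    def makespan(job_sequence):
        """Decode a job-repetition sequence into a schedule and return its makespan."""
        next_op = [0] * n_jobs
        job_ready = [0] * n_jobs
        mach_ready = [0] * n_machines
        for j in job_sequence:
            m, p = jobs[j][next_op[j]]
            start = max(job_ready[j], mach_ready[m])
            job_ready[j] = mach_ready[m] = start + p
            next_op[j] += 1
        return max(job_ready)

    # Pheromone tau[position][job]: preference for scheduling job j at position i.
    tau = [[1.0] * n_jobs for _ in range(seq_len)]
    rho, Q, best = 0.1, 10.0, (None, float("inf"))

    for iteration in range(50):
        ants = []
        for _ in range(10):
            remaining = [len(ops) for ops in jobs]
            seq = []
            for pos in range(seq_len):
                cand = [j for j in range(n_jobs) if remaining[j] > 0]
                weights = [tau[pos][j] for j in cand]
                j = random.choices(cand, weights=weights)[0]
                seq.append(j)
                remaining[j] -= 1
            ants.append((seq, makespan(seq)))
        # Evaporation, then deposit proportional to solution quality.
        for pos in range(seq_len):
            for j in range(n_jobs):
                tau[pos][j] *= (1 - rho)
        for seq, ms in ants:
            for pos, j in enumerate(seq):
                tau[pos][j] += Q / ms
            if ms < best[1]:
                best = (seq, ms)

    print("best sequence:", best[0], "makespan:", best[1])
    ```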

  10. Improving the sludge disintegration efficiency of sonication by combining with alkalization and thermal pre-treatment methods.

    PubMed

    Şahinkaya, S; Sevimli, M F; Aygün, A

    2012-01-01

    One of the most serious problems encountered in biological wastewater treatment processes is the production of waste activated sludge (WAS). Sonication, which is an energy-intensive process, is the most powerful sludge pre-treatment method. Because little information is available on combining sonication with other pre-treatments, this study investigated combined methods, aiming to improve the disintegration efficiency of sonication by coupling it with alkalization and thermal pre-treatment. Process performance was evaluated based on the increases in soluble chemical oxygen demand (COD), protein and carbohydrate. The release of soluble COD, carbohydrate and protein by the combined methods was higher than that achieved by sonication, alkalization or thermal pre-treatment alone. Degrees of sludge disintegration in the various sonication options were in the following descending order: sono-alkalization > sono-thermal pre-treatment > sonication. Combining sonication with alkalization therefore significantly improved sludge disintegration and decreased the energy required to reach the same yield as sonication alone. In addition, the effects on sludge settleability and dewaterability were examined, and kinetic mathematical modelling of the pre-treatment performance was carried out. The proposed model accurately predicted the efficiencies of the ultrasonic pre-treatment methods.
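
    The abstract does not state which kinetic model was used, so the sketch below assumes a common first-order disintegration form, SCOD(t) = SCODmax * (1 - exp(-k*t)), fitted to invented soluble-COD release data purely for illustration:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def first_order(t, scod_max, k):
        """Soluble COD release vs. pre-treatment time (assumed first-order form)."""
        return scod_max * (1.0 - np.exp(-k * t))

    t_min = np.array([0, 5, 10, 20, 30, 45])         # sonication time, min (placeholder)
    scod = np.array([0, 310, 540, 820, 950, 1040])   # mg/L increase in soluble COD (placeholder)

    (scod_max, k), _ = curve_fit(first_order, t_min, scod, p0=(1000.0, 0.05))
    print(f"fitted SCODmax = {scod_max:.0f} mg/L, rate constant k = {k:.3f} 1/min")
    ```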

  11. Integrated Practice Improvement Solutions-Practical Steps to Operating Room Management.

    PubMed

    Chernov, Mikhail; Pullockaran, Janet; Vick, Angela; Leyvi, Galina; Delphin, Ellise

    2016-10-01

    Perioperative productivity is a vital concern for surgeons, anesthesiologists, and administrators as the OR is a major source of hospital elective admissions and revenue. Based on elements of existing Practice Improvement Methodologies (PIMs), "Integrated Practice Improvement Solutions" (IPIS) is a practical and simple solution incorporating aspects of multiple management approaches into a single open source framework to increase OR efficiency and productivity by better utilization of existing resources. OR efficiency was measured both before and after IPIS implementation using the total number of cases versus room utilization, OR/anesthesia revenue and staff overtime (OT) costs. Other parameters of efficiency, such as the first case on-time start and the turnover time (TOT) were measured in parallel. IPIS implementation resulted in increased numbers of surgical procedures performed by an average of 10.7%, and OR and anesthesia revenue increases of 18.5% and 6.9%, respectively, with a simultaneous decrease in TOT (15%) and OT for anesthesia staff (26%). The number of perioperative adverse events was stable during the two-year study period which involved a total of 20,378 patients. IPIS, an effective and flexible practice improvement model, was designed to quickly, significantly, and sustainably improve OR efficiency by better utilization of existing resources. Success of its implementation directly correlates with the involvement of and acceptance by the entire OR team and hospital administration.

  12. Thin-Film Photovoltaic Solar Array Parametric Assessment

    NASA Technical Reports Server (NTRS)

    Hoffman, David J.; Kerslake, Thomas W.; Hepp, Aloysius F.; Jacobs, Mark K.; Ponnusamy, Deva

    2000-01-01

    This paper summarizes a study whose objective was to develop a model and parametrically determine the circumstances under which lightweight thin-film photovoltaic solar arrays would be more beneficial, in terms of mass and cost, than arrays using high-efficiency crystalline solar cells. Previous studies considering arrays with near-term thin-film technology for Earth-orbiting applications are briefly reviewed. The present study uses a parametric approach that evaluated the performance of lightweight thin-film arrays with cell efficiencies ranging from 5 to 20 percent. The model developed for this study is described in some detail. Similar mass and cost trends for each array option were found across eight missions of various power levels in locations ranging from Venus to Jupiter. The results for one specific mission, a main-belt asteroid tour, indicate that only moderate thin-film cell efficiency (approximately 12 percent) is necessary to match the mass of arrays using crystalline cells with much greater efficiency (35 percent multi-junction GaAs based and 20 percent thin-silicon). Regarding cost, a 12 percent efficient thin-film array is projected to cost about half as much as a 4-junction GaAs array. While efficiency improvements beyond 12 percent did not significantly increase the mass and cost benefits of thin-film arrays, higher efficiency will be needed to mitigate the spacecraft-level impacts associated with large deployed array areas. A low-temperature approach to depositing thin-film cells on lightweight, flexible plastic substrates is briefly described. The paper concludes with the observation that, with the characteristics assumed for this study, ultra-lightweight arrays using efficient thin-film cells on flexible substrates may become a leading alternative for a wide variety of space missions.
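
    The core of such a parametric comparison is how array area and mass scale with cell efficiency and heliocentric distance. A minimal sketch follows; the power level, areal densities and 2.7 AU distance are illustrative assumptions, not values from the study:

    ```python
    # Parametric array sizing from first principles (illustrative only).
    SOLAR_CONSTANT_1AU = 1366.0  # W/m^2 at 1 AU

    def array_area_mass(power_w, efficiency, areal_density_kg_m2, distance_au=1.0):
        """Return (area in m^2, mass in kg) for a flat array at the given distance."""
        flux = SOLAR_CONSTANT_1AU / distance_au ** 2
        area = power_w / (flux * efficiency)
        return area, area * areal_density_kg_m2

    for label, eff, rho in [("thin film, 12%", 0.12, 0.7),
                            ("multi-junction GaAs, 35%", 0.35, 2.5)]:
        area, mass = array_area_mass(5000.0, eff, rho, distance_au=2.7)  # main-belt distance
        print(f"{label}: {area:.1f} m^2, {mass:.1f} kg")
    ```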

  13. Measuring the efficiency of Palestinian public hospitals during 2010-2015: an application of a two-stage DEA method.

    PubMed

    Sultan, Wasim I M; Crispim, José

    2018-05-29

    While health needs and expenditure in the Occupied Palestinian Territories (OPT) are growing, international donations are declining and the economic situation is worsening. The purpose of this paper is twofold: to evaluate the productive efficiency of public hospitals in the West Bank and to study the contextual factors contributing to efficiency differences. The study examined technical efficiency among 11 public hospitals in the West Bank from 2010 through 2015, for a total of 66 observations. Nationally representative data were extracted from the official annual health reports. We applied input-oriented Data Envelopment Analysis (DEA) models to estimate efficiency scores. To elaborate further on performance, we used Tobit regression to identify contextual factors whose impact on inefficient performance is statistically significant. Despite the increase in mean efficiency scores by 4% from 2010 to 2015, the findings show potential savings of 14.5% of resource consumption without reducing the volume of the provided services. The significant Tobit model identified four predictors of inefficient hospital performance (p < 0.01): bed occupancy rate (BOR); the outpatient-inpatient ratio (OPIPR); hospital size (SIZE); and the availability of primary healthcare centers within the hospital's catchment area (PRC). OPIPR has a strong effect on efficiency differences between hospitals: a one-unit increase in OPIPR leads to a 19.7% decrease in the predicted inefficiency level, holding all other factors constant. To date, no previous studies have examined the efficiency of public hospitals in the OPT. Our work identified their efficiency levels, the potential for improvement, and the determinants of efficient performance. The generated efficiency measurements may guide hospital managers, policymakers, and international donors in improving the performance of the main national healthcare provider. The scope of this study is limited to public hospitals in the West Bank; for a better understanding of the Palestinian market, further research on private hospitals and hospitals in the Gaza Strip would be useful.
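
    The first stage of this approach, an input-oriented DEA score, can be expressed as a small linear program per hospital. The sketch below uses the standard CCR envelopment form with invented inputs and outputs for four hypothetical hospitals; it is not the paper's dataset or exact model specification:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Rows are inputs/outputs, columns are hospitals (DMUs); data are placeholders.
    X = np.array([[120,  90, 150, 110],       # input 1: beds
                  [300, 250, 400, 280]])      # input 2: staff
    Y = np.array([[5000, 4200, 5600, 4900],   # output 1: inpatient days
                  [8000, 6000, 9000, 7000]])  # output 2: outpatient visits
    n_inputs, n_dmus = X.shape
    n_outputs = Y.shape[0]

    def ccr_efficiency(o):
        # Decision variables: [theta, lambda_1 ... lambda_n]; minimise theta.
        c = np.zeros(n_dmus + 1)
        c[0] = 1.0
        A_ub, b_ub = [], []
        for i in range(n_inputs):    # sum_j lam_j * x_ij <= theta * x_io
            A_ub.append(np.concatenate(([-X[i, o]], X[i, :])))
            b_ub.append(0.0)
        for r in range(n_outputs):   # sum_j lam_j * y_rj >= y_ro
            A_ub.append(np.concatenate(([0.0], -Y[r, :])))
            b_ub.append(-Y[r, o])
        bounds = [(0, None)] * (n_dmus + 1)
        res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds, method="highs")
        return res.x[0]

    for o in range(n_dmus):
        print(f"hospital {o}: efficiency {ccr_efficiency(o):.3f}")
    ```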

  14. Integrated hospital emergency care improves efficiency.

    PubMed

    Boyle, A A; Robinson, S M; Whitwell, D; Myers, S; Bennett, T J H; Hall, N; Haydock, S; Fritz, Z; Atkinson, P

    2008-02-01

    There is uncertainty about the most efficient model of emergency care. An attempt has been made to improve the process of emergency care in one hospital by developing an integrated model. The medical admissions unit was relocated into the existing emergency department and came under the 4-hour target. Medical case records were redesigned to provide a common assessment document for all patients presenting as an emergency. Medical, surgical and paediatric short-stay wards were opened next to the emergency department. A clinical decision unit replaced the more traditional observation unit. The process of patient assessment was streamlined so that a patient requiring admission was fully clerked by the first attending doctor to a level suitable for registrar or consultant review. Patients were allocated directly to specialty on arrival. The effectiveness of this approach was measured with routine data over the same 3-month periods in 2005 and 2006. There was a 16.3% decrease in emergency medical admissions and a 3.9% decrease in emergency surgical admissions. The median length of stay for emergency medical patients was reduced from 7 to 5 days. The efficiency of the elective surgical services was also improved. Performance against the 4-hour target declined but was still acceptable. The number of bed days for admitted surgical and medical cases rose slightly. There was an increase in the number of medical outliers on surgical wards, a reduction in the number of incident forms and formal complaints and a reduction in income for the hospital. Integrated emergency care has the ability to use spare capacity within emergency care. It offers significant advantages beyond the emergency department. However, improved efficiency in processing emergency patients placed the hospital at a financial disadvantage.

  15. Catalytic efficiency and thermostability improvement of Suc2 invertase through rational site-directed mutagenesis.

    PubMed

    Mohandesi, Nooshin; Haghbeen, Kamahldin; Ranaei, Omid; Arab, Seyed Shahriar; Hassani, Sorour

    2017-01-01

    Engineering of invertases has attracted attention because of the increasing demand for invertases in various industrial processes. Owing to their well-characterized physicochemical properties, invertases from micro-organisms such as Saccharomyces cerevisiae carrying the SUC2 gene are considered primary models. To improve the thermostability and catalytic efficiency of SUC2 invertase (SInv), six influential residues with relative solvent accessibility < 5% were selected through multiple-sequence alignments, molecular modelling, and structural and computational analyses. Consequently, SInv and five mutants, including three single-point mutants (Mut1 = P152V, Mut2 = S85V and Mut3 = K153F), one double mutant (Mut4 = S305V-N463V) and one triple mutant (Mut5 = S85V-K153F-T271V), were developed via site-directed mutagenesis and produced using Pichia pastoris as the host. Physicochemical studies on these enzymes indicated that the selected amino acids located in the active-site region mainly influenced catalytic efficiency. The best improvement belonged to Mut1 (54% increase in kcat/Km), while Mut3 exhibited the worst effect (90% increase in Km). These results suggest that Pro152 and Lys153 play a key role in properly lodging the substrate in the active site of SInv. The best thermostability improvement (16%) was observed for Mut4, in which two hydrophilic residues located on loops far from the active site were replaced by valines. These results suggest that tactful simultaneous substitution of influential hydrophilic residues in both the active-site region and peripheral loops with hydrophobic amino acids could yield more thermostable invertases with enhanced catalytic efficiency. Copyright © 2016 Elsevier Inc. All rights reserved.
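
    Catalytic efficiency here is the ratio kcat/Km, so relative gains can be read off directly from the kinetic constants. The snippet below uses invented placeholder constants, chosen only to mirror the reported percentage changes, not the measured values for the SInv mutants:

    ```python
    # Illustrative kcat/Km comparison; all kinetic constants are placeholders.
    variants = {
        "wild type":    {"kcat": 900.0,  "Km": 30.0},   # s^-1, mM
        "Mut1 (P152V)": {"kcat": 1040.0, "Km": 22.5},
        "Mut3 (K153F)": {"kcat": 910.0,  "Km": 57.0},
    }
    wt_eff = variants["wild type"]["kcat"] / variants["wild type"]["Km"]
    for name, p in variants.items():
        eff = p["kcat"] / p["Km"]
        print(f"{name}: kcat/Km = {eff:.1f} mM^-1 s^-1 ({100 * (eff / wt_eff - 1):+.0f}% vs WT)")
    ```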

  16. Blocking effect and numerical study of polymer particles dispersion flooding in heterogeneous reservoir

    NASA Astrophysics Data System (ADS)

    Zhu, Weiyao; Li, Jianhui; Lou, Yu

    2018-02-01

    Polymer flooding has become an effective way to improve sweep efficiency in many oil fields, and its mechanisms have been studied extensively. In this paper, the effect of polymer particles on seepage is analyzed. The blocking effect of polymer particles was studied experimentally, with the residual resistance coefficient (RRF) used to represent the blocking effect. We also built a mathematical model for the heterogeneous concentration distribution of polymer particles. Furthermore, the effects of polymer particles on reservoir permeability, fluid viscosity and relative permeability are considered, and a two-phase flow model of oil and polymer particles is established. The model was tested on a heterogeneous stratum model, and three influencing factors - particle concentration, injection volume and injection timing of the polymer particle dispersion (PPD) - were analyzed. Simulation results show that PPD can effectively improve sweep efficiency and, in particular, the oil recovery of the low-permeability layer. Oil recovery increases with particle concentration, but the rate of increase gradually diminishes. The greater the injected amount of PPD, the greater the oil recovery and the smaller its rate of increase. There is also an optimal timing for injecting PPD in a specific reservoir.
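
    The residual resistance coefficient used above is commonly taken as the ratio of mobility (or, at a fixed rate, of pressure drop) before and after particle treatment. A minimal single-phase Darcy sketch follows, with invented core dimensions and pressure data; the exact definition used in the paper may differ in detail:

    ```python
    # RRF from single-phase Darcy pressure drops before/after PPD injection (illustrative).
    def darcy_permeability(q_m3s, mu_pa_s, length_m, area_m2, dp_pa):
        """k = q * mu * L / (A * dp), single-phase linear Darcy flow."""
        return q_m3s * mu_pa_s * length_m / (area_m2 * dp_pa)

    q, mu, L, A = 1.0e-7, 1.0e-3, 0.3, 2.0e-3        # brine rate, viscosity, core length, area
    dp_before, dp_after = 0.8e5, 2.6e5               # Pa, same rate before/after treatment
    k_before = darcy_permeability(q, mu, L, A, dp_before)
    k_after = darcy_permeability(q, mu, L, A, dp_after)
    rrf = k_before / k_after                         # equivalently dp_after / dp_before here
    print(f"k before: {k_before:.2e} m^2, k after: {k_after:.2e} m^2, RRF = {rrf:.2f}")
    ```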

  17. Graphics Processing Units (GPU) and the Goddard Earth Observing System atmospheric model (GEOS-5): Implementation and Potential Applications

    NASA Technical Reports Server (NTRS)

    Putnam, William M.

    2011-01-01

    Earth system models like the Goddard Earth Observing System model (GEOS-5) have been pushing the limits of large clusters of multi-core microprocessors, producing breathtaking fidelity in resolving cloud systems at a global scale. GPU computing presents an opportunity for improving the efficiency of these leading-edge models. A GPU implementation of GEOS-5 will facilitate the use of cloud-system-resolving resolutions, near 3.5 km, in data assimilation and weather prediction, improving our ability to extract detailed information from high-resolution satellite observations and ultimately produce better weather and climate predictions.

  18. An improved car-following model accounting for the preceding car's taillight

    NASA Astrophysics Data System (ADS)

    Zhang, Jian; Tang, Tie-Qiao; Yu, Shao-Wei

    2018-02-01

    During deceleration, the preceding car's taillight may influence the following car's driving behavior. In this paper, we propose an extended car-following model that accounts for the preceding car's taillight. Two typical situations are used to simulate each car's movement and study the effects of the preceding car's taillight on driving behavior, and a sensitivity analysis of the model parameter is discussed in detail. The numerical results show that the proposed model can improve the stability of traffic flow and that traffic safety can be enhanced without a loss of efficiency, especially when cars pass through a signalized intersection.
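
    The general structure of such a model can be illustrated with an optimal-velocity-type follower that reacts more cautiously while the leader's brake light is lit. The optimal-velocity function, gains and taillight sensitivity below are illustrative assumptions, not the parameters of the cited model:

    ```python
    import numpy as np

    def optimal_velocity(gap, v_max=30.0, h_c=25.0):
        """Standard tanh-shaped optimal velocity as a function of the headway gap."""
        return 0.5 * v_max * (np.tanh(gap - h_c) + np.tanh(h_c))

    def acceleration(v, gap, dv_lead, lead_brake_on, alpha=0.5, lam=0.3, kappa=0.2):
        a = alpha * (optimal_velocity(gap) - v) + lam * dv_lead
        if lead_brake_on:
            a -= kappa * v   # extra caution while the preceding car's taillight is lit
        return a

    # Two-car example: the leader decelerates for 5 s, lighting its taillight.
    dt, T = 0.1, 60.0
    x_l, v_l = 50.0, 20.0    # leader position (m) and speed (m/s)
    x_f, v_f = 0.0, 20.0     # follower position and speed
    for k in range(int(T / dt)):
        t = k * dt
        a_l = -2.0 if 10.0 < t < 15.0 else 0.0
        brake = a_l < 0.0
        a_f = acceleration(v_f, x_l - x_f, v_l - v_f, brake)
        v_l = max(v_l + a_l * dt, 0.0); x_l += v_l * dt
        v_f = max(v_f + a_f * dt, 0.0); x_f += v_f * dt
    print(f"final gap: {x_l - x_f:.1f} m, follower speed: {v_f:.1f} m/s")
    ```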

  19. Local-world and cluster-growing weighted networks with controllable clustering

    NASA Astrophysics Data System (ADS)

    Yang, Chun-Xia; Tang, Min-Xuan; Tang, Hai-Qiang; Deng, Qiang-Qiang

    2014-12-01

    We constructed an improved weighted network model by introducing a local-world selection mechanism and a triangle coupling mechanism into the traditional BBV model. The model yields power-law distributions of degree, strength and edge weight, and presents linear relationships both between degree and strength and between degree and the clustering coefficient. In particular, the model allows node strength to grow faster than degree. Moreover, the model tunes the clustering coefficient more soundly and efficiently than the original BBV model. Finally, based on our improved model, we analyze the virus-spreading process and find that reducing the size of the local world strongly inhibits virus spread.
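
    A compact sketch of this type of growth process is shown below: each new node chooses targets strength-preferentially within a randomly sampled local world, existing edge weights are rearranged BBV-style around each target, and a triad-formation step adds clustering. Parameter names, values and simplifications are illustrative assumptions, not the exact rules of the cited model:

    ```python
    import random

    w0, delta, M_local, m_edges, p_triad = 1.0, 1.0, 10, 2, 0.5
    weights = {}    # (i, j) with i < j -> edge weight
    strength = {}   # node -> sum of incident edge weights

    def edge(i, j):
        return (i, j) if i < j else (j, i)

    def add_or_reinforce(i, j, w):
        key = edge(i, j)
        weights[key] = weights.get(key, 0.0) + w
        strength[i] = strength.get(i, 0.0) + w
        strength[j] = strength.get(j, 0.0) + w

    def bbv_rearrange(i):
        """BBV rule: a new attachment to i adds delta, spread over i's existing edges."""
        incident = [k for k in weights if i in k]
        s = sum(weights[k] for k in incident)
        for k in incident:
            share = delta * weights[k] / s
            weights[k] += share
            strength[k[0]] += share
            strength[k[1]] += share

    for a, b in [(0, 1), (1, 2), (0, 2)]:   # seed: a small weighted triangle
        add_or_reinforce(a, b, w0)

    for new in range(3, 200):
        nodes = list(strength)
        local = random.sample(nodes, min(M_local, len(nodes)))
        targets = set()
        while len(targets) < min(m_edges, len(local)):
            # Strength-preferential choice restricted to the local world.
            tot = sum(strength[n] for n in local)
            r, acc = random.uniform(0, tot), 0.0
            for n in local:
                acc += strength[n]
                if acc >= r:
                    targets.add(n)
                    break
        for t in list(targets):
            bbv_rearrange(t)
            add_or_reinforce(new, t, w0)
            if random.random() < p_triad:   # triangle coupling raises clustering
                nbrs = [k[0] if k[1] == t else k[1] for k in weights if t in k]
                nbrs = [n for n in nbrs if n != new and edge(new, n) not in weights]
                if nbrs:
                    add_or_reinforce(new, random.choice(nbrs), w0)

    print("nodes:", len(strength), "edges:", len(weights))
    ```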

  20. Shuttle Ground Operations Efficiencies/Technologies (SGOE/T) study. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    Scholz, A. L.; Hart, M. T.; Lowry, D. J.

    1987-01-01

    Methods and technology were defined to reduce the overall operations cost of a major space program. Space Shuttle processing at Kennedy Space Center (KSC) served as the working model and the source of operational information. Methods of improving the efficiency of ground operations were assessed, and technology elements that could reduce cost were identified. Emphasis is on: (1) specific technology items and (2) management approaches required to develop and support efficient ground operations. The prime study results are recommendations on how to achieve more efficient operations and the identification of existing or new technology that would make vehicle processing, in both the current program and future programs, more efficient and therefore less costly.
