Sample records for efficiency-based models

  1. Assessment of Energy Efficient and Model Based Control

    DTIC Science & Technology

    2017-06-15

    ARL-TR-8042 ● JUNE 2017 ● US Army Research Laboratory. Assessment of Energy-Efficient and Model-Based Control, by Craig Lennon.

  2. [Ecological management model of agriculture-pasture ecotone based on the theory of energy and material flow--a case study in Houshan dryland area of Inner Mongolia].

    PubMed

    Fan, Jinlong; Pan, Zhihua; Zhao, Ju; Zheng, Dawei; Tuo, Debao; Zhao, Peiyi

    2004-04-01

    The degradation of the ecological environment in the agriculture-pasture ecotone of northern China has drawn increasing attention. Based on many years of research, and guided by the theory of energy and material flow, this paper puts forward an ecological management model that takes a hill as the basic cell, according to the natural, social, and economic characteristics of the Houshan dryland farming area within the northern agriculture-pasture ecotone. The inputs and outputs of three models, i.e., the traditional along-slope-tillage model, the artificial grassland model, and the ecological management model, were observed and recorded in detail in 1999. Energy and material flow analysis based on field tests showed that, compared with the traditional model, the ecological management model could increase solar use efficiency by 8.3%, energy output by 8.7%, energy conversion efficiency by 19.4%, N output by 26.5%, N conversion efficiency by 57.1%, P output by 12.1%, P conversion efficiency by 45.0%, and water use efficiency by 17.7%. Among the models, the artificial grassland model had the lowest solar use efficiency, energy output, and energy conversion efficiency, while the ecological management model had the highest outputs and benefits. It was the best model, with high economic effect, increasing economic benefits by 16.1% compared with the traditional model.

  3. Cost drivers and resource allocation in military health care systems.

    PubMed

    Fulton, Larry; Lasdon, Leon S; McDaniel, Reuben R

    2007-03-01

    This study illustrates the feasibility of incorporating technical efficiency considerations in the funding of military hospitals and identifies the primary drivers for hospital costs. Secondary data collected for 24 U.S.-based Army hospitals and medical centers for the years 2001 to 2003 are the basis for this analysis. Technical efficiency was measured by using data envelopment analysis; subsequently, efficiency estimates were included in logarithmic-linear cost models that specified cost as a function of volume, complexity, efficiency, time, and facility type. These logarithmic-linear models were compared against stochastic frontier analysis models. A parsimonious, three-variable, logarithmic-linear model composed of volume, complexity, and efficiency variables exhibited a strong linear relationship with observed costs (R(2) = 0.98). This model also proved reliable in forecasting (R(2) = 0.96). Based on our analysis, as much as $120 million might be reallocated to improve the United States-based Army hospital performance evaluated in this study.
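
    As a rough illustration of the kind of logarithmic-linear cost model described above, the sketch below regresses ln(cost) on ln(volume), ln(complexity), and ln(efficiency) by ordinary least squares. All data are synthetic placeholders standing in for the study's 24-hospital panel, and the variable ranges are invented.

```python
# Hedged sketch of a logarithmic-linear cost model: ln(cost) regressed on
# ln(volume), ln(complexity), and ln(efficiency). Data are synthetic
# placeholders, not the study's hospital panel.
import numpy as np

rng = np.random.default_rng(0)
n = 24
volume = rng.uniform(1e4, 1e5, n)        # annual workload units (assumed)
complexity = rng.uniform(0.8, 1.6, n)    # case-mix index (assumed)
efficiency = rng.uniform(0.6, 1.0, n)    # DEA efficiency score in (0, 1]
cost = 50.0 * volume**0.9 * complexity**1.2 * efficiency**-0.5 \
       * rng.lognormal(0.0, 0.05, n)     # synthetic cost with noise

X = np.column_stack([np.ones(n), np.log(volume), np.log(complexity), np.log(efficiency)])
y = np.log(cost)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

resid = y - X @ beta
r2 = 1.0 - resid.var() / y.var()
print("coefficients:", beta.round(3), "| R^2:", round(r2, 3))
```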

  4. A two-stage DEA approach for environmental efficiency measurement.

    PubMed

    Song, Malin; Wang, Shuhong; Liu, Wei

    2014-05-01

    The slacks-based measure (SBM) model based on constant returns to scale has achieved good results in addressing undesirable outputs, such as waste water and waste gas, when measuring environmental efficiency. However, the traditional SBM model cannot deal with the scenario in which desirable outputs are constant. Based on the axiomatic theory of productivity, this paper carries out systematic research on the SBM model considering undesirable outputs, and further expands the SBM model from the perspective of network analysis. The new model can not only perform efficiency evaluation considering undesirable outputs, but also calculate desirable and undesirable outputs separately. The latter advantage solves the "dependence" problem of outputs, namely that desirable outputs cannot be increased without producing undesirable outputs. The illustration that follows shows that the efficiency values obtained by the two-stage approach are smaller than those obtained by the traditional SBM model. Our approach provides a more profound analysis of how to improve the environmental efficiency of decision making units.
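
    For readers unfamiliar with the underlying machinery, the sketch below solves the classic input-oriented CCR envelopment linear program, the constant-returns-to-scale building block that SBM-type models extend with slack variables and separate undesirable-output constraints. The three-DMU dataset is a toy example, and the formulation shown is the standard textbook LP, not the paper's two-stage network model.

```python
# Input-oriented CCR envelopment LP, solved once per decision-making unit:
#   min theta  s.t.  sum_j lam_j * x_ij <= theta * x_io  (inputs)
#                    sum_j lam_j * y_rj >= y_ro          (outputs), lam >= 0
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0]])   # inputs, DMUs x inputs (toy)
Y = np.array([[1.0], [2.0], [1.5]])                  # desirable outputs (toy)

def ccr_efficiency(o, X, Y):
    n, m = X.shape                                   # DMUs, inputs
    s = Y.shape[1]                                   # outputs
    c = np.zeros(1 + n); c[0] = 1.0                  # minimize theta
    A, b = [], []
    for i in range(m):                               # input constraints
        A.append(np.concatenate(([-X[o, i]], X[:, i]))); b.append(0.0)
    for r in range(s):                               # output constraints
        A.append(np.concatenate(([0.0], -Y[:, r]))); b.append(-Y[o, r])
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(0, None)] * (1 + n))
    return res.x[0]

for o in range(X.shape[0]):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o, X, Y):.3f}")
```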

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Bo; Abdelaziz, Omar; Shrestha, Som S.

    Based on the FY16 laboratory investigation of R-22 and R-410A alternative low-GWP refrigerants in two baseline rooftop air conditioners (RTUs), we used the DOE/ORNL Heat Pump Design Model to model the two RTUs and calibrated the models against the experimental data. Using the calibrated equipment models, we compared the compressor efficiencies and heat exchanger performances. An efficiency-based compressor mapping method was developed, which is able to predict compressor performance of the alternative low-GWP refrigerants accurately. Extensive model-based optimizations were conducted to provide a fair comparison between all the low-GWP candidates by selecting their preferred configurations at the same cooling capacity and compressor efficiencies.

  6. Strained layer relaxation effect on current crowding and efficiency improvement of GaN based LED

    NASA Astrophysics Data System (ADS)

    Aurongzeb, Deeder

    2012-02-01

    The efficiency droop of GaN-based LEDs at high power and high temperature has been addressed by several groups based on carrier delocalization and the photon recycling effect (radiative recombination). We extend the previous droop models to optical loss parameters. We correlate strained-layer relaxation at high temperature and high current density with carrier delocalization. We propose a third-order model and show that Shockley-Read-Hall and Auger recombination effects are not enough to account for the efficiency loss. Several strained-layer modification schemes are proposed based on the model.

  7. Improving Computational Efficiency of Prediction in Model-Based Prognostics Using the Unscented Transform

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew John; Goebel, Kai Frank

    2010-01-01

    Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
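
    The unscented transform itself is compact enough to sketch directly: propagate a small deterministic set of sigma points through a nonlinear function and recover the output mean and covariance from weighted statistics. The quadratic function g below is a placeholder for the EOL simulation; the scaling parameters are conventional defaults, not values from the paper.

```python
# Unscented transform: deterministic sigma points through a nonlinear map g,
# then weighted mean and covariance of the transformed points.
import numpy as np

def unscented_transform(mean, cov, g, alpha=0.1, beta=2.0, kappa=0.0):
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)          # scaled matrix square root
    sigma = [mean] + [mean + S[:, i] for i in range(n)] \
                   + [mean - S[:, i] for i in range(n)]
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wm[0] = lam / (n + lam)
    wc = wm.copy(); wc[0] += 1 - alpha**2 + beta
    ys = np.array([g(s) for s in sigma])
    y_mean = wm @ ys
    dy = ys - y_mean
    y_cov = (wc[:, None] * dy).T @ dy
    return y_mean, y_cov

g = lambda x: np.array([x[0]**2 + x[1]])             # placeholder nonlinear map
m, C = np.array([1.0, 2.0]), np.diag([0.1, 0.2])
print(unscented_transform(m, C, g))
```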

  8. Efficient model checking of network authentication protocol based on SPIN

    NASA Astrophysics Data System (ADS)

    Tan, Zhi-hua; Zhang, Da-fang; Miao, Li; Zhao, Dan

    2013-03-01

    Model checking is a very useful technique for verifying network authentication protocols. In order to improve the efficiency of modeling and verifying such protocols with model checking technology, this paper first proposes a universal formal description method for the protocol. Combined with the model checker SPIN, the method can conveniently verify properties of the protocol. Through several model-simplification strategies, the approach can model protocols efficiently and reduce the state space of the model. Compared with the previous literature, this paper achieves a higher degree of automation and better verification efficiency. Finally, based on the described method, we model and verify the Privacy and Key Management (PKM) authentication protocol. The experimental results show that the model checking method is effective and useful for other authentication protocols.

  9. Spatial econometric analysis of factors influencing regional energy efficiency in China.

    PubMed

    Song, Malin; Chen, Yu; An, Qingxian

    2018-05-01

    Increased environmental pollution and energy consumption caused by China's rapid development have raised considerable concern and become a focus of the government and the public. This study employs the super-efficiency slack-based model-data envelopment analysis (SBM-DEA) to measure the total factor energy efficiency of 30 provinces in China. The estimation model for the spatial interaction intensity of regional total factor energy efficiency is based on Wilson's maximum entropy model, and is used to analyze the factors that affect the potential value of total factor energy efficiency using spatial dynamic panel data for the 30 provinces during 2000-2014. The study found differences and spatial correlations in energy efficiency among provinces and regions in China. Energy efficiency in the eastern, central, and western regions fluctuated significantly, mainly because of the influence of industrial structure, energy intensity, and technological progress on energy efficiency. This research is of great significance to China's energy efficiency and regional coordinated development.

  10. Rice growing farmers efficiency measurement using a slack based interval DEA model with undesirable outputs

    NASA Astrophysics Data System (ADS)

    Khan, Sahubar Ali Mohd. Nadhar; Ramli, Razamin; Baten, M. D. Azizul

    2017-11-01

    In recent years, eco-efficiency, which considers the effect of the production process on the environment when determining the efficiency of firms, has gained traction and a lot of attention. Rice farming is one such production process, typically producing two types of outputs: economically desirable and environmentally undesirable. In efficiency analysis, these undesirable outputs cannot be ignored and need to be included in the model to obtain an accurate estimate of a firm's efficiency. Numerous approaches have been used in the data envelopment analysis (DEA) literature to account for undesirable outputs, of which the directional distance function (DDF) approach is the most widely used, as it allows for a simultaneous increase in desirable outputs and reduction of undesirable outputs. Additionally, slack-based DDF DEA approaches consider output shortfalls and input excesses in determining efficiency. When data uncertainty is present, a deterministic DEA model is not suitable, because the effects of uncertain data are not considered. In this case, the interval data approach is suitable for handling data uncertainty, as it is simpler to model and needs less information about the underlying data distribution and membership function. The proposed model is an enhanced DEA model based on the DDF approach that incorporates a slack-based measure to determine efficiency in the presence of undesirable factors and data uncertainty. The interval data approach was used to estimate the values of inputs, undesirable outputs, and desirable outputs. Two separate slack-based interval DEA models were constructed for optimistic and pessimistic scenarios. The developed model was used to determine the efficiency of rice farmers from Kepala Batas, Kedah, and the obtained results were compared to those of a deterministic DDF DEA model. The study found that 15 out of 30 farmers are efficient in all cases. It also found that the average efficiency value of all farmers in the deterministic case is always lower than in the optimistic scenario and higher than in the pessimistic scenario. The results agree with the hypothesis, since farmers in the optimistic scenario face the best production situation and those in the pessimistic scenario the worst. The results show that the proposed model can be applied when data uncertainty is present in the production environment.

  11. Value-based Proposition for a Dedicated Interventional Pulmonology Suite: an Adaptable Business Model.

    PubMed

    Desai, Neeraj R; French, Kim D; Diamond, Edward; Kovitz, Kevin L

    2018-05-31

    Value-based care is evolving with a focus on improving efficiency, reducing cost, and enhancing the patient experience. Interventional pulmonology has the opportunity to lead an effective value-based care model. This model is supported by the relatively low cost of pulmonary procedures and has the potential to improve efficiencies in thoracic care. We discuss key strategies to evaluate and improve efficiency in Interventional Pulmonology practice and describe our experience in developing an interventional pulmonology suite. Such a model can be adapted to other specialty areas and may encourage a more coordinated approach to specialty care. Copyright © 2018. Published by Elsevier Inc.

  12. Measuring the efficiency of zakat collection process using data envelopment analysis

    NASA Astrophysics Data System (ADS)

    Hamzah, Ahmad Aizuddin; Krishnan, Anath Rau

    2016-10-01

    It is necessary for each zakat institution in the nation to measure and understand, in a timely manner, its efficiency in collecting zakat for the sake of continuous improvement. Pusat Zakat Sabah, Malaysia, which began operating in early 2007, is no exception. However, measuring collection efficiency is not an easy task, as it usually incorporates multiple inputs and/or outputs. This paper sequentially employed three data envelopment analysis models, namely the Charnes-Cooper-Rhodes (CCR) primal model, the CCR dual model, and the slack-based model, to quantitatively evaluate the efficiency of zakat collection in Sabah from 2007 to 2015, treating each year as a decision making unit. The three models were developed based on two inputs (i.e., number of zakat branches and number of staff) and one output (i.e., total collection). The causes of inefficiency and suggestions on how the efficiency in each year could have been improved are disclosed.

  13. Rapid Optimization of External Quantum Efficiency of Thin Film Solar Cells Using Surrogate Modeling of Absorptivity.

    PubMed

    Kaya, Mine; Hajimirza, Shima

    2018-05-25

    This paper uses surrogate modeling for very fast design of thin film solar cells with improved solar-to-electricity conversion efficiency. We demonstrate that the wavelength-specific optical absorptivity of a thin film multi-layered amorphous-silicon-based solar cell can be modeled accurately with Neural Networks and can be efficiently approximated as a function of cell geometry and wavelength. Consequently, the external quantum efficiency can be computed by averaging surrogate absorption and carrier recombination contributions over the entire irradiance spectrum in an efficient way. Using this framework, we optimize a multi-layer structure consisting of ITO front coating, metallic back-reflector and oxide layers for achieving maximum efficiency. Our required computation time for an entire model fitting and optimization is 5 to 20 times less than the best previous optimization results based on direct Finite Difference Time Domain (FDTD) simulations, therefore proving the value of surrogate modeling. The resulting optimization solution suggests at least 50% improvement in the external quantum efficiency compared to bare silicon, and 25% improvement compared to a random design.
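
    A hedged sketch of the surrogate-modeling step: fit a small neural network that maps (geometry, wavelength) to absorptivity, then query it cheaply in place of full-wave simulations. The training data below are synthetic stand-ins rather than FDTD results, and the two-feature parameterization is invented for illustration.

```python
# Neural-network surrogate for wavelength-specific absorptivity; the fitted
# model replaces expensive electromagnetic simulations inside an optimizer.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 2000
thickness = rng.uniform(50.0, 500.0, n)     # layer thickness, nm (assumed)
wavelength = rng.uniform(300.0, 1100.0, n)  # wavelength, nm
# synthetic "absorptivity" with interference-like structure as a placeholder
absorp = 0.5 + 0.4 * np.sin(2 * np.pi * thickness / wavelength) \
             * np.exp(-wavelength / 2000.0)

X = np.column_stack([thickness, wavelength])
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0)
surrogate.fit(X, absorp)

# Spectrally averaged prediction for one candidate geometry; in the paper's
# framework an average like this feeds the external quantum efficiency.
grid = np.column_stack([np.full(200, 200.0), np.linspace(300.0, 1100.0, 200)])
print("mean predicted absorptivity at 200 nm:",
      surrogate.predict(grid).mean().round(3))
```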

  14. PDF-based heterogeneous multiscale filtration model.

    PubMed

    Gong, Jian; Rutland, Christopher J

    2015-04-21

    Motivated by modeling of gasoline particulate filters (GPFs), a probability density function (PDF) based heterogeneous multiscale filtration (HMF) model is developed to calculate filtration efficiency of clean particulate filters. A new methodology based on statistical theory and classic filtration theory is developed in the HMF model. Based on the analysis of experimental porosimetry data, a pore size probability density function is introduced to represent heterogeneity and multiscale characteristics of the porous wall. The filtration efficiency of a filter can be calculated as the sum of the contributions of individual collectors. The resulting HMF model overcomes the limitations of classic mean filtration models which rely on tuning of the mean collector size. Sensitivity analysis shows that the HMF model recovers the classical mean model when the pore size variance is very small. The HMF model is validated by fundamental filtration experimental data from different scales of filter samples. The model shows a good agreement with experimental data at various operating conditions. The effects of the microstructure of filters on filtration efficiency as well as the most penetrating particle size are correctly predicted by the model.
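
    The core PDF-weighted idea can be illustrated numerically: rather than evaluating a single mean collector size, average a single-collector efficiency over a pore-size probability density. Both the lognormal pore-size PDF and the toy single-collector efficiency function below are assumptions for illustration, not the paper's calibrated forms.

```python
# PDF-weighted filtration efficiency: integrate a per-pore capture efficiency
# against an assumed pore-size distribution instead of using one mean size.
import numpy as np
from scipy.stats import lognorm

d = np.linspace(1.0, 60.0, 600)             # pore diameter grid, micrometers
pdf = lognorm(s=0.5, scale=15.0).pdf(d)     # assumed pore-size distribution

def single_collector_efficiency(d_pore, d_particle=0.1):
    # toy form: smaller pores capture a given particle size more effectively
    return 1.0 - np.exp(-5.0 * d_particle / d_pore)

eta = single_collector_efficiency(d)
overall = np.sum(pdf * eta) / np.sum(pdf)   # discrete weighted average
print(f"PDF-weighted filtration efficiency: {overall:.3f}")
```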

  15. A surrogate-based sensitivity quantification and Bayesian inversion of a regional groundwater flow model

    NASA Astrophysics Data System (ADS)

    Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.; Amerjeed, Mansoor

    2018-02-01

    Bayesian inference using Markov Chain Monte Carlo (MCMC) provides an explicit framework for stochastic calibration of hydrogeologic models accounting for uncertainties; however, the MCMC sampling entails a large number of model calls, and could easily become computationally unwieldy if the high-fidelity hydrogeologic model simulation is time consuming. This study proposes a surrogate-based Bayesian framework to address this notorious issue, and illustrates the methodology by inverse modeling a regional MODFLOW model. The high-fidelity groundwater model is approximated by a fast statistical model using Bagging Multivariate Adaptive Regression Spline (BMARS) algorithm, and hence the MCMC sampling can be efficiently performed. In this study, the MODFLOW model is developed to simulate the groundwater flow in an arid region of Oman consisting of mountain-coast aquifers, and used to run representative simulations to generate training dataset for BMARS model construction. A BMARS-based Sobol' method is also employed to efficiently calculate input parameter sensitivities, which are used to evaluate and rank their importance for the groundwater flow model system. According to sensitivity analysis, insensitive parameters are screened out of Bayesian inversion of the MODFLOW model, further saving computing efforts. The posterior probability distribution of input parameters is efficiently inferred from the prescribed prior distribution using observed head data, demonstrating that the presented BMARS-based Bayesian framework is an efficient tool to reduce parameter uncertainties of a groundwater system.
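
    A compact sketch of the surrogate-accelerated Bayesian idea: train a cheap approximation on a handful of expensive forward runs, then run the MCMC chain against the surrogate likelihood. A cubic polynomial stands in for the paper's BMARS emulator, and the synthetic one-parameter "forward model" stands in for MODFLOW.

```python
# Surrogate-accelerated Metropolis sampling: the chain never calls the
# expensive forward model, only the cheap fitted approximation.
import numpy as np

rng = np.random.default_rng(2)

def expensive_model(k):                  # pretend this is slow (e.g., a head solve)
    return 2.0 * np.log(k) + 1.0

# 1) train a cheap surrogate on a handful of forward runs
k_train = np.linspace(0.5, 5.0, 15)
coef = np.polyfit(k_train, expensive_model(k_train), deg=3)
surrogate = lambda k: np.polyval(coef, k)

# 2) Metropolis sampling against the surrogate likelihood
h_obs, sigma = 3.2, 0.1
def log_post(k):
    if not 0.5 < k < 5.0:                # uniform prior bounds (assumed)
        return -np.inf
    return -0.5 * ((surrogate(k) - h_obs) / sigma) ** 2

chain, k = [], 2.0
lp = log_post(k)
for _ in range(5000):
    k_new = k + 0.2 * rng.standard_normal()
    lp_new = log_post(k_new)
    if np.log(rng.random()) < lp_new - lp:   # accept/reject step
        k, lp = k_new, lp_new
    chain.append(k)
print("posterior mean k:", np.mean(chain[1000:]).round(3))
```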

  16. Optimisation of GaN LEDs and the reduction of efficiency droop using active machine learning

    DOE PAGES

    Rouet-Leduc, Bertrand; Barros, Kipton Marcos; Lookman, Turab; ...

    2016-04-26

    A fundamental challenge in the design of LEDs is to maximise electro-luminescence efficiency at high current densities. We simulate GaN-based LED structures that delay the onset of efficiency droop by spreading carrier concentrations evenly across the active region. Statistical analysis and machine learning effectively guide the selection of the next LED structure to be examined based upon its expected efficiency as well as model uncertainty. This active learning strategy rapidly constructs a model that predicts Poisson-Schrödinger simulations of devices, and that simultaneously produces structures with higher simulated efficiencies.
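
    The active-learning loop is easy to sketch in miniature: fit a Gaussian process to the structures simulated so far, then pick the next candidate by an upper-confidence score that balances predicted efficiency against model uncertainty. The one-dimensional "structure parameter" and toy efficiency function below stand in for the paper's Poisson-Schrödinger device simulations.

```python
# Active learning with a Gaussian-process model: query the candidate that
# maximizes predicted value plus an uncertainty bonus.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(6)
simulate = lambda x: np.sin(3 * x) * (1 - x) + 0.02 * rng.standard_normal(x.shape)

X_pool = np.linspace(0, 1, 200).reshape(-1, 1)       # candidate structures
X_seen = X_pool[rng.choice(200, 5, replace=False)]   # small initial design
y_seen = simulate(X_seen.ravel())

for it in range(15):
    gp = GaussianProcessRegressor(kernel=RBF(0.1), alpha=1e-4).fit(X_seen, y_seen)
    mu, sd = gp.predict(X_pool, return_std=True)
    x_next = X_pool[np.argmax(mu + 1.5 * sd)]        # explore/exploit trade-off
    X_seen = np.vstack([X_seen, x_next])
    y_seen = np.append(y_seen, simulate(x_next))

print("best simulated efficiency found:", y_seen.max().round(3))
```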

  17. Efficient generation of mouse models of human diseases via ABE- and BE-mediated base editing.

    PubMed

    Liu, Zhen; Lu, Zongyang; Yang, Guang; Huang, Shisheng; Li, Guanglei; Feng, Songjie; Liu, Yajing; Li, Jianan; Yu, Wenxia; Zhang, Yu; Chen, Jia; Sun, Qiang; Huang, Xingxu

    2018-06-14

    A recently developed adenine base editor (ABE) efficiently converts A to G and is potentially useful for clinical applications. However, its precision and efficiency in vivo remain to be addressed. Here we achieve A-to-G conversion in vivo at frequencies up to 100% by microinjection of ABE mRNA together with sgRNAs. We then generate mouse models harboring clinically relevant mutations at Ar and Hoxd13, which recapitulate the respective clinical defects. Furthermore, we achieve both C-to-T and A-to-G base editing by using a combination of ABE and SaBE3, thus creating a mouse model harboring multiple mutations. We also demonstrate the specificity of ABE by deep sequencing and whole-genome sequencing (WGS). Taken together, ABE is highly efficient and precise in vivo, making it feasible to model and potentially cure relevant genetic diseases.

  18. Model-based optimizations of packaged rooftop air conditioners using low global warming potential refrigerants

    DOE PAGES

    Shen, Bo; Abdelaziz, Omar; Shrestha, Som; ...

    2017-10-31

    Based on laboratory investigations for R-22 and R-410A alternative low GWP refrigerants in two baseline rooftop air conditioners (RTU), the DOE/ORNL Heat Pump Design Model was used to model the two RTUs and the models were calibrated against the experimental data. We compared the compressor efficiencies and heat exchanger performances. An efficiency-based compressor mapping method was developed. Extensive model-based optimizations were conducted to provide a fair comparison between all the low GWP candidates by selecting optimal configurations. The results illustrate that all the R-22 low GWP refrigerants will lead to slightly lower COPs. ARM-20B appears to be the best R-22 replacement at normal conditions. At higher ambient temperatures, ARM-20A exhibits better performance. All R-410A low GWP candidates will result in similar or better efficiencies than R-410A. R-32 has the best COP while requiring the smallest compressor. Finally, R-452B uses the closest compressor displacement volume and achieves the same efficiency as R-410A.

  19. Model-based optimizations of packaged rooftop air conditioners using low global warming potential refrigerants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Bo; Abdelaziz, Omar; Shrestha, Som

    Based on laboratory investigations for R-22 and R-410A alternative low GWP refrigerants in two baseline rooftop air conditioners (RTU), the DOE/ORNL Heat Pump Design Model was used to model the two RTUs and the models were calibrated against the experimental data. We compared the compressor efficiencies and heat exchanger performances. An efficiency-based compressor mapping method was developed. Extensive model-based optimizations were conducted to provide a fair comparison between all the low GWP candidates by selecting optimal configurations. The results illustrate that all the R-22 low GWP refrigerants will lead to slightly lower COPs. ARM-20B appears to be the best R-22 replacement at normal conditions. At higher ambient temperatures, ARM-20A exhibits better performance. All R-410A low GWP candidates will result in similar or better efficiencies than R-410A. R-32 has the best COP while requiring the smallest compressor. Finally, R-452B uses the closest compressor displacement volume and achieves the same efficiency as R-410A.

  20. Investigating market efficiency through a forecasting model based on differential equations

    NASA Astrophysics Data System (ADS)

    de Resende, Charlene C.; Pereira, Adriano C. M.; Cardoso, Rodrigo T. N.; de Magalhães, A. R. Bosco

    2017-05-01

    A new differential-equation-based model for stock price trend forecasting is proposed as a tool to investigate efficiency in an emerging market. Its predictive power was shown statistically to be higher than that of a completely random model, signaling the presence of arbitrage opportunities. Conditions for enhancing accuracy are investigated, and the application of the model as part of a trading strategy is discussed.
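
    The general fit-then-integrate pattern of such a forecaster can be sketched as follows: posit a simple linear ODE dP/dt = aP + b for the price, estimate (a, b) from finite differences over a historical window, and integrate forward. The specific equations of the paper are not reproduced here; the series and the ODE form are illustrative.

```python
# Differential-equation trend forecast: fit dP/dt = a*P + b by least squares
# on one-step price differences, then integrate forward with explicit Euler.
import numpy as np

prices = np.array([100.0, 101.2, 102.1, 103.5, 104.2, 105.9, 106.8, 108.1])
dP = np.diff(prices)                       # finite differences, dt = 1
A = np.column_stack([prices[:-1], np.ones(len(dP))])
(a, b), *_ = np.linalg.lstsq(A, dP, rcond=None)

P, forecast = prices[-1], []
for _ in range(5):                         # 5-step-ahead forecast
    P = P + a * P + b
    forecast.append(round(P, 2))
print("a =", round(a, 4), "b =", round(b, 4), "forecast:", forecast)
```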

  1. Model of Ni-63 battery with realistic PIN structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munson, Charles E.; Voss, Paul L.; Ougazzaden, Abdallah, E-mail: aougazza@georgiatech-metz.fr

    2015-09-14

    GaN, with its wide bandgap of 3.4 eV, has emerged as an efficient material for designing high-efficiency betavoltaic batteries. An important part of designing efficient betavoltaic batteries involves a good understanding of the full process, from the behavior of the nuclear material and the creation of electron-hole pairs all the way through the collection of photo-generated carriers. This paper presents a detailed model based on Monte Carlo and Silvaco for a GaN-based betavoltaic battery device, modeled after Ni-63 as an energy source. The accuracy of the model is verified by comparing it with experimental values obtained for a GaN-based p-i-n structure under scanning electron microscope illumination.

  2. Model of Ni-63 battery with realistic PIN structure

    NASA Astrophysics Data System (ADS)

    Munson, Charles E.; Arif, Muhammad; Streque, Jeremy; Belahsene, Sofiane; Martinez, Anthony; Ramdane, Abderrahim; El Gmili, Youssef; Salvestrini, Jean-Paul; Voss, Paul L.; Ougazzaden, Abdallah

    2015-09-01

    GaN, with its wide bandgap of 3.4 eV, has emerged as an efficient material for designing high-efficiency betavoltaic batteries. An important part of designing efficient betavoltaic batteries involves a good understanding of the full process, from the behavior of the nuclear material and the creation of electron-hole pairs all the way through the collection of photo-generated carriers. This paper presents a detailed model based on Monte Carlo and Silvaco for a GaN-based betavoltaic battery device, modeled after Ni-63 as an energy source. The accuracy of the model is verified by comparing it with experimental values obtained for a GaN-based p-i-n structure under scanning electron microscope illumination.

  3. A model for the Global Quantum Efficiency for a TPB-based wavelength-shifting system used with photomultiplier tubes in liquid argon in MicroBooNE

    NASA Astrophysics Data System (ADS)

    Pate, S. F.; Wester, T.; Bugel, L.; Conrad, J.; Henderson, E.; Jones, B. J. P.; McLean, A. I. L.; Moon, J. S.; Toups, M.; Wongjirad, T.

    2018-02-01

    We present a model for the Global Quantum Efficiency (GQE) of the MicroBooNE optical units. An optical unit consists of a flat, circular acrylic plate, coated with tetraphenyl butadiene (TPB), positioned near the photocathode of a 20.2-cm diameter photomultiplier tube. The plate converts the ultra-violet scintillation photons from liquid argon into visible-spectrum photons to which the cryogenic phototubes are sensitive. The GQE is the convolution of the efficiency of the plates that convert the 128 nm scintillation light from liquid argon to visible light, the efficiency of the shifted light to reach the photocathode, and the efficiency of the cryogenic photomultiplier tube. We develop a GEANT4-based model of the optical unit, based on first principles, and obtain the range of probable values for the expected number of detected photoelectrons (NPE) given the known systematic errors on the simulation parameters. We compare results from four measurements of the NPE determined using alpha-particle sources placed at two distances from a TPB-coated plate in a liquid argon cryostat test stand. We also directly measured the radial dependence of the quantum efficiency, and find that this has the same shape as predicted by our model. Our model results in a GQE of 0.0055±0.0009 for the MicroBooNE optical units. While the information shown here is MicroBooNE specific, the approach to the model and the collection of simulation parameters will be widely applicable to many liquid-argon-based light collection systems.

  4. Efficiency-Based Funding for Public Four-Year Colleges and Universities

    ERIC Educational Resources Information Center

    Sexton, Thomas R.; Comunale, Christie L.; Gara, Stephen C.

    2012-01-01

    We propose an efficiency-based mechanism for state funding of public colleges and universities using data envelopment analysis. We describe the philosophy and the mathematics that underlie the approach and apply the proposed model to data from 362 U.S. public four-year colleges and universities. The model provides incentives to institution…

  5. Sensitivity to the Sampling Process Emerges From the Principle of Efficiency.

    PubMed

    Jara-Ettinger, Julian; Sun, Felix; Schulz, Laura; Tenenbaum, Joshua B

    2018-05-01

    Humans can seamlessly infer other people's preferences based on what they do. Broadly, two types of accounts have been proposed to explain different aspects of this ability. The first account focuses on spatial information: agents' efficient navigation in space reveals what they like. The second account focuses on statistical information: uncommon choices reveal stronger preferences. Together, these two lines of research suggest that we have two distinct capacities for inferring preferences. Here we propose that this is not the case, and that spatial and statistical preference inferences can both be explained by the single assumption that agents act efficiently. We show that people's sensitivity to spatial and statistical information when they infer preferences is best predicted by a computational model of the principle of efficiency, and that this model outperforms dual-system models, even when the latter are fit to participant judgments. Our results suggest that, as adults, a unified understanding of agency under the principle of efficiency underlies our ability to infer preferences. Copyright © 2018 Cognitive Science Society, Inc.

  6. The methodology of the gas turbine efficiency calculation

    NASA Astrophysics Data System (ADS)

    Kotowicz, Janusz; Job, Marcin; Brzęczek, Mateusz; Nawrat, Krzysztof; Mędrych, Janusz

    2016-12-01

    In the paper, a methodology for calculating the isentropic efficiency of the compressor and turbine in a gas turbine installation on the basis of polytropic efficiency characteristics is presented. A gas turbine model is developed into software for power plant simulation. Calculation algorithms based on an iterative model are shown for the isentropic efficiency of the compressor, and for the isentropic efficiency of the turbine based on the turbine inlet temperature. The isentropic efficiency characteristics of the compressor and the turbine are developed by means of the above-mentioned algorithms. The development of gas turbines with high compression ratios was the main driving force for this analysis. The obtained gas turbine electric efficiency characteristics show that an increase of the pressure ratio above 50 is not justified, due to the slight increase in efficiency accompanied by a significant increase in the combustor outlet (turbine inlet) temperature.
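
    The textbook mapping between polytropic and isentropic efficiency that such characteristics build on is easy to show concretely for a compressor with an ideal gas and constant gamma; the decline of isentropic efficiency at high pressure ratios is visible directly. The numeric values are illustrative, not the paper's.

```python
# Compressor polytropic-to-isentropic efficiency mapping for an ideal gas:
#   T2s/T1 = pi**((g-1)/g),  T2/T1 = pi**((g-1)/(g*eta_poly)),
#   eta_isen = (T2s/T1 - 1) / (T2/T1 - 1)
gamma = 1.4            # ratio of specific heats for air
eta_poly = 0.90        # assumed polytropic efficiency
for pi in (10, 30, 50):                                  # pressure ratios
    tau_ideal = pi ** ((gamma - 1) / gamma)              # isentropic temp ratio
    tau_real = pi ** ((gamma - 1) / (gamma * eta_poly))  # actual temp ratio
    eta_isen = (tau_ideal - 1) / (tau_real - 1)
    print(f"pressure ratio {pi:>2}: isentropic efficiency = {eta_isen:.4f}")
```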

  7. Structural reliability analysis under evidence theory using the active learning kriging model

    NASA Astrophysics Data System (ADS)

    Yang, Xufeng; Liu, Yongshou; Ma, Panke

    2017-11-01

    Structural reliability analysis under evidence theory is investigated. It is rigorously proved that a surrogate model providing only correct sign prediction of the performance function can meet the accuracy requirement of evidence-theory-based reliability analysis. Accordingly, a method based on the active learning kriging model which only correctly predicts the sign of the performance function is proposed. Interval Monte Carlo simulation and a modified optimization method based on Karush-Kuhn-Tucker conditions are introduced to make the method more efficient in estimating the bounds of failure probability based on the kriging model. Four examples are investigated to demonstrate the efficiency and accuracy of the proposed method.

  8. An improvement in the calculation of the efficiency of oxidative phosphorylation and rate of energy dissipation in mitochondria

    NASA Astrophysics Data System (ADS)

    Ghafuri, Mohazabeh; Golfar, Bahareh; Nosrati, Mohsen; Hoseinkhani, Saman

    2014-12-01

    The process of ATP production is one of the most vital processes in living cells and occurs with high efficiency. Thermodynamic evaluation of this process and of the factors involved in oxidative phosphorylation can provide a valuable guide for increasing energy production efficiency in research and industry. Although energy transduction has been studied qualitatively in several works, there are only a few brief reviews based on mathematical models of this subject. In our previous work, we suggested a mathematical model for ATP production based on non-equilibrium thermodynamic principles. In the present study, based on new discoveries about the respiratory chain of animal mitochondria, Golfar's model has been used to generate improved results for the efficiency of oxidative phosphorylation and the rate of energy loss. The results calculated from the modified coefficients for the proton pumps of the respiratory chain enzymes are closer to the experimental results and validate the model.

  9. Efficient Testing Combining Design of Experiment and Learn-to-Fly Strategies

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.; Brandon, Jay M.

    2017-01-01

    Rapid modeling and efficient testing methods are important in a number of aerospace applications. In this study, efficient testing strategies were evaluated in a wind tunnel test environment and combined to suggest a promising approach for both ground-based and flight-based experiments. The benefits of using Design of Experiment techniques, well established in scientific, military, and manufacturing applications, are evaluated in combination with newly developed methods for global nonlinear modeling. The nonlinear modeling methods, referred to as Learn-to-Fly methods, utilize fuzzy logic and multivariate orthogonal function techniques that have been successfully demonstrated in flight tests. The blended approach presented focuses on experiment design and identifies a sequential testing process with clearly defined completion metrics that produce increased testing efficiency.

  10. A testing-coverage software reliability model considering fault removal efficiency and error generation.

    PubMed

    Li, Qiuying; Pham, Hoang

    2017-01-01

    In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency, combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) it is a common phenomenon that the fault detection rate changes during the testing phase; 2) as a result of imperfect debugging, fault removal is accompanied by a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e., they seldom consider imperfect fault removal efficiency. In practical software development, fault removal efficiency cannot always be perfect: detected failures might not be removed completely, the original faults might still exist, and new faults might be introduced meanwhile, which is referred to as the imperfect debugging phenomenon. In this study, a model incorporating the fault introduction rate, fault removal efficiency, and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and fault removal efficiency to model fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data and five criteria. The results show that the model gives better fitting and predictive performance.
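
    As a minimal illustration of the NHPP skeleton such SRGMs share, the sketch below fits the Goel-Okumoto mean value function m(t) = a(1 - exp(-bt)) to cumulative failure counts; the paper's model layers testing coverage, fault removal efficiency, and error generation on top of this kind of structure. The failure data are synthetic.

```python
# Fit the Goel-Okumoto NHPP mean value function to cumulative failure counts.
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(1, 21, dtype=float)                       # test weeks
failures = np.array([12, 21, 29, 35, 41, 45, 49, 52, 55, 57,
                     59, 61, 62, 64, 65, 66, 67, 67, 68, 69], dtype=float)

def mvf(t, a, b):
    # a = expected total number of faults, b = per-fault detection rate
    return a * (1.0 - np.exp(-b * t))

(a, b), _ = curve_fit(mvf, t, failures, p0=(70.0, 0.1))
print(f"expected total faults a = {a:.1f}, detection rate b = {b:.3f}")
print("predicted cumulative failures at t=25:", round(mvf(25.0, a, b), 1))
```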

  11. Improving smoothing efficiency of rigid conformal polishing tool using time-dependent smoothing evaluation model

    NASA Astrophysics Data System (ADS)

    Song, Chi; Zhang, Xuejun; Zhang, Xin; Hu, Haifei; Zeng, Xuefeng

    2017-06-01

    A rigid conformal (RC) lap can smooth mid-spatial-frequency (MSF) errors, which are naturally smaller than the tool size, while still removing large-scale errors in a short time. However, the RC-lap smoothing efficiency is poorer than expected, and existing smoothing models cannot explicitly specify methods to improve this efficiency. We present an explicit time-dependent smoothing evaluation model that contains specific smoothing parameters directly derived from the parametric smoothing model and the Preston equation. Based on the time-dependent model, we propose a strategy to improve RC-lap smoothing efficiency, which incorporates the theoretical model, tool optimization, and efficiency limit determination. Two sets of smoothing experiments were performed to demonstrate the smoothing efficiency achieved using the time-dependent smoothing model. A high, theory-like tool influence function and a limiting tool speed of 300 RPM were obtained.

  12. A model for the Global Quantum Efficiency for a TPB-based wavelength-shifting system used with photomultiplier tubes in liquid argon in MicroBooNE

    DOE PAGES

    Pate, S. F.; Wester, T.; Bugel, L.; ...

    2018-02-28

    We present a model for the Global Quantum Efficiency (GQE) of the MicroBooNE optical units. An optical unit consists of a flat, circular acrylic plate, coated with tetraphenyl butadiene (TPB), positioned near the photocathode of a 20.2-cm diameter photomultiplier tube. The plate converts the ultra-violet scintillation photons from liquid argon into visible-spectrum photons to which the cryogenic phototubes are sensitive. The GQE is the convolution of the efficiency of the plates that convert the 128 nm scintillation light from liquid argon to visible light, the efficiency of the shifted light to reach the photocathode, and the efficiency of the cryogenic photomultiplier tube. We develop a GEANT4-based model of the optical unit, based on first principles, and obtain the range of probable values for the expected number of detected photoelectrons (NPE) given the known systematic errors on the simulation parameters. We compare results from four measurements of the NPE determined using alpha-particle sources placed at two distances from a TPB-coated plate in a liquid argon cryostat test stand. We also directly measured the radial dependence of the quantum efficiency, and find that this has the same shape as predicted by our model. Our model results in a GQE of 0.0055±0.0009 for the MicroBooNE optical units. While the information shown here is MicroBooNE specific, the approach to the model and the collection of simulation parameters will be widely applicable to many liquid-argon-based light collection systems.

  13. A model for the Global Quantum Efficiency for a TPB-based wavelength-shifting system used with photomultiplier tubes in liquid argon in MicroBooNE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pate, S. F.; Wester, T.; Bugel, L.

    We present a model for the Global Quantum Efficiency (GQE) of the MicroBooNE optical units. An optical unit consists of a flat, circular acrylic plate, coated with tetraphenyl butadiene (TPB), positioned near the photocathode of a 20.2-cm diameter photomultiplier tube. The plate converts the ultra-violet scintillation photons from liquid argon into visible-spectrum photons to which the cryogenic phototubes are sensitive. The GQE is the convolution of the efficiency of the plates that convert the 128 nm scintillation light from liquid argon to visible light, the efficiency of the shifted light to reach the photocathode, and the efficiency of the cryogenic photomultiplier tube. We develop a GEANT4-based model of the optical unit, based on first principles, and obtain the range of probable values for the expected number of detected photoelectrons (NPE) given the known systematic errors on the simulation parameters. We compare results from four measurements of the NPE determined using alpha-particle sources placed at two distances from a TPB-coated plate in a liquid argon cryostat test stand. We also directly measured the radial dependence of the quantum efficiency, and find that this has the same shape as predicted by our model. Our model results in a GQE of 0.0055±0.0009 for the MicroBooNE optical units. While the information shown here is MicroBooNE specific, the approach to the model and the collection of simulation parameters will be widely applicable to many liquid-argon-based light collection systems.

  14. Simulation model for assessing the efficiency of a combined power installation based on a geothermal heat pump and a vacuum solar collector

    NASA Astrophysics Data System (ADS)

    Vaysman, Ya I.; Surkov, AA; Surkova, Yu I.; Kychkin, AV

    2017-06-01

    The article is devoted to the use of renewable energy sources (RES) and the assessment of the feasibility of their use in the climatic conditions of the Western Urals. A simulation model that calculates the efficiency of a combined power installation (CPI) was developed. The CPI consists of a geothermal heat pump (GHP) and a vacuum solar collector (VCS), and is based on the research model. This model allows solving a wide range of problems in the field of energy and resource efficiency, and can be applied to other objects using RES. Based on the research, recommendations for optimizing the management and application of the CPI were given. The optimization system will give a positive effect in the energy and resource consumption of low-rise residential building projects.

  15. Modelling water uptake efficiency of root systems

    NASA Astrophysics Data System (ADS)

    Leitner, Daniel; Tron, Stefania; Schröder, Natalie; Bodner, Gernot; Javaux, Mathieu; Vanderborght, Jan; Vereecken, Harry; Schnepf, Andrea

    2016-04-01

    Water uptake is crucial for plant productivity. Trait-based breeding for more water-efficient crops will enable sustainable agricultural management under specific pedoclimatic conditions, and can increase drought resistance of plants. Mathematical modelling can be used to find suitable root system traits for better water uptake efficiency, defined as the amount of water taken up per unit of root biomass. This approach requires long simulation times and a large number of simulation runs, since we test different root systems under different pedoclimatic conditions. In this work, we model water movement by the 1-dimensional Richards equation with the soil hydraulic properties described according to the van Genuchten model. Climatic conditions serve as the upper boundary condition. The root system grows during the simulation period and water uptake is calculated via a sink term (after Tron et al. 2015). The goal of this work is to compare different free software tools based on different numerical schemes to solve the model. We compare implementations using DUMUX (based on finite volumes), Hydrus 1D (based on finite elements), and a Matlab implementation of Van Dam & Feddes (2000) (based on finite differences). We analyse the methods for accuracy, speed and flexibility. Using this model case study, we can clearly show the impact of various root system traits on water uptake efficiency. Furthermore, we can quantify frequent simplifications that are introduced in the modelling step, like considering a static root system instead of a growing one, or considering a sink term based on root density instead of the full root hydraulic model (Javaux et al. 2008). References: Tron, S., Bodner, G., Laio, F., Ridolfi, L., & Leitner, D. (2015). Can diversity in root architecture explain plant water use efficiency? A modeling study. Ecological Modelling, 312, 200-210. Van Dam, J. C., & Feddes, R. A. (2000). Numerical simulation of infiltration, evaporation and shallow groundwater levels with the Richards equation. Journal of Hydrology, 233(1), 72-85. Javaux, M., Schröder, T., Vanderborght, J., & Vereecken, H. (2008). Use of a three-dimensional detailed modeling approach for predicting root water uptake. Vadose Zone Journal, 7(3), 1079-1088.

  16. Learning-Based Just-Noticeable-Quantization- Distortion Modeling for Perceptual Video Coding.

    PubMed

    Ki, Sehwan; Bae, Sung-Ho; Kim, Munchurl; Ko, Hyunsuk

    2018-07-01

    Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements because of severely increasing computational complexity. As an alternative approach, perceptual video coding (PVC) attempts to achieve high coding efficiency by eliminating perceptual redundancy, using just-noticeable-distortion (JND) directed PVC. Previous JNDs were modeled by adding white Gaussian noise or specific signal patterns into the original images, which was not appropriate for finding JND thresholds due to distortion with energy reduction. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. The proposed ERJND model is then extended to two learning-based just-noticeable-quantization-distortion (JNQD) models for preprocessing that can be applied in perceptual video coding. The two JNQD models can automatically adjust JND levels based on given quantization step sizes. One of the two JNQD models, called LR-JNQD, is based on linear regression and determines the model parameter for JNQD from extracted handcrafted features. The other JNQD model is based on a convolutional neural network (CNN), called CNN-JNQD. To the best of our knowledge, this paper is the first approach to automatically adjust JND levels according to quantization step sizes for preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation, compared with the input without preprocessing.

  17. Polyglutamine Disease Modeling: Epitope Based Screen for Homologous Recombination using CRISPR/Cas9 System.

    PubMed

    An, Mahru C; O'Brien, Robert N; Zhang, Ningzhe; Patra, Biranchi N; De La Cruz, Michael; Ray, Animesh; Ellerby, Lisa M

    2014-04-15

    We have previously reported the genetic correction of Huntington's disease (HD) patient-derived induced pluripotent stem cells using traditional homologous recombination (HR) approaches. To extend this work, we have adopted a CRISPR-based genome editing approach to improve the efficiency of recombination in order to generate allelic isogenic HD models in human cells. Incorporation of a rapid antibody-based screening approach to measure recombination provides a powerful method to determine relative efficiency of genome editing for modeling polyglutamine diseases or understanding factors that modulate CRISPR/Cas9 HR.

  18. Model Based Optimization of Integrated Low Voltage DC-DC Converter for Energy Harvesting Applications

    NASA Astrophysics Data System (ADS)

    Jayaweera, H. M. P. C.; Muhtaroğlu, Ali

    2016-11-01

    A novel model-based methodology is presented to determine optimal device parameters for a fully integrated ultra-low-voltage DC-DC converter for energy harvesting applications. The proposed model makes it feasible to determine the most efficient number of charge pump stages to fulfill the voltage requirement of the energy harvesting application. The proposed DC-DC converter power consumption model enables the analytical derivation of the charge pump efficiency when utilized simultaneously with the known LC tank oscillator behavior under resonant conditions and the voltage step-up characteristics of the cross-coupled charge pump topology. The model has been verified using a circuit simulator. The system optimized through the established model achieves more than 40% maximum efficiency, yielding 0.45 V output with a single stage, 0.75 V with two stages, and 0.9 V with three stages for 2.5 kΩ, 3.5 kΩ, and 5 kΩ loads, respectively, from a 0.2 V input.
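
    A back-of-envelope sketch of the stage-count question the model answers. The assumption that each cross-coupled stage contributes roughly stage_gain * v_in after losses, with the LC tank's resonant swing letting the effective increment exceed v_in, is a crude stand-in for the paper's load- and device-aware model; all numbers are illustrative.

```python
# Smallest charge-pump stage count whose (crudely) modeled output reaches a
# target voltage. stage_gain is an assumed lumped per-stage factor.
def min_stages(v_in, v_target, stage_gain=1.25, max_stages=20):
    """Return (stage count, modeled output voltage)."""
    for n in range(1, max_stages + 1):
        v_out = v_in + n * stage_gain * v_in   # assumed per-stage increment
        if v_out >= v_target:
            return n, v_out
    return max_stages, v_in + max_stages * stage_gain * v_in

for v_target in (0.5, 0.9, 1.2):
    n, v = min_stages(0.2, v_target)
    print(f"target {v_target:.2f} V -> {n} stage(s), modeled output {v:.2f} V")
```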

  19. Measuring Efficiency of Health Systems of the Middle East and North Africa (MENA) Region Using Stochastic Frontier Analysis.

    PubMed

    Hamidi, Samer; Akinci, Fevzi

    2016-06-01

    The main purpose of this study is to measure the technical efficiency of twenty health systems in the Middle East and North Africa (MENA) region to inform evidence-based health policy decisions. In addition, the effects of alternative stochastic frontier model specifications on the empirical results are examined. We conducted a stochastic frontier analysis to estimate country-level technical efficiencies using secondary panel data for 20 MENA countries for the period 1995-2012 from the World Bank database. We also tested the effect of alternative frontier model specifications using three random-effects approaches: a time-invariant model, where efficiency effects are assumed to be static with regard to time; a time-varying efficiency model, where efficiency effects have temporal variation; and a model that accounts for heterogeneity. The average estimated technical inefficiency of health systems in the MENA region was 6.9%, with a range of 5.7-7.9% across the three models. Among the top performers, Lebanon, Qatar, and Morocco rank consistently high according to the three different inefficiency model specifications. On the opposite side, Sudan, Yemen, and Djibouti rank among the worst performers. On average, the two most technically efficient countries were Qatar and Lebanon. We found that the estimated technical efficiency scores vary substantially across alternative parametric models. Based on the findings reported in this study, most MENA countries appear to be operating, on average, with a reasonably high degree of technical efficiency compared with other countries in the region. However, there is evidence to suggest that considerable efficiency gains are yet to be made by some MENA countries. Additional empirical research is needed to inform future health policies aimed at improving both the efficiency and sustainability of the health systems in the MENA region.

  20. Novel approach for computing photosynthetically active radiation for productivity modeling using remotely sensed images in the Great Plains, United States

    USGS Publications Warehouse

    Singh, Ramesh K.; Liu, Shu-Guang; Tieszen, Larry L.; Suyker, Andrew E.; Verma, Shashi B.

    2012-01-01

    Gross primary production (GPP) is a key indicator of ecosystem performance and helps in many environmental decision-making processes. We used the eddy covariance-light use efficiency (EC-LUE) model for estimating GPP in the Great Plains, United States, in order to evaluate the performance of this model. We developed a novel algorithm for computing photosynthetically active radiation (PAR) based on net radiation. A strong correlation (R2 = 0.94, N = 24) was found between daily PAR and Landsat-based mid-day instantaneous net radiation. Though the Moderate Resolution Imaging Spectroradiometer (MODIS) based instantaneous net radiation was in better agreement (R2 = 0.98, N = 24) with the daily measured PAR, there was no statistically significant difference between Landsat-based PAR and MODIS-based PAR. The EC-LUE model validation also confirms the need to consider biological attributes (C3 versus C4 plants) in the potential light use efficiency. A universal potential light use efficiency is unable to capture the spatial variation of GPP. It is necessary to use a C3-versus-C4 land use/land cover map when applying the EC-LUE model to estimate the spatiotemporal distribution of GPP.
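
    The PAR algorithm reduces, in outline, to a single linear regression of daily PAR on mid-day instantaneous net radiation, after which the fit can be reused where PAR measurements are absent. The sketch below uses synthetic values; the study reports R2 = 0.94 for the Landsat-based version of this relationship.

```python
# Linear regression of daily PAR on mid-day instantaneous net radiation.
import numpy as np

rng = np.random.default_rng(3)
rn = rng.uniform(300.0, 700.0, 24)                 # net radiation, W m^-2
par = 0.045 * rn + 2.0 + rng.normal(0, 1.0, 24)    # PAR, MJ m^-2 d^-1 (toy slope)

slope, intercept = np.polyfit(rn, par, 1)
pred = slope * rn + intercept
r2 = 1 - np.sum((par - pred) ** 2) / np.sum((par - par.mean()) ** 2)
print(f"PAR ~ {slope:.4f} * Rn + {intercept:.2f}  (R^2 = {r2:.3f})")
```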

  1. Feature-based Approach in Product Design with Energy Efficiency Consideration

    NASA Astrophysics Data System (ADS)

    Li, D. D.; Zhang, Y. J.

    2017-10-01

    In this paper, a method to measure the energy efficiency and ecological footprint metrics of features is proposed for product design. First, the energy consumption models of various manufacturing features, such as cutting and welding features, are studied. Then, the total energy consumption of a product is modeled and estimated according to its features. Next, feature chains, which combine several sequential features based on the order of production operations, are defined and analyzed to calculate a globally optimal solution. The corresponding assessment model is also proposed to estimate their energy efficiency and ecological footprint. Finally, an example is given to validate the proposed approach for improving sustainability.

  2. Building occupancy simulation and data assimilation using a graph-based agent-oriented model

    NASA Astrophysics Data System (ADS)

    Rai, Sanish; Hu, Xiaolin

    2018-07-01

    Building occupancy simulation and estimation simulates the dynamics of occupants and estimates their real-time spatial distribution in a building. It requires a simulation model and an algorithm for data assimilation that assimilates real-time sensor data into the simulation model. Existing building occupancy simulation models include agent-based models and graph-based models. The agent-based models suffer high computation cost for simulating large numbers of occupants, and graph-based models overlook the heterogeneity and detailed behaviors of individuals. Recognizing the limitations of existing models, this paper presents a new graph-based agent-oriented model which can efficiently simulate large numbers of occupants in various kinds of building structures. To support real-time occupancy dynamics estimation, a data assimilation framework based on Sequential Monte Carlo Methods is also developed and applied to the graph-based agent-oriented model to assimilate real-time sensor data. Experimental results show the effectiveness of the developed model and the data assimilation framework. The major contributions of this work are to provide an efficient model for building occupancy simulation that can accommodate large numbers of occupants and an effective data assimilation framework that can provide real-time estimations of building occupancy from sensor data.
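
    A compact particle-filter sketch of the data-assimilation loop described above: particles carry candidate occupant counts, are advanced by the simulation model, and are reweighted and resampled against noisy sensor counts. The one-zone random-walk dynamics below are a placeholder for the graph-based agent-oriented model.

```python
# Sequential Monte Carlo (bootstrap particle filter) for occupancy estimation.
import numpy as np

rng = np.random.default_rng(4)
n_particles, steps = 500, 30
true_occ = 40
particles = rng.integers(0, 100, n_particles).astype(float)

for t in range(steps):
    true_occ = max(0, true_occ + rng.integers(-3, 4))      # ground truth drifts
    sensor = true_occ + rng.normal(0, 3.0)                 # noisy sensor reading
    particles += rng.integers(-3, 4, n_particles)          # predict (model step)
    particles = np.clip(particles, 0, None)
    w = np.exp(-0.5 * ((particles - sensor) / 3.0) ** 2)   # likelihood weights
    w /= w.sum()
    idx = rng.choice(n_particles, n_particles, p=w)        # resample
    particles = particles[idx]

print("true occupancy:", true_occ, "| estimate:", round(particles.mean(), 1))
```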

  3. Hamiltonian Monte Carlo acceleration using surrogate functions with random bases.

    PubMed

    Zhang, Cheng; Shahbaba, Babak; Zhao, Hongkai

    2017-11-01

    For big data analysis, the high computational cost of Bayesian methods often limits their application in practice. In recent years, there have been many attempts to improve the computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo method, namely Hamiltonian Monte Carlo. The key idea is to explore and exploit the structure and regularity in the parameter space of the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm, which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions, such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.
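
    The surrogate-construction step can be sketched with random Fourier bases: approximate an expensive log-density by a ridge-regression fit over random cosine features, yielding a closed-form (and analytically differentiable) stand-in that an HMC sampler could use in place of the exact target. The banana-shaped toy density below is illustrative, not from the paper.

```python
# Random-basis surrogate of a log-density: ridge regression on random Fourier
# features phi(x) = cos(x W + b), fitted to expensive target evaluations.
import numpy as np

rng = np.random.default_rng(5)

def log_target(x):                       # pretend each evaluation is expensive
    return -0.5 * (x[:, 0] ** 2 + (x[:, 1] - x[:, 0] ** 2) ** 2)

X = rng.uniform(-2, 2, size=(400, 2))    # training design over region of interest
y = log_target(X)

D = 200                                  # number of random features
W = rng.normal(0, 1.0, size=(2, D))
b = rng.uniform(0, 2 * np.pi, D)
Phi = np.cos(X @ W + b)

lam = 1e-3                               # ridge regularization strength
alpha = np.linalg.solve(Phi.T @ Phi + lam * np.eye(D), Phi.T @ y)

surrogate = lambda x: np.cos(x @ W + b) @ alpha
x_test = rng.uniform(-2, 2, size=(5, 2))
print("exact    :", log_target(x_test).round(2))
print("surrogate:", surrogate(x_test).round(2))
```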

  4. Directional Slack-Based Measure for the Inverse Data Envelopment Analysis

    PubMed Central

    Abu Bakar, Mohd Rizam; Lee, Lai Soon; Jaafar, Azmi B.; Heydar, Maryam

    2014-01-01

    This research introduces a novel technique that bases the inverse Data Envelopment Analysis on the directional slack-based measure. The inverse directional slack-based measure model is formulated within a new production possibility set that applies when the output (input) quantities of an efficient decision making unit are modified: the efficient decision making unit is removed from the current production possibility set and replaced by the same unit with its input and output quantities modified. Under this approach, the efficiency scores of all DMUs are retained or improved. The proposed approach is investigated with reference to a resource allocation problem, and it can simultaneously handle increases (decreases) in selected outputs of the efficient decision making unit. Numerical examples demonstrate the significance of the model. PMID:24883350
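
    For readers unfamiliar with the envelopment formulation that slack-based and directional variants build on, the basic input-oriented CCR LP can be solved in a few lines. This is background, not the paper's directional SBM; the five-DMU data set is made up.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 5 DMUs, 2 inputs, 1 output (rows = DMUs).
X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0], [2.0, 4.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0], [1.0]])
n, m = X.shape
s = Y.shape[1]

def ccr_input_efficiency(k):
    """Input-oriented CCR envelopment LP for DMU k:
       min theta  s.t.  X'lam <= theta * x_k,  Y'lam >= y_k,  lam >= 0."""
    c = np.r_[1.0, np.zeros(n)]                  # variables: [theta, lam]
    A_ub = np.block([
        [-X[k].reshape(-1, 1), X.T],             # X'lam - theta*x_k <= 0
        [np.zeros((s, 1)),     -Y.T],            # -Y'lam <= -y_k
    ])
    b_ub = np.r_[np.zeros(m), -Y[k]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun

for k in range(n):
    print(f"DMU {k}: efficiency = {ccr_input_efficiency(k):.3f}")
```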

  5. A testing-coverage software reliability model considering fault removal efficiency and error generation

    PubMed Central

    Li, Qiuying; Pham, Hoang

    2017-01-01

    In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency, combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). Over the past four decades, many NHPP-based software reliability growth models (SRGMs) have been proposed to estimate software reliability measures, most of which share two assumptions: 1) the fault detection rate changes continually during the testing phase; and 2) because debugging is imperfect, fault removal is accompanied by a fault re-introduction rate. Few SRGMs in the literature, however, differentiate between fault detection and fault removal, i.e. they seldom consider imperfect fault removal efficiency. In practical software development, fault removal efficiency cannot always be perfect: detected failures may not be removed completely, the original faults may remain, and new faults may be introduced in the process, a situation referred to as imperfect debugging. In this study, a model incorporating the fault introduction rate, fault removal efficiency and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and fault removal efficiency to capture fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs on three sets of real failure data using five criteria. The results show that the model gives better fitting and predictive performance. PMID:28750091
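
    For context, the mean value function of the classic Goel-Okumoto NHPP model, a simple ancestor of such SRGMs (not the paper's model), is m(t) = a(1 - e^(-bt)); fitting it to cumulative failure counts takes a few lines. The weekly counts below are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def mean_value(t, a, b):
    """Goel-Okumoto NHPP mean value function m(t) = a * (1 - exp(-b t)):
    expected cumulative number of faults detected by time t."""
    return a * (1.0 - np.exp(-b * t))

# Hypothetical cumulative failure counts per testing week.
t = np.arange(1, 13, dtype=float)
failures = np.array([8, 15, 21, 26, 30, 33, 36, 38, 39, 41, 42, 42], float)

(a_hat, b_hat), _ = curve_fit(mean_value, t, failures, p0=(50.0, 0.1))
print(f"estimated total faults a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")
print(f"estimated faults remaining: {a_hat - failures[-1]:.1f}")
```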

  6. The evaluation model of the enterprise energy efficiency based on DPSR.

    PubMed

    Wei, Jin-Yu; Zhao, Xiao-Yu; Sun, Xue-Shan

    2017-05-08

    A sound evaluation of enterprise energy efficiency is important for reducing energy consumption. In this paper, an effective energy efficiency evaluation index system is proposed based on DPSR (Driving forces-Pressure-State-Response), with consideration of the actual situation of enterprises. This index system, which covers multi-dimensional indexes of enterprise energy efficiency, can reveal the complete causal chain: the "driving forces" and "pressure" behind the enterprise energy efficiency "state" caused by the internal and external environment, and the ultimate energy-saving "response" measures. Furthermore, the ANP (Analytic Network Process) and the cloud model are used to calculate the weight of each index and evaluate the energy efficiency level. Finally, an analysis of BL Company verifies the feasibility of the index system and provides an effective way to improve energy efficiency.
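
    The cloud-model step conventionally refers to the forward normal cloud generator, which turns a qualitative grade described by (Ex, En, He) — expectation, entropy, hyper-entropy — into random cloud drops with membership degrees. A generic sketch follows; the grade parameters are invented, and this is the textbook generator, not the paper's full evaluation procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

def normal_cloud(ex, en, he, n=1000):
    """Forward normal cloud generator: returns drops x and memberships mu
    for a grade with expectation ex, entropy en and hyper-entropy he."""
    en_prime = rng.normal(en, he, size=n)        # per-drop entropy
    x = rng.normal(ex, np.abs(en_prime))         # cloud drops
    mu = np.exp(-(x - ex) ** 2 / (2 * en_prime ** 2))
    return x, mu

# Hypothetical 'medium efficiency' grade on a 0-100 score scale.
x, mu = normal_cloud(ex=60.0, en=8.0, he=1.0)
print(f"drop mean {x.mean():.1f}, mean membership {mu.mean():.2f}")
```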

  7. Efficiency improvement by navigated safety inspection involving visual clutter based on the random search model.

    PubMed

    Sun, Xinlu; Chong, Heap-Yih; Liao, Pin-Chao

    2018-06-25

    Navigated inspection seeks to improve hazard identification (HI) accuracy. Given tight inspection schedules, HI must also be efficient. However, without a quantification of HI efficiency, navigated inspection strategies cannot be comprehensively assessed. This work aims to determine inspection efficiency in navigated safety inspection while controlling for HI accuracy. Based on a cognitive method, the random search model (RSM), an experiment was conducted to observe HI efficiency under navigation for a variety of visual clutter (VC) scenarios, using eye-tracking devices to record the search process and analyze search performance. The results show that the RSM is an appropriate instrument and that VC serves as a hazard classifier for navigated inspection, improving inspection efficiency. This suggests a new and effective solution for addressing the low accuracy and efficiency of manual inspection through navigated inspection involving VC and the RSM. It also provides insights into inspectors' safety inspection ability.

  8. An Efficient Data Compression Model Based on Spatial Clustering and Principal Component Analysis in Wireless Sensor Networks.

    PubMed

    Yin, Yihang; Liu, Fengzheng; Zhou, Xiang; Li, Quanzhong

    2015-08-07

    Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inter-node communication consumes most of the power, efficient data compression schemes are needed to reduce data transmission and prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing using a novel similarity measure. Next, sensor data in one cluster are aggregated at the cluster head sensor node, and an efficient adaptive strategy is proposed for selecting the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error bound guarantee to compress the data while retaining a definite variance. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms.
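
    The PCA stage with an error-bound guarantee can be sketched as follows: the cluster head keeps only as many principal components as needed to bound the reconstruction error. This is a generic illustration with synthetic correlated readings, not the paper's exact algorithm or clustering step.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic cluster: 8 spatially correlated sensors, 200 epochs.
base = np.cumsum(rng.normal(size=(200, 1)), axis=0)
readings = base + 0.1 * rng.normal(size=(200, 8))

def pca_compress(data, err_bound):
    """Keep the fewest principal components whose reconstruction RMSE
    stays within err_bound; return the compressed pieces."""
    mean = data.mean(axis=0)
    U, S, Vt = np.linalg.svd(data - mean, full_matrices=False)
    for k in range(1, len(S) + 1):
        recon = (U[:, :k] * S[:k]) @ Vt[:k] + mean
        rmse = np.sqrt(np.mean((recon - data) ** 2))
        if rmse <= err_bound:
            return mean, Vt[:k], U[:, :k] * S[:k], rmse
    return mean, Vt, U * S, 0.0

mean, components, scores, rmse = pca_compress(readings, err_bound=0.1)
ratio = (scores.size + components.size + mean.size) / readings.size
print(f"kept {components.shape[0]} components, RMSE {rmse:.3f}, "
      f"compressed size ratio {ratio:.2f}")
```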

  9. Light Extraction From Solution-Based Processable Electrophosphorescent Organic Light-Emitting Diodes

    NASA Astrophysics Data System (ADS)

    Krummacher, Benjamin C.; Mathai, Mathew; So, Franky; Choulis, Stelios; Choong, Vi-En

    2007-06-01

    Molecular-dye-dispersed, solution-processable blue-emitting organic light-emitting devices have been fabricated, and the resulting devices exhibit efficiencies as high as 25 cd/A. With down-conversion phosphors, white-emitting devices have been demonstrated with a peak efficiency of 38 cd/A and a luminous efficiency of 25 lm/W. The high efficiencies are the product of proper tuning of carrier transport, optimization of the location of the carrier recombination zone and, hence, the microcavity effect, efficient down-conversion from blue to white light, and scattering/isotropic re-emission by the phosphor particles. An optical model has been developed to investigate all these effects. In contrast to the common misunderstanding that light out-coupling efficiency is about 22% and independent of device architecture, our device data and optical modeling results clearly demonstrate that the light out-coupling efficiency depends strongly on the exact location of the recombination zone. Estimating device internal quantum efficiencies from external quantum efficiencies without considering the device architecture could lead to erroneous conclusions.

  10. Variable cycle control model for intersection based on multi-source information

    NASA Astrophysics Data System (ADS)

    Sun, Zhi-Yuan; Li, Yue; Qu, Wen-Cong; Chen, Yan-Yan

    2018-05-01

    In order to improve the efficiency of traffic control systems in the era of big data, a new variable cycle control model based on multi-source information is presented for intersections in this paper. Firstly, with consideration of multi-source information, a unified framework based on a cyber-physical system is proposed. Secondly, taking into account the variable length of cells, the hysteresis phenomenon of traffic flow and the characteristics of lane groups, a lane-group-based Cell Transmission Model is established to describe the physical properties of traffic flow under different traffic signal control schemes. Thirdly, the variable cycle control problem is abstracted into a bi-level programming model: the upper-level model optimizes cycle length considering traffic capacity and delay, and the lower-level model is a dynamic signal control decision model based on fairness analysis. Then, a Hybrid Intelligent Optimization Algorithm is developed to solve the proposed model. Finally, a case study shows the efficiency and applicability of the proposed model and algorithm.
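
    The physical layer here is a Cell Transmission Model. In Daganzo's basic form (a simpler relative of the lane-group variant in the paper), the flow into cell i is y_i = min(n_{i-1}, Q, (w/v)(N - n_i)) and the occupancy updates as n_i += y_i - y_{i+1}. A minimal single-lane sketch with a signal at the stop line, all parameters invented:

```python
import numpy as np

C = 10                       # number of cells
N = 12.0                     # jam capacity per cell (vehicles)
Q = 4.0                      # max flow per step (vehicles)
WV = 0.5                     # wave speed / free-flow speed ratio (w/v)
DEMAND = 3.0                 # inflow demand at the upstream boundary

n = np.zeros(C)              # vehicles currently in each cell

def ctm_step(n, green):
    """One CTM update: compute inter-cell flows, then conserve vehicles."""
    y = np.zeros(C + 1)
    y[0] = min(DEMAND, Q, WV * (N - n[0]))          # boundary inflow
    for i in range(1, C):
        y[i] = min(n[i - 1], Q, WV * (N - n[i]))    # sending/receiving
    y[C] = min(n[-1], Q) if green else 0.0          # signal at stop line
    return n + y[:-1] - y[1:]

# 60-step signal cycle: 35 steps green, 25 steps red.
for t in range(120):
    n = ctm_step(n, green=(t % 60) < 35)
print("queue profile (vehicles per cell):", n.round(1))
```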

  11. Measuring efficiency and productivity growth of new technology-based firms in business incubators: the Portuguese case study of Madan Parque.

    PubMed

    Grilo, A; Santos, J

    2015-01-01

    Business incubators can play a major role in helping to turn a business idea into a technology-based organization that is economically efficient. However, there is a shortage in the literature regarding the efficiency evaluation and productivity evolution of the new technology-based firms (NTBFs) in the incubation scope. This study develops a model based on the data envelopment analysis (DEA) methodology, which allows the incubated NTBFs to evaluate and improve the efficiency of their management. Moreover, the Malmquist index is used to examine productivity change. The index is decomposed into multiple components to give insights into the root sources of productivity change. The proposed model was applied in a case study with 13 NTBFs incubated. From that study, we conclude that inefficient firms invest excessively in research and development (R&D), and, on average, firms have a productivity growth in the period of study.
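
    For reference, the Malmquist productivity index used in such studies is conventionally (following Färe et al.) the geometric mean of two distance-function ratios, factored into efficiency change (the first ratio, "catch-up") and technical change (the bracketed term, "frontier shift"). This is the standard textbook form, not a formula quoted from the paper:

```latex
M_o = \frac{D_o^{t+1}(x^{t+1},y^{t+1})}{D_o^{t}(x^{t},y^{t})}
      \left[
        \frac{D_o^{t}(x^{t+1},y^{t+1})}{D_o^{t+1}(x^{t+1},y^{t+1})} \cdot
        \frac{D_o^{t}(x^{t},y^{t})}{D_o^{t+1}(x^{t},y^{t})}
      \right]^{1/2}
```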

  12. Towards social autonomous vehicles: Efficient collision avoidance scheme using Richardson's arms race model.

    PubMed

    Riaz, Faisal; Niazi, Muaz A

    2017-01-01

    This paper presents the concept of a social autonomous agent to conceptualize Autonomous Vehicles (AVs) which interact with other AVs using social manners similar to human behavior. The presented AVs also have the capability of predicting intentions, i.e. mentalizing, and of copying the actions of each other, i.e. mirroring. The Exploratory Agent Based Modeling (EABM) level of the Cognitive Agent Based Computing (CABC) framework has been utilized to design the proposed social agent. Furthermore, to emulate the functionality of the mentalizing and mirroring modules of the proposed social agent, a tailored mathematical model of Richardson's arms race model is also presented. The performance of the proposed social agent has been validated at two levels: first, it has been simulated using NetLogo, a standard agent-based modeling tool, and second, at a practical level using a prototype AV. The simulation results confirm that the proposed social-agent-based collision avoidance strategy is 78.52% more efficient than a random-walk-based collision avoidance strategy in congested flock-like topologies. The practical results confirm that the proposed scheme can avoid rear-end and lateral collisions with an efficiency of 99.876%, compared with the existing state-of-the-art IEEE 802.11n-based mirroring-neuron collision avoidance scheme.
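
    Richardson's arms race model is a pair of coupled linear ODEs, dx/dt = ay - mx + g and dy/dt = bx - ny + h, whose mutual-reaction terms the abstract repurposes for mentalizing/mirroring between vehicles. A generic integration sketch follows; the coefficients are invented and this is the classical model, not the paper's tailored version.

```python
# Richardson's arms race model:
#   dx/dt = a*y - m*x + g   (x reacts to y, is damped by m, driven by g)
#   dy/dt = b*x - n*y + h
a, m, g = 0.9, 1.2, 0.5    # hypothetical reaction/fatigue/grievance terms
b, n, h = 0.8, 1.1, 0.4

def step(x, y, dt=0.01):
    dx = a * y - m * x + g
    dy = b * x - n * y + h
    return x + dt * dx, y + dt * dy

x, y = 0.0, 0.0
for _ in range(5000):                 # forward-Euler integration to t = 50
    x, y = step(x, y)

# A stable equilibrium exists when m*n > a*b; compare with the closed form.
x_eq = (a * h + n * g) / (m * n - a * b)
y_eq = (b * g + m * h) / (m * n - a * b)
print(f"simulated ({x:.3f}, {y:.3f}) vs analytic ({x_eq:.3f}, {y_eq:.3f})")
```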

  13. Sobol' sensitivity analysis of NAPL-contaminated aquifer remediation process based on multiple surrogates

    NASA Astrophysics Data System (ADS)

    Luo, Jiannan; Lu, Wenxi

    2014-06-01

    Sobol' sensitivity analyses based on different surrogates were performed on a trichloroethylene (TCE)-contaminated aquifer to assess the sensitivity of the design variables of remediation duration, surfactant concentration and injection rates at four wells to remediation efficiency. First, surrogate models of a multi-phase flow simulation model were constructed by applying radial basis function artificial neural network (RBFANN) and Kriging methods, and the two models were compared. Based on the developed surrogates, the Sobol' method was used to calculate the sensitivity indices of the design variables affecting remediation efficiency. The coefficient of determination (R2) and the mean square error (MSE) of the two surrogate models demonstrated that both had acceptable approximation accuracy, and that the Kriging model was slightly more accurate than the RBFANN model. The Sobol' sensitivity analysis demonstrated that remediation duration was the most important variable influencing remediation efficiency, followed by the injection rates at wells 1 and 3, while the injection rates at wells 2 and 4 and the surfactant concentration had negligible influence. In addition, the high-order sensitivity indices were all smaller than 0.01, indicating that interaction effects of these six factors were practically insignificant. The proposed surrogate-based Sobol' sensitivity analysis is an effective tool for calculating sensitivity indices, because it shows the relative contribution of the design variables (individually and in interaction) to the output performance variability with a limited number of runs of a computationally expensive simulation model. The sensitivity analysis results lay a foundation for optimization of the groundwater remediation process.
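
    The workflow — fit a cheap surrogate to a handful of expensive runs, then estimate Sobol' indices by Monte Carlo on the surrogate — can be sketched generically. Below, an RBF surrogate of a made-up Ishigami-like test function stands in for the multi-phase flow simulator, and first-order indices use the Saltelli-style estimator; nothing here reproduces the paper's Kriging setup.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(4)
d = 3

def expensive_model(x):
    """Stand-in for the multi-phase flow simulator (Ishigami-like toy)."""
    return np.sin(x[:, 0]) + 7 * np.sin(x[:, 1]) ** 2 \
           + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

# Fit an RBF surrogate on a small design (the 'expensive' runs).
X_train = rng.uniform(-np.pi, np.pi, size=(200, d))
surrogate = RBFInterpolator(X_train, expensive_model(X_train))

# Saltelli-style first-order Sobol' estimates on the cheap surrogate.
N = 10000
A = rng.uniform(-np.pi, np.pi, size=(N, d))
B = rng.uniform(-np.pi, np.pi, size=(N, d))
fA, fB = surrogate(A), surrogate(B)
V = np.var(np.r_[fA, fB])                 # total output variance
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                   # swap column i only
    S_i = np.mean(fB * (surrogate(ABi) - fA)) / V
    print(f"S_{i + 1} ~= {S_i:.2f}")
```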

  14. Mathematical model for prediction of efficiency indicators of educational activity in high school

    NASA Astrophysics Data System (ADS)

    Tikhonova, O. M.; Kushnikov, V. A.; Fominykh, D. S.; Rezchikov, A. F.; Ivashchenko, V. A.; Bogomolov, A. S.; Filimonyuk, L. Yu; Dolinina, O. N.; Kushnikov, O. V.; Shulga, T. E.; Tverdokhlebov, V. A.

    2018-05-01

    The quality of higher education is a pressing problem all over the world. The paper presents a system dedicated to predicting the accreditation indicators of technical universities based on J. Forrester's system dynamics approach. The mathematical model developed for predicting efficiency indicators of educational activity is based on the apparatus of nonlinear differential equations.

  15. Herding, minority game, market clearing and efficient markets in a simple spin model framework

    NASA Astrophysics Data System (ADS)

    Kristoufek, Ladislav; Vosvrda, Miloslav

    2018-01-01

    We present a novel approach to the financial Ising model. Most studies utilize the model to find settings which generate returns closely mimicking financial stylized facts such as fat tails, volatility clustering and persistence. We tackle the model's utility from the other side and look for the combination of parameters which yields the return dynamics of an efficient market in the sense of the efficient market hypothesis. Working with the Ising model, we are able to present readily interpretable results, as the model is based on only two parameters. Apart from showing the results of our simulation study, we offer a new interpretation of the Ising model parameters via inverse temperature and entropy. We show that market frictions (up to a certain level) and herding behavior of market participants do not in fact work against market efficiency; what is more, they are needed for markets to be efficient.

  16. Analytical approximation of the InGaZnO thin-film transistors surface potential

    NASA Astrophysics Data System (ADS)

    Colalongo, Luigi

    2016-10-01

    Surface-potential-based mathematical models are among the most accurate and physically based compact models of thin-film transistors, and in turn of indium gallium zinc oxide TFTs, available today. However, the need for iterative computation of the surface potential limits their computational efficiency and adoption in CAD applications. Existing closed-form approximations of the surface potential are based on regional approximations and empirical smoothing functions that may not be accurate enough, in particular for modeling transconductances and transcapacitances. In this work we present an extremely accurate (in the range of nV) and computationally efficient non-iterative approximation of the surface potential that can serve as a basis for advanced surface-potential-based indium gallium zinc oxide TFT models.

  17. Multi-issue Agent Negotiation Based on Fairness

    NASA Astrophysics Data System (ADS)

    Zuo, Baohe; Zheng, Sue; Wu, Hong

    Agent-based e-commerce services have become a research hotspot. Making the agent negotiation process fast and efficient is the main research direction in this area. In multi-issue models, MAUT (Multi-Attribute Utility Theory) and its derivatives usually pay little attention to the fairness between negotiators. This work presents a general model of agent negotiation that considers the satisfaction of both negotiators via autonomous learning. The model can evaluate offers from the opponent agent based on satisfaction degree, learn online about the opponent from historical interactions and the current negotiation, and make concessions dynamically based on a fairness objective. By building the optimal negotiation model, bilateral negotiation achieves higher efficiency and a fairer deal.

  18. Adaptive transmission disequilibrium test for family trio design.

    PubMed

    Yuan, Min; Tian, Xin; Zheng, Gang; Yang, Yaning

    2009-01-01

    The transmission disequilibrium test (TDT) is a standard method to detect association using the family trio design. It is optimal for an additive genetic model. Other TDT-type tests, optimal for recessive and dominant models, have also been developed. Association tests using family data, including the TDT-type statistics, have been unified into a class of more comprehensive and flexible family-based association tests (FBAT). TDT-type tests have high efficiency when the genetic model is known or correctly specified, but may lose power if the model is mis-specified. Hence tests that are robust to genetic model mis-specification yet efficient are preferred. The constrained likelihood ratio test (CLRT) and MAX-type test have been shown to be efficiency robust. In this paper we propose a new efficiency-robust procedure, referred to as the adaptive TDT (aTDT). It uses the Hardy-Weinberg disequilibrium coefficient to identify the potential genetic model underlying the data and then applies the TDT-type test (or FBAT for general applications) corresponding to the selected model. Simulation demonstrates that aTDT is robust to model mis-specification and generally outperforms the MAX test and CLRT in terms of power. We also show that aTDT has power close to that of the optimal TDT-type test based on a single genetic model, while being much more robust. Applications to real and simulated data from the Genetic Analysis Workshop (GAW) illustrate the use of our adaptive TDT.
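
    The classic TDT statistic is McNemar-type: with b transmissions and c non-transmissions of the risk allele from heterozygous parents, TDT = (b - c)^2 / (b + c), approximately chi-squared with 1 df under the null. A minimal illustration with invented counts; the adaptive model-selection step of aTDT is not reproduced here.

```python
from scipy.stats import chi2

def tdt(b, c):
    """McNemar-type TDT: b = transmissions, c = non-transmissions of the
    risk allele from heterozygous parents to affected offspring."""
    stat = (b - c) ** 2 / (b + c)
    return stat, chi2.sf(stat, df=1)

stat, p = tdt(b=48, c=28)            # hypothetical trio counts
print(f"TDT = {stat:.2f}, p = {p:.4f}")
```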

  1. Physics of Efficiency Droop in GaN:Eu Light-Emitting Diodes.

    PubMed

    Fragkos, Ioannis E; Dierolf, Volkmar; Fujiwara, Yasufumi; Tansu, Nelson

    2017-12-01

    The internal quantum efficiency (IQE) of an electrically driven GaN:Eu based device for red light emission is analyzed in the framework of a current injection efficiency (CIE) model. The excitation path of the Eu3+ ion is decomposed into a multiple-level system, which includes the carrier transport phenomena across the GaN/GaN:Eu/GaN active region of the device and the interactions among traps, Eu3+ ions and the GaN host. The limiting factors of the IQE are identified and analyzed through the CIE model, which provides guidance for achieving high IQE in electrically driven GaN:Eu based red light emitters.

  2. Market-Based Higher Education: Does Colorado's Voucher Model Improve Higher Education Access and Efficiency?

    ERIC Educational Resources Information Center

    Hillman, Nicholas W.; Tandberg, David A.; Gross, Jacob P. K.

    2014-01-01

    In 2004, Colorado introduced the nation's first voucher model for financing public higher education. With state appropriations now allocated to students, rather than institutions, state officials expect this model to create cost efficiencies while also expanding college access. Using difference-in-difference regression analysis, we find limited…

  3. Empirical Study on Total Factor Productive Energy Efficiency in Beijing-Tianjin-Hebei Region: Analysis based on Malmquist Index and Window Model

    NASA Astrophysics Data System (ADS)

    Xu, Qiang; Ding, Shuai; An, Jingwen

    2017-12-01

    This paper studies the energy efficiency of the Beijing-Tianjin-Hebei region and identifies its trend in order to improve the quality of economic development in the region. Based on the Malmquist index and a window analysis model, this paper empirically estimates the total factor energy efficiency in the Beijing-Tianjin-Hebei region using panel data from 1991 to 2014, and provides corresponding policy recommendations. The empirical results show that total factor energy efficiency in the region increased from 1991 to 2014, relying mainly on advances in energy technology and innovation, and that obvious regional differences in energy efficiency exist. Over the 24-year window period, the regional differences in energy efficiency shrank, and there has been a significant convergence trend in energy efficiency after 2000, depending mainly on the diffusion and spillover of energy technologies.

  4. Knowledge discovery from data and Monte-Carlo DEA to evaluate technical efficiency of mental health care in small health areas

    PubMed Central

    García-Alonso, Carlos; Pérez-Naranjo, Leonor

    2009-01-01

    Introduction: Knowledge management, based on information transfer between experts and analysts, is crucial for the validity and usability of data envelopment analysis (DEA). Aim: To design and develop a methodology i) to assess the technical efficiency of small health areas (SHA) in an uncertain environment, and ii) to transfer information between experts and operational models, in both directions, to improve experts' knowledge. Method: A procedure derived from knowledge discovery from data (KDD) is used to select, interpret and weigh DEA inputs and outputs. Based on the KDD results, an expert-driven Monte-Carlo DEA model is designed to assess the technical efficiency of SHA in Andalusia. Results: In terms of probability, SHA 29 is the most efficient while, by contrast, SHA 22 is very inefficient; 73% of the analysed SHA have a probability of being efficient (Pe) >0.9 and 18% <0.5. Conclusions: Expert knowledge is necessary to design and validate any operational model. KDD techniques ease the transfer of information from experts to any operational model, and the results obtained from the latter improve experts' knowledge.

  5. Protocol for Reliability Assessment of Structural Health Monitoring Systems Incorporating Model-assisted Probability of Detection (MAPOD) Approach

    DTIC Science & Technology

    2011-09-01

    a quality evaluation with limited data, a model-based assessment must be... that affect system performance, a multistage approach to system validation, and a modeling and experimental methodology for efficiently addressing a wide range

  6. Power law-based local search in spider monkey optimisation for lower order system modelling

    NASA Astrophysics Data System (ADS)

    Sharma, Ajay; Sharma, Harish; Bhargava, Annapurna; Sharma, Nirmala

    2017-01-01

    Nature-inspired algorithms (NIAs) have proved efficient at solving many complex real-world optimisation problems. The efficiency of NIAs is measured by their ability to find adequate results within a reasonable amount of time, rather than by an ability to guarantee the optimal solution. This paper presents a solution for lower-order system modelling using the spider monkey optimisation (SMO) algorithm, obtaining a lower-order approximation that reflects the essential characteristics of the original higher-order system. Further, a local search strategy, namely power law-based local search, is incorporated with SMO. The proposed strategy is named power law-based local search in SMO (PLSMO). The efficiency, accuracy and reliability of the proposed algorithm are tested on 20 well-known benchmark functions. Then, the PLSMO algorithm is applied to solve the lower-order system modelling problem.

  7. Equivalent model construction for a non-linear dynamic system based on an element-wise stiffness evaluation procedure and reduced analysis of the equivalent system

    NASA Astrophysics Data System (ADS)

    Kim, Euiyoung; Cho, Maenghyo

    2017-11-01

    In most non-linear analyses, the construction of a system matrix uses a large amount of computation time, comparable to the computation time required by the solving process. If the process for computing non-linear internal force matrices is substituted with an effective equivalent model that enables the bypass of numerical integrations and assembly processes used in matrix construction, efficiency can be greatly enhanced. A stiffness evaluation procedure (STEP) establishes non-linear internal force models using polynomial formulations of displacements. To efficiently identify an equivalent model, the method has evolved such that it is based on a reduced-order system. The reduction process, however, makes the equivalent model difficult to parameterize, which significantly affects the efficiency of the optimization process. In this paper, therefore, a new STEP, E-STEP, is proposed. Based on the element-wise nature of the finite element model, the stiffness evaluation is carried out element-by-element in the full domain. Since the unit of computation for the stiffness evaluation is restricted by element size, and since the computation is independent, the equivalent model can be constructed efficiently in parallel, even in the full domain. Due to the element-wise nature of the construction procedure, the equivalent E-STEP model is easily characterized by design parameters. Various reduced-order modeling techniques can be applied to the equivalent system in a manner similar to how they are applied in the original system. The reduced-order model based on E-STEP is successfully demonstrated for the dynamic analyses of non-linear structural finite element systems under varying design parameters.

  8. Strategies for efficient numerical implementation of hybrid multi-scale agent-based models to describe biological systems

    PubMed Central

    Cilfone, Nicholas A.; Kirschner, Denise E.; Linderman, Jennifer J.

    2015-01-01

    Biologically related processes operate across multiple spatiotemporal scales. For computational modeling methodologies to mimic this biological complexity, individual scale models must be linked in ways that allow for dynamic exchange of information across scales. A powerful methodology is to combine a discrete modeling approach, agent-based models (ABMs), with continuum models to form hybrid models. Hybrid multi-scale ABMs have been used to simulate emergent responses of biological systems. Here, we review two aspects of hybrid multi-scale ABMs: linking individual scale models and efficiently solving the resulting model. We discuss the computational choices associated with aspects of linking individual scale models while simultaneously maintaining model tractability. We demonstrate implementations of existing numerical methods in the context of hybrid multi-scale ABMs. Using an example model describing Mycobacterium tuberculosis infection, we show relative computational speeds of various combinations of numerical methods. Efficient linking and solution of hybrid multi-scale ABMs is key to model portability, modularity, and their use in understanding biological phenomena at a systems level. PMID:26366228

  9. Efficient calibration for imperfect computer models

    DOE PAGES

    Tuo, Rui; Wu, C. F. Jeff

    2015-12-01

    Many computer models contain unknown parameters which need to be estimated using physical observations. The widely used calibration method based on Gaussian process models, however, may lead to unreasonable estimates for imperfect computer models. In this work, we extend that line of study to calibration problems with stochastic physical data. We propose a novel method, called the L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied; theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.

  10. Efficient digital implementation of a conductance-based globus pallidus neuron and the dynamics analysis

    NASA Astrophysics Data System (ADS)

    Yang, Shuangming; Wei, Xile; Deng, Bin; Liu, Chen; Li, Huiyan; Wang, Jiang

    2018-03-01

    Balancing the biological plausibility of dynamical activities against computational efficiency is one of the challenging problems in computational neuroscience and neural system engineering. This paper proposes a set of efficient methods for the hardware realization of conductance-based neuron models with the relevant dynamics, targeting the reproduction of biological behaviors with low-cost implementation on a digital programmable platform; the methods can be applied to a wide range of conductance-based neuron models. Modified globus pallidus (GP) neuron models suited to efficient hardware implementation are presented, reproducing reliable pallidal dynamics, which encode information in the basal ganglia and regulate voluntary activities related to movement disorders. Implementation results on a field-programmable gate array (FPGA) demonstrate that the proposed techniques and models reduce the resource cost significantly and reproduce the biological dynamics accurately. In addition, biological behaviors under weak network coupling are explored on the proposed platform, and a theoretical analysis of the biological characteristics of the structured pallidal oscillator and network is given. The implementation techniques provide an essential step towards large-scale neural networks for exploring dynamical mechanisms in real time. Furthermore, the proposed methodology makes the FPGA-based system a powerful platform for investigating neurodegenerative diseases and the real-time control of bio-inspired neuro-robotics.

  11. Testing the Model-Observer Similarity Hypothesis with Text-Based Worked Examples

    ERIC Educational Resources Information Center

    Hoogerheide, Vincent; Loyens, Sofie M. M.; Jadi, Fedora; Vrins, Anna; van Gog, Tamara

    2017-01-01

    Example-based learning is a very effective and efficient instructional strategy for novices. It can be implemented using text-based worked examples that provide a written demonstration of how to perform a task, or (video) modelling examples in which an instructor (the "model") provides a demonstration. The model-observer similarity (MOS)…

  12. Search for Directed Networks by Different Random Walk Strategies

    NASA Astrophysics Data System (ADS)

    Zhu, Zi-Qi; Jin, Xiao-Ling; Huang, Zhi-Long

    2012-03-01

    A comparative study is carried out on the efficiency of five different random walk strategies searching on directed networks constructed from several typical complex networks. Because the differences in search efficiency among the strategies are rooted in network clustering, the clustering coefficient seen by a random walker on directed networks is defined and computed to be half that of the corresponding undirected networks. The search processes are performed on directed networks based on the Erdős–Rényi model, the Watts–Strogatz model, the Barabási–Albert model and a clustered scale-free network model. It is found that the self-avoiding random walk strategy is the best search strategy for such directed networks. Compared to the unrestricted random walk strategy, path-iteration-avoiding random walks can also make the search process much more efficient. However, no-triangle-loop and no-quadrangle-loop random walks do not improve the search efficiency as expected, which differs from the undirected case since the clustering coefficient of directed networks is smaller than that of undirected networks.
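
    The winning strategy is easy to sketch: a self-avoiding random walk that prefers unvisited out-neighbors and falls back to any out-neighbor when all have been seen. The following toy search on a directed Erdős–Rényi graph is illustrative only; the graph size, edge probability and target node are invented.

```python
import random
import networkx as nx

random.seed(5)
# Directed search space built from an Erdos-Renyi random graph.
G = nx.erdos_renyi_graph(500, 0.02, directed=True, seed=5)

def self_avoiding_search(G, source, target, max_steps=10_000):
    """Self-avoiding random walk: prefer unvisited out-neighbors; fall back
    to any out-neighbor when all have been visited. Returns step count."""
    visited, node = {source}, source
    for step in range(1, max_steps + 1):
        nbrs = list(G.successors(node))
        if not nbrs:
            return None                      # dead end
        fresh = [n for n in nbrs if n not in visited]
        node = random.choice(fresh if fresh else nbrs)
        visited.add(node)
        if node == target:
            return step
    return None

steps = [self_avoiding_search(G, 0, 499) for _ in range(100)]
ok = [s for s in steps if s is not None]
if ok:
    print(f"reached target in {len(ok)}/100 walks, "
          f"mean steps {sum(ok) / len(ok):.0f}")
else:
    print("target not reached in any walk")
```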

  13. Developing an in silico model of the modulation of base excision repair using methoxyamine for more targeted cancer therapeutics.

    PubMed

    Gurkan-Cavusoglu, Evren; Avadhani, Sriya; Liu, Lili; Kinsella, Timothy J; Loparo, Kenneth A

    2013-04-01

    Base excision repair (BER) is a major DNA repair pathway involved in the processing of exogenous non-bulky base damages from certain classes of cancer chemotherapy drugs as well as ionising radiation (IR). Methoxyamine (MX) is a small molecule chemical inhibitor of BER that is shown to enhance chemotherapy and/or IR cytotoxicity in human cancers. In this study, the authors have analysed the inhibitory effect of MX on the BER pathway kinetics using a computational model of the repair pathway. The inhibitory effect of MX depends on the BER efficiency. The authors have generated variable efficiency groups using different sets of protein concentrations generated by Latin hypercube sampling, and they have clustered simulation results into high, medium and low efficiency repair groups. From analysis of the inhibitory effect of MX on each of the three groups, it is found that the inhibition is most effective for high efficiency BER, and least effective for low efficiency repair.

  14. Trust models for efficient communication in Mobile Cloud Computing and their applications to e-Commerce

    NASA Astrophysics Data System (ADS)

    Pop, Florin; Dobre, Ciprian; Mocanu, Bogdan-Costel; Citoteanu, Oana-Maria; Xhafa, Fatos

    2016-11-01

    Managing the large volumes of data processed in distributed systems formed by datacentres and mobile devices has become a challenging issue with an important impact on the end-user. The management of such systems can therefore be made efficient by using uniform overlay networks interconnected through secure and efficient routing protocols. The aim of this article is to advance our previous work with a novel trust model based on a reputation metric that actively uses the social links between users and the model of interaction between them. We present and evaluate an adaptive model for trust management in structured overlay networks, based on a Mobile Cloud architecture and considering a honeycomb overlay. Such a model can be useful for supporting advanced mobile market-share e-Commerce platforms, where users collaborate and exchange reliable information about, for example, products of interest, and for supporting ad-hoc business campaigns.

  15. Mixed H2/H∞-Based Fusion Estimation for Energy-Limited Multi-Sensors in Wearable Body Networks

    PubMed Central

    Li, Chao; Zhang, Zhenjiang; Chao, Han-Chieh

    2017-01-01

    In wireless sensor networks, sensor nodes collect a large amount of data in each time period. If all of the data were transmitted to a Fusion Center (FC), the power of the sensor nodes would be depleted rapidly. The data also need filtering to remove noise. Therefore, an efficient fusion estimation model that saves the energy of the sensor nodes while maintaining high accuracy is needed. This paper proposes a novel mixed H2/H∞-based energy-efficient fusion estimation model (MHEEFE) for energy-limited Wearable Body Networks. In the proposed model, the communication cost is first reduced efficiently while keeping the estimation accuracy. Then, the parameters of the quantization method are discussed and confirmed by an optimization method with some prior knowledge. In addition, calculation methods for important parameters are investigated to make the final estimates more stable. Finally, an iteration-based weight calculation algorithm is presented, which improves the fault tolerance of the final estimate. In the simulations, the impacts of some pivotal parameters are discussed, and compared with related models, the MHEEFE shows better performance in accuracy, energy efficiency and fault tolerance. PMID:29280950

  16. Parameterizing ecosystem light use efficiency and water use efficiency to estimate maize gross primary production and evapotranspiration using MODIS EVI

    USDA-ARS?s Scientific Manuscript database

    Quantifying global carbon and water balances requires accurate estimation of gross primary production (GPP) and evapotranspiration (ET), respectively, across space and time. Models that are based on the theory of light use efficiency (LUE) and water use efficiency (WUE) have emerged as efficient met...

  17. Available pressure amplitude of linear compressor based on phasor triangle model

    NASA Astrophysics Data System (ADS)

    Duan, C. X.; Jiang, X.; Zhi, X. Q.; You, X. K.; Qiu, L. M.

    2017-12-01

    Linear compressors for cryocoolers possess the advantages of long-life operation, high efficiency, low vibration and compact structure. It is important to study the matching mechanisms between the compressor and the cold finger, which determine the working efficiency of the cryocooler. However, the output characteristics of a linear compressor are complicated since they are affected by many interacting parameters. Existing matching methods are simplified and focus mainly on compressor efficiency and output acoustic power, while neglecting the important output parameter of pressure amplitude. In this study, a phasor triangle model based on an analysis of the forces on the piston is proposed. It can be used to predict not only the output acoustic power and the efficiency, but also the pressure amplitude of the linear compressor. Calculated results agree well with experimental measurements. With this phasor triangle model, the theoretical maximum output pressure amplitude of the linear compressor can be calculated simply from a known charging pressure and operating frequency. Compared with the mechanical and electrical model of the linear compressor, the new model provides an intuitive understanding of the matching mechanism with a faster computational process. The model also explains the experimentally observed proportionality between the output pressure amplitude and the piston displacement. Further model analysis identifies this phenomenon as an expression of an unmatched compressor design. The phasor triangle model may provide an alternative method for compressor design and for matching the compressor to the cold finger.

  18. Reduced-order modeling of piezoelectric energy harvesters with nonlinear circuits under complex conditions

    NASA Astrophysics Data System (ADS)

    Xiang, Hong-Jun; Zhang, Zhi-Wei; Shi, Zhi-Fei; Li, Hong

    2018-04-01

    A fully coupled modeling approach for piezoelectric energy harvesters is developed in this work, based on available robust finite element packages and efficient reduced-order modeling techniques. First, the harvester is modeled in a finite element package, and the dynamic equilibrium equations are rebuilt by extracting the system matrices from the finite element model using built-in commands, without any additional tools. A Krylov subspace-based scheme is then applied to obtain a reduced-order model that improves simulation efficiency while preserving the key features of the harvester. Co-simulation of the reduced-order model with nonlinear energy harvesting circuits is achieved at the system level. Several examples, covering both harmonic and transient response analyses, are conducted to validate the present approach, which improves simulation efficiency by several orders of magnitude. Moreover, the parameters used in the equivalent circuit model can be conveniently obtained by the proposed eigenvector-based model order reduction technique. More importantly, this work establishes a methodology for modeling piezoelectric energy harvesters with arbitrarily complicated mechanical geometries, nonlinear circuits and complex input loads. The method can be employed by harvester designers to optimize mechanical structures or by circuit designers to develop novel energy harvesting circuits.
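
    The reduction step can be illustrated with moment matching: build an orthonormal Krylov basis V from repeated applications of K^-1 M to K^-1 f, then Galerkin-project the system matrices to V^T M V and V^T K V. The numpy sketch below works on random stand-in matrices under these assumptions, not on the authors' harvester model; because K^-1 f lies in the basis, the reduced static response reproduces the full one.

```python
import numpy as np
from scipy.linalg import solve, qr

rng = np.random.default_rng(6)
n, r = 200, 8                 # full and reduced model orders

# Hypothetical structural matrices: K stiffness (SPD), M mass, f load vector.
A = rng.normal(size=(n, n))
K = A @ A.T + n * np.eye(n)
M = np.diag(rng.uniform(1.0, 2.0, size=n))
f = rng.normal(size=n)

# Krylov basis: span{K^-1 f, (K^-1 M) K^-1 f, ...}, orthonormalized by QR.
V = [solve(K, f)]
for _ in range(r - 1):
    V.append(solve(K, M @ V[-1]))
V, _ = qr(np.column_stack(V), mode="economic")

# Galerkin projection preserves the leading moments of the response.
Mr, Kr, fr = V.T @ M @ V, V.T @ K @ V, V.T @ f

# Check: static response of the reduced model vs the full model.
x_full = solve(K, f)
x_red = V @ solve(Kr, fr)
print(f"reduced order {r}, static response error "
      f"{np.linalg.norm(x_red - x_full) / np.linalg.norm(x_full):.2e}")
```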

  19. Analysing Trends in Light-Use Efficiency and Their Influence on Seasonal CO2 Amplitude Using a Simple Land Ecosystem Model

    NASA Astrophysics Data System (ADS)

    Thomas, R.; Prentice, I. C. C.; Graven, H. D.

    2016-12-01

    A simple model for gross primary production (GPP), the P-model, is used to analyse the recent increase in the amplitude of the seasonal cycle of CO2 (ASC) at high northern latitudes. Current terrestrial biosphere models and Earth System Models generally underestimate the observed increase in ASC since 1960. The increased ASC is primarily driven by an increase in net primary productivity (NPP), rather than respiration, so models are likely underestimating increases in NPP. In a recent study of process-based terrestrial biosphere models from the Multi-scale Synthesis and Terrestrial Model Intercomparison Project (MsTMIP), we showed that the concept of light-use efficiency can be used to separate modelled NPP changes into structural and physiological components (Thomas et al, 2016). The structural component (leaf area) can be tested against observations of greening, while the physiological component (light-use efficiency) is an emergent model property. The analysis suggests that current models are capturing the increases in vegetation greenness, but underestimating the increases in light-use efficiency and NPP. We test this hypothesis using the P-model, which explicitly uses greenness data and includes the effects of rising CO2 and climate change. In the P-model, GPP is calculated using only a few equations, which are based on a strong empirical and theoretical framework, and vegetation is not separated into plant functional types. The model is driven by observed greenness, CO2, temperature and vapour pressure, and modelled photosynthetically active radiation at a monthly time-step. Photosynthetic assimilation is based on two key assumptions: the co-limitation hypothesis (electron transport- and Rubisco-limited photosynthetic rates are equal), and the least-cost hypothesis (optimal ci:ca ratio), and is limited by modelled soil moisture. We present simulated changes in GPP over the satellite period (1982-2011) in the P-model, and assess the associated changes in light-use efficiency and ASC. Our results have implications for the attribution of drivers of ecosystem change and the formulation of prognostic and diagnostic biosphere models. Thomas, R. T. et al. 2016, CO2 and greening observations indicate increasing light-use efficiency in Northern terrestrial ecosystems, Geophys Res Lett, in review.

  20. Propulsive efficiency of frog swimming with different feet and swimming patterns

    PubMed Central

    Jizhuang, Fan; Wei, Zhang; Bowen, Yuan; Gangfeng, Liu

    2017-01-01

    Aquatic and terrestrial animals have different swimming performances and mechanical efficiencies based on their different swimming methods. To explore propulsion in swimming frogs, this study calculated mechanical efficiencies based on data describing aquatic and terrestrial webbed-foot shapes and swimming patterns. First, a simplified frog model and dynamic equation were established, and the hydrodynamic forces on the foot were computed by computational fluid dynamics. Then, a two-link mechanism was used to stand in for the diverse and complicated hind legs found in different frog species, in order to simplify the input work calculation. Joint torques were derived from the virtual work principle to compute the efficiency of foot propulsion. Finally, the two foot shapes and swimming patterns were combined to compute propulsive efficiency. The aquatic frog demonstrated a propulsive efficiency (43.11%) between those of drag-based and lift-based propulsion, while the terrestrial frog's efficiency (29.58%) fell within the range of drag-based propulsion. The results illustrate that swimming pattern is the main factor determining swimming performance and efficiency. PMID:28302669

  1. Imposing constraints on parameter values of a conceptual hydrological model using baseflow response

    NASA Astrophysics Data System (ADS)

    Dunn, S. M.

    Calibration of conceptual hydrological models is frequently limited by a lack of data about the area being studied. The result is that a broad range of parameter values can be identified that give an equally good calibration to the available observations, usually of stream flow. The use of total stream flow can bias analyses towards the interpretation of rapid runoff, whereas water quality issues are more frequently associated with low-flow conditions. This paper demonstrates how model distinctions between surface and sub-surface runoff can be used to define a likelihood measure based on the sub-surface (or baseflow) response. This helps to provide more information about the model behaviour, constrain the acceptable parameter sets and reduce uncertainty in streamflow prediction. A conceptual model, DIY, is applied to two contrasting catchments in Scotland, the Ythan and the Carron Valley. Parameter ranges and envelopes of prediction are identified using criteria based on total flow efficiency, baseflow efficiency and combined efficiencies. The individual parameter ranges derived using the combined efficiency measures still cover relatively wide bands, but are better constrained for the Carron than for the Ythan. This reflects the fact that hydrological behaviour in the Carron is dominated by a much flashier surface response than in the Ythan; hence, the total flow efficiency is more strongly controlled by surface runoff in the Carron and there is a greater contrast with the baseflow efficiency. Comparisons of the predictions using different efficiency measures for the Ythan also suggest that there is a danger of confusing parameter uncertainty with data and model error if inadequate likelihood measures are defined.
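
    The efficiency criteria here are Nash-Sutcliffe efficiencies computed on different flow components. A minimal sketch of the idea: separate a slow component with a standard one-parameter recursive digital filter (a Lyne-Hollick-type filter, used here as a stand-in; in DIY the surface/sub-surface split is internal to the model) and score total-flow and baseflow NSE separately, on synthetic data.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def baseflow(q, alpha=0.925):
    """One-parameter recursive digital filter (Lyne-Hollick type):
    removes quickflow and keeps the slow (baseflow) component."""
    quick = np.zeros_like(q)
    for t in range(1, len(q)):
        quick[t] = alpha * quick[t - 1] + 0.5 * (1 + alpha) * (q[t] - q[t - 1])
        quick[t] = max(quick[t], 0.0)
    return q - quick

rng = np.random.default_rng(7)
obs = 5 + np.abs(np.cumsum(rng.normal(size=365)))     # synthetic daily flow
sim = obs * (1 + rng.normal(0, 0.1, size=365))        # imperfect model run

print(f"total-flow NSE: {nse(sim, obs):.3f}")
print(f"baseflow  NSE: {nse(baseflow(sim), baseflow(obs)):.3f}")
```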

  2. A Gossip-Based Optimistic Replication for Efficient Delay-Sensitive Streaming Using an Interactive Middleware Support System

    NASA Astrophysics Data System (ADS)

    Mavromoustakis, Constandinos X.; Karatza, Helen D.

    2010-06-01

    When resources are shared, efficiency is substantially degraded by the scarcity of the requested resources in multi-client settings, often aggravated by factors such as temporal constraints on availability or node flooding by requested replicated file chunks. Replicated file chunks should therefore be disseminated efficiently to make resources available on demand to mobile users. This work considers a middleware support system for efficient delay-sensitive streaming that uses each device's connectivity and social interactions in a cross-layered manner. Collaborative streaming is achieved through an epidemic file-chunk replication policy based on the state transitions of a chained infectious-disease model with susceptible, infected, recovered and dead states. The gossip-based stateful model dictates whether a mobile node hosts a file chunk and, when a chunk is no longer needed, purges it. The proposed model is thoroughly evaluated through experimental simulation, measuring the effective throughput Eff as a function of the packet loss parameter together with the effectiveness of the gossip-based replication policy.

  3. Electrical properties of III-Nitride LEDs: Recombination-based injection model and theoretical limits to electrical efficiency and electroluminescent cooling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David, Aurelien, E-mail: adavid@soraa.com; Hurni, Christophe A.; Young, Nathan G.

    The current-voltage characteristic and ideality factor of III-Nitride quantum well light-emitting diodes (LEDs) grown on bulk GaN substrates are investigated. At operating temperature, these electrical properties exhibit a simple behavior. A model in which only active-region recombinations have a contribution to the LED current is found to account for experimental results. The limit of LED electrical efficiency is discussed based on the model and on thermodynamic arguments, and implications for electroluminescent cooling are examined.

  4. An efficient algorithm for computing fixed length attractors based on bounded model checking in synchronous Boolean networks with biochemical applications.

    PubMed

    Li, X Y; Yang, G W; Zheng, D S; Guo, W S; Hung, W N N

    2015-04-28

    Genetic regulatory networks are the key to understanding biochemical systems. A genetic regulatory network under a given living environment can be modeled as a synchronous Boolean network. The attractors of these Boolean networks help biologists to identify determinant and stable factors. Existing methods identify attractors starting from a random initial state or by considering the entire state space simultaneously; they cannot identify fixed-length attractors directly, and their computational complexity increases exponentially with the number and length of attractors. This study uses bounded model checking to quickly locate fixed-length attractors. Based on a SAT solver, we propose a new algorithm for efficiently computing fixed-length attractors, which is better suited to large Boolean networks and networks with numerous attractors. Comparisons with the tool BooleNet and empirical experiments involving biochemical systems demonstrate the feasibility and efficiency of our approach.
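
    Bounded model checking for a fixed attractor length L unrolls the synchronous transition relation L steps, asserts that the final state equals the initial state, and asserts that all intermediate states are distinct (so no shorter period hides inside the cycle). The sketch below encodes a toy 3-gene network with the z3 solver standing in for the SAT backend; the network, length and solver choice are all assumptions for illustration.

```python
from z3 import Bools, Solver, And, Or, Not, Xor, sat, is_true

L = 2  # attractor length to search for

def transition(x, y):
    """Synchronous update of a toy 3-gene Boolean network:
       a' = b,  b' = a AND c,  c' = NOT a."""
    a, b, c = x
    a2, b2, c2 = y
    return And(a2 == b, b2 == And(a, c), c2 == Not(a))

# Unroll L steps: states s[0..L], with s[L] == s[0] (a cycle).
s = [Bools(f"a{t} b{t} c{t}") for t in range(L + 1)]
solver = Solver()
for t in range(L):
    solver.add(transition(s[t], s[t + 1]))
solver.add(And([s[L][i] == s[0][i] for i in range(3)]))
# Distinctness: every pair of states on the cycle differs somewhere.
for t1 in range(L):
    for t2 in range(t1 + 1, L):
        solver.add(Or([Xor(s[t1][i], s[t2][i]) for i in range(3)]))

if solver.check() == sat:
    m = solver.model()
    cycle = [[is_true(m[v]) for v in s[t]] for t in range(L)]
    print(f"length-{L} attractor:", cycle)
else:
    print(f"no attractor of length {L}")
```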

  5. Algorithmic design of a noise-resistant and efficient closed-loop deep brain stimulation system: A computational approach.

    PubMed

    Karamintziou, Sofia D; Custódio, Ana Luísa; Piallat, Brigitte; Polosan, Mircea; Chabardès, Stéphan; Stathis, Pantelis G; Tagaris, George A; Sakas, Damianos E; Polychronaki, Georgia E; Tsirogiannis, George L; David, Olivier; Nikita, Konstantina S

    2017-01-01

    Advances in the field of closed-loop neuromodulation call for analysis and modeling approaches capable of confronting challenges related to the complex neuronal response to stimulation and the presence of strong internal and measurement noise in neural recordings. Here we elaborate on the algorithmic aspects of a noise-resistant closed-loop subthalamic nucleus deep brain stimulation system for advanced Parkinson's disease and treatment-refractory obsessive-compulsive disorder, ensuring remarkable performance in terms of both efficiency and selectivity of stimulation, as well as in terms of computational speed. First, we propose an efficient method drawn from dynamical systems theory, for the reliable assessment of significant nonlinear coupling between beta and high-frequency subthalamic neuronal activity, as a biomarker for feedback control. Further, we present a model-based strategy through which optimal parameters of stimulation for minimum energy desynchronizing control of neuronal activity are being identified. The strategy integrates stochastic modeling and derivative-free optimization of neural dynamics based on quadratic modeling. On the basis of numerical simulations, we demonstrate the potential of the presented modeling approach to identify, at a relatively low computational cost, stimulation settings potentially associated with a significantly higher degree of efficiency and selectivity compared with stimulation settings determined post-operatively. Our data reinforce the hypothesis that model-based control strategies are crucial for the design of novel stimulation protocols at the backstage of clinical applications.

  6. Bayesian experimental design for models with intractable likelihoods.

    PubMed

    Drovandi, Christopher C; Pettitt, Anthony N

    2013-12-01

    In this paper we present a methodology for designing experiments for efficiently estimating the parameters of models with computationally intractable likelihoods. The approach combines a commonly used methodology for robust experimental design, based on Markov chain Monte Carlo sampling, with approximate Bayesian computation (ABC) to ensure that no likelihood evaluations are required. The utility function considered for precise parameter estimation is based upon the precision of the ABC posterior distribution, which we form efficiently via the ABC rejection algorithm based on pre-computed model simulations. Our focus is on stochastic models and, in particular, we investigate the methodology for Markov process models of epidemics and macroparasite population evolution. The macroparasite example involves a multivariate process and we assess the loss of information from not observing all variables.
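
    ABC rejection itself is a few lines: simulate from the prior, keep parameters whose simulated summaries land within epsilon of the observed ones, and use the precision of the retained sample as the design utility. A generic sketch with a toy Poisson stand-in model; the prior, summary statistic and tolerance are all assumptions, not the paper's epidemic models.

```python
import numpy as np

rng = np.random.default_rng(8)

def simulate(rate, n=50):
    """Toy stochastic model stand-in: Poisson counts with the given rate."""
    return rng.poisson(rate, size=n)

obs = simulate(rate=3.0)                        # pretend field data

def abc_rejection(obs, n_sims=20_000, eps=0.15):
    """Keep prior draws whose simulated summary lands within eps of the
    observed summary; no likelihood evaluation is ever needed."""
    s_obs = obs.mean()
    kept = []
    for _ in range(n_sims):
        theta = rng.uniform(0, 10)              # prior on the rate
        if abs(simulate(theta).mean() - s_obs) < eps:
            kept.append(theta)
    return np.array(kept)

post = abc_rejection(obs)
# Design utility for precise estimation: precision of the ABC posterior.
print(f"accepted {post.size} draws, posterior mean {post.mean():.2f}, "
      f"utility 1/var = {1.0 / post.var():.1f}")
```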

  7. Design of artificial neural networks using a genetic algorithm to predict collection efficiency in venturi scrubbers.

    PubMed

    Taheri, Mahboobeh; Mohebbi, Ali

    2008-08-30

    In this study, a new approach for the auto-design of neural networks, based on a genetic algorithm (GA), has been used to predict collection efficiency in venturi scrubbers. The experimental input data, including particle diameter, throat gas velocity, liquid to gas flow rate ratio, throat hydraulic diameter, pressure drop across the venturi scrubber and collection efficiency as an output, have been used to create a GA-artificial neural network (ANN) model. The testing results from the model are in good agreement with the experimental data. Comparison of the results of the GA optimized ANN model with the results from the trial-and-error calibrated ANN model indicates that the GA-ANN model is more efficient. Finally, the effects of operating parameters such as liquid to gas flow rate ratio, throat gas velocity, and particle diameter on collection efficiency were determined.
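    The GA-ANN idea can be sketched compactly: a small genetic algorithm searches over network architectures, scoring each candidate by cross-validated fit. The data below are synthetic stand-ins for the scrubber measurements, and the genome encodes only hidden-layer sizes; this is an illustration of the approach, not the authors' implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Synthetic stand-in for the scrubber data: five inputs (particle
# diameter, throat gas velocity, liquid-to-gas ratio, hydraulic
# diameter, pressure drop) and a collection-efficiency target.
X = rng.uniform(0, 1, size=(200, 5))
y = 1 - np.exp(-3 * X[:, 0] * X[:, 1] - X[:, 2])

def fitness(genome):
    """Cross-validated R^2 of an MLP whose two hidden-layer sizes are
    encoded in the genome."""
    sizes = tuple(int(n) for n in genome)
    model = MLPRegressor(hidden_layer_sizes=sizes, max_iter=2000,
                         random_state=0)
    return cross_val_score(model, X, y, cv=3).mean()

# Tiny GA: rank by fitness, keep the best half, mutate to refill.
pop = [rng.integers(2, 20, size=2) for _ in range(8)]
for generation in range(5):
    scores = [fitness(g) for g in pop]
    ranked = [g for _, g in sorted(zip(scores, pop),
                                   key=lambda pair: -pair[0])]
    parents = ranked[:4]
    children = [np.clip(p + rng.integers(-3, 4, size=2), 2, 30)
                for p in parents]
    pop = parents + children

best = max(pop, key=fitness)
print("best architecture found:", tuple(int(n) for n in best))
```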

  8. The unlikely high efficiency of a molecular motor based on active motion

    NASA Astrophysics Data System (ADS)

    Ebeling, W.

    2015-07-01

    The efficiency of a simple model of a motor converting chemical into mechanical energy is studied analytically. The model motor shows interesting properties that correspond qualitatively to motors investigated in experiments. The efficiency increases with the load and may, for low loss, reach values near 100 percent in a narrow regime of optimal load. It is shown that the optimal load and the maximal efficiency depend on the dimensionless loss parameter through universal power laws. Stochastic effects decrease the stability of high-efficiency motor regimes and make them unlikely. Numerical studies show efficiencies below the theoretical optimum and demonstrate that special ratchet profiles may stabilize efficient regimes.

  9. Applicability of the polynomial chaos expansion method for personalization of a cardiovascular pulse wave propagation model.

    PubMed

    Huberts, W; Donders, W P; Delhaas, T; van de Vosse, F N

    2014-12-01

    Patient-specific modeling requires model personalization, which can be achieved in an efficient manner by parameter fixing and parameter prioritization. An efficient variance-based method uses generalized polynomial chaos expansion (gPCE), but it has not been applied in the context of model personalization, nor has it ever been compared with standard variance-based methods for models with many parameters. In this work, we apply the gPCE method to a previously reported pulse wave propagation model and compare the conclusions for model personalization with those of a reference analysis performed with Saltelli's efficient Monte Carlo method. We furthermore differentiate two approaches for obtaining the expansion coefficients: one based on spectral projection (gPCE-P) and one based on least squares regression (gPCE-R). It was found that in general the gPCE yields conclusions similar to the reference analysis but at much lower cost, as long as the polynomial metamodel does not contain unnecessarily high-order terms. Furthermore, the gPCE-R approach generally yielded better results than gPCE-P. The weak performance of gPCE-P can be attributed to the assessment of the expansion coefficients using the Smolyak algorithm, which might be hampered by the high number of model parameters and/or by possible non-smoothness in the output space. Copyright © 2014 John Wiley & Sons, Ltd.
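    A minimal sketch of the gPCE-R (regression) route is shown below for a hypothetical two-parameter model with standard-normal inputs: sample the parameters, evaluate the model, fit the expansion coefficients by least squares, and read variance-based sensitivities off the coefficients.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial

rng = np.random.default_rng(3)

# Hypothetical stand-in for the pulse wave propagation model: a scalar
# output of two standard-normal parameters.
def model(z):
    return np.sin(z[0]) + 0.5 * z[1] ** 2

def herme(n, x):
    """Probabilists' Hermite polynomial He_n(x)."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermeval(x, c)

# Total-degree-2 basis in two variables: multi-indices (i, j), i + j <= 2.
alphas = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]

# gPCE-R: sample parameters, evaluate the model, solve least squares.
Z = rng.standard_normal((200, 2))
A = np.array([[herme(i, z[0]) * herme(j, z[1]) for (i, j) in alphas]
              for z in Z])
y = np.array([model(z) for z in Z])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Variance decomposition: Var = sum_{alpha != 0} c_a^2 * ||He_a||^2,
# with ||He_i||^2 = i! under the standard normal weight. Main-effect
# Sobol indices group the terms that involve only one variable.
norms = np.array([factorial(i) * factorial(j) for (i, j) in alphas])
var_terms = coef ** 2 * norms
total_var = var_terms[1:].sum()
S1 = sum(v for (i, j), v in zip(alphas, var_terms) if i > 0 and j == 0) / total_var
S2 = sum(v for (i, j), v in zip(alphas, var_terms) if j > 0 and i == 0) / total_var
print("main-effect Sobol indices:", round(S1, 3), round(S2, 3))
```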

  10. Improving actuation efficiency through variable recruitment hydraulic McKibben muscles: modeling, orderly recruitment control, and experiments.

    PubMed

    Meller, Michael; Chipka, Jordan; Volkov, Alexander; Bryant, Matthew; Garcia, Ephrahim

    2016-11-03

    Hydraulic control systems have become increasingly popular as the means of actuation for human-scale legged robots and assistive devices. One of the biggest limitations of these systems is their run time untethered from a power source. One way to increase endurance is to improve actuation efficiency. We investigate reducing servovalve throttling losses by using a selective recruitment artificial muscle bundle composed of three motor units. Each motor unit is made up of a pair of hydraulic McKibben muscles connected to one servovalve. The pressure and recruitment state of the artificial muscle bundle can be adjusted to match the load in an efficient manner, much like the firing rate and total number of recruited motor units are adjusted in skeletal muscle. A volume-based effective initial braid angle is used in the model of each recruitment level. This semi-empirical model is utilized to predict the efficiency gains of the proposed variable recruitment actuation scheme versus a throttling-only approach. A real-time orderly recruitment controller with pressure-based thresholds is developed. This controller is used to experimentally validate the model-predicted efficiency gains of recruitment on a robot arm. The results show that utilizing variable recruitment allows for much higher efficiencies over a broader operating envelope.
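    The orderly recruitment logic with pressure-based thresholds can be sketched as a simple hysteresis rule; all threshold values and unit counts below are hypothetical, not the paper's calibrated settings.

```python
# Hypothetical orderly recruitment policy: recruit another motor unit
# when per-unit pressure demand nears saturation, release one when it
# falls below a lower threshold (hysteresis avoids chattering).
P_MAX = 550.0           # kPa, saturation pressure of one motor unit
P_UP = 0.95 * P_MAX     # recruit another unit above this per-unit demand
P_DOWN = 0.40 * P_MAX   # release a unit below this per-unit demand
N_UNITS = 3

def update_recruitment(n_active, p_demand):
    """Return the new number of active motor units for a demanded
    bundle pressure; active units share the load equally."""
    per_unit = p_demand / max(n_active, 1)
    if per_unit > P_UP and n_active < N_UNITS:
        n_active += 1
    elif per_unit < P_DOWN and n_active > 1:
        n_active -= 1
    return n_active

n = 1
for demand in [100, 300, 560, 900, 1200, 600, 250, 120]:
    n = update_recruitment(n, demand)
    print(f"demand {demand:5.0f} kPa -> {n} unit(s), "
          f"{demand / n:6.1f} kPa per unit")
```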

  11. Easi-CRISPR for creating knock-in and conditional knockout mouse models using long ssDNA donors.

    PubMed

    Miura, Hiromi; Quadros, Rolen M; Gurumurthy, Channabasavaiah B; Ohtsuka, Masato

    2018-01-01

    CRISPR/Cas9-based genome editing can easily generate knockout mouse models by disrupting the gene sequence, but its efficiency for creating models that require either insertion of exogenous DNA (knock-in) or replacement of genomic segments is very poor. The majority of mouse models used in research involve knock-in (reporters or recombinases) or gene replacement (e.g., conditional knockout alleles containing exons flanked by LoxP sites). A few methods for creating such models have been reported that use double-stranded DNA as donors, but their efficiency is typically 1-10% and therefore not suitable for routine use. We recently demonstrated that long single-stranded DNAs (ssDNAs) serve as very efficient donors, both for insertion and for gene replacement. We call this method efficient additions with ssDNA inserts-CRISPR (Easi-CRISPR) because it is a highly efficient technology (efficiency is typically 30-60% and reaches as high as 100% in some cases). The protocol takes ∼2 months to generate the founder mice.

  12. Design synthesis and optimization of permanent magnet synchronous machines based on computationally-efficient finite element analysis

    NASA Astrophysics Data System (ADS)

    Sizov, Gennadi Y.

    In this dissertation, a model-based multi-objective optimal design of permanent magnet ac machines, supplied by sine-wave current regulated drives, is developed and implemented. The design procedure uses an efficient electromagnetic finite element-based solver to accurately model nonlinear material properties and complex geometric shapes associated with magnetic circuit design. Application of an electromagnetic finite element-based solver allows for accurate computation of intricate performance parameters and characteristics. The first contribution of this dissertation is the development of a rapid computational method that allows accurate and efficient exploration of large multi-dimensional design spaces in search of optimum design(s). The computationally efficient finite element-based approach developed in this work provides a framework of tools that allow rapid analysis of synchronous electric machines operating under steady-state conditions. In the developed modeling approach, major steady-state performance parameters such as winding flux linkages and voltages; average, cogging and ripple torques; stator core flux densities; core losses; efficiencies; and saturated machine winding inductances are calculated with minimum computational effort. In addition, the method includes means for rapid estimation of distributed stator forces and three-dimensional effects of stator and/or rotor skew on the performance of the machine. The second contribution of this dissertation is the development of a design synthesis and optimization method based on a differential evolution algorithm. The approach relies on the developed finite element-based modeling method for electromagnetic analysis and is able to tackle large-scale multi-objective design problems using modest computational resources. Overall, computational time savings of up to two orders of magnitude are achievable when compared to current state-of-the-art methods. These computational savings allow one to expand the optimization problem to achieve more complex and comprehensive design objectives. The method is used in the design process of several interior permanent magnet industrial motors. The presented case studies demonstrate that the developed finite element-based approach practically eliminates the need for less accurate analytical and lumped-parameter equivalent circuit models in electric machine design optimization. The design process and experimental validation of the case-study machines are detailed in the dissertation.
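    The optimization stage can be illustrated with SciPy's differential evolution driver applied to a stand-in objective; in the dissertation the objective is evaluated by the computationally efficient finite element solver, which the analytic toy function below merely imitates.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical stand-in for the finite-element machine evaluation: maps
# normalized design variables (e.g., magnet thickness, slot opening,
# tooth width) to a scalarized objective combining loss and torque ripple.
def machine_objective(x):
    loss = (x[0] - 0.6) ** 2 + (x[1] - 0.3) ** 2
    ripple = 0.1 * np.sin(8 * x[2]) ** 2
    return loss + ripple

bounds = [(0, 1), (0, 1), (0, 1)]
result = differential_evolution(machine_objective, bounds, seed=0,
                                maxiter=200, tol=1e-8)
print("best design:", result.x.round(3), "objective:", round(result.fun, 5))
```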

  13. Evaluating Technical Efficiency of Nursing Care Using Data Envelopment Analysis and Multilevel Modeling.

    PubMed

    Min, Ari; Park, Chang Gi; Scott, Linda D

    2016-05-23

    Data envelopment analysis (DEA) is an advantageous non-parametric technique for evaluating relative efficiency of performance. This article describes use of DEA to estimate technical efficiency of nursing care and demonstrates the benefits of using multilevel modeling to identify characteristics of efficient facilities in the second stage of analysis. Data were drawn from LTCFocUS.org, a secondary database including nursing home data from the Online Survey Certification and Reporting System and Minimum Data Set. In this example, 2,267 non-hospital-based nursing homes were evaluated. Use of DEA with nurse staffing levels as inputs and quality of care as outputs allowed estimation of the relative technical efficiency of nursing care in these facilities. In the second stage, multilevel modeling was applied to identify organizational factors contributing to technical efficiency. Use of multilevel modeling avoided biased estimation of findings for nested data and provided comprehensive information on differences in technical efficiency among counties and states. © The Author(s) 2016.
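    For reference, the first-stage DEA computation reduces to one small linear program per decision-making unit. The sketch below solves the input-oriented, constant-returns (CCR) envelopment model with made-up data as a generic DEA instance; the article's actual inputs are nurse staffing levels and its outputs are quality-of-care measures.

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR (constant returns to scale) envelopment model.
# Rows are DMUs (e.g., nursing homes); the two inputs and one output
# below are made-up numbers for illustration.
X = np.array([[5.0, 4.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0]])  # inputs
Y = np.array([[1.0], [1.0], [1.0], [1.0]])                      # outputs

def ccr_efficiency(o):
    """min theta  s.t.  sum_j lam_j x_j <= theta * x_o,
                        sum_j lam_j y_j >= y_o,  lam >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(1 + n)
    c[0] = 1.0                      # decision vector z = [theta, lam]
    A_ub, b_ub = [], []
    for i in range(m):              # inputs: X^T lam - theta * x_io <= 0
        A_ub.append(np.concatenate(([-X[o, i]], X[:, i])))
        b_ub.append(0.0)
    for r in range(s):              # outputs: -Y^T lam <= -y_ro
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[o, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                  bounds=[(0, None)] * (1 + n))
    return res.fun

for o in range(len(X)):
    print(f"DMU {o}: technical efficiency = {ccr_efficiency(o):.3f}")
```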

  14. Application of constraint-based satellite mission planning model in forest fire monitoring

    NASA Astrophysics Data System (ADS)

    Guo, Bingjun; Wang, Hongfei; Wu, Peng

    2017-10-01

    In this paper, a constraint-based satellite mission planning model is established on the principle of constraint satisfaction. It includes targets, requests, observations, satellites, payloads and other elements, linked by constraints. The optimization goal of the model is to make full use of time and resources and to improve the efficiency of target observation. A greedy algorithm is used to solve the model and produce the observation plan and the data transmission plan. Two simulation experiments were designed and carried out: routine monitoring of global forest fires and emergency monitoring of forest fires in Australia. The simulation results show that the model and algorithm perform well and that the model has good emergency response capability: efficient and reasonable plans can be worked out to meet users' needs in complex cases involving multiple payloads, multiple targets and variable priorities.
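    The greedy planning step can be sketched as follows: walk through requests in priority order and book the earliest feasible slot inside each request's visibility window. The request data, single-timeline assumption and slot logic below are hypothetical simplifications of the paper's multi-payload model.

```python
# Requests carry a priority and a visibility window; the planner walks
# through them in priority order and books the earliest non-overlapping
# slot on a single (hypothetical) payload timeline.
requests = [  # (name, priority, window_start, window_end, duration)
    ("fire_AUS", 9, 0, 50, 10),
    ("fire_SIB", 7, 20, 60, 15),
    ("routine_1", 3, 0, 100, 20),
    ("routine_2", 3, 40, 100, 20),
]

booked = []  # list of (start, end, name)

def earliest_slot(w0, w1, dur):
    """First gap of length dur inside [w0, w1] given current bookings."""
    t = w0
    for s, e, _ in sorted(booked):
        if t + dur <= s:
            break
        t = max(t, e)
    return t if t + dur <= w1 else None

for name, prio, w0, w1, dur in sorted(requests, key=lambda r: -r[1]):
    t = earliest_slot(w0, w1, dur)
    if t is not None:
        booked.append((t, t + dur, name))

print(sorted(booked))
```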

  15. Evaluation of railway transportation efficiency based on super-cross efficiency

    NASA Astrophysics Data System (ADS)

    Kuang, Xiuyuan

    2018-01-01

    The efficiency of railway transportation is an important index for measuring the development of railway transportation enterprises, and it has become a hot issue in the study of railway development. Data envelopment analysis (DEA) has been widely applied to railway efficiency analysis. In this paper, the BCC model and the super-cross efficiency model are constructed using DEA theory, taking 18 railway bureaus as the research objects, with operating mileage, number of employees, number of locomotives and average daily loadings as input indicators and passenger turnover, freight turnover and transport income as output indicators; comprehensive efficiency, pure technical efficiency and scale efficiency are then calculated and evaluated. We find that the super-cross efficiency results are more in line with the actual situation.

  16. The Efficiency of Split Panel Designs in an Analysis of Variance Model

    PubMed Central

    Wang, Wei-Guo; Liu, Hai-Jun

    2016-01-01

    We consider the efficiency of split panel designs in analysis of variance models, that is, the determination of the optimal proportion of cross-section series in the overall sample that minimizes the variances of best linear unbiased estimators of linear combinations of the parameters. An orthogonal matrix is constructed to obtain a manageable expression for the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of the interest and budget parameters. Additionally, the efficiency of an estimator based on the split panel relative to an estimator based on a pure panel or a pure cross-section is presented; the analysis shows that the gains from a split panel can be quite substantial. We further consider the efficiency of a split panel design given a budget and transform the problem into a constrained nonlinear integer program, for which an efficient algorithm is designed. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm's efficiency with an empirical example concerning monthly consumer expenditure on food in the Netherlands in 1985, and efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447

  17. Effect of the depth base along the vertical on the electrical parameters of a vertical parallel silicon solar cell in open and short circuit

    NASA Astrophysics Data System (ADS)

    Sahin, Gokhan; Kerimli, Genber

    2018-03-01

    This article presents a modeling study of the effect of base depth on the electrical parameters of a vertical parallel silicon solar cell. After solving the continuity equation for excess minority carriers, we calculated electrical parameters such as the photocurrent density, photovoltage, series and shunt resistances, diffusion capacitance, electric power, fill factor and photovoltaic conversion efficiency. We determined the maximum electric power, the operating point of the solar cell and the photovoltaic conversion efficiency as functions of the depth z in the base. We show that the photocurrent density decreases with the depth z and that the photovoltage decreases as the base depth increases. Series and shunt resistances were deduced from the electrical model and are influenced by the base depth. The diffusion capacitance also decreases with the depth z of the base.

  18. Efficient Authorization of Rich Presence Using Secure and Composed Web Services

    NASA Astrophysics Data System (ADS)

    Li, Li; Chou, Wu

    This paper presents an extended Role-Based Access Control (RBAC) model for efficient authorization of rich presence using secure web services composed with an abstract presence data model. Following the information symmetry principle, the standard RBAC model is extended to support context sensitive social relations and cascaded authority. In conjunction with the extended RBAC model, we introduce an extensible presence architecture prototype using WS-Security and WS-Eventing to secure rich presence information exchanges based on PKI certificates. Applications and performance measurements of our presence system are presented to show that the proposed RBAC framework for presence and collaboration is well suited for real-time communication and collaboration.

  19. Model-based optimal design of experiments - semidefinite and nonlinear programming formulations

    PubMed Central

    Duarte, Belmiro P.M.; Wong, Weng Kee; Oliveira, Nuno M.C.

    2015-01-01

    We use mathematical programming tools, such as Semidefinite Programming (SDP) and Nonlinear Programming (NLP)-based formulations to find optimal designs for models used in chemistry and chemical engineering. In particular, we employ local design-based setups in linear models and a Bayesian setup in nonlinear models to find optimal designs. In the latter case, Gaussian Quadrature Formulas (GQFs) are used to evaluate the optimality criterion averaged over the prior distribution for the model parameters. Mathematical programming techniques are then applied to solve the optimization problems. Because such methods require the design space be discretized, we also evaluate the impact of the discretization scheme on the generated design. We demonstrate the techniques for finding D–, A– and E–optimal designs using design problems in biochemical engineering and show the method can also be directly applied to tackle additional issues, such as heteroscedasticity in the model. Our results show that the NLP formulation produces highly efficient D–optimal designs but is computationally less efficient than that required for the SDP formulation. The efficiencies of the generated designs from the two methods are generally very close and so we recommend the SDP formulation in practice. PMID:26949279
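    A minimal sketch of the NLP formulation is shown below for a quadratic regression model on a discretized one-dimensional design space: maximize the log-determinant of the information matrix over design weights, subject to the weights forming a probability distribution. The model and discretization are illustrative, not the paper's biochemical case studies.

```python
import numpy as np
from scipy.optimize import minimize

# D-optimal design for y = b0 + b1*x + b2*x^2 on a discretized design
# space: choose weights w_i on candidate points x_i to maximize
# log det sum_i w_i f(x_i) f(x_i)^T  subject to  w >= 0, sum w = 1.
xs = np.linspace(-1, 1, 21)                           # candidate points
F = np.column_stack([np.ones_like(xs), xs, xs ** 2])  # rows f(x_i)^T

def neg_logdet(w):
    M = F.T @ (w[:, None] * F)         # Fisher information matrix
    sign, logdet = np.linalg.slogdet(M)
    return -logdet if sign > 0 else 1e10

cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
w0 = np.full(len(xs), 1.0 / len(xs))
res = minimize(neg_logdet, w0, method="SLSQP",
               bounds=[(0, 1)] * len(xs), constraints=cons)

# The classical answer for this model is support {-1, 0, 1} with
# weight 1/3 each, which the numerical solution should approximate.
keep = res.x > 1e-3
print("support points:", xs[keep], "weights:", res.x[keep].round(3))
```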

  20. Model-based optimal design of experiments - semidefinite and nonlinear programming formulations.

    PubMed

    Duarte, Belmiro P M; Wong, Weng Kee; Oliveira, Nuno M C

    2016-02-15

    We use mathematical programming tools, such as Semidefinite Programming (SDP) and Nonlinear Programming (NLP)-based formulations to find optimal designs for models used in chemistry and chemical engineering. In particular, we employ local design-based setups in linear models and a Bayesian setup in nonlinear models to find optimal designs. In the latter case, Gaussian Quadrature Formulas (GQFs) are used to evaluate the optimality criterion averaged over the prior distribution for the model parameters. Mathematical programming techniques are then applied to solve the optimization problems. Because such methods require the design space be discretized, we also evaluate the impact of the discretization scheme on the generated design. We demonstrate the techniques for finding D-, A- and E-optimal designs using design problems in biochemical engineering and show the method can also be directly applied to tackle additional issues, such as heteroscedasticity in the model. Our results show that the NLP formulation produces highly efficient D-optimal designs but is computationally less efficient than that required for the SDP formulation. The efficiencies of the generated designs from the two methods are generally very close and so we recommend the SDP formulation in practice.

  1. Assessment of Export Efficiency Equations in the Southern Ocean Applied to Satellite-Based Net Primary Production

    NASA Astrophysics Data System (ADS)

    Arteaga, Lionel; Haëntjens, Nils; Boss, Emmanuel; Johnson, Kenneth S.; Sarmiento, Jorge L.

    2018-04-01

    Carbon export efficiency (e-ratio) is defined as the fraction of organic carbon fixed through net primary production (NPP) that is exported out of the surface productive layer of the ocean. Recent observations for the Southern Ocean suggest a negative e-ratio versus NPP relationship and a reduced dependency of export efficiency on temperature, different from the global domain. In this study, we complement information from a passive satellite sensor with novel space-based lidar observations of ocean particulate backscattering to infer NPP over the entire annual cycle and estimate Southern Ocean export rates from five different empirical models of export efficiency. Inferred Southern Ocean NPP falls within the range of previous studies, with a mean estimate of 15.8 (± 3.9) Pg C yr-1 for the region south of 30°S during the 2005-2016 period. We find that an export efficiency model that accounts for silica (Si) ballasting, which is constrained by observations with a negative e-ratio versus NPP relationship, shows the best agreement with in situ-based estimates of annual net community production (annual export of 2.7 ± 0.6 Pg C yr-1 south of 30°S). By contrast, models based on the analysis of global observations with a positive e-ratio versus NPP relationship predict annually integrated export rates that are ~33% higher than the Si-dependent model. Our results suggest that accounting for Si-induced ballasting is important for the estimation of carbon export in the Southern Ocean.

  2. Robust network data envelopment analysis approach to evaluate the efficiency of regional electricity power networks under uncertainty.

    PubMed

    Fathollah Bayati, Mohsen; Sadjadi, Seyed Jafar

    2017-01-01

    In this paper, new Network Data Envelopment Analysis (NDEA) models are developed to evaluate the efficiency of regional electricity power networks. The primary objective is to account for perturbations in the data by developing new NDEA models based on an adaptation of robust optimization methodology. Furthermore, the efficiency of entire electricity power networks, comprising the generation, transmission and distribution stages, is measured. While DEA has been widely used to evaluate the efficiency of individual components of electricity power networks during the past two decades, no previous study has evaluated the efficiency of electricity power networks as a whole. The proposed models are applied to evaluate the efficiency of 16 regional electricity power networks in Iran, and the effect of data uncertainty is investigated. The results are compared with those of traditional network DEA and parametric SFA methods, and the validity and verification of the proposed models are examined. The preliminary results indicate that the proposed models are more reliable than the traditional network DEA model.

  3. Robust network data envelopment analysis approach to evaluate the efficiency of regional electricity power networks under uncertainty

    PubMed Central

    Fathollah Bayati, Mohsen; Sadjadi, Seyed Jafar

    2017-01-01

    In this paper, new Network Data Envelopment Analysis (NDEA) models are developed to evaluate the efficiency of regional electricity power networks. The primary objective is to account for perturbations in the data by developing new NDEA models based on an adaptation of robust optimization methodology. Furthermore, the efficiency of entire electricity power networks, comprising the generation, transmission and distribution stages, is measured. While DEA has been widely used to evaluate the efficiency of individual components of electricity power networks during the past two decades, no previous study has evaluated the efficiency of electricity power networks as a whole. The proposed models are applied to evaluate the efficiency of 16 regional electricity power networks in Iran, and the effect of data uncertainty is investigated. The results are compared with those of traditional network DEA and parametric SFA methods, and the validity and verification of the proposed models are examined. The preliminary results indicate that the proposed models are more reliable than the traditional network DEA model. PMID:28953900

  4. An efficient formulation of robot arm dynamics for control and computer simulation

    NASA Astrophysics Data System (ADS)

    Lee, C. S. G.; Nigam, R.

    This paper describes an efficient formulation of the dynamic equations of motion of industrial robots based on the Lagrange formulation of d'Alembert's principle. This formulation, as applied to a PUMA robot arm, results in a set of closed form second order differential equations with cross product terms. They are not as efficient in computation as those formulated by the Newton-Euler method, but provide a better analytical model for control analysis and computer simulation. Computational complexities of this dynamic model together with other models are tabulated for discussion.

  5. Model of a thin film optical fiber fluorosensor

    NASA Technical Reports Server (NTRS)

    Egalon, Claudio O.; Rogowski, Robert S.

    1991-01-01

    The efficiency of core-light injection from sources in the cladding of an optical fiber is modeled analytically by means of the exact field solution of a step-profile fiber. The analysis is based on the techniques by Marcuse (1988) in which the sources are treated as infinitesimal electric currents with random phase and orientation that excite radiation fields and bound modes. Expressions are developed based on an infinite cladding approximation which yield the power efficiency for a fiber coated with fluorescent sources in the core/cladding interface. Marcuse's results are confirmed for the case of a weakly guiding cylindrical fiber with fluorescent sources uniformly distributed in the cladding, and the power efficiency is shown to be practically constant for variable wavelengths and core radii. The most efficient fibers have the thin film located at the core/cladding boundary, and fibers with larger differences in the indices of refraction are shown to be the most efficient.

  6. Daily water level forecasting using wavelet decomposition and artificial intelligence techniques

    NASA Astrophysics Data System (ADS)

    Seo, Youngmin; Kim, Sungwon; Kisi, Ozgur; Singh, Vijay P.

    2015-01-01

    Reliable water level forecasting for reservoir inflow is essential for reservoir operation. The objective of this paper is to develop and apply two hybrid models for daily water level forecasting and to investigate their accuracy. These two hybrid models are the wavelet-based artificial neural network (WANN) and the wavelet-based adaptive neuro-fuzzy inference system (WANFIS). Wavelet decomposition is employed to decompose an input time series into approximation and detail components, and the decomposed time series are used as inputs to artificial neural networks (ANN) and adaptive neuro-fuzzy inference systems (ANFIS) for the WANN and WANFIS models, respectively. Based on statistical performance indexes, the WANN and WANFIS models are found to produce better efficiency than the ANN and ANFIS models, with WANFIS7-sym10 yielding the best performance among all models. It is found that wavelet decomposition improves the accuracy of ANN and ANFIS. This study also evaluates the accuracy of the WANN and WANFIS models for different mother wavelets, including Daubechies, Symmlet and Coiflet wavelets. Model performance is found to depend on the input sets and mother wavelets, and decomposition using the mother wavelet db10 can further improve the efficiency of the ANN and ANFIS models. The results indicate that the conjunction of wavelet decomposition and artificial intelligence models can be a useful tool for accurately forecasting daily water levels and can yield better efficiency than conventional forecasting models.
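    A minimal WANN-style sketch is given below, assuming the PyWavelets and scikit-learn packages: the series is decomposed with db10, each approximation or detail component is reconstructed to full length, and lagged component values feed a small neural network. The synthetic series and lag structure are stand-ins for the study's reservoir data and input selection.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

# Synthetic daily water-level series (a stand-in for reservoir data).
t = np.arange(600)
level = 10 + np.sin(2 * np.pi * t / 365) + 0.3 * rng.standard_normal(600)

# Decompose with db10 and reconstruct each approximation/detail
# component back to full length.
coeffs = pywt.wavedec(level, "db10", level=3)
components = []
for k in range(len(coeffs)):
    sel = [c if i == k else np.zeros_like(c) for i, c in enumerate(coeffs)]
    components.append(pywt.waverec(sel, "db10")[: len(level)])

# WANN-style inputs: yesterday's value of every component predicts
# today's water level.
X = np.column_stack([c[:-1] for c in components])
y = level[1:]
split = 500
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000,
                     random_state=0)
model.fit(X[:split], y[:split])
print("test R^2:", round(model.score(X[split:], y[split:]), 3))
```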

  7. Multiple quay cranes scheduling for double cycling in container terminals

    PubMed Central

    Chu, Yanling; Zhang, Xiaoju; Yang, Zhongzhen

    2017-01-01

    Double cycling is an efficient tool to increase the efficiency of quay crane (QC) in container terminals. In this paper, an optimization model for double cycling is developed to optimize the operation sequence of multiple QCs. The objective is to minimize the makespan of the ship handling operation considering the ship balance constraint. To solve the model, an algorithm based on Lagrangian relaxation is designed. Finally, we compare the efficiency of the Lagrangian relaxation based heuristic with the branch-and-bound method and a genetic algorithm using instances of different sizes. The results of numerical experiments indicate that the proposed model can effectively reduce the unloading and loading times of QCs. The effects of the ship balance constraint are more notable when the number of QCs is high. PMID:28692699

  8. Multiple quay cranes scheduling for double cycling in container terminals.

    PubMed

    Chu, Yanling; Zhang, Xiaoju; Yang, Zhongzhen

    2017-01-01

    Double cycling is an efficient tool to increase the efficiency of quay crane (QC) in container terminals. In this paper, an optimization model for double cycling is developed to optimize the operation sequence of multiple QCs. The objective is to minimize the makespan of the ship handling operation considering the ship balance constraint. To solve the model, an algorithm based on Lagrangian relaxation is designed. Finally, we compare the efficiency of the Lagrangian relaxation based heuristic with the branch-and-bound method and a genetic algorithm using instances of different sizes. The results of numerical experiments indicate that the proposed model can effectively reduce the unloading and loading times of QCs. The effects of the ship balance constraint are more notable when the number of QCs is high.

  9. BCM: toolkit for Bayesian analysis of Computational Models using samplers.

    PubMed

    Thijssen, Bram; Dijkstra, Tjeerd M H; Heskes, Tom; Wessels, Lodewyk F A

    2016-10-21

    Computational models in biology are characterized by a large degree of uncertainty. This uncertainty can be analyzed with Bayesian statistics, however, the sampling algorithms that are frequently used for calculating Bayesian statistical estimates are computationally demanding, and each algorithm has unique advantages and disadvantages. It is typically unclear, before starting an analysis, which algorithm will perform well on a given computational model. We present BCM, a toolkit for the Bayesian analysis of Computational Models using samplers. It provides efficient, multithreaded implementations of eleven algorithms for sampling from posterior probability distributions and for calculating marginal likelihoods. BCM includes tools to simplify the process of model specification and scripts for visualizing the results. The flexible architecture allows it to be used on diverse types of biological computational models. In an example inference task using a model of the cell cycle based on ordinary differential equations, BCM is significantly more efficient than existing software packages, allowing more challenging inference problems to be solved. BCM represents an efficient one-stop-shop for computational modelers wishing to use sampler-based Bayesian statistics.

  10. Optimising the efficiency of pulsed diode pumped Yb:YAG laser amplifiers for ns pulse generation.

    PubMed

    Ertel, K; Banerjee, S; Mason, P D; Phillips, P J; Siebold, M; Hernandez-Gomez, C; Collier, J C

    2011-12-19

    We present a numerical model of a pulsed, diode-pumped Yb:YAG laser amplifier for the generation of high energy ns-pulses. This model is used to explore how optical-to-optical efficiency depends on factors such as pump duration, pump spectrum, pump intensity, doping concentration, and operating temperature. We put special emphasis on finding ways to achieve high efficiency within the practical limitations imposed by real-world laser systems, such as limited pump brightness and limited damage fluence. We show that a particularly advantageous way of improving efficiency within those constraints is operation at cryogenic temperature. Based on the numerical findings we present a concept for a scalable amplifier based on an end-pumped, cryogenic, gas-cooled multi-slab architecture.

  11. Evaluation of input output efficiency of oil field considering undesirable output —A case study of sandstone reservoir in Xinjiang oilfield

    NASA Astrophysics Data System (ADS)

    Zhang, Shuying; Wu, Xuquan; Li, Deshan; Xu, Yadong; Song, Shulin

    2017-06-01

    Based on input and output data for a sandstone reservoir in the Xinjiang oilfield, the SBM-Undesirable model is used to study the technical efficiency of each block. The results show that using the SBM-Undesirable model for efficiency evaluation avoids the defects caused by the radial and angular assumptions of traditional DEA models and improves the accuracy of the evaluation. By analyzing the projections of the oil blocks, we find that each block suffers from input redundancy, desirable-output shortfall and undesirable output, and that production efficiency differs considerably across blocks. The way to improve the input-output efficiency of the oilfield is to optimize the allocation of resources, reduce undesirable outputs and increase desirable outputs.

  12. Critical evaluation and modeling of algal harvesting using dissolved air flotation. DAF Algal Harvesting Modeling

    DOE PAGES

    Zhang, Xuezhi; Hewson, John C.; Amendola, Pasquale; ...

    2014-07-14

    In our study, Chlorella zofingiensis harvesting by dissolved air flotation (DAF) was critically evaluated with regard to algal concentration, culture conditions, type and dosage of coagulants, and recycle ratio. Harvesting efficiency increased with coagulant dosage and leveled off at 81%, 86%, 91%, and 87% when chitosan, Al3+, Fe3+, and cetyl trimethylammonium bromide (CTAB) were used at dosages of 70, 180, 250, and 500 mg g-1, respectively. The DAF efficiency-coagulant dosage relationship changed with algal culture conditions. Evaluation of the influence of initial algal concentration and recycle ratio revealed that, under conditions typical for algal harvesting, the number of bubbles can be insufficient. A DAF algal harvesting model was developed to explain this observation by introducing mass-based floc size distributions and a bubble limitation into the white water blanket model. Moreover, the model revealed the importance of coagulation in increasing floc-bubble collision and attachment, and the preferential interaction of bubbles with larger flocs, which limits the availability of bubbles to smaller flocs. The harvesting efficiencies predicted by the model agree reasonably with experimental data obtained at different Al3+ dosages, algal concentrations, and recycle ratios. Based on this modeling, critical parameters for efficient algal harvesting were identified.

  13. Critical evaluation and modeling of algal harvesting using dissolved air flotation. DAF Algal Harvesting Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xuezhi; Hewson, John C.; Amendola, Pasquale

    In our study, Chlorella zofingiensis harvesting by dissolved air flotation (DAF) was critically evaluated with regard to algal concentration, culture conditions, type and dosage of coagulants, and recycle ratio. Harvesting efficiency increased with coagulant dosage and leveled off at 81%, 86%, 91%, and 87% when chitosan, Al3+, Fe3+, and cetyl trimethylammonium bromide (CTAB) were used at dosages of 70, 180, 250, and 500 mg g-1, respectively. The DAF efficiency-coagulant dosage relationship changed with algal culture conditions. Evaluation of the influence of initial algal concentration and recycle ratio revealed that, under conditions typical for algal harvesting, the number of bubbles can be insufficient. A DAF algal harvesting model was developed to explain this observation by introducing mass-based floc size distributions and a bubble limitation into the white water blanket model. Moreover, the model revealed the importance of coagulation in increasing floc-bubble collision and attachment, and the preferential interaction of bubbles with larger flocs, which limits the availability of bubbles to smaller flocs. The harvesting efficiencies predicted by the model agree reasonably with experimental data obtained at different Al3+ dosages, algal concentrations, and recycle ratios. Based on this modeling, critical parameters for efficient algal harvesting were identified.

  14. Global patterns and climate drivers of water-use efficiency in terrestrial ecosystems deduced from satellite-based datasets and carbon cycle models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yan; Piao, Shilong; Huang, Mengtian

    Our aim is to investigate how ecosystem water-use efficiency (WUE) varies spatially under different climate conditions, and how spatial variations in WUE differ from those of transpiration-based water-use efficiency (WUEt) and transpiration-based inherent water-use efficiency (IWUEt). Location: Global terrestrial ecosystems. We investigated spatial patterns of WUE using two datasets of gross primary productivity (GPP) and evapotranspiration (ET) and four biosphere model estimates of GPP and ET. Spatial relationships between WUE and climate variables were further explored through regression analyses. Global WUE estimated by the two satellite-based datasets is 1.9 ± 0.1 and 1.8 ± 0.6 g C m-2 mm-1, lower than the simulations from the four process-based models (2.0 ± 0.3 g C m-2 mm-1) but comparable within the uncertainty of both approaches. In both the satellite-based datasets and the process models, precipitation is more strongly associated with spatial gradients of WUE in temperate and tropical regions, but temperature dominates north of 50 degrees N. WUE also increases with increasing solar radiation at high latitudes. The values of WUE from the datasets and the process-based models are systematically higher in wet regions (with higher GPP) than in dry regions. WUEt shows a lower precipitation sensitivity than WUE, which is contrary to leaf- and plant-level observations. IWUEt, the product of WUEt and the water vapour deficit, is found to be rather conservative with spatially increasing precipitation, in agreement with leaf- and plant-level measurements. In conclusion, WUE, WUEt and IWUEt produce different spatial relationships with climate variables. In dry ecosystems, water losses from evaporation from bare soil, uncorrelated with productivity, tend to make WUE lower than in wetter regions. Yet canopy conductance is intrinsically efficient in those ecosystems and maintains a higher IWUEt. This suggests that the responses of each component flux of evapotranspiration should be analysed separately when investigating regional gradients in WUE, its temporal variability and its trends.

  15. Global patterns and climate drivers of water-use efficiency in terrestrial ecosystems deduced from satellite-based datasets and carbon cycle models

    DOE PAGES

    Sun, Yan; Piao, Shilong; Huang, Mengtian; ...

    2015-12-23

    Our aim is to investigate how ecosystem water-use efficiency (WUE) varies spatially under different climate conditions, and how spatial variations in WUE differ from those of transpiration-based water-use efficiency (WUEt) and transpiration-based inherent water-use efficiency (IWUEt). Location: Global terrestrial ecosystems. We investigated spatial patterns of WUE using two datasets of gross primary productivity (GPP) and evapotranspiration (ET) and four biosphere model estimates of GPP and ET. Spatial relationships between WUE and climate variables were further explored through regression analyses. Global WUE estimated by the two satellite-based datasets is 1.9 ± 0.1 and 1.8 ± 0.6 g C m-2 mm-1, lower than the simulations from the four process-based models (2.0 ± 0.3 g C m-2 mm-1) but comparable within the uncertainty of both approaches. In both the satellite-based datasets and the process models, precipitation is more strongly associated with spatial gradients of WUE in temperate and tropical regions, but temperature dominates north of 50 degrees N. WUE also increases with increasing solar radiation at high latitudes. The values of WUE from the datasets and the process-based models are systematically higher in wet regions (with higher GPP) than in dry regions. WUEt shows a lower precipitation sensitivity than WUE, which is contrary to leaf- and plant-level observations. IWUEt, the product of WUEt and the water vapour deficit, is found to be rather conservative with spatially increasing precipitation, in agreement with leaf- and plant-level measurements. In conclusion, WUE, WUEt and IWUEt produce different spatial relationships with climate variables. In dry ecosystems, water losses from evaporation from bare soil, uncorrelated with productivity, tend to make WUE lower than in wetter regions. Yet canopy conductance is intrinsically efficient in those ecosystems and maintains a higher IWUEt. This suggests that the responses of each component flux of evapotranspiration should be analysed separately when investigating regional gradients in WUE, its temporal variability and its trends.

  16. a Quadtree Organization Construction and Scheduling Method for Urban 3d Model Based on Weight

    NASA Astrophysics Data System (ADS)

    Yao, C.; Peng, G.; Song, Y.; Duan, M.

    2017-09-01

    The increase in the precision and data volume of urban 3D models places higher demands on the real-time rendering of digital city models. Improving the organization, management and scheduling of 3D model data in a 3D digital city can improve rendering effect and efficiency. Taking the complexity of urban models into account, this paper proposes a weight-based quadtree construction and scheduled-rendering method for urban 3D models: the urban 3D model is divided into different rendering weights according to certain rules, and quadtree construction and scheduled rendering are performed according to these weights. An algorithm is also proposed for extracting bounding boxes from model drawing primitives to generate LOD models automatically. Using the proposed algorithm, we developed a 3D urban planning and management software package; practice has shown that the algorithm is efficient and feasible, with the render frame rate of both large and small scenes stable at around 25 frames per second.

  17. Bias and Efficiency in Structural Equation Modeling: Maximum Likelihood versus Robust Methods

    ERIC Educational Resources Information Center

    Zhong, Xiaoling; Yuan, Ke-Hai

    2011-01-01

    In the structural equation modeling literature, the normal-distribution-based maximum likelihood (ML) method is most widely used, partly because the resulting estimator is claimed to be asymptotically unbiased and most efficient. However, this may not hold when data deviate from normal distribution. Outlying cases or nonnormally distributed data,…

  18. Efficient view based 3-D object retrieval using Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Jain, Yogendra Kumar; Singh, Roshan Kumar

    2013-12-01

    Recent research effort has been dedicated to view-based 3-D object retrieval, because 3-D objects are highly discriminative and admit multi-view representations. State-of-the-art methods depend heavily on a particular camera-array setting for capturing views of the 3-D object and use complex Zernike descriptors and HAC for representative view selection, which limits their practical application and makes retrieval inefficient. Therefore, an efficient and effective algorithm is required for 3-D object retrieval. To move toward a general framework for efficient 3-D object retrieval that is independent of the camera-array setting and avoids representative view selection, we propose an Efficient View-Based 3-D Object Retrieval (EVBOR) method using a Hidden Markov Model (HMM). In this framework, each object is represented by an independent set of views, which means views can be captured from any direction without any camera-array restriction. The views (including query views) are clustered to generate view clusters, which are then used to build the query model with the HMM. In our proposed method, the HMM is used in two ways: in training (HMM estimation) and in retrieval (HMM decoding). The query model is trained using these view clusters, and retrieval works by combining the query model with the HMM. The proposed approach removes the static camera-array setting for view capturing and can be applied to any 3-D object database to retrieve 3-D objects efficiently and effectively. Experimental results demonstrate that the proposed scheme performs better than existing methods.
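    The train/score pattern can be sketched with the third-party hmmlearn package: fit one HMM per database object on its view-feature sequence (HMM estimation), then rank objects by the log-likelihood of the query views under each model (HMM decoding). The two-dimensional view features below are invented stand-ins for real view descriptors.

```python
import numpy as np
from hmmlearn import hmm   # assumes the third-party hmmlearn package

rng = np.random.default_rng(6)

# Stand-in view features: each 3-D object contributes a sequence of
# 2-D descriptors, one per captured view.
def views_of(center, n=30):
    return np.asarray(center) + 0.2 * rng.standard_normal((n, 2))

database = {"chair": views_of([0, 0]), "mug": views_of([2, 2]),
            "lamp": views_of([0, 3])}

# Training (HMM estimation): fit one HMM per database object.
models = {}
for name, X in database.items():
    m = hmm.GaussianHMM(n_components=2, covariance_type="diag",
                        n_iter=50, random_state=0)
    m.fit(X)
    models[name] = m

# Retrieval (HMM decoding): score the query view set under each model
# and rank objects by log-likelihood.
query = views_of([2, 2], n=10)          # views of an unknown "mug"
ranking = sorted(models, key=lambda n: -models[n].score(query))
print("retrieval ranking:", ranking)
```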

  19. Improving Distributed Diagnosis Through Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Bregon, Anibal; Daigle, Matthew John; Roychoudhury, Indranil; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino

    2011-01-01

    Complex engineering systems require efficient fault diagnosis methodologies, but centralized approaches do not scale well, and this motivates the development of distributed solutions. This work presents an event-based approach for distributed diagnosis of abrupt parametric faults in continuous systems, by using the structural model decomposition capabilities provided by Possible Conflicts. We develop a distributed diagnosis algorithm that uses residuals computed by extending Possible Conflicts to build local event-based diagnosers based on global diagnosability analysis. The proposed approach is applied to a multitank system, and results demonstrate an improvement in the design of local diagnosers. Since local diagnosers use only a subset of the residuals, and use subsystem models to compute residuals (instead of the global system model), the local diagnosers are more efficient than previously developed distributed approaches.

  20. The LUE data model for representation of agents and fields

    NASA Astrophysics Data System (ADS)

    de Jong, Kor; Schmitz, Oliver; Karssenberg, Derek

    2017-04-01

    Traditionally, agent-based and field-based modelling environments use different data models to represent the state of the information they manipulate. In agent-based modelling, involving the representation of phenomena as objects bounded in space and time, agents are often represented by classes, each of which represents a particular kind of agent and all its properties. Such classes can be used to represent entities like people, birds, cars and countries. In field-based modelling, involving the representation of the environment as continuous fields, fields are often represented by a discretization of space, using multidimensional arrays, each storing mostly a single attribute. Such arrays can be used to represent the elevation of the land-surface, the pH of the soil, or the population density in an area, for example. Representing a population of agents by class instances grouped in collections is an intuitive way of organizing information. A drawback, though, is that models that store class instances (grouping properties) in collections are less efficient (execute more slowly) than models that group collections of properties. The field representation, on the other hand, is convenient for the efficient execution of models. Another drawback is that, because the data models used are so different, integrating agent-based and field-based models becomes difficult, since the model builder has to deal with multiple concepts, and often multiple modelling environments. With the development of the LUE data model [1] we aim at representing agents and fields within a single paradigm, combining the advantages of the data models used in agent-based and field-based modelling. This removes the barrier to writing integrated agent-based and field-based models. The resulting data model is intuitive to use and allows for efficient execution of models. LUE is both a high-level conceptual data model and a low-level physical data model. The LUE conceptual data model is a generalization of the data models used in agent-based and field-based modelling. The LUE physical data model [2] is an implementation of the LUE conceptual data model in HDF5. In our presentation we will provide details of our approach to organizing information about agents and fields, and we will show examples of agent and field data represented by the conceptual and physical data model. References: [1] de Bakker, M.P., de Jong, K., Schmitz, O., Karssenberg, D., 2016. Design and demonstration of a data model to integrate agent-based and field-based modelling. Environmental Modelling and Software. http://dx.doi.org/10.1016/j.envsoft.2016.11.016 [2] de Jong, K., 2017. LUE source code. https://github.com/pcraster/lue

  1. Measurement and decomposition of energy efficiency of Northeast China-based on super efficiency DEA model and Malmquist index.

    PubMed

    Ma, Xiaojun; Liu, Yan; Wei, Xiaoxue; Li, Yifan; Zheng, Mengchen; Li, Yudong; Cheng, Chaochao; Wu, Yumei; Liu, Zhaonan; Yu, Yuanbo

    2017-08-01

    Environmental problems have become a pressing international issue, and experts and scholars pay increasing attention to energy efficiency. Unlike most studies, which analyze changes in total-factor energy efficiency (TFEE) across provinces or regional cities, here TFEE is calculated as the ratio of target energy input to actual energy input using prefecture-level city data, which is more accurate. Many studies treat total factor productivity (TFP) as TFEE in provincial-level analyses. This paper calculates TFEE more reliably using super-efficiency DEA, observes the changes in TFEE, analyzes its relation with TFP, and shows that TFP is not equal to TFEE. Additionally, the internal influences on TFEE are obtained via the Malmquist index decomposition, and the external influences on TFEE are then analyzed using Tobit models. The results demonstrate that Heilongjiang has the highest TFEE, followed by Jilin, while Liaoning has the lowest. Finally, some policy suggestions are proposed based on the identified influences on energy efficiency.

  2. Efficiency of endoscopy units can be improved with use of discrete event simulation modeling.

    PubMed

    Sauer, Bryan G; Singh, Kanwar P; Wagner, Barry L; Vanden Hoek, Matthew S; Twilley, Katherine; Cohn, Steven M; Shami, Vanessa M; Wang, Andrew Y

    2016-11-01

    Background and study aims: The projected increased demand for health services obligates healthcare organizations to operate efficiently. Discrete event simulation (DES) is a modeling method that allows for optimization of systems through virtual testing of different configurations before implementation. The objective of this study was to identify strategies to improve the daily efficiencies of an endoscopy center with the use of DES. Methods: We built a DES model of a five procedure room endoscopy unit at a tertiary-care university medical center. After validating the baseline model, we tested alternate configurations to run the endoscopy suite and evaluated outcomes associated with each change. The main outcome measures included adequate number of preparation and recovery rooms, blocked inflow, delay times, blocked outflows, and patient cycle time. Results: Based on a sensitivity analysis, the adequate number of preparation rooms is eight and recovery rooms is nine for a five procedure room unit (total 3.4 preparation and recovery rooms per procedure room). Simple changes to procedure scheduling and patient arrival times led to a modest improvement in efficiency. Increasing the preparation/recovery rooms based on the sensitivity analysis led to significant improvements in efficiency. Conclusions: By applying tools such as DES, we can model changes in an environment with complex interactions and find ways to improve the medical care we provide. DES is applicable to any endoscopy unit and would be particularly valuable to those who are trying to improve on the efficiency of care and patient experience.
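    A miniature version of such a DES model can be written with the SimPy package, as sketched below; the resource capacities follow the configuration reported adequate above (8 preparation, 5 procedure, 9 recovery rooms), while the arrival and service-time distributions are hypothetical.

```python
import random
import simpy

# Minimal DES sketch of an endoscopy unit: patients flow through
# preparation, procedure and recovery resources.
random.seed(0)

def patient(env, name, prep, proc, recovery):
    with prep.request() as req:
        yield req
        yield env.timeout(random.triangular(20, 40, 30))   # prep, minutes
    with proc.request() as req:
        yield req
        yield env.timeout(random.triangular(15, 60, 30))   # procedure
    with recovery.request() as req:
        yield req
        yield env.timeout(random.triangular(30, 70, 45))   # recovery
    done.append(env.now)

def arrivals(env, prep, proc, recovery):
    for i in range(60):                                    # one day's schedule
        env.process(patient(env, i, prep, proc, recovery))
        yield env.timeout(random.expovariate(1 / 8))       # ~every 8 min

env = simpy.Environment()
prep = simpy.Resource(env, capacity=8)
proc = simpy.Resource(env, capacity=5)
recovery = simpy.Resource(env, capacity=9)
done = []
env.process(arrivals(env, prep, proc, recovery))
env.run(until=12 * 60)                                     # a 12-hour day
print(f"{len(done)} patients completed; "
      f"last finished at t = {max(done):.0f} min")
```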

  3. Efficiency of endoscopy units can be improved with use of discrete event simulation modeling

    PubMed Central

    Sauer, Bryan G.; Singh, Kanwar P.; Wagner, Barry L.; Vanden Hoek, Matthew S.; Twilley, Katherine; Cohn, Steven M.; Shami, Vanessa M.; Wang, Andrew Y.

    2016-01-01

    Background and study aims: The projected increased demand for health services obligates healthcare organizations to operate efficiently. Discrete event simulation (DES) is a modeling method that allows for optimization of systems through virtual testing of different configurations before implementation. The objective of this study was to identify strategies to improve the daily efficiencies of an endoscopy center with the use of DES. Methods: We built a DES model of a five procedure room endoscopy unit at a tertiary-care university medical center. After validating the baseline model, we tested alternate configurations to run the endoscopy suite and evaluated outcomes associated with each change. The main outcome measures included adequate number of preparation and recovery rooms, blocked inflow, delay times, blocked outflows, and patient cycle time. Results: Based on a sensitivity analysis, the adequate number of preparation rooms is eight and recovery rooms is nine for a five procedure room unit (total 3.4 preparation and recovery rooms per procedure room). Simple changes to procedure scheduling and patient arrival times led to a modest improvement in efficiency. Increasing the preparation/recovery rooms based on the sensitivity analysis led to significant improvements in efficiency. Conclusions: By applying tools such as DES, we can model changes in an environment with complex interactions and find ways to improve the medical care we provide. DES is applicable to any endoscopy unit and would be particularly valuable to those who are trying to improve on the efficiency of care and patient experience. PMID:27853739

  4. An efficient surrogate-based simulation-optimization method for calibrating a regional MODFLOW model

    NASA Astrophysics Data System (ADS)

    Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.

    2017-01-01

    The simulation-optimization method entails a large number of model simulations, which is computationally intensive or even prohibitive if each model simulation is extremely time-consuming. Statistical models have been examined as surrogates for the high-fidelity physical model during the simulation-optimization process to tackle this problem. Among them, Multivariate Adaptive Regression Splines (MARS), a non-parametric adaptive regression method, is superior in overcoming problems of high dimensionality and discontinuities in the data. Furthermore, the stability and accuracy of the MARS model can be improved by bootstrap aggregating (bagging). In this paper, the Bagging MARS (BMARS) method is integrated into a surrogate-based simulation-optimization framework to calibrate a three-dimensional MODFLOW model, which was developed to simulate groundwater flow in an arid hardrock-alluvium region in northwestern Oman. The physical MODFLOW model is surrogated by a statistical model developed using the BMARS algorithm. The surrogate model, which is fitted and validated using a training dataset generated by the physical model, can approximate solutions rapidly. An efficient Sobol' method is employed to calculate the global sensitivities of head outputs to input parameters, which are used to analyze their spatiotemporal importance for the model outputs. Only sensitive parameters are included in the calibration process to further improve computational efficiency. The normalized root mean square error (NRMSE) between measured and simulated heads at observation wells is used as the objective function to be minimized during optimization. The reasonable history match between the simulated and observed heads demonstrates the feasibility of this highly efficient calibration framework.
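    The surrogate loop can be sketched as follows. Since no maintained MARS implementation is assumed available, bagged regression trees stand in for BMARS, and the "physical model" is a cheap analytic stand-in for a MODFLOW run returning an NRMSE value.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(5)

# Hypothetical stand-in for an expensive MODFLOW run: maps two
# normalized parameters (say, recharge and conductivity) to an NRMSE
# against observed heads.
def physical_model_nrmse(p):
    return (p[0] - 0.3) ** 2 + 2 * (p[1] - 0.7) ** 2 + 0.01 * np.cos(9 * p[0])

# Training set from a modest number of "expensive" model runs.
P = rng.uniform(0, 1, size=(150, 2))
z = np.array([physical_model_nrmse(p) for p in P])

# Bagged ensemble as the surrogate (standing in for bagging-MARS).
surrogate = BaggingRegressor(DecisionTreeRegressor(), n_estimators=100,
                             random_state=0).fit(P, z)

# Optimize on the cheap surrogate instead of the physical model.
grid = np.stack(np.meshgrid(np.linspace(0, 1, 101),
                            np.linspace(0, 1, 101)), axis=-1).reshape(-1, 2)
best = grid[np.argmin(surrogate.predict(grid))]
print("surrogate-calibrated parameters:", best)
```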

  5. Tool Efficiency Analysis model research in SEMI industry

    NASA Astrophysics Data System (ADS)

    Lei, Ma; Nana, Zhang; Zhongqiu, Zhang

    2018-06-01

    One of the key goals in the semiconductor (SEMI) industry is to improve equipment throughput and maximize equipment production efficiency. Based on SEMI standards for semiconductor equipment control, this paper defines the transition rules between different tool states and presents a Tool Efficiency Analysis (TEA) system model that analyzes tool performance automatically using a finite state machine. The system was applied to fab tools and its effectiveness verified; it yields the parameter values used to measure equipment performance, together with suggestions for improvement.
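    The finite-state-machine core of such a TEA model can be sketched in a few lines; the state names and transitions below are a simplified, hypothetical subset inspired by SEMI E10-style equipment states, not the paper's full rule set.

```python
# Hypothetical tool-state FSM: (state, event) -> next state.
TRANSITIONS = {
    ("idle", "start_lot"): "productive",
    ("productive", "lot_done"): "idle",
    ("productive", "fault"): "unscheduled_down",
    ("unscheduled_down", "repair_done"): "idle",
    ("idle", "start_pm"): "scheduled_down",
    ("scheduled_down", "pm_done"): "idle",
}

def run(events):
    """Replay a time-ordered event log, returning time spent per state."""
    state, t_prev = "idle", 0.0
    usage = {}
    for t, ev in events:
        usage[state] = usage.get(state, 0.0) + (t - t_prev)
        state = TRANSITIONS.get((state, ev), state)  # ignore illegal events
        t_prev = t
    return usage

log = [(1.0, "start_lot"), (5.0, "fault"), (6.5, "repair_done"),
       (7.0, "start_lot"), (12.0, "lot_done")]
usage = run(log)
total = sum(usage.values())
# Fraction of tracked time per state, a simple performance measure.
print({s: round(d / total, 3) for s, d in usage.items()})
```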

  6. Optimization of single photon detection model based on GM-APD

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Yang, Yi; Hao, Peiyu

    2017-11-01

    High-precision laser ranging over one hundred kilometers requires a detector with a very strong ability to detect extremely weak light. At present, the Geiger-mode avalanche photodiode (GM-APD) is the most widely used option: it has high sensitivity and high photoelectric conversion efficiency. Selecting and designing the detector parameters according to the system requirements is of great importance for improving photon detection efficiency, and design optimization requires a good model. In this paper, we study the existing Poisson distribution model and take into account important detector parameters such as the dark count rate, dead time, and quantum efficiency. We improve the detection model and select appropriate parameters to achieve optimal photon detection efficiency. The simulation is carried out using Matlab and compared with actual test results, verifying the rationality of the model. The model has reference value in engineering applications.
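    The Poisson core of such a model can be written in a few lines: the APD fires if at least one primary event (signal photoelectron or dark count) occurs within the gate. The sketch below ignores dead time and afterpulsing, and its parameter values are illustrative only.

```python
import math

def detection_probability(mean_signal_photons, quantum_efficiency,
                          dark_count_rate_hz, gate_s):
    # Mean number of primary events per gate: detected signal + dark counts.
    mu = quantum_efficiency * mean_signal_photons + dark_count_rate_hz * gate_s
    # Poisson statistics: P(at least one event) triggers the avalanche.
    return 1.0 - math.exp(-mu)

# e.g. 0.5 returned photons per gate, 30% QE, 1 kHz dark counts, 100 ns gate
print(detection_probability(0.5, 0.30, 1e3, 100e-9))
```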

  7. Modeling light use efficiency in a subtropical mangrove forest equipped with CO2 eddy covariance

    USGS Publications Warehouse

    Barr, J.G.; Engel, V.; Fuentes, J.D.; Fuller, D.O.; Kwon, H.

    2013-01-01

    Despite the importance of mangrove ecosystems in the global carbon budget, the relationships between environmental drivers and carbon dynamics in these forests remain poorly understood. This limited understanding is partly a result of the challenges associated with in situ flux studies. Tower-based CO2 eddy covariance (EC) systems are installed in only a few mangrove forests worldwide, and the longest EC record from the Florida Everglades contains less than 9 years of observations. A primary goal of the present study was to develop a methodology to estimate canopy-scale photosynthetic light use efficiency in this forest. These tower-based observations represent a basis for associating CO2 fluxes with canopy light use properties, and thus provide the means for utilizing satellite-based reflectance data for larger scale investigations. We present a model for mangrove canopy light use efficiency utilizing the enhanced vegetation index (EVI) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) that is capable of predicting changes in mangrove forest CO2 fluxes caused by a hurricane disturbance and changes in regional environmental conditions, including temperature and salinity. Model parameters are solved for in a Bayesian framework. The model structure requires estimates of ecosystem respiration (RE), and we present the first ever tower-based estimates of mangrove forest RE derived from nighttime CO2 fluxes. Our investigation is also the first to show the effects of salinity on mangrove forest CO2 uptake, which declines by 5% for each 10 parts per thousand (ppt) increase in salinity. Light use efficiency in this forest declines with increasing daily photosynthetically active radiation, which is an important departure from the assumption of constant light use efficiency typically applied in satellite-driven models. The model developed here provides a framework for estimating CO2 uptake by these forests from reflectance data and information about environmental conditions.
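    A heavily simplified sketch of this kind of light-use-efficiency calculation is shown below; the functional forms and every coefficient except the 5%-per-10-ppt salinity decline are invented for illustration and are not the fitted model.

```python
import numpy as np

def gpp(evi, par, salinity_ppt, eps_max=1.5):
    """Toy GPP (gC m-2 d-1): eps * fAPAR * PAR with PAR and salinity effects."""
    fapar = float(np.clip(1.25 * evi - 0.1, 0.0, 1.0))  # hypothetical EVI-to-fAPAR map
    eps = eps_max / (1.0 + 0.02 * par)                  # efficiency declines with daily PAR
    salinity_factor = max(1.0 - 0.05 * (salinity_ppt / 10.0), 0.0)  # 5% loss per 10 ppt
    return eps * fapar * par * salinity_factor

print(gpp(evi=0.45, par=35.0, salinity_ppt=30.0))       # order-of-magnitude only
```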

  8. Algorithmic design of a noise-resistant and efficient closed-loop deep brain stimulation system: A computational approach

    PubMed Central

    Karamintziou, Sofia D.; Custódio, Ana Luísa; Piallat, Brigitte; Polosan, Mircea; Chabardès, Stéphan; Stathis, Pantelis G.; Tagaris, George A.; Sakas, Damianos E.; Polychronaki, Georgia E.; Tsirogiannis, George L.; David, Olivier; Nikita, Konstantina S.

    2017-01-01

    Advances in the field of closed-loop neuromodulation call for analysis and modeling approaches capable of confronting challenges related to the complex neuronal response to stimulation and the presence of strong internal and measurement noise in neural recordings. Here we elaborate on the algorithmic aspects of a noise-resistant closed-loop subthalamic nucleus deep brain stimulation system for advanced Parkinson's disease and treatment-refractory obsessive-compulsive disorder, ensuring remarkable performance in terms of both efficiency and selectivity of stimulation, as well as in terms of computational speed. First, we propose an efficient method drawn from dynamical systems theory for the reliable assessment of significant nonlinear coupling between beta and high-frequency subthalamic neuronal activity, as a biomarker for feedback control. Further, we present a model-based strategy through which optimal parameters of stimulation for minimum energy desynchronizing control of neuronal activity are identified. The strategy integrates stochastic modeling and derivative-free optimization of neural dynamics based on quadratic modeling. On the basis of numerical simulations, we demonstrate the potential of the presented modeling approach to identify, at a relatively low computational cost, stimulation settings potentially associated with a significantly higher degree of efficiency and selectivity compared with stimulation settings determined post-operatively. Our data reinforce the hypothesis that model-based control strategies are crucial for the design of novel stimulation protocols at the backstage of clinical applications. PMID:28222198

  9. Identifying Nonprovider Factors Affecting Pediatric Emergency Medicine Provider Efficiency.

    PubMed

    Saleh, Fareed; Breslin, Kristen; Mullan, Paul C; Tillett, Zachary; Chamberlain, James M

    2017-10-31

    The aim of this study was to create a multivariable model of standardized relative value units per hour by adjusting for nonprovider factors that influence efficiency. We obtained productivity data based on billing records measured in emergency relative value units for (1) both evaluation and management of visits and (2) procedures for 16 pediatric emergency medicine providers with more than 750 hours worked per year. Eligible shifts were in an urban, academic pediatric emergency department (ED) with 2 sites: a tertiary care main campus and a satellite community site. We used multivariable linear regression to adjust for the impact of shift and pediatric ED characteristics on individual-provider efficiency and then removed variables from the model with minimal effect on productivity. There were 2998 eligible shifts for the 16 providers during a 3-year period. The resulting model included 4 variables when looking at both ED sites combined. These variables include the following: (1) number of procedures billed by provider, (2) season of the year, (3) shift start time, and (4) day of week. Results were improved when we separately modeled each ED location. A 3-variable model using procedures billed by provider, shift start time, and season explained 23% of the variation in provider efficiency at the academic ED site. A 3-variable model using procedures billed by provider, patient arrivals per hour, and shift start time explained 45% of the variation in provider efficiency at the satellite ED site. Several nonprovider factors affect provider efficiency. These factors should be considered when designing productivity-based incentives.
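    A minimal version of the adjustment idea is an ordinary least-squares regression of per-shift productivity on nonprovider covariates, sketched below assuming the statsmodels package; the simulated columns merely stand in for the billing-derived variables described.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
shifts = pd.DataFrame({
    "rvu_per_hr": rng.normal(8, 2, 300),                       # productivity per shift
    "procedures": rng.poisson(3, 300),                         # procedures billed
    "start_hour": rng.choice([7, 15, 23], 300),                # shift start time
    "season": rng.choice(["winter", "spring", "summer", "fall"], 300),
})
# Covariate-adjusted model: the R-squared indicates how much efficiency
# variation nonprovider factors explain (the study reports 23% and 45%).
model = smf.ols("rvu_per_hr ~ procedures + C(start_hour) + C(season)", shifts).fit()
print(round(model.rsquared, 3))
```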

  10. Efficient physics-based tracking of heart surface motion for beating heart surgery robotic systems.

    PubMed

    Bogatyrenko, Evgeniya; Pompey, Pascal; Hanebeck, Uwe D

    2011-05-01

    Tracking of beating heart motion in a robotic surgery system is required for complex cardiovascular interventions. A heart surface motion tracking method is developed, including a stochastic physics-based heart surface model and an efficient reconstruction algorithm. The algorithm uses the constraints provided by the model that exploits the physical characteristics of the heart. The main advantage of the model is that it is more realistic than most standard heart models. Additionally, no explicit matching between the measurements and the model is required. The application of meshless methods significantly reduces the complexity of physics-based tracking. Based on the stochastic physical model of the heart surface, this approach considers the motion of the intervention area and is robust to occlusions and reflections. The tracking algorithm is evaluated in simulations and experiments on an artificial heart. Providing higher accuracy than the standard model-based methods, it successfully copes with occlusions and provides high performance even when not all measurements are available. Combining the physical and stochastic description of the heart surface motion ensures physically correct and accurate prediction. Automatic initialization of the physics-based cardiac motion tracking enables system evaluation in a clinical environment.

  11. ADAM: analysis of discrete models of biological systems using computer algebra.

    PubMed

    Hinkelmann, Franziska; Brandon, Madison; Guang, Bonny; McNeill, Rustin; Blekherman, Grigoriy; Veliz-Cuba, Alan; Laubenbacher, Reinhard

    2011-07-20

    Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics.
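    For intuition, the brute-force version of attractor identification for a tiny Boolean network is shown below; the three update rules are invented, and the exponential blow-up of this enumeration for larger networks is exactly what ADAM's algebraic approach avoids.

```python
from itertools import product

def step(state):
    a, b, c = state
    return (b and c, not a, a or b)   # hypothetical update rules for 3 nodes

def attractors(n=3):
    """Enumerate all states, follow trajectories, and collect the cycles."""
    found = set()
    for start in product([False, True], repeat=n):
        seen, s = {}, start
        while s not in seen:
            seen[s] = len(seen)
            s = step(s)
        cycle_start = seen[s]          # first state that repeats
        cycle = tuple(sorted(k for k, v in seen.items() if v >= cycle_start))
        found.add(cycle)
    return found

for att in attractors():
    print([tuple(int(x) for x in s) for s in att])
```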

  12. A new framework for comprehensive, robust, and efficient global sensitivity analysis: 2. Application

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin V.

    2016-01-01

    Based on the theoretical framework for sensitivity analysis called "Variogram Analysis of Response Surfaces" (VARS), developed in the companion paper, we develop and implement a practical "star-based" sampling strategy (called STAR-VARS) for the application of VARS to real-world problems. We also develop a bootstrap approach to provide confidence level estimates for the VARS sensitivity metrics and to evaluate the reliability of inferred factor rankings. The effectiveness, efficiency, and robustness of STAR-VARS are demonstrated via two real-data hydrological case studies (a 5-parameter conceptual rainfall-runoff model and a 45-parameter land surface scheme hydrology model), and a comparison with the "derivative-based" Morris and "variance-based" Sobol approaches is provided. Our results show that STAR-VARS provides reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being 1-2 orders of magnitude more efficient than the Morris or Sobol approaches.

  13. Energy-saving management modelling and optimization for lead-acid battery formation process

    NASA Astrophysics Data System (ADS)

    Wang, T.; Chen, Z.; Xu, J. Y.; Wang, F. Y.; Liu, H. M.

    2017-11-01

    In this paper, a typical lead-acid battery production process is introduced. Based on the formation process, an efficiency management method is proposed. An optimization model is established with the objective of minimizing the formation electricity cost in a single period. This optimization model considers several related constraints, together with two influencing factors: the transformation efficiency of the IGBT charge-and-discharge machine and the time-of-use price. An example simulation using the PSO algorithm to solve this mathematical model is shown, and the proposed optimization strategy proves effective and instructive for energy saving and efficiency optimization in battery production industries.
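    A generic particle swarm optimization loop of the kind described might look like the sketch below; the time-of-use tariff, the cost model with its penalty term, and all PSO hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
TOU = np.array([0.3, 0.3, 0.3, 0.3, 0.6, 0.6, 1.0, 1.0])  # price per period (toy tariff)

def cost(x):
    """Electricity cost of a charge schedule x; penalty enforces total energy = 40."""
    return float(TOU @ x + 100.0 * (x.sum() - 40.0) ** 2)

n, dim, iters = 30, 8, 200
pos = rng.uniform(0, 10, (n, dim))            # charge energy per period, capped at 10
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), np.array([cost(p) for p in pos])
g = pbest[pbest_val.argmin()].copy()          # global best schedule

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (g - pos)
    pos = np.clip(pos + vel, 0, 10)
    vals = np.array([cost(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    g = pbest[pbest_val.argmin()].copy()

print("best schedule:", g.round(2), "cost:", round(cost(g), 2))
```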

  14. Hybrid surrogate-model-based multi-fidelity efficient global optimization applied to helicopter blade design

    NASA Astrophysics Data System (ADS)

    Ariyarit, Atthaphon; Sugiura, Masahiko; Tanabe, Yasutada; Kanazaki, Masahiro

    2018-06-01

    A multi-fidelity optimization technique based on an efficient global optimization process using a hybrid surrogate model is investigated for solving real-world design problems. The model constructs the local deviation using the kriging method and the global model using a radial basis function. The expected improvement is computed to decide on additional samples that can improve the model. The approach was first investigated by solving mathematical test problems. The results were compared with optimization results from an ordinary kriging method and a co-kriging method, and the proposed method produced the best solution. The proposed method was also applied to aerodynamic design optimization of helicopter blades to obtain the maximum blade efficiency. The optimal shape obtained by the proposed method achieved performance almost equivalent to that obtained using single-fidelity optimization based on high-fidelity evaluations. Comparing all three methods, the proposed method required the lowest total number of high-fidelity evaluation runs to obtain a converged solution.
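    The expected-improvement criterion at the heart of such efficient global optimization is a standard textbook formula, shown here in isolation from any particular surrogate; mu and sigma would come from the hybrid model's prediction at a candidate point.

```python
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """EI for minimization, given the surrogate mean/std at a candidate point."""
    if sigma <= 0.0:
        return 0.0
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# candidate predicted slightly better than the incumbent, with some uncertainty
print(expected_improvement(mu=0.8, sigma=0.3, f_best=1.0))
```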

  15. Uncertainty quantification-based robust aerodynamic optimization of laminar flow nacelle

    NASA Astrophysics Data System (ADS)

    Xiong, Neng; Tao, Yang; Liu, Zhiyong; Lin, Jun

    2018-05-01

    The aerodynamic performance of a laminar flow nacelle is highly sensitive to uncertain working conditions, especially surface roughness. An efficient robust aerodynamic optimization method based on non-deterministic computational fluid dynamics (CFD) simulation and the Efficient Global Optimization (EGO) algorithm was employed. A non-intrusive polynomial chaos method is used in conjunction with an existing well-verified CFD module to quantify the uncertainty propagation in the flow field. This paper investigates roughness modeling behavior with the γ-Ret shear stress transport model, which includes flow transition and surface roughness effects; the roughness effects are modeled to simulate sand-grain roughness. A Class-Shape Transformation-based parametric description of the nacelle contour as part of an automatic design evaluation process is presented. A Design of Experiments (DoE) was performed and a Kriging surrogate model was built. The new nacelle design process demonstrates that significant improvements in both the mean and the variance of the efficiency are achieved, and the proposed method can be applied successfully to laminar flow nacelle design.

  16. LAMMPS integrated materials engine (LIME) for efficient automation of particle-based simulations: application to equation of state generation

    NASA Astrophysics Data System (ADS)

    Barnes, Brian C.; Leiter, Kenneth W.; Becker, Richard; Knap, Jaroslaw; Brennan, John K.

    2017-07-01

    We describe the development, accuracy, and efficiency of an automation package for molecular simulation, the large-scale atomic/molecular massively parallel simulator (LAMMPS) integrated materials engine (LIME). Heuristics and algorithms employed for equation of state (EOS) calculation using a particle-based model of a molecular crystal, hexahydro-1,3,5-trinitro-s-triazine (RDX), are described in detail. The simulation method for the particle-based model is energy-conserving dissipative particle dynamics, but the techniques used in LIME are generally applicable to molecular dynamics simulations with a variety of particle-based models. The newly created tool set is tested through use of its EOS data in plate impact and Taylor anvil impact continuum simulations of solid RDX. The coarse-grain model results from LIME provide an approach to bridge the scales from atomistic simulations to continuum simulations.

  17. An efficient and scalable deformable model for virtual reality-based medical applications.

    PubMed

    Choi, Kup-Sze; Sun, Hanqiu; Heng, Pheng-Ann

    2004-09-01

    Modeling of tissue deformation is of great importance to virtual reality (VR)-based medical simulations. Considerable effort has been dedicated to the development of interactively deformable virtual tissues. In this paper, an efficient and scalable deformable model is presented for virtual-reality-based medical applications. It considers deformation as a localized force transmittal process which is governed by algorithms based on breadth-first search (BFS). The computational speed is scalable to facilitate real-time interaction by adjusting the penetration depth. Simulated annealing (SA) algorithms are developed to optimize the model parameters by using the reference data generated with the linear static finite element method (FEM). The mechanical behavior and timing performance of the model have been evaluated. The model has been applied to simulate the typical behavior of living tissues and anisotropic materials. Integration with a haptic device has also been achieved on a generic personal computer (PC) platform. The proposed technique provides a feasible solution for VR-based medical simulations and has the potential for multi-user collaborative work in virtual environment.
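    A depth-limited breadth-first force propagation of the kind described can be sketched in a few lines; the mesh, attenuation factor, and penetration depth below are illustrative assumptions, not the paper's model.

```python
from collections import deque

def propagate(neighbors, source, force, depth_limit, attenuation=0.5):
    """Spread an applied force outward by BFS, attenuating at each ring and
    stopping at depth_limit (the speed/accuracy knob described)."""
    disp = {source: force}
    queue, seen = deque([(source, 0)]), {source}
    while queue:
        node, d = queue.popleft()
        if d == depth_limit:
            continue
        for nb in neighbors[node]:
            if nb not in seen:
                seen.add(nb)
                disp[nb] = disp[node] * attenuation
                queue.append((nb, d + 1))
    return disp

mesh = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}  # toy surface mesh
print(propagate(mesh, source=0, force=1.0, depth_limit=2))
```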

  18. An empirical investigation of the efficiency effects of integrated care models in Switzerland

    PubMed Central

    Reich, Oliver; Rapold, Roland; Flatscher-Thöni, Magdalena

    2012-01-01

    Introduction This study investigates the efficiency gains of integrated care models in Switzerland, since these models are regarded as cost containment options in national social health insurance. These plans generate much lower average health care expenditure than the basic insurance plan. The question is, however, to what extent these total savings are due to the effects of selection and efficiency. Methods The empirical analysis is based on data from 399,274 Swiss residents that constantly had compulsory health insurance with the Helsana Group, the largest health insurer in Switzerland, covering the years 2006–2009. In order to evaluate the efficiency of the different integrated care models, we apply an econometric approach with a mixed-effects model. Results Our estimations indicate that the efficiency effects of integrated care models on health care expenditure are significant. However, the different insurance plans vary, revealing the following efficiency gains per model: contracted capitated model 21.2%, contracted non-capitated model 15.5% and telemedicine model 3.7%. The remaining 8.5%, 5.6% and 22.5%, respectively, of the variation in total health care expenditure can be attributed to the effects of selection. Conclusions Integrated care models have the potential to improve care for patients with chronic diseases and concurrently have a positive impact on health care expenditure. We suggest policy-makers improve the incentives for patients with chronic diseases within the existing regulations providing further potential for cost-efficiency of medical care. PMID:22371691

  19. Efficient parameter estimation in longitudinal data analysis using a hybrid GEE method.

    PubMed

    Leung, Denis H Y; Wang, You-Gan; Zhu, Min

    2009-07-01

    The method of generalized estimating equations (GEEs) provides consistent estimates of the regression parameters in a marginal regression model for longitudinal data, even when the working correlation model is misspecified (Liang and Zeger, 1986). However, the efficiency of a GEE estimate can be seriously affected by the choice of the working correlation model. This study addresses this problem by proposing a hybrid method that combines multiple GEEs based on different working correlation models, using the empirical likelihood method (Qin and Lawless, 1994). Analyses show that this hybrid method is more efficient than a GEE using a misspecified working correlation model. Furthermore, if one of the working correlation structures correctly models the within-subject correlations, then this hybrid method provides the most efficient parameter estimates. In simulations, the hybrid method's finite-sample performance is superior to a GEE under any of the commonly used working correlation models and is almost fully efficient in all scenarios studied. The hybrid method is illustrated using data from a longitudinal study of the respiratory infection rates in 275 Indonesian children.
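    To see the problem the hybrid method addresses, one can fit the same marginal model under different working correlation structures, assuming the statsmodels package, and watch the estimates move; the data and variable names below are simulated placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_subj, n_visits = 100, 4
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_visits),
    "time": np.tile(np.arange(n_visits), n_subj),
})
df["y"] = 1.0 - 0.2 * df["time"] + rng.normal(size=len(df))  # true slope -0.2

X = sm.add_constant(df[["time"]])
for cov in (sm.cov_struct.Independence(), sm.cov_struct.Exchangeable()):
    fit = sm.GEE(df["y"], X, groups=df["subject"], cov_struct=cov).fit()
    print(type(cov).__name__, fit.params.values.round(3))
# other working structures slot in the same way; the hybrid method of the
# paper combines such fits via empirical likelihood
```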

  20. Efficiency of a clinical prediction model for selective rapid testing in children with pharyngitis: A prospective, multicenter study

    PubMed Central

    Cohen, Robert; Bidet, Philippe; Elbez, Annie; Levy, Corinne; Bossuyt, Patrick M.; Chalumeau, Martin

    2017-01-01

    Background There is controversy whether physicians can rely on signs and symptoms to select children with pharyngitis who should undergo a rapid antigen detection test (RADT) for group A streptococcus (GAS). Our objective was to evaluate the efficiency of signs and symptoms in selectively testing children with pharyngitis. Materials and methods In this multicenter, prospective, cross-sectional study, French primary care physicians collected clinical data and double throat swabs from 676 consecutive children with pharyngitis; the first swab was used for the RADT and the second was used for a throat culture (reference standard). We developed a logistic regression model combining signs and symptoms with GAS as the outcome. We then derived a model-based selective testing strategy, assuming that children with low and high calculated probability of GAS (<0.12 and >0.85) would be managed without the RADT. Main outcomes and measures were performance of the model (c-index and calibration) and efficiency of the model-based strategy (proportion of participants in whom RADT could be avoided). Results Throat culture was positive for GAS in 280 participants (41.4%). Out of 17 candidate signs and symptoms, eight were retained in the prediction model. The model had an optimism-corrected c-index of 0.73; calibration of the model was good. With the model-based strategy, RADT could be avoided in 6.6% of participants (95% confidence interval 4.7% to 8.5%), as compared to a RADT-for-all strategy. Conclusions This study demonstrated that relying on signs and symptoms for selectively testing children with pharyngitis is not efficient. We recommend using a RADT in all children with pharyngitis. PMID:28235012
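    The selective-testing rule reduces to thresholding predicted probabilities, as in the sketch below; the simulated data are placeholders, and the in-sample estimate is optimistic compared with the study's validated figure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(676, 8))               # 8 retained signs/symptoms (simulated)
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ rng.normal(size=8)))))  # GAS outcome

p = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
# manage without RADT only when the predicted probability is decisive
avoided = (p < 0.12) | (p > 0.85)
print(f"RADT avoided in {100 * avoided.mean():.1f}% of participants")
```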

  1. Efficiency of a clinical prediction model for selective rapid testing in children with pharyngitis: A prospective, multicenter study.

    PubMed

    Cohen, Jérémie F; Cohen, Robert; Bidet, Philippe; Elbez, Annie; Levy, Corinne; Bossuyt, Patrick M; Chalumeau, Martin

    2017-01-01

    There is controversy whether physicians can rely on signs and symptoms to select children with pharyngitis who should undergo a rapid antigen detection test (RADT) for group A streptococcus (GAS). Our objective was to evaluate the efficiency of signs and symptoms in selectively testing children with pharyngitis. In this multicenter, prospective, cross-sectional study, French primary care physicians collected clinical data and double throat swabs from 676 consecutive children with pharyngitis; the first swab was used for the RADT and the second was used for a throat culture (reference standard). We developed a logistic regression model combining signs and symptoms with GAS as the outcome. We then derived a model-based selective testing strategy, assuming that children with low and high calculated probability of GAS (<0.12 and >0.85) would be managed without the RADT. Main outcomes and measures were performance of the model (c-index and calibration) and efficiency of the model-based strategy (proportion of participants in whom RADT could be avoided). Throat culture was positive for GAS in 280 participants (41.4%). Out of 17 candidate signs and symptoms, eight were retained in the prediction model. The model had an optimism-corrected c-index of 0.73; calibration of the model was good. With the model-based strategy, RADT could be avoided in 6.6% of participants (95% confidence interval 4.7% to 8.5%), as compared to a RADT-for-all strategy. This study demonstrated that relying on signs and symptoms for selectively testing children with pharyngitis is not efficient. We recommend using a RADT in all children with pharyngitis.

  2. Performance of conversion efficiency of a crystalline silicon solar cell with base doping density

    NASA Astrophysics Data System (ADS)

    Sahin, Gokhan; Kerimli, Genber; Barro, Fabe Idrissa; Sane, Moustapha; Alma, Mehmet Hakkı

    In this study, we theoretically investigate the electrical parameters of a crystalline silicon solar cell in steady state. Based on a one-dimensional model of the cell, the short-circuit current density, the open-circuit voltage, the shunt and series resistances, and the conversion efficiency are calculated, taking into account the base doping density. The I-V characteristic, series resistance, shunt resistance, and conversion efficiency are each determined and studied versus base doping density. The aim of this work is to show how the short-circuit current density, open-circuit voltage, and parasitic resistances are related to the base doping density, and to exhibit the role played by those parasitic resistances in the conversion efficiency of the crystalline silicon solar cell.
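    As a point of reference, the sketch below extracts the short-circuit current, open-circuit voltage, and efficiency from an ideal single-diode I-V characteristic; the parameter values are generic silicon-like placeholders, and the parasitic resistances studied in the paper are omitted.

```python
import numpy as np

q_kT = 1.0 / 0.02585                      # q/kT at 300 K (1/V)
J_L, J_0 = 0.035, 1e-12                   # photocurrent, saturation current (A/cm^2)
P_in = 0.100                              # incident power (W/cm^2), about 1 sun

V = np.linspace(0.0, 0.8, 2000)
J = J_L - J_0 * (np.exp(q_kT * V) - 1.0)  # ideal diode law, no Rs or Rsh

J_sc = J_L                                # short-circuit current density
V_oc = np.log(J_L / J_0 + 1.0) / q_kT     # open-circuit voltage
eta = (V * J).max() / P_in                # efficiency at the maximum power point
print(f"Jsc={J_sc*1e3:.1f} mA/cm2  Voc={V_oc:.3f} V  efficiency={100*eta:.1f}%")
```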

  3. Performance of rural health clinics: an examination of efficiency and Medicare beneficiary outcomes.

    PubMed

    Ortiz, J; Wan, T H

    2012-01-01

    In 2011, some 3800 Rural Health Clinics (RHCs) delivered primary care in underserved rural areas throughout the USA. To date, little research has been conducted to identify the variability in RHC performance. In an effort to address the knowledge gaps, a national, longitudinal study was conducted of a panel of 3565 RHCs. The goals of the study were to determine: (1) the relationship between two aspects of performance, efficiency and effectiveness; and (2) the factors that influence variation in RHC performance. A non-experimental study of RHC performance was conducted using 2 years of secondary data from multiple sources. A study panel of RHCs was formed, composed of all RHCs continuously in operation during the period 2006-2007. The study panel was divided into two subsets: one for the provider-based clinics and another for the independent clinics. The individual RHC was the unit of analysis throughout the study. Descriptive statistics were calculated for each subset. Bivariate analyses were conducted of the relationships between the clinic characteristics and the performance outcome measures, as well as the interrelationships between various clinic characteristics, using χ², t-tests, Cramer's V, Pearson correlation, and Spearman correlation statistics. Next, using covariance structure analysis, the interrelationships were examined among the context (community or demographic factors), design (organizational structure and other mediating factors), and performance (efficiency and effectiveness) of RHCs. Three hypotheses were tested: (1) the effectiveness of RHCs is positively influenced by efficiency; (2) there is a reciprocal relationship between RHC efficiency and effectiveness; and (3) large RHCs are more efficient than small RHCs. To test the hypotheses that effectiveness of RHCs is positively influenced by efficiency and that there is a reciprocal relationship between efficiency and effectiveness, two covariance structure models were developed and revised: one for independent and one for provider-based RHCs. However, the revised models were not supported by the data. To test the hypothesis that large RHCs are more efficient than small ones, two additional efficiency-based structural equation models were constructed (one for independent RHCs and another for provider-based RHCs). Both of these models were supported by the data (independent model: χ² = 13.8, df = 8, p = 0.088, relative χ² = 1.723, adjusted goodness of fit index [AGFI] = .981, root mean square error of approximation [RMSEA] = .034; provider-based model: χ² = 19.011, df = 8, p = 0.015, relative χ² = 2.376, AGFI = .978, RMSEA = .043). This study examined the relationship between efficiency and effectiveness of RHCs. In addition, it identified several factors that influence the variation in RHC performance. The study has implications for optimizing RHC performance, providing quality services to rural populations, and enhancing the value of RHC data. The present is a critical time in the history of RHCs as they transition to meet the goals and expectations of the US health system reform. Additional research is needed to quantify and trend RHCs' contribution to the rural health delivery system in order to optimize their service to rural populations.

  4. Constructing Self-Modeling Videos: Procedures and Technology

    ERIC Educational Resources Information Center

    Collier-Meek, Melissa A.; Fallon, Lindsay M.; Johnson, Austin H.; Sanetti, Lisa M. H.; Delcampo, Marisa A.

    2012-01-01

    Although widely recommended, evidence-based interventions are not regularly utilized by school practitioners. Video self-modeling is an effective and efficient evidence-based intervention for a variety of student problem behaviors. However, like many other evidence-based interventions, it is not frequently used in schools. As video creation…

  5. A Dynamic Intrusion Detection System Based on Multivariate Hotelling's T2 Statistics Approach for Network Environments

    PubMed Central

    Avalappampatty Sivasamy, Aneetha; Sundan, Bose

    2015-01-01

    The ever-expanding communication requirements in today's world demand extensive and efficient network systems with equally efficient and reliable security features integrated for safe, confident, and secure communication and data transfer. Providing effective security protocols for any network environment therefore assumes paramount importance. Continuous attempts are being made to design more efficient and dynamic network intrusion detection models. In this work, an approach based on Hotelling's T2 method, a multivariate statistical analysis technique, has been employed for intrusion detection, especially in network environments. Components such as preprocessing, multivariate statistical analysis, and attack detection have been incorporated in developing the multivariate Hotelling's T2 statistical model, and the necessary profiles have been generated based on the T-square distance metrics. With a threshold range obtained using the central limit theorem, observed traffic profiles have been classified as either normal or attack types. The performance of the model, as evaluated through validation and testing using the KDD Cup'99 dataset, has shown very high detection rates for all classes with low false alarm rates. The accuracy of the model presented in this work has been found to be much better than that of existing models. PMID:26357668
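    The statistic itself is compact; the sketch below scores a batch of feature vectors against a baseline profile with a chi-squared cutoff, treating the baseline covariance as known (a simplification), and leaves the KDD'99 feature engineering out of scope.

```python
import numpy as np
from scipy.stats import chi2

def t_squared(batch, mu0, cov_inv):
    """Hotelling's T2 of a batch mean against the baseline mean mu0."""
    d = batch.mean(axis=0) - mu0
    return len(batch) * d @ cov_inv @ d

rng = np.random.default_rng(0)
baseline = rng.normal(size=(5000, 4))            # long baseline of traffic features
mu0 = baseline.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(baseline.T))      # treated as the known covariance

batch = rng.normal(loc=0.5, size=(25, 4))        # shifted batch: candidate attack
score = t_squared(batch, mu0, cov_inv)
threshold = chi2.ppf(0.99, df=4)                 # cutoff for known covariance
print(f"T2 = {score:.1f} -> {'attack' if score > threshold else 'normal'}")
```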

  6. A Dynamic Intrusion Detection System Based on Multivariate Hotelling's T2 Statistics Approach for Network Environments.

    PubMed

    Sivasamy, Aneetha Avalappampatty; Sundan, Bose

    2015-01-01

    The ever-expanding communication requirements in today's world demand extensive and efficient network systems with equally efficient and reliable security features integrated for safe, confident, and secure communication and data transfer. Providing effective security protocols for any network environment therefore assumes paramount importance. Continuous attempts are being made to design more efficient and dynamic network intrusion detection models. In this work, an approach based on Hotelling's T(2) method, a multivariate statistical analysis technique, has been employed for intrusion detection, especially in network environments. Components such as preprocessing, multivariate statistical analysis, and attack detection have been incorporated in developing the multivariate Hotelling's T(2) statistical model, and the necessary profiles have been generated based on the T-square distance metrics. With a threshold range obtained using the central limit theorem, observed traffic profiles have been classified as either normal or attack types. The performance of the model, as evaluated through validation and testing using the KDD Cup'99 dataset, has shown very high detection rates for all classes with low false alarm rates. The accuracy of the model presented in this work has been found to be much better than that of existing models.

  7. Airport security inspection process model and optimization based on GSPN

    NASA Astrophysics Data System (ADS)

    Mao, Shuainan

    2018-04-01

    To improve the efficiency of the airport security inspection process, a Generalized Stochastic Petri Net is used to establish a model of the security inspection process. The model is used to analyze the bottleneck problem of the airport security inspection process. A solution to the bottleneck is given: adding a place for passengers to remove their clothes and an additional X-ray detector significantly improves efficiency and reduces waiting time.
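    A back-of-envelope version of such a bottleneck analysis can be done with an M/M/c queue (Erlang C), as below; the GSPN in the paper is far richer, and the arrival and service rates here are invented.

```python
from math import factorial

def erlang_c_wait(lam, mu, c):
    """Mean wait in queue for M/M/c: arrival rate lam, service rate mu per server."""
    rho = lam / (c * mu)
    assert rho < 1, "unstable queue"
    a = lam / mu
    p0 = 1.0 / (sum(a**k / factorial(k) for k in range(c)) +
                a**c / (factorial(c) * (1 - rho)))
    p_wait = a**c / (factorial(c) * (1 - rho)) * p0   # probability of queueing
    return p_wait / (c * mu - lam)

base = erlang_c_wait(lam=3.0, mu=0.8, c=4)        # 4 X-ray lanes (per-minute rates)
more = erlang_c_wait(lam=3.0, mu=0.8, c=5)        # one extra detector
print(f"mean wait: {base*60:.1f} -> {more*60:.1f} seconds")
```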

  8. A new framework for comprehensive, robust, and efficient global sensitivity analysis: 1. Theory

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin V.

    2016-01-01

    Computer simulation models are continually growing in complexity with increasingly more factors to be identified. Sensitivity Analysis (SA) provides an essential means for understanding the role and importance of these factors in producing model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to "variogram analysis," that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. Synthetic functions that resemble actual model response surfaces are used to illustrate the concepts, and show VARS to be as much as two orders of magnitude more computationally efficient than the state-of-the-art Sobol approach. In a companion paper, we propose a practical implementation strategy, and demonstrate the effectiveness, efficiency, and reliability (robustness) of the VARS framework on real-data case studies.
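    The central object, the variogram of the response surface, is easy to estimate by sampling, as the sketch below shows for one factor at a time; the full VARS framework (star-based sampling, integrated metrics, and the links to Morris and Sobol) is not reproduced here.

```python
import numpy as np

def directional_variogram(f, dim, i, h, n=2000, seed=0):
    """Estimate gamma_i(h) = 0.5 * E[(f(x + h*e_i) - f(x))^2] on [0,1]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, 1 - h, size=(n, dim))      # keep x + h*e_i inside the cube
    xh = x.copy()
    xh[:, i] += h
    return 0.5 * np.mean((f(xh) - f(x)) ** 2)

f = lambda X: np.sin(6 * X[:, 0]) + 0.1 * X[:, 1]  # toy response surface
for i in range(2):
    print(f"factor {i}:", [round(directional_variogram(f, 2, i, h), 4)
                           for h in (0.05, 0.1, 0.3)])
```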

  9. An Application on Merton Model in the Non-efficient Market

    NASA Astrophysics Data System (ADS)

    Feng, Yanan; Xiao, Qingxian

    The Merton model is one of the most famous credit risk models. This model presumes that the only source of uncertainty in equity prices is the firm's net asset value; but this condition holds only when the market is efficient, which is often ignored in modern research. Moreover, the original Merton model is based on the assumptions that, in the event of default, absolute priority holds, renegotiation is not permitted, and liquidation of the firm is costless; in the Merton model and most of its modified versions, the default boundary is also assumed to be constant, which does not correspond with reality. These assumptions can reduce the predictive power of the model. In this paper, we relax some of the assumptions underlying the original model; the resulting model is essentially a modification of Merton's. In a non-efficient market, we use stock data to analyze this model. The result shows that the modified model can evaluate credit risk well in a non-efficient market.
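    For reference, the unmodified Merton core, equity as a European call on firm assets, is a few lines of code; the paper's relaxations (non-constant default boundary, non-efficient market) are not reproduced, and the inputs are illustrative.

```python
from math import exp, log, sqrt
from scipy.stats import norm

def merton_equity(V, D, r, sigma, T):
    """Equity value and risk-neutral default probability in the Merton model.
    V: asset value, D: face value of debt, sigma: asset volatility, T: horizon."""
    d1 = (log(V / D) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    equity = V * norm.cdf(d1) - D * exp(-r * T) * norm.cdf(d2)
    default_prob = norm.cdf(-d2)
    return equity, default_prob

print(merton_equity(V=120.0, D=100.0, r=0.03, sigma=0.25, T=1.0))
```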

  10. Markov Switching Autoregressive Conditional Heteroscedasticity (SWARCH) Model to Detect Financial Crisis in Indonesia Based on Import and Export Indicators

    NASA Astrophysics Data System (ADS)

    Sugiyanto; Zukhronah, Etik; Susanti, Yuliana; Rahma Dwi, Sisca

    2017-06-01

    A country is said to be in crisis when its financial system experiences a disruption that prevents it from functioning efficiently. The performance of macroeconomic indicators, especially imports and exports, can be used to detect a financial crisis in Indonesia. Based on the import and export indicators from 1987 to 2015, the movement of these indicators can be modelled using a three-state SWARCH model. The results showed that the SWARCH(3,1) model was able to detect the crises that occurred in Indonesia in 1997 and 2008. Using this model, it can be concluded that Indonesia was prone to a financial crisis in 2016.

  11. A physical-based gas-surface interaction model for rarefied gas flow simulation

    NASA Astrophysics Data System (ADS)

    Liang, Tengfei; Li, Qi; Ye, Wenjing

    2018-01-01

    Empirical gas-surface interaction models, such as the Maxwell model and the Cercignani-Lampis model, are widely used as the boundary condition in rarefied gas flow simulations. The accuracy of these models in predicting the macroscopic behavior of rarefied gas flows is less satisfactory in some cases, especially highly non-equilibrium ones. Molecular dynamics (MD) simulation can accurately resolve the gas-surface interaction process at the atomic scale, and hence can predict accurate macroscopic behavior. It is, however, too computationally expensive to be applied to real problems. In this work, a statistical physical-based gas-surface interaction model, which complies with the basic relations of a boundary condition, is developed based on the framework of the washboard model. By virtue of its physical basis, this new model is capable of capturing some important relations and trends that the classic empirical models fail to model correctly. As such, the new model is much more accurate than the classic models while being more efficient than MD simulations. It can therefore serve as a more accurate and efficient boundary condition for rarefied gas flow simulations.
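    The classic Maxwell boundary condition that such models improve upon fits in a short function: reflect specularly with probability 1 - alpha, otherwise re-emit from a wall-temperature Maxwellian. The sketch below assumes a wall normal along +z and uses illustrative values.

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant (J/K)

def maxwell_reflect(v_in, alpha, T_wall, mass, rng):
    if rng.random() > alpha:                        # specular: flip normal component
        return v_in * np.array([1.0, 1.0, -1.0])
    s = np.sqrt(KB * T_wall / mass)                 # thermal speed scale
    vt = rng.normal(0.0, s, size=2)                 # tangential: Gaussian
    vn = s * np.sqrt(-2.0 * np.log(rng.random()))   # normal: flux-weighted (Rayleigh)
    return np.array([vt[0], vt[1], vn])

rng = np.random.default_rng(0)
v = np.array([300.0, 0.0, -400.0])                  # incoming argon molecule (m/s)
print(maxwell_reflect(v, alpha=0.8, T_wall=300.0, mass=6.63e-26, rng=rng))
```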

  12. Ecological efficiency in China and its influencing factors-a super-efficient SBM metafrontier-Malmquist-Tobit model study.

    PubMed

    Ma, Xiaojun; Wang, Changxin; Yu, Yuanbo; Li, Yudong; Dong, Biying; Zhang, Xinyu; Niu, Xueqi; Yang, Qian; Chen, Ruimin; Li, Yifan; Gu, Yihan

    2018-05-15

    The ecological problem is one of the core issues restraining China's economic development at present, and it urgently needs to be solved properly and effectively. Based on panel data from 30 regions, this paper uses a super-efficiency slack-based measure (SBM) model that incorporates undesirable output to calculate ecological efficiency, and then uses traditional and metafrontier-Malmquist index methods to study regional change trends and technology gap ratios (TGRs). Finally, Tobit regression and principal component analysis are used to analyze the main factors affecting eco-efficiency and their degree of impact. The results show that about 60% of China's provinces have effective eco-efficiency and that the overall ecological efficiency of China is at an upper-middle level, but there is a serious imbalance among provinces and regions. Ecological efficiency has an obvious spatial cluster effect. There are differences among regional TGR values. Most regions show a downward trend, and the phenomenon of focusing on economic development at the expense of ecological protection still exists. Expansion of opening to the outside world, increases in R&D spending, and improvement of the population urbanization rate have positive effects on eco-efficiency. Blind economic expansion, increases in the industrial structure, and the proportion of energy consumption have negative effects on eco-efficiency.
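    To make the DEA machinery concrete, the sketch below computes a plain input-oriented CCR efficiency score by linear programming; this is a deliberate simplification, since the paper's super-efficiency SBM with undesirable outputs requires a more elaborate formulation, and the data are toy numbers.

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0], [3.0, 2.0], [4.0, 5.0]])   # inputs, one row per DMU
Y = np.array([[1.0], [1.2], [1.1]])                  # desirable outputs

def ccr_efficiency(k):
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                      # variables: [theta, lambda_1..n]
    A_in = np.c_[-X[k].reshape(m, 1), X.T]           # sum_j lam_j x_ij <= theta x_ik
    A_out = np.c_[np.zeros((s, 1)), -Y.T]            # sum_j lam_j y_rj >= y_rk
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[k]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun                                   # theta* = 1 means efficient

for k in range(len(X)):
    print(f"DMU {k}: CCR efficiency = {ccr_efficiency(k):.3f}")
```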

  13. Modeling qRT-PCR dynamics with application to cancer biomarker quantification.

    PubMed

    Chervoneva, Inna; Freydin, Boris; Hyslop, Terry; Waldman, Scott A

    2017-01-01

    Quantitative reverse transcription polymerase chain reaction (qRT-PCR) is widely used for molecular diagnostics and evaluating prognosis in cancer. The utility of mRNA expression biomarkers relies heavily on the accuracy and precision of quantification, which is still challenging for low abundance transcripts. The critical step for quantification is accurate estimation of efficiency needed for computing a relative qRT-PCR expression. We propose a new approach to estimating qRT-PCR efficiency based on modeling dynamics of polymerase chain reaction amplification. In contrast, only models for fluorescence intensity as a function of polymerase chain reaction cycle have been used so far for quantification. The dynamics of qRT-PCR efficiency is modeled using an ordinary differential equation model, and the fitted ordinary differential equation model is used to obtain effective polymerase chain reaction efficiency estimates needed for efficiency-adjusted quantification. The proposed new qRT-PCR efficiency estimates were used to quantify GUCY2C (Guanylate Cyclase 2C) mRNA expression in the blood of colorectal cancer patients. Time to recurrence and GUCY2C expression ratios were analyzed in a joint model for survival and longitudinal outcomes. The joint model with GUCY2C quantified using the proposed polymerase chain reaction efficiency estimates provided clinically meaningful results for association between time to recurrence and longitudinal trends in GUCY2C expression.

  14. Multiscale musculoskeletal modelling, data–model fusion and electromyography-informed modelling

    PubMed Central

    Zhang, J.; Heidlauf, T.; Sartori, M.; Besier, T.; Röhrle, O.; Lloyd, D.

    2016-01-01

    This paper proposes methods and technologies that advance the state of the art for modelling the musculoskeletal system across the spatial and temporal scales; and storing these using efficient ontologies and tools. We present population-based modelling as an efficient method to rapidly generate individual morphology from only a few measurements and to learn from the ever-increasing supply of imaging data available. We present multiscale methods for continuum muscle and bone models; and efficient mechanostatistical methods, both continuum and particle-based, to bridge the scales. Finally, we examine both the importance that muscles play in bone remodelling stimuli and the latest muscle force prediction methods that use electromyography-assisted modelling techniques to compute musculoskeletal forces that best reflect the underlying neuromuscular activity. Our proposal is that, in order to have a clinically relevant virtual physiological human, (i) bone and muscle mechanics must be considered together; (ii) models should be trained on population data to permit rapid generation and use underlying principal modes that describe both muscle patterns and morphology; and (iii) these tools need to be available in an open-source repository so that the scientific community may use, personalize and contribute to the database of models. PMID:27051510
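    The population-based idea can be illustrated with principal component analysis: learn the main modes of morphology from a training population, then fit an individual's mode weights from a few measurements, as in the simulated sketch below.

```python
import numpy as np

rng = np.random.default_rng(0)
pop = rng.normal(size=(50, 30))             # 50 subjects x 30 landmark coordinates
mean = pop.mean(axis=0)
U, sv, Vt = np.linalg.svd(pop - mean, full_matrices=False)
modes = Vt[:3]                              # first 3 principal modes of morphology

subject = pop[0]                            # "new" individual to reconstruct
measured = [0, 5, 12, 20]                   # indices of the few measurements taken
A = modes[:, measured].T                    # fit mode weights from sparse data
w, *_ = np.linalg.lstsq(A, subject[measured] - mean[measured], rcond=None)
reconstruction = mean + w @ modes
print("mean landmark error:", np.abs(reconstruction - subject).mean())
```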

  15. Modeling the Player: Predictability of the Models of Bartle and Kolb Based on NEO-FFI (Big5) and the Implications for Game Based Learning

    ERIC Educational Resources Information Center

    Konert, Johannes; Gutjahr, Michael; Göbel, Stefan; Steinmetz, Ralf

    2014-01-01

    For adaptation and personalization of game play sophisticated player models and learner models are used in game-based learning environments. Thus, the game flow can be optimized to increase efficiency and effectiveness of gaming and learning in parallel. In the field of gaming still the Bartle model is commonly used due to its simplicity and good…

  16. A novel vortex tube-based N2-expander liquefaction process for enhancing the energy efficiency of natural gas liquefaction

    NASA Astrophysics Data System (ADS)

    Qyyum, Muhammad Abdul; Wei, Feng; Hussain, Arif; Ali, Wahid; Sehee, Oh; Lee, Moonyong

    2017-11-01

    This research work presents a simple, safe, environment-friendly, and energy-efficient novel vortex tube-based natural gas liquefaction (LNG) process. A vortex tube was introduced into the popular N2-expander liquefaction process to enhance the liquefaction efficiency. The process structure and conditions were modified and optimized to exploit the potential advantage of the vortex tube in the natural gas liquefaction cycle. Two commercial simulators, ANSYS® and Aspen HYSYS®, were used to investigate the application of the vortex tube in the refrigeration cycle of the LNG process. A computational fluid dynamics (CFD) model was used to simulate the vortex tube with nitrogen (N2) as the working fluid. Subsequently, the results of the CFD model were embedded in Aspen HYSYS® to validate the proposed LNG liquefaction process. The proposed natural gas liquefaction process was optimized using the knowledge-based optimization (KBO) approach, with overall energy consumption chosen as the objective function. The performance of the proposed liquefaction process was compared with the conventional N2-expander liquefaction process: the vortex tube-based LNG process improved energy efficiency by 20%, mainly owing to the isentropic expansion in the vortex tube. The high energy efficiency of the vortex tube-based process turned out to depend strongly on the refrigerant cold fraction, the operating conditions, and the refrigerant cycle configuration.

  17. Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission

    NASA Astrophysics Data System (ADS)

    Huang, Yuechen; Li, Haiyang

    2018-06-01

    This paper presents the reliability-based sequential optimization (RBSO) method to solve the trajectory optimization problem with parametric uncertainties in entry dynamics for a Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and then the reliability-based optimization model is formulated. In addition, a modified sequential optimization method, in which the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method are employed, is proposed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method contributes to the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and to the efficient approximation of the trajectory solution. The MPP method, which assesses the reliability of constraint satisfaction only up to the necessary level, is employed to further improve the computational efficiency. The cycle of SO, reliability assessment, and constraint updating is repeated in the RBSO until the reliability requirements for constraint satisfaction are met. Finally, the RBSO is compared with the traditional DO and the traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission to demonstrate the effectiveness and the efficiency of the proposed method.

  18. 3-D model-based vehicle tracking.

    PubMed

    Lou, Jianguang; Tan, Tieniu; Hu, Weiming; Yang, Hao; Maybank, Steven J

    2005-10-01

    This paper aims at tracking vehicles from monocular intensity image sequences and presents an efficient and robust approach to three-dimensional (3-D) model-based vehicle tracking. Under the weak perspective assumption and the ground-plane constraint, the movements of model projection in the two-dimensional image plane can be decomposed into two motions: translation and rotation. They are the results of the corresponding movements of 3-D translation on the ground plane (GP) and rotation around the normal of the GP, which can be determined separately. A new metric based on point-to-line segment distance is proposed to evaluate the similarity between an image region and an instantiation of a 3-D vehicle model under a given pose. Based on this, we provide an efficient pose refinement method to refine the vehicle's pose parameters. An improved EKF is also proposed to track and to predict vehicle motion with a precise kinematics model. Experimental results with both indoor and outdoor data show that the algorithm obtains desirable performance even under severe occlusion and clutter.

  19. Sybil--efficient constraint-based modelling in R.

    PubMed

    Gelius-Dietrich, Gabriel; Desouki, Abdelmoneim Amer; Fritzemeier, Claus Jonathan; Lercher, Martin J

    2013-11-13

    Constraint-based analyses of metabolic networks are widely used to simulate the properties of genome-scale metabolic networks. Publicly available implementations tend to be slow, impeding large scale analyses such as the genome-wide computation of pairwise gene knock-outs, or the automated search for model improvements. Furthermore, available implementations cannot easily be extended or adapted by users. Here, we present sybil, an open source software library for constraint-based analyses in R; R is a free, platform-independent environment for statistical computing and graphics that is widely used in bioinformatics. Among other functions, sybil currently provides efficient methods for flux-balance analysis (FBA), MOMA, and ROOM that are about ten times faster than previous implementations when calculating the effect of whole-genome single gene deletions in silico on a complete E. coli metabolic model. Due to the object-oriented architecture of sybil, users can easily build analysis pipelines in R or even implement their own constraint-based algorithms. Based on its highly efficient communication with different mathematical optimisation programs, sybil facilitates the exploration of high-dimensional optimisation problems on small time scales. Sybil and all its dependencies are open source. Sybil and its documentation are available for download from the comprehensive R archive network (CRAN).
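    The flux-balance analysis at sybil's core reduces to a linear program; the toy three-reaction network below shows the structure (sybil itself works in R, at genome scale, and adds MOMA and ROOM on top).

```python
import numpy as np
from scipy.optimize import linprog

# Stoichiometry (metabolites x reactions): A_in -> A, A -> B, B -> biomass
S = np.array([[1, -1,  0],     # metabolite A
              [0,  1, -1]])    # metabolite B
bounds = [(0, 10), (0, 1000), (0, 1000)]   # uptake capped at 10
c = np.zeros(3)
c[2] = -1.0                                # maximize biomass flux (minimize -v3)

# Steady state S v = 0 plus flux bounds: the FBA linear program.
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal fluxes:", res.x, "growth:", -res.fun)
```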

  20. The Influences of Quantum Coherence on the Positive Work and the Efficiency of Quantum Heat Engine with Working Substance of Two-Qubit Heisenberg XXX Model

    NASA Astrophysics Data System (ADS)

    Peng, Hu-Ping; Fang, Mao-Fa; Yu, Min; Zou, Hong-Mei

    2018-03-01

    We study the influences of quantum coherence on the positive work and the efficiency of a quantum heat engine (QHE) whose working substance is the two-qubit Heisenberg model under a constant external magnetic field. Using analytical and numerical solutions, we give the expressions relating both the positive work and the efficiency to quantum coherence, and we discuss in detail the effects of quantum coherence on the positive work and the efficiency of the QHE in the absence and presence of an external magnetic field, respectively.

  1. The Influences of Quantum Coherence on the Positive Work and the Efficiency of Quantum Heat Engine with Working Substance of Two-Qubit Heisenberg XXX Model

    NASA Astrophysics Data System (ADS)

    Peng, Hu-Ping; Fang, Mao-Fa; Yu, Min; Zou, Hong-Mei

    2018-06-01

    We study the influences of quantum coherence on the positive work and the efficiency of a quantum heat engine (QHE) whose working substance is the two-qubit Heisenberg model under a constant external magnetic field. Using analytical and numerical solutions, we give the expressions relating both the positive work and the efficiency to quantum coherence, and we discuss in detail the effects of quantum coherence on the positive work and the efficiency of the QHE in the absence and presence of an external magnetic field, respectively.

  2. Object-Oriented Modeling of an Energy Harvesting System Based on Thermoelectric Generators

    NASA Astrophysics Data System (ADS)

    Nesarajah, Marco; Frey, Georg

    This paper deals with the modeling of an energy harvesting system based on thermoelectric generators (TEGs), and the validation of the model by means of a test bench. TEGs are capable of improving the overall energy efficiency of energy systems, e.g. combustion engines or heating systems, by using the remaining waste heat to generate electrical power. Previously, a component-oriented model of the TEG itself was developed in the Modelica® language. With this model, any TEG can be described and simulated given the material properties and the physical dimensions. This model has now been extended with the surrounding components into a complete model of a thermoelectric energy harvesting system. In addition to the TEG, the model contains the cooling system, the heat source, and the power electronics. To validate the simulation model, a test bench was built and installed on an oil-fired household heating system. The paper reports results of the measurements and discusses the validity of the developed simulation models. Furthermore, the efficiency of the proposed energy harvesting system is derived, and possible improvements based on design variations tested in the simulation model are proposed.
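    The matched-load behavior that governs how much such a harvester can deliver is worth seeing as a worked formula: electrical output P = (alpha*dT)^2 * R_L / (R_int + R_L)^2, which peaks at R_L = R_int. Values in the sketch are generic module-level placeholders.

```python
import numpy as np

alpha, R_int, dT = 0.05, 2.0, 60.0          # Seebeck coeff (V/K), ohm, temp diff (K)
R_L = np.linspace(0.1, 10.0, 500)           # candidate load resistances (ohm)
P = (alpha * dT) ** 2 * R_L / (R_int + R_L) ** 2

best = R_L[P.argmax()]
print(f"max power {P.max():.2f} W at R_L = {best:.2f} ohm (R_int = {R_int} ohm)")
```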

  3. Designing novel cellulase systems through agent-based modeling and global sensitivity analysis.

    PubMed

    Apte, Advait A; Senger, Ryan S; Fong, Stephen S

    2014-01-01

    Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement.

  4. Designing novel cellulase systems through agent-based modeling and global sensitivity analysis

    PubMed Central

    Apte, Advait A; Senger, Ryan S; Fong, Stephen S

    2014-01-01

    Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement. PMID:24830736

  5. Equation-based languages – A new paradigm for building energy modeling, simulation and optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wetter, Michael; Bonvini, Marco; Nouidui, Thierry S.

    Most of the state-of-the-art building simulation programs implement models in imperative programming languages. This complicates modeling and excludes the use of certain efficient methods for simulation and optimization. In contrast, equation-based modeling languages declare relations among variables, thereby allowing the use of computer algebra to enable much simpler schematic modeling and to generate efficient code for simulation and optimization. We contrast the two approaches in this paper. We explain how such manipulations support new use cases. In the first of two examples, we couple models of the electrical grid, multiple buildings, HVAC systems and controllers to test a controller that adjusts building room temperatures and PV inverter reactive power to maintain power quality. In the second example, we contrast the computing time for solving an optimal control problem for a room-level model predictive controller with and without symbolic manipulations. As a result, exploiting the equation-based language led to a solution that was 2,200 times faster.

  6. Equation-based languages – A new paradigm for building energy modeling, simulation and optimization

    DOE PAGES

    Wetter, Michael; Bonvini, Marco; Nouidui, Thierry S.

    2016-04-01

    Most of the state-of-the-art building simulation programs implement models in imperative programming languages. This complicates modeling and excludes the use of certain efficient methods for simulation and optimization. In contrast, equation-based modeling languages declare relations among variables, thereby allowing the use of computer algebra to enable much simpler schematic modeling and to generate efficient code for simulation and optimization. We contrast the two approaches in this paper. We explain how such manipulations support new use cases. In the first of two examples, we couple models of the electrical grid, multiple buildings, HVAC systems and controllers to test a controller that adjusts building room temperatures and PV inverter reactive power to maintain power quality. In the second example, we contrast the computing time for solving an optimal control problem for a room-level model predictive controller with and without symbolic manipulations. As a result, exploiting the equation-based language led to a solution that was 2,200 times faster.
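
    The core difference between the paradigms can be seen in miniature with a computer algebra system. The sketch below uses SymPy as a stand-in for an equation-based language: a single declared relation can be solved for any of its variables, whereas an imperative assignment hard-codes one causality. The heat-balance relation is a generic illustration, not one of the paper's models.

```python
# Declarative vs. imperative in miniature, using SymPy as a stand-in
# for an equation-based modeling language.
import sympy as sp

q, ua, t_in, t_out = sp.symbols("Q UA T_in T_out")

# Declarative: state the heat-balance relation once.
relation = sp.Eq(q, ua * (t_in - t_out))

# Computer algebra rearranges it for whichever variable is unknown.
print(sp.solve(relation, q))      # Q in terms of UA, T_in, T_out
print(sp.solve(relation, t_out))  # T_out in terms of Q, UA, T_in

# Imperative style fixes one causality, Q = UA * (T_in - T_out),
# and would need hand-written code for each rearrangement.
```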

  7. Statistical Techniques to Explore the Quality of Constraints in Constraint-Based Modeling Environments

    ERIC Educational Resources Information Center

    Gálvez, Jaime; Conejo, Ricardo; Guzmán, Eduardo

    2013-01-01

    One of the most popular student modeling approaches is Constraint-Based Modeling (CBM). It is an efficient approach that can be easily applied inside an Intelligent Tutoring System (ITS). Even with these characteristics, building new ITSs requires carefully designing the domain model to be taught because different sources of errors could affect…

  8. An analytical probabilistic model of the quality efficiency of a sewer tank

    NASA Astrophysics Data System (ADS)

    Balistrocchi, Matteo; Grossi, Giovanna; Bacchi, Baldassare

    2009-12-01

    The assessment of the efficiency of a storm water storage facility devoted to sewer overflow control in urban areas strictly depends on the ability to model the main features of the rainfall-runoff routing process and the related wet weather pollution delivery. In this paper, the possibility of applying the analytical probabilistic approach to develop a tank design method, whose potential is comparable to that of continuous simulation, is demonstrated. Water quality aspects of such devices were incorporated in the model derivation. The formulation is based on a Weibull probabilistic model of the main characteristics of the rainfall process and on a power law describing the relationship between the dimensionless storm water cumulative runoff volume and the dimensionless cumulative pollutograph. Following this approach, efficiency indexes were established. The proposed model was verified by comparing its results to those obtained by continuous simulations; satisfactory agreement is shown for the proposed efficiency indexes.
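
    A brute-force numerical counterpart of the analytical model makes the efficiency index concrete: draw storm runoff volumes from a Weibull distribution and compute the fraction captured by a tank of given capacity. The distribution parameters and capacities below are illustrative assumptions, not calibrated values from the paper.

```python
# Monte Carlo sketch of a volumetric efficiency index for a storage tank
# under Weibull-distributed storm runoff volumes (illustrative numbers).
import numpy as np

rng = np.random.default_rng(1)
shape, scale = 0.8, 10.0               # assumed Weibull parameters (mm)
runoff = scale * rng.weibull(shape, 100_000)   # per-storm runoff volumes

for v_tank in (5.0, 10.0, 20.0):       # candidate tank capacities (mm)
    captured = np.minimum(runoff, v_tank)      # tank fills, excess overflows
    eta = captured.sum() / runoff.sum()        # volumetric efficiency index
    print(f"V = {v_tank:5.1f} mm -> efficiency = {eta:.2f}")
```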

  9. [Modeling and analysis of volume conduction based on field-circuit coupling].

    PubMed

    Tang, Zhide; Liu, Hailong; Xie, Xiaohui; Chen, Xiufa; Hou, Deming

    2012-08-01

    Numerical simulations of volume conduction can be used to analyze the process of energy transfer and explore the effects of some physical factors on energy transfer efficiency. We analyzed the 3D quasi-static electric field by the finite element method and developed a 3D coupled field-circuit model of volume conduction based on the coupling between the circuit and the electric field. The model includes a circuit simulation of the volume conduction to provide direct theoretical guidance for energy transfer optimization design. A field-circuit coupling model with circular cylinder electrodes was established on the platform of the software FEM3.5. Based on this, the effects of electrode cross-sectional area, electrode distance, and circuit parameters on the performance of the volume conduction system were obtained, which provides a basis for the optimized design of energy transfer efficiency.

  10. Research on optimization of combustion efficiency of thermal power unit based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Qiongyang

    2018-04-01

    In order to improve the economic performance and reduce pollutant emissions of thermal power units, the main factors affecting boiler efficiency are analyzed using an orthogonal method, and the suitability of neural networks for establishing a boiler combustion model is examined. On the basis of this model, a genetic algorithm is used to find the best control settings for furnace combustion under a given working condition. Using a genetic algorithm based on real-number encoding and roulette-wheel selection, the best control settings found for a given combustion condition can be fed back into the neural-network boiler combustion model for further training. The precision of the neural network model is thereby further improved, laying the groundwork for research on a complete boiler combustion optimization system.
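
    The optimization loop itself is compact. The sketch below implements a real-coded genetic algorithm with roulette-wheel selection against a stand-in efficiency function; in the study, the objective would be the trained neural-network combustion model, which is not reproduced here, and all parameter ranges are illustrative.

```python
# Real-coded GA with roulette-wheel selection, maximizing a toy
# combustion-efficiency surrogate (stand-in for the trained NN model).
import numpy as np

rng = np.random.default_rng(2)

def efficiency(x):
    # Toy surrogate: efficiency peaks at a particular (air, fuel) setting.
    air, fuel = x[..., 0], x[..., 1]
    return np.exp(-((air - 1.15) ** 2 + (fuel - 0.6) ** 2))

lo, hi = np.array([0.8, 0.2]), np.array([1.6, 1.0])
pop = rng.uniform(lo, hi, size=(40, 2))          # real-number encoding
for _ in range(100):
    fit = efficiency(pop)
    probs = fit / fit.sum()                      # roulette wheel
    parents = pop[rng.choice(len(pop), size=len(pop), p=probs)]
    # Arithmetic crossover between consecutive parents, then mutation.
    alpha = rng.random((len(pop), 1))
    children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
    children += rng.normal(0, 0.02, children.shape)
    pop = np.clip(children, lo, hi)

print("best control settings:", pop[np.argmax(efficiency(pop))])
```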

  11. Research on efficiency evaluation model of integrated energy system based on hybrid multi-attribute decision-making.

    PubMed

    Li, Yan

    2017-05-25

    The efficiency evaluation of an integrated energy system involves many influencing factors, and the attribute values are heterogeneous and non-deterministic; specific numerical values or accurate probability distribution characteristics usually cannot be given, which biases the final evaluation result. According to the characteristics of the integrated energy system, a hybrid multi-attribute decision-making model is constructed that takes the decision maker's risk preference into account. In evaluating the efficiency of an integrated energy system, the values of some evaluation indexes are linguistic, or the judgments of the evaluation experts are inconsistent. These factors introduce ambiguity into the decision information, usually in the form of uncertain linguistic values and numerical interval values. Accordingly, an interval-valued multiple-attribute decision-making method and a fuzzy linguistic multiple-attribute decision-making model are proposed. Finally, the mathematical model for the efficiency evaluation of an integrated energy system is constructed.

  12. Estimating Energy Conversion Efficiency of Thermoelectric Materials: Constant Property Versus Average Property Models

    NASA Astrophysics Data System (ADS)

    Armstrong, Hannah; Boese, Matthew; Carmichael, Cody; Dimich, Hannah; Seay, Dylan; Sheppard, Nathan; Beekman, Matt

    2017-01-01

    Maximum thermoelectric energy conversion efficiencies are calculated using the conventional "constant property" model and the recently proposed "cumulative/average property" model (Kim et al. in Proc Natl Acad Sci USA 112:8205, 2015) for 18 high-performance thermoelectric materials. We find that the constant property model generally predicts higher energy conversion efficiency for nearly all materials and temperature differences studied. Although significant deviations are observed in some cases, on average the constant property model predicts an efficiency that is a factor of 1.16 larger than that predicted by the average property model, with even lower deviations for temperature differences typical of energy harvesting applications. Based on our analysis, we conclude that the conventional dimensionless figure of merit ZT obtained from the constant property model, while not applicable for some materials with strongly temperature-dependent thermoelectric properties, remains a simple yet useful metric for initial evaluation and/or comparison of thermoelectric materials, provided the ZT at the average temperature of projected operation, not the peak ZT, is used.
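
    For reference, the constant property model's maximum conversion efficiency follows from ZT evaluated at a single temperature. The few lines below compute it with ZT taken at the average temperature of projected operation, as the abstract recommends; the numerical values are illustrative.

```python
# Constant-property maximum thermoelectric conversion efficiency:
# eta_max = (1 - Tc/Th) * (sqrt(1 + ZT) - 1) / (sqrt(1 + ZT) + Tc/Th),
# with ZT evaluated at the average operating temperature.
import math

def max_efficiency(zt_avg, t_hot, t_cold):
    carnot = 1.0 - t_cold / t_hot
    s = math.sqrt(1.0 + zt_avg)
    return carnot * (s - 1.0) / (s + t_cold / t_hot)

# Illustrative case: ZT = 1 at T_avg, hot side 600 K, cold side 300 K.
print(f"{max_efficiency(zt_avg=1.0, t_hot=600.0, t_cold=300.0):.3f}")
```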

  13. The components of crop productivity: measuring and modeling plant metabolism

    NASA Technical Reports Server (NTRS)

    Bugbee, B.

    1995-01-01

    Several investigators in the CELSS program have demonstrated that crop plants can be remarkably productive in optimal environments where plants are limited only by incident radiation. Radiation use efficiencies of 0.4 to 0.7 g biomass per mol of incident photons have been measured for crops in several laboratories. Some early published values for radiation use efficiency (1 g mol-1) were inflated due to the effect of side lighting. Sealed chambers are the basic research module for crop studies for space. Such chambers allow the measurement of radiation and CO2 fluxes, thus providing values for three determinants of plant growth: radiation absorption, photosynthetic efficiency (quantum yield), and respiration efficiency (carbon use efficiency). Continuous measurement of each of these parameters over the plant life cycle has provided a blueprint for daily growth rates, and is the basis for modeling crop productivity based on component metabolic processes. Much of what has been interpreted as low photosynthetic efficiency is really the result of reduced leaf expansion and poor radiation absorption. Measurements and models of short-term (minutes to hours) and long-term (days to weeks) plant metabolic rates have enormously improved our understanding of plant environment interactions in ground-based growth chambers and are critical to understanding plant responses to the space environment.

  14. Development of Efficient Real-Fluid Model in Simulating Liquid Rocket Injector Flows

    NASA Technical Reports Server (NTRS)

    Cheng, Gary; Farmer, Richard

    2003-01-01

    The characteristics of propellant mixing near the injector have a profound effect on the liquid rocket engine performance. However, the flow features near the injector of liquid rocket engines are extremely complicated; for example, supercritical-pressure spray, turbulent mixing, and chemical reactions are present. Previously, a homogeneous spray approach with a real-fluid property model was developed to account for the compressibility and evaporation effects, such that thermodynamic properties of a mixture at a wide range of pressures and temperatures can be properly calculated, including liquid-phase, gas-phase, two-phase, and dense fluid regions. The developed homogeneous spray model demonstrated good success in simulating uni-element shear coaxial injector spray combustion flows. However, the real-fluid model suffered a computational deficiency when applied to a pressure-based computational fluid dynamics (CFD) code. The deficiency is caused by the pressure and enthalpy being the independent variables in the solution procedure of a pressure-based code, whereas the real-fluid model utilizes density and temperature as independent variables. The objective of the present research work is to improve the computational efficiency of the real-fluid property model in computing thermal properties. The proposed approach is called an efficient real-fluid model, and the improvement of computational efficiency is achieved by using a combination of a liquid species and a gaseous species to represent a real-fluid species.

  15. Analytical modeling of relative luminescence efficiency of Al2O3:C optically stimulated luminescence detectors exposed to high-energy heavy charged particles.

    PubMed

    Sawakuchi, Gabriel O; Yukihara, Eduardo G

    2012-01-21

    The objective of this work is to test analytical models to calculate the luminescence efficiency of Al2O3:C optically stimulated luminescence detectors (OSLDs) exposed to heavy charged particles with energies relevant to space dosimetry and particle therapy. We used the track structure model to obtain an analytical expression for the relative luminescence efficiency based on the average radial dose distribution produced by the heavy charged particle. We compared the relative luminescence efficiency calculated using seven different radial dose distribution models, including a modified model introduced in this work, with experimental data. The results obtained using the modified radial dose distribution function agreed within 20% with experimental relative luminescence efficiency data from Al2O3:C OSLDs for particles with atomic number ranging from 1 to 54 and linear energy transfer in water from 0.2 up to 1368 keV µm(-1). In spite of the significant improvement over other radial dose distribution models, understanding of the underlying physical processes associated with these radial dose distribution models remains elusive and may represent a limitation of the track structure model.

  16. Partial least squares density modeling (PLS-DM) - a new class-modeling strategy applied to the authentication of olives in brine by near-infrared spectroscopy.

    PubMed

    Oliveri, Paolo; López, M Isabel; Casolino, M Chiara; Ruisánchez, Itziar; Callao, M Pilar; Medini, Luca; Lanteri, Silvia

    2014-12-03

    A new class-modeling method, referred to as partial least squares density modeling (PLS-DM), is presented. The method is based on partial least squares (PLS), using a distance-based sample density measurement as the response variable. Potential function probability density is subsequently calculated on the PLS scores and used, jointly with residual Q statistics, to develop efficient class models. The influence of adjustable model parameters on the resulting performances has been critically studied by means of cross-validation and application of the Pareto optimality criterion. The method has been applied to verify the authenticity of olives in brine from the cultivar Taggiasca, based on near-infrared (NIR) spectra recorded on homogenized solid samples. Two independent test sets were used for model validation. The final optimal model was characterized by high efficiency and a well-balanced trade-off between sensitivity and specificity, compared with the values obtained by application of well-established class-modeling methods such as soft independent modeling of class analogy (SIMCA) and unequal dispersed classes (UNEQ). Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Development of efficient and cost-effective distributed hydrological modeling tool MWEasyDHM based on open-source MapWindow GIS

    NASA Astrophysics Data System (ADS)

    Lei, Xiaohui; Wang, Yuhui; Liao, Weihong; Jiang, Yunzhong; Tian, Yu; Wang, Hao

    2011-09-01

    Many regions of China are still threatened by frequent floods and water resource shortages. Consequently, the task of reproducing and predicting the hydrological processes in watersheds is difficult yet unavoidable for reducing the risks of damage and loss, and it is necessary to develop an efficient and cost-effective hydrological tool for China, as many areas need to be modeled. Currently, developed hydrological tools such as Mike SHE and ArcSWAT (soil and water assessment tool based on ArcGIS) show significant power in improving the precision of hydrological modeling in China by considering spatial variability both in land cover and in soil type. However, adopting such commercial tools in a large developing country comes at a high cost. Commercial modeling tools usually contain large numbers of formulas, complicated data formats, and many preprocessing or postprocessing steps that may make it difficult for the user to carry out a simulation, thus lowering the efficiency of the modeling process. Besides, commercial hydrological models usually cannot be modified or improved to suit some special hydrological conditions in China. Some other hydrological models are open source, but integrated into commercial GIS systems. Therefore, by integrating the hydrological simulation code EasyDHM, a hydrological simulation tool named MWEasyDHM was developed based on the open-source MapWindow GIS; its purpose is to establish the first open-source GIS-based distributed hydrological modeling tool in China by integrating modules of preprocessing, model computation, parameter estimation, result display, and analysis. MWEasyDHM provides users with a friendly MapWindow GIS interface, selectable multifunctional hydrological processing modules, and, more importantly, an efficient and cost-effective hydrological simulation tool. The general construction of MWEasyDHM consists of four major parts: (1) a general GIS module for hydrological analysis, (2) a preprocessing module for modeling inputs, (3) a model calibration module, and (4) a postprocessing module. The general GIS module for hydrological analysis is developed on the basis of the totally open-source GIS software MapWindow, which contains basic GIS functions. The preprocessing module is made up of three submodules: a DEM-based submodule for hydrological analysis, a submodule for default parameter calculation, and a submodule for the spatial interpolation of meteorological data. The calibration module supports parallel computation, real-time computation, and visualization. The postprocessing module includes model calibration and spatial visualization of model results in tabular form and on spatial grids. MWEasyDHM makes efficient modeling and calibration of EasyDHM possible, and promises further development of cost-effective applications in various watersheds.

  18. [Ideas and methods on efficient screening of traditional medicines for anti-osteoporosis activity based on M-Act/Tox integrated evaluation using zebrafish].

    PubMed

    Wang, Mo; Ling, Jie; Chen, Ying; Song, Jie; Sun, E; Shi, Zi-Qi; Feng, Liang; Jia, Xiao-Bin; Wei, Ying-Jie

    2017-11-01

    The increasingly apparent liver injury problems of bone-strengthening Chinese medicines have brought challenges for clinical application, and it is necessary to consider both effectiveness and safety when screening anti-osteoporosis Chinese medicines. Metabolic transformation is closely related to drug efficacy and toxicity, so it is important to comprehensively consider metabolism-action/toxicity (M-Act/Tox) when screening anti-osteoporosis Chinese medicines. Current evaluation models and the large number of compounds (including metabolites) severely restrict efficient screening in vivo. By referring to previous relevant research and domestic and foreign literature, a zebrafish M-Act/Tox integrative method is put forward for efficiently screening anti-osteoporosis herbal medicines, which organically integrates a zebrafish metabolism model, an osteoporosis model, and a toxicity evaluation method. This method can break through the bottleneck that trace components cannot be evaluated efficiently and integratively in vivo, and realize efficient and comprehensive screening of anti-osteoporosis traditional medicines based on in vivo processes, taking both safety and effectiveness into account, which is significant for accelerating the discovery of effective and safe innovative traditional Chinese medicines for osteoporosis. Copyright© by the Chinese Pharmaceutical Association.

  19. Novel thermal efficiency-based model for determination of thermal conductivity of membrane distillation membranes

    DOE PAGES

    Vanneste, Johan; Bush, John A.; Hickenbottom, Kerri L.; ...

    2017-11-21

    Development and selection of membranes for membrane distillation (MD) could be accelerated if all performance-determining characteristics of the membrane could be obtained during MD operation, without the need to resort to specialized or cumbersome porosity or thermal conductivity measurement techniques. By redefining the thermal efficiency, the Schofield method could be adapted to describe the flux without prior knowledge of membrane porosity, thickness, or thermal conductivity. A total of 17 commercially available membranes were analyzed in terms of flux and thermal efficiency to assess their suitability for application in MD. The thermal-efficiency based model described the flux with an average %RMSE of 4.5%, which was in the same range as the standard deviation on the measured flux. The redefinition of the thermal efficiency also enabled MD to be used as a novel thermal conductivity measurement device for thin porous hydrophobic films that cannot be measured with the conventional laser flash diffusivity technique.

  20. Novel thermal efficiency-based model for determination of thermal conductivity of membrane distillation membranes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vanneste, Johan; Bush, John A.; Hickenbottom, Kerri L.

    Development and selection of membranes for membrane distillation (MD) could be accelerated if all performance-determining characteristics of the membrane could be obtained during MD operation, without the need to resort to specialized or cumbersome porosity or thermal conductivity measurement techniques. By redefining the thermal efficiency, the Schofield method could be adapted to describe the flux without prior knowledge of membrane porosity, thickness, or thermal conductivity. A total of 17 commercially available membranes were analyzed in terms of flux and thermal efficiency to assess their suitability for application in MD. The thermal-efficiency based model described the flux with an average %RMSE of 4.5%, which was in the same range as the standard deviation on the measured flux. The redefinition of the thermal efficiency also enabled MD to be used as a novel thermal conductivity measurement device for thin porous hydrophobic films that cannot be measured with the conventional laser flash diffusivity technique.

  1. Evaluating the efficiency of a zakat institution over a period of time using data envelopment analysis

    NASA Astrophysics Data System (ADS)

    Krishnan, Anath Rau; Hamzah, Ahmad Aizuddin

    2017-08-01

    It is crucial for a zakat institution to evaluate and understand how efficiently it has operated in the past, so that ideal strategies can be developed for future improvement. However, evaluating the efficiency of a zakat institution is a challenging process, as it involves the presence of multiple inputs or/and outputs. This paper proposes a step-by-step procedure comprising two data envelopment analysis models, namely the dual Charnes-Cooper-Rhodes model and the slack-based model, to quantitatively measure the overall efficiency of a zakat institution over a period of time. The applicability of the proposed procedure was demonstrated by evaluating the efficiency of Pusat Zakat Sabah, Malaysia from the year 2007 up to 2015, treating each year as a decision-making unit. Two inputs (i.e. number of staff and number of branches) and two outputs (i.e. total collection and total distribution) were used to measure the overall efficiency achieved each year. The causes of inefficiency and strategies for future improvement were discussed based on the results.
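
    The envelopment (dual) form of the CCR model reduces to one small linear program per decision-making unit. Below is a minimal input-oriented version using scipy.optimize.linprog, with two inputs and two outputs as in the paper; the numbers are made-up placeholders, not the Pusat Zakat Sabah data, and the slack-based second stage is omitted.

```python
# Input-oriented CCR DEA (constant returns to scale), envelopment form:
# for each DMU o, minimize theta subject to X @ lam <= theta * x_o,
# Y @ lam >= y_o, lam >= 0.
import numpy as np
from scipy.optimize import linprog

X = np.array([[20, 22, 25], [3, 3, 4]], float)    # inputs: staff, branches
Y = np.array([[5.0, 6.5, 7.0],                    # outputs: collection,
              [4.8, 6.0, 6.9]])                   #          distribution
n = X.shape[1]                                    # one DMU per year

for o in range(n):
    c = np.r_[1.0, np.zeros(n)]                   # decision vars: [theta, lam]
    A_in = np.c_[-X[:, [o]], X]                   # X @ lam - theta*x_o <= 0
    A_out = np.c_[np.zeros((Y.shape[0], 1)), -Y]  # -Y @ lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(X.shape[0]), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    print(f"DMU {o}: efficiency = {res.x[0]:.3f}")
```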

  2. Estimation and modeling of electrofishing capture efficiency for fishes in wadeable warmwater streams

    USGS Publications Warehouse

    Price, A.; Peterson, James T.

    2010-01-01

    Stream fish managers often use fish sample data to inform management decisions affecting fish populations. Fish sample data, however, can be biased by the same factors affecting fish populations. To minimize the effect of sample biases on decision making, biologists need information on the effectiveness of fish sampling methods. We evaluated single-pass backpack electrofishing and seining combined with electrofishing by following a dual-gear, mark–recapture approach in 61 blocknetted sample units within first- to third-order streams. We also estimated fish movement out of unblocked units during sampling. Capture efficiency and fish abundances were modeled for 50 fish species by use of conditional multinomial capture–recapture models. The best-approximating models indicated that capture efficiencies were generally low and differed among species groups based on family or genus. Efficiencies of single-pass electrofishing and seining combined with electrofishing were greatest for Catostomidae and lowest for Ictaluridae. Fish body length and stream habitat characteristics (mean cross-sectional area, wood density, mean current velocity, and turbidity) also were related to capture efficiency of both methods, but the effects differed among species groups. We estimated that, on average, 23% of fish left the unblocked sample units, but net movement varied among species. Our results suggest that (1) common warmwater stream fish sampling methods have low capture efficiency and (2) failure to adjust for incomplete capture may bias estimates of fish abundance. We suggest that managers minimize bias from incomplete capture by adjusting data for site- and species-specific capture efficiency and by choosing sampling gear that provide estimates with minimal bias and variance. Furthermore, if block nets are not used, we recommend that managers adjust the data based on unconditional capture efficiency.
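
    As a simplified illustration of how a two-sample experiment yields capture efficiency, the sketch below applies Chapman's bias-corrected version of the Lincoln-Petersen estimator to made-up catch numbers; the authors' conditional multinomial models are considerably richer.

```python
# Chapman's bias-corrected Lincoln-Petersen abundance estimate and the
# implied first-pass capture efficiency (illustrative numbers only).
def chapman_estimate(marked, caught, recaptured):
    """N_hat = (M+1)(C+1)/(R+1) - 1."""
    return (marked + 1) * (caught + 1) / (recaptured + 1) - 1

M, C, R = 45, 52, 18          # pass-1 marks, pass-2 catch, recaptures
N_hat = chapman_estimate(M, C, R)
print(f"abundance ~ {N_hat:.0f}")
print(f"pass-1 capture efficiency ~ {M / N_hat:.2f}")
```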

  3. Evaluation model of wind energy resources and utilization efficiency of wind farm

    NASA Astrophysics Data System (ADS)

    Ma, Jie

    2018-04-01

    Due to the large amount of abandoned wind energy at wind farms, the establishment of a wind farm evaluation model is particularly important for the future development of wind farms. In this essay, considering the wind farm's wind energy conditions, a Wind Energy Resource Model (WERM) and a Wind Energy Utilization Efficiency Model (WEUEM) are established to conduct a comprehensive assessment of the wind farm. The Wind Energy Resource Model (WERM) contains average wind speed, average wind power density, and turbulence intensity, which together assess the wind energy resource. The indicators were calculated using the model combined with actual measurement data from a wind farm, and the results are in line with the actual situation. The future development of the wind farm can be planned based on this result. Thus, the proposed wind farm assessment model has application value.
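
    The three WERM indicators have standard definitions and are straightforward to compute from a wind-speed time series. The sketch below uses synthetic hourly data in place of measurements; note that turbulence intensity is conventionally computed over short (e.g., 10-minute) windows, which is simplified here.

```python
# Standard wind-resource indicators from a wind-speed time series;
# synthetic data stand in for anemometer measurements.
import numpy as np

rng = np.random.default_rng(3)
v = rng.weibull(2.0, 8760) * 8.0     # synthetic hourly wind speeds (m/s)
rho = 1.225                          # air density (kg/m^3)

mean_speed = v.mean()
power_density = (0.5 * rho * v**3).mean()       # mean wind power density, W/m^2
turbulence_intensity = v.std() / mean_speed     # simplified: whole-series TI

print(f"mean speed: {mean_speed:.1f} m/s")
print(f"mean power density: {power_density:.0f} W/m^2")
print(f"turbulence intensity: {turbulence_intensity:.2f}")
```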

  4. A Game Theoretic Optimization Method for Energy Efficient Global Connectivity in Hybrid Wireless Sensor Networks

    PubMed Central

    Lee, JongHyup; Pak, Dohyun

    2016-01-01

    For practical deployment of wireless sensor networks (WSN), WSNs construct clusters, where a sensor node communicates with other nodes in its cluster and a cluster head supports connectivity between the sensor nodes and a sink node. In hybrid WSNs, cluster heads have cellular network interfaces for global connectivity. However, when WSNs are active and the load on the cellular networks is high, the optimal assignment of cluster heads to base stations becomes critical. Therefore, in this paper, we propose a game theoretic model to find the optimal assignment of base stations for hybrid WSNs. Since communication and energy costs differ among cellular systems, we devise two game models, for TDMA/FDMA and CDMA systems, employing power prices to adapt to the varying efficiency of recent wireless technologies. The proposed model is defined under the assumption of an ideal sensing field, but our evaluation shows that the proposed model is more adaptive and energy efficient than local selections. PMID:27589743

  5. Quantitative Modeling of Cerenkov Light Production Efficiency from Medical Radionuclides

    PubMed Central

    Beattie, Bradley J.; Thorek, Daniel L. J.; Schmidtlein, Charles R.; Pentlow, Keith S.; Humm, John L.; Hielscher, Andreas H.

    2012-01-01

    There has been recent and growing interest in applying Cerenkov radiation (CR) for biological applications. Knowledge of the production efficiency and other characteristics of the CR produced by various radionuclides would help in assessing the feasibility of proposed applications and guide the choice of radionuclides. To generate this information we developed models of CR production efficiency based on the Frank-Tamm equation and models of CR distribution based on Monte-Carlo simulations of photon and β particle transport. All models were validated against direct measurements using multiple radionuclides and then applied to a number of radionuclides commonly used in biomedical applications. We show that two radionuclides, Ac-225 and In-111, which have been reported to produce CR in water, do not in fact produce CR directly. We also propose a simple means of using this information to calibrate high sensitivity luminescence imaging systems and show evidence suggesting that this calibration may be more accurate than methods in routine current use. PMID:22363636
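
    The production-efficiency side of such a model starts from the Frank-Tamm relation. The sketch below evaluates the photon yield per unit path length for electrons in water over the 400-700 nm band; it is a back-of-the-envelope version, not the authors' full Monte-Carlo transport model.

```python
# Cerenkov photon yield per unit path length from the Frank-Tamm relation:
# dN/dx = 2*pi*alpha*z^2 * (1/lam1 - 1/lam2) * (1 - 1/(beta^2 n^2)),
# which is zero below the Cerenkov threshold beta*n <= 1.
import math

ALPHA = 1 / 137.036            # fine-structure constant

def cerenkov_yield(beta, n=1.33, lam1=400e-9, lam2=700e-9, z=1):
    """Photons emitted per metre of path in a medium of refractive index n."""
    if beta * n <= 1.0:
        return 0.0
    return (2 * math.pi * ALPHA * z**2
            * (1 / lam1 - 1 / lam2)
            * (1 - 1 / (beta**2 * n**2)))

# Electron kinetic energy (MeV) -> beta, with m_e c^2 = 0.511 MeV.
for e_kin in (0.2, 0.5, 1.0):
    gamma = 1 + e_kin / 0.511
    beta = math.sqrt(1 - 1 / gamma**2)
    print(f"E = {e_kin} MeV: {cerenkov_yield(beta):.0f} photons/m")
```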

  6. Efficient Band-to-Trap Tunneling Model Including Heterojunction Band Offset

    DOE PAGES

    Gao, Xujiao; Huang, Andy; Kerr, Bert

    2017-10-25

    In this paper, we present an efficient band-to-trap tunneling model based on the Schenk approach, in which an analytic density-of-states (DOS) model is developed based on the open boundary scattering method. The new model explicitly includes the effect of heterojunction band offset, in addition to the well-known field effect. Its analytic form enables straightforward implementation into TCAD device simulators. It is applicable to all one-dimensional potentials, which can be approximated to a good degree such that the approximated potentials lead to piecewise analytic wave functions with open boundary conditions. The model allows for simulating both the electric-field-enhanced and band-offset-enhanced carrier recombination due to the band-to-trap tunneling near the heterojunction in a heterojunction bipolar transistor (HBT). Simulation results of an InGaP/GaAs/GaAs NPN HBT show that the proposed model predicts significantly increased base currents, due to the hole-to-trap tunneling enhanced by the emitter-base junction band offset. Finally, the results compare favorably with experimental observation.

  7. Efficient Band-to-Trap Tunneling Model Including Heterojunction Band Offset

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Xujiao; Huang, Andy; Kerr, Bert

    In this paper, we present an efficient band-to-trap tunneling model based on the Schenk approach, in which an analytic density-of-states (DOS) model is developed based on the open boundary scattering method. The new model explicitly includes the effect of heterojunction band offset, in addition to the well-known field effect. Its analytic form enables straightforward implementation into TCAD device simulators. It is applicable to all one-dimensional potentials, which can be approximated to a good degree such that the approximated potentials lead to piecewise analytic wave functions with open boundary conditions. The model allows for simulating both the electric-field-enhanced and band-offset-enhanced carrier recombination due to the band-to-trap tunneling near the heterojunction in a heterojunction bipolar transistor (HBT). Simulation results of an InGaP/GaAs/GaAs NPN HBT show that the proposed model predicts significantly increased base currents, due to the hole-to-trap tunneling enhanced by the emitter-base junction band offset. Finally, the results compare favorably with experimental observation.

  8. Interactions Between Mineral Surfaces, Substrates, Enzymes, and Microbes Result in Hysteretic Temperature Sensitivities and Microbial Carbon Use Efficiencies and Weaker Predicted Carbon-Climate Feedbacks

    NASA Astrophysics Data System (ADS)

    Riley, W. J.; Tang, J.

    2014-12-01

    We hypothesize that the large observed variability in decomposition temperature sensitivity and carbon use efficiency arises from interactions between temperature, microbial biogeochemistry, and mineral surface sorptive reactions. To test this hypothesis, we developed a numerical model that integrates the Dynamic Energy Budget concept for microbial physiology, microbial trait-based community structure and competition, process-specific thermodynamically based temperature sensitivity, a non-linear mineral sorption isotherm, and enzyme dynamics. We show, because mineral surfaces interact with substrates, enzymes, and microbes, both temperature sensitivity and microbial carbon use efficiency are hysteretic and highly variable. Further, by mimicking the traditional approach to interpreting soil incubation observations, we demonstrate that the conventional labile and recalcitrant substrate characterization for temperature sensitivity is flawed. In a 4 K temperature perturbation experiment, our fully dynamic model predicted more variable but weaker carbon-climate feedbacks than did the static temperature sensitivity and carbon use efficiency model when forced with yearly, daily, and hourly variable temperatures. These results imply that current earth system models likely over-estimate the response of soil carbon stocks to global warming.

  9. Understanding Cirrus Ice Crystal Number Variability for Different Heterogeneous Ice Nucleation Spectra

    NASA Technical Reports Server (NTRS)

    Sullivan, Sylvia C.; Betancourt, Ricardo Morales; Barahona, Donifan; Nenes, Athanasios

    2016-01-01

    Along with minimizing parameter uncertainty, understanding the cause of temporal and spatial variability of the nucleated ice crystal number, Ni, is key to improving the representation of cirrus clouds in climate models. To this end, sensitivities of Ni to input variables like aerosol number and diameter provide valuable information about nucleation regime and efficiency for a given model formulation. Here we use the adjoint model of a cirrus formation parameterization (Barahona and Nenes, 2009b) to understand Ni variability for various ice-nucleating particle (INP) spectra. Inputs are generated with the Community Atmosphere Model version 5, and simulations are done with a theoretically derived spectrum, an empirical lab-based spectrum and two field-based empirical spectra that differ in the nucleation threshold for black carbon particles and in the active site density for dust. The magnitude and sign of Ni sensitivity to insoluble aerosol number can be directly linked to nucleation regime and efficiency of various INP. The lab-based spectrum calculates much higher INP efficiencies than field-based ones, which reveals a disparity in aerosol surface properties. Ni sensitivity to temperature tends to be low, due to the compensating effects of temperature on INP spectrum parameters; this low temperature sensitivity regime has been experimentally reported before but never deconstructed as done here.

  10. Modeling of human movement monitoring using Bluetooth Low Energy technology.

    PubMed

    Mokhtari, G; Zhang, Q; Karunanithi, M

    2015-01-01

    Bluetooth Low Energy (BLE) is a wireless communication technology which can be used to monitor human movements. In such a monitoring system, a BLE signal scanner scans the signal strength of BLE tags carried by people, to infer human movement patterns within its monitoring zone. However, to the best of our knowledge, one main aspect of this monitoring system that has not yet been thoroughly investigated in the literature is how to build a sound theoretical model, based on tunable BLE communication parameters such as the scanning time interval and the advertising time interval, to enable the study and design of effective and efficient movement monitoring systems. In this paper, we propose and develop a statistical model based on Monte-Carlo simulation, which can be utilized to assess the impact of BLE technology parameters, in terms of latency and efficiency, on a movement monitoring system, and can thus support a more efficient system design.
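
    A stripped-down, single-channel version of such a Monte-Carlo latency model fits in a few lines: an advertiser fires every advertising interval plus the 0-10 ms pseudo-random delay required by BLE, a scanner listens for a window out of every scan interval, and the latency is the time until an advertisement lands inside a listening window. The parameter values are illustrative, not the paper's settings.

```python
# Toy Monte-Carlo model of BLE neighbour discovery latency
# (single channel, scanner phase fixed at zero for simplicity).
import numpy as np

rng = np.random.default_rng(4)

def discovery_latency(adv_interval, scan_interval, scan_window, trials=5000):
    lat = []
    for _ in range(trials):
        t = rng.uniform(0, adv_interval)          # random initial phase
        while True:
            # BLE adds a 0-10 ms pseudo-random delay per advertising event.
            t += adv_interval + rng.uniform(0, 0.010)
            if t % scan_interval < scan_window:   # lands inside a window?
                lat.append(t)
                break
    return np.mean(lat)

m = discovery_latency(adv_interval=1.0, scan_interval=1.28, scan_window=0.3)
print(f"mean discovery latency ~ {m:.2f} s")
```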

  11. Efficient finite element modeling of radiation forces on elastic particles of arbitrary size and geometry.

    PubMed

    Glynne-Jones, Peter; Mishra, Puja P; Boltryk, Rosemary J; Hill, Martyn

    2013-04-01

    A finite element based method is presented for calculating the acoustic radiation force on arbitrarily shaped elastic and fluid particles. Importantly for future applications, this development will permit the modeling of acoustic forces on complex structures such as biological cells, and the interactions between them and other bodies. The model is based on a non-viscous approximation, allowing the results from an efficient, numerical, linear scattering model to provide the basis for the second-order forces. Simulation times are of the order of a few seconds for an axi-symmetric structure. The model is verified against a range of existing analytical solutions (typical accuracy better than 0.1%), including those for cylinders, elastic spheres that are of significant size compared to the acoustic wavelength, and spheroidal particles.

  12. Comparison of Predicted Thermoelectric Energy Conversion Efficiency by Cumulative Properties and Reduced Variables Approaches

    NASA Astrophysics Data System (ADS)

    Linker, Thomas M.; Lee, Glenn S.; Beekman, Matt

    2018-06-01

    The semi-analytical methods of thermoelectric energy conversion efficiency calculation based on the cumulative properties approach and reduced variables approach are compared for 21 high performance thermoelectric materials. Both approaches account for the temperature dependence of the material properties as well as the Thomson effect; thus the predicted conversion efficiencies are generally lower than those based on the conventional thermoelectric figure of merit ZT for nearly all of the materials evaluated. The two methods also predict material energy conversion efficiencies that are in very good agreement with each other, even for large temperature differences (average percent difference of 4% with maximum observed deviation of 11%). The tradeoff between obtaining a reliable assessment of a material's potential for thermoelectric applications and the complexity of implementation of the three models, as well as the advantages of using more accurate modeling approaches in evaluating new thermoelectric materials, are highlighted.

  13. Wildlife tradeoffs based on landscape models of habitat preference

    USGS Publications Warehouse

    Loehle, C.; Mitchell, M.S.; White, M.

    2000-01-01

    Wildlife tradeoffs based on landscape models of habitat preference were presented. Multiscale logistic regression models were used and based on these models a spatial optimization technique was utilized to generate optimal maps. The tradeoffs were analyzed by gradually increasing the weighting on a single species in the objective function over a series of simulations. Results indicated that efficiency of habitat management for species diversity could be maximized for small landscapes by incorporating spatial context.

  14. Multiscale finite element modeling of sheet molding compound (SMC) composite structure based on stochastic mesostructure reconstruction

    DOE PAGES

    Chen, Zhangxing; Huang, Tianyu; Shao, Yimin; ...

    2018-03-15

    Predicting the mechanical behavior of the chopped carbon fiber Sheet Molding Compound (SMC) due to spatial variations in local material properties is critical for the structural performance analysis but is computationally challenging. Such spatial variations are induced by the material flow in the compression molding process. In this work, a new multiscale SMC modeling framework and the associated computational techniques are developed to provide accurate and efficient predictions of SMC mechanical performance. The proposed multiscale modeling framework contains three modules. First, a stochastic algorithm for 3D chip-packing reconstruction is developed to efficiently generate the SMC mesoscale Representative Volume Element (RVE) model for Finite Element Analysis (FEA). A new fiber orientation tensor recovery function is embedded in the reconstruction algorithm to match reconstructions with the target characteristics of fiber orientation distribution. Second, a metamodeling module is established to improve the computational efficiency by creating the surrogates of mesoscale analyses. Third, the macroscale behaviors are predicted by an efficient multiscale model, in which the spatially varying material properties are obtained based on the local fiber orientation tensors. Our approach is further validated through experiments at both meso- and macro-scales, such as tensile tests assisted by Digital Image Correlation (DIC) and mesostructure imaging.

  15. Multiscale finite element modeling of sheet molding compound (SMC) composite structure based on stochastic mesostructure reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Zhangxing; Huang, Tianyu; Shao, Yimin

    Predicting the mechanical behavior of the chopped carbon fiber Sheet Molding Compound (SMC) due to spatial variations in local material properties is critical for the structural performance analysis but is computationally challenging. Such spatial variations are induced by the material flow in the compression molding process. In this work, a new multiscale SMC modeling framework and the associated computational techniques are developed to provide accurate and efficient predictions of SMC mechanical performance. The proposed multiscale modeling framework contains three modules. First, a stochastic algorithm for 3D chip-packing reconstruction is developed to efficiently generate the SMC mesoscale Representative Volume Element (RVE) model for Finite Element Analysis (FEA). A new fiber orientation tensor recovery function is embedded in the reconstruction algorithm to match reconstructions with the target characteristics of fiber orientation distribution. Second, a metamodeling module is established to improve the computational efficiency by creating the surrogates of mesoscale analyses. Third, the macroscale behaviors are predicted by an efficient multiscale model, in which the spatially varying material properties are obtained based on the local fiber orientation tensors. Our approach is further validated through experiments at both meso- and macro-scales, such as tensile tests assisted by Digital Image Correlation (DIC) and mesostructure imaging.

  16. Validation of abundance estimates from mark–recapture and removal techniques for rainbow trout captured by electrofishing in small streams

    USGS Publications Warehouse

    Rosenberger, Amanda E.; Dunham, Jason B.

    2005-01-01

    Estimation of fish abundance in streams using the removal model or the Lincoln-Peterson mark-recapture model is a common practice in fisheries. These models produce misleading results if their assumptions are violated. We evaluated the assumptions of these two models via electrofishing of rainbow trout Oncorhynchus mykiss in central Idaho streams. For one-, two-, three-, and four-pass sampling effort in closed sites, we evaluated the influences of fish size and habitat characteristics on sampling efficiency and the accuracy of removal abundance estimates. We also examined the use of models to generate unbiased estimates of fish abundance through adjustment of total catch or biased removal estimates. Our results suggested that the assumptions of the mark-recapture model were satisfied and that abundance estimates based on this approach were unbiased. In contrast, the removal model assumptions were not met. Decreasing sampling efficiencies over removal passes resulted in underestimated population sizes and overestimates of sampling efficiency. This bias decreased, but was not eliminated, with increased sampling effort. Biased removal estimates based on different levels of effort were highly correlated with each other but were less correlated with unbiased mark-recapture estimates. Stream size decreased sampling efficiency, and stream size and instream wood increased the negative bias of removal estimates. We found that reliable estimates of population abundance could be obtained from models of sampling efficiency for different levels of effort. Validation of abundance estimates requires extra attention to routine sampling considerations but can help fisheries biologists avoid pitfalls associated with biased data and facilitate standardized comparisons among studies that employ different sampling methods.
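
    The bias mechanism described here is easy to reproduce with the classic two-pass removal estimator: when capture efficiency declines between passes, the constant-efficiency estimator systematically underestimates abundance. A small simulation with made-up values:

```python
# Two-pass removal estimator N_hat = c1^2 / (c1 - c2), plus a simulation
# showing its downward bias when capture efficiency drops on pass 2.
import numpy as np

rng = np.random.default_rng(5)

def removal_estimate(c1, c2):
    """Two-pass removal abundance estimate; requires c1 > c2."""
    return c1**2 / (c1 - c2)

N, p1, p2 = 200, 0.4, 0.3        # true abundance; efficiency declines
est = []
for _ in range(2000):
    c1 = rng.binomial(N, p1)          # pass-1 catch
    c2 = rng.binomial(N - c1, p2)     # pass-2 catch from the remainder
    if c1 > c2:
        est.append(removal_estimate(c1, c2))
print(f"true N = {N}, mean removal estimate = {np.mean(est):.0f}")
```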

  17. Efficiency and Productivity of County-level Public Hospitals Based on the Data Envelopment Analysis Model and Malmquist Index in Anhui, China

    PubMed Central

    Li, Nian-Nian; Wang, Cun-Hui; Ni, Hong; Wang, Heng

    2017-01-01

    Background: China began to implement the national medical and health system and public hospital reforms in 2009 and 2012, respectively. Anhui Province is one of the four pilot provinces, and its medical reform measures received wide attention nationwide. The effectiveness of the above reform needs attention. This study aimed to assess the efficiency and productivity of county-level public hospitals based on the data envelopment analysis (DEA) model and Malmquist index in Anhui, China, and then provide improvement measures for future hospital development. Methods: We chose 12 county-level hospitals based on geographical distribution and the economic development level in Anhui Province. Relevant data that were collected in the field and then sorted were provided by the administrative departments of the hospitals. DEA models were used to calculate the dynamic efficiency and Malmquist index factors for the 12 institutions. Results: During 2010–2015, the overall average relative service efficiency of the 12 county-level public hospitals was 0.926, and the number of hospitals that achieved DEA effectiveness in each year from 2010 to 2015 was 4, 6, 7, 7, 6, and 8, respectively. During this same period, the average overall production efficiency was 0.983, and total factor productivity declined. The overall production efficiency of five hospitals was >1, and that of the rest was <1, between 2010 and 2015. Conclusions: In 2010–2015, the relative service efficiency of the 12 county-level public hospitals in Anhui Province showed a decreasing trend, and the service efficiency of each hospital changed. In the past 6 years, although some hospitals have been effective, the efficiency of the county-level public hospitals in Anhui Province has not improved significantly, and total factor productivity has not been effectively improved. County-level public hospitals need to consider their own circumstances to identify their deficiencies. PMID:29176142

  18. Efficiency and Productivity of County-level Public Hospitals Based on the Data Envelopment Analysis Model and Malmquist Index in Anhui, China.

    PubMed

    Li, Nian-Nian; Wang, Cun-Hui; Ni, Hong; Wang, Heng

    2017-12-05

    China began to implement the national medical and health system and public hospital reforms in 2009 and 2012, respectively. Anhui Province is one of the four pilot provinces, and its medical reform measures received wide attention nationwide. The effectiveness of the above reform needs attention. This study aimed to assess the efficiency and productivity of county-level public hospitals based on the data envelopment analysis (DEA) model and Malmquist index in Anhui, China, and then provide improvement measures for future hospital development. We chose 12 county-level hospitals based on geographical distribution and the economic development level in Anhui Province. Relevant data that were collected in the field and then sorted were provided by the administrative departments of the hospitals. DEA models were used to calculate the dynamic efficiency and Malmquist index factors for the 12 institutions. During 2010-2015, the overall average relative service efficiency of the 12 county-level public hospitals was 0.926, and the number of hospitals that achieved DEA effectiveness in each year from 2010 to 2015 was 4, 6, 7, 7, 6, and 8, respectively. During this same period, the average overall production efficiency was 0.983, and total factor productivity declined. The overall production efficiency of five hospitals was >1, and that of the rest was <1, between 2010 and 2015. In 2010-2015, the relative service efficiency of the 12 county-level public hospitals in Anhui Province showed a decreasing trend, and the service efficiency of each hospital changed. In the past 6 years, although some hospitals have been effective, the efficiency of the county-level public hospitals in Anhui Province has not improved significantly, and total factor productivity has not been effectively improved. County-level public hospitals need to consider their own circumstances to identify their deficiencies.

  19. Chitosan-based water-propelled micromotors with strong antibacterial activity.

    PubMed

    Delezuk, Jorge A M; Ramírez-Herrera, Doris E; Esteban-Fernández de Ávila, Berta; Wang, Joseph

    2017-02-09

    A rapid and efficient micromotor-based bacteria killing strategy is described. The new antibacterial approach couples the attractive antibacterial properties of chitosan with the efficient water-powered propulsion of magnesium (Mg) micromotors. These Janus micromotors consist of Mg microparticles coated with the biodegradable and biocompatible polymers poly(lactic-co-glycolic acid) (PLGA), alginate (Alg) and chitosan (Chi), with the latter responsible for the antibacterial properties of the micromotor. The distinct speed and efficiency advantages of the new micromotor-based, environmentally friendly antibacterial approach have been demonstrated in various control experiments by treating drinking water contaminated with model Escherichia coli (E. coli) bacteria. The new dynamic antibacterial strategy offers dramatic improvements in antibacterial efficiency compared to static chitosan-coated microparticles (e.g., a 27-fold enhancement), with a 96% killing efficiency within 10 min. Potential real-life applications of these chitosan-based micromotors for environmental remediation have been demonstrated by the efficient treatment of seawater and fresh water samples contaminated with unknown bacteria. Coupling the efficient water-driven propulsion of such biodegradable and biocompatible micromotors with the antibacterial properties of chitosan holds considerable promise for advanced antimicrobial water treatment operations.

  20. Radiative Transfer Modeling and Retrievals for Advanced Hyperspectral Sensors

    NASA Technical Reports Server (NTRS)

    Liu, Xu; Zhou, Daniel K.; Larar, Allen M.; Smith, William L., Sr.; Mango, Stephen A.

    2009-01-01

    A novel radiative transfer model and a physical inversion algorithm based on principal component analysis will be presented. Instead of dealing with channel radiances, the new approach fits principal component scores of these quantities. Compared to channel-based radiative transfer models, the new approach compresses radiances into a much smaller dimension making both forward modeling and inversion algorithm more efficient.
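
    The compression step at the heart of such an approach can be sketched with an off-the-shelf PCA: thousands of correlated channel radiances collapse to a handful of principal component scores, and fitting then happens in score space. The synthetic spectra below stand in for real hyperspectral radiances.

```python
# PCA compression of synthetic "channel radiances": 2000 channels -> 10
# principal component scores, the space in which fitting would proceed.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
n_spectra, n_channels = 500, 2000

# Synthetic smooth spectra: a few latent modes plus instrument-like noise.
modes = rng.normal(size=(5, n_channels)).cumsum(axis=1)
radiances = rng.normal(size=(n_spectra, 5)) @ modes
radiances += rng.normal(scale=0.1, size=radiances.shape)

pca = PCA(n_components=10).fit(radiances)
scores = pca.transform(radiances)
print("explained variance:", round(float(pca.explained_variance_ratio_.sum()), 4))
print("compressed shape:", scores.shape)
```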

  1. Use of empirical likelihood to calibrate auxiliary information in partly linear monotone regression models.

    PubMed

    Chen, Baojiang; Qin, Jing

    2014-05-10

    In statistical analysis, a regression model is needed if one is interested in finding the relationship between a response variable and covariates. When the response depends on a covariate, it may depend on an unknown function of this covariate. If one has no knowledge of this functional form but expects it to be monotonically increasing or decreasing, then the isotonic regression model is preferable. Estimation of parameters for isotonic regression models is based on the pool-adjacent-violators algorithm (PAVA), in which the monotonicity constraints are built in. With missing data, people often employ the augmented estimating method to improve estimation efficiency by incorporating auxiliary information through a working regression model. However, under the framework of the isotonic regression model, the PAVA does not work, as the monotonicity constraints are violated. In this paper, we develop an empirical likelihood-based method for the isotonic regression model to incorporate the auxiliary information. Because the monotonicity constraints still hold, the PAVA can be used for parameter estimation. Simulation studies demonstrate that the proposed method can yield more efficient estimates, and in some situations the efficiency improvement is substantial. We apply this method to a dementia study. Copyright © 2013 John Wiley & Sons, Ltd.
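
    For concreteness, a plain-NumPy implementation of the PAVA that the estimation rests on is sketched below: it produces the monotone (nondecreasing) least-squares fit by repeatedly pooling adjacent violating blocks.

```python
# Pool-adjacent-violators algorithm: nondecreasing least-squares fit.
import numpy as np

def pava(y, w=None):
    """Nondecreasing weighted least-squares fit to the sequence y."""
    y = np.asarray(y, float)
    w = np.ones_like(y) if w is None else np.asarray(w, float)
    level, weight, count = [], [], []
    for yi, wi in zip(y, w):
        level.append(yi); weight.append(wi); count.append(1)
        # Pool backwards while the monotonicity constraint is violated.
        while len(level) > 1 and level[-2] > level[-1]:
            tot = weight[-2] + weight[-1]
            lvl = (weight[-2] * level[-2] + weight[-1] * level[-1]) / tot
            level[-2:] = [lvl]; weight[-2:] = [tot]
            count[-2:] = [count[-2] + count[-1]]
    return np.repeat(level, count)

print(pava([1, 3, 2, 2, 5, 4]))   # ~ [1, 2.33, 2.33, 2.33, 4.5, 4.5]
```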

  2. Research on TCP/IP network communication based on Node.js

    NASA Astrophysics Data System (ADS)

    Huang, Jing; Cai, Lixiong

    2018-04-01

    In the face of big data, long-lived connections, and high concurrency, TCP/IP network communication can hit performance bottlenecks under a blocking, multi-threaded service model. This paper presents a method for TCP/IP network communication based on Node.js. On the basis of analyzing the characteristics of the Node.js architecture and its asynchronous, non-blocking I/O model, the source of its efficiency is discussed; the TCP/IP network communication model is then compared and analyzed to explain why the TCP/IP protocol stack is widely used in network communication. Finally, to handle the large data volumes and high concurrency arising in monitoring a large-scale grape-growing environment, a TCP server design based on Node.js is completed. The results show that the example runs stably and efficiently.
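
    The abstract's server is written in Node.js; to stay consistent with the other examples in this collection, the sketch below shows the same single-threaded, non-blocking event-loop service model using Python's asyncio as an analogue: many long-lived TCP connections are multiplexed without one thread per client. It is not the paper's implementation.

```python
# Event-loop TCP echo server: each await yields control back to the
# loop, so thousands of idle connections cost no threads.
import asyncio

async def handle(reader: asyncio.StreamReader, writer: asyncio.StreamWriter):
    while data := await reader.read(1024):   # non-blocking read
        writer.write(data)                   # echo the payload back
        await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(handle, "127.0.0.1", 8888)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```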

  3. Self-reconfigurable ship fluid-network modeling for simulation-based design

    NASA Astrophysics Data System (ADS)

    Moon, Kyungjin

    Our world is filled with large-scale engineering systems, which provide various services and conveniences in our daily life. A distinctive trend in the development of today's large-scale engineering systems is the extensive and aggressive adoption of automation and autonomy that enable the significant improvement of systems' robustness, efficiency, and performance, with considerably reduced manning and maintenance costs, and the U.S. Navy's DD(X), the next-generation destroyer program, is considered as an extreme example of such a trend. This thesis pursues a modeling solution for performing simulation-based analysis in the conceptual or preliminary design stage of an intelligent, self-reconfigurable ship fluid system, which is one of the concepts of DD(X) engineering plant development. Through the investigations on the Navy's approach for designing a more survivable ship system, it is found that the current naval simulation-based analysis environment is limited by the capability gaps in damage modeling, dynamic model reconfiguration, and simulation speed of the domain specific models, especially fluid network models. As enablers of filling these gaps, two essential elements were identified in the formulation of the modeling method. The first one is the graph-based topological modeling method, which will be employed for rapid model reconstruction and damage modeling, and the second one is the recurrent neural network-based, component-level surrogate modeling method, which will be used to improve the affordability and efficiency of the modeling and simulation (M&S) computations. The integration of the two methods can deliver computationally efficient, flexible, and automation-friendly M&S which will create an environment for more rigorous damage analysis and exploration of design alternatives. As a demonstration for evaluating the developed method, a simulation model of a notional ship fluid system was created, and a damage analysis was performed. Next, the models representing different design configurations of the fluid system were created, and damage analyses were performed with them in order to find an optimal design configuration for system survivability. Finally, the benefits and drawbacks of the developed method were discussed based on the result of the demonstration.
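
    The graph-based half of that formulation can be illustrated with a toy fluid network: represent components as nodes, knock out a damaged node, and check which service loads remain reachable from a supply source. The component names below are illustrative, and networkx stands in for the thesis's topological machinery.

```python
# Toy graph-based damage/reconfiguration analysis of a ship fluid network.
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("pump_A", "main_1"), ("pump_B", "main_2"),
    ("main_1", "xconnect"), ("main_2", "xconnect"),
    ("xconnect", "load_1"), ("main_2", "load_2"),
])

def serviceable(graph, damaged, sources=("pump_A", "pump_B")):
    """Loads still connected to at least one pump after removing nodes."""
    h = graph.copy()
    h.remove_nodes_from(damaged)
    live = set().union(*(nx.node_connected_component(h, s)
                         for s in sources if s in h))
    return sorted(n for n in live if n.startswith("load"))

print(serviceable(g, damaged=["main_2"]))   # load_1 survives via xconnect
```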

  4. Model checking for linear temporal logic: An efficient implementation

    NASA Technical Reports Server (NTRS)

    Sherman, Rivi; Pnueli, Amir

    1990-01-01

    This report provides evidence to support the claim that model checking for linear temporal logic (LTL) is practically efficient. Two implementations of a linear temporal logic model checker are described. One is based on transforming the model checking problem into a satisfiability problem; the other checks an LTL formula against a finite model by computing the cross-product of the finite state transition graph of the program with a structure containing all possible models of the property. An experiment was conducted with a set of mutual exclusion algorithms, testing safety and liveness under fairness for these algorithms.

  5. Fiber-coupling efficiency of Gaussian-Schell model beams through an ocean to fiber optical communication link

    NASA Astrophysics Data System (ADS)

    Hu, Beibei; Shi, Haifeng; Zhang, Yixin

    2018-06-01

    We theoretically study the fiber-coupling efficiency of Gaussian-Schell model beams propagating through oceanic turbulence. The expression for the fiber-coupling efficiency is derived based on the spatial power spectrum of oceanic turbulence and the cross-spectral density function. Our work shows that salinity fluctuations have a greater impact on the fiber-coupling efficiency than temperature fluctuations do. Selecting a longer wavelength λ in the "ocean window" and a light source of higher spatial coherence improves the fiber-coupling efficiency of the communication link, and the maximum fiber-coupling efficiency can be achieved by choosing the design parameter according to the specific oceanic turbulence conditions. Our results can aid the design of ocean-to-fiber optical communication links.

  6. Spin-neurons: A possible path to energy-efficient neuromorphic computers

    NASA Astrophysics Data System (ADS)

    Sharad, Mrigank; Fan, Deliang; Roy, Kaushik

    2013-12-01

    Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing strongly depends on how closely their physical characteristics match the essential computing primitives employed in such models. In this work, we discuss the rationale of applying emerging spin-torque devices to bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and "thresholding" operation of an artificial neuron with high energy efficiency. Comparison with a CMOS-based analog circuit model of a neuron shows that "spin-neurons" (spin-based circuit models of neurons) can achieve more than two orders of magnitude lower energy and over three orders of magnitude reduction in energy-delay product. The application of spin-neurons can therefore be an attractive option for the neuromorphic computers of the future.

  7. Spin-neurons: A possible path to energy-efficient neuromorphic computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharad, Mrigank; Fan, Deliang; Roy, Kaushik

    Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing strongly depends on how closely their physical characteristics match the essential computing primitives employed in such models. In this work, we discuss the rationale of applying emerging spin-torque devices to bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and "thresholding" operation of an artificial neuron with high energy efficiency. Comparison with a CMOS-based analog circuit model of a neuron shows that "spin-neurons" (spin-based circuit models of neurons) can achieve more than two orders of magnitude lower energy and over three orders of magnitude reduction in energy-delay product. The application of spin-neurons can therefore be an attractive option for the neuromorphic computers of the future.

  8. An efficient adaptive sampling strategy for global surrogate modeling with applications in multiphase flow simulation

    NASA Astrophysics Data System (ADS)

    Mo, S.; Lu, D.; Shi, X.; Zhang, G.; Ye, M.; Wu, J.

    2016-12-01

    Surrogate models have shown remarkable computational efficiency in hydrological simulations involving design space exploration, sensitivity analysis, uncertainty quantification, etc. The central task in constructing a global surrogate model is to achieve a prescribed approximation accuracy with as few original model executions as possible, which requires a good design strategy to optimize the distribution of data points in the parameter domains and an effective stopping criterion to automatically terminate the design process once the desired approximation accuracy is achieved. This study proposes a novel adaptive sampling strategy, which starts from a small number of initial samples and adaptively selects additional samples by balancing collection in unexplored regions against refinement in interesting areas. We define an efficient and effective evaluation metric based on Taylor expansion to select the most promising potential samples from candidate points, and propose a robust stopping criterion based on the approximation accuracy at new points to guarantee that the desired accuracy is achieved. Numerical results for several benchmark analytical functions indicate that the proposed approach is more computationally efficient and robust than the widely used maximin distance design and two other well-known adaptive sampling strategies. The application to two complicated multiphase flow problems further demonstrates the efficiency and effectiveness of our method in constructing global surrogate models for high-dimensional and highly nonlinear problems. Acknowledgements: This work was financially supported by the National Nature Science Foundation of China grants No. 41030746 and 41172206.
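
    As a rough illustration of such an explore/refine loop (not the paper's Taylor-expansion metric or its stopping rule), candidate points can be scored by their distance to existing samples times the disagreement of two half-data surrogates:

        import numpy as np
        from scipy.spatial.distance import cdist
        from scipy.interpolate import RBFInterpolator

        def adaptive_design(f, bounds, n_init=8, n_add=25, n_cand=400, seed=0):
            """Grow a design by balancing exploration (distance to existing
            samples) against refinement (disagreement of two surrogates fit
            on random halves of the data)."""
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(bounds, float).T
            X = rng.uniform(lo, hi, size=(n_init, len(lo)))
            y = np.array([f(x) for x in X])
            for _ in range(n_add):
                half = rng.permutation(len(X))
                s1 = RBFInterpolator(X[half[: len(X) // 2]], y[half[: len(X) // 2]])
                s2 = RBFInterpolator(X[half[len(X) // 2 :]], y[half[len(X) // 2 :]])
                cand = rng.uniform(lo, hi, size=(n_cand, len(lo)))
                explore = cdist(cand, X).min(axis=1)
                refine = np.abs(s1(cand) - s2(cand))
                x_new = cand[np.argmax(explore * (1e-12 + refine))]
                X = np.vstack([X, x_new])
                y = np.append(y, f(x_new))
            return X, y

        # Example: build a design for a cheap 2-D test function.
        X, y = adaptive_design(lambda x: np.sin(3 * x[0]) * np.cos(2 * x[1]),
                               bounds=[(0, 1), (0, 1)])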

  9. Mathematical modeling of photovoltaic thermal PV/T system with v-groove collector

    NASA Astrophysics Data System (ADS)

    Zohri, M.; Fudholi, A.; Ruslan, M. H.; Sopian, K.

    2017-07-01

    Published studies report that v-groove solar collectors achieve higher thermal efficiency, and lowering the operating temperature of a photovoltaic panel raises its electrical efficiency. A photovoltaic thermal (PV/T) system produces electrical and thermal output concurrently. Mathematical modeling based on a steady-state thermal analysis of a PV/T system with a v-groove collector was conducted; the energy balance equations are solved analytically using a matrix inversion method. The comparison shows that the PV/T system with the v-groove collector attains higher temperature, thermal efficiency, and electrical efficiency than systems with other collectors.
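
    The matrix inversion step amounts to collecting the steady-state energy-balance equations into a linear system A·T = b and solving for the component temperatures. A toy two-node version (PV plate and channel air, with invented coefficients, not the paper's model):

        import numpy as np

        h_ca = 25.0    # plate-to-air convective coefficient, W/m^2/K (assumed)
        h_amb = 10.0   # loss coefficient to ambient, W/m^2/K (assumed)
        S = 800.0      # absorbed solar flux, W/m^2 (assumed)
        T_amb = 300.0  # ambient temperature, K

        # Node balances: S = h_ca*(T1 - T2) + h_amb*(T1 - T_amb)
        #                h_ca*(T1 - T2) = h_amb*(T2 - T_amb)
        A = np.array([[h_ca + h_amb, -h_ca],
                      [-h_ca, h_ca + h_amb]])
        b = np.array([S + h_amb * T_amb, h_amb * T_amb])
        T_plate, T_air = np.linalg.solve(A, b)
        print(T_plate, T_air)  # component temperatures, K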

  10. Translucent Radiosity: Efficiently Combining Diffuse Inter-Reflection and Subsurface Scattering.

    PubMed

    Sheng, Yu; Shi, Yulong; Wang, Lili; Narasimhan, Srinivasa G

    2014-07-01

    It is hard to efficiently model the light transport in scenes with translucent objects for interactive applications. The inter-reflection between objects and their environments and the subsurface scattering through the materials intertwine to produce visual effects like color bleeding, light glows, and soft shading. Monte-Carlo based approaches have demonstrated impressive results but are computationally expensive, and faster approaches model either only inter-reflection or only subsurface scattering. In this paper, we present a simple analytic model that combines diffuse inter-reflection and isotropic subsurface scattering. Our approach extends the classical work in radiosity by including a subsurface scattering matrix that operates in conjunction with the traditional form factor matrix. This subsurface scattering matrix can be constructed using analytic, measurement-based or simulation-based models and can capture both homogeneous and heterogeneous translucencies. Using a fast iterative solution to radiosity, we demonstrate scene relighting and dynamically varying object translucencies at near interactive rates.

  11. Spatio-Temporal Convergence of Maximum Daily Light-Use Efficiency Based on Radiation Absorption by Canopy Chlorophyll

    NASA Astrophysics Data System (ADS)

    Zhang, Yao; Xiao, Xiangming; Wolf, Sebastian; Wu, Jin; Wu, Xiaocui; Gioli, Beniamino; Wohlfahrt, Georg; Cescatti, Alessandro; van der Tol, Christiaan; Zhou, Sha; Gough, Christopher M.; Gentine, Pierre; Zhang, Yongguang; Steinbrecher, Rainer; Ardö, Jonas

    2018-04-01

    Light-use efficiency (LUE), which quantifies the plants' efficiency in utilizing solar radiation for photosynthetic carbon fixation, is an important factor for gross primary production estimation. Here we use satellite-based solar-induced chlorophyll fluorescence as a proxy for photosynthetically active radiation absorbed by chlorophyll (APARchl) and derive an estimation of the fraction of APARchl (fPARchl) from four remotely sensed vegetation indicators. By comparing maximum LUE estimated at different scales from 127 eddy flux sites, we found that the maximum daily LUE based on PAR absorption by canopy chlorophyll (ɛmaxchl), unlike other expressions of LUE, tends to converge across biome types. The photosynthetic seasonality in tropical forests can also be tracked by the change of fPARchl, suggesting the corresponding ɛmaxchl to have less seasonal variation. This spatio-temporal convergence of LUE derived from fPARchl can be used to build simple but robust gross primary production models and to better constrain process-based models.

  12. Does the covariance structure matter in longitudinal modelling for the prediction of future CD4 counts?

    PubMed

    Taylor, J M; Law, N

    1998-10-30

    We investigate the importance of the assumed covariance structure for longitudinal modelling of CD4 counts, and examine how individual predictions of future CD4 counts are affected by it. We consider four covariance structures: one based on an integrated Ornstein-Uhlenbeck stochastic process, one based on Brownian motion, and two derived from standard linear and quadratic random-effects models. Using data from the Multicenter AIDS Cohort Study and from a simulation study, we show that there is a noticeable deterioration in the coverage rate of confidence intervals if we assume the wrong covariance; there is also a loss in efficiency. The quadratic random-effects model is found to be the best in terms of correctly calibrated prediction intervals, but is substantially less efficient than the others. Incorrectly specifying the covariance structure as linear random effects gives prediction intervals that are too narrow, with poor coverage rates. The model based on the integrated Ornstein-Uhlenbeck stochastic process is the preferred one of the four considered because of its efficiency and robustness properties. We also use the difference between the future predicted and observed CD4 counts to assess an appropriate transformation of CD4 counts; a fourth root, cube root and square root all appear reasonable choices.

  13. Using the nonlinear aquifer storage-discharge relationship to simulate the base flow of glacier- and snowmelt-dominated basins in northwest China

    NASA Astrophysics Data System (ADS)

    Gan, R.; Luo, Y.

    2013-09-01

    Base flow is an important component in hydrological modeling. This process is usually modeled using a linear aquifer storage-discharge relation, although the outflow from groundwater aquifers is nonlinear. To assess the accuracy of base flow estimates in rivers dominated by snowmelt and/or glacier melt in arid and cold northwestern China, a nonlinear storage-discharge relationship for use in SWAT (Soil and Water Assessment Tool) modeling was developed and applied to the Manas River basin in the Tian Shan Mountains. Linear reservoir models and a digital filter program were used for comparison, and numerical analysis of recession curves from 78 river gauge stations revealed the variation in the parameters of the nonlinear relationship. The nonlinear reservoir model was found to improve the streamflow simulation, especially during low-flow periods: higher Nash-Sutcliffe efficiency, logarithmic efficiency, and volumetric efficiency, and lower percent bias, were obtained relative to the single linear reservoir approach. The parameter b of the aquifer storage-discharge function varied mostly between 0.0 and 0.1, much smaller than the suggested value of 0.5, while the coefficient a of the function is related to catchment properties, primarily the basin and glacier areas.
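
    Writing the storage-discharge function in the common nonlinear form S = aQ^b (matching the parameters a and b discussed above), base flow can be simulated by stepping the storage explicitly; the values below are illustrative, not the Manas River calibration:

        import numpy as np

        def simulate_baseflow(recharge, a=100.0, b=0.1, S0=95.0, dt=1.0):
            """Base flow from a nonlinear reservoir with S = a * Q**b."""
            S, out = S0, []
            for R in recharge:
                Q = (S / a) ** (1.0 / b)  # invert S = a * Q**b
                Q = min(Q, S / dt)        # cannot release more than is stored
                S = S + (R - Q) * dt
                out.append(Q)
            return np.array(out)

        # Five days of recharge followed by a recession period.
        flow = simulate_baseflow(np.r_[np.full(5, 3.0), np.zeros(25)])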

  14. Predicting the performance uncertainty of a 1-MW pilot-scale carbon capture system after hierarchical laboratory-scale calibration and validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Zhijie; Lai, Canhai; Marcy, Peter William

    2017-05-01

    A challenging problem in designing pilot-scale carbon capture systems is to predict, with uncertainty, the adsorber performance and capture efficiency under various operating conditions where no direct experimental data exist. Motivated by this challenge, we previously proposed a hierarchical framework in which relevant parameters of physical models were sequentially calibrated from different laboratory-scale carbon capture unit (C2U) experiments. Specifically, three models of increasing complexity were identified based on the fundamental physical and chemical processes of the sorbent-based carbon capture technology. Results from the corresponding laboratory experiments were used to statistically calibrate the physical model parameters while quantifying some of their inherent uncertainty. The parameter distributions obtained from laboratory-scale C2U calibration runs are used in this study to facilitate prediction at a larger scale where no corresponding experimental results are available. In this paper, we first describe the multiphase reactive flow model for a sorbent-based 1-MW carbon capture system and then analyze results from an ensemble of simulations with the upscaled model. The simulation results are used to quantify uncertainty regarding the design's predicted efficiency in carbon capture. In particular, we determine the minimum gas flow rate necessary to achieve 90% capture efficiency with 95% confidence.

  15. A one-model approach based on relaxed combinations of inputs for evaluating input congestion in DEA

    NASA Astrophysics Data System (ADS)

    Khodabakhshi, Mohammad

    2009-08-01

    This paper provides a one-model approach to input congestion based on the input relaxation model developed in data envelopment analysis (e.g., [G.R. Jahanshahloo, M. Khodabakhshi, Suitable combination of inputs for improving outputs in DEA with determining input congestion -- Considering textile industry of China, Applied Mathematics and Computation (1) (2004) 263-273; G.R. Jahanshahloo, M. Khodabakhshi, Determining assurance interval for non-Archimedean element in the improving outputs model in DEA, Applied Mathematics and Computation 151 (2) (2004) 501-506; M. Khodabakhshi, A super-efficiency model based on improved outputs in data envelopment analysis, Applied Mathematics and Computation 184 (2) (2007) 695-703; M. Khodabakhshi, M. Asgharian, An input relaxation measure of efficiency in stochastic data envelopment analysis, Applied Mathematical Modelling 33 (2009) 2010-2023]). This approach reduces the three problems that must be solved under the two-model approach introduced in the first of the above-mentioned references to two, which is certainly important from a computational point of view. The model is applied to a set of data extracted from the ISI database to estimate the input congestion of 12 Canadian business schools.
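
    Congestion measures of this kind are layered on top of a basic DEA solve. For orientation, the standard input-oriented CCR envelopment program for one decision-making unit can be posed as a linear program (a baseline sketch, not the paper's input relaxation model):

        import numpy as np
        from scipy.optimize import linprog

        def ccr_input_efficiency(X, Y, k):
            """Input-oriented CCR efficiency of DMU k.
            X: (m, n) inputs, Y: (s, n) outputs; columns index the DMUs.
            Solves: min theta  s.t.  X@lam <= theta*x_k,  Y@lam >= y_k,  lam >= 0."""
            m, n = X.shape
            s = Y.shape[0]
            c = np.r_[1.0, np.zeros(n)]                 # minimize theta
            A_in = np.hstack([-X[:, [k]], X])           # X@lam - theta*x_k <= 0
            A_out = np.hstack([np.zeros((s, 1)), -Y])   # -Y@lam <= -y_k
            res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                          b_ub=np.r_[np.zeros(m), -Y[:, k]],
                          bounds=[(None, None)] + [(0, None)] * n)
            return res.x[0]

        X = np.array([[2.0, 4.0, 8.0], [3.0, 1.0, 2.0]])  # two inputs, three DMUs
        Y = np.array([[1.0, 1.0, 1.0]])                   # one output
        print([round(ccr_input_efficiency(X, Y, k), 3) for k in range(3)])  # [1.0, 1.0, 0.5]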

  16. Research on the influencing factors of financing efficiency of big data industry based on panel data model--Empirical evidence from Guizhou province

    NASA Astrophysics Data System (ADS)

    Li, Chenggang; Feng, Yujia

    2018-03-01

    This paper studies the factors influencing the financing efficiency of the big data industry in Guizhou, using financial and macro data of 20 Guizhou big data enterprises from 2010 to 2016. A DEA model is used to obtain the financing efficiency of the enterprises, and a panel data model is constructed over six macro- and micro-level influencing factors. The results show that the external economic environment, the turnover rate of total assets, the growth of operating income, and the growth of revenue per share all have a positive impact on the financing efficiency of the big data industry in Guizhou; improving these factors is therefore key to raising the financing efficiency of Guizhou big data enterprises.

  17. Analysis of regional total factor energy efficiency in China under environmental constraints: based on Undesirable-MinDS and DEA window models

    NASA Astrophysics Data System (ADS)

    Zhang, Shuying; Li, Deshan; Li, Shuangqiang; Jiang, Hanyu; Shen, Yuqing

    2017-06-01

    With China's entrance into the new economy, the improvement of energy efficiency has become an important indicator of the quality of ecological civilization construction and economic development. Based on panel data for Chinese regions over 1996-2014, the Undesirable-MinDS model (which uses the nearest distance to the efficient frontier) and the DEA window model are used to calculate the total factor energy efficiency of China's regions. The study found that, under environmental constraints, China's total factor energy efficiency over 1996-2014 first declined and then increased. The differences between regions are very large: efficiency is highest in the east, lower in the west, and lowest in the central region. Finally, the paper puts forward relevant policy suggestions.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Hyeokjin; Chen, Hua; Maksimovic, Dragan

    An experimental 30 kW boost composite converter is described in this paper. The composite converter architecture, which consists of a buck module, a boost module, and a dual active bridge module operating as a DC transformer (DCX), leads to substantial reductions in losses at partial power points and to significant improvements in weighted efficiency in applications that require wide variations in power and conversion ratio. A comprehensive loss model is developed, accounting for semiconductor conduction and switching losses, capacitor losses, and DC and AC losses in magnetic components. Based on the developed loss model, the module and system designs are optimized to maximize efficiency at the 50% power point. Experimental results for the 30 kW prototype demonstrate 98.5% peak efficiency, very high efficiency over wide ranges of power and voltage conversion ratios, and excellent agreement between model predictions and measured efficiency curves.
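
    A loss model of the kind described, with fixed, conduction (proportional to I^2), and switching terms, lets the efficiency curve and a weighted-efficiency figure be computed directly. The coefficients and profile weights below are invented for illustration:

        import numpy as np

        def efficiency(P_out, V=400.0, k_fixed=30.0, k_cond=0.05, k_sw=1e-3):
            """Generic converter loss model: fixed + conduction (~I^2)
            + switching (~V*I) losses; coefficients are assumed."""
            I = P_out / V
            P_loss = k_fixed + k_cond * I**2 + k_sw * V * I
            return P_out / (P_out + P_loss)

        # Weighted efficiency over an assumed profile of partial-power points.
        powers = np.array([3e3, 7.5e3, 15e3, 22.5e3, 30e3])
        weights = np.array([0.10, 0.30, 0.35, 0.20, 0.05])
        eta = efficiency(powers)
        print(eta.round(4), np.average(eta, weights=weights).round(4))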

  19. Efficient Bayesian parameter estimation with implicit sampling and surrogate modeling for a vadose zone hydrological problem

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Pau, G. S. H.; Finsterle, S.

    2015-12-01

    Parameter inversion involves inferring the model parameter values based on sparse observations of some observables. To infer the posterior probability distributions of the parameters, Markov chain Monte Carlo (MCMC) methods are typically used. However, the large number of forward simulations needed and limited computational resources limit the complexity of the hydrological model we can use in these methods. In view of this, we studied the implicit sampling (IS) method, an efficient importance sampling technique that generates samples in the high-probability region of the posterior distribution and thus reduces the number of forward simulations that we need to run. For a pilot-point inversion of a heterogeneous permeability field based on a synthetic ponded infiltration experiment simulated with TOUGH2 (a subsurface modeling code), we showed that IS with a linear map provides an accurate Bayesian description of the parameterized permeability field at the pilot points with just approximately 500 forward simulations. We further studied the use of surrogate models to improve the computational efficiency of parameter inversion. We implemented two reduced-order models (ROMs) for the TOUGH2 forward model. One is based on polynomial chaos expansion (PCE), whose coefficients are obtained using the sparse Bayesian learning technique to mitigate the "curse of dimensionality" of the PCE terms. The other is Gaussian process regression (GPR), for which different covariance, likelihood and inference models are considered. Preliminary results indicate that ROMs constructed over the prior parameter space perform poorly; it is thus impractical to replace the hydrological model with a ROM directly in an MCMC method. However, the IS method can work with a ROM constructed for parameters in the close vicinity of the maximum a posteriori probability (MAP) estimate. We discuss the accuracy and computational efficiency of using ROMs in the implicit sampling procedure for the hydrological problem considered. This work was supported, in part, by the U.S. Dept. of Energy under Contract No. DE-AC02-05CH11231.
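
    Of the two reduced-order models mentioned, the Gaussian process surrogate is straightforward to reproduce with scikit-learn; the kernel choice and training data below are placeholders rather than the TOUGH2 setup:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        rng = np.random.default_rng(1)
        X_train = rng.uniform(-2, 2, size=(40, 3))  # stand-in parameter samples
        y_train = np.sin(X_train).sum(axis=1)       # stand-in forward-model output

        kernel = ConstantKernel(1.0) * RBF(length_scale=[1.0, 1.0, 1.0])
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gp.fit(X_train, y_train)

        # The surrogate returns a prediction and an uncertainty estimate, which
        # is what makes it usable inside an importance-sampling loop.
        mean, std = gp.predict(rng.uniform(-2, 2, size=(5, 3)), return_std=True)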

  20. ADAM: Analysis of Discrete Models of Biological Systems Using Computer Algebra

    PubMed Central

    2011-01-01

    Background Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. Results We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Conclusions Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics. PMID:21774817
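
    For small synchronous Boolean networks, the attractors that ADAM finds algebraically can also be found by brute force over the 2^n state space, which makes the concept concrete (a toy three-node network, not ADAM's polynomial-algebra method):

        from itertools import product

        # Each rule maps the full state to one node's next value.
        rules = [
            lambda s: s[1] and not s[2],
            lambda s: s[0],
            lambda s: s[0] or s[1],
        ]

        def step(state):
            return tuple(int(r(state)) for r in rules)

        def attractors(n):
            found = set()
            for start in product((0, 1), repeat=n):
                seen, s = {}, start
                while s not in seen:      # iterate until the trajectory repeats
                    seen[s] = len(seen)
                    s = step(s)
                first = seen[s]           # index where the cycle begins
                cycle = tuple(sorted(k for k, v in seen.items() if v >= first))
                found.add(cycle)
            return found

        print(attractors(3))  # fixed points and limit cycles of the toy network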

  1. Impact of the Local Public Hospital Reform on the Efficiency of Medium-Sized Hospitals in Japan: An Improved Slacks-Based Measure Data Envelopment Analysis Approach.

    PubMed

    Zhang, Xing; Tone, Kaoru; Lu, Yingzhe

    2018-04-01

    To assess the change in efficiency and total factor productivity (TFP) of the local public hospitals in Japan after the local public hospital reform launched in late 2007, which was aimed at improving the financial capability and operational efficiency of hospitals. Secondary data were collected from the Ministry of Internal Affairs and Communications on 213 eligible medium-sized hospitals, each operating 100-400 beds from FY2006 to FY2011. The improved slacks-based measure nonoriented data envelopment analysis models (Quasi-Max SBM nonoriented DEA models) were used to estimate dynamic efficiency score and Malmquist Index. The dynamic efficiency measure indicated an efficiency gain in the first several years of the reform and then was followed by a decrease. Malmquist Index analysis showed a significant decline in the TFP between 2006 and 2011. The financial improvement of medium-sized hospitals was not associated with enhancement of efficiency. Hospital efficiency was not significantly different among ownership structure and law-application system groups, but it was significantly affected by hospital location. The results indicate a need for region-tailored health care policies and for a more comprehensive reform to overcome the systemic constraints that might contribute to the decline of the TFP. © Health Research and Educational Trust.

  2. Physico-chemical processes for landfill leachate treatment: Experiments and mathematical models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xing, W.; Ngo, H.H.; Kim, S.H.

    2008-07-01

    In this study, the adsorption of synthetic landfill leachate onto four kinds of activated carbon was investigated. The equilibrium and kinetics experiments showed that coal-based PAC gave the highest organic pollutant removal efficiency (54%), followed by coal-based GAC (50%), wood-based GAC (33%) and wood-based PAC (14%). The adsorption equilibrium of PAC and GAC was successfully predicted by the Henry-Freundlich adsorption model, while the LDFA + dual isotherm kinetics model described the batch adsorption kinetics well. Flocculation and flocculation-adsorption experiments were also conducted. The results indicated that flocculation did not perform well for organics removal because of the dominance of low-molecular-weight organic compounds in synthetic landfill leachate. Consequently, neither flocculation as a pretreatment to adsorption nor a combined flocculation-adsorption process could much improve the organic removal efficiency over the single adsorption process.
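
    Fitting an isotherm of the Henry-Freundlich form q = kH*C + kF*C^(1/n) to batch equilibrium data is a short exercise with scipy; the data points below are invented, and the exact functional form used in the paper may differ:

        import numpy as np
        from scipy.optimize import curve_fit

        def henry_freundlich(C, kH, kF, n):
            """Uptake q: a linear Henry term plus a Freundlich term."""
            return kH * C + kF * C ** (1.0 / n)

        # Hypothetical equilibrium data: concentration C (mg/L), uptake q (mg/g).
        C = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])
        q = np.array([12.0, 20.0, 37.0, 55.0, 82.0, 120.0])
        popt, _ = curve_fit(henry_freundlich, C, q, p0=[0.1, 5.0, 2.0],
                            bounds=([0.0, 0.0, 0.5], [10.0, 50.0, 10.0]))
        print(dict(zip(["kH", "kF", "n"], popt)))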

  3. Fast and efficient indexing approach for object recognition

    NASA Astrophysics Data System (ADS)

    Hefnawy, Alaa; Mashali, Samia A.; Rashwan, Mohsen; Fikri, Magdi

    1999-08-01

    This paper introduces a fast and efficient indexing approach for both 2D and 3D model-based object recognition in the presence of rotation, translation, and scale variations of objects. The indexing entries are computed after preprocessing the data by Haar wavelet decomposition. The scheme rests on a unified image feature detection approach based on Zernike moments. A set of low-level features, e.g., high-precision edges and gray-level corners, is estimated by a set of orthogonal Zernike moments calculated locally around every image point. High-dimensional, highly descriptive indexing entries are then calculated based on the correlation of these local features and employed for fast access to the model database to generate hypotheses. A list of the most likely candidate models is then produced by evaluating the hypotheses. Experimental results are included to demonstrate the effectiveness of the proposed indexing approach.

  4. Unified Model for the Overall Efficiency of Inlets Sampling from Horizontal Aerosol Flows

    NASA Astrophysics Data System (ADS)

    Hangal, Sunil Pralhad

    When sampling aerosols from ambient or industrial air environments, the sampled aerosol must be representative of the aerosol in the free stream. The changes that occur during sampling must be assessed quantitatively so that sampling errors can be compensated for. In this study, unified models have been developed for the overall efficiency of tubular sharp-edged inlets sampling from horizontal aerosol flows oriented at 0° to 90° relative to the wind direction in the vertical plane (pitch) and the horizontal plane (yaw). In the unified model, based on experimental data, the aspiration efficiency is represented by a single equation with different inertial parameters at 0° to 60° and 45° to 90°. The transmission efficiency is separated into two components: one due to gravitational settling in the boundary layer and the other due to impaction. The gravitational settling component is determined by extending a previously developed isoaxial sampling model to nonisoaxial sampling. The impaction component is determined by a new model that quantifies the particle losses caused by wall impaction. The model also quantifies the additional particle losses resulting from turbulent motion in the vena contracta, which forms in the inlet when the inlet velocity is higher than the wind velocity. When sampling aerosols in ambient or industrial environments with an inlet, small changes in wind direction, or physical constraints in positioning the inlet in the system, necessitate assessing the sampling efficiency in both the vertical and horizontal planes. The overall sampling efficiency of tubular inlets has been experimentally investigated in yaw and pitch orientations at 0° to 20° from horizontal aerosol flows using a wind tunnel facility. The model for overall sampling efficiency has been extended to include both yaw and pitch sampling based on the new data. In this model, the difference between yaw and pitch is expressed through the effect of gravity on the impaction process inside the inlet, described by a newly developed gravity-effect angle. At yaw, the gravity-effect angle on the wall impaction process does not change with sampling angle. At pitch, the gravity effect on the impaction process increases particle losses for upward sampling and decreases them for downward sampling. Using the unified model, graphical representations have been developed for sampling at small angles. These can be used in the field to determine the overall sampling efficiency of inlets at several operating conditions and to identify the operating conditions that yield an acceptable sampling error. Pitch and diameter factors have been introduced for relating the efficiency values over a wide range of conditions to those of a reference condition. The pitch factor determines the overall sampling efficiency at pitch from yaw values, and the diameter factor determines the overall sampling efficiency at different inlet diameters.

  5. A comparison of DEA and SFA using micro- and macro-level perspectives: Efficiency of Chinese local banks

    NASA Astrophysics Data System (ADS)

    Silva, Thiago Christiano; Tabak, Benjamin Miranda; Cajueiro, Daniel Oliveira; Dias, Marina Villas Boas

    2017-03-01

    This study investigates to what extent results produced by a single frontier model are reliable, based on the application of data envelopment analysis and the stochastic frontier approach to a sample of Chinese local banks. Our findings show that the two methods produce a consistent trend in global efficiency scores over the years. However, rank correlations indicate that they diverge with respect to individual performance diagnoses. These models therefore provide steady information on the efficiency of the banking system as a whole, but become divergent at the individual level.

  6. A highly efficient approach to protein interactome mapping based on collaborative filtering framework.

    PubMed

    Luo, Xin; You, Zhuhong; Zhou, Mengchu; Li, Shuai; Leung, Hareton; Xia, Yunni; Zhu, Qingsheng

    2015-01-09

    The comprehensive mapping of protein-protein interactions (PPIs) is highly desired for one to gain deep insights into both fundamental cell biology processes and the pathology of diseases. Finely set small-scale experiments are not only very expensive but also inefficient for identifying numerous interactomes, despite their high accuracy. High-throughput screening techniques enable efficient identification of PPIs; yet the desire to further extract useful knowledge from these data leads to the problem of binary interactome mapping. Network topology-based approaches prove to be highly efficient in addressing this problem; however, their performance deteriorates significantly on sparse putative PPI networks. Motivated by the success of collaborative filtering (CF)-based approaches to the problem of personalized recommendation on large, sparse rating matrices, this work aims at implementing a highly efficient CF-based approach to binary interactome mapping. To achieve this, we first propose a CF framework for it. Under this framework, we model the given data as an interactome weight matrix, from which the feature vectors of the involved proteins are extracted. With them, we design the rescaled cosine coefficient to model the inter-neighborhood similarity among the involved proteins, and use it to carry out the mapping process. Experimental results on three large, sparse datasets demonstrate that the proposed approach significantly outperforms several sophisticated topology-based approaches.
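
    The neighborhood step of such a CF-based mapper can be sketched as follows; the paper's rescaled cosine coefficient is not reproduced here, so plain cosine similarity between rows of the interactome weight matrix stands in for it:

        import numpy as np

        def cosine_neighbors(W, i, k=5):
            """Top-k neighbors of protein i by cosine similarity between
            rows of the interactome weight matrix W."""
            norms = np.linalg.norm(W, axis=1) + 1e-12
            sims = (W @ W[i]) / (norms * norms[i])
            sims[i] = -np.inf                 # exclude the protein itself
            return np.argsort(sims)[::-1][:k]

        def predict_scores(W, i, k=5):
            """Score candidate interactions of protein i from its neighborhood."""
            return W[cosine_neighbors(W, i, k)].mean(axis=0)

        # A random symmetric stand-in for a sparse putative PPI weight matrix.
        W = (np.random.default_rng(0).random((50, 50)) > 0.9).astype(float)
        W = np.maximum(W, W.T)
        print(predict_scores(W, 0)[:10])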

  7. A Highly Efficient Approach to Protein Interactome Mapping Based on Collaborative Filtering Framework

    PubMed Central

    Luo, Xin; You, Zhuhong; Zhou, Mengchu; Li, Shuai; Leung, Hareton; Xia, Yunni; Zhu, Qingsheng

    2015-01-01

    The comprehensive mapping of protein-protein interactions (PPIs) is highly desired for one to gain deep insights into both fundamental cell biology processes and the pathology of diseases. Finely set small-scale experiments are not only very expensive but also inefficient for identifying numerous interactomes, despite their high accuracy. High-throughput screening techniques enable efficient identification of PPIs; yet the desire to further extract useful knowledge from these data leads to the problem of binary interactome mapping. Network topology-based approaches prove to be highly efficient in addressing this problem; however, their performance deteriorates significantly on sparse putative PPI networks. Motivated by the success of collaborative filtering (CF)-based approaches to the problem of personalized recommendation on large, sparse rating matrices, this work aims at implementing a highly efficient CF-based approach to binary interactome mapping. To achieve this, we first propose a CF framework for it. Under this framework, we model the given data as an interactome weight matrix, from which the feature vectors of the involved proteins are extracted. With them, we design the rescaled cosine coefficient to model the inter-neighborhood similarity among the involved proteins, and use it to carry out the mapping process. Experimental results on three large, sparse datasets demonstrate that the proposed approach significantly outperforms several sophisticated topology-based approaches. PMID:25572661

  8. A Highly Efficient Approach to Protein Interactome Mapping Based on Collaborative Filtering Framework

    NASA Astrophysics Data System (ADS)

    Luo, Xin; You, Zhuhong; Zhou, Mengchu; Li, Shuai; Leung, Hareton; Xia, Yunni; Zhu, Qingsheng

    2015-01-01

    The comprehensive mapping of protein-protein interactions (PPIs) is highly desired for one to gain deep insights into both fundamental cell biology processes and the pathology of diseases. Finely set small-scale experiments are not only very expensive but also inefficient for identifying numerous interactomes, despite their high accuracy. High-throughput screening techniques enable efficient identification of PPIs; yet the desire to further extract useful knowledge from these data leads to the problem of binary interactome mapping. Network topology-based approaches prove to be highly efficient in addressing this problem; however, their performance deteriorates significantly on sparse putative PPI networks. Motivated by the success of collaborative filtering (CF)-based approaches to the problem of personalized recommendation on large, sparse rating matrices, this work aims at implementing a highly efficient CF-based approach to binary interactome mapping. To achieve this, we first propose a CF framework for it. Under this framework, we model the given data as an interactome weight matrix, from which the feature vectors of the involved proteins are extracted. With them, we design the rescaled cosine coefficient to model the inter-neighborhood similarity among the involved proteins, and use it to carry out the mapping process. Experimental results on three large, sparse datasets demonstrate that the proposed approach significantly outperforms several sophisticated topology-based approaches.

  9. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction.

    PubMed

    Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen

    2016-01-01

    Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to promote sparsity. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we present an efficient iterative algorithm based on alternating minimization of the augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily and quickly computed through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. Results on simulated and real data are qualitatively and quantitatively evaluated to validate the accuracy, efficiency, and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
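
    The generalized p-shrinkage mapping has a standard closed form (Chartrand's generalized shrinkage), which reduces to ordinary soft thresholding at p = 1; a sketch of the elementwise operator as it would appear inside the splitting updates:

        import numpy as np

        def p_shrink(x, lam, p):
            """Generalized p-shrinkage: sign(x) * max(|x| - lam**(2-p) * |x|**(p-1), 0).
            For p = 1 this is soft thresholding; p < 1 shrinks small entries harder."""
            mag = np.abs(x)
            with np.errstate(divide="ignore"):
                thresh = np.where(mag > 0, lam ** (2.0 - p) * mag ** (p - 1.0), np.inf)
            return np.sign(x) * np.maximum(mag - thresh, 0.0)

        x = np.linspace(-2.0, 2.0, 9)
        print(p_shrink(x, lam=0.5, p=1.0))  # ordinary soft thresholding
        print(p_shrink(x, lam=0.5, p=0.5))  # sparser, nonconvex shrinkage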

  10. Gaussian process based modeling and experimental design for sensor calibration in drifting environments

    PubMed Central

    Geng, Zongyu; Yang, Feng; Chen, Xi; Wu, Nianqiang

    2016-01-01

    It remains a challenge to accurately calibrate a sensor subject to environmental drift. The calibration task for such a sensor is to quantify the relationship between the sensor’s response and its exposure condition, which is specified by not only the analyte concentration but also the environmental factors such as temperature and humidity. This work developed a Gaussian Process (GP)-based procedure for the efficient calibration of sensors in drifting environments. Adopted as the calibration model, GP is not only able to capture the possibly nonlinear relationship between the sensor responses and the various exposure-condition factors, but also able to provide valid statistical inference for uncertainty quantification of the target estimates (e.g., the estimated analyte concentration of an unknown environment). Built on GP’s inference ability, an experimental design method was developed to achieve efficient sampling of calibration data in a batch sequential manner. The resulting calibration procedure, which integrates the GP-based modeling and experimental design, was applied on a simulated chemiresistor sensor to demonstrate its effectiveness and its efficiency over the traditional method. PMID:26924894

  11. Climate and land use controls over terrestrial water use efficiency in monsoon Asia.

    Treesearch

    Hanqin Tian; Chaoqun Lu; Guangsheng Chen; Xiaofeng Xu; Mingliang Liu; et al

    2011-01-01

    Much concern has been raised regarding how and to what extent climate change and intensive human activities have altered water use efficiency (WUE, amount of carbon uptake per unit of water use) in monsoon Asia. By using a process-based ecosystem model [dynamic land ecosystem model (DLEM)], we examined effects of climate change, land use/cover change, and land...

  12. Efficient SRAM yield optimization with mixture surrogate modeling

    NASA Astrophysics Data System (ADS)

    Zhongjian, Jiang; Zuochang, Ye; Yan, Wang

    2016-12-01

    Largely repeated cells such as SRAM cells usually require an extremely low failure rate to ensure a moderate chip yield. Though fast Monte Carlo methods such as importance sampling and its variants can be used for yield estimation, they are still very expensive if one needs to perform optimization based on such estimations. Yield calculation typically requires a large number of circuit-level SPICE simulations, and these simulations account for the largest share of the computation time. In this paper, a new method is proposed to address this issue. The key idea is to establish an efficient mixture surrogate model over both the design variables and the process variables: a set of sample points is obtained from SPICE simulation, and the mixture surrogate model is trained on these points with the lasso algorithm. Experimental results show that the proposed model calculates the yield accurately and brings significant speed-ups to the calculation of the failure rate. Based on the model, we develop a further accelerated algorithm to enhance the speed of the yield calculation. The approach is suitable for high-dimensional process variables and multi-performance applications.

  13. Pabon Lasso and Data Envelopment Analysis: A Complementary Approach to Hospital Performance Measurement

    PubMed Central

    Mehrtak, Mohammad; Yusefzadeh, Hasan; Jaafaripooyan, Ebrahim

    2014-01-01

    Background: Performance measurement is essential to the management of health care organizations, for which efficiency is a vital indicator. The present study accordingly aims to measure the efficiency of hospitals using two distinct methods. Methods: Data envelopment analysis and the Pabon Lasso model were jointly applied to calculate the efficiency of all general hospitals located in Iran's East Azerbaijan Province. Data were collected using hospitals' monthly performance forms and analyzed and displayed with MS Visio and DEAP software. Results: According to the Pabon Lasso model, 44.5% of the hospitals were entirely efficient, whilst DEA revealed 61% to be efficient. As such, 39% of the hospitals were wholly inefficient by the Pabon Lasso; based on DEA, though, the corresponding figure was only 22.2%. Finally, 16.5% of hospitals as calculated by Pabon Lasso and 16.7% by DEA were relatively efficient. DEA thus classified more hospitals as efficient than the Pabon Lasso model did. Conclusion: Simultaneous use of the two models rendered complementary and corroborative results, as both evidently reveal efficient hospitals. However, their results should be compared with prudence. Whilst the Pabon Lasso inefficient zone is fully clear, DEA does not provide such a crystal-clear limit for inefficiency. PMID:24999147
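
    The Pabon Lasso side of the comparison is easy to reproduce: hospitals fall into four zones according to whether their bed occupancy rate and bed turnover lie above or below the sample means. A schematic sketch with invented figures:

        import numpy as np

        def pabon_lasso_zones(occupancy, turnover):
            """Zone 1: low/low (inefficient); Zone 2: low occupancy, high turnover;
            Zone 3: high/high (efficient);    Zone 4: high occupancy, low turnover."""
            occ_hi = occupancy >= occupancy.mean()
            turn_hi = turnover >= turnover.mean()
            return np.select(
                [~occ_hi & ~turn_hi, ~occ_hi & turn_hi,
                 occ_hi & turn_hi, occ_hi & ~turn_hi],
                [1, 2, 3, 4])

        occupancy = np.array([0.55, 0.82, 0.91, 0.60, 0.75])  # bed occupancy rate
        turnover = np.array([18.0, 35.0, 40.0, 31.0, 20.0])   # discharges per bed-year
        print(pabon_lasso_zones(occupancy, turnover))          # -> [1 3 3 2 4]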

  14. Evaluation of Proteus as a Tool for the Rapid Development of Models of Hydrologic Systems

    NASA Astrophysics Data System (ADS)

    Weigand, T. M.; Farthing, M. W.; Kees, C. E.; Miller, C. T.

    2013-12-01

    Models of modern hydrologic systems can be complex and involve a variety of operators with varying character. The goal is to implement approximations of such models that are both efficient for the developer and computationally efficient, which is a set of naturally competing objectives. Proteus is a Python-based toolbox that supports prototyping of model formulations as well as a wide variety of modern numerical methods and parallel computing. We used Proteus to develop numerical approximations for three models: Richards' equation, a brine flow model derived using the Thermodynamically Constrained Averaging Theory (TCAT), and a multiphase TCAT-based tumor growth model. For Richards' equation, we investigated discontinuous Galerkin solutions with higher order time integration based on the backward difference formulas. The TCAT brine flow model was implemented using Proteus and a variety of numerical methods were compared to hand coded solutions. Finally, an existing tumor growth model was implemented in Proteus to introduce more advanced numerics and allow the code to be run in parallel. From these three example models, Proteus was found to be an attractive open-source option for rapidly developing high quality code for solving existing and evolving computational science models.

  15. An Adaptive ANOVA-based PCKF for High-Dimensional Nonlinear Inverse Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LI, Weixuan; Lin, Guang; Zhang, Dongxiao

    2014-02-01

    The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect, except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depends on an appropriate truncation of the PCE series. Having more polynomial chaos bases in the expansion helps to capture uncertainty more accurately but increases computational cost. Bases selection is particularly important for high-dimensional stochastic problems because the number of polynomial chaos bases required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE bases are pre-set based on users' experience. Also, for sequential data assimilation problems, the bases kept in PCE expression remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE bases for different problems and automatically adjusts the number of bases in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems. The new algorithm is tested with different examples and demonstrated great effectiveness in comparison with non-adaptive PCKF and EnKF algorithms.

  16. A Component-Based FPGA Design Framework for Neuronal Ion Channel Dynamics Simulations

    PubMed Central

    Mak, Terrence S. T.; Rachmuth, Guy; Lam, Kai-Pui; Poon, Chi-Sang

    2008-01-01

    Neuron-machine interfaces such as dynamic clamp and brain-implantable neuroprosthetic devices require real-time simulations of neuronal ion channel dynamics. Field Programmable Gate Array (FPGA) has emerged as a high-speed digital platform ideal for such application-specific computations. We propose an efficient and flexible component-based FPGA design framework for neuronal ion channel dynamics simulations, which overcomes certain limitations of the recently proposed memory-based approach. A parallel processing strategy is used to minimize computational delay, and a hardware-efficient factoring approach for calculating exponential and division functions in neuronal ion channel models is used to conserve resource consumption. Performances of the various FPGA design approaches are compared theoretically and experimentally in corresponding implementations of the AMPA and NMDA synaptic ion channel models. Our results suggest that the component-based design framework provides a more memory economic solution as well as more efficient logic utilization for large word lengths, whereas the memory-based approach may be suitable for time-critical applications where a higher throughput rate is desired. PMID:17190033

  17. Teaching Concepts of Natural Sciences to Foreigners through Content-Based Instruction: The Adjunct Model

    ERIC Educational Resources Information Center

    Satilmis, Yilmaz; Yakup, Doganay; Selim, Guvercin; Aybarsha, Islam

    2015-01-01

    This study investigates three models of content-based instruction in teaching concepts and terms of natural sciences in order to increase the efficiency of teaching these kinds of concepts in realization and to prove that the content-based instruction is a teaching strategy that helps students understand concepts of natural sciences. Content-based…

  18. Applying knowledge compilation techniques to model-based reasoning

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.

    1991-01-01

    Researchers in the area of knowledge compilation are developing general purpose techniques for improving the efficiency of knowledge-based systems. In this article, an attempt is made to define knowledge compilation, to characterize several classes of knowledge compilation techniques, and to illustrate how some of these techniques can be applied to improve the performance of model-based reasoning systems.

  19. 10 CFR 431.17 - Determination of efficiency.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... characteristics of that basic model, and (ii) Based on engineering or statistical analysis, computer simulation or... simulation or modeling, and other analytic evaluation of performance data on which the AEDM is based... applied. (iii) If requested by the Department, the manufacturer shall conduct simulations to predict the...

  20. 10 CFR 431.17 - Determination of efficiency.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... characteristics of that basic model, and (ii) Based on engineering or statistical analysis, computer simulation or... simulation or modeling, and other analytic evaluation of performance data on which the AEDM is based... applied. (iii) If requested by the Department, the manufacturer shall conduct simulations to predict the...

  1. Development of a Computationally Efficient, High Fidelity, Finite Element Based Hall Thruster Model

    NASA Technical Reports Server (NTRS)

    Jacobson, David (Technical Monitor); Roy, Subrata

    2004-01-01

    This report documents the development of a two-dimensional finite-element-based numerical model for efficient characterization of Hall thruster plasma dynamics in the framework of a multi-fluid model. The effects of ionization and recombination have been included in the present model. Based on experimental data, a third-order polynomial in electron temperature is used to calculate the ionization rate. The neutral dynamics is included only through the neutral continuity equation in the presence of a uniform neutral flow. The electrons are modeled as magnetized and hot, whereas the ions are assumed magnetized and cold. The dynamics of the Hall thruster is also investigated in the presence of plasma-wall interaction. The plasma-wall interaction is a function of the wall potential, which in turn is determined by the secondary electron emission and the sputtering yield; the effects of secondary electron emission and sputter yield have been considered simultaneously. Simulation results are interpreted in light of experimental observations and available numerical solutions in the literature.

  2. Application of the sequential quadratic programming algorithm for reconstructing the distribution of optical parameters based on the time-domain radiative transfer equation.

    PubMed

    Qi, Hong; Qiao, Yao-Bin; Ren, Ya-Tao; Shi, Jing-Wen; Zhang, Ze-Yu; Ruan, Li-Ming

    2016-10-17

    Sequential quadratic programming (SQP) is used as the optimization algorithm to reconstruct optical parameters based on the time-domain radiative transfer equation (TD-RTE). Numerous time-resolved measurement signals are obtained using the TD-RTE as the forward model. For high computational efficiency, the gradient of the objective function is calculated using an adjoint equation technique. The SQP algorithm is employed to solve the inverse problem, and a regularization term based on the generalized Gaussian Markov random field (GGMRF) model is used to overcome ill-posedness. Simulated results show that the proposed reconstruction scheme performs efficiently and accurately.
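
    SciPy's SLSQP implementation follows the same pattern the abstract describes: the objective value and its gradient (computed in the paper via the adjoint equation) are supplied together. A generic sketch with a stand-in quadratic misfit:

        import numpy as np
        from scipy.optimize import minimize

        def objective_and_grad(x):
            """Stand-in for the TD-RTE data-misfit objective; here the gradient
            is exact by construction, mimicking an adjoint-computed gradient."""
            r = x - np.array([0.3, 0.7])   # pretend residual vs. measurements
            return 0.5 * r @ r, r          # objective value and its gradient

        res = minimize(objective_and_grad, x0=np.zeros(2), jac=True,
                       method="SLSQP", bounds=[(0.0, 1.0)] * 2)
        print(res.x)  # recovered "optical parameters"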

  3. Enhancement of the output emission efficiency of thin-film photoluminescence composite structures based on PbSe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anisimova, N. P.; Tropina, N. E., E-mail: Mazina_ne@mail.ru; Tropin, A. N.

    2010-12-15

    The opportunity to increase the output emission efficiency of PbSe-based photoluminescence structures by depositing an antireflection layer is analyzed. A model of a three-layer thin film, in which the central layer is formed of a composite medium, is proposed to calculate the reflectance spectra of the system. The effective permittivity of the composite layer is calculated in Bruggeman's effective medium approximation. The proposed model is used to calculate the thickness of the arsenic chalcogenide (AsS4) antireflection layer. The optimal AsS4 layer thickness determined experimentally is close to the calculated result, and the corresponding gain in the output photoluminescence efficiency is as high as 60%.
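
    Bruggeman's effective-medium condition for a two-phase mixture, f*(eps_a - eps_eff)/(eps_a + 2*eps_eff) + (1 - f)*(eps_b - eps_eff)/(eps_b + 2*eps_eff) = 0, can be solved numerically for the composite layer's permittivity; the permittivities below are illustrative, real-valued stand-ins:

        from scipy.optimize import brentq

        def bruggeman_eps(eps_a, eps_b, f):
            """Effective permittivity of a two-phase composite (volume
            fraction f of phase a) in Bruggeman's approximation."""
            g = lambda e: (f * (eps_a - e) / (eps_a + 2 * e)
                           + (1 - f) * (eps_b - e) / (eps_b + 2 * e))
            lo, hi = min(eps_a, eps_b), max(eps_a, eps_b)
            return brentq(g, lo, hi)  # root is bracketed by the two phase values

        print(bruggeman_eps(6.0, 1.0, f=0.4))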

  4. CARSVM: a class association rule-based classification framework and its application to gene expression data.

    PubMed

    Kianmehr, Keivan; Alhajj, Reda

    2008-09-01

    In this study, we aim at building a classification framework, namely the CARSVM model, which integrates association rule mining and support vector machines (SVM). The goal is to benefit from the advantages of both: the discriminative knowledge represented by class association rules and the classification power of the SVM algorithm. This yields an efficient and accurate classifier model that improves the interpretability of SVM as a traditional machine learning technique and overcomes the efficiency issues of associative classification algorithms. In our proposed framework, instead of using the original training set, a set of rule-based feature vectors, generated based on the discriminative ability of class association rules over the training samples, is presented to the learning component of the SVM algorithm. We show that rule-based feature vectors are a high-quality source of discriminative knowledge that can substantially impact the prediction power of SVM and associative classification techniques; they also provide users with more convenience in terms of understandability and interpretability. We used four datasets from the UCI ML repository to evaluate the performance of the developed system in comparison with five well-known existing classification methods. Because of the importance and popularity of gene expression analysis as a real-world application of classification models, we present an extension of CARSVM combined with feature selection to be applied to gene expression data, and describe how this combination provides biologists with an efficient and understandable classifier model. The reported test results and their biological interpretation demonstrate the applicability, efficiency and effectiveness of the proposed model. From the results, it can be concluded that a considerable increase in classification accuracy can be obtained when the rule-based feature vectors are integrated into the learning process of the SVM algorithm. In the context of applicability, according to the results obtained from gene expression analysis, we can conclude that the CARSVM system can be utilized in a variety of real-world applications with some adjustments.

  5. Measurement of X-ray emission efficiency for K-lines.

    PubMed

    Procop, M

    2004-08-01

    Results for the X-ray emission efficiency (counts per C per sr) of K-lines for selected elements (C, Al, Si, Ti, Cu, Ge) and, for the first time, also for compounds and alloys (SiC, GaP, AlCu, TiAlC) are presented. An energy dispersive X-ray spectrometer (EDS) of known detection efficiency (counts per photon) was used to record the spectra at a takeoff angle of 25 degrees, determined by the geometry of the scanning electron microscope's specimen chamber. The overall measurement uncertainty could be reduced to 5 to 10%, depending on the line intensity and energy. Measured emission efficiencies were compared with efficiencies calculated from models applied in standardless analysis. The widespread XPP and PROZA models give somewhat too low emission efficiencies. The best agreement between measured and calculated efficiencies was achieved by replacing, in the modular PROZA96 model, the original expression for the ionization cross section with the formula given by Casnati et al. (1982). A discrepancy remains for carbon, probably due to the high overvoltage ratio.

  6. Efficient calibration for imperfect computer models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuo, Rui; Wu, C. F. Jeff

    Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, calibration based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend this line of study to calibration problems with stochastic physical data. We propose a novel method, called L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied; theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.
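
    A toy sketch of the L2 idea under simple assumptions: the true process is first estimated nonparametrically (here by kernel smoothing) from the noisy physical data, and the calibration parameter is then chosen to minimize the L2 distance between the computer model and that estimate, rather than the least-squares fit to the raw data. The model and noise level are invented.

        import numpy as np
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(2)
        x = np.linspace(0, 1, 60)
        y_phys = np.sin(2*np.pi*x) + rng.normal(0, 0.3, x.size)  # noisy physical data

        def model(x, theta):          # an (intentionally imperfect) computer model
            return 4.0 * theta * x * (1 - x)

        # step 1: nonparametric estimate of the true process (kernel smoothing)
        h = 0.08
        Kmat = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
        f_hat = (Kmat @ y_phys) / Kmat.sum(axis=1)

        # step 2: L2 calibration -- theta minimizing the L2 distance to f_hat
        res = minimize_scalar(lambda t: np.mean((f_hat - model(x, t))**2),
                              bounds=(-5, 5), method="bounded")
        print("L2-calibrated theta:", res.x)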

  7. An evaluation of solution algorithms and numerical approximation methods for modeling an ion exchange process

    NASA Astrophysics Data System (ADS)

    Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.

    2010-07-01

    The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations, representing ion exchange resin particles that vary in size and age, is coupled through a boundary condition with a macroscopic ordinary differential equation (ODE) representing the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages over all ion exchange particle ages for a given particle size, avoiding the expensive Monte Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on a finite element formulation in space coupled with an existing backward-difference-formula-based ODE solver in time. The second scheme uses an integral-equation-based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte Carlo algorithm. We also demonstrate that the higher-order KDC scheme is more efficient than the traditional finite element solution approach, and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.
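
    A much-reduced cousin of this two-scale model can be sketched with a single bead size, a single species and a linear equilibrium condition at the bead surface (all parameter values assumed), solved by the method of lines with a stiff ODE solver rather than by KDC:

        import numpy as np
        from scipy.integrate import solve_ivp

        R, D, K = 4e-4, 1e-10, 5.0     # bead radius (m), diffusivity, partition coeff.
        Vr, Vb = 1e-3, 1e-6            # reactor volume, total bead volume (m^3)
        n = 40
        r = np.linspace(0, R, n)
        dr = r[1] - r[0]

        def rhs(t, u):
            q, C = u[:-1].copy(), u[-1]
            q[-1] = K * C                                  # equilibrium at bead surface
            lap = np.zeros(n)
            lap[1:-1] = ((q[2:] - 2*q[1:-1] + q[:-2])/dr**2
                         + (q[2:] - q[:-2])/(r[1:-1]*dr))  # spherical Laplacian
            lap[0] = 6*(q[1] - q[0])/dr**2                 # symmetry at the center
            dq = D * lap
            dq[-1] = 0.0                                   # surface node slaved to C
            flux = D * (q[-1] - q[-2]) / dr                # uptake flux at the surface
            dC = -(3*Vb/R) * flux / Vr                     # macroscale mass balance
            return np.append(dq, dC)

        u0 = np.append(np.zeros(n), 1.0)                   # clean beads, C0 = 1
        sol = solve_ivp(rhs, (0, 3600), u0, method="BDF", rtol=1e-6)
        print("reactor concentration after 1 h:", sol.y[-1, -1])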

  8. Overcoming limitations of model-based diagnostic reasoning systems

    NASA Technical Reports Server (NTRS)

    Holtzblatt, Lester J.; Marcotte, Richard A.; Piazza, Richard L.

    1989-01-01

    The development of a model-based diagnostic system to overcome the limitations of model-based reasoning systems is discussed. It is noted that model-based reasoning techniques can be used to analyze the failure behavior and diagnosability of system and circuit designs as part of the design process itself. One goal of current research is the development of a diagnostic algorithm which can reason efficiently about large numbers of diagnostic suspects and can handle both combinational and sequential circuits. A second goal is to address the model-creation problem by developing an approach for using design models to construct the GMODS model in an automated fashion.

  9. A prototype computer-aided modelling tool for life-support system models

    NASA Technical Reports Server (NTRS)

    Preisig, H. A.; Lee, Tae-Yeong; Little, Frank

    1990-01-01

    Based on the canonical decomposition of physical-chemical-biological systems, a prototype kernel has been developed to efficiently model alternative life-support systems. It supports (1) the work of an interdisciplinary group through an easy-to-use, mostly graphical interface, (2) modularized object-oriented model representation, (3) reuse of models, (4) inheritance of structures from model object to model object, and (5) a model data base. The kernel is implemented in Modula-2 and presently operates on an IBM PC.

  10. Fluid dilution and efficiency of Na(+) transport in a mathematical model of a thick ascending limb cell.

    PubMed

    Nieves-González, Aniel; Clausen, Chris; Marcano, Mariano; Layton, Anita T; Layton, Harold E; Moore, Leon C

    2013-03-15

    Thick ascending limb (TAL) cells are capable of reducing tubular fluid Na(+) concentration to as low as ~25 mM, and yet they are thought to transport Na(+) efficiently owing to passive paracellular Na(+) absorption. Transport efficiency in the TAL is of particular importance in the outer medulla, where O(2) availability is limited by low blood flow. We used a mathematical model of a TAL cell to estimate the efficiency of Na(+) transport and to examine how tubular dilution and cell volume regulation influence transport efficiency. The TAL cell model represents 13 major solutes and the associated transporters and channels; model equations are based on mass conservation and electroneutrality constraints. We analyzed TAL transport in cells with conditions relevant to the inner stripe of the outer medulla, the cortico-medullary junction, and the distal cortical TAL. At each location, Na(+) transport efficiency was computed as a function of changes in luminal NaCl concentration ([NaCl]), [K(+)], [NH(4)(+)], junctional Na(+) permeability, and apical K(+) permeability. Na(+) transport efficiency was calculated as the ratio of total net Na(+) transport to transcellular Na(+) transport. Transport efficiency is predicted to be highest at the cortico-medullary boundary, where the transepithelial Na(+) gradient is smallest, and lowest in the cortex, where luminal [NaCl] approaches static head.

  11. Modeling the focusing efficiency of lobster-eye optics for image shifting depending on the soft x-ray wavelength.

    PubMed

    Su, Luning; Li, Wei; Wu, Mingxuan; Su, Yun; Guo, Chongling; Ruan, Ningjuan; Yang, Bingxin; Yan, Feng

    2017-08-01

    Lobster-eye optics is widely applied in space x-ray detection missions and x-ray security checks for its wide field of view and low weight. This paper presents a theoretical model to obtain the spatial distribution of the focusing efficiency of lobster-eye optics at soft x-ray wavelengths. The calculations reveal the competition between the contributions of the geometrical parameters of the lobster-eye optics and of the reflectivity of the iridium film to the focusing efficiency. In addition, focusing-efficiency images as a function of x-ray wavelength further explain the influence of different geometrical parameters and different soft x-ray wavelengths on the focusing efficiency. These results could help optimize the parameters of lobster-eye optics in order to realize maximum focusing efficiency.

  12. Two-part models with stochastic processes for modelling longitudinal semicontinuous data: Computationally efficient inference and modelling the overall marginal mean.

    PubMed

    Yiu, Sean; Tom, Brian Dm

    2017-01-01

    Several researchers have described two-part models with patient-specific stochastic processes for analysing longitudinal semicontinuous data. In theory, such models can offer greater flexibility than the standard two-part model with patient-specific random effects. In practice, however, the high-dimensional integrations involved in the marginal likelihood (i.e. integrated over the stochastic processes) significantly complicate model fitting, so only non-standard, computationally intensive procedures based on simulating the marginal likelihood have been proposed so far. In this paper, we describe an efficient method of implementation by demonstrating how the high-dimensional integrations involved in the marginal likelihood can be computed efficiently. Specifically, by using a property of the multivariate normal distribution and the standard marginal cumulative distribution function identity, we transform the marginal likelihood so that the high-dimensional integrations are contained in the cumulative distribution function of a multivariate normal distribution, which can then be efficiently evaluated. Hence, maximum likelihood estimation can be used to obtain parameter estimates and asymptotic standard errors (from the observed information matrix) of model parameters. We describe our proposed efficient implementation procedure for the standard two-part model parameterisation and for when it is of interest to directly model the overall marginal mean. The methodology is applied to a psoriatic arthritis data set concerning functional disability.
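
    The computational trick can be shown in miniature for one patient: the integral over a latent Gaussian process at the observation times collapses into a multivariate normal CDF, which SciPy evaluates directly instead of by simulation. The observation times, covariance form and thresholds below are illustrative assumptions, not the paper's model.

        import numpy as np
        from scipy.stats import multivariate_normal

        # times at which a patient's zero/non-zero status is observed
        t = np.array([0.0, 0.5, 1.2, 2.0, 3.5])
        sigma2, rho = 1.0, 0.8                     # process variance, OU-type correlation
        Sigma = sigma2 * rho ** np.abs(t[:, None] - t[None, :])  # latent process covariance

        eta = np.array([0.4, -0.1, 0.3, 0.8, -0.5])  # linear-predictor thresholds (invented)
        # the high-dimensional integral over the latent Gaussian process collapses
        # to an MVN orthant probability, evaluated directly
        lik = multivariate_normal(mean=np.zeros(t.size), cov=Sigma).cdf(eta)
        print("patient's marginal likelihood contribution:", lik)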

  13. Conditional analysis of mixed Poisson processes with baseline counts: implications for trial design and analysis.

    PubMed

    Cook, Richard J; Wei, Wei

    2003-07-01

    The design of clinical trials is typically based on marginal comparisons of a primary response under two or more treatments. The considerable gains in efficiency afforded by models conditional on one or more baseline responses have been extensively studied for Gaussian models. The purpose of this article is to present methods for the design and analysis of clinical trials in which the response is a count or a point process and a corresponding baseline count is available prior to randomization. The methods are based on a conditional negative binomial model for the response given the baseline count and can be used to examine the effect of introducing selection criteria on power and sample size requirements. We show that designs based on this approach are more efficient than those proposed by McMahon et al. (1994).

  14. Empirically based device modeling of bulk heterojunction organic photovoltaics

    NASA Astrophysics Data System (ADS)

    Pierre, Adrien; Lu, Shaofeng; Howard, Ian A.; Facchetti, Antonio; Arias, Ana Claudia

    2013-10-01

    An empirically based, open-source, optoelectronic model is constructed to accurately simulate organic photovoltaic (OPV) devices. Bulk heterojunction OPV devices based on a new low-band-gap dithienothiophene-diketopyrrolopyrrole donor polymer (P(TBT-DPP)), blended with PC70BM, are processed under various conditions, with efficiencies up to 4.7%. The mobilities of electrons and holes, bimolecular recombination coefficients, exciton quenching efficiencies in donor and acceptor domains, and optical constants of these devices are measured and input into the simulator to yield the photocurrent with less than 7% error. The results from this model not only show carrier activity in the active layer but also elucidate new routes of device optimization by varying the donor-acceptor composition as a function of position. Sets of high- and low-performance devices are investigated and compared side by side.

  15. Design optimization of an axial-field eddy-current magnetic coupling based on magneto-thermal analytical model

    NASA Astrophysics Data System (ADS)

    Fontchastagner, Julien; Lubin, Thierry; Mezani, Smaïl; Takorabet, Noureddine

    2018-03-01

    This paper presents a design optimization of an axial-flux eddy-current magnetic coupling. The design procedure is based on a torque formula derived from a 3D analytical model and on a population-based optimization algorithm. The main objective is to determine the best design in terms of magnet volume in order to transmit a torque between two movers while ensuring a low slip speed and a good efficiency. The torque formula is very accurate and computationally efficient, and is valid for any slip speed. Nevertheless, in order to solve more realistic problems and take into account thermal effects on the torque, a thermal model based on convection heat transfer coefficients is also established and used in the design optimization procedure. Results show the effectiveness of the proposed methodology.
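
    The optimization step can be sketched with SciPy's differential evolution standing in for the population algorithm; the torque expression below is a crude placeholder with invented constants (not the paper's 3D analytical formula), and the torque requirement enters through a simple penalty.

        import numpy as np
        from scipy.optimize import differential_evolution

        T_req = 40.0                      # required transmitted torque, N*m (assumed)

        def torque(R, h, s):
            # invented surrogate: grows with magnet radius R and height h,
            # saturates in slip s -- NOT the paper's 3D analytical formula
            return 8.5e7 * R**3 * h * s / (1.0 + 25.0 * s**2)

        def cost(p):
            R, h, s = p
            vol = np.pi * R**2 * h                         # magnet volume to minimize
            violation = max(0.0, T_req - torque(R, h, s))  # torque constraint penalty
            return vol + 10.0 * violation**2

        bounds = [(0.03, 0.15), (0.003, 0.02), (0.01, 0.2)]  # R (m), h (m), slip (p.u.)
        res = differential_evolution(cost, bounds, seed=3)
        print(res.x, torque(*res.x))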

  16. An efficient sampling approach for variance-based sensitivity analysis based on the law of total variance in the successive intervals without overlapping

    NASA Astrophysics Data System (ADS)

    Yun, Wanying; Lu, Zhenzhou; Jiang, Xian

    2018-06-01

    To efficiently execute variance-based global sensitivity analysis, the law of total variance over successive non-overlapping intervals is first proved, and an efficient space-partition sampling-based approach is then proposed on this basis. By partitioning the sample points of the output into different subsets according to the different inputs, the proposed approach can efficiently evaluate all the main effects concurrently from one group of sample points. In addition, there is no need to optimize the partition scheme. The maximum length of the subintervals is decreased by increasing the number of sample points of the model input variables, which ensures the convergence of the space-partition approach. Furthermore, a new interpretation of the partitioning idea is given from the perspective of the variance ratio function. Finally, three test examples and one engineering application are employed to demonstrate the accuracy, efficiency and robustness of the proposed approach.
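
    The estimator itself is short: partition each input's range into non-overlapping subintervals and approximate Var(E[Y|Xi]) by the between-interval variance of the within-interval means, all from one common sample. The three-input test function below is an assumed toy model.

        import numpy as np

        rng = np.random.default_rng(4)
        N, m = 20000, 40                      # sample size, subintervals per input
        X = rng.uniform(0, 1, size=(N, 3))    # assumed toy model with 3 inputs
        Y = np.sin(2*np.pi*X[:, 0]) + 0.5*np.sin(2*np.pi*X[:, 1])**2 + 0.1*X[:, 2]

        S = []
        for i in range(X.shape[1]):
            bins = np.minimum((X[:, i] * m).astype(int), m - 1)
            counts = np.bincount(bins, minlength=m)
            means = np.bincount(bins, weights=Y, minlength=m) / np.maximum(counts, 1)
            # law of total variance over the non-overlapping subintervals:
            # Var(E[Y|X_i]) ~ between-interval variance of within-interval means
            S.append(np.sum(counts * (means - Y.mean())**2) / (N * Y.var()))
        print(S)    # all main effects from one common sample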

  17. Updating the limit efficiency of silicon solar cells

    NASA Technical Reports Server (NTRS)

    Wolf, M.

    1979-01-01

    Evaluation of the limit efficiency, based on the simplest mathematical method appropriate for the conditions imposed by the cell model, is discussed. The methodology, the solar cell structure, and the selection of the material parameters used in the evaluation are described. The results are discussed, including a set of design goals derived from the limit efficiency.

  18. Fuzzy Constraint Based Model for Efficient Management of Dynamic Purchasing Environments

    NASA Astrophysics Data System (ADS)

    Sakas, D. P.; Vlachos, D. S.; Simos, T. E.

    2007-12-01

    This paper considers the application of a fuzzy-constraint-based model for handling dynamic purchasing environments in which only one of possibly many bundles of items must be purchased, and in which quotes for items open and close over time. Simulation results are presented and compared with the optimal solution.

  19. Reduced-order modelling of parameter-dependent, linear and nonlinear dynamic partial differential equation models.

    PubMed

    Shah, A A; Xing, W W; Triantafyllidis, V

    2017-04-01

    In this paper, we develop reduced-order models for dynamic, parameter-dependent, linear and nonlinear partial differential equations using proper orthogonal decomposition (POD). The main challenges are to accurately and efficiently approximate the POD bases for new parameter values and, in the case of nonlinear problems, to efficiently handle the nonlinear terms. We use a Bayesian nonlinear regression approach to learn the snapshots of the solutions and the nonlinearities for new parameter values. Computational efficiency is ensured by using manifold learning to perform the emulation in a low-dimensional space. The accuracy of the method is demonstrated on a linear and a nonlinear example, with comparisons with a global basis approach.
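
    A minimal snapshot-POD sketch (the emulation of the basis for new parameter values is omitted): the basis comes from an SVD of a snapshot matrix of an assumed one-parameter family of solutions, truncated by an energy criterion, and a new snapshot is compressed by projection.

        import numpy as np

        rng = np.random.default_rng(5)
        x = np.linspace(0, 1, 200)
        # snapshot matrix: one column per sampled solution of an assumed
        # one-parameter family standing in for the PDE solver output
        snaps = np.column_stack([np.exp(-50*(x - 0.3 - 0.4*s)**2) for s in rng.random(60)])

        U, sv, _ = np.linalg.svd(snaps, full_matrices=False)
        energy = np.cumsum(sv**2) / np.sum(sv**2)
        r = int(np.searchsorted(energy, 0.999)) + 1     # modes for 99.9% of the energy
        basis = U[:, :r]                                # POD basis

        w = np.exp(-50*(x - 0.55)**2)                   # a new snapshot
        w_rec = basis @ (basis.T @ w)                   # project and reconstruct
        print(r, np.linalg.norm(w - w_rec) / np.linalg.norm(w))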

  1. LOW EMISSION AND HIGH EFFICIENCY RESIDENTIAL PELLET-FIRED HEATERS

    EPA Science Inventory

    The paper gives results of air emissions testing and efficiency testing on new commercially available under-feed and top-feed residential heaters burning hardwood- and softwood-based pellets. The results were compared with data from earlier models. Reductions in air emissions w...

  2. In defense of compilation: A response to Davis' form and content in model-based reasoning

    NASA Technical Reports Server (NTRS)

    Keller, Richard

    1990-01-01

    In a recent paper entitled 'Form and Content in Model-Based Reasoning', Randy Davis argues that model-based reasoning research aimed at compiling task-specific rules from underlying device models is mislabeled, misguided, and diversionary. Some of Davis' claims are examined, and his basic conclusions about the value of compilation research to the model-based reasoning community are challenged. In particular, Davis' claim that model-based reasoning is exempt from the efficiency benefits provided by knowledge compilation techniques is refuted. In addition, several misconceptions about the role of representational form in compilation are clarified. It is concluded that compilation techniques have the potential to make a substantial contribution to solving tractability problems in model-based reasoning.

  3. A simulation-based efficiency comparison of AC and DC power distribution networks in commercial buildings

    DOE PAGES

    Gerber, Daniel L.; Vossos, Vagelis; Feng, Wei; ...

    2017-06-12

    Direct current (DC) power distribution has recently gained traction in buildings research due to the proliferation of on-site electricity generation and battery storage, and an increasing prevalence of internal DC loads. The research discussed in this paper uses Modelica-based simulation to compare the efficiency of DC building power distribution with an equivalent alternating current (AC) distribution. The buildings are all modeled with solar generation, battery storage, and loads that are representative of the most efficient building technology. A variety of parametric simulations determine how and when DC distribution proves advantageous. These simulations also validate previous studies that use simpler approaches and arithmetic efficiency models. This work shows that using DC distribution can be considerably more efficient: a medium-sized office building using DC distribution has an expected baseline of 12% savings, but may save up to 18%. In these results, the baseline simulation parameters are for a zero net energy (ZNE) building that can island as a microgrid. DC is most advantageous in buildings with large solar capacity, large battery capacity, and high-voltage distribution.

  4. Classification of breast tissue in mammograms using efficient coding.

    PubMed

    Costa, Daniel D; Campos, Lúcio F; Barros, Allan K

    2011-06-24

    Female breast cancer is the major cause of death by cancer in western countries. Efforts in computer vision have been made to improve the diagnostic accuracy achieved by radiologists. Some methods of lesion diagnosis in mammogram images have been developed based on principal component analysis, which has been used for efficient coding of signals, and on 2D Gabor wavelets, which are used in computer vision applications and in modeling biological vision. In this work, we present a methodology that uses efficient coding along with linear discriminant analysis to distinguish between mass and non-mass tissue in 5,090 regions of interest from mammograms. The results show that the best success rates reached with Gabor wavelets and principal component analysis were 85.28% and 87.28%, respectively. In comparison, the efficient coding model presented here reached up to 90.07%. Altogether, the results demonstrate that independent component analysis performed the efficient coding successfully in order to discriminate mass from non-mass tissues. In addition, we observed that LDA with ICA bases showed high predictive performance for some datasets, providing significant support for a more detailed clinical investigation.
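
    A schematic pipeline on synthetic stand-ins for the ROI patches: FastICA learns the efficient-coding basis and LDA classifies the resulting codes. The data, dimensions and component count are assumptions, not the paper's 5,090-ROI setup.

        import numpy as np
        from sklearn.decomposition import FastICA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(6)
        # stand-ins for vectorized ROI patches (mass vs non-mass)
        X = rng.normal(size=(300, 64))
        y = rng.integers(0, 2, size=300)
        X[y == 1, :8] += 0.8            # give class 1 a detectable structure

        # efficient coding: ICA basis learned from the patches, then LDA on the codes
        clf = make_pipeline(FastICA(n_components=16, random_state=0, max_iter=1000),
                            LinearDiscriminantAnalysis())
        print(cross_val_score(clf, X, y, cv=5).mean())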

  5. Cost-of-illness studies based on massive data: a prevalence-based, top-down regression approach.

    PubMed

    Stollenwerk, Björn; Welchowski, Thomas; Vogl, Matthias; Stock, Stephanie

    2016-04-01

    Despite the increasing availability of routine data, no analysis method has yet been presented for cost-of-illness (COI) studies based on massive data. We aim, first, to present such a method and, second, to assess the relevance of the associated gain in numerical efficiency. We propose a prevalence-based, top-down regression approach consisting of five steps: aggregating the data; fitting a generalized additive model (GAM); predicting costs via the fitted GAM; comparing predicted costs between prevalent and non-prevalent subjects; and quantifying the stochastic uncertainty via error propagation. To demonstrate the method, it was applied, in the context of chronic lung disease, to aggregated German sickness fund data (from 1999) covering over 7.3 million insured. To assess the gain in numerical efficiency, the computational time of the innovative approach was compared with that of corresponding GAMs applied to simulated individual-level data. Furthermore, the probability of model failure was modeled via logistic regression. Applying the innovative method was reasonably fast (19 min). In contrast, for patient-level data, computational time increased disproportionately with sample size, and using patient-level data was accompanied by a substantial risk of model failure (about 80% for 6 million subjects). The gain in computational efficiency of the innovative COI method thus appears to be of practical relevance, and the method may yield more precise cost estimates.
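
    A compressed sketch of steps 1 to 4 (error propagation omitted), with a spline-based gamma GLM in statsmodels standing in for the GAM and invented aggregated cells standing in for the sickness fund data:

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(7)
        # step 1: routine data aggregated into cells (age group x prevalence flag)
        cells = pd.DataFrame({"age": np.repeat(np.arange(20, 90, 5), 2),
                              "prevalent": np.tile([0, 1], 14)})
        cells["n"] = rng.integers(5000, 50000, len(cells))       # insured per cell
        cells["mean_cost"] = (800 + 12*cells["age"] + 900*cells["prevalent"]
                              + rng.normal(0, 30, len(cells)))

        # step 2: smooth gamma regression on the aggregated cells, weighted by
        # cell size -- a spline GLM standing in for the fitted GAM
        fit = smf.glm("mean_cost ~ bs(age, df=4) + prevalent", data=cells,
                      family=sm.families.Gamma(link=sm.families.links.Log()),
                      freq_weights=np.asarray(cells["n"])).fit()

        # steps 3-4: predicted costs with and without the disease
        pred1 = fit.predict(cells.assign(prevalent=1))
        pred0 = fit.predict(cells.assign(prevalent=0))
        print("mean attributable cost per prevalent case:", (pred1 - pred0).mean())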

  6. Understanding cirrus ice crystal number variability for different heterogeneous ice nucleation spectra

    DOE PAGES

    Sullivan, Sylvia C.; Morales Betancourt, Ricardo; Barahona, Donifan; ...

    2016-03-03

    Along with minimizing parameter uncertainty, understanding the cause of temporal and spatial variability of the nucleated ice crystal number, Ni, is key to improving the representation of cirrus clouds in climate models. To this end, sensitivities of Ni to input variables like aerosol number and diameter provide valuable information about nucleation regime and efficiency for a given model formulation. Here we use the adjoint of a cirrus formation parameterization (Barahona and Nenes, 2009b) to understand Ni variability for various ice-nucleating particle (INP) spectra. Inputs are generated with the Community Atmosphere Model version 5, and simulations are done with a theoretically derived spectrum, an empirical lab-based spectrum and two field-based empirical spectra that differ in the nucleation threshold for black carbon particles and in the active site density for dust. The magnitude and sign of the Ni sensitivity to insoluble aerosol number can be directly linked to the nucleation regime and the efficiency of the various INP. The lab-based spectrum yields much higher INP efficiencies than the field-based ones, which reveals a disparity in aerosol surface properties. In conclusion, the Ni sensitivity to temperature tends to be low, due to the compensating effects of temperature on the INP spectrum parameters; this low temperature sensitivity regime has been experimentally reported before but never deconstructed as done here.

  7. Stemflow estimation in a redwood forest using model-based stratified random sampling

    Treesearch

    Jack Lewis

    2003-01-01

    Model-based stratified sampling is illustrated by a case study of stemflow volume in a redwood forest. The approach is actually a model-assisted sampling design in which auxiliary information (tree diameter) is utilized in the design of stratum boundaries to optimize the efficiency of a regression or ratio estimator. The auxiliary information is utilized in both the...

  8. A Structural Model Decomposition Framework for Hybrid Systems Diagnosis

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Bregon, Anibal; Roychoudhury, Indranil

    2015-01-01

    Nowadays, a large number of practical systems in aerospace and industrial environments are best represented as hybrid systems that consist of discrete modes of behavior, each defined by a set of continuous dynamics. These hybrid dynamics make the on-line fault diagnosis task very challenging. In this work, we present a new modeling and diagnosis framework for hybrid systems. Models are composed from sets of user-defined components using a compositional modeling approach. Submodels for residual generation are then generated for a given mode, and reconfigured efficiently when the mode changes. Efficient reconfiguration is established by exploiting causality information within the hybrid system models. The submodels can then be used for fault diagnosis based on residual generation and analysis. We demonstrate the efficient causality reassignment, submodel reconfiguration, and residual generation for fault diagnosis using an electrical circuit case study.

  9. Changing R&D models in research-based pharmaceutical companies.

    PubMed

    Schuhmacher, Alexander; Gassmann, Oliver; Hinder, Markus

    2016-04-27

    New drugs serving unmet medical needs are one of the key value drivers of research-based pharmaceutical companies. The efficiency of research and development (R&D), defined as the successful approval and launch of new medicines (output) relative to the monetary investments required for R&D (input), has been declining for decades. We aimed to identify, analyze and describe the factors that impact R&D efficiency. Based on publicly available information, we reviewed the R&D models of major research-based pharmaceutical companies and analyzed the key challenges and success factors of a sustainable R&D output. We calculated that the R&D efficiencies of major research-based pharmaceutical companies were in the range of USD 3.2-32.3 billion (2006-2014). As these numbers challenge the model of an innovation-driven pharmaceutical industry, we analyzed the concepts that companies are following to increase their R&D efficiency: (A) activities to reduce portfolio and project risk, (B) activities to reduce R&D costs, and (C) activities to increase the innovation potential. Category A comprises measures such as portfolio management and licensing; measures grouped in category B include outsourcing and risk-sharing in late-stage development. Companies have taken diverse steps to increase their innovation potential, and open innovation, exemplified by open source, innovation centers, or crowdsourcing, plays a key role in doing so. In conclusion, research-based pharmaceutical companies need to be aware of the key factors that impact the rate of innovation, R&D cost and probability of success. Depending on their company strategy and R&D set-up, they can opt for one of the following open-innovator roles: knowledge creator, knowledge integrator or knowledge leverager.

  10. Transport properties and efficiency of elastically coupled particles in asymmetric periodic potentials

    NASA Astrophysics Data System (ADS)

    Igarashi, Akito; Tsukamoto, Shinji

    2000-02-01

    Biological molecular motors drive unidirectional transport and transduce chemical energy into mechanical work. To understand this energy conversion, which is a common feature of molecular motors, many workers have studied physical models consisting of Brownian particles in spatially periodic potentials. Most of these models are, however, based on "single-particle" dynamics and are too simple as models for biological motors, especially for actin-myosin motors, which cause muscle contraction. In this paper, particles coupled by elastic strings in an asymmetric periodic potential are considered as a model for such motors. We investigate the dynamics of the model and calculate the efficiency of energy conversion using a molecular dynamics method. In particular, we find that the velocity and efficiency of elastically coupled particles whose springs' natural length is incommensurate with the period of the potential are larger than those of the corresponding single-particle model.
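
    A crude overdamped-Langevin rendering of the setup: a chain of particles coupled by linear springs whose natural length is incommensurate with the period of a smooth asymmetric ratchet potential, driven by an unbiased square-wave force. The drive and all parameter values are assumptions standing in for the paper's molecular-dynamics protocol, and no efficiency is computed.

        import numpy as np

        rng = np.random.default_rng(9)
        k, a = 50.0, 0.63     # spring constant; natural length (incommensurate with period 1)
        N, dt, steps, T = 8, 2e-4, 100_000, 0.4

        def force(x):
            # -V'(x) for the ratchet V(x) = -[sin(2*pi*x) + 0.25*sin(4*pi*x)]/(2*pi)
            return np.cos(2*np.pi*x) + 0.5*np.cos(4*np.pi*x)

        x = np.arange(N) * a
        x0 = x.mean()
        for s in range(steps):
            drive = 2.0 * np.sign(np.sin(np.pi * s * dt))   # unbiased square-wave drive
            spring = np.zeros(N)
            spring[:-1] += k * (x[1:] - x[:-1] - a)
            spring[1:] -= k * (x[1:] - x[:-1] - a)
            x += dt*(force(x) + spring + drive) + np.sqrt(2*T*dt)*rng.normal(size=N)
        print("mean drift velocity:", (x.mean() - x0) / (steps*dt))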

  11. Parallelization of a Fully-Distributed Hydrologic Model using Sub-basin Partitioning

    NASA Astrophysics Data System (ADS)

    Vivoni, E. R.; Mniszewski, S.; Fasel, P.; Springer, E.; Ivanov, V. Y.; Bras, R. L.

    2005-12-01

    A primary obstacle towards advances in watershed simulations has been the limited computational capacity available to most models. The growing trend of model complexity, data availability and physical representation has not been matched by adequate developments in computational efficiency. This situation has created a serious bottleneck which limits existing distributed hydrologic models to small domains and short simulations. In this study, we present novel developments in the parallelization of a fully-distributed hydrologic model. Our work is based on the TIN-based Real-time Integrated Basin Simulator (tRIBS), which provides continuous hydrologic simulation using a multiple resolution representation of complex terrain based on a triangulated irregular network (TIN). While the use of TINs reduces computational demand, the sequential version of the model is currently limited over large basins (>10,000 km2) and long simulation periods (>1 year). To address this, a parallel MPI-based version of the tRIBS model has been implemented and tested using high performance computing resources at Los Alamos National Laboratory. Our approach utilizes domain decomposition based on sub-basin partitioning of the watershed. A stream reach graph based on the channel network structure is used to guide the sub-basin partitioning. Individual sub-basins or sub-graphs of sub-basins are assigned to separate processors to carry out internal hydrologic computations (e.g. rainfall-runoff transformation). Routed streamflow from each sub-basin forms the major hydrologic data exchange along the stream reach graph. Individual sub-basins also share subsurface hydrologic fluxes across adjacent boundaries. We demonstrate how the sub-basin partitioning provides computational feasibility and efficiency for a set of test watersheds in northeastern Oklahoma. We compare the performance of the sequential and parallelized versions to highlight the efficiency gained as the number of processors increases. We also discuss how the coupled use of TINs and parallel processing can lead to feasible long-term simulations in regional watersheds while preserving basin properties at high-resolution.

  12. Behavioral modeling of VCSELs for high-speed optical interconnects

    NASA Astrophysics Data System (ADS)

    Szczerba, Krzysztof; Kocot, Chris

    2018-02-01

    The transition from on-off keying to 4-level pulse amplitude modulation (PAM) in VCSEL-based optical interconnects allows an increase of data rates, at the cost of a 4.8 dB sensitivity penalty. The resulting strained link budget creates a need for accurate VCSEL models for driver integrated circuit (IC) design and system-level simulations. Rate-equation-based equivalent circuit models are convenient for IC design, but system-level analysis requires computationally efficient closed-form behavioral models based on Volterra series and neural networks. In this paper we present and compare these models.

  13. Efficient computation of electrograms and ECGs in human whole heart simulations using a reaction-eikonal model.

    PubMed

    Neic, Aurel; Campos, Fernando O; Prassl, Anton J; Niederer, Steven A; Bishop, Martin J; Vigmond, Edward J; Plank, Gernot

    2017-10-01

    Anatomically accurate and biophysically detailed bidomain models of the human heart have proven a powerful tool for gaining quantitative insight into the links between electrical sources in the myocardium and the concomitant current flow in the surrounding medium as they represent their relationship mechanistically based on first principles. Such models are increasingly considered as a clinical research tool with the perspective of being used, ultimately, as a complementary diagnostic modality. An important prerequisite in many clinical modeling applications is the ability of models to faithfully replicate potential maps and electrograms recorded from a given patient. However, while the personalization of electrophysiology models based on the gold standard bidomain formulation is in principle feasible, the associated computational expenses are significant, rendering their use incompatible with clinical time frames. In this study we report on the development of a novel computationally efficient reaction-eikonal (R-E) model for modeling extracellular potential maps and electrograms. Using a biventricular human electrophysiology model, which incorporates a topologically realistic His-Purkinje system (HPS), we demonstrate by comparing against a high-resolution reaction-diffusion (R-D) bidomain model that the R-E model predicts extracellular potential fields, electrograms as well as ECGs at the body surface with high fidelity and offers vast computational savings greater than three orders of magnitude. Due to their efficiency R-E models are ideally suitable for forward simulations in clinical modeling studies which attempt to personalize electrophysiological model features.

  14. 7Be and hydrological model for more efficient implementation of erosion control measure

    NASA Astrophysics Data System (ADS)

    Al-Barri, Bashar; Bode, Samuel; Blake, William; Ryken, Nick; Cornelis, Wim; Boeckx, Pascal

    2014-05-01

    Increased concern about the on-site and off-site impacts of soil erosion in agricultural and forested areas has spurred interest in innovative methods for assessing, in an unbiased way, spatial and temporal soil erosion rates and redistribution patterns, and hence in precisely estimating the magnitude of the problem so that erosion control measures (ECM) can be applied more efficiently. The latest generation of physically-based hydrological models, which fully couple overland flow and subsurface flow in three dimensions, permits implementing ECM at small and large scales more effectively if coupled with a sediment transport algorithm. While many studies have focused on integrating empirical or numerical models based on traditional erosion budget measurements into 3D hydrological models, few studies have evaluated the efficiency of ECM at the watershed scale, and very little attention has been given to the potential of environmental fallout radionuclides (FRNs) in such applications. The use of the FRN tracer 7Be in soil erosion/deposition research has proved to overcome many (if not all) of the problems associated with conventional approaches, providing reliable data for efficient land use management. This poster will underline the pros and cons of using conventional methods and 7Be tracers to evaluate the efficiency of coconut dams installed as an ECM in an experimental field in Belgium. It will also outline the potential of 7Be to provide valuable inputs for evolving the numerical sediment transport algorithm needed for the hydrological model at field scale, leading to an assessment of the possibility of using this short-lived tracer as a validation tool for the upgraded hydrological model at watershed scale in further steps. Keywords: FRN, erosion control measures, hydrological models

  15. Design and Analysis of Cost-Efficient Sensor Deployment for Tracking Small UAS with Agent-Based Modeling.

    PubMed

    Shin, Sangmi; Park, Seongha; Kim, Yongho; Matson, Eric T

    2016-04-22

    Recently, commercial unmanned aerial systems (UAS) have gained popularity. However, these UAS are potential threats to people in terms of safety in public places, such as public parks or stadiums. To reduce such threats, we consider the design, modeling, and evaluation of a cost-efficient sensor system that detects and tracks small UAS. In this research, we focus on discovering the best sensor deployments by simulating different types and numbers of sensors in a designated area, so as to provide reasonable detection rates at low cost. Also, the system should cover crowded areas more thoroughly than vacant areas, to reduce direct threats to the people underneath. This study utilized the Agent-Based Modeling (ABM) technique to model a system consisting of independent and heterogeneous agents that interact with each other. Our previous work presented the ability to apply ABM to analyze sensor configurations with two types of radars in terms of cost-efficiency. The results from the ABM simulation provide a list of candidate configurations and deployments that can be referred to for applications in real-world environments.

  16. Arranging ISO 13606 archetypes into a knowledge base.

    PubMed

    Kopanitsa, Georgy

    2014-01-01

    To enable the efficient reuse of standards-based medical data, we propose to develop a higher-level information model that complements the archetype model of ISO 13606. This model makes use of the relationships specified in UML to connect medical archetypes into a knowledge base within a repository. UML connectors were analyzed for their ability to be applied in the implementation of a higher-level model that establishes relationships between archetypes. An information model was developed using XML Schema notation. The model allows linking different archetypes of one repository into a knowledge base. Presently it supports several relationships and will be extended in the future.

  17. Energy Efficient Operation of Ammonia Refrigeration Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohammed, Abdul Qayyum; Wenning, Thomas J; Sever, Franc

    Ammonia refrigeration systems typically offer many energy efficiency opportunities because of their size and complexity. This paper develops a model for simulating single-stage ammonia refrigeration systems, describes common energy saving opportunities, and uses the model to quantify them. The simulation model uses data that are typically available during site visits to ammonia refrigeration plants and can be calibrated to actual consumption and performance data if available. Annual electricity consumption for a base-case ammonia refrigeration system is simulated. The model is then used to quantify energy savings for six specific energy efficiency opportunities: reduce refrigeration load, increase suction pressure, employ dual suction, decrease the minimum head pressure set-point, increase evaporative condenser capacity, and reclaim heat. Methods and considerations for achieving each saving opportunity are discussed. The model captures synergistic effects that result when more than one component or parameter is changed. This methodology represents an effective way to model and quantify common energy saving opportunities in ammonia refrigeration systems, and the results indicate the range of savings that might be expected from them.

  1. Analytical method to estimate waterflood performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cremonini, A.S.

    A method to predict oil production resulting from the injection of immiscible fluids is described. The method is based on two models: one of them considers the vertical and displacement efficiencies, assuming unit areal efficiency and, therefore, linear flow. It is a layered model without crossflow in which Buckley-Leverett displacement theory is used for each layer. The results obtained with the linear model are applied to a stream-channel model similar to the one used by Higgins and Leighton; in this way, areal efficiency is taken into account. The principal innovation is the possibility of applying different relative permeability curves to each layer. A numerical example for a five-spot pattern, using relative permeability data obtained from reservoir core samples, is presented.
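
    The per-layer displacement calculation at the heart of such models can be sketched with Corey-type relative permeabilities (all endpoints and exponents assumed): the fractional-flow curve plus the Welge tangent construction give the shock-front saturation of each layer.

        import numpy as np

        def fw(Sw, muw=0.5e-3, muo=2e-3, Swc=0.2, Sor=0.2, krw0=0.3, kro0=0.8):
            """Water fractional flow from Corey-type relative permeabilities."""
            Se = (Sw - Swc) / (1 - Swc - Sor)
            krw, kro = krw0 * Se**2, kro0 * (1 - Se)**2
            return 1.0 / (1.0 + (kro / krw) * (muw / muo))

        # Welge tangent: the chord from (Swc, 0) with the steepest slope that still
        # touches the fractional-flow curve gives the shock-front saturation
        Sw = np.linspace(0.201, 0.799, 2000)
        Swf = Sw[np.argmax(fw(Sw) / (Sw - 0.2))]
        print("front saturation:", round(Swf, 3), " fw at front:", round(fw(Swf), 3))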

  2. Aerosol transport and wet scavenging in deep convective clouds: a case study and model evaluation using a multiple passive tracer analysis approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Qing; Easter, Richard C.; Campuzano-Jost, Pedro

    2015-08-20

    The effect of wet scavenging on ambient aerosols in deep, continental convective clouds in the mid-latitudes is studied for a severe storm case in Oklahoma during the Deep Convective Clouds and Chemistry (DC3) field campaign. A new passive-tracer-based transport analysis framework is developed to characterize the convective transport based on the vertical distribution of several slowly reacting and nearly insoluble trace gases. Based on observations, the passive gas concentration in the upper-tropospheric convective outflow results from a mixture of 47% from the lower level (0-3 km), 21% entrained from the upper troposphere, and 32% from the mid-levels. The transport analysis framework is applied to aerosols to estimate aerosol transport and wet-scavenging efficiency. Observations yield high overall scavenging efficiencies of 81% and 68% for aerosol mass (Dp < 1 μm) and aerosol number (0.03 < Dp < 2.5 μm), respectively. Little chemical selectivity to wet scavenging is seen among the observed submicron sulfate (84%), organic (82%), and ammonium (80%) aerosols, while nitrate has a much lower scavenging efficiency of 57%, likely due to the uptake of nitric acid. Observed larger particles (0.15-2.5 μm) are scavenged more efficiently (84%) than smaller particles (64%; 0.03-0.15 μm). The storm is simulated using the chemistry version of the WRF model. Compared to the observation-based analysis, the standard model underestimates the wet scavenging efficiency for both mass and number concentrations, with low biases of 31% and 40%, respectively. Adding a new treatment of secondary activation significantly improves the simulation results, reducing the bias in scavenging efficiency in mass and number concentrations to <10%. This supports the hypothesis that secondary activation is an important process for wet removal of aerosols in deep convective storms.
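
    The passive-tracer mixture analysis amounts to a small constrained least-squares problem: given source-region tracer signatures and the observed outflow composition, solve for non-negative mixing fractions that sum to one. All concentrations below are invented for illustration.

        import numpy as np
        from scipy.optimize import lsq_linear

        # columns: source-region signatures (lower level, upper troposphere, mid levels)
        # rows: mean mixing ratios of slowly reacting, nearly insoluble tracers (invented)
        C = np.array([[120.0,  60.0,  85.0],
                      [ 95.0, 140.0, 110.0],
                      [ 30.0,  10.0,  18.0],
                      [  1.8,   0.4,   0.9]])
        c_out = np.array([92.0, 112.0, 21.0, 1.15])   # observed convective outflow

        # sum-to-one constraint appended as a heavily weighted extra equation
        A = np.vstack([C, 1e3 * np.ones(3)])
        b = np.append(c_out, 1e3)
        f = lsq_linear(A, b, bounds=(0, 1)).x
        print("mixture fractions (low, UT, mid):", f.round(3))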

  3. Simulation of subsurface storage and recovery of treated effluent injected in a saline aquifer, St. Petersburg, Florida

    USGS Publications Warehouse

    Yobbi, D.K.

    1996-01-01

    The potential for subsurface storage and recovery of treated effluent into the uppermost producing zone (zone A) of the Upper Floridan aquifer in St. Petersburg, Florida, is being studied by the U.S. Geological Survey, in cooperation with the city of St. Petersburg and the Southwest Florida Water Management District. A measure of the success of this practice is the recovery efficiency, or the quantity of water, relative to the quantity injected, that can be recovered before the water that is withdrawn fails to meet water-quality standards. The feasibility of this practice will depend upon the ability of the injected zone to receive, store, and discharge the injected fluid. A cylindrical model of ground-water flow and solute transport, incorporating available data on aquifer properties and water quality, was developed to determine the relation of recovery efficiency to various aquifer and fluid properties that could prevail in the study area. The reference case for testing was a base model considered representative of the saline aquifer underlying St. Petersburg. Parameter variations in the tests represent possible variations in aquifer conditions in the area. The model also was used to study the effect of various cyclic injection and withdrawal schemes on the recovery efficiency of the well and aquifer system. A base simulation assuming 15 days of injection of effluent at a rate of 1.0 million gallons per day and 15 days of withdrawal at a rate of 1.0 million gallons per day was used as a reference to compare the effects of changes in various hydraulic and chemical parameters on recovery efficiency. A recovery efficiency of 20 percent was estimated for the base simulation. For practical ranges of hydraulic and fluid properties that could prevail in the study area, the model analysis indicates that (1) the greater the density contrast between injected and resident formation water, the lower the recovery efficiency, (2) recovery efficiency decreases significantly as dispersion increases, (3) high formation permeability favors low recovery efficiency, and (4) porosity and anisotropy have little effect on recovery efficiency. In several hypothetical tests, the recovery efficiency fluctuated between about 4 and 76 percent. The sensitivity of recovery efficiency to variations in the rate and duration of injection (0.25, 0.50, 1.0, and 2.0 million gallons per day) and withdrawal cycles (60, 180, and 365 days) was determined. For a given operational scheme, recovery efficiency increased as the injection and withdrawal rate increased, with a larger rate of increase at smaller rates than at larger ones. Model results indicate that recovery efficiencies of between about 23 and 37 percent can be obtained for different subsurface storage and recovery schemes. Five successive injection, storage, and recovery cycles can increase the recovery efficiency to about 46 to 62 percent. Over the range of variables studied, recovery efficiency improved with successive cycles, increasing rapidly during initial cycles, then more slowly at later cycles. The operation of a single well used for subsurface storage and recovery appears to be technically feasible under moderately favorable conditions; however, the recovery efficiency is highly dependent upon local physical and operational parameters. A combination of hydraulic, chemical, and operational parameters that minimizes dispersion and buoyancy flow maximizes recovery efficiency. Recovery efficiency was optimal where resident formation water density and permeabilities were relatively similar and low.

  4. Bayes factors based on robust TDT-type tests for family trio design.

    PubMed

    Yuan, Min; Pan, Xiaoqing; Yang, Yaning

    2015-06-01

    The adaptive transmission disequilibrium test (aTDT) and the MAX3 test are two robust and efficient association tests for case-parent family trio data. Both tests incorporate information from the common genetic models, recessive, additive and dominant, and are efficient in power and robust to genetic model specification. The aTDT uses information on departure from Hardy-Weinberg disequilibrium to identify the potential genetic model underlying the data and then applies the corresponding TDT-type test, while the MAX3 test is defined as the maximum of the absolute values of the three TDT-type tests under the three common genetic models. In this article, we propose three robust Bayes procedures for association analysis with the case-parent trio design: the aTDT-based Bayes factor, the MAX3-based Bayes factor, and Bayes model averaging (BMA). The asymptotic distributions of aTDT under the null and alternative hypotheses are derived in order to calculate its Bayes factor. Extensive simulations show that the Bayes factors and the p-values of the corresponding tests are generally consistent, and that these Bayes factors are robust to genetic model specification, especially when the priors on the genetic models are equal. When equal priors are used for the underlying genetic models, the Bayes factor method based on aTDT is more powerful than those based on MAX3 and Bayes model averaging; when the prior places a small (large) probability on the true model, the Bayes factor based on aTDT (BMA) is more powerful. Analysis of simulated rheumatoid arthritis (RA) data from GAW15 is presented to illustrate applications of the proposed methods.

  5. A simulation-based approach for solving assembly line balancing problem

    NASA Astrophysics Data System (ADS)

    Wu, Xiaoyu

    2017-09-01

    The assembly line balancing problem is directly related to production efficiency; it has been discussed since the last century and is still being actively studied. In this paper, the problem is studied by establishing a mathematical model and by simulation. First, a model for determining the smallest production takt (cycle time) for a given number of workstations is analyzed. Based on this model, an exponential smoothing approach is applied to improve the algorithm's efficiency. After this groundwork, the balancing of a gas Stirling engine assembly line is discussed as a case study. Both algorithms are implemented in the Lingo programming environment, and the simulation results demonstrate the validity of the new methods.
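
    A minimal sketch of the smallest-takt subproblem: bisection on the candidate cycle time with a greedy first-fit feasibility check (task times invented; precedence constraints and the exponential-smoothing acceleration are ignored).

        tasks = [4, 6, 2, 5, 7, 3, 4, 2, 6, 5]   # task times (min), invented
        n_stations = 4

        def feasible(c):
            """Greedy first-fit in task order: do the tasks fit in n_stations?"""
            station, load = 1, 0
            for t in tasks:
                if load + t <= c:
                    load += t
                else:
                    station, load = station + 1, t
            return station <= n_stations

        lo, hi = max(tasks), sum(tasks)
        while lo < hi:                           # bisection on the candidate takt
            mid = (lo + hi) // 2
            lo, hi = (lo, mid) if feasible(mid) else (mid + 1, hi)
        print("smallest feasible cycle time:", lo)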

  6. Design, Modeling, Fabrication & Characterization of Industrial Si Solar Cells

    NASA Astrophysics Data System (ADS)

    Chowdhury, Ahrar Ahmed

    Photovoltaics is a viable solution for meeting energy demand in an eco-friendly way. To ensure mass access to photovoltaic electricity, cost-effective approaches need to be adopted. This thesis aims at a substrate-independent fabrication process in order to achieve high-efficiency, cost-effective industrial silicon (Si) solar cells. The most cost-effective structures, such as Al-BSF (aluminum back surface field), FSF (front surface field) and bifacial cells, are investigated in detail to exploit their efficiency potential. First, we introduce a two-dimensional simulation model for the design and modeling of the Si solar cells most commonly used in today's PV arena. The best modeled results for high-efficiency Al-BSF, FSF and bifacial cells are 20.50%, 22% and 21.68%, respectively. Special attention is given to the metallization design in all structures in order to reduce the Ag cost. Furthermore, detailed design and modeling were performed for FSF and bifacial cells. The FSF cell has the potential to gain 0.42% absolute efficiency by combining the emitter design and front surface passivation. The prospects of bifacial cells can be revealed by optimizing gridline widths and gridline numbers; since bifacial cells have metallization on both sides, a twofold cost saving is possible via innovative metallization design. Following the modeling, an effort is undertaken to reach the modeled results in the fabrication process. We propose a substrate-independent fabrication process aimed at establishing simultaneous processing sequences for both monofacial and bifacial cells. For contact formation, cost-effective screen-printing technology is utilized throughout this thesis. The best Al-BSF cell attained an efficiency of ~19.40%; detailed characterization was carried out to establish a roadmap toward an Al-BSF cell of >20.50% efficiency. Since n-type cells are free from light-induced degradation (LID), there has recently been growing interest in FSF cells. Our best fabricated FSF cell achieved ~18.40% efficiency. Characterization of such cells shows that performance can be further improved by utilizing high-lifetime base wafers, and we show a step-by-step improvement of the device parameters to achieve a ~22%-efficiency FSF cell. Finally, bifacial cells were fabricated with 13.32% front and 9.65% rear efficiency; the efficiency limitation is due to the quality of the base wafer. A detailed resistance breakdown was conducted on these cells to analyze parasitic resistance losses. It was found that base and gridline resistances dominated the FF loss. However, a very low contact resistance of 20 mΩ·cm2 on the front side and 2 mΩ·cm2 on the rear side was observed when utilizing the same Ag paste for front and rear contact formation. This might provide a pathway in the search for an optimized Ag paste to attain high-efficiency screen-printed bifacial cells; detailed investigations need to be carried out to unveil the properties of this Ag paste. In future work, more focus will be given to the metallization design to further reduce the Ag cost, and an Al2O3 passivation layer will be incorporated as a means to attain a ~23%-efficiency screen-printed bifacial cell.

  7. XML-Based SHINE Knowledge Base Interchange Language

    NASA Technical Reports Server (NTRS)

    James, Mark; Mackey, Ryan; Tikidjian, Raffi

    2008-01-01

    The SHINE Knowledge Base Interchange Language software has been designed to more efficiently send new knowledge bases to spacecraft that have been embedded with the Spacecraft Health Inference Engine (SHINE) tool. The intention of the behavioral model is to capture most of the information generally associated with a spacecraft functional model, while specifically addressing the needs of execution within SHINE and Livingstone. As such, it has some constructs that are based on one or the other.

  8. Multidisciplinary optimization of aeroservoelastic systems using reduced-size models

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay

    1992-01-01

    Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.

  9. Modelling and analysis of solar cell efficiency distributions

    NASA Astrophysics Data System (ADS)

    Wasmer, Sven; Greulich, Johannes

    2017-08-01

    We present an approach to model the distribution of solar cell efficiencies achieved in production lines based on numerical simulations, metamodeling and Monte Carlo simulations. We validate our methodology using the example of an industrially feasible p-type multicrystalline silicon “passivated emitter and rear cell” process. Applying the metamodel, we investigate the impact of each input parameter on the distribution of cell efficiencies in a variance-based sensitivity analysis, identifying the parameters and processes that need to be improved and controlled most accurately. We show that if these could be optimized, the mean cell efficiency of our examined cell process would increase from 17.62% ± 0.41% to 18.48% ± 0.09%. As the method relies on advanced characterization and simulation techniques, we furthermore introduce a simplification that enhances applicability by requiring only two common measurements of finished cells. The presented approaches can be especially helpful for ramping up production, but can also be applied to enhance established manufacturing.

  10. Estimation of optimum density and temperature for maximum efficiency of tin ions in Z discharge extreme ultraviolet sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masnavi, Majid; Nakajima, Mitsuo; Hotta, Eiki

    Extreme ultraviolet (EUV) discharge-based lamps for EUV lithography need to generate extremely high power in the narrow spectral band of 13.5 ± 0.135 nm. A simplified collisional-radiative model and a radiative transfer solution for an isotropic medium were utilized to investigate the wavelength-integrated light output of tin (Sn) plasma. Detailed calculations using the Hebrew University-Lawrence Livermore atomic code were employed to determine the necessary atomic data for the Sn⁴⁺ to Sn¹³⁺ charge states. The result of the model is compared with experimental spectra from a Sn-based discharge-produced plasma. The analysis reveals that a considerably larger efficiency, compared to the so-called efficiency of a black-body radiator, is obtained for an electron density of about 10¹⁸ cm⁻³. For higher electron densities, the spectral efficiency of Sn plasma is reduced due to the saturation of resonance transitions.

  11. Joint Power Charging and Routing in Wireless Rechargeable Sensor Networks.

    PubMed

    Jia, Jie; Chen, Jian; Deng, Yansha; Wang, Xingwei; Aghvami, Abdol-Hamid

    2017-10-09

    The development of wireless power transfer (WPT) technology has inspired the transition from traditional battery-based wireless sensor networks (WSNs) towards wireless rechargeable sensor networks (WRSNs). While extensive efforts have been made to improve charging efficiency, little has been done for routing optimization. In this work, we present a joint optimization model to maximize both charging efficiency and routing structure. By analyzing the structure of the optimization model, we first decompose the problem and propose a heuristic algorithm to find the optimal charging efficiency for the predefined routing tree. Furthermore, by coding the many-to-one communication topology as an individual, we further propose to apply a genetic algorithm (GA) for the joint optimization of both routing and charging. The genetic operations, including tree-based recombination and mutation, are proposed to obtain a fast convergence. Our simulation results show that the heuristic algorithm reduces the number of resident locations and the total moving distance. We also show that our proposed algorithm achieves a higher charging efficiency compared with existing algorithms.
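
    The tree-based GA encoding can be illustrated with a generic skeleton. The sketch below is not the authors' algorithm: the topology, candidate parent sets, fitness function, and one-point recombination are all illustrative assumptions; it only shows how a many-to-one routing tree can be coded as an individual and evolved.

```python
import random

# Hypothetical setup: node i may attach to any parent in CANDIDATES[i]
# (node 0 is the sink). An individual is a parent-pointer list encoding
# a many-to-one routing tree.
N = 8
CANDIDATES = {i: list(range(0, i)) for i in range(1, N)}  # toy topology

def random_tree():
    return [None] + [random.choice(CANDIDATES[i]) for i in range(1, N)]

def fitness(tree):
    # Placeholder objective: prefer shallow trees (shorter routes).
    def depth(i):
        return 0 if i == 0 else 1 + depth(tree[i])
    return -sum(depth(i) for i in range(1, N))

def crossover(a, b):
    # One-point recombination of parent pointers keeps both halves feasible.
    cut = random.randrange(1, N)
    return a[:cut] + b[cut:]

def mutate(tree, rate=0.2):
    child = tree[:]
    for i in range(1, N):
        if random.random() < rate:
            child[i] = random.choice(CANDIDATES[i])
    return child

pop = [random_tree() for _ in range(30)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                     for _ in range(20)]
print(max(pop, key=fitness))
```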

  12. Joint Power Charging and Routing in Wireless Rechargeable Sensor Networks

    PubMed Central

    Jia, Jie; Chen, Jian; Deng, Yansha; Wang, Xingwei; Aghvami, Abdol-Hamid

    2017-01-01

    The development of wireless power transfer (WPT) technology has inspired the transition from traditional battery-based wireless sensor networks (WSNs) towards wireless rechargeable sensor networks (WRSNs). While extensive efforts have been made to improve charging efficiency, little has been done for routing optimization. In this work, we present a joint optimization model to maximize both charging efficiency and routing structure. By analyzing the structure of the optimization model, we first decompose the problem and propose a heuristic algorithm to find the optimal charging efficiency for the predefined routing tree. Furthermore, by coding the many-to-one communication topology as an individual, we further propose to apply a genetic algorithm (GA) for the joint optimization of both routing and charging. The genetic operations, including tree-based recombination and mutation, are proposed to obtain a fast convergence. Our simulation results show that the heuristic algorithm reduces the number of resident locations and the total moving distance. We also show that our proposed algorithm achieves a higher charging efficiency compared with existing algorithms. PMID:28991200

  13. Travelling Wave Pulse Coupled Oscillator (TWPCO) Using a Self-Organizing Scheme for Energy-Efficient Wireless Sensor Networks.

    PubMed

    Al-Mekhlafi, Zeyad Ghaleb; Hanapi, Zurina Mohd; Othman, Mohamed; Zukarnain, Zuriati Ahmad

    2017-01-01

    Recently, Pulse Coupled Oscillator (PCO)-based travelling waves have attracted substantial attention by researchers in wireless sensor network (WSN) synchronization. Because WSNs are generally artificial occurrences that mimic natural phenomena, the PCO utilizes firefly synchronization of attracting mating partners for modelling the WSN. However, given that sensor nodes are unable to receive messages while transmitting data packets (due to deafness), the PCO model may not be efficient for sensor network modelling. To overcome this limitation, this paper proposed a new scheme called the Travelling Wave Pulse Coupled Oscillator (TWPCO). For this, the study used a self-organizing scheme for energy-efficient WSNs that adopted travelling wave biologically inspired network systems based on phase locking of the PCO model to counteract deafness. From the simulation, it was found that the proposed TWPCO scheme attained a steady state after a number of cycles. It also showed superior performance compared to other mechanisms, with a reduction in the total energy consumption of 25%. The results showed that the performance improved by 13% in terms of data gathering. Based on the results, the proposed scheme avoids the deafness that occurs in the transmit state in WSNs and increases the data collection throughout the transmission states in WSNs.

  14. Travelling Wave Pulse Coupled Oscillator (TWPCO) Using a Self-Organizing Scheme for Energy-Efficient Wireless Sensor Networks

    PubMed Central

    Hanapi, Zurina Mohd; Othman, Mohamed; Zukarnain, Zuriati Ahmad

    2017-01-01

    Recently, Pulse Coupled Oscillator (PCO)-based travelling waves have attracted substantial attention by researchers in wireless sensor network (WSN) synchronization. Because WSNs are generally artificial occurrences that mimic natural phenomena, the PCO utilizes firefly synchronization of attracting mating partners for modelling the WSN. However, given that sensor nodes are unable to receive messages while transmitting data packets (due to deafness), the PCO model may not be efficient for sensor network modelling. To overcome this limitation, this paper proposed a new scheme called the Travelling Wave Pulse Coupled Oscillator (TWPCO). For this, the study used a self-organizing scheme for energy-efficient WSNs that adopted travelling wave biologically inspired network systems based on phase locking of the PCO model to counteract deafness. From the simulation, it was found that the proposed TWPCO scheme attained a steady state after a number of cycles. It also showed superior performance compared to other mechanisms, with a reduction in the total energy consumption of 25%. The results showed that the performance improved by 13% in terms of data gathering. Based on the results, the proposed scheme avoids the deafness that occurs in the transmit state in WSNs and increases the data collection throughout the transmission states in WSNs. PMID:28056020

  15. Assessing the predictive capability of randomized tree-based ensembles in streamflow modelling

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Castelletti, A.

    2013-02-01

    Combining randomization methods with ensemble prediction is emerging as an effective option to balance accuracy and computational efficiency in data-driven modeling. In this paper we investigate the prediction capability of extremely randomized trees (Extra-Trees), in terms of accuracy, explanation ability and computational efficiency, in a streamflow modeling exercise. Extra-Trees are a totally randomized tree-based ensemble method that (i) alleviates the poor generalization property and tendency to overfitting of traditional standalone decision trees (e.g. CART); (ii) is computationally very efficient; and (iii) allows one to infer the relative importance of the input variables, which might help in the ex-post physical interpretation of the model. The Extra-Trees potential is analyzed on two real-world case studies, the Marina catchment (Singapore) and the Canning River (Western Australia), representing two different morphoclimatic contexts, in comparison with other tree-based methods (CART and M5) and parametric data-driven approaches (ANNs and multiple linear regression). Results show that Extra-Trees perform comparably to the best of the benchmarks (i.e. M5) in both watersheds, while outperforming the other approaches in terms of computational requirements when adopted on large datasets. In addition, the ranking of the input variables provided by the method can be given a physically meaningful interpretation.
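
    A minimal sketch of such an ensemble in scikit-learn, with synthetic data standing in for lagged hydro-meteorological predictors:

```python
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import train_test_split
import numpy as np

# Hypothetical data: rows = time steps, columns = candidate drivers
# (e.g., lagged rainfall and flows); y = streamflow to predict.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = ExtraTreesRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("R^2 on held-out data:", model.score(X_te, y_te))
# Relative variable importance supports ex-post physical interpretation.
print("importances:", model.feature_importances_.round(3))
```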

  16. Assessing the predictive capability of randomized tree-based ensembles in streamflow modelling

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Castelletti, A.

    2013-07-01

    Combining randomization methods with ensemble prediction is emerging as an effective option to balance accuracy and computational efficiency in data-driven modelling. In this paper, we investigate the prediction capability of extremely randomized trees (Extra-Trees), in terms of accuracy, explanation ability and computational efficiency, in a streamflow modelling exercise. Extra-Trees are a totally randomized tree-based ensemble method that (i) alleviates the poor generalisation property and tendency to overfitting of traditional standalone decision trees (e.g. CART); (ii) is computationally efficient; and, (iii) allows to infer the relative importance of the input variables, which might help in the ex-post physical interpretation of the model. The Extra-Trees potential is analysed on two real-world case studies - Marina catchment (Singapore) and Canning River (Western Australia) - representing two different morphoclimatic contexts. The evaluation is performed against other tree-based methods (CART and M5) and parametric data-driven approaches (ANNs and multiple linear regression). Results show that Extra-Trees perform comparatively well to the best of the benchmarks (i.e. M5) in both the watersheds, while outperforming the other approaches in terms of computational requirement when adopted on large datasets. In addition, the ranking of the input variable provided can be given a physically meaningful interpretation.

  17. An Efficient Neural-Network-Based Microseismic Monitoring Platform for Hydraulic Fracture on an Edge Computing Architecture.

    PubMed

    Zhang, Xiaopu; Lin, Jun; Chen, Zubin; Sun, Feng; Zhu, Xi; Fang, Gengfa

    2018-06-05

    Microseismic monitoring is one of the most critical technologies for hydraulic fracturing in oil and gas production. To detect events in an accurate and efficient way, there are two major challenges. One challenge is how to achieve high accuracy due to a poor signal-to-noise ratio (SNR). The other one is concerned with real-time data transmission. Taking these challenges into consideration, an edge-computing-based platform, namely Edge-to-Center LearnReduce, is presented in this work. The platform consists of a data center with many edge components. At the data center, a neural network model combined with convolutional neural network (CNN) and long short-term memory (LSTM) is designed and this model is trained by using previously obtained data. Once the model is fully trained, it is sent to edge components for events detection and data reduction. At each edge component, a probabilistic inference is added to the neural network model to improve its accuracy. Finally, the reduced data is delivered to the data center. Based on experiment results, a high detection accuracy (over 96%) with less transmitted data (about 90%) was achieved by using the proposed approach on a microseismic monitoring system. These results show that the platform can simultaneously improve the accuracy and efficiency of microseismic monitoring.
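
    As a minimal sketch of the kind of CNN + LSTM detector described (layer counts and sizes are assumptions, not the published architecture), using the Keras functional API:

```python
import tensorflow as tf

# Single-channel microseismic window in, event probability out.
inputs = tf.keras.Input(shape=(2048, 1))                  # waveform window
x = tf.keras.layers.Conv1D(16, 7, activation="relu")(inputs)
x = tf.keras.layers.MaxPooling1D(4)(x)
x = tf.keras.layers.Conv1D(32, 5, activation="relu")(x)
x = tf.keras.layers.MaxPooling1D(4)(x)
x = tf.keras.layers.LSTM(64)(x)                           # temporal context
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # event / no event

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```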

  18. Phenomenological modeling of nonlinear holograms based on metallic geometric metasurfaces.

    PubMed

    Ye, Weimin; Li, Xin; Liu, Juan; Zhang, Shuang

    2016-10-31

    Benefiting from efficient local phase and amplitude control at the subwavelength scale, metasurfaces offer a new platform for computer-generated holography with high spatial resolution. Three-dimensional and highly efficient holograms have been realized with metasurfaces constituted of subwavelength meta-atoms with spatially varying geometries or orientations. Metasurfaces have recently been extended to the nonlinear optical regime to generate holographic images in harmonic-generation waves. Thus far, there has been no vector-field simulation of nonlinear metasurface holograms because of the tremendous computational challenge of numerically calculating the collective nonlinear responses of the large number of different subwavelength meta-atoms in a hologram. Here, we propose a general phenomenological method to model nonlinear metasurface holograms based on the assumption that every meta-atom can be described by a localized nonlinear polarizability tensor. Applied to geometric nonlinear metasurfaces, we numerically model the holographic images formed by the second-harmonic waves of different spins. We show that, in contrast to metasurface holograms operating in the linear optical regime, the wavelength of the incident fundamental light should be slightly detuned from the fundamental resonant wavelength to optimize the efficiency and quality of nonlinear holographic images. The proposed modeling provides a general method to simulate nonlinear optical devices based on metallic metasurfaces.

  19. Role of Edges in Complex Network Epidemiology

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Jiang, Zhi-Hong; Wang, Hui; Xie, Fei; Chen, Chao

    2012-09-01

    In complex network epidemiology, diseases spread along contacting edges between individuals, and different edges may play different roles in epidemic outbreaks. Quantifying the efficiency of edges is an important step towards arresting epidemics. In this paper, we study the efficiency of edges in general susceptible-infected-recovered (SIR) models and introduce the transmission capability to measure the efficiency of edges. Results show that deleting the edges with the highest transmission capability greatly decreases epidemics on scale-free networks. Based on the message-passing approach, we obtain an exact mathematical solution on configuration-model networks with edge deletion in the large-size limit.
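
    The paper's transmission capability measure is not reproduced here; as a hedged illustration of the edge-deletion experiment, the sketch below uses edge betweenness centrality as a stand-in ranking on a synthetic scale-free network:

```python
import networkx as nx

# Illustrative stand-in: rank edges by betweenness centrality (a
# hypothetical proxy for "transmission capability") and delete the top ones.
G = nx.barabasi_albert_graph(500, 3, seed=1)  # scale-free test network
scores = nx.edge_betweenness_centrality(G)
top_edges = sorted(scores, key=scores.get, reverse=True)[:50]

G.remove_edges_from(top_edges)
# After targeted deletion, the giant component (a proxy for the maximum
# possible outbreak size in SIR models) shrinks markedly.
print("largest component:", len(max(nx.connected_components(G), key=len)))
```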

  20. Efficient FFT Algorithm for Psychoacoustic Model of the MPEG-4 AAC

    NASA Astrophysics Data System (ADS)

    Lee, Jae-Seong; Lee, Chang-Joon; Park, Young-Cheol; Youn, Dae-Hee

    This paper proposes an efficient FFT algorithm for the Psycho-Acoustic Model (PAM) of MPEG-4 AAC. The proposed algorithm synthesizes FFT coefficients from MDCT and MDST coefficients through circular convolution. The computational complexity of obtaining these coefficients is approximately half that of the original FFT. We also design a new PAM based on the proposed FFT algorithm, which has 15% lower computational complexity than the original PAM without degradation of sound quality. Subjective as well as objective test results are presented to confirm the efficiency of the proposed FFT computation algorithm and the PAM.

  1. Enhancing thermoelectric properties through a three-terminal benzene molecule

    NASA Astrophysics Data System (ADS)

    Sartipi, Z.; Vahedi, J.

    2018-05-01

    The thermoelectric transport through a benzene molecule with three metallic terminals is discussed. Using general local and non-local transport coefficients, we investigate different conductance and thermopower coefficients within the linear response regime. Based on the Onsager coefficients, which depend on the number of terminals, the efficiency at maximum power is also studied. In the three-terminal setup, a great enhancement of the figure of merit is observed when the temperature differences are tuned. The results also show that the third-terminal model can be useful in improving the efficiency at maximum output power compared to the two-terminal model.

  2. Online Monitoring System of Air Distribution in Pulverized Coal-Fired Boiler Based on Numerical Modeling

    NASA Astrophysics Data System (ADS)

    Żymełka, Piotr; Nabagło, Daniel; Janda, Tomasz; Madejski, Paweł

    2017-12-01

    Balanced distribution of air in a coal-fired boiler is one of the most important factors in the combustion process and is strongly connected to the overall system efficiency. Reliable and continuous information about combustion airflow and fuel rate is essential for achieving an optimal stoichiometric ratio as well as efficient and safe operation of a boiler. Imbalances in air distribution result in reduced boiler efficiency, increased gas pollutant emissions and operating problems, such as corrosion, slagging or fouling. Monitoring of airflow trends in the boiler is an effective basis for further analysis; it can help identify important dependencies and trigger optimization actions. Accurate real-time monitoring of the air distribution in the boiler can bring economic, environmental and operational benefits. The paper presents a novel concept for an online monitoring system of air distribution in a coal-fired boiler based on real-time numerical calculations. The proposed mathematical model allows for identification of the mass flow rates of secondary air to individual burners and to overfire air (OFA) nozzles. Numerical models of the air and flue gas system were developed using software for power plant simulation. The correctness of the developed model was verified and validated against reference measurement values. The presented numerical model for real-time monitoring of air distribution is capable of continuously determining the complete air flows based on data available from the distributed control system (DCS).

  3. An Efficient Interactive Model for On-Demand Sensing-As-A-Services of Sensor-Cloud

    PubMed Central

    Dinh, Thanh; Kim, Younghan

    2016-01-01

    This paper proposes an efficient interactive model for the sensor-cloud to enable the sensor-cloud to efficiently provide on-demand sensing services for multiple applications with different requirements at the same time. The interactive model is designed for both the cloud and sensor nodes to optimize the resource consumption of physical sensors, as well as the bandwidth consumption of sensing traffic. In the model, the sensor-cloud plays a key role in aggregating application requests to minimize the workloads required for constrained physical nodes while guaranteeing that the requirements of all applications are satisfied. Physical sensor nodes perform their sensing under the guidance of the sensor-cloud. Based on the interactions with the sensor-cloud, physical sensor nodes adapt their scheduling accordingly to minimize their energy consumption. Comprehensive experimental results show that our proposed system achieves a significant improvement in terms of the energy consumption of physical sensors, the bandwidth consumption from the sink node to the sensor-cloud, the packet delivery latency, reliability and scalability, compared to current approaches. Based on the obtained results, we discuss the economical benefits and how the proposed system enables a win-win model in the sensor-cloud. PMID:27367689

  4. A multi-objective model for closed-loop supply chain optimization and efficient supplier selection in a competitive environment considering quantity discount policy

    NASA Astrophysics Data System (ADS)

    Jahangoshai Rezaee, Mustafa; Yousefi, Samuel; Hayati, Jamileh

    2017-06-01

    Supplier selection and allocation of optimal order quantities are two of the most important processes in closed-loop supply chains (CLSC) and reverse logistics (RL). Providing high-quality raw material is considered a basic requirement for a manufacturer to produce popular products and achieve greater market share. On the other hand, in a competitive environment, suppliers have to offer customers incentives such as discounts and enhance the quality of their products in competition with other suppliers. Therefore, in this study, a model is presented for CLSC optimization, efficient supplier selection, and order allocation considering a quantity discount policy. It is formulated as a multi-objective program based on an integrated simultaneous data envelopment analysis-Nash bargaining game. Maximizing profit and efficiency and minimizing defect and delivery-delay rates are taken into account. Besides supplier selection, the suggested model selects refurbishing sites and determines the number of products and parts in each sector of the network. The model is solved using the global criteria method. Furthermore, a numerical example based on related studies is examined to validate it.

  5. An Efficient Interactive Model for On-Demand Sensing-As-A-Services of Sensor-Cloud.

    PubMed

    Dinh, Thanh; Kim, Younghan

    2016-06-28

    This paper proposes an efficient interactive model for the sensor-cloud to enable the sensor-cloud to efficiently provide on-demand sensing services for multiple applications with different requirements at the same time. The interactive model is designed for both the cloud and sensor nodes to optimize the resource consumption of physical sensors, as well as the bandwidth consumption of sensing traffic. In the model, the sensor-cloud plays a key role in aggregating application requests to minimize the workloads required for constrained physical nodes while guaranteeing that the requirements of all applications are satisfied. Physical sensor nodes perform their sensing under the guidance of the sensor-cloud. Based on the interactions with the sensor-cloud, physical sensor nodes adapt their scheduling accordingly to minimize their energy consumption. Comprehensive experimental results show that our proposed system achieves a significant improvement in terms of the energy consumption of physical sensors, the bandwidth consumption from the sink node to the sensor-cloud, the packet delivery latency, reliability and scalability, compared to current approaches. Based on the obtained results, we discuss the economical benefits and how the proposed system enables a win-win model in the sensor-cloud.

  6. Efficient multidimensional regularization for Volterra series estimation

    NASA Astrophysics Data System (ADS)

    Birpoutsoukis, Georgios; Csurcsia, Péter Zoltán; Schoukens, Johan

    2018-05-01

    This paper presents an efficient nonparametric time domain nonlinear system identification method. It is shown how truncated Volterra series models can be efficiently estimated without the need of long, transient-free measurements. The method is a novel extension of the regularization methods that have been developed for impulse response estimates of linear time invariant systems. To avoid the excessive memory needs in case of long measurements or large number of estimated parameters, a practical gradient-based estimation method is also provided, leading to the same numerical results as the proposed Volterra estimation method. Moreover, the transient effects in the simulated output are removed by a special regularization method based on the novel ideas of transient removal for Linear Time-Varying (LTV) systems. Combining the proposed methodologies, the nonparametric Volterra models of the cascaded water tanks benchmark are presented in this paper. The results for different scenarios varying from a simple Finite Impulse Response (FIR) model to a 3rd degree Volterra series with and without transient removal are compared and studied. It is clear that the obtained models capture the system dynamics when tested on a validation dataset, and their performance is comparable with the white-box (physical) models.
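
    As a minimal sketch of the underlying idea, the ridge-regularized estimate of a first-order (FIR) Volterra kernel can be written in a few lines; the impulse response, noise level, and regularization strength below are illustrative assumptions:

```python
import numpy as np

# Ridge-regularized FIR (first-order Volterra) estimation, the simplest
# instance of the regularization idea described above.
rng = np.random.default_rng(0)
n, N = 30, 400                      # FIR length, data length
g_true = 0.8 ** np.arange(n)        # hypothetical impulse response
u = rng.normal(size=N)
y = np.convolve(u, g_true)[:N] + 0.05 * rng.normal(size=N)

# Regression matrix of lagged inputs: y[t] ~ sum_k g[k] * u[t-k]
Phi = np.column_stack([np.concatenate([np.zeros(k), u[:N - k]])
                       for k in range(n)])
lam = 1.0                           # regularization strength (assumed)
g_hat = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n), Phi.T @ y)
print("max abs error:", np.abs(g_hat - g_true).max())
```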

  7. Where neuroscience and dynamic system theory meet autonomous robotics: a contracting basal ganglia model for action selection.

    PubMed

    Girard, B; Tabareau, N; Pham, Q C; Berthoz, A; Slotine, J-J

    2008-05-01

    Action selection, the problem of choosing what to do next, is central to any autonomous agent architecture. We use here a multi-disciplinary approach at the convergence of neuroscience, dynamical system theory and autonomous robotics, in order to propose an efficient action selection mechanism based on a new model of the basal ganglia. We first describe new developments of contraction theory regarding locally projected dynamical systems. We exploit these results to design a stable computational model of the cortico-baso-thalamo-cortical loops. Based on recent anatomical data, we include usually neglected neural projections, which participate in performing accurate selection. Finally, the efficiency of this model as an autonomous robot action selection mechanism is assessed in a standard survival task. The model exhibits valuable dithering avoidance and energy-saving properties, when compared with a simple if-then-else decision rule.

  8. Scalable methodology for large scale building energy improvement: Relevance of calibration in model-based retrofit analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heo, Yeonsook; Augenbroe, Godfried; Graziano, Diane

    2015-05-01

    The increasing interest in retrofitting existing buildings is motivated by the need to make a major contribution to enhancing building energy efficiency and reducing energy consumption and CO2 emissions by the built environment. This paper examines the relevance of calibration in model-based analysis to support decision-making for energy and carbon efficiency retrofits of individual buildings and portfolios of buildings. The authors formulate a set of real retrofit decision-making situations and evaluate the role of calibration by using a case study that compares predictions and decisions from an uncalibrated model with those of a calibrated model. The case study illustrates both the mechanics and outcomes of a practical alternative to the expert- and time-intensive application of dynamic energy simulation models for large-scale retrofit decision-making under uncertainty.

  9. Realizable feed-element patterns and optimum aperture efficiency in multibeam antenna systems

    NASA Technical Reports Server (NTRS)

    Yngvesson, K. S.; Rahmat-Samii, Y.; Johansson, J. F.; Kim, Y. S.

    1988-01-01

    The results of an earlier paper by Rahmat-Samii et al. (1981), regarding realizable patterns from feed elements that are part of an array that feeds a reflector antenna, are extended. The earlier paper used a cos^q(theta) model for the element radiation pattern, whereas here a parametric study is performed using a model that assumes a central beam of cos^q(theta) shape with a constant sidelobe level outside the central beam. Realizable q-values are constrained by the maximum directivity based on the feed element area. The optimum aperture efficiency (excluding array feed network losses) in an array-reflector system is evaluated as a function of element spacing using this model as well as the model of the earlier paper. Experimental data for tapered slot antenna (TSA) arrays are in agreement with the conclusions based on the model.

  10. Indoor Residual Spraying Delivery Models to Prevent Malaria: Comparison of Community- and District-Based Approaches in Ethiopia

    PubMed Central

    Johns, Benjamin; Yihdego, Yemane Yeebiyo; Kolyada, Lena; Dengela, Dereje; Chibsa, Sheleme; Dissanayake, Gunawardena; George, Kristen; Taffese, Hiwot Solomon; Lucas, Bradford

    2016-01-01

    ABSTRACT Background: Indoor residual spraying (IRS) for malaria prevention has traditionally been implemented in Ethiopia by the district health office with technical and operational inputs from regional, zonal, and central health offices. The United States President's Malaria Initiative (PMI) in collaboration with the Government of Ethiopia tested the effectiveness and efficiency of integrating IRS into the government-funded community-based rural health services program. Methods: Between 2012 and 2014, PMI conducted a mixed-methods study in 11 districts of Oromia region to compare district-based IRS (DB IRS) and community-based IRS (CB IRS) models. In the DB IRS model, each district included 2 centrally located operational sites where spray teams camped during the IRS campaign and from which they traveled to the villages to conduct spraying. In the CB IRS model, spray team members were hired from the communities in which they operated, thus eliminating the need for transport and camping facilities. The study team evaluated spray coverage, the quality of spraying, compliance with environmental and safety standards, and cost and performance efficiency. Results: The average number of eligible structures found and sprayed in the CB IRS districts increased by 19.6% and 20.3%, respectively, between 2012 (before CB IRS) and 2013 (during CB IRS). Between 2013 and 2014, the numbers increased by about 14%. In contrast, in the DB IRS districts the number of eligible structures found increased by only 8.1% between 2012 and 2013 and by 0.4% between 2013 and 2014. The quality of CB IRS operations was good and comparable to that in the DB IRS model, according to wall bioassay tests. Some compliance issues in the first year of CB IRS implementation were corrected in the second year, bringing compliance up to the level of the DB IRS model. The CB IRS model had, on average, higher amortized costs per district than the DB IRS model but lower unit costs per structure sprayed and per person protected because the community-based model found and sprayed more structures. Conclusion: Established community-based service delivery systems can be adapted to include a seasonal IRS campaign alongside the community-based health workers' routine activities to improve performance efficiency. Further modifications of the community-based IRS model may reduce the total cost of the intervention and increase its financial sustainability. PMID:27965266

  11. Graph cuts for curvature based image denoising.

    PubMed

    Bae, Egil; Shi, Juan; Tai, Xue-Cheng

    2011-05-01

    Minimization of total variation (TV) is a well-known method for image denoising. Recently, the relationship between TV minimization problems and binary MRF models has been much explored. This has resulted in some very efficient combinatorial optimization algorithms for the TV minimization problem in the discrete setting via graph cuts. To overcome limitations, such as staircasing effects, of the relatively simple TV model, variational models based upon higher order derivatives have been proposed. The Euler's elastica model is one such higher order model of central importance, which minimizes the curvature of all level lines in the image. Traditional numerical methods for minimizing the energy in such higher order models are complicated and computationally complex. In this paper, we will present an efficient minimization algorithm based upon graph cuts for minimizing the energy in the Euler's elastica model, by simplifying the problem to that of solving a sequence of easy graph representable problems. This sequence has connections to the gradient flow of the energy function, and converges to a minimum point. The numerical experiments show that our new approach is more effective in maintaining smooth visual results while preserving sharp features better than TV models.

  12. IEEE Photovoltaic Specialists Conference, 20th, Las Vegas, NV, Sept. 26-30, 1988, Conference Record. Volumes 1 & 2

    NASA Astrophysics Data System (ADS)

    Various papers on photovoltaics are presented. The general topics considered include: amorphous materials and cells; amorphous silicon-based solar cells and modules; amorphous silicon-based materials and processes; amorphous materials characterization; amorphous silicon; high-efficiency single crystal solar cells; multijunction and heterojunction cells; high-efficiency III-V cells; modeling and characterization of high-efficiency cells; LIPS flight experience; space mission requirements and technology; advanced space solar cell technology; space environmental effects and modeling; space solar cell and array technology; terrestrial systems and array technology; terrestrial utility and stand-alone applications and testing; terrestrial concentrator and storage technology; terrestrial stand-alone systems applications; terrestrial systems test and evaluation; terrestrial flatplate and concentrator technology; use of polycrystalline materials; polycrystalline II-VI compound solar cells; analysis of and fabrication procedures for compound solar cells.

  13. A Practical Guide to Calibration of a GSSHA Hydrologic Model Using ERDC Automated Model Calibration Software - Effective and Efficient Stochastic Global Optimization

    DTIC Science & Technology

    2012-02-01

    parameter estimation method, but rather to carefully describe how to use the ERDC software implementation of MLSL that accommodates the PEST model ... model-independent LM-method-based parameter estimation software PEST (Doherty, 2004, 2007a, 2007b), which quantifies model-to-measurement misfit ... et al. (2011) focused on one drawback associated with LM-based model-independent parameter estimation as implemented in PEST; viz., that it requires

  14. Discrete Spin Vector Approach for Monte Carlo-based Magnetic Nanoparticle Simulations

    NASA Astrophysics Data System (ADS)

    Senkov, Alexander; Peralta, Juan; Sahay, Rahul

    The study of magnetic nanoparticles has gained significant popularity due to the potential uses in many fields such as modern medicine, electronics, and engineering. To study the magnetic behavior of these particles in depth, it is important to be able to model and simulate their magnetic properties efficiently. Here we utilize the Metropolis-Hastings algorithm with a discrete spin vector model (in contrast to the standard continuous model) to model the magnetic hysteresis of a set of protected pure iron nanoparticles. We compare our simulations with the experimental hysteresis curves and discuss the efficiency of our algorithm.
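
    A minimal sketch of the discrete-spin idea follows, assuming a 1-D chain, six allowed spin directions, and illustrative coupling and field values; it is not the authors' simulation code:

```python
import numpy as np

# Metropolis-Hastings with a *discrete* spin vector: each spin points
# along one of 6 lattice directions (+-x, +-y, +-z), in contrast to a
# continuous Heisenberg model. J, H, and kT are illustrative assumptions.
DIRS = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                 [0, -1, 0], [0, 0, 1], [0, 0, -1]], float)
rng = np.random.default_rng(0)

N, J, H, kT = 200, 1.0, np.array([0.0, 0.0, 0.5]), 0.5
state = rng.integers(0, 6, size=N)          # spin index per site (1-D chain)

def energy_local(i, s):
    left, right = state[(i - 1) % N], state[(i + 1) % N]
    e = -J * (DIRS[s] @ DIRS[left] + DIRS[s] @ DIRS[right])  # exchange
    return e - H @ DIRS[s]                                   # Zeeman term

for step in range(50_000):
    i = rng.integers(N)
    new = rng.integers(6)
    dE = energy_local(i, new) - energy_local(i, state[i])
    if dE <= 0 or rng.random() < np.exp(-dE / kT):           # Metropolis rule
        state[i] = new

print("mean magnetization:", DIRS[state].mean(axis=0).round(3))
```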

  15. Fair and efficient network congestion control based on minority game

    NASA Astrophysics Data System (ADS)

    Wang, Zuxi; Wang, Wen; Hu, Hanping; Deng, Zhaozhang

    2011-12-01

    Low link utilization, RTT unfairness, and unfairness in multi-bottleneck networks are problems common to present network congestion control algorithms. Through an analogy between network congestion control and the "El Farol Bar" problem, we establish a congestion control model based on the minority game (MG) and then present a novel network congestion control algorithm based on this model. Simulation results indicate that the proposed algorithm achieves link utilization close to 100%, zero packet loss rate, and small queue sizes. Moreover, RTT unfairness and multi-bottleneck unfairness are resolved, achieving max-min fairness in multi-bottleneck networks while efficiently damping the "ping-pong" oscillation caused by global synchronization.
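
    For readers unfamiliar with the minority game, the sketch below implements its standard form (parameters are illustrative); the paper's mapping from the game to congestion control is not reproduced:

```python
import numpy as np

# Standard minority game: each agent chooses one of two actions per round;
# agents on the minority side win. N, M, S, T are illustrative values.
rng = np.random.default_rng(0)
N, M, S, T = 101, 3, 2, 2000        # agents, memory bits, strategies, rounds

# A strategy maps each of 2^M histories to an action in {0, 1}.
strategies = rng.integers(0, 2, size=(N, S, 2 ** M))
scores = np.zeros((N, S))
history = 0                          # last M outcomes packed into an int

attendance = []
for t in range(T):
    best = scores.argmax(axis=1)                   # each agent's best strategy
    actions = strategies[np.arange(N), best, history]
    count1 = actions.sum()
    minority = 0 if count1 > N / 2 else 1          # winning (minority) side
    # Reward every strategy that would have chosen the minority action.
    scores += (strategies[:, :, history] == minority)
    history = ((history << 1) | minority) & (2 ** M - 1)
    attendance.append(count1)

print("mean attendance:", np.mean(attendance), "std:", np.std(attendance))
```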

  16. Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria

    NASA Astrophysics Data System (ADS)

    Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong

    2017-08-01

    In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for reliability analysis and sensitivity analysis of complex components with arbitrary distribution parameters is investigated using the perturbation method, the response surface method, the Edgeworth series and a sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparison with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent and computationally efficient method for reliability-analysis-based finite element modeling in engineering practice.

  17. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction

    PubMed Central

    Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen

    2016-01-01

    Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to promote a sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on the alternating minimization of an augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of inner subproblems. The accuracy and efficiency on simulated and real data are qualitatively and quantitatively evaluated to validate the efficiency and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems. PMID:26901410
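
    The generalized p-shrinkage mapping mentioned above has a compact closed form; a minimal sketch (using the commonly cited p-shrinkage formula, which reduces to soft thresholding at p = 1) is:

```python
import numpy as np

# Generalized p-shrinkage operator used in inner steps of algorithms like
# the one described above; for p = 1 it is exactly soft thresholding.
def p_shrink(x, lam, p):
    mag = np.abs(x)
    with np.errstate(divide="ignore"):
        thresh = np.where(mag > 0, lam ** (2 - p) * mag ** (p - 1), np.inf)
    return np.sign(x) * np.maximum(mag - thresh, 0)

x = np.linspace(-2, 2, 9)
print(p_shrink(x, lam=0.5, p=1.0).round(3))   # soft thresholding
print(p_shrink(x, lam=0.5, p=0.5).round(3))   # sparser, nonconvex p
```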

  18. Nonlinear Dynamic Model-Based Multiobjective Sensor Network Design Algorithm for a Plant with an Estimator-Based Control System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paul, Prokash; Bhattacharyya, Debangsu; Turton, Richard

    Here, a novel sensor network design (SND) algorithm is developed for maximizing process efficiency while minimizing sensor network cost for a nonlinear dynamic process with an estimator-based control system. The multiobjective optimization problem is solved following a lexicographic approach where the process efficiency is maximized first followed by minimization of the sensor network cost. The partial net present value, which combines the capital cost due to the sensor network and the operating cost due to deviation from the optimal efficiency, is proposed as an alternative objective. The unscented Kalman filter is considered as the nonlinear estimator. The large-scale combinatorial optimization problem is solved using a genetic algorithm. The developed SND algorithm is applied to an acid gas removal (AGR) unit as part of an integrated gasification combined cycle (IGCC) power plant with CO 2 capture. Due to the computational expense, a reduced order nonlinear model of the AGR process is identified and parallel computation is performed during implementation.

  19. Nonlinear Dynamic Model-Based Multiobjective Sensor Network Design Algorithm for a Plant with an Estimator-Based Control System

    DOE PAGES

    Paul, Prokash; Bhattacharyya, Debangsu; Turton, Richard; ...

    2017-06-06

    Here, a novel sensor network design (SND) algorithm is developed for maximizing process efficiency while minimizing sensor network cost for a nonlinear dynamic process with an estimator-based control system. The multiobjective optimization problem is solved following a lexicographic approach where the process efficiency is maximized first followed by minimization of the sensor network cost. The partial net present value, which combines the capital cost due to the sensor network and the operating cost due to deviation from the optimal efficiency, is proposed as an alternative objective. The unscented Kalman filter is considered as the nonlinear estimator. The large-scale combinatorial optimization problem is solved using a genetic algorithm. The developed SND algorithm is applied to an acid gas removal (AGR) unit as part of an integrated gasification combined cycle (IGCC) power plant with CO 2 capture. Due to the computational expense, a reduced order nonlinear model of the AGR process is identified and parallel computation is performed during implementation.

  20. An integrated experiment for identification of best decision styles and teamworks with respect to HSE and ergonomics program: The case of a large oil refinery.

    PubMed

    Azadeh, A; Mokhtari, Z; Sharahi, Z Jiryaei; Zarrin, M

    2015-12-01

    Decision-making failure is a predominant human error in emergency situations. To demonstrate the subject model, operators of an oil refinery were asked to answer a health, safety and environment (HSE) decision styles (DS) questionnaire. To this end, qualitative indicators in the HSE and ergonomics domains were collected. The decision styles addressed by the questions were selected based on Driver's taxonomy of human decision making. Teamwork efficiency was assessed for different decision-style combinations and ranked based on HSE performance. Results revealed that the efficient decision styles obtained from a data envelopment analysis (DEA) optimization model are consistent with the plant's dominant styles. Therefore, improvement in system performance could be achieved by using the best operators for critical posts or in team arrangements. This is the first study to identify the best decision styles with respect to HSE and ergonomics factors.

  1. Energy efficiency technologies in cement and steel industry

    NASA Astrophysics Data System (ADS)

    Zanoli, Silvia Maria; Cocchioni, Francesco; Pepe, Crescenzo

    2018-02-01

    In this paper, Advanced Process Control strategies aimed at achieving and improving energy efficiency in the cement and steel industries are proposed. A flexible and smart control structure constituted of several functional modules and blocks has been developed. The designed control strategy is based on Model Predictive Control techniques formulated on linear models. Two industrial control solutions have been developed, oriented to energy efficiency and process control improvement in cement industry clinker rotary kilns (the clinker production phase) and in steel industry billet reheating furnaces. Tailored customization procedures for the design of ad hoc control systems have been executed, based on the specific needs and specifications of the analysed processes. The installation of the developed controllers in cement and steel plants produced significant benefits in terms of process control, which resulted in operation closer to the imposed operating limits. With respect to the previous control systems, based on local controllers and/or manual operation by operators, more profitable configurations of the crucial process variables have been achieved.
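
    As a minimal sketch of linear-model MPC of this kind (the two-state plant, weights, and actuator limits below are illustrative assumptions, not the deployed controller):

```python
import cvxpy as cp
import numpy as np

# Toy 2-state furnace-zone model: track a temperature setpoint while
# penalizing fuel usage; A, B, limits, and weights are all assumed.
A = np.array([[0.95, 0.05], [0.0, 0.90]])
B = np.array([[0.0], [0.10]])
T, x0, setpoint = 20, np.array([20.0, 20.0]), 80.0

x = cp.Variable((2, T + 1))
u = cp.Variable((1, T))
cost, constraints = 0, [x[:, 0] == x0]
for t in range(T):
    cost += cp.square(x[0, t + 1] - setpoint) + 0.01 * cp.square(u[0, t])
    constraints += [x[:, t + 1] == A @ x[:, t] + B @ u[:, t],
                    0 <= u[0, t], u[0, t] <= 100]   # actuator limits

cp.Problem(cp.Minimize(cost), constraints).solve()
print("first control move:", u.value[0, 0])  # only this move is applied
```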

  2. Design Guidelines for High-Performance Particle-Based Photoanodes for Water Splitting: Lanthanum Titanium Oxynitride as a Model.

    PubMed

    Landsmann, Steve; Maegli, Alexandra E; Trottmann, Matthias; Battaglia, Corsin; Weidenkaff, Anke; Pokrant, Simone

    2015-10-26

    Semiconductor powders are perfectly suited for the scalable fabrication of particle-based photoelectrodes, which can be used to split water using the sun as a renewable energy source. This systematic study is focused on variation of the electrode design using LaTiO2N as a model system. We present the influence of particle morphology on charge separation and transport properties combined with post-treatment procedures, such as necking and size-dependent co-catalyst loading. Five rules are proposed to guide the design of high-performance particle-based photoanodes by adding or varying several process steps. We also specify how much efficiency improvement can be achieved with each of the steps. For example, implementation of a connectivity network and surface area enhancement leads to a thirty-fold improvement in efficiency, and co-catalyst loading achieves an improvement in efficiency by a factor of seven. Some of these guidelines can be adapted to non-particle-based photoelectrodes.

  3. Spatio-temporal Convergence of Maximum Daily Light-Use Efficiency Based on Radiation Absorption by Canopy Chlorophyll

    DOE PAGES

    Zhang, Yao; Xiao, Xiangming; Wolf, Sebastian; ...

    2018-04-03

    Light-use efficiency (LUE), which quantifies the plants' efficiency in utilizing solar radiation for photosynthetic carbon fixation, is an important factor for gross primary production (GPP) estimation. Here we use satellite-based solar-induced chlorophyll fluorescence (SIF) as a proxy for photosynthetically active radiation absorbed by chlorophyll (APARchl) and derive an estimation of the fraction of APARchl (fPARchl) from four remotely sensed vegetation indicators. By comparing maximum LUE estimated at different scales from 127 eddy flux sites, we found that the maximum daily LUE based on PAR absorption by canopy chlorophyll (ε_max^chl), unlike other expressions of LUE, tends to converge across biome types. The photosynthetic seasonality in tropical forests can also be tracked by the change of fPARchl, suggesting that the corresponding ε_max^chl has less seasonal variation. Finally, this spatio-temporal convergence of LUE derived from fPARchl can be used to build simple but robust GPP models and to better constrain process-based models.
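
    The resulting GPP model has a particularly simple form; a minimal sketch, with an assumed value for the converged maximum LUE:

```python
# LUE-based GPP form implied above: GPP = eps_max_chl * fPAR_chl * PAR.
# The eps_max_chl value and inputs below are illustrative assumptions.
def gpp_lue(par_mj_m2_d, fpar_chl, eps_max_chl=1.8):
    """GPP in g C m-2 d-1, given PAR in MJ m-2 d-1 and fPARchl (0..1)."""
    return eps_max_chl * fpar_chl * par_mj_m2_d

print(gpp_lue(par_mj_m2_d=10.0, fpar_chl=0.45))  # ~8.1 g C m-2 d-1
```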

  4. Spatio-temporal Convergence of Maximum Daily Light-Use Efficiency Based on Radiation Absorption by Canopy Chlorophyll

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yao; Xiao, Xiangming; Wolf, Sebastian

    Light-use efficiency (LUE), which quantifies the plants' efficiency in utilizing solar radiation for photosynthetic carbon fixation, is an important factor for gross primary production (GPP) estimation. Here we use satellite-based solar-induced chlorophyll fluorescence (SIF) as a proxy for photosynthetically active radiation absorbed by chlorophyll (APARchl) and derive an estimation of the fraction of APARchl (fPARchl) from four remotely sensed vegetation indicators. By comparing maximum LUE estimated at different scales from 127 eddy flux sites, we found that the maximum daily LUE based on PAR absorption by canopy chlorophyll (ε_max^chl), unlike other expressions of LUE, tends to converge across biome types. The photosynthetic seasonality in tropical forests can also be tracked by the change of fPARchl, suggesting that the corresponding ε_max^chl has less seasonal variation. Finally, this spatio-temporal convergence of LUE derived from fPARchl can be used to build simple but robust GPP models and to better constrain process-based models.

  5. Can We Practically Bring Physics-based Modeling Into Operational Analytics Tools?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granderson, Jessica; Bonvini, Marco; Piette, Mary Ann

    Analytics software is increasingly used to improve and maintain operational efficiency in commercial buildings. Energy managers, owners, and operators are using a diversity of commercial offerings often referred to as Energy Information Systems, Fault Detection and Diagnostic (FDD) systems, or more broadly Energy Management and Information Systems, to cost-effectively enable savings on the order of ten to twenty percent. Most of these systems use data from meters and sensors, with rule-based and/or data-driven models to characterize system and building behavior. In contrast, physics-based modeling uses first principles and engineering models (e.g., efficiency curves) to characterize system and building behavior. Historically, these physics-based approaches have been used in the design phase of the building life cycle or in retrofit analyses. Researchers have begun exploring the benefits of integrating physics-based models with operational data analytics tools, bridging the gap between design and operations. In this paper, we detail the development and operator use of a software tool that uses hybrid data-driven and physics-based approaches to cooling plant FDD and optimization. Specifically, we describe the system architecture, models, and FDD and optimization algorithms; advantages and disadvantages with respect to purely data-driven approaches; and practical implications for scaling and replicating these techniques. Finally, we conclude with an evaluation of the future potential for such tools and future research opportunities.
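
    The hybrid idea can be reduced to a small sketch: a physics-style efficiency curve predicts expected performance, and a data-driven threshold on the residual flags faults. The curve coefficients and tolerance below are hypothetical:

```python
# Minimal hybrid FDD sketch: a physics-style part-load efficiency curve
# supplies the expected chiller COP; a residual threshold flags faults.
def expected_cop(plr, a=2.0, b=4.0, c=-2.5):
    # Hypothetical part-load curve: COP(plr) = a + b*plr + c*plr^2
    return a + b * plr + c * plr ** 2

def fdd_flag(measured_cop, plr, rel_tol=0.15):
    resid = expected_cop(plr) - measured_cop
    return resid > rel_tol * expected_cop(plr)   # True = possible fault

for plr, cop in [(0.5, 3.4), (0.8, 2.9), (0.9, 3.5)]:
    print(f"PLR={plr} COP={cop} fault? {fdd_flag(cop, plr)}")
```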

  6. Thermal modeling of high efficiency AMTEC cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ivanenok, J.F. III; Sievers, R.K.; Crowley, C.J.

    1995-12-31

    Remotely condensed Alkali Metal Thermal to Electric Conversion (AMTEC) cells achieve high efficiency by thermally isolating the hot β-Alumina Solid Electrolyte (BASE) tube from the cold condensing region. In order to design high-efficiency AMTEC cells, the designer must understand the heat losses associated with the AMTEC process. The major parasitic heat losses are due to conduction and radiation, and significant coupling of the two mechanisms occurs. This paper describes an effort to characterize the thermal aspects of the model PL-6 AMTEC cell and apply this understanding to the design of a higher efficiency AMTEC cell, model PL-8. Two parallel analyses were used to model the thermal characteristics of PL-6. The first was a lumped-node model using the classical electric circuit analogy, and the second was a detailed finite-difference model. The lumped-node model provides high speed and reasonable accuracy, and the detailed finite-difference model provides a more accurate, as well as visual, description of the cell temperature profiles. The results of the two methods are compared to the as-measured PL-6 data. PL-6 was the first cell to use a micromachined condenser to lower the radiation losses to the condenser, and it achieved a conversion efficiency of 15% (3 W output / 20 W input) at a temperature of 1050 K.

  7. Model Based Mission Assurance: Emerging Opportunities for Robotic Systems

    NASA Technical Reports Server (NTRS)

    Evans, John W.; DiVenti, Tony

    2016-01-01

    The emergence of Model Based Systems Engineering (MBSE) within a Model Based Engineering framework has created new opportunities to improve effectiveness and efficiency across the assurance functions. The MBSE environment supports not only system architecture development but also Systems Safety, Reliability and Risk Analysis concurrently in the same framework. Linking to detailed design will further improve assurance capabilities to support failure avoidance and mitigation in flight systems. This is also leading to new assurance functions, including model assurance and management of uncertainty in the modeling environment. Further, assurance cases, structured hierarchical arguments or models, are emerging as a basis for supporting a comprehensive viewpoint in which to support Model Based Mission Assurance (MBMA).

  8. Lebedev acceleration and comparison of different photometric models in the inversion of lightcurves for asteroids

    NASA Astrophysics Data System (ADS)

    Lu, Xiao-Ping; Huang, Xiang-Jie; Ip, Wing-Huen; Hsia, Chi-Hao

    2018-04-01

    In the lightcurve inversion process where asteroid's physical parameters such as rotational period, pole orientation and overall shape are searched, the numerical calculations of the synthetic photometric brightness based on different shape models are frequently implemented. Lebedev quadrature is an efficient method to numerically calculate the surface integral on the unit sphere. By transforming the surface integral on the Cellinoid shape model to that on the unit sphere, the lightcurve inversion process based on the Cellinoid shape model can be remarkably accelerated. Furthermore, Matlab codes of the lightcurve inversion process based on the Cellinoid shape model are available on Github for free downloading. The photometric models, i.e., the scattering laws, also play an important role in the lightcurve inversion process, although the shape variations of asteroids dominate the morphologies of the lightcurves. Derived from the radiative transfer theory, the Hapke model can describe the light reflectance behaviors from the viewpoint of physics, while there are also many empirical models in numerical applications. Numerical simulations are implemented for the comparison of the Hapke model with the other three numerical models, including the Lommel-Seeliger, Minnaert, and Kaasalainen models. The results show that the numerical models with simple function expressions can fit well with the synthetic lightcurves generated based on the Hapke model; this good fit implies that they can be adopted in the lightcurve inversion process for asteroids to improve the numerical efficiency and derive similar results to those of the Hapke model.
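
    Two of the compared scattering laws have simple closed forms; a minimal sketch (the Minnaert exponent is an assumed value):

```python
import numpy as np

# Reflectance factors written in terms of the cosines of the incidence
# (mu0) and emergence (mu) angles for one illuminated, visible facet.
def lommel_seeliger(mu0, mu):
    return mu0 / (mu0 + mu)

def minnaert(mu0, mu, k=0.6):
    # k = 1 reduces to a Lambert-like law; k=0.6 is an assumed value.
    return mu0 ** k * mu ** (k - 1)

mu0, mu = np.cos(np.radians(30)), np.cos(np.radians(20))
print("Lommel-Seeliger:", lommel_seeliger(mu0, mu))
print("Minnaert:", minnaert(mu0, mu))
```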

  9. Health economics, equity, and efficiency: are we almost there?

    PubMed

    Ferraz, Marcos Bosi

    2015-01-01

    Health care is a highly complex, dynamic, and creative sector of the economy. While health economics has to continue its efforts to improve its methods and tools to better inform decisions, the application needs to be aligned with the insights and models of other social sciences disciplines. Decisions may be guided by four concept models based on ethical and distributive justice: libertarian, communitarian, egalitarian, and utilitarian. The societal agreement on one model or a defined mix of models is critical to avoid inequity and unfair decisions in a public and/or private insurance-based health care system. The excess use of methods and tools without fully defining the basic goals and philosophical principles of the health care system and without evaluating the fitness of these measures to reaching these goals may not contribute to an efficient improvement of population health.

  10. Health economics, equity, and efficiency: are we almost there?

    PubMed Central

    Ferraz, Marcos Bosi

    2015-01-01

    Health care is a highly complex, dynamic, and creative sector of the economy. While health economics has to continue its efforts to improve its methods and tools to better inform decisions, the application needs to be aligned with the insights and models of other social sciences disciplines. Decisions may be guided by four concept models based on ethical and distributive justice: libertarian, communitarian, egalitarian, and utilitarian. The societal agreement on one model or a defined mix of models is critical to avoid inequity and unfair decisions in a public and/or private insurance-based health care system. The excess use of methods and tools without fully defining the basic goals and philosophical principles of the health care system and without evaluating the fitness of these measures to reaching these goals may not contribute to an efficient improvement of population health. PMID:25709481

  11. Quantitative Analysis of the Efficiency of OLEDs.

    PubMed

    Sim, Bomi; Moon, Chang-Ki; Kim, Kwon-Hyeon; Kim, Jang-Joo

    2016-12-07

    We present a comprehensive model for the quantitative analysis of factors influencing the efficiency of organic light-emitting diodes (OLEDs) as a function of the current density. The model takes into account the contribution made by the charge carrier imbalance, quenching processes, and optical design loss of the device arising from various optical effects including the cavity structure, location and profile of the excitons, effective radiative quantum efficiency, and out-coupling efficiency. Quantitative analysis of the efficiency can be performed with an optical simulation using material parameters and experimental measurements of the exciton profile in the emission layer and the lifetime of the exciton as a function of the current density. This method was applied to three phosphorescent OLEDs based on a single host, mixed host, and exciplex-forming cohost. The three factors (charge carrier imbalance, quenching processes, and optical design loss) were influential in different ways, depending on the device. The proposed model can potentially be used to optimize OLED configurations on the basis of an analysis of the underlying physical processes.
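
    A hedged worked example of the standard external-quantum-efficiency factorization that underlies an analysis like the one above: EQE as the product of charge balance, radiative exciton fraction, effective radiative quantum efficiency, and out-coupling efficiency. The factor values are illustrative, not measurements from the paper.

    ```python
    def external_quantum_efficiency(gamma, eta_exciton, q_eff, eta_out):
        """EQE = charge balance * radiative exciton fraction
               * effective radiative quantum efficiency * out-coupling."""
        return gamma * eta_exciton * q_eff * eta_out

    # e.g. a phosphorescent emitter, where all excitons can emit (eta_exciton = 1)
    eqe = external_quantum_efficiency(gamma=0.95, eta_exciton=1.0,
                                      q_eff=0.85, eta_out=0.25)
    print(f"EQE ~ {eqe:.1%}")  # ~20%, typical of a planar phosphorescent OLED
    ```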

  12. Measuring Efficiency of Knowledge Production in Health Research Centers Using Data Envelopment Analysis (DEA): A Case Study in Iran.

    PubMed

    Amiri, Mohammad Meskarpour; Nasiri, Taha; Saadat, Seyed Hassan; Anabad, Hosein Amini; Ardakan, Payman Mahboobi

    2016-11-01

    Efficiency analysis is necessary in order to avoid waste of materials, energy, effort, money, and time during scientific research, so analyzing the efficiency of knowledge production in the health area is important, especially for developing and in-transition countries. As a first step in this field, the aim of this study was to analyze the efficiency of selected health research centers using data envelopment analysis (DEA). This retrospective, applied study was conducted in 2015 using input and output data of 16 health research centers affiliated with a health sciences university in Iran during 2010-2014. The technical efficiency of the health research centers was evaluated based on three basic DEA models: input-oriented, output-oriented, and hyperbolic-oriented. The input and output data of each health research center for the years 2010-2014 were collected from the Iran Ministry of Health and Medical Education (MOHE) profile and analyzed with R software. The mean efficiency score in the input-oriented, output-oriented, and hyperbolic-oriented models was 0.781, 0.671, and 0.798, respectively. Based on the results of the study, half of the health research centers are operating below full efficiency, and about one-third of them are operating under the average efficiency level. There are also large efficiency gaps among the health research centers. It is necessary for health research centers to improve their efficiency in knowledge production through better management of available resources; a higher level of efficiency is achievable in a significant number of centers through more efficient management of human resources and capital. Further research is needed to measure and follow the efficiency of knowledge production by health research centers around the world and over time.
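
    As an illustration of the input-oriented model referenced above, the following minimal sketch solves the textbook input-oriented CCR (constant returns to scale) linear program for each decision making unit with scipy. The input/output data are invented, and the study's own analysis was done in R, so this is an assumption-laden re-implementation of the standard model rather than the authors' code.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def dea_input_oriented(X, Y):
        """Input-oriented CCR efficiency for each DMU.
        X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs).
        For DMU o: minimize theta s.t. sum_j lam_j x_ij <= theta * x_io,
        sum_j lam_j y_rj >= y_ro, lam >= 0. Decision vector is [theta, lam]."""
        n, m = X.shape
        s = Y.shape[1]
        scores = []
        for o in range(n):
            c = np.r_[1.0, np.zeros(n)]                 # minimize theta
            A_ub, b_ub = [], []
            for i in range(m):                           # input constraints
                A_ub.append(np.r_[-X[o, i], X[:, i]])
                b_ub.append(0.0)
            for r in range(s):                           # output constraints
                A_ub.append(np.r_[0.0, -Y[:, r]])
                b_ub.append(-Y[o, r])
            res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                          bounds=[(0, None)] * (n + 1), method="highs")
            scores.append(res.x[0])
        return np.array(scores)

    X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0]])  # two inputs
    Y = np.array([[1.0], [1.0], [1.0], [1.0]])                      # one output
    print(dea_input_oriented(X, Y))   # efficiency score per DMU, 1.0 = efficient
    ```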

  13. Flexible language constructs for large parallel programs

    NASA Technical Reports Server (NTRS)

    Rosing, Matthew; Schnabel, Robert

    1993-01-01

    The goal of the research described is to develop flexible language constructs for writing large data-parallel numerical programs for distributed-memory (MIMD) multiprocessors. Previously, several models have been developed to support synchronization and communication. Models for global synchronization include SIMD (Single Instruction Multiple Data), SPMD (Single Program Multiple Data), and sequential programs annotated with data distribution statements. The two primary models for communication are implicit communication based on shared memory and explicit communication based on messages. None of these models by itself seems sufficient to permit the natural and efficient expression of the variety of algorithms that occur in large scientific computations. An overview of a new language that combines many of these programming models in a clean manner is given. This is done in a modular fashion such that different models can be combined to support large programs. Within a module, the selection of a model depends on the algorithm and its efficiency requirements. An overview of the language and a discussion of some of the critical implementation details are given.

  14. Combining observations in the reflective solar and thermal domains for improved carbon and energy flux estimation

    USDA-ARS?s Scientific Manuscript database

    This study investigates the utility of integrating remotely sensed estimates of leaf chlorophyll (Cab) into a thermal-based Two-Source Energy Balance (TSEB) model that estimates land-surface CO2 and energy fluxes using an analytical, light-use-efficiency (LUE) based model of canopy resistance. The LU...

  15. Predicting dermal penetration for ToxCast chemicals using in silico estimates for diffusion in combination with physiologically based pharmacokinetic (PBPK) modeling.

    EPA Science Inventory

    Predicting dermal penetration for ToxCast chemicals using in silico estimates for diffusion in combination with physiologically based pharmacokinetic (PBPK) modeling. Evans, M.V., Sawyer, M.E., Isaacs, K.K., and Wambaugh, J. With the development of efficient high-throughput (HT) in ...

  16. Feedback and Feed-Forward for Promoting Problem-Based Learning in Online Learning Environments

    ERIC Educational Resources Information Center

    Webb, Ashley; Moallem, Mahnaz

    2016-01-01

    Purpose: The study aimed to (1) review the literature to construct conceptual models that could guide instructional designers in developing problem/project-based learning environments while applying effective feedback strategies, (2) use the models to design, develop, and implement an online graduate course, and (3) assess the efficiency of the…

  17. A dynamic integrated fault diagnosis method for power transformers.

    PubMed

    Gao, Wensheng; Bai, Cuifen; Liu, Tong

    2015-01-01

    In order to diagnose transformer fault efficiently and accurately, a dynamic integrated fault diagnosis method based on Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationship among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most possible failure mode. And then considering the evidence input into the diagnosis model is gradually acquired and the fault diagnosis process in reality is multistep, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Different from the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in next step. Therefore, it can reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of this method is verified.

  18. A Dynamic Integrated Fault Diagnosis Method for Power Transformers

    PubMed Central

    Gao, Wensheng; Liu, Tong

    2015-01-01

    In order to diagnose transformer fault efficiently and accurately, a dynamic integrated fault diagnosis method based on Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationship among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most possible failure mode. And then considering the evidence input into the diagnosis model is gradually acquired and the fault diagnosis process in reality is multistep, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Different from the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in next step. Therefore, it can reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of this method is verified. PMID:25685841

  19. Low Emissions and Delay Optimization for an Isolated Signalized Intersection Based on Vehicular Trajectories

    PubMed Central

    2015-01-01

    A traditional traffic signal control system is established based on vehicular delay, queue length, saturation and other indicators. However, due to the increasing severity of urban environmental pollution issues and the development of a resource-saving and environmentally friendly social philosophy, the development of low-carbon and energy-efficient urban transport is required. This paper first defines vehicular trajectories and the calculation of vehicular emissions based on VSP. Next, a regression analysis method is used to quantify the relationship between vehicular emissions and delay, and a traffic signal control model is established to reduce emissions and delay using the enumeration method combined with saturation constraints. Finally, one typical intersection of Changchun is selected to verify the model proposed in this paper; its performance efficiency is also compared using simulations in VISSIM. The results of this study show that the proposed model can significantly reduce vehicle delay and traffic emissions simultaneously. PMID:26720095

  20. Low Emissions and Delay Optimization for an Isolated Signalized Intersection Based on Vehicular Trajectories.

    PubMed

    Lin, Ciyun; Gong, Bowen; Qu, Xin

    2015-01-01

    A traditional traffic signal control system is established based on vehicular delay, queue length, saturation and other indicators. However, due to the increasing severity of urban environmental pollution issues and the development of a resource-saving and environmentally friendly social philosophy, the development of low-carbon and energy-efficient urban transport is required. This paper first defines vehicular trajectories and the calculation of vehicular emissions based on VSP. Next, a regression analysis method is used to quantify the relationship between vehicular emissions and delay, and a traffic signal control model is established to reduce emissions and delay using the enumeration method combined with saturation constraints. Finally, one typical intersection of Changchun is selected to verify the model proposed in this paper; its performance efficiency is also compared using simulations in VISSIM. The results of this study show that the proposed model can significantly reduce vehicle delay and traffic emissions simultaneously.

  1. A Compact Energy Harvesting System for Outdoor Wireless Sensor Nodes Based on a Low-Cost In Situ Photovoltaic Panel Characterization-Modelling Unit.

    PubMed

    Antolín, Diego; Medrano, Nicolás; Calvo, Belén; Martínez, Pedro A

    2017-08-04

    This paper presents a low-cost, high-efficiency solar energy harvesting system to power outdoor wireless sensor nodes. It is based on a Voltage Open Circuit (VOC) algorithm that estimates the open-circuit voltage by means of a multilayer perceptron neural network model trained using local experimental characterization data, which are acquired through a novel low-cost characterization system incorporated into the deployed node. Both units, characterization and modelling, are controlled by the same low-cost microcontroller, providing a complete solution that can be understood as a virtual pilot cell with characteristics identical to those of the specific small solar cell installed on the sensor node; this additionally allows easy adaptation to changes in the actual environmental conditions, panel aging, etc. Experimental comparison to a classical pilot-panel-based VOC algorithm shows better efficiency under the same tested conditions.
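
    A hedged sketch of the fractional open-circuit-voltage idea described above: a small multilayer perceptron learns the open-circuit voltage from locally measured irradiance and panel temperature, and the converter setpoint is then taken as a fixed fraction of the estimated Voc. The training data are synthetic, the scikit-learn model stands in for the paper's embedded network, and the 0.76 fraction is a typical textbook value, not the paper's.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    G = rng.uniform(100, 1000, 200)      # irradiance, W/m^2 (synthetic)
    T = rng.uniform(0, 60, 200)          # cell temperature, C (synthetic)
    # Toy ground-truth Voc: logarithmic in irradiance, linear in temperature.
    voc = 21.0 + 1.2 * np.log(G / 1000.0) - 0.08 * (T - 25.0)

    mlp = MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000, random_state=0)
    mlp.fit(np.c_[G, T], voc)            # "characterization" step

    K_MPP = 0.76                         # assumed Voc fraction at maximum power
    v_set = K_MPP * mlp.predict([[650.0, 35.0]])[0]
    print(f"converter setpoint ~ {v_set:.2f} V")
    ```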

  2. A Taylor Expansion-Based Adaptive Design Strategy for Global Surrogate Modeling With Applications in Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; Zhang, Guannan; Ye, Ming; Wu, Jianfeng; Wu, Jichun

    2017-12-01

    Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples, so as to reduce the computational cost of surrogate construction and consequently improve the efficiency of the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search for informative samples, and a robust stopping criterion to terminate the sample search that guarantees the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions with different dimensionality and complexity, in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method to two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.

  3. Construction of nested maximin designs based on successive local enumeration and modified novel global harmony search algorithm

    NASA Astrophysics Data System (ADS)

    Yi, Jin; Li, Xinyu; Xiao, Mi; Xu, Junnan; Zhang, Lin

    2017-01-01

    Engineering design often involves different types of simulation, which results in expensive computational costs. Variable-fidelity approximation-based design optimization approaches can realize effective simulation and efficient optimization of the design space using approximation models with different levels of fidelity, and have been widely used in different fields. The selection of sample points for variable-fidelity approximation, called nested designs, is essential, as it forms the foundation of variable-fidelity approximation models. In this article, a novel nested maximin Latin hypercube design is constructed based on successive local enumeration and a modified novel global harmony search algorithm. In the proposed nested designs, successive local enumeration is employed to select sample points for the low-fidelity model, whereas the modified novel global harmony search algorithm is employed to select sample points for the high-fidelity model. A comparative study with multiple criteria and an engineering application are employed to verify the efficiency of the proposed nested design approach.
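
    For orientation, the sketch below generates a maximin Latin hypercube design by plain random search: the same objective (maximize the minimum pairwise distance among LHD points) that the article pursues, but not its successive-local-enumeration or harmony-search machinery, and without the nesting between fidelity levels.

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist

    def random_lhd(n, d, rng):
        """One Latin hypercube sample: each column is a random permutation of
        cell midpoints, so every 1/n slice of every dimension holds one point."""
        return (np.array([rng.permutation(n) for _ in range(d)]).T + 0.5) / n

    def maximin_lhd(n, d, trials=2000, seed=0):
        """Keep the candidate LHD with the largest minimum pairwise distance."""
        rng = np.random.default_rng(seed)
        best, best_score = None, -np.inf
        for _ in range(trials):
            X = random_lhd(n, d, rng)
            score = pdist(X).min()
            if score > best_score:
                best, best_score = X, score
        return best, best_score

    X, score = maximin_lhd(n=10, d=2)
    print(f"min pairwise distance: {score:.3f}")
    ```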

  4. Towards Symbolic Model Checking for Multi-Agent Systems via OBDDs

    NASA Technical Reports Server (NTRS)

    Raimondi, Franco; Lomuscio, Alessio

    2004-01-01

    We present an algorithm for model checking temporal-epistemic properties of multi-agent systems, expressed in the formalism of interpreted systems. We first introduce a technique for the translation of interpreted systems into boolean formulae, and then present a model-checking algorithm based on this translation. The algorithm is based on OBDDs, as they offer a compact and efficient representation for boolean formulae.

  5. Improving production efficiency through genetic selection

    USDA-ARS?s Scientific Manuscript database

    The goal of dairy cattle breeding is to increase productivity and efficiency by means of genetic selection. This is possible because related animals share some of their DNA in common, and we can use statistical models to predict the genetic merit animals based on the performance of their relatives. ...

  6. Determination of a Limited Scope Network's Lightning Detection Efficiency

    NASA Technical Reports Server (NTRS)

    Rompala, John T.; Blakeslee, R.

    2008-01-01

    This paper outlines a modeling technique to map lightning detection efficiency variations over a region surveyed by a sparse array of ground-based detectors. A reliable flash peak current distribution (PCD) for the region serves as the technique's base. This distribution is recast as an event probability distribution function. The technique then uses the PCD, together with information regarding site signal detection thresholds, the type of solution algorithm used, and range attenuation, to formulate the probability that a flash at a specified location will yield a solution. Applying this technique to the full region produces detection efficiency contour maps specific to the parameters employed. These contours facilitate a comparative analysis of each parameter's effect on the network's detection efficiency. In an alternate application, this modeling technique gives an estimate of the number, strength, and distribution of events going undetected; this approach leads to a variety of event density contour maps and is also illustrated. The technique's base PCD can be empirical or analytical. A process for formulating an empirical PCD specific to the region and network being studied is presented, and a new method for producing an analytical representation of the empirical PCD is also introduced.
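
    A hedged Monte Carlo sketch of the mapping idea described above: draw flash peak currents from an assumed lognormal PCD, attenuate the signal as 1/range to each sensor, and call a flash "detected" when at least N_MIN sensors exceed their threshold. All site positions, thresholds, the attenuation law, and the PCD parameters are illustrative placeholders, not values from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    SITES = np.array([[0.0, 0.0], [200.0, 0.0],
                      [0.0, 200.0], [200.0, 200.0]])  # sensor positions, km
    THRESH = 1.5    # per-site detection threshold (signal units), assumed equal
    N_MIN = 3       # minimum participating sensors for a location solution

    def detection_efficiency(x, y, n_flash=20000):
        """Fraction of simulated flashes at (x, y) yielding a solution."""
        peak = rng.lognormal(mean=np.log(15.0), sigma=0.9, size=n_flash)  # kA
        r = np.hypot(SITES[:, 0] - x, SITES[:, 1] - y)                    # km
        signal = peak[:, None] * (100.0 / np.maximum(r, 1.0))  # 1/r attenuation
        detected = (signal >= THRESH).sum(axis=1) >= N_MIN
        return detected.mean()

    # Evaluating this on a grid of (x, y) points yields the DE contour map.
    for xy in [(100.0, 100.0), (400.0, 400.0)]:
        print(xy, f"DE = {detection_efficiency(*xy):.2f}")
    ```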

  7. Modeling stochastic frontier based on vine copulas

    NASA Astrophysics Data System (ADS)

    Constantino, Michel; Candido, Osvaldo; Tabak, Benjamin M.; da Costa, Reginaldo Brito

    2017-11-01

    This article models a production function and analyzes the technical efficiency of listed companies in the United States, Germany, and England between 2005 and 2012 based on the vine copula approach. Traditional estimates of the stochastic frontier assume that data are multivariate normally distributed and that there is no source of asymmetry. The proposed method based on vine copulas allows us to explore different types of asymmetry and multivariate distributions. Using data on product, capital, and labor, we measure the relative efficiency of the vine production function and estimate the coefficient used in the stochastic frontier literature for comparison purposes. This production vine copula predicts the value added by firms with given capital and labor in a probabilistic way. It thereby stands in sharp contrast to the production function, where the output of firms is completely deterministic. The results show that, on average, S&P500 companies are more efficient than companies listed in England and Germany, which presented similar average efficiency coefficients. For comparative purposes, the traditional stochastic frontier was estimated; the results showed discrepancies between the coefficients obtained by the two methods, traditional and frontier-vine, opening new paths for non-linear research.

  8. An efficient interpolation technique for jump proposals in reversible-jump Markov chain Monte Carlo calculations

    PubMed Central

    Farr, W. M.; Mandel, I.; Stevens, D.

    2015-01-01

    Selection among alternative theoretical models given an observed dataset is an important challenge in many areas of physics and astronomy. Reversible-jump Markov chain Monte Carlo (RJMCMC) is an extremely powerful technique for performing Bayesian model selection, but it suffers from a fundamental difficulty: it requires jumps between model parameter spaces, but cannot efficiently explore both parameter spaces at once. Thus, a naive jump between parameter spaces is unlikely to be accepted in the Markov chain Monte Carlo (MCMC) algorithm, and convergence is correspondingly slow. Here, we demonstrate an interpolation technique that uses samples from single-model MCMCs to propose intermodel jumps from an approximation to the single-model posterior of the target parameter space. The interpolation technique, based on a kD-tree data structure, is adaptive and efficient in modest dimensionality. We show that our technique leads to improved convergence over naive jumps in an RJMCMC, and compare it to other proposals in the literature for improving the convergence of RJMCMCs. We also demonstrate the use of the same interpolation technique as a way to construct efficient ‘global’ proposal distributions for single-model MCMCs without prior knowledge of the structure of the posterior distribution, and discuss improvements that permit the method to be used efficiently in higher dimensional spaces. PMID:26543580

  9. GAMBIT: A Parameterless Model-Based Evolutionary Algorithm for Mixed-Integer Problems.

    PubMed

    Sadowski, Krzysztof L; Thierens, Dirk; Bosman, Peter A N

    2018-01-01

    Learning and exploiting problem structure is one of the key challenges in optimization. This is especially important for black-box optimization (BBO) where prior structural knowledge of a problem is not available. Existing model-based Evolutionary Algorithms (EAs) are very efficient at learning structure in both the discrete, and in the continuous domain. In this article, discrete and continuous model-building mechanisms are integrated for the Mixed-Integer (MI) domain, comprising discrete and continuous variables. We revisit a recently introduced model-based evolutionary algorithm for the MI domain, the Genetic Algorithm for Model-Based mixed-Integer opTimization (GAMBIT). We extend GAMBIT with a parameterless scheme that allows for practical use of the algorithm without the need to explicitly specify any parameters. We furthermore contrast GAMBIT with other model-based alternatives. The ultimate goal of processing mixed dependences explicitly in GAMBIT is also addressed by introducing a new mechanism for the explicit exploitation of mixed dependences. We find that processing mixed dependences with this novel mechanism allows for more efficient optimization. We further contrast the parameterless GAMBIT with Mixed-Integer Evolution Strategies (MIES) and other state-of-the-art MI optimization algorithms from the General Algebraic Modeling System (GAMS) commercial algorithm suite on problems with and without constraints, and show that GAMBIT is capable of solving problems where variable dependences prevent many algorithms from successfully optimizing them.

  10. FacetModeller: Software for manual creation, manipulation and analysis of 3D surface-based models

    NASA Astrophysics Data System (ADS)

    Lelièvre, Peter G.; Carter-McAuslan, Angela E.; Dunham, Michael W.; Jones, Drew J.; Nalepa, Mariella; Squires, Chelsea L.; Tycholiz, Cassandra J.; Vallée, Marc A.; Farquharson, Colin G.

    2018-01-01

    The creation of 3D models is commonplace in many disciplines. Models are often built from a collection of tessellated surfaces. To apply numerical methods to such models it is often necessary to generate a mesh of space-filling elements that conforms to the model surfaces. While there are meshing algorithms that can do so, they place restrictive requirements on the surface-based models that are rarely met by existing 3D model building software. Hence, we have developed a Java application named FacetModeller, designed for efficient manual creation, modification and analysis of 3D surface-based models destined for use in numerical modelling.

  11. Modeling of layered anisotropic composite material based on effective medium theory

    NASA Astrophysics Data System (ADS)

    Bao, Yang; Song, Jiming

    2018-04-01

    In this paper, we present an efficient method to simulate multilayered anisotropic composite material with effective medium theory. The effective permittivity, permeability, and orientation angle for a layered anisotropic composite medium are extracted with this equivalent model. We also derive, and show in detail, analytical expressions for the effective parameters and orientation angle in the low-frequency (LF) limit. Numerical results compare the extracted effective parameters and orientation angle with the analytical results from the low-frequency limit; good agreement is achieved, demonstrating the accuracy of our efficient model.

  12. Current-flow efficiency of networks

    NASA Astrophysics Data System (ADS)

    Liu, Kai; Yan, Xiaoyong

    2018-02-01

    Many real-world networks, from infrastructure networks to social and communication networks, can be formulated as flow networks. How to realistically measure the transport efficiency of these networks is of fundamental importance. The shortest-path-based efficiency measurement has limitations, as it assumes that flow travels only along those shortest paths. Here, we propose a new metric named current-flow efficiency, in which we calculate the average reciprocal effective resistance between all pairs of nodes in the network. This metric takes the multipath effect into consideration and is more suitable for measuring the efficiency of many real-world flow equilibrium networks. Moreover, this metric can handle a disconnected graph and can thus be used to identify critical nodes and edges from the efficiency-loss perspective. We further analyze how the topological structure affects the current-flow efficiency of networks based on some model and real-world networks. Our results enable a better understanding of flow networks and shed light on the design and improvement of such networks with higher transport efficiency.
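
    One plausible direct implementation of the metric defined above: effective resistances from the pseudoinverse of the graph Laplacian (R_ij = L+_ii + L+_jj - 2 L+_ij), averaged as reciprocals over unordered node pairs. Pairs in different components have infinite resistance and contribute zero, which is how the metric handles disconnected graphs; the normalization over n(n-1)/2 pairs is our choice for the sketch.

    ```python
    import numpy as np
    import networkx as nx

    def current_flow_efficiency(G):
        """Mean reciprocal effective resistance over all node pairs."""
        nodes = list(G)
        n = len(nodes)
        L = nx.laplacian_matrix(G, nodelist=nodes).toarray().astype(float)
        Lp = np.linalg.pinv(L)   # Moore-Penrose pseudoinverse of the Laplacian
        comp = {v: k for k, c in enumerate(nx.connected_components(G)) for v in c}
        total = 0.0
        for i in range(n):
            for j in range(i + 1, n):
                if comp[nodes[i]] != comp[nodes[j]]:
                    continue                 # infinite resistance contributes 0
                R = Lp[i, i] + Lp[j, j] - 2.0 * Lp[i, j]
                total += 1.0 / R
        return 2.0 * total / (n * (n - 1))

    print(current_flow_efficiency(nx.path_graph(5)))      # low: single route
    print(current_flow_efficiency(nx.complete_graph(5)))  # high: many routes
    ```

    Unlike a shortest-path efficiency, adding a redundant parallel path lowers the effective resistance and therefore raises this metric, which captures the multipath effect the abstract emphasizes.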

  13. Efficient implementation of a real-time estimation system for thalamocortical hidden Parkinsonian properties

    NASA Astrophysics Data System (ADS)

    Yang, Shuangming; Deng, Bin; Wang, Jiang; Li, Huiyan; Liu, Chen; Fietkiewicz, Chris; Loparo, Kenneth A.

    2017-01-01

    Real-time estimation of the dynamical characteristics of thalamocortical (TC) cells, such as the dynamics of ion channels and membrane potentials, is useful and essential in the study of the thalamus in the Parkinsonian state. However, measuring the dynamical properties of ion channels is extremely challenging experimentally and even impossible in clinical applications. This paper presents and evaluates a real-time estimation system for hidden thalamocortical properties. For the sake of efficiency, we use a field-programmable gate array (FPGA) for strictly hardware-based computation and algorithm optimization. In the proposed system, an FPGA-based unscented Kalman filter is applied to a conductance-based TC neuron model. Since the complexity of the TC neuron model constrains its parallel hardware implementation, a cost-efficient model is proposed that reduces the resource cost while retaining the relevant ionic dynamics. Experimental results demonstrate the real-time capability to estimate hidden thalamocortical properties with high precision under both normal and Parkinsonian states. Beyond estimating the hidden properties of the thalamus and exploring the mechanism of the Parkinsonian state, the proposed method can be useful in the dynamic clamp technique of electrophysiological experiments, in neural control engineering, and in brain-machine interface studies.

  14. [Analysis of cost and efficiency of a medical nursing unit using time-driven activity-based costing].

    PubMed

    Lim, Ji Young; Kim, Mi Ja; Park, Chang Gi

    2011-08-01

    Time-driven activity-based costing was applied to analyze the nursing activity cost and efficiency of a medical unit. Data were collected at a medical unit of a general hospital. Nursing activities were measured using a nursing activities inventory and classified into 6 domains using the Easley-Storfjell Instrument. Descriptive statistics were used to identify the general characteristics of the unit, nursing activities, and activity time, and a stochastic frontier model was adopted to estimate true activity time. The average efficiency of the medical unit based on theoretical resource capacity was 77%, whereas the efficiency based on practical resource capacity was 96%; accordingly, the portions of non-value-added time were estimated at 23% and 4%, respectively. Total nursing activity costs were estimated at 109,860,977 won under traditional activity-based costing and 84,427,126 won under time-driven activity-based costing, a difference of 25,433,851 won. These results indicate that time-driven activity-based costing provides useful and more realistic information about the efficiency of unit operation compared to traditional activity-based costing, so time-driven activity-based costing is recommended as a performance evaluation framework for nursing departments based on cost management.
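
    A hedged sketch of the time-driven activity-based costing calculation used above: a capacity cost rate (total resource cost divided by practical capacity) is multiplied by the unit time and volume of each activity, and whatever capacity remains unassigned surfaces as unused capacity. All figures are illustrative placeholders, not the unit's data.

    ```python
    RESOURCE_COST = 84_000_000     # won per month, all nursing resources (toy)
    PRACTICAL_CAPACITY = 42_000    # minutes per month, e.g. ~80% of theoretical
    rate = RESOURCE_COST / PRACTICAL_CAPACITY   # capacity cost rate, won/minute

    activities = {                 # activity -> (minutes per event, events/month)
        "direct care":   (15, 1200),
        "medication":    (8, 900),
        "documentation": (5, 2000),
    }
    for name, (minutes, volume) in activities.items():
        print(f"{name:>14}: {rate * minutes * volume:,.0f} won")

    used = sum(m * v for m, v in activities.values())
    print(f"unused capacity: {100 * (1 - used / PRACTICAL_CAPACITY):.1f}%")
    ```

    The key difference from traditional activity-based costing is visible in the last line: time not consumed by activities is reported as unused capacity instead of being spread across the activity costs.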

  15. Comparing administered and market-based water allocation systems using an agent-based modeling approach

    NASA Astrophysics Data System (ADS)

    Zhao, J.; Cai, X.; Wang, Z.

    2009-12-01

    It has been well recognized that market-based systems can have significant advantages over administered systems for water allocation. However, there are still few successful water markets around the world, and administered systems remain common in water allocation management practice. This paradox has been under discussion for decades and still calls for attention in both research and practice. This paper explores insights into the paradox and tries to address why market systems have not been widely implemented for water allocation. Adopting the theory of agent-based systems, we develop a consistent analytical model to interpret both systems. First, we derive some theorems based on the analytical model with respect to the necessary conditions for economic efficiency of water allocation. Following that, the agent-based model is used to illustrate the coherence and differences between administered and market-based systems. The two systems are compared from three aspects: 1) the driving forces acting on the system state, 2) system efficiency, and 3) equity. Regarding economic efficiency, a penalty on the violation of water use permits (or rights) under an administered system can lead to system-wide economic efficiency while remaining acceptable to some agents, following the theory of so-called rational violation. Ideal equity is realized if the penalty equals the incentive under an administered system, and if transaction costs are zero under a market system. The performances of both the agents and the overall system are explained under the administered and market-based systems, respectively. The performances of agents are subject to the different mechanisms of interaction between agents under the two systems. The system emergence (i.e., system benefit, equilibrium market price, etc.), resulting from performance at the agent level, reflects the different mechanisms of the two systems: the “invisible hand” in the market system and administrative measures (penalty and subsidy) in the administered system. Furthermore, the impact of hydrological uncertainty on the performance of water users under the two systems is analyzed by extending the deterministic model to a stochastic one subject to the uncertainty of water availability. It is found that the system response to hydrologic uncertainty depends on the risk management mechanism: sharing risk equally among the agents or assigning prescribed priorities to some agents. Figure 1. Agent formulation and its implications in administered and market-based systems.

  16. Feature Extraction of Event-Related Potentials Using Wavelets: An Application to Human Performance Monitoring

    NASA Technical Reports Server (NTRS)

    Trejo, Leonard J.; Shensa, Mark J.; Remington, Roger W. (Technical Monitor)

    1998-01-01

    This report describes the development and evaluation of mathematical models for predicting human performance from discrete wavelet transforms (DWT) of event-related potentials (ERP) elicited by task-relevant stimuli. The DWT was compared to principal components analysis (PCA) for representation of ERPs in linear regression and neural network models developed to predict a composite measure of human signal detection performance. Linear regression models based on coefficients of the decimated DWT predicted signal detection performance with half as many free parameters as comparable models based on PCA scores. In addition, the DWT-based models were more resistant to model degradation due to over-fitting than PCA-based models. Feed-forward neural networks were trained using the backpropagation algorithm to predict signal detection performance based on raw ERPs, PCA scores, or high-power coefficients of the DWT. Neural networks based on high-power DWT coefficients trained with fewer iterations, generalized to new data better, and were more resistant to overfitting than networks based on raw ERPs. Networks based on PCA scores did not generalize to new data as well as either the DWT network or the raw ERP network. The results show that wavelet expansions represent the ERP efficiently and extract behaviorally important features for use in linear regression or neural network models of human performance. The efficiency of the DWT is discussed in terms of its decorrelation and energy compaction properties. In addition, the DWT models provided evidence that a pattern of low-frequency activity (1 to 3.5 Hz) occurring at specific times and scalp locations is a reliable correlate of human signal detection performance.
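
    A minimal sketch of the DWT feature pipeline described above: a decimated discrete wavelet transform of each ERP epoch, retention of the highest-power coefficients, and a linear regression onto a performance measure. The data here are synthetic, and the wavelet family ('db4'), decomposition level, and the rule of keeping the 16 highest-mean-power coefficients are assumptions for illustration, not the report's settings.

    ```python
    import numpy as np
    import pywt

    rng = np.random.default_rng(0)
    n_trials, n_samples = 100, 256
    erps = rng.standard_normal((n_trials, n_samples))  # stand-in for ERP epochs
    performance = rng.standard_normal(n_trials)        # stand-in performance index

    # Decimated DWT of every epoch, flattened to one coefficient vector per trial.
    coeffs = np.array([np.concatenate(pywt.wavedec(e, "db4", level=4))
                       for e in erps])

    # Keep the coefficients with the highest average power across trials.
    n_keep = 16
    keep = np.argsort((coeffs ** 2).mean(axis=0))[-n_keep:]
    X = np.c_[np.ones(n_trials), coeffs[:, keep]]      # design matrix + intercept

    beta, *_ = np.linalg.lstsq(X, performance, rcond=None)
    pred = X @ beta
    ss_res = ((performance - pred) ** 2).sum()
    ss_tot = ((performance - performance.mean()) ** 2).sum()
    print(f"training R^2: {1 - ss_res / ss_tot:.2f}")
    ```

    Keeping only high-power coefficients is what gives the regression roughly half the free parameters of a PCA-score model of comparable fidelity, per the report's comparison.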

  17. Feature extraction of event-related potentials using wavelets: an application to human performance monitoring

    NASA Technical Reports Server (NTRS)

    Trejo, L. J.; Shensa, M. J.

    1999-01-01

    This report describes the development and evaluation of mathematical models for predicting human performance from discrete wavelet transforms (DWT) of event-related potentials (ERP) elicited by task-relevant stimuli. The DWT was compared to principal components analysis (PCA) for representation of ERPs in linear regression and neural network models developed to predict a composite measure of human signal detection performance. Linear regression models based on coefficients of the decimated DWT predicted signal detection performance with half as many free parameters as comparable models based on PCA scores. In addition, the DWT-based models were more resistant to model degradation due to over-fitting than PCA-based models. Feed-forward neural networks were trained using the backpropagation algorithm to predict signal detection performance based on raw ERPs, PCA scores, or high-power coefficients of the DWT. Neural networks based on high-power DWT coefficients trained with fewer iterations, generalized to new data better, and were more resistant to overfitting than networks based on raw ERPs. Networks based on PCA scores did not generalize to new data as well as either the DWT network or the raw ERP network. The results show that wavelet expansions represent the ERP efficiently and extract behaviorally important features for use in linear regression or neural network models of human performance. The efficiency of the DWT is discussed in terms of its decorrelation and energy compaction properties. In addition, the DWT models provided evidence that a pattern of low-frequency activity (1 to 3.5 Hz) occurring at specific times and scalp locations is a reliable correlate of human signal detection performance. Copyright 1999 Academic Press.

  18. Predicting commuter flows in spatial networks using a radiation model based on temporal ranges

    NASA Astrophysics Data System (ADS)

    Ren, Yihui; Ercsey-Ravasz, Mária; Wang, Pu; González, Marta C.; Toroczkai, Zoltán

    2014-11-01

    Understanding network flows such as commuter traffic in large transportation networks is an ongoing challenge due to the complex nature of the transportation infrastructure and human mobility. Here we show a first-principles based method for traffic prediction using a cost-based generalization of the radiation model for human mobility, coupled with a cost-minimizing algorithm for efficient distribution of the mobility fluxes through the network. Using US census and highway traffic data, we show that traffic can efficiently and accurately be computed from a range-limited, network betweenness type calculation. The model based on travel time costs captures the log-normal distribution of the traffic and attains a high Pearson correlation coefficient (0.75) when compared with real traffic. Because of its principled nature, this method can inform many applications related to human mobility driven flows in spatial networks, ranging from transportation, through urban planning to mitigation of the effects of catastrophic events.
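
    For orientation, the sketch below evaluates the original, population-based radiation model flux formula; the study above generalizes this to travel-time costs and couples it with network routing, which is not reproduced here. The numbers in the example are made up.

    ```python
    def radiation_flux(T_i, m_i, n_j, s_ij):
        """Expected commuters from i to j under the radiation model:
        T_i    - total trips leaving origin i
        m_i    - population of origin i
        n_j    - population of destination j
        s_ij   - population within radius r_ij of i, excluding m_i and n_j."""
        return T_i * (m_i * n_j) / ((m_i + s_ij) * (m_i + n_j + s_ij))

    # e.g. 1000 commuters leave a town of 50k for a city of 200k, with 100k
    # people living closer to the origin than the destination does.
    print(f"{radiation_flux(1000, 5e4, 2e5, 1e5):.0f} commuters")  # ~190
    ```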

  19. Highly efficient model updating for structural condition assessment of large-scale bridges.

    DOT National Transportation Integrated Search

    2015-02-01

    For efficiently updating models of large-scale structures, the response surface (RS) method based on radial basis functions (RBFs) is proposed to model the input-output relationship of structures. The key issues for applying the proposed method a...

  20. Collaborative video caching scheme over OFDM-based long-reach passive optical networks

    NASA Astrophysics Data System (ADS)

    Li, Yan; Dai, Shifang; Chang, Xiangmao

    2018-07-01

    Long-reach passive optical networks (LR-PONs) are now considered a desirable access solution for cost-efficiently delivering broadband services by integrating the metro network with the access network; among these, orthogonal frequency division multiplexing (OFDM)-based LR-PONs attract greater research interest due to their good robustness and high spectrum efficiency. In such attractive OFDM-based LR-PONs, however, it is still challenging to effectively provide video service, one of the most popular and profitable broadband services, to end users. Given that many video requesters (i.e., end users) served in OFDM-based LR-PONs are far away from the optical line terminal (OLT), the traditional video delivery model, which relies on the OLT to transmit videos to requesters, is inefficient for providing video service: it incurs not only larger video playback delay but also higher downstream bandwidth consumption. In this paper, we propose a novel video caching scheme that collaboratively caches videos on distributed optical network units (ONUs), which are closer to end users, and thus provides videos to requesters in a timely and cost-efficient manner over OFDM-based LR-PONs. We first construct an OFDM-based LR-PON architecture to enable cooperation among ONUs while caching videos. Given the limited storage capacity of each ONU, we then propose collaborative approaches to cache videos on ONUs with the aim of maximizing the local video hit ratio (LVHR), i.e., the proportion of video requests that can be directly satisfied by ONUs, under diverse resource requirements and request distributions of videos. Simulations are finally conducted to evaluate the efficiency of our proposed scheme.

  1. Energy Efficient Engine acoustic supporting technology report

    NASA Technical Reports Server (NTRS)

    Lavin, S. P.; Ho, P. Y.

    1985-01-01

    The acoustic development of the Energy Efficient Engine combined testing and analysis using scale model rigs and an integrated Core/Low Spool (ICLS) demonstration engine. The scale model tests show that a cut-on blade/vane ratio fan with a large spacing (S/C = 2.3) is as quiet as a cut-off blade/vane ratio fan with a tighter spacing (S/C = 1.27). Scale model mixer tests show that separate flow nozzles are the noisiest and conic nozzles the quietest, with forced mixers in between. Based on projections of ICLS data, the Energy Efficient Engine (E3) has FAR 36 margins of 3.7 EPNdB at approach, 4.5 EPNdB at full-power takeoff, and 7.2 EPNdB at sideline conditions.

  2. Analysis of the interrelationship of energy, economy, and environment: A model of a sustainable energy future for Korea

    NASA Astrophysics Data System (ADS)

    Boo, Kyung-Jin

    The primary purpose of this dissertation is to provide the groundwork for a sustainable energy future in Korea. For this purpose, a conceptual framework of sustainable energy development was developed to provide a deeper understanding of the interrelationships between energy, the economy, and the environment (E3). Based on this theoretical work, an empirical simulation model was developed to investigate the ways in which the three interact. This dissertation attempts to develop a unified concept of sustainable energy development by surveying multiple efforts to integrate various definitions of sustainability. Sustainable energy development should be built on the basis of three principles: ecological carrying capacity, economic efficiency, and socio-political equity. Ecological carrying capacity delineates the earth's resource constraints as well as its ability to assimilate wastes. Socio-political equity implies an equitable distribution of the benefits and costs of energy consumption and an equitable distribution of environmental burdens. Economic efficiency dictates the efficient allocation of scarce resources. The simulation model is composed of three modules: an energy module, an environmental module, and an economic module. Because the model is grounded in economic structural behaviorism, the dynamic nature of the current economy is effectively depicted and simulated by manipulating exogenous policy variables. This macro-economic model is used to simulate six major policy intervention scenarios. Major findings from these policy simulations were: (1) carbon taxes are the most effective means of reducing air-pollutant emissions; (2) sustainable energy development can be achieved through reinvestment of carbon taxes into energy efficiency and renewable energy programs; and (3) carbon taxes would increase a nation's welfare if reinvested in relevant areas. The policy simulation model, because it is based on neoclassical economics, has limitations: it cannot fully account for socio-political realities (inter- and intra-generational equity) that are core features of sustainability. Thus, alternative approaches based on qualitative analysis, such as the multi-criteria approach, will be required to complement the current policy simulation model.

  3. Modeling and optimization by particle swarm embedded neural network for adsorption of zinc (II) by palm kernel shell based activated carbon from aqueous environment.

    PubMed

    Karri, Rama Rao; Sahu, J N

    2018-01-15

    Zn(II) is one of the common heavy-metal pollutants found in industrial effluents. Removal of pollutants from industrial effluents can be accomplished by various techniques, among which adsorption has been found to be efficient. However, the application of adsorption is limited by the high cost of adsorbents. In this regard, a low-cost adsorbent produced from palm oil kernel shell, an agricultural waste, is examined for its efficiency in removing Zn(II) from waste water and aqueous solution. The influence of independent process variables, such as initial concentration, pH, residence time, activated carbon (AC) dosage, and process temperature, on the removal of Zn(II) by palm kernel shell based AC in a batch adsorption process is studied systematically. Based on the experimental design matrix, 50 experimental runs are performed with each process variable in the experimental range. The optimal values of the process variables for maximum removal efficiency are studied using response surface methodology (RSM) and artificial neural network (ANN) approaches. A quadratic model consisting of first-order and second-order regression terms is developed using analysis of variance within the RSM-CCD framework. Particle swarm optimization, a meta-heuristic method, is embedded in the ANN architecture to optimize the search space of the neural network. The optimized, trained neural network fits the testing and validation data well, with R² equal to 0.9106 and 0.9279, respectively. The outcomes indicate the superiority of the ANN-PSO model predictions over the quadratic model predictions provided by RSM. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Optimization Control of the Color-Coating Production Process for Model Uncertainty

    PubMed Central

    He, Dakuo; Wang, Zhengsong; Yang, Le; Mao, Zhizhong

    2016-01-01

    Optimized control of the color-coating production process (CCPP) aims at reducing production costs and improving economic efficiency while meeting quality requirements. However, because optimization control of the CCPP is hampered by model uncertainty, a strategy that considers model uncertainty is proposed. Previous work has introduced a mechanistic model of CCPP based on process analysis to simulate the actual production process and generate process data. The partial least squares method is then applied to develop predictive models of film thickness and economic efficiency. To manage the model uncertainty, the robust optimization approach is introduced to improve the feasibility of the optimized solution. Iterative learning control is then utilized to further refine the model uncertainty. The constrained film thickness is transformed into one of the tracked targets to overcome the drawback that traditional iterative learning control cannot address constraints. The goal setting of economic efficiency is updated continuously according to the film thickness setting until this reaches its desired value. Finally, fuzzy parameter adjustment is adopted to ensure that the economic efficiency and film thickness converge rapidly to their optimized values under the constraint conditions. The effectiveness of the proposed optimization control strategy is validated by simulation results. PMID:27247563

  5. Optimization Control of the Color-Coating Production Process for Model Uncertainty.

    PubMed

    He, Dakuo; Wang, Zhengsong; Yang, Le; Mao, Zhizhong

    2016-01-01

    Optimized control of the color-coating production process (CCPP) aims at reducing production costs and improving economic efficiency while meeting quality requirements. However, because optimization control of the CCPP is hampered by model uncertainty, a strategy that considers model uncertainty is proposed. Previous work has introduced a mechanistic model of CCPP based on process analysis to simulate the actual production process and generate process data. The partial least squares method is then applied to develop predictive models of film thickness and economic efficiency. To manage the model uncertainty, the robust optimization approach is introduced to improve the feasibility of the optimized solution. Iterative learning control is then utilized to further refine the model uncertainty. The constrained film thickness is transformed into one of the tracked targets to overcome the drawback that traditional iterative learning control cannot address constraints. The goal setting of economic efficiency is updated continuously according to the film thickness setting until this reaches its desired value. Finally, fuzzy parameter adjustment is adopted to ensure that the economic efficiency and film thickness converge rapidly to their optimized values under the constraint conditions. The effectiveness of the proposed optimization control strategy is validated by simulation results.

  6. Increasing the efficiency of designing hemming processes by using an element-based metamodel approach

    NASA Astrophysics Data System (ADS)

    Kaiser, C.; Roll, K.; Volk, W.

    2017-09-01

    In the automotive industry, the manufacturing of automotive outer panels requires hemming processes in which two sheet metal parts are joined together by bending the flange of the outer part over the inner part. Because of decreasing development times and the steadily growing number of vehicle derivatives, an efficient digital product and process validation is necessary. Commonly used simulations, which are based on the finite element method, demand significant modelling effort, which results in disadvantages especially in the early product development phase. To increase the efficiency of designing hemming processes this paper presents a hemming-specific metamodel approach. The approach includes a part analysis in which the outline of the automotive outer panels is initially split into individual segments. By parametrizing each of the segments and assigning basic geometric shapes, the outline of the part is approximated. Based on this, the hemming parameters such as flange length, roll-in, wrinkling and plastic strains are calculated for each of the geometric basic shapes by performing a metamodel-based segmental product validation. The metamodel is based on an element-similar formulation that includes a reference dataset of various geometric basic shapes. A random automotive outer panel can now be analysed and optimized based on the hemming-specific database. By implementing this approach into a planning system, an efficient optimization of designing hemming processes will be enabled. Furthermore, valuable time and cost benefits can be realized in a vehicle’s development process.

  7. Optimum use of air tankers in initial attack: selection, basing, and transfer rules

    Treesearch

    Francis E. Greulich; William G. O' Regan

    1982-01-01

    Fire managers face two interrelated problems in deciding the most efficient use of air tankers: where best to base them, and how best to reallocate them each day in anticipation of fire occurrence. A computerized model based on a mixed integer linear program can help in assigning air tankers throughout the fire season. The model was tested using information from...

  8. Virtual optical network mapping and core allocation in elastic optical networks using multi-core fibers

    NASA Astrophysics Data System (ADS)

    Xuan, Hejun; Wang, Yuping; Xu, Zhanqi; Hao, Shanshan; Wang, Xiaoli

    2017-11-01

    Virtualization technology can greatly improve the efficiency of networks by allowing virtual optical networks to share the resources of the physical networks. However, it faces some challenges, such as finding efficient strategies for virtual node mapping, virtual link mapping, and spectrum assignment. These challenges are even more complex when the physical elastic optical networks use multi-core fibers. To tackle them, we establish a constrained optimization model to determine the optimal schemes of optical network mapping, core allocation, and spectrum assignment. To solve the model efficiently, a tailor-made encoding scheme and crossover and mutation operators are designed. Based on these, an efficient genetic algorithm is proposed to obtain the optimal schemes of virtual node mapping, virtual link mapping, and core allocation. Simulation experiments are conducted on three widely used networks, and the experimental results show the effectiveness of the proposed model and algorithm.

  9. Research and development of energy-efficient appliance motor-compressors. Volume IV. Production demonstration and field test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Middleton, M.G.; Sauber, R.S.

    Two models of a high-efficiency compressor were manufactured in a pilot production run. These compressors were for low back-pressure applications. Although based on a production compressor, the design included many changes that required new production processes; some changes were performed within our company and others were made by outside vendors. The compressors were used in top-mount refrigerator-freezers and sold through normal distribution channels. Forty units were placed in residences for a one-year field test, and additional compressors were built so that a life test program could be performed. The results of the field test reveal a 27.0% improvement in energy consumption for the 18 ft³ high-efficiency model and a 15.6% improvement for the 21 ft³ high-efficiency model, as compared to the standard production units.

  10. Screening Magnetic Resonance Imaging-Based Prediction Model for Assessing Immediate Therapeutic Response to Magnetic Resonance Imaging-Guided High-Intensity Focused Ultrasound Ablation of Uterine Fibroids.

    PubMed

    Kim, Young-sun; Lim, Hyo Keun; Park, Min Jung; Rhim, Hyunchul; Jung, Sin-Ho; Sohn, Insuk; Kim, Tae-Joong; Keserci, Bilgin

    2016-01-01

    The aim of this study was to fit and validate screening magnetic resonance imaging (MRI)-based prediction models for assessing immediate therapeutic responses of uterine fibroids to MRI-guided high-intensity focused ultrasound (MR-HIFU) ablation. Informed consent from all subjects was obtained for our institutional review board-approved study. A total of 240 symptomatic uterine fibroids (mean diameter, 6.9 cm) in 152 women (mean age, 43.3 years) treated with MR-HIFU ablation were retrospectively analyzed (160 fibroids for training, 80 fibroids for validation). Screening MRI parameters (subcutaneous fat thickness [mm], x1; relative peak enhancement [%] in semiquantitative perfusion MRI, x2; T2 signal intensity ratio of fibroid to skeletal muscle, x3) were used to fit prediction models with regard to ablation efficiency (nonperfused volume/treatment cell volume, y1) and ablation quality (grade 1-5, poor to excellent, y2), respectively, using the generalized estimating equation method. Cutoff values for achievement of treatment intent (efficiency >1.0; quality grade 4/5) were determined based on receiver operating characteristic curve analysis. Prediction performances were validated by calculating positive and negative predictive values. Generalized estimating equation analyses yielded models of y1 = 2.2637 - 0.0415x1 - 0.0011x2 - 0.0772x3 and y2 = 6.8148 - 0.1070x1 - 0.0050x2 - 0.2163x3. Cutoff values were 1.312 for ablation efficiency (area under the curve, 0.7236; sensitivity, 0.6882; specificity, 0.6866) and 4.019 for ablation quality (0.8794; 0.7156; 0.9020). Positive and negative predictive values were 0.917 and 0.500 for ablation efficiency and 0.978 and 0.600 for ablation quality, respectively. Screening MRI-based prediction models for assessing immediate therapeutic responses of uterine fibroids to MR-HIFU ablation were fitted and validated, which may reduce the risk of unsuccessful treatment.
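
    The fitted screening models quoted above can be applied directly; the regression coefficients and cutoffs below are taken from the abstract (x1 is subcutaneous fat thickness in mm, x2 relative peak enhancement in %, and x3 the T2 signal-intensity ratio of fibroid to muscle), while the example inputs are hypothetical values for illustration.

    ```python
    def predict_ablation(x1, x2, x3):
        """Screening-MRI predictions of MR-HIFU ablation outcome, using the
        GEE-fitted linear models and ROC-derived cutoffs reported in the study."""
        efficiency = 2.2637 - 0.0415 * x1 - 0.0011 * x2 - 0.0772 * x3
        quality = 6.8148 - 0.1070 * x1 - 0.0050 * x2 - 0.2163 * x3
        return {
            "efficiency": efficiency,
            "efficiency_ok": efficiency > 1.312,  # reported efficiency cutoff
            "quality": quality,
            "quality_ok": quality > 4.019,        # reported quality cutoff
        }

    # Hypothetical patient: 10 mm fat, 120% peak enhancement, T2 ratio of 2.0.
    print(predict_ablation(x1=10.0, x2=120.0, x3=2.0))
    ```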

  11. Three-dimensional deformable-model-based localization and recognition of road vehicles.

    PubMed

    Zhang, Zhaoxiang; Tan, Tieniu; Huang, Kaiqi; Wang, Yunhong

    2012-01-01

    We address the problem of model-based object recognition. Our aim is to localize and recognize road vehicles from monocular images or videos in calibrated traffic scenes. A 3-D deformable vehicle model with 12 shape parameters is set up as prior information, and its pose is determined by three parameters, which are its position on the ground plane and its orientation about the vertical axis under ground-plane constraints. An efficient local gradient-based method is proposed to evaluate the fitness between the projection of the vehicle model and image data, which is combined into a novel evolutionary computing framework to estimate the 12 shape parameters and three pose parameters by iterative evolution. The recovery of pose parameters achieves vehicle localization, whereas the shape parameters are used for vehicle recognition. Numerous experiments are conducted in this paper to demonstrate the performance of our approach. It is shown that the local gradient-based method can evaluate the fitness between the projection of the vehicle model and the image data accurately and efficiently. The evolutionary computing framework is effective for vehicles of different types and poses and is robust to various kinds of occlusion.

  12. A COMPREHENSIVE APPROACH FOR PHYSIOLOGICALLY BASED PHARMACOKINETIC (PBPK) MODELS USING THE EXPOSURE RELATED DOSE ESTIMATING MODEL (ERDEM) SYSTEM

    EPA Science Inventory

    The implementation of a comprehensive PBPK modeling approach resulted in ERDEM, a complex PBPK modeling system. ERDEM provides a scalable and user-friendly environment that enables researchers to focus on data input values rather than writing program code. ERDEM efficiently m...

  13. Sparse RNA folding revisited: space-efficient minimum free energy structure prediction.

    PubMed

    Will, Sebastian; Jabbari, Hosna

    2016-01-01

    RNA secondary structure prediction by energy minimization is the central computational tool for the analysis of structural non-coding RNAs and their interactions. Sparsification has been successfully applied to improve the time efficiency of various structure prediction algorithms while guaranteeing the same result; however, for many such folding problems, space efficiency is of even greater concern, particularly for long RNA sequences. So far, space-efficient sparsified RNA folding with fold reconstruction was solved only for simple base-pair-based pseudo-energy models. Here, we revisit the problem of space-efficient free energy minimization. Whereas the space-efficient minimization of the free energy has been sketched before, the reconstruction of the optimum structure has not even been discussed. We show that this reconstruction is not possible in a trivial extension of the method for simple energy models. Then, we present the time- and space-efficient sparsified free energy minimization algorithm SparseMFEFold that guarantees MFE structure prediction. In particular, this novel algorithm provides efficient fold reconstruction based on dynamically garbage-collected trace arrows. The complexity of our algorithm depends on two parameters, the number of candidates Z and the number of trace arrows T; both are bounded by O(n²), but are typically much smaller. The time complexity of RNA folding is reduced from O(n³) to O(n² + nZ); the space complexity, from O(n²) to O(n + T + Z). Our empirical results show more than 80% space savings over RNAfold [Vienna RNA package] on the long RNAs from the RNA STRAND database (≥2500 bases). The presented technique is generalizable to complex prediction algorithms; due to their high space demands, algorithms such as pseudoknot prediction and RNA-RNA interaction prediction are expected to benefit even more strongly than "standard" MFE folding. SparseMFEFold is free software, available at http://www.bioinf.uni-leipzig.de/~will/Software/SparseMFEFold.

  14. Estimation of Transport and Kinetic Parameters of Vanadium Redox Batteries Using Static Cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Seong Beom; Pratt, III, Harry D.; Anderson, Travis M.

    Mathematical models of Redox Flow Batteries (RFBs) can be used to analyze cell performance, optimize battery operation, and control the energy storage system efficiently. Among many other models, physics-based electrochemical models are capable of predicting internal states of the battery, such as temperature, state-of-charge, and state-of-health. Parameter estimation is an important step in studying, analyzing, and validating these models against experimental data. A common practice is to determine the parameters either by conducting experiments or from information available in the literature. However, it is not easy to obtain all of the required parameters this way: important information, such as diffusion coefficients and rate constants of ions, has sometimes not been studied, and the parameters needed for modeling charge-discharge are not always available. In this paper, an efficient way to estimate the parameters of physics-based redox battery models is proposed. Furthermore, this paper demonstrates that the proposed approach can be used to study and analyze capacity loss/fade, kinetics, and transport phenomena of the RFB system.
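
    The abstract does not spell out the estimation procedure, so the following is only a generic sketch of the idea: a toy two-exponential relaxation model, with a hypothetical diffusion-like constant and rate constant, is fitted to noisy synthetic static-cell data by nonlinear least squares.

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(1)
        t = np.linspace(0.0, 50.0, 120)                 # time (s), hypothetical

        def relaxation(params, t):
            diff_c, rate_k = params                     # toy transport/kinetic parameters
            return 0.6 * np.exp(-diff_c * t) + 0.4 * np.exp(-rate_k * t)

        true = np.array([0.20, 0.02])
        observed = relaxation(true, t) + rng.normal(0.0, 0.005, t.size)

        fit = least_squares(lambda p: relaxation(p, t) - observed,
                            x0=[0.5, 0.05], bounds=([1e-4, 1e-4], [5.0, 5.0]))
        print("estimated parameters:", fit.x)           # close to [0.20, 0.02]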

  15. Toward efficient biomechanical-based deformable image registration of lungs for image-guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Al-Mayah, Adil; Moseley, Joanne; Velec, Mike; Brock, Kristy

    2011-08-01

    Both accuracy and efficiency are critical for the implementation of biomechanical model-based deformable registration in clinical practice. The focus of this investigation is to evaluate the potential of improving the efficiency of deformable image registration of the human lungs without loss of accuracy. Three-dimensional finite element models have been developed using image data of 14 lung cancer patients. Each model consists of two lungs, tumor and external body. Sliding of the lungs inside the chest cavity is modeled using a frictionless surface-based contact model. The effects of element type, finite deformation and elasticity on accuracy and computing time are investigated. Linear and quadratic tetrahedral elements are used with linear and nonlinear geometric analysis. Two types of material properties are applied, namely elastic and hyperelastic. The accuracy of each of the four models is examined using a number of anatomical landmarks representing vessel bifurcation points distributed across the lungs. The registration error is not significantly affected by the element type or linearity of analysis, with an average vector error of around 2.8 mm. The displacement differences between linear and nonlinear analysis methods are calculated for all lung nodes, and a maximum value of 3.6 mm is found in one of the nodes near the entrance of the bronchial tree into the lungs. The 95th percentile of displacement difference ranges between 0.4 and 0.8 mm. However, the time required for the analysis is reduced from 95 min for the quadratic-element, nonlinear-geometry model to 3.4 min for the linear-element, linear-geometry model. Therefore, using linear tetrahedral elements with linear elastic materials and linear geometry is preferable for modeling the breathing motion of lungs for image-guided radiotherapy applications.

  16. Computed tomography landmark-based semi-automated mesh morphing and mapping techniques: generation of patient specific models of the human pelvis without segmentation.

    PubMed

    Salo, Zoryana; Beek, Maarten; Wright, David; Whyne, Cari Marisa

    2015-04-13

    Current methods for the development of pelvic finite element (FE) models generally are based upon specimen-specific computed tomography (CT) data. This approach has traditionally required segmentation of CT data sets, which is time consuming and necessitates high levels of user intervention due to the complex pelvic anatomy. The purpose of this research was to develop and assess CT landmark-based semi-automated mesh morphing and mapping techniques to aid the generation and mechanical analysis of specimen-specific FE models of the pelvis without the need for segmentation. A specimen-specific pelvic FE model (source) was created using traditional segmentation methods and morphed onto a CT scan of a different (target) pelvis using a landmark-based method. The morphed model was then refined through mesh mapping by moving the nodes to the bone boundary. A second target model was created using traditional segmentation techniques. CT intensity based material properties were assigned to the morphed/mapped model and to the traditionally segmented target model. Models were analyzed to evaluate their geometric concurrency and strain patterns. Strains generated in a double-leg stance configuration were compared to experimental strain gauge data generated from the same target cadaver pelvis. The CT landmark-based morphing and mapping techniques were efficiently applied to create a geometrically complex specimen-specific pelvic FE model, which was similar to the traditionally segmented target model and better replicated the experimental strain results (R² = 0.873). This study has shown that mesh morphing and mapping represents an efficient, validated approach for pelvic FE model generation without the need for segmentation.
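
    As a minimal stand-in for the landmark-based morphing step (the subsequent mapping of nodes to the bone boundary is not shown), the sketch below estimates a 3-D affine transform from paired source/target landmarks by least squares and applies it to the source-mesh nodes; all coordinates are invented.

        import numpy as np

        # Paired anatomical landmarks picked on source and target CT (hypothetical).
        src_lm = np.array([[0, 0, 0], [50, 5, 2], [10, 60, 8],
                           [8, 12, 70], [45, 55, 40]], dtype=float)
        tgt_lm = np.array([[2, 1, -1], [54, 8, 3], [12, 66, 10],
                           [9, 14, 78], [49, 61, 45]], dtype=float)

        def fit_affine(src, tgt):
            """Least-squares 3-D affine transform: tgt ~ [src | 1] @ M, M is 4x3."""
            A = np.hstack([src, np.ones((len(src), 1))])
            M, *_ = np.linalg.lstsq(A, tgt, rcond=None)
            return M

        M = fit_affine(src_lm, tgt_lm)

        def morph(nodes, M):             # warp every source-mesh node to target space
            return np.hstack([nodes, np.ones((len(nodes), 1))]) @ M

        mesh_nodes = np.array([[20.0, 30.0, 25.0], [5.0, 7.0, 3.0]])  # toy mesh
        print(morph(mesh_nodes, M))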

  17. Optimizing the recovery of copper from electroplating rinse bath solution by hollow fiber membrane.

    PubMed

    Oskay, Kürşad Oğuz; Kul, Mehmet

    2015-01-01

    This study aimed to recover and remove copper from an industrial model wastewater solution by non-dispersive solvent extraction (NDSX). Two mathematical models were developed to simulate the performance of an integrated extraction-stripping process based on the use of hollow fiber contactors, using the response surface method. The models allow one to predict the time-dependent efficiencies of the two phases involved in the individual extraction or stripping processes. The optimal recovery parameters, determined by central composite design (CCD), were a 227 g/L H2SO4 concentration, a 1.22 feed/strip ratio, a 450 mL/min flow rate (115.9 cm/min flow velocity) and a 15 vol% LIX 84-I concentration over 270 min. At these optimum conditions, the experimental recovery efficiency was 95.88%, in close agreement with the 97.75% predicted by the model. At the end of the process, almost all the copper in the model wastewater solution was removed and recovered as CuSO4·5H2O salt, which can be reused in the copper electroplating industry.

  18. Raising Public Awareness: The Role of the Household Sector in Mitigating Climate Change

    PubMed Central

    Lin, Shis-Ping

    2015-01-01

    In addition to greenhouse gas emissions from the industrial, transportation and commercial sectors, emissions from the household sector also contribute to global warming. By examining residents of Taiwan (N = 236), this study aims to reveal the factors that influence households’ intention to purchase energy-efficient appliances. The assessment in this study is based on the theory of planned behavior (TPB), and perceived benefit or cost (BOC) is introduced as an independent variable in the proposed efficiency action toward climate change (ECC) model. According to structural equation modeling, most of the indicators presented a good fit to the corresponding ECC model constructs. The analysis indicated that BOC is a good complementary variable to the TPB, as the ECC model explained 61.9% of the variation in intention to purchase energy-efficient appliances, which was higher than that explained by the TPB (58.4%). This result indicates that the ECC model is superior to the TPB. Thus, the strategy of promoting energy-efficient appliances in the household sector should emphasize global warming and include the concept of BOC. PMID:26492262

  1. [Efficiency of preventive dental care in school dentistry system].

    PubMed

    Avraamova, O G; Kolesnik, A G; Kulazhenko, T V; Zadapaeva, S V; Shevchenko, S S

    2014-01-01

    Ways to develop Russian school dentistry are defined and justified based on an analysis of the logistical, personnel, legal, financial and economic basis for reorienting the service toward prevention, which should be a priority under current conditions. The implemented model of school dental care, based on the team work of a dentist and a dental hygienist, proved to be highly efficient and may be recommended for wide introduction into practice.

  2. Unifying Model-Based and Reactive Programming within a Model-Based Executive

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Gupta, Vineet; Norvig, Peter (Technical Monitor)

    1999-01-01

    Real-time, model-based deduction has recently emerged as a vital component in AI's tool box for developing highly autonomous reactive systems. Yet one of the current hurdles towards developing model-based reactive systems is the number of methods simultaneously employed, and their corresponding melange of programming and modeling languages. This paper offers an important step towards unification. We introduce RMPL, a rich modeling language that combines probabilistic, constraint-based modeling with reactive programming constructs, while offering a simple semantics in terms of hidden-state Markov processes. We introduce probabilistic, hierarchical constraint automata (PHCA), which allow Markov processes to be expressed in a compact representation that preserves the modularity of RMPL programs. Finally, a model-based executive, called Reactive Burton, is described that exploits this compact encoding to perform efficient simulation, belief state update and control sequence generation.

  3. An Efficient Model-Based Image Understanding Method for an Autonomous Vehicle.

    DTIC Science & Technology

    1997-09-01

    The problem discussed in this dissertation is the development of an efficient method for visual navigation of autonomous vehicles. The approach is to... autonomous vehicles. Thus the new method is implemented as a component of the image-understanding system in the autonomous mobile robot Yamabico-11 at

  4. Genome wide association analyses based on a multiple trait approach for modeling feed efficiency

    USDA-ARS?s Scientific Manuscript database

    Genome wide association (GWA) of feed efficiency (FE) could help target important genomic regions influencing FE. Data provided by an international dairy FE research consortium consisted of phenotypic records on dry matter intakes (DMI), milk energy (MILKE), and metabolic body weight (MBW) on 6,937 ...

  5. Possible world based consistency learning model for clustering and classifying uncertain data.

    PubMed

    Liu, Han; Zhang, Xianchao; Zhang, Xiaotong

    2018-06-01

    The possible world model has been shown to be effective for handling various types of data uncertainty in uncertain data management. However, few uncertain data clustering and classification algorithms have been proposed based on possible worlds. Moreover, existing possible world based algorithms suffer from the following issues: (1) they deal with each possible world independently and ignore the consistency principle across different possible worlds; (2) they require an extra post-processing procedure to obtain the final result, which makes the effectiveness rely heavily on the post-processing method and also limits efficiency. In this paper, we propose a novel possible world based consistency learning model for uncertain data, which can be extended both for clustering and classifying uncertain data. This model utilizes the consistency principle to learn a consensus affinity matrix for uncertain data, which can make full use of the information across different possible worlds and thereby improve clustering and classification performance. Meanwhile, the model imposes a new rank constraint on the Laplacian matrix of the consensus affinity matrix, ensuring that the number of connected components in the consensus affinity matrix is exactly equal to the number of classes. This also means that the clustering and classification results can be obtained directly, without any post-processing procedure. Furthermore, for the clustering and classification tasks, we derive efficient optimization methods to solve the proposed model. Experimental results on real benchmark datasets and real-world uncertain datasets show that the proposed model outperforms state-of-the-art uncertain data clustering and classification algorithms in effectiveness and performs competitively in efficiency.
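
    A simplified sketch of the consistency idea: sample several possible worlds from the uncertain objects, average the per-world affinities into a consensus affinity matrix, and cluster it. The learned consensus and the rank constraint of the actual model are replaced here by plain averaging and off-the-shelf spectral clustering.

        import numpy as np
        from sklearn.cluster import SpectralClustering

        rng = np.random.default_rng(0)
        # Uncertain objects: two groups of means with isotropic positional noise.
        mu = np.vstack([rng.normal(0.0, 0.3, (10, 2)),
                        rng.normal(3.0, 0.3, (10, 2))])

        consensus = np.zeros((len(mu), len(mu)))
        for _ in range(20):                              # 20 sampled possible worlds
            world = mu + rng.normal(0.0, 0.2, mu.shape)
            d2 = ((world[:, None, :] - world[None, :, :]) ** 2).sum(-1)
            consensus += np.exp(-d2 / (2 * 0.5 ** 2))    # RBF affinity per world
        consensus /= 20                                  # consistency across worlds

        labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                                    random_state=0).fit_predict(consensus)
        print(labels)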

  6. Arranging ISO 13606 archetypes into a knowledge base using UML connectors.

    PubMed

    Kopanitsa, Georgy

    2014-01-01

    To enable the efficient reuse of standard-based medical data, we propose to develop a higher-level information model that complements the archetype model of ISO 13606. This model makes use of the relationships that are specified in UML to connect medical archetypes into a knowledge base within a repository. UML connectors were analysed for their ability to be applied in the implementation of a higher-level model that establishes relationships between archetypes. An information model was developed using XML Schema notation. The model allows linking different archetypes of one repository into a knowledge base. Presently it supports several relationships and will be extended in the future.

  7. P-HS-SFM: a parallel harmony search algorithm for the reproduction of experimental data in the continuous microscopic crowd dynamic models

    NASA Astrophysics Data System (ADS)

    Jaber, Khalid Mohammad; Alia, Osama Moh'd.; Shuaib, Mohammed Mahmod

    2018-03-01

    Finding the optimal parameters that can reproduce experimental data (such as the velocity-density relation and the specific flow rate) is a very important component of the validation and calibration of microscopic crowd dynamic models. Heavy computational demand during parameter search is a known limitation of a previously developed model known as the Harmony Search-Based Social Force Model (HS-SFM). In this paper, a parallel-based mechanism is proposed to reduce the computational time and memory resource utilisation required to find these parameters. More specifically, two MATLAB-based multicore techniques (parfor and create independent jobs) using shared memory are developed by taking advantage of the multithreading capabilities of parallel computing, resulting in a new framework called the Parallel Harmony Search-Based Social Force Model (P-HS-SFM). The experimental results show that the parfor-based P-HS-SFM achieved a better computational time of about 26 h, an efficiency improvement of ≈54% and a speedup factor of 2.196 times in comparison with the sequential HS-SFM. The performance of the P-HS-SFM using the create-independent-jobs approach is comparable to parfor, with a computational time of 26.8 h, an efficiency improvement of about 30% and a speedup of 2.137 times.
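
    A Python analogue of the parfor mechanism, shown on an invented calibration objective: candidate parameter pairs of a toy velocity-density model are scored against synthetic data in parallel worker processes.

        import numpy as np
        from multiprocessing import Pool

        RHO = np.linspace(0.5, 4.0, 8)                   # crowd density samples (1/m^2)
        V_OBS = 1.4 * (1.0 - RHO / 5.0)                  # synthetic velocity-density data

        def objective(params):
            """Misfit of a toy velocity-density model against the data."""
            v_max, rho_max = params
            v_model = v_max * (1.0 - RHO / rho_max)
            return float(((v_model - V_OBS) ** 2).sum())

        if __name__ == "__main__":                       # guard required by multiprocessing
            grid = [(v, r) for v in np.linspace(1.0, 2.0, 21)
                           for r in np.linspace(4.0, 6.0, 21)]
            with Pool() as pool:                         # candidates evaluated in parallel
                scores = pool.map(objective, grid)
            print("best (v_max, rho_max):", grid[int(np.argmin(scores))])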

  8. Overland Flow Analysis Using Time Series of Suas-Derived Elevation Models

    NASA Astrophysics Data System (ADS)

    Jeziorska, J.; Mitasova, H.; Petrasova, A.; Petras, V.; Divakaran, D.; Zajkowski, T.

    2016-06-01

    With the advent of the innovative techniques for generating high temporal and spatial resolution terrain models from Unmanned Aerial Systems (UAS) imagery, it has become possible to precisely map overland flow patterns. Furthermore, the process has become more affordable and efficient through the coupling of small UAS (sUAS) that are easily deployed with Structure from Motion (SfM) algorithms that can efficiently derive 3D data from RGB imagery captured with consumer grade cameras. We propose applying the robust overland flow algorithm based on the path sampling technique for mapping flow paths in the arable land on a small test site in Raleigh, North Carolina. By comparing a time series of five flights in 2015 with the results of a simulation based on the most recent lidar derived DEM (2013), we show that the sUAS based data is suitable for overland flow predictions and has several advantages over the lidar data. The sUAS based data captures preferential flow along tillage and more accurately represents gullies. Furthermore the simulated water flow patterns over the sUAS based terrain models are consistent throughout the year. When terrain models are reconstructed only from sUAS captured RGB imagery, however, water flow modeling is only appropriate in areas with sparse or no vegetation cover.

  9. Energy efficiency in waste-to-energy and its relevance with regard to climate control.

    PubMed

    Ragossnig, Arne M; Wartha, Christian; Kirchner, Andreas

    2008-02-01

    This article focuses on systematically highlighting the ways to optimize waste-to-energy plants in terms of their energy efficiency as an indicator of the positive effect with regard to climate control. Potentials for increasing energy efficiency are identified and grouped into categories. The measures mentioned are illustrated by real-world examples. As an example, district cooling as a means for increasing energy efficiency in the district heating network of Vienna is described. Furthermore, a scenario analysis shows the relevance of energy efficiency in waste management scenarios based on thermal treatment of waste with regard to climate control. The description is based on a model that comprises all relevant processes from the collection and transportation up to the thermal treatment of waste. The model has been applied to household-like commercial waste. The alternatives compared are a combined heat and power incinerator, which is being introduced in many places as an industrial utility boiler or in metropolitan areas where there is a demand for district heating, and a classical municipal solid waste incinerator producing solely electrical power. For comparative purposes, a direct landfilling scenario has been included in the scenario analysis. It is shown that the energy efficiency of thermal treatment facilities is crucial to the quantity of greenhouse gases emitted.

  10. Satellite-based terrestrial production efficiency modeling

    PubMed Central

    McCallum, Ian; Wagner, Wolfgang; Schmullius, Christiane; Shvidenko, Anatoly; Obersteiner, Michael; Fritz, Steffen; Nilsson, Sten

    2009-01-01

    Production efficiency models (PEMs) are based on the theory of light use efficiency (LUE) which states that a relatively constant relationship exists between photosynthetic carbon uptake and radiation receipt at the canopy level. Challenges remain however in the application of the PEM methodology to global net primary productivity (NPP) monitoring. The objectives of this review are as follows: 1) to describe the general functioning of six PEMs (CASA; GLO-PEM; TURC; C-Fix; MOD17; and BEAMS) identified in the literature; 2) to review each model to determine potential improvements to the general PEM methodology; 3) to review the related literature on satellite-based gross primary productivity (GPP) and NPP modeling for additional possibilities for improvement; and 4) based on this review, propose items for coordinated research. This review noted a number of possibilities for improvement to the general PEM architecture - ranging from LUE to meteorological and satellite-based inputs. Current PEMs tend to treat the globe similarly in terms of physiological and meteorological factors, often ignoring unique regional aspects. Each of the existing PEMs has developed unique methods to estimate NPP and the combination of the most successful of these could lead to improvements. It may be beneficial to develop regional PEMs that can be combined under a global framework. The results of this review suggest the creation of a hybrid PEM could bring about a significant enhancement to the PEM methodology and thus terrestrial carbon flux modeling. Key items topping the PEM research agenda identified in this review include the following: LUE should not be assumed constant, but should vary by plant functional type (PFT) or photosynthetic pathway; evidence is mounting that PEMs should consider incorporating diffuse radiation; continue to pursue relationships between satellite-derived variables and LUE, GPP and autotrophic respiration (Ra); there is an urgent need for satellite-based biomass measurements to improve Ra estimation; and satellite-based soil moisture data could improve determination of soil water stress. PMID:19765285
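
    The LUE relation underlying all six PEMs can be written in a few lines. The sketch below uses MOD17-style linear ramp scalars for minimum temperature and vapour pressure deficit; the threshold values and the fixed respiration fraction are illustrative, not those of any particular model.

        def ramp(x, lo, hi):
            """Linear 0-1 ramp used for the environmental down-regulation scalars."""
            return max(0.0, min(1.0, (x - lo) / (hi - lo)))

        def daily_gpp(par, fpar, tmin_c, vpd_pa, eps_max=1.0):
            """GPP (g C m-2 d-1) = eps_max * f(Tmin) * f(VPD) * FPAR * PAR.
            eps_max in g C per MJ of APAR; thresholds here are illustrative."""
            t_scalar = ramp(tmin_c, -8.0, 10.0)             # cold nights suppress uptake
            vpd_scalar = 1.0 - ramp(vpd_pa, 650.0, 4000.0)  # dry air closes stomata
            return eps_max * t_scalar * vpd_scalar * fpar * par

        gpp = daily_gpp(par=10.0, fpar=0.7, tmin_c=5.0, vpd_pa=1200.0)
        npp = gpp - 0.45 * gpp          # crude constant autotrophic respiration fraction
        print(f"GPP={gpp:.2f}, NPP={npp:.2f} g C m-2 d-1")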

  11. Reliable and efficient solution of genome-scale models of Metabolism and macromolecular Expression

    DOE PAGES

    Ma, Ding; Yang, Laurence; Fleming, Ronan M. T.; ...

    2017-01-18

    Currently, Constraint-Based Reconstruction and Analysis (COBRA) is the only methodology that permits integrated modeling of Metabolism and macromolecular Expression (ME) at genome-scale. Linear optimization computes steady-state flux solutions to ME models, but flux values are spread over many orders of magnitude. Data values also have greatly varying magnitudes. Furthermore, standard double-precision solvers may return inaccurate solutions or report that no solution exists. Exact simplex solvers based on rational arithmetic require a near-optimal warm start to be practical on large problems (current ME models have 70,000 constraints and variables and will grow larger). We developed a quadruple-precision version of our linear and nonlinear optimizer MINOS, and a solution procedure (DQQ) involving Double and Quad MINOS that achieves reliability and efficiency for ME models and other challenging problems tested here. DQQ will enable extensive use of large linear and nonlinear models in systems biology and other applications involving multiscale data.

  12. Efficient design of nanoplasmonic waveguide devices using the space mapping algorithm.

    PubMed

    Dastmalchi, Pouya; Veronis, Georgios

    2013-12-30

    We show that the space mapping algorithm, originally developed for microwave circuit optimization, can enable the efficient design of nanoplasmonic waveguide devices which satisfy a set of desired specifications. Space mapping utilizes a physics-based coarse model to approximate a fine model accurately describing a device. Here the fine model is a full-wave finite-difference frequency-domain (FDFD) simulation of the device, while the coarse model is based on transmission line theory. We demonstrate that simply optimizing the transmission line model of the device is not enough to obtain a device which satisfies all the required design specifications. On the other hand, when the iterative space mapping algorithm is used, it converges fast to a design which meets all the specifications. In addition, full-wave FDFD simulations of only a few candidate structures are required before the iterative process is terminated. Use of the space mapping algorithm therefore results in large reductions in the required computation time when compared to any direct optimization method of the fine FDFD model.
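
    A one-dimensional toy version of the idea, assuming invented coarse and fine response functions: each iteration evaluates the expensive fine model only a few times, builds a first-order output-corrected surrogate around the cheap coarse model, and minimizes that surrogate. This is a simplified output-correction flavour of space mapping, not the exact algorithm of the paper.

        from scipy.optimize import minimize_scalar

        def fine(x):        # stands in for the full-wave FDFD model (expensive)
            return (x - 2.7) ** 2 + 0.05

        def coarse(x):      # stands in for the transmission-line model (cheap, biased)
            return (x - 2.0) ** 2

        x = minimize_scalar(coarse, bounds=(0, 5), method="bounded").x
        h = 1e-4
        for it in range(5):
            # Align the coarse model with the fine model at x (value and slope).
            df = (fine(x + h) - fine(x - h)) / (2 * h)
            dc = (coarse(x + h) - coarse(x - h)) / (2 * h)
            a, b = fine(x) - coarse(x), df - dc
            x0 = x
            surrogate = lambda z, x0=x0, a=a, b=b: coarse(z) + a + b * (z - x0)
            x = minimize_scalar(surrogate, bounds=(0, 5), method="bounded").x
            print(f"iter {it}: x={x:.4f}, fine(x)={fine(x):.5f}")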

  13. Finite state projection based bounds to compare chemical master equation models using single-cell data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, Zachary; Neuert, Gregor

    2016-08-21

    Emerging techniques now allow for precise quantification of distributions of biological molecules in single cells. These rapidly advancing experimental methods have created a need for more rigorous and efficient modeling tools. Here, we derive new bounds on the likelihood that observations of single-cell, single-molecule responses come from a discrete stochastic model, posed in the form of the chemical master equation. These strict upper and lower bounds are based on a finite state projection approach, and they converge monotonically to the exact likelihood value. These bounds allow one to discriminate rigorously between models with a minimum level of computational effort. In practice, these bounds can be incorporated into stochastic model identification and parameter inference routines, which improve the accuracy and efficiency of endeavors to analyze and predict single-cell behavior. We demonstrate the applicability of our approach using simulated data for three example models as well as for experimental measurements of a time-varying stochastic transcriptional response in yeast.

  14. Modeling and analysis on ring-type piezoelectric transformers.

    PubMed

    Ho, Shine-Tzong

    2007-11-01

    This paper presents an electromechanical model for a ring-type piezoelectric transformer (PT). To establish this model, the vibration characteristics of a piezoelectric ring with free boundary conditions are analyzed in advance. Based on the vibration analysis of the piezoelectric ring, the operating frequency and vibration mode of the PT are chosen. Then, electromechanical equations of motion for the PT are derived based on Hamilton's principle, which can be used to simulate the coupled electromechanical system of the transformer. Quantities such as voltage step-up ratio, input impedance, output impedance, input power, output power, and efficiency are calculated from these equations. The optimal load resistance and the maximum efficiency of the PT are presented in this paper. Experiments were also conducted to verify the theoretical analysis, and good agreement was obtained.

  15. A method for development of efficient 3D models for neutronic calculations of ASTRA critical facility using experimental information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balanin, A. L.; Boyarinov, V. F.; Glushkov, E. S.

    The application of experimental information on measured axial distributions of fission reaction rates to the development of 3D numerical models of the ASTRA critical facility, taking into account the azimuthal asymmetry of the assembly simulating an HTGR with an annular core, is substantiated. Owing to the presence of the bottom reflector and the absence of the top reflector, the application of 2D models based on experimentally determined buckling is impossible for calculation of critical assemblies of the ASTRA facility; therefore, an alternative approach based on the application of the extrapolated assembly height is proposed. This approach is exemplified by the numerical analysis of experiments on measurement of the efficiency of mockups of control and protection system (CPS) rods.

  16. Estimating Skin Cancer Risk: Evaluating Mobile Computer-Adaptive Testing.

    PubMed

    Djaja, Ngadiman; Janda, Monika; Olsen, Catherine M; Whiteman, David C; Chien, Tsair-Wei

    2016-01-22

    Response burden is a major detriment to questionnaire completion rates. Computer adaptive testing may offer advantages over non-adaptive testing, including reduction of numbers of items required for precise measurement. Our aim was to compare the efficiency of non-adaptive (NAT) and computer adaptive testing (CAT) facilitated by Partial Credit Model (PCM)-derived calibration to estimate skin cancer risk. We used a random sample from a population-based Australian cohort study of skin cancer risk (N=43,794). All 30 items of the skin cancer risk scale were calibrated with the Rasch PCM. A total of 1000 cases generated following a normal distribution (mean [SD] 0 [1]) were simulated using three Rasch models with three fixed-item (dichotomous, rating scale, and partial credit) scenarios, respectively. We calculated the comparative efficiency and precision of CAT and NAT (shortening of questionnaire length and the count difference number ratio less than 5% using independent t tests). We found that use of CAT led to smaller person standard error of the estimated measure than NAT, with substantially higher efficiency but no loss of precision, reducing response burden by 48%, 66%, and 66% for dichotomous, Rating Scale Model, and PCM models, respectively. CAT-based administrations of the skin cancer risk scale could substantially reduce participant burden without compromising measurement precision. A mobile computer adaptive test was developed to help people efficiently assess their skin cancer risk.
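
    A minimal CAT loop, using a dichotomous Rasch model rather than the partial credit model of the study: each step administers the unanswered item with maximum Fisher information at the provisional ability estimate, then re-estimates ability by Newton-Raphson. The item bank and true ability are synthetic.

        import math, random

        random.seed(7)
        bank = [b / 4.0 for b in range(-10, 11)]     # item difficulties, -2.5 .. 2.5
        true_theta, theta = 0.8, 0.0
        answered = []                                # (difficulty, response) pairs

        def p(theta, b):                             # Rasch success probability
            return 1.0 / (1.0 + math.exp(-(theta - b)))

        for step in range(8):
            # Adaptive selection: unanswered item with maximum Fisher information.
            used = {b for b, _ in answered}
            item = max((b for b in bank if b not in used),
                       key=lambda b: p(theta, b) * (1 - p(theta, b)))
            resp = 1 if random.random() < p(true_theta, item) else 0
            answered.append((item, resp))
            for _ in range(20):                      # Newton-Raphson MLE of ability
                grad = sum(r - p(theta, b) for b, r in answered)
                hess = -sum(p(theta, b) * (1 - p(theta, b)) for b, r in answered)
                theta = max(-4.0, min(4.0, theta - grad / hess))
            se = (-1.0 / hess) ** 0.5                # provisional standard error
            print(f"item b={item:+.2f} resp={resp} theta={theta:+.3f} se={se:.3f}")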

  17. Improved community model for social networks based on social mobility

    NASA Astrophysics Data System (ADS)

    Lu, Zhe-Ming; Wu, Zhen; Luo, Hao; Wang, Hao-Xian

    2015-07-01

    This paper proposes an improved community model for social networks based on social mobility. The relationship between the group distribution and the community size is investigated in terms of communication rate and turnover rate. The degree distributions, clustering coefficients, average distances and diameters of networks are analyzed. Experimental results demonstrate that the proposed model possesses the small-world property and can reproduce social networks effectively and efficiently.

  18. Popularity Modeling for Mobile Apps: A Sequential Approach.

    PubMed

    Zhu, Hengshu; Liu, Chuanren; Ge, Yong; Xiong, Hui; Chen, Enhong

    2015-07-01

    The popularity information in App stores, such as chart rankings, user ratings, and user reviews, provides an unprecedented opportunity to understand user experiences with mobile Apps, learn the process of adoption of mobile Apps, and thus enables better mobile App services. While the importance of popularity information is well recognized in the literature, the use of the popularity information for mobile App services is still fragmented and under-explored. To this end, in this paper, we propose a sequential approach based on hidden Markov model (HMM) for modeling the popularity information of mobile Apps toward mobile App services. Specifically, we first propose a popularity based HMM (PHMM) to model the sequences of the heterogeneous popularity observations of mobile Apps. Then, we introduce a bipartite based method to precluster the popularity observations. This can help to learn the parameters and initial values of the PHMM efficiently. Furthermore, we demonstrate that the PHMM is a general model and can be applicable for various mobile App services, such as trend based App recommendation, rating and review spam detection, and ranking fraud detection. Finally, we validate our approach on two real-world data sets collected from the Apple Appstore. Experimental results clearly validate both the effectiveness and efficiency of the proposed popularity modeling approach.
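
    As a stand-in for the paper's PHMM, a standard Gaussian hidden Markov model from the hmmlearn package can be fitted to popularity sequences; the two synthetic "Apps" below drift between rising and fading regimes, and Viterbi decoding recovers the regime labels.

        import numpy as np
        from hmmlearn import hmm

        rng = np.random.default_rng(3)
        # Two synthetic Apps: weekly popularity observations (e.g., log downloads).
        seq1 = np.concatenate([rng.normal(1.0, 0.3, 30), rng.normal(4.0, 0.3, 30)])
        seq2 = np.concatenate([rng.normal(4.0, 0.3, 20), rng.normal(1.0, 0.3, 40)])
        X = np.concatenate([seq1, seq2]).reshape(-1, 1)
        lengths = [len(seq1), len(seq2)]             # sequence boundaries for fitting

        model = hmm.GaussianHMM(n_components=2, covariance_type="diag",
                                n_iter=100, random_state=0)
        model.fit(X, lengths)                        # Baum-Welch over both sequences
        states = model.predict(X, lengths)           # Viterbi decoding of regimes
        print(model.means_.ravel(), states[:10])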

  19. Adaptive process control using fuzzy logic and genetic algorithms

    NASA Technical Reports Server (NTRS)

    Karr, C. L.

    1993-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.

  1. Adaptive Process Control with Fuzzy Logic and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Karr, C. L.

    1993-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision-making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, an analysis element to recognize changes in the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.
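
    A compact sketch of the GA-augments-FLC idea on a toy first-order process: the fuzzy controller has triangular negative/zero/positive memberships on the error with singleton outputs, and the GA tunes its gain and membership width to minimize integrated tracking error. All constants are invented; this is not the Bureau of Mines system.

        import random

        random.seed(1)

        def fuzzy_u(e, gain, width):
            """Sugeno-style FLC: triangular negative/zero/positive memberships
            on the error, singleton outputs at -gain, 0, +gain."""
            pos = min(1.0, max(0.0, e / width))
            neg = min(1.0, max(0.0, -e / width))
            return gain * (pos - neg)           # the 'zero' set contributes nothing

        def cost(params):
            """Integrated absolute tracking error on a toy first-order process."""
            gain, width = params
            x, err = 0.0, 0.0
            for _ in range(200):                # dt = 0.05, setpoint = 1.0
                e = 1.0 - x
                x += 0.05 * (-x + fuzzy_u(e, gain, width))
                err += abs(e) * 0.05
            return err

        pop = [(random.uniform(0.5, 5.0), random.uniform(0.05, 2.0))
               for _ in range(20)]
        for _ in range(40):                     # GA: elitism, blend crossover, mutation
            pop.sort(key=cost)
            parents = pop[:6]
            children = []
            while len(children) < 14:
                (g1, w1), (g2, w2) = random.sample(parents, 2)
                t = random.random()
                g = t * g1 + (1 - t) * g2 + random.gauss(0, 0.1)
                w = t * w1 + (1 - t) * w2 + random.gauss(0, 0.05)
                children.append((max(0.1, g), max(0.02, w)))
            pop = parents + children
        best = min(pop, key=cost)
        print("tuned (gain, width):", best, "cost:", round(cost(best), 4))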

  2. Nursing research on a first aid model of double personnel for major burn patients.

    PubMed

    Wu, Weiwei; Shi, Kai; Jin, Zhenghua; Liu, Shuang; Cai, Duo; Zhao, Jingchun; Chi, Cheng; Yu, Jiaao

    2015-03-01

    This study explored the effect of a first aid model employing two nurses on the efficient rescue operation time and the efficient resuscitation time for major burn patients. A two-nurse model of first aid was designed for major burn patients. The model includes a division of labor between the first aid nurses and the reorganization of emergency carts. The clinical effectiveness of the process was examined in a retrospective chart review of 156 major burn patients, experiencing shock and low blood volume, who were admitted to the intensive care unit of the department of burn surgery between November 2009 and June 2013. Of the 156 cases, the 87 patients who received first aid using the double personnel model were assigned to the test group and the 69 patients who received first aid using the standard first aid model were assigned to the control group. The efficient rescue operation time and the efficient resuscitation time were compared between the two groups. Student's t tests were used to compare the mean differences between the groups. Statistically significant differences between the two groups were found on both measures (P < 0.05), with the test group having lower times than the control group. The efficient rescue operation time was 14.90 ± 3.31 min in the test group and 30.42 ± 5.65 min in the control group. The efficient resuscitation time was 7.4 ± 3.2 h in the test group and 9.5 ± 2.7 h in the control group. A two-nurse first aid model based on scientifically validated procedures and a reasonable division of labor can shorten the efficient rescue operation time and the efficient resuscitation time for major burn patients. Given these findings, the model appears to be worthy of clinical application.

  3. Evaluate and Analysis Efficiency of Safaga Port Using DEA-CCR, BCC and SBM Models-Comparison with DP World Sokhna

    NASA Astrophysics Data System (ADS)

    Elsayed, Ayman; Shabaan Khalil, Nabil

    2017-10-01

    The competition among maritime ports is increasing continuously; the main purpose of Safaga port is to become the best option for companies to carry out their trading activities, particularly importing and exporting. The main objective of this research is to evaluate and analyze the factors that may significantly affect the efficiency of Safaga port in Egypt, particularly its infrastructural capacity. Assessing this efficiency must play an important role in the management of Safaga port in order to improve the prospects for development and success in commercial activities. Drawing on Data Envelopment Analysis (DEA) models, this paper develops a manner of assessing the comparative efficiency of Safaga port over the study period 2004-2013. Previous research on port efficiency measurement usually used radial DEA models (DEA-CCR, DEA-BCC) but not non-radial DEA models. This research applies radial output-oriented models (DEA-CCR, DEA-BCC) and the non-radial DEA-SBM model with ten inputs and four outputs. The results were obtained from the analysis of the input and output variables under the DEA-CCR, DEA-BCC and SBM models using the software MaxDEA Pro 6.3. DP World Sokhna showed higher efficiency than Safaga port for all outputs. DP World Sokhna's position below the southern entrance to the Suez Canal, on the Red Sea in Egypt, makes it strategically located to handle cargo transiting through one of the world's busiest commercial waterways.
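
    For reference, the output-oriented CCR envelopment problem used in such studies is a small linear program per decision-making unit (DMU): maximize phi subject to sum_j lambda_j x_ij <= x_io for each input and phi y_ro <= sum_j lambda_j y_rj for each output, with lambda >= 0. The sketch below solves it with scipy for a hypothetical four-port dataset; efficiency is 1/phi.

        import numpy as np
        from scipy.optimize import linprog

        # Hypothetical port data: inputs (berth length km, cranes), output (kTEU).
        X = np.array([[2.0, 10.0], [3.0, 12.0], [2.5, 8.0], [4.0, 20.0]])
        Y = np.array([[300.0], [520.0], [260.0], [900.0]])
        n, m, s = X.shape[0], X.shape[1], Y.shape[1]

        for o in range(n):
            # Variables: [phi, lambda_1 .. lambda_n]; maximize phi (minimize -phi).
            c = np.concatenate([[-1.0], np.zeros(n)])
            A_ub, b_ub = [], []
            for i in range(m):          # sum_j lambda_j x_ij <= x_io
                A_ub.append(np.concatenate([[0.0], X[:, i]]))
                b_ub.append(X[o, i])
            for r in range(s):          # phi * y_ro - sum_j lambda_j y_rj <= 0
                A_ub.append(np.concatenate([[Y[o, r]], -Y[:, r]]))
                b_ub.append(0.0)
            res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                          bounds=[(0, None)] * (n + 1), method="highs")
            phi = -res.fun
            print(f"DMU {o}: phi={phi:.3f}, efficiency={1.0 / phi:.3f}")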

  4. Span efficiency of wings with leading edge protuberances

    NASA Astrophysics Data System (ADS)

    Custodio, Derrick; Henoch, Charles; Johari, Hamid

    2013-11-01

    Past work has shown that sinusoidal leading edge protuberances resembling those found on humpback whale flippers alter the lift and drag coefficients of full- and finite-span foils and wings, depending on the angle of attack and leading edge geometry. Although the load characteristics of protuberance-modified finite-span wings have been reported for flipper-like geometries at higher Reynolds numbers and for rectangular planforms at lower Reynolds numbers, the effects of leading edge geometry on the span efficiency, which is indicative of the deviation of the spanwise lift distribution from elliptical and of viscous effects, have not been addressed for a range of planforms and Reynolds numbers. The lift and drag coefficients of 7 rectangular, 2 swept, and 2 flipper-like planform models with aspect ratios of 4.3, 4.0, and 8.86, respectively, were used to compute the span efficiency at Reynolds numbers ranging from 0.9 to 4.5 × 10⁵. The span efficiency of the modified wings, based on the data at lower angles of attack, was compared with that of the unmodified models. For the cases considered, the span efficiencies of the leading edge modified models were less than those of the equivalent unmodified models. The dependence of span efficiency on leading edge geometry, planform, and Reynolds number will be presented. Supported by the ONR-ULI program.

  5. Predicting variation in subject thermal response during transcranial magnetic resonance guided focused ultrasound surgery: Comparison in seventeen subject datasets.

    PubMed

    Vyas, Urvi; Ghanouni, Pejman; Halpern, Casey H; Elias, Jeff; Pauly, Kim Butts

    2016-09-01

    In transcranial magnetic resonance-guided focused ultrasound (tcMRgFUS) treatments, the acoustic and spatial heterogeneity of the skull cause reflection, absorption, and scattering of the acoustic beams. These effects depend on skull-specific parameters and can lead to patient-specific thermal responses to the same transducer power. In this work, the authors develop a simulation tool to help predict these different experimental responses using 3D heterogeneous tissue models based on the subject CT images. The authors then validate and compare the predicted skull efficiencies to an experimental metric based on the subject thermal responses during tcMRgFUS treatments in a dataset of seventeen human subjects. Seventeen human head CT scans were used to create tissue acoustic models, simulating the effects of reflection, absorption, and scattering of the acoustic beam as it propagates through a heterogeneous skull. The hybrid angular spectrum technique was used to model the acoustic beam propagation of the InSightec ExAblate 4000 head transducer for each subject, yielding maps of the specific absorption rate (SAR). The simulation assumed the transducer was geometrically focused to the thalamus of each subject, and the focal SAR at the target was used as a measure of the simulated skull efficiency. Experimental skull efficiency for each subject was calculated using the thermal temperature maps from the tcMRgFUS treatments. Axial temperature images (with no artifacts) were reconstructed with a single baseline, corrected using a referenceless algorithm. The experimental skull efficiency was calculated by dividing the reconstructed temperature rise 8.8 s after sonication by the applied acoustic power. The simulated skull efficiency using individual-specific heterogeneous models predicts the experimental energy efficiency well (R² = 0.84). This paper presents a simulation model that predicts the variation in thermal responses measured in clinical tcMRgFUS treatments while remaining computationally feasible.

  6. Estimation efficiency of usage satellite derived and modelled biophysical products for yield forecasting

    NASA Astrophysics Data System (ADS)

    Kolotii, Andrii; Kussul, Nataliia; Skakun, Sergii; Shelestov, Andrii; Ostapenko, Vadim; Oliinyk, Tamara

    2015-04-01

    Efficient and timely crop monitoring and yield forecasting are important tasks for ensuring stability and sustainable economic development [1]. As winter crops play a prominent role in the agriculture of Ukraine, the main focus of this study is on winter wheat. In our previous research [2, 3] it was shown that the use of biophysical parameters of crops such as FAPAR (derived from the Geoland-2 portal for SPOT Vegetation data) is far more efficient for crop yield forecasting than NDVI derived from MODIS data. In the current work, the efficiency of using biophysical parameters such as LAI, FAPAR and FCOVER (derived from SPOT Vegetation and PROBA-V data at a resolution of 1 km and simulated within the WOFOST model) and the NDVI product (derived from MODIS) for winter wheat monitoring and yield forecasting is estimated. The SPIRITS tool developed by JRC is used as part of the crop monitoring workflow (vegetation anomaly detection, vegetation index and product analysis) and for yield forecasting. Statistics extraction is done for land-cover maps created at SRI within the FP-7 SIGMA project. [1] N. Kussul, S. Skakun, A. Shelestov, O. Kussul, "Sensor Web approach to Flood Monitoring and Risk Assessment", in: IGARSS 2013, 21-26 July 2013, Melbourne, Australia, pp. 815-818. [2] F. Kogan, N. Kussul, T. Adamenko, S. Skakun, O. Kravchenko, O. Kryvobok, A. Shelestov, A. Kolotii, O. Kussul, and A. Lavrenyuk, "Winter wheat yield forecasting in Ukraine based on Earth observation, meteorological data and biophysical models," International Journal of Applied Earth Observation and Geoinformation, vol. 23, pp. 192-203, 2013. [3] Kussul O., Kussul N., Skakun S., Kravchenko O., Shelestov A., Kolotii A., "Assessment of relative efficiency of using MODIS data to winter wheat yield forecasting in Ukraine", in: IGARSS 2013, 21-26 July 2013, Melbourne, Australia, pp. 3235-3238.

  7. Finite volume model for two-dimensional shallow environmental flow

    USGS Publications Warehouse

    Simoes, F.J.M.

    2011-01-01

    This paper presents the development of a two-dimensional, depth-integrated, unsteady, free-surface model based on the shallow water equations. The development was motivated by the desire to balance computational efficiency and accuracy through the selective and conjunctive use of different numerical techniques. The base framework of the discrete model uses Godunov methods on unstructured triangular grids, but the solution technique emphasizes the use of a high-resolution Riemann solver where needed, switching to a simpler and computationally more efficient upwind finite volume technique in the smooth regions of the flow. Explicit time marching is accomplished with strong stability preserving Runge-Kutta methods, with additional acceleration techniques for steady-state computations. A simplified mass-preserving algorithm is used to deal with wet/dry fronts. The model is applied to several benchmark cases that show the interplay of the diverse solution techniques.
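
    A one-dimensional analogue of the Godunov-type update described above, assuming a standard dam-break test: Rusanov (local Lax-Friedrichs) interface fluxes, transmissive ghost cells, and a CFL-limited explicit step. The paper's solver is 2-D, unstructured, and switches Riemann solvers adaptively; none of that is reproduced here.

        import numpy as np

        g, N = 9.81, 200
        dx = 1.0 / N
        x = (np.arange(N) + 0.5) * dx
        h = np.where(x < 0.5, 2.0, 1.0)          # dam-break initial depth
        hu = np.zeros(N)                         # initial discharge

        def phys_flux(h, hu):
            u = hu / h
            return hu, hu * u + 0.5 * g * h * h

        t = 0.0
        while t < 0.05:
            H, HU = np.pad(h, 1, mode="edge"), np.pad(hu, 1, mode="edge")
            hL, hR, huL, huR = H[:-1], H[1:], HU[:-1], HU[1:]
            fL = phys_flux(hL, huL)
            fR = phys_flux(hR, huR)
            a = np.maximum(np.abs(huL / hL) + np.sqrt(g * hL),
                           np.abs(huR / hR) + np.sqrt(g * hR))   # wave speed bound
            F_h = 0.5 * (fL[0] + fR[0]) - 0.5 * a * (hR - hL)    # Rusanov flux
            F_hu = 0.5 * (fL[1] + fR[1]) - 0.5 * a * (huR - huL)
            dt = 0.4 * dx / a.max()                              # CFL condition
            h -= dt / dx * (F_h[1:] - F_h[:-1])
            hu -= dt / dx * (F_hu[1:] - F_hu[:-1])
            t += dt
        print("depth range after dam break:", h.min(), h.max())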

  10. Final Report: Utilizing Alternative Fuel Ignition Properties to Improve SI and CI Engine Efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wooldridge, Margaret; Boehman, Andre; Lavoie, George

    Experimental and modeling studies were completed to explore leveraging physical and chemical fuel properties for improved thermal efficiency of internal combustion engines. Fundamental studies of the ignition chemistry of ethanol and iso-octane blends and constant volume spray chamber studies of gasoline and diesel sprays supported the core research effort, which used several reciprocating engine platforms. Single cylinder spark ignition (SI) engine studies were carried out to characterize the impact of ethanol/gasoline, syngas (H2 and CO)/gasoline and other oxygenate/gasoline blends on engine performance. The results of the single-cylinder engine experiments and other data from the literature were used to train a GT Power model and to develop a knock criterion based on reaction chemistry. The models were used to interpret the experimental results and project future performance. Studies were also carried out using a state-of-the-art, direct injection (DI) turbocharged multi-cylinder engine with piezo-actuated fuel injectors to demonstrate the promising spray and spark timing strategies from single-cylinder engine studies on the multi-cylinder engine. Key outcomes and conclusions of the studies were: 1. Efficiency benefits of ethanol and gasoline fuel blends were consistent and substantial (e.g. 5-8% absolute improvement in gross indicated thermal efficiency (GITE)). 2. The best ethanol/gasoline blend (based on maximum thermal efficiency) was determined by the engine hardware and limits based on component protection (e.g. peak in-cylinder pressure or maximum turbocharger inlet temperature) – and not by knock limits. Blends with <50% ethanol delivered significant thermal efficiency gains with conventional SI hardware while maintaining the safety integrity of the engine hardware. 3. Other compositions of fuel blends including syngas (H2 and CO) and other dilution strategies provided significant efficiency gains as well (e.g. 5% absolute improvement in ITE). 4. When the combination of engine and fuel system is not knock limited, multiple fuel injection events maintain thermal efficiency while improving engine-out emissions (e.g. CO, UHC, and particulate number).

  11. Efficient method of image edge detection based on FSVM

    NASA Astrophysics Data System (ADS)

    Cai, Aiping; Xiong, Xiaomei

    2013-07-01

    For efficient object edge detection in digital images, this paper studies traditional methods and an algorithm based on SVM. Analysis shows that the Canny edge detection algorithm produces some pseudo-edges and has poor anti-noise capability. To provide a reliable edge extraction method, a new detection algorithm based on FSVM is proposed. It contains several steps: first, classification samples are trained and different membership functions are assigned to different samples. Then, a new training sample set is formed by increasing the penalty on misclassified sub-samples, and the new FSVM classification model is used for training and testing. Finally, the edges of the object image are extracted using the model. Experimental results show that good edge detection images are obtained, and added-noise experiments show that the method has good anti-noise capability.

  12. Sequence determinants of improved CRISPR sgRNA design.

    PubMed

    Xu, Han; Xiao, Tengfei; Chen, Chen-Hao; Li, Wei; Meyer, Clifford A; Wu, Qiu; Wu, Di; Cong, Le; Zhang, Feng; Liu, Jun S; Brown, Myles; Liu, X Shirley

    2015-08-01

    The CRISPR/Cas9 system has revolutionized mammalian somatic cell genetics. Genome-wide functional screens using CRISPR/Cas9-mediated knockout or dCas9 fusion-mediated inhibition/activation (CRISPRi/a) are powerful techniques for discovering phenotype-associated gene function. We systematically assessed the DNA sequence features that contribute to single guide RNA (sgRNA) efficiency in CRISPR-based screens. Leveraging the information from multiple designs, we derived a new sequence model for predicting sgRNA efficiency in CRISPR/Cas9 knockout experiments. Our model confirmed known features and suggested new features including a preference for cytosine at the cleavage site. The model was experimentally validated for sgRNA-mediated mutation rate and protein knockout efficiency. Tested on independent data sets, the model achieved significant results in both positive and negative selection conditions and outperformed existing models. We also found that the sequence preference for CRISPRi/a is substantially different from that for CRISPR/Cas9 knockout and propose a new model for predicting sgRNA efficiency in CRISPRi/a experiments. These results facilitate the genome-wide design of improved sgRNA for both knockout and CRISPRi/a studies. © 2015 Xu et al.; Published by Cold Spring Harbor Laboratory Press.
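
    To make the flavor of such sequence models concrete, here is a toy sketch (not the authors' published model): position-specific one-hot nucleotide features feed a logistic classifier, and the toy label rule deliberately encodes the cytosine-at-cleavage-site preference the abstract mentions. All data and names here are fabricated for illustration.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      BASES = "ACGT"

      def one_hot(seq):
          # Position-specific indicator features, one slot per (position, base).
          x = np.zeros(len(seq) * 4)
          for i, b in enumerate(seq):
              x[i * 4 + BASES.index(b)] = 1.0
          return x

      # Toy training data: 20-nt guides labeled efficient (1) / inefficient (0).
      rng = np.random.default_rng(0)
      seqs = ["".join(rng.choice(list(BASES), 20)) for _ in range(200)]
      # Hypothetical label rule for the toy data: cytosine near the cleavage
      # site (position 17) is favorable, echoing the feature the study reports.
      y = np.array([1 if s[16] == "C" else 0 for s in seqs])

      X = np.array([one_hot(s) for s in seqs])
      model = LogisticRegression(C=0.5, max_iter=1000).fit(X, y)
      print("training accuracy:", model.score(X, y))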

  13. Modeling of a resonant heat engine

    NASA Astrophysics Data System (ADS)

    Preetham, B. S.; Anderson, M.; Richards, C.

    2012-12-01

    A resonant heat engine in which the piston assembly is replaced by a sealed elastic cavity is modeled and analyzed. A nondimensional lumped-parameter model is derived and used to investigate the factors that control the performance of the engine. The thermal efficiency predicted by the model agrees with that predicted from the relation for the Otto cycle based on compression ratio. The predictions show that for a fixed mechanical load, increasing the heat input results in increased efficiency. The output power and power density are shown to depend on the loading for a given heat input. The loading condition for maximum output power is different from that required for maximum power density.
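
    For reference, the Otto-cycle relation the authors compare against ties thermal efficiency to the compression ratio r as eta = 1 - r**(1 - gamma). A quick check under the usual air-standard assumption (gamma = 1.4; the ratios below are illustrative, not the engine's):

      gamma = 1.4  # air-standard specific-heat ratio

      def otto_efficiency(r, gamma=gamma):
          # Ideal Otto-cycle thermal efficiency from the compression ratio.
          return 1.0 - r ** (1.0 - gamma)

      for r in (4, 6, 8, 10):
          print(f"r = {r:2d}  ->  eta = {otto_efficiency(r):.3f}")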

  14. Real-time simulation of large-scale floods

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    Given the complexity of real-time water conditions, real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional, shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.

  15. The efficiency and budgeting of public hospitals: case study of iran.

    PubMed

    Yusefzadeh, Hasan; Ghaderi, Hossein; Bagherzade, Rafat; Barouni, Mohsen

    2013-05-01

    Hospitals are the most costly and important components of any health care system, so it is important to know their economic values, pay attention to their efficiency and consider factors affecting them. The aim of this study was to assess the technical, scale and economic efficiency of hospitals in the West Azerbaijan province of Iran, for which Data Envelopment Analysis (DEA) was used to propose a model for operational budgeting. This was a descriptive-analytical study conducted in 2009 with three inputs and two outputs. DEAP 2.1 software was used for data analysis. Slack and radial movements and surplus of inputs were calculated for the selected hospitals. Finally, a model was proposed for performance-based budgeting of hospitals and health sectors using the DEA technique. The average scores of technical efficiency, pure technical efficiency (managerial efficiency) and scale efficiency of hospitals were 0.584, 0.782 and 0.771, respectively. In other words, the capacity for efficiency improvement in hospitals, without any increase in costs and with the same amount of inputs, was about 41.5%. Only four hospitals had the maximum level of technical efficiency. Moreover, surplus production factors were evident in these hospitals. Reduction of surplus production factors through comprehensive planning based on the results of the Data Envelopment Analysis can play a major role in cost reduction for hospitals and health sectors. In hospitals with a technical efficiency score of less than one, the original and projected values of inputs were different, resulting in a surplus. Hence, these hospitals should reduce their values of inputs to achieve maximum efficiency and optimal performance. The results of this method can serve as a benchmark for hospitals in making decisions about resource allocation, linking budgets to performance results, and controlling and improving hospital performance.
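
    As a sketch of the kind of computation behind such scores (not the study's own code or data), the input-oriented CCR envelopment model can be solved as one linear program per hospital; the inputs, outputs, and the helper name ccr_efficiency below are invented for illustration.

      import numpy as np
      from scipy.optimize import linprog

      # rows = DMUs (hospitals); columns = inputs (beds, staff, budget)
      X = np.array([[120, 300, 5.0], [80, 210, 3.2], [150, 390, 6.1], [60, 150, 2.0]])
      # columns = outputs (inpatient days, outpatient visits)
      Y = np.array([[30000, 90000], [26000, 85000], [28000, 70000], [15000, 52000]])

      def ccr_efficiency(k):
          n, m = X.shape
          s = Y.shape[1]
          # Decision variables: [theta, lambda_1 .. lambda_n]; minimize theta.
          c = np.r_[1.0, np.zeros(n)]
          A_ub, b_ub = [], []
          for i in range(m):          # sum_j lam_j * x_ij <= theta * x_ik
              A_ub.append(np.r_[-X[k, i], X[:, i]]); b_ub.append(0.0)
          for r in range(s):          # sum_j lam_j * y_rj >= y_rk
              A_ub.append(np.r_[0.0, -Y[:, r]]); b_ub.append(-Y[k, r])
          res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                        bounds=[(None, None)] + [(0, None)] * n)
          return res.fun  # technical efficiency of DMU k

      for k in range(X.shape[0]):
          print(f"hospital {k}: efficiency = {ccr_efficiency(k):.3f}")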

  16. Tertiary structure-based analysis of microRNA–target interactions

    PubMed Central

    Gan, Hin Hark; Gunsalus, Kristin C.

    2013-01-01

    Current computational analysis of microRNA interactions is based largely on primary and secondary structure analysis. Computationally efficient tertiary structure-based methods are needed to enable more realistic modeling of the molecular interactions underlying miRNA-mediated translational repression. We incorporate algorithms for predicting duplex RNA structures, ionic strength effects, duplex entropy and free energy, and docking of duplex–Argonaute protein complexes into a pipeline to model and predict miRNA–target duplex binding energies. To ensure modeling accuracy and computational efficiency, we use an all-atom description of RNA and a continuum description of ionic interactions using the Poisson–Boltzmann equation. Our method predicts the conformations of two constructs of Caenorhabditis elegans let-7 miRNA–target duplexes to an accuracy of ∼3.8 Å root mean square distance of their NMR structures. We also show that the computed duplex formation enthalpies, entropies, and free energies for eight miRNA–target duplexes agree with titration calorimetry data. Analysis of duplex–Argonaute docking shows that structural distortions arising from single-base-pair mismatches in the seed region influence the activity of the complex by destabilizing both duplex hybridization and its association with Argonaute. Collectively, these results demonstrate that tertiary structure-based modeling of miRNA interactions can reveal structural mechanisms not accessible with current secondary structure-based methods. PMID:23417009

  17. Global Aerodynamic Modeling for Stall/Upset Recovery Training Using Efficient Piloted Flight Test Techniques

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Cunningham, Kevin; Hill, Melissa A.

    2013-01-01

    Flight test and modeling techniques were developed for efficiently identifying global aerodynamic models that can be used to accurately simulate stall, upset, and recovery on large transport airplanes. The techniques were developed and validated in a high-fidelity fixed-base flight simulator using a wind-tunnel aerodynamic database, realistic sensor characteristics, and a realistic flight deck representative of a large transport aircraft. Results demonstrated that aerodynamic models for stall, upset, and recovery can be identified rapidly and accurately using relatively simple piloted flight test maneuvers. Stall maneuver predictions and comparisons of identified aerodynamic models with data from the underlying simulation aerodynamic database were used to validate the techniques.

  18. Market Model for Resource Allocation in Emerging Sensor Networks with Reinforcement Learning

    PubMed Central

    Zhang, Yue; Song, Bin; Zhang, Ying; Du, Xiaojiang; Guizani, Mohsen

    2016-01-01

    Emerging sensor networks (ESNs) are an inevitable trend with the development of the Internet of Things (IoT), and are intended to connect almost every intelligent device. It is therefore critical to study resource allocation in such an environment, due to efficiency concerns, especially when resources are limited. By viewing ESNs as multi-agent environments, we model them with an agent-based modelling (ABM) method and deal with resource allocation problems with market models, after describing users' patterns. Reinforcement learning methods are introduced to estimate users' patterns and verify the outcomes in our market models. Experimental results show the efficiency of our methods, which are also capable of guiding topology management. PMID:27916841
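
    A toy sketch of the reinforcement-learning ingredient (not the paper's market model): a tabular Q-learner mapping invented demand levels to allocation choices, with a made-up reward that pays best when the allocation tracks demand.

      import numpy as np

      n_states, n_actions = 5, 3       # demand levels x allocation choices
      Q = np.zeros((n_states, n_actions))
      alpha, gamma_, eps = 0.1, 0.9, 0.1
      rng = np.random.default_rng(1)

      def reward(state, action):
          # Utility minus cost: allocating near the true demand level pays best.
          demand = state / (n_states - 1)
          alloc = action / (n_actions - 1)
          return -abs(demand - alloc)

      state = rng.integers(n_states)
      for _ in range(5000):
          # Epsilon-greedy exploration over allocation actions.
          action = rng.integers(n_actions) if rng.random() < eps else int(Q[state].argmax())
          r = reward(state, action)
          next_state = rng.integers(n_states)   # demand evolves exogenously
          Q[state, action] += alpha * (r + gamma_ * Q[next_state].max() - Q[state, action])
          state = next_state

      print("greedy allocation per demand level:", Q.argmax(axis=1))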

  19. Developing a reversible rapid coordinate transformation model for the cylindrical projection

    NASA Astrophysics Data System (ADS)

    Ye, Si-jing; Yan, Tai-lai; Yue, Yan-li; Lin, Wei-yan; Li, Lin; Yao, Xiao-chuang; Mu, Qin-yun; Li, Yong-qin; Zhu, De-hai

    2016-04-01

    Numerical models are widely used for coordinate transformations. However, in most numerical models, polynomials are generated to approximate "true" geographic coordinates or plane coordinates, and it is hard to make one polynomial simultaneously appropriate for both forward and inverse transformations. As there is a transformation rule between geographic coordinates and plane coordinates, how accurate and efficient is the coordinate transformation if we construct polynomials to approximate the transformation rule instead of the "true" coordinates? Moreover, how do models using such polynomials compare with traditional numerical models of even higher polynomial degree? Focusing on cylindrical projection, this paper reports on a grid-based rapid numerical transformation model - a linear rule approximation model (LRA-model) that constructs linear polynomials to approximate the transformation rule and uses a graticule to alleviate error propagation. Our experiments on cylindrical projection transformation between the WGS 84 Geographic Coordinate System (EPSG 4326) and the WGS 84 UTM ZONE 50N Plane Coordinate System (EPSG 32650) with simulated data demonstrate that the LRA-model exhibits high efficiency, high accuracy, and high stability; is simple and easy to use for both forward and inverse transformations; and can be applied to the transformation of large amounts of data with a requirement of high calculation efficiency. Furthermore, the LRA-model exhibits advantages in terms of calculation efficiency, accuracy and stability for coordinate transformations, compared to the widely used hyperbolic transformation model.
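
    Under simplifying assumptions (a spherical Mercator northing instead of the paper's exact projections, and an invented 1-degree graticule), the following sketch conveys the LRA idea: approximate the transformation rule by one linear piece per graticule cell, which makes the forward and inverse transformations equally cheap and mutually consistent.

      import numpy as np

      R = 6378137.0  # WGS 84 semi-major axis, spherical approximation

      def y_rule(phi):  # exact (spherical) Mercator northing
          return R * np.log(np.tan(np.pi / 4 + phi / 2))

      # Build the graticule: linear pieces on 1-degree latitude bands.
      edges = np.radians(np.arange(-80.0, 81.0, 1.0))
      y_edges = y_rule(edges)

      def lra_forward(phi):
          i = np.clip(np.searchsorted(edges, phi) - 1, 0, len(edges) - 2)
          t = (phi - edges[i]) / (edges[i + 1] - edges[i])
          return (1 - t) * y_edges[i] + t * y_edges[i + 1]

      def lra_inverse(y):  # same table read backwards: linear, hence reversible
          i = np.clip(np.searchsorted(y_edges, y) - 1, 0, len(y_edges) - 2)
          t = (y - y_edges[i]) / (y_edges[i + 1] - y_edges[i])
          return (1 - t) * edges[i] + t * edges[i + 1]

      phi = np.radians(37.4217)
      print("forward error (m):", abs(lra_forward(phi) - y_rule(phi)))
      print("round trip (deg): ", np.degrees(lra_inverse(lra_forward(phi))))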

  20. Improved Horvitz-Thompson Estimation of Model Parameters from Two-phase Stratified Samples: Applications in Epidemiology

    PubMed Central

    Breslow, Norman E.; Lumley, Thomas; Ballantyne, Christie M; Chambless, Lloyd E.; Kulich, Michal

    2009-01-01

    The case-cohort study involves two-phase sampling: simple random sampling from an infinite super-population at phase one and stratified random sampling from a finite cohort at phase two. Standard analyses of case-cohort data involve solution of inverse probability weighted (IPW) estimating equations, with weights determined by the known phase two sampling fractions. The variance of parameter estimates in (semi)parametric models, including the Cox model, is the sum of two terms: (i) the model based variance of the usual estimates that would be calculated if full data were available for the entire cohort; and (ii) the design based variance from IPW estimation of the unknown cohort total of the efficient influence function (IF) contributions. This second variance component may be reduced by adjusting the sampling weights, either by calibration to known cohort totals of auxiliary variables correlated with the IF contributions or by their estimation using these same auxiliary variables. Both adjustment methods are implemented in the R survey package. We derive the limit laws of coefficients estimated using adjusted weights. The asymptotic results suggest practical methods for construction of auxiliary variables that are evaluated by simulation of case-cohort samples from the National Wilms Tumor Study and by log-linear modeling of case-cohort data from the Atherosclerosis Risk in Communities Study. Although not semiparametric efficient, estimators based on adjusted weights may come close to achieving full efficiency within the class of augmented IPW estimators. PMID:20174455
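
    The weight-adjustment idea can be sketched compactly on simulated data (nothing here reproduces the paper's studies): a Horvitz-Thompson total from a stratified phase-two sample, then a calibration (GREG-type) correction using a known cohort total of an auxiliary variable correlated with the target.

      import numpy as np

      rng = np.random.default_rng(2)
      N = 10000                              # finite cohort (phase one)
      x = rng.normal(50, 10, N)              # auxiliary variable, known for all
      y = 2.0 * x + rng.normal(0, 5, N)      # target, observed only at phase two

      # Stratified phase-two sampling with known inclusion probabilities.
      pi = np.where(x > 60, 0.5, 0.1)        # oversample the high-x stratum
      sampled = rng.random(N) < pi
      w = 1.0 / pi[sampled]                  # design (IPW) weights

      ht = np.sum(w * y[sampled])            # plain Horvitz-Thompson total

      # Calibration (GREG): shift the HT total by the regression-projected
      # gap between the known and the estimated auxiliary total.
      B = np.sum(w * x[sampled] * y[sampled]) / np.sum(w * x[sampled] ** 2)
      greg = ht + (x.sum() - np.sum(w * x[sampled])) * B

      print(f"true total {y.sum():.0f}  HT {ht:.0f}  calibrated {greg:.0f}")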

  1. Whole body counter calibration using Monte Carlo modeling with an array of phantom sizes based on national anthropometric reference data

    NASA Astrophysics Data System (ADS)

    Shypailo, R. J.; Ellis, K. J.

    2011-05-01

    During construction of the whole body counter (WBC) at the Children's Nutrition Research Center (CNRC), efficiency calibration was needed to translate acquired counts of 40K to actual grams of potassium for measurement of total body potassium (TBK) in a diverse subject population. The MCNP Monte Carlo n-particle simulation program was used to describe the WBC (54 detectors plus shielding), test individual detector counting response, and create a series of virtual anthropomorphic phantoms based on national reference anthropometric data. Each phantom included an outer layer of adipose tissue and an inner core of lean tissue. Phantoms were designed for both genders representing ages 3.5 to 18.5 years with body sizes from the 5th to the 95th percentile based on body weight. In addition, a spherical surface source surrounding the WBC was modeled in order to measure the effects of subject mass on room background interference. Individual detector measurements showed good agreement with the MCNP model. The background source model came close to agreement with empirical measurements, but showed a trend deviating from unity with increasing subject size. Results from the MCNP simulation of the CNRC WBC agreed well with empirical measurements using BOMAB phantoms. Individual detector efficiency corrections were used to improve the accuracy of the model. Nonlinear multiple regression efficiency calibration equations were derived for each gender. Room background correction is critical in improving the accuracy of the WBC calibration.

  2. Parametric modeling and stagger angle optimization of an axial flow fan

    NASA Astrophysics Data System (ADS)

    Li, M. X.; Zhang, C. H.; Liu, Y.; Zheng, S. Y.

    2013-12-01

    Axial flow fans are widely used in every field of social production. Improving their efficiency is a sustained and urgent demand of domestic industry. The optimization of stagger angle is an important method to improve fan performance. Parametric modeling and calculation process automation are realized in this paper to improve optimization efficiency. Geometric modeling and mesh division are parameterized based on GAMBIT. Parameter setting and flow field calculation are completed in the batch mode of FLUENT. A control program is developed in Visual C++ to manage the data exchange between the software tools. It also extracts calculation results for the optimization algorithm module (provided by Matlab), which generates directive optimization control parameters that are fed back to the modeling module. The center line of the blade airfoil, based on the CLARK Y profile, is constructed by the non-constant circulation and triangle discharge method. Stagger angles of six airfoil sections are optimized to reduce the influence of inlet shock loss as well as gas leakage in the blade tip clearance and hub resistance at the blade root. Finally an optimal solution is obtained, which meets the total pressure requirement under given conditions and improves total pressure efficiency by about 6%.

  3. TopicLens: Efficient Multi-Level Visual Topic Exploration of Large-Scale Document Collections.

    PubMed

    Kim, Minjeong; Kang, Kyeongpil; Park, Deokgun; Choo, Jaegul; Elmqvist, Niklas

    2017-01-01

    Topic modeling, which reveals underlying topics of a document corpus, has been actively adopted in visual analytics for large-scale document collections. However, due to its significant processing time and non-interactive nature, topic modeling has so far not been tightly integrated into a visual analytics workflow. Instead, most such systems are limited to utilizing a fixed, initial set of topics. Motivated by this gap in the literature, we propose a novel interaction technique called TopicLens that allows a user to dynamically explore data through a lens interface where topic modeling and the corresponding 2D embedding are efficiently computed on the fly. To support this interaction in real time while maintaining view consistency, we propose a novel efficient topic modeling method and a semi-supervised 2D embedding algorithm. Our work is based on improving state-of-the-art methods such as nonnegative matrix factorization and t-distributed stochastic neighbor embedding. Furthermore, we have built a web-based visual analytics system integrated with TopicLens. We use this system to measure the performance and the visualization quality of our proposed methods. We provide several scenarios showcasing the capability of TopicLens using real-world datasets.
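
    For orientation, the static building block that TopicLens recomputes on the fly looks roughly like the following scikit-learn sketch (tiny invented corpus; the paper's contribution is the interactive, lens-scoped variant of this computation together with the 2D embedding):

      from sklearn.decomposition import NMF
      from sklearn.feature_extraction.text import CountVectorizer

      docs = [
          "solar energy storage grid efficiency",
          "battery storage renewable grid",
          "protein folding molecular simulation",
          "molecular dynamics protein structure",
      ]
      vec = CountVectorizer()
      X = vec.fit_transform(docs)            # bag-of-words matrix
      vocab = vec.get_feature_names_out()

      # Nonnegative matrix factorization: documents x topics, topics x terms.
      nmf = NMF(n_components=2, init="nndsvda", random_state=0).fit(X)
      for k, topic in enumerate(nmf.components_):
          top = topic.argsort()[-3:][::-1]
          print(f"topic {k}:", [vocab[i] for i in top])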

  4. Analysis of financing efficiency of big data industry in Guizhou province based on DEA models

    NASA Astrophysics Data System (ADS)

    Li, Chenggang; Pan, Kang; Luo, Cong

    2018-03-01

    Taking 20 listed enterprises of the big data industry in Guizhou province as samples, this paper uses the DEA method to evaluate the financing efficiency of the big data industry in Guizhou province. The results show that the pure technical efficiency of big data enterprises in Guizhou province is high, with a mean value of 0.925. The mean value of scale efficiency is 0.749, and the mean value of comprehensive efficiency is 0.693; the comprehensive financing efficiency is therefore low. Based on these results, this paper puts forward policy recommendations to improve the financing efficiency of the big data industry in Guizhou.

  5. Efficient Learning of Continuous-Time Hidden Markov Models for Disease Progression

    PubMed Central

    Liu, Yu-Ying; Li, Shuang; Li, Fuxin; Song, Le; Rehg, James M.

    2016-01-01

    The Continuous-Time Hidden Markov Model (CT-HMM) is an attractive approach to modeling disease progression due to its ability to describe noisy observations arriving irregularly in time. However, the lack of an efficient parameter learning algorithm for CT-HMM restricts its use to very small models or requires unrealistic constraints on the state transitions. In this paper, we present the first complete characterization of efficient EM-based learning methods for CT-HMM models. We demonstrate that the learning problem consists of two challenges: the estimation of posterior state probabilities and the computation of end-state conditioned statistics. We solve the first challenge by reformulating the estimation problem in terms of an equivalent discrete time-inhomogeneous hidden Markov model. The second challenge is addressed by adapting three approaches from the continuous time Markov chain literature to the CT-HMM domain. We demonstrate the use of CT-HMMs with more than 100 states to visualize and predict disease progression using a glaucoma dataset and an Alzheimer’s disease dataset. PMID:27019571
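
    The reformulation step described above can be illustrated in a few lines (toy rate matrix, not a fitted disease model): a continuous-time generator Q yields a discrete transition matrix P(dt) = expm(Q*dt) for each observed inter-visit gap, after which standard discrete HMM machinery applies.

      import numpy as np
      from scipy.linalg import expm

      # Toy 3-state progression chain; rows sum to zero, last state absorbing.
      Q = np.array([[-0.20,  0.15, 0.05],
                    [ 0.00, -0.10, 0.10],
                    [ 0.00,  0.00, 0.00]])

      for dt in (0.5, 1.0, 5.0):             # irregular visit gaps (years)
          P = expm(Q * dt)                   # discrete-time transition matrix
          print(f"dt = {dt}:\n{P.round(3)}")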

  6. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP.

    PubMed

    Deng, Li; Wang, Guohua; Chen, Bo

    2015-01-01

    In view of the evaluation and decision-making problem of human-machine interface layout design for cabins, an operating comfort prediction model based on GEP (Gene Expression Programming) is proposed, using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to reduce the variable dimension; the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built with CATIA software and used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best fitting function between the joint angles and operating comfort; operating comfort can then be predicted quantitatively. The operating comfort prediction results for the human-machine interface layout of a driller control room show that the GEP-based prediction model is fast and efficient, has good predictive performance, and can improve design efficiency.

  7. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP

    PubMed Central

    Wang, Guohua; Chen, Bo

    2015-01-01

    In view of the evaluation and decision-making problem of human-machine interface layout design for cabins, an operating comfort prediction model based on GEP (Gene Expression Programming) is proposed, using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to reduce the variable dimension; the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built with CATIA software and used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best fitting function between the joint angles and operating comfort; operating comfort can then be predicted quantitatively. The operating comfort prediction results for the human-machine interface layout of a driller control room show that the GEP-based prediction model is fast and efficient, has good predictive performance, and can improve design efficiency. PMID:26448740

  8. Enhanced DEA model with undesirable output and interval data for rice growing farmers performance assessment

    NASA Astrophysics Data System (ADS)

    Khan, Sahubar Ali Mohd. Nadhar; Ramli, Razamin; Baten, M. D. Azizul

    2015-12-01

    Agricultural production processes typically produce two types of outputs: economically desirable outputs as well as environmentally undesirable outputs (such as greenhouse gas emission, nitrate leaching, effects on humans and organisms, and water pollution). In efficiency analysis, these undesirable outputs cannot be ignored and need to be included in order to obtain an accurate estimation of firms' efficiency. Additionally, climatic factors as well as data uncertainty can significantly affect the efficiency analysis. A number of approaches have been proposed in the DEA literature to account for undesirable outputs. Many researchers have pointed out that the directional distance function (DDF) approach is the best, as it allows for a simultaneous increase in desirable outputs and reduction of undesirable outputs. Additionally, the interval data approach has been found the most suitable to account for data uncertainty, as it is much simpler to model and needs less information regarding distributions and membership functions. In this paper, an enhanced DEA model based on the DDF approach that considers undesirable outputs as well as climatic factors and interval data is proposed. This model will be used to determine the efficiency of rice farmers who produce undesirable outputs and operate under uncertainty. It is hoped that the proposed model will provide a better estimate of rice farmers' efficiency.

  9. Energy efficiency in nonprofit agencies: Creating effective program models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, M.A.; Prindle, B.; Scherr, M.I.

    Nonprofit agencies are a critical component of the health and human services system in the US. It has been clearly demonstrated by programs that offer energy efficiency services to nonprofits that, with minimal investment, they can reduce their energy consumption by ten to thirty percent. This energy conservation potential motivated the Department of Energy and Oak Ridge National Laboratory to conceive a project to help states develop energy efficiency programs for nonprofits. The purpose of the project was two-fold: (1) to analyze existing programs to determine which design and delivery mechanisms are particularly effective, and (2) to create model programs for states to follow in tailoring their own plans for helping nonprofits with energy efficiency programs. Twelve existing programs were reviewed, and three model programs were devised and put into operation. The model programs provide various forms of financial assistance to nonprofits and serve as a source of information on energy efficiency as well. After examining the results from the model programs (which are still on-going) and from the existing programs, several "replicability factors" were developed for use in the implementation of programs by other states. These factors -- some concrete and practical, others more generalized -- serve as guidelines for states devising programs based on their own particular needs and resources.

  10. Evaluation of Supply Chain Efficiency Based on a Novel Network of Data Envelopment Analysis Model

    NASA Astrophysics Data System (ADS)

    Fu, Li Fang; Meng, Jun; Liu, Ying

    2015-12-01

    Performance evaluation of supply chains (SC) is a vital topic in SC management and an inherently complex problem involving multilayered internal linkages and the activities of multiple entities. Recently, various Network Data Envelopment Analysis (NDEA) models, which opened the "black box" of conventional DEA, were developed and applied to evaluate complex SCs with a multilayer network structure. However, most of them are input- or output-oriented models, which cannot take into consideration nonproportional changes of inputs and outputs simultaneously. This paper extends the Slack-based measure (SBM) model to a nonradial, nonoriented network model, named U-NSBM, with the presence of undesirable outputs in the SC. A numerical example is presented to demonstrate the applicability of the model in quantifying the efficiency and ranking supply chain performance. By comparing with the CCR and U-SBM models, it is shown that the proposed model has higher distinguishing ability and gives feasible solutions in the presence of undesirable outputs. Meanwhile, it provides more insights for decision makers about the sources of inefficiency as well as guidance to improve SC performance.

  11. Flow analysis for efficient design of wavy structured microchannel mixing devices

    NASA Astrophysics Data System (ADS)

    Kanchan, Mithun; Maniyeri, Ranjith

    2018-04-01

    Microfluidics is a rapidly growing field of applied research, strongly driven by the demands of biotechnology and medical innovation. Lab-on-chip (LOC) is one such application, which integrates bio-laboratory functions onto a single micro-channel-based fluidic chip. Since fluid flow in such devices is restricted to the laminar regime, designing an efficient passive modulator to induce chaotic mixing in such diffusion-based flow is a major challenge. In the present work, two-dimensional numerical simulation of viscous incompressible flow is carried out using the immersed boundary method (IBM) to obtain an efficient design for wavy structured micro-channel mixing devices. The continuity and Navier-Stokes equations governing the flow are solved by a fractional step based finite volume method on a staggered Cartesian grid system. IBM uses Eulerian coordinates to describe fluid flow and Lagrangian coordinates to describe the solid boundary. A Dirac delta function is used to couple both these coordinate variables. A tether forcing term is used to impose the no-slip boundary condition at the wavy structure and fluid interface. Fluid flow analysis over a range of Reynolds numbers is carried out for four wavy structure models and one straight line model. By analyzing fluid accumulation zones and flow velocities, it can be concluded that the straight line structure performs better mixing at low Reynolds numbers and Model 2 at higher Reynolds numbers. Thus wavy structures can be incorporated in micro-channels to improve mixing efficiency.

  12. An intelligent knowledge-based and customizable home care system framework with ubiquitous patient monitoring and alerting techniques.

    PubMed

    Chen, Yen-Lin; Chiang, Hsin-Han; Yu, Chao-Wei; Chiang, Chuan-Yen; Liu, Chuan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study develops and integrates an efficient knowledge-based system and a component-based framework to design an intelligent and flexible home health care system. The proposed knowledge-based system integrates an efficient rule-based reasoning model and flexible knowledge rules for determining efficiently and rapidly the necessary physiological and medication treatment procedures based on software modules, video camera sensors, communication devices, and physiological sensor information. This knowledge-based system offers high flexibility for improving and extending the system further to meet the monitoring demands of new patient and caregiver health care by updating the knowledge rules in the inference mechanism. All of the proposed functional components in this study are reusable, configurable, and extensible for system developers. Based on the experimental results, the proposed intelligent homecare system demonstrates that it can accomplish the extensible, customizable, and configurable demands of the ubiquitous healthcare systems to meet the different demands of patients and caregivers under various rehabilitation and nursing conditions.

  13. An Intelligent Knowledge-Based and Customizable Home Care System Framework with Ubiquitous Patient Monitoring and Alerting Techniques

    PubMed Central

    Chen, Yen-Lin; Chiang, Hsin-Han; Yu, Chao-Wei; Chiang, Chuan-Yen; Liu, Chuan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study develops and integrates an efficient knowledge-based system and a component-based framework to design an intelligent and flexible home health care system. The proposed knowledge-based system integrates an efficient rule-based reasoning model and flexible knowledge rules for determining efficiently and rapidly the necessary physiological and medication treatment procedures based on software modules, video camera sensors, communication devices, and physiological sensor information. This knowledge-based system offers high flexibility for improving and extending the system further to meet the monitoring demands of new patient and caregiver health care by updating the knowledge rules in the inference mechanism. All of the proposed functional components in this study are reusable, configurable, and extensible for system developers. Based on the experimental results, the proposed intelligent homecare system demonstrates that it can accomplish the extensible, customizable, and configurable demands of the ubiquitous healthcare systems to meet the different demands of patients and caregivers under various rehabilitation and nursing conditions. PMID:23112650

  14. A study of Ground Source Heat Pump based on a heat infiltrates coupling model established with FEFLOW

    NASA Astrophysics Data System (ADS)

    Chen, H.; Hu, C.; Chen, G.; Zhang, Q.

    2017-12-01

    Geothermal heat is a viable source of energy, and its environmental impact in terms of CO2 emissions is significantly lower than that of conventional fossil fuels. It is therefore vital that engineers acquire a proper understanding of the Ground Source Heat Pump (GSHP). In this study, models of the borehole exchanger under pure-conduction and coupled heat-infiltration conditions were established with FEFLOW. Energy efficiency, heat transfer endurance, and heat transfer per unit depth were introduced to quantify energy performance and the endurance period. The heat transfer process between the soil and the working fluid was analyzed for a Borehole Heat Exchanger (BHE) in soil with and without groundwater seepage. Based on the model, the variation of energy efficiency and heat transfer endurance with conditions including the BHE configuration, soil properties, and thermal load characteristics was discussed. Particular attention was given to the heat transfer process in multi-layer soil in which one layer carries groundwater flow, and the influence of thermal dispersivity on heat transfer performance was also analyzed. The final results show that the heat-infiltration coupling model established in this context is reasonable and can be applied to engineering design.

  15. Robust large-scale parallel nonlinear solvers for simulations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write and easily portable. However, the method usually takes twice as long to solve as Newton-GMRES on general problems because it solves two linear systems at each iteration. In this paper, we discuss modifications to Bouaricha's method for a practical implementation, including a special globalization technique and other modifications for greater efficiency. We present numerical results showing computational advantages over Newton-GMRES on some realistic problems. We further discuss a new approach for dealing with singular (or ill-conditioned) matrices. In particular, we modify an algorithm for identifying a turning point so that an increasingly ill-conditioned Jacobian does not prevent convergence.
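
    As a minimal sketch of the lower-order model discussed above (not Sandia's limited-memory implementation), Broyden's method seeds a Jacobian approximation once, here by finite differences, and then applies rank-one secant updates so no further Jacobian evaluations are needed; the test system and starting point are illustrative.

      import numpy as np

      def F(x):
          return np.array([x[0] ** 2 + x[1] ** 2 - 4.0,
                           np.exp(x[0]) + x[1] - 1.0])

      def fd_jacobian(F, x, h=1e-7):
          # One-time finite-difference Jacobian to seed the secant approximation.
          f0, n = F(x), len(x)
          J = np.zeros((n, n))
          for j in range(n):
              e = np.zeros(n); e[j] = h
              J[:, j] = (F(x + e) - f0) / h
          return J

      def broyden(F, x, tol=1e-10, maxit=50):
          B = fd_jacobian(F, x)                 # Jacobian evaluated only once
          f = F(x)
          for _ in range(maxit):
              s = np.linalg.solve(B, -f)        # quasi-Newton step
              x = x + s
              f_new = F(x)
              if np.linalg.norm(f_new) < tol:
                  break
              # "Good" Broyden rank-one secant update of the approximation.
              B += np.outer(f_new - f - B @ s, s) / (s @ s)
              f = f_new
          return x

      print("root:", broyden(F, np.array([-1.5, 0.5])))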

  16. Optimal laser wavelength for efficient laser power converter operation over temperature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Höhn, O., E-mail: oliver.hoehn@ise.fraunhofer.de; Walker, A. W.; Bett, A. W.

    2016-06-13

    A temperature dependent modeling study is conducted on a GaAs laser power converter to identify the optimal incident laser wavelength for optical power transmission. Furthermore, the respective temperature dependent maximal conversion efficiencies in the radiative limit as well as in a practically achievable limit are presented. The model is based on the transfer matrix method coupled to a two-diode model, and is calibrated to experimental data of a GaAs photovoltaic device over laser irradiance and temperature. Since the laser wavelength does not strongly influence the open circuit voltage of the laser power converter, the optimal laser wavelength is determined to be in the range where the external quantum efficiency is maximal, but weighted by the photon flux of the laser.

  17. Designing an activity-based costing model for a non-admitted prisoner healthcare setting.

    PubMed

    Cai, Xiao; Moore, Elizabeth; McNamara, Martin

    2013-09-01

    To design and deliver an activity-based costing model within a non-admitted prisoner healthcare setting. Key phases from the NSW Health clinical redesign methodology were utilised: diagnostic, solution design and implementation. The diagnostic phase utilised a range of strategies to identify issues requiring attention in the development of the costing model. The solution design phase conceptualised distinct 'building blocks' of activity and cost based on the speciality of clinicians providing care. These building blocks enabled the classification of activity and comparisons of costs between similar facilities. The implementation phase validated the model. The project generated an activity-based costing model based on actual activity performed, gained acceptability among clinicians and managers, and provided the basis for ongoing efficiency and benchmarking efforts.

  18. Inverse finite-size scaling for high-dimensional significance analysis

    NASA Astrophysics Data System (ADS)

    Xu, Yingying; Puranen, Santeri; Corander, Jukka; Kabashima, Yoshiyuki

    2018-06-01

    We propose an efficient procedure for significance determination in high-dimensional dependence learning based on surrogate data testing, termed inverse finite-size scaling (IFSS). The IFSS method is based on our discovery of a universal scaling property of random matrices which enables inference about signal behavior from surrogate data of much smaller scale than the dimensionality of the original data. As a motivating example, we demonstrate the procedure for ultra-high-dimensional Potts models with on the order of 10^10 parameters. IFSS reduces the computational effort of the data-testing procedure by several orders of magnitude, making it very efficient for practical purposes. This approach thus holds considerable potential for generalization to other types of complex models.

  19. Multilevel Optimization Framework for Hierarchical Stiffened Shells Accelerated by Adaptive Equivalent Strategy

    NASA Astrophysics Data System (ADS)

    Wang, Bo; Tian, Kuo; Zhao, Haixin; Hao, Peng; Zhu, Tianyu; Zhang, Ke; Ma, Yunlong

    2017-06-01

    In order to improve the post-buckling optimization efficiency of hierarchical stiffened shells, a multilevel optimization framework accelerated by an adaptive equivalent strategy is presented in this paper. Firstly, the Numerical-based Smeared Stiffener Method (NSSM) for hierarchical stiffened shells is derived by means of the numerical implementation of asymptotic homogenization (NIAH) method. Based on the NSSM, a reasonable adaptive equivalent strategy for hierarchical stiffened shells is developed from the concept of hierarchy reduction. Its core idea is to decide self-adaptively which hierarchy of the structure should be equivalenced, according to the critical buckling mode rapidly predicted by NSSM. Compared with the detailed model, the high prediction accuracy and efficiency of the proposed model are highlighted. On the basis of this adaptive equivalent model, a multilevel optimization framework is then established by decomposing the complex entire optimization process into major-stiffener-level and minor-stiffener-level sub-optimizations, during which Fixed Point Iteration (FPI) is employed to accelerate convergence. Finally, illustrative examples of the multilevel framework are carried out to demonstrate its efficiency and effectiveness in searching for the global optimum, by contrast with the single-level optimization method. Remarkably, the high efficiency and flexibility of the adaptive equivalent strategy is indicated by comparison with the single equivalent strategy.

  20. A parallel computing engine for a class of time critical processes.

    PubMed

    Nabhan, T M; Zomaya, A Y

    1997-01-01

    This paper focuses on the efficient parallel implementation of systems of a numerically intensive nature over loosely coupled multiprocessor architectures. These analytical models are of significant importance to many real-time systems that have to meet severe time constraints. A parallel computing engine (PCE) has been developed in this work for the efficient simplification and the near optimal scheduling of numerical models over the different cooperating processors of the parallel computer. First, the analytical system is efficiently coded in its general form. The model is then simplified by using any available information (e.g., constant parameters). A task graph representing the interconnections among the different components (or equations) is generated. The graph can then be compressed to control the computation/communication requirements. The task scheduler employs a graph-based iterative scheme, based on the simulated annealing algorithm, to map the vertices of the task graph onto a Multiple-Instruction-stream Multiple-Data-stream (MIMD) type of architecture. The algorithm uses a nonanalytical cost function that properly considers the computation capability of the processors, the network topology, the communication time, and congestion possibilities. Moreover, the proposed technique is simple, flexible, and computationally viable. The efficiency of the algorithm is demonstrated by two case studies with good results.
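
    A condensed sketch of the scheduling scheme under stated assumptions (invented task graph, weights, and cost blend; not the PCE's actual cost function): simulated annealing over task-to-processor assignments, trading off load balance against cross-processor communication.

      import math, random

      random.seed(3)
      n_tasks, n_procs = 12, 3
      work = [random.uniform(1, 4) for _ in range(n_tasks)]
      edges = [(i, (i + 1) % n_tasks, random.uniform(0.5, 2)) for i in range(n_tasks)]

      def cost(assign):
          loads = [0.0] * n_procs
          for t, p in enumerate(assign):
              loads[p] += work[t]
          comm = sum(w for a, b, w in edges if assign[a] != assign[b])
          return max(loads) + 0.5 * comm        # makespan plus communication

      assign = [random.randrange(n_procs) for _ in range(n_tasks)]
      c, T = cost(assign), 5.0
      while T > 1e-3:
          t = random.randrange(n_tasks)         # propose moving one task
          old = assign[t]
          assign[t] = random.randrange(n_procs)
          c_new = cost(assign)
          # Accept improvements always, uphill moves with Boltzmann probability.
          if c_new <= c or random.random() < math.exp((c - c_new) / T):
              c = c_new
          else:
              assign[t] = old                   # reject: restore assignment
          T *= 0.999                            # geometric cooling schedule
      print("final cost:", round(c, 2), "assignment:", assign)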

  1. Improving Visualization and Interpretation of Metabolome-Wide Association Studies: An Application in a Population-Based Cohort Using Untargeted 1H NMR Metabolic Profiling.

    PubMed

    Castagné, Raphaële; Boulangé, Claire Laurence; Karaman, Ibrahim; Campanella, Gianluca; Santos Ferreira, Diana L; Kaluarachchi, Manuja R; Lehne, Benjamin; Moayyeri, Alireza; Lewis, Matthew R; Spagou, Konstantina; Dona, Anthony C; Evangelos, Vangelis; Tracy, Russell; Greenland, Philip; Lindon, John C; Herrington, David; Ebbels, Timothy M D; Elliott, Paul; Tzoulaki, Ioanna; Chadeau-Hyam, Marc

    2017-10-06

    1H NMR spectroscopy of biofluids generates reproducible data allowing detection and quantification of small molecules in large population cohorts. Statistical models to analyze such data are now well-established, and the use of univariate metabolome wide association studies (MWAS) investigating the spectral features separately has emerged as a computationally efficient and interpretable alternative to multivariate models. The MWAS rely on the accurate estimation of a metabolome wide significance level (MWSL) to be applied to control the family wise error rate. Subsequent interpretation requires efficient visualization and formal feature annotation, which, in turn, call for efficient prioritization of spectral variables of interest. Using human serum 1H NMR spectroscopic profiles from 3948 participants from the Multi-Ethnic Study of Atherosclerosis (MESA), we have performed a series of MWAS for serum levels of glucose. We first propose an extension of the conventional MWSL that yields stable estimates of the MWSL across the different model parameterizations and distributional features of the outcome. We propose both efficient visualization methods and a strategy based on subsampling and internal validation to prioritize the associations. Our work proposes and illustrates practical and scalable solutions to facilitate the implementation of the MWAS approach and improve interpretation in large cohort studies.
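
    One common way to estimate an MWSL, shown here as a sketch on simulated data (the paper proposes an extension of this conventional approach, not reproduced here): permute the outcome, record the minimum p-value across all spectral variables, and take a low quantile of that null distribution as the family-wise threshold.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      n, p = 300, 500                       # subjects x spectral variables
      X = rng.normal(size=(n, p))
      y = rng.normal(size=n)                # e.g., glucose; pure noise here

      def min_pvalue(y):
          # Pearson correlations of every spectral variable with the outcome,
          # converted to t-statistics, keeping only the smallest p-value.
          r = (X - X.mean(0)).T @ (y - y.mean()) / (n * X.std(0) * y.std())
          t = r * np.sqrt((n - 2) / (1 - r ** 2))
          return 2 * stats.t.sf(np.abs(t), df=n - 2).min()

      null_min_p = [min_pvalue(rng.permutation(y)) for _ in range(200)]
      mwsl = np.quantile(null_min_p, 0.05)  # controls FWER at ~5%
      print(f"estimated MWSL: {mwsl:.2e}  (Bonferroni: {0.05 / p:.2e})")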

  2. Improving Visualization and Interpretation of Metabolome-Wide Association Studies: An Application in a Population-Based Cohort Using Untargeted 1H NMR Metabolic Profiling

    PubMed Central

    2017-01-01

    1H NMR spectroscopy of biofluids generates reproducible data allowing detection and quantification of small molecules in large population cohorts. Statistical models to analyze such data are now well-established, and the use of univariate metabolome wide association studies (MWAS) investigating the spectral features separately has emerged as a computationally efficient and interpretable alternative to multivariate models. The MWAS rely on the accurate estimation of a metabolome wide significance level (MWSL) to be applied to control the family wise error rate. Subsequent interpretation requires efficient visualization and formal feature annotation, which, in turn, call for efficient prioritization of spectral variables of interest. Using human serum 1H NMR spectroscopic profiles from 3948 participants from the Multi-Ethnic Study of Atherosclerosis (MESA), we have performed a series of MWAS for serum levels of glucose. We first propose an extension of the conventional MWSL that yields stable estimates of the MWSL across the different model parameterizations and distributional features of the outcome. We propose both efficient visualization methods and a strategy based on subsampling and internal validation to prioritize the associations. Our work proposes and illustrates practical and scalable solutions to facilitate the implementation of the MWAS approach and improve interpretation in large cohort studies. PMID:28823158

  3. Efficiency Considerations in Low Pressure Turbines

    NASA Technical Reports Server (NTRS)

    2010-01-01

    Issues & Topics Discussed: a) Aviation Week reported a shortfall in LPT efficiency due to the application of "high lift airfoils". b) Progress in design technologies for LPTs during the last 20 years: 1) Application of RANS-based CFD codes. 2) Integration of recent experimental data and modeling of LPT airfoil-specific flows into design methods. c) Opportunities to further enhance LPT efficiency for commercial aviation and military transport applications and to impact emissions, noise, weight & cost.

  4. Efficient Personalized Mispronunciation Detection of Taiwanese-Accented English Speech Based on Unsupervised Model Adaptation and Dynamic Sentence Selection

    ERIC Educational Resources Information Center

    Wu, Chung-Hsien; Su, Hung-Yu; Liu, Chao-Hong

    2013-01-01

    This study presents an efficient approach to personalized mispronunciation detection of Taiwanese-accented English. The main goal of this study was to detect frequently occurring mispronunciation patterns of Taiwanese-accented English instead of scoring English pronunciations directly. The proposed approach quickly identifies personalized…

  5. Impact-Based Training Evaluation Model (IBTEM) for School Supervisors in Indonesia

    ERIC Educational Resources Information Center

    Sutarto; Usman, Husaini; Jaedun, Amat

    2016-01-01

    This article represents a study aiming at developing: (1) an IBTEM which is capable to promote partnership between training providers and their client institutions, easy to understand, effective, efficient; and (2) an IBTEM implementation guide which is comprehensive, coherent, easy to understand, effective, and efficient. The method used in the…

  6. Collection Efficiency and Ice Accretion Characteristics of Two Full Scale and One 1/4 Scale Business Jet Horizontal Tails

    NASA Technical Reports Server (NTRS)

    Bidwell, Colin S.; Papadakis, Michael

    2005-01-01

    Collection efficiency and ice accretion calculations have been made for a series of business jet horizontal tail configurations using a three-dimensional panel code, an adaptive grid code, and the NASA Glenn LEWICE3D grid based ice accretion code. The horizontal tail models included two full scale wing tips and a 25 percent scale model. Flow solutions for the horizontal tails were generated using the PMARC panel code. Grids used in the ice accretion calculations were generated using the adaptive grid code ICEGRID. The LEWICE3D grid based ice accretion program was used to calculate impingement efficiency and ice shapes. Ice shapes typifying rime and mixed icing conditions were generated for a 30 minute hold condition. All calculations were performed on an SGI Octane computer. The results have been compared to experimental flow and impingement data. In general, the calculated flow and collection efficiencies compared well with experiment, and the ice shapes appeared representative of the rime and mixed icing conditions for which they were calculated.

  7. IMPROVING TACONITE PROCESSING PLANT EFFICIENCY BY COMPUTER SIMULATION, Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    William M. Bond; Salih Ersayin

    2007-03-30

    This project involved industrial scale testing of a mineral processing simulator to improve the efficiency of a taconite processing plant, namely the Minorca mine. The Concentrator Modeling Center at the Coleraine Minerals Research Laboratory, University of Minnesota Duluth, enhanced the capabilities of available software, Usim Pac, by developing mathematical models needed for accurate simulation of taconite plants. This project provided funding for this technology to prove itself in the industrial environment. As the first step, data representing existing plant conditions were collected by sampling and sample analysis. Data were then balanced and provided a basis for assessing the efficiency of individual devices and the plant, and also for performing simulations aimed at improving plant efficiency. Performance evaluation served as a guide in developing alternative process strategies for more efficient production. A large number of computer simulations were then performed to quantify the benefits and effects of implementing these alternative schemes. Modification of makeup ball size was selected as the most feasible option for the target performance improvement. This was combined with replacement of existing hydrocyclones with more efficient ones. After plant implementation of these modifications, plant sampling surveys were carried out to validate findings of the simulation-based study. Plant data showed very good agreement with the simulated data, confirming results of simulation. After the implementation of modifications in the plant, several upstream bottlenecks became visible. Despite these bottlenecks limiting full capacity, concentrator energy improvement of 7% was obtained. Further improvements in energy efficiency are expected in the near future. The success of this project demonstrated the feasibility of a simulation-based approach. Currently, the Center provides simulation-based service to all the iron ore mining companies operating in northern Minnesota, and future proposals are pending with non-taconite mineral processing applications.

  8. Efficient Ho:LuLiF4 laser diode-pumped at 1.15 μm.

    PubMed

    Wang, Sheng-Li; Huang, Chong-Yuan; Zhao, Cheng-Chun; Li, Hong-Qiang; Tang, Yu-Long; Yang, Nan; Zhang, Shuai-Yi; Hang, Yin; Xu, Jian-Qiu

    2013-07-15

    We report the first laser operation based on a Ho(3+)-doped LuLiF(4) single crystal, directly pumped with a 1.15-μm laser diode (LD). Based on a numerical model, it is found that the "two-for-one" effect induced by cross-relaxation plays an important role in the laser efficiency. A maximum continuous wave (CW) output power of 1.4 W is produced with a beam propagation factor of M(2) ~2 at a lasing wavelength of 2.066 μm. A slope efficiency of 29% with respect to absorbed power is obtained.

  9. CO₂ carbonation under aqueous conditions using petroleum coke combustion fly ash.

    PubMed

    González, A; Moreno, N; Navia, R

    2014-12-01

    Fly ash from petroleum coke combustion was evaluated for CO2 capture in aqueous medium. Moreover, the carbonation efficiency based on different methodologies and the kinetic parameters of the process were determined. The results show that petroleum coke fly ash achieved a CO2 capture yield of 21% under experimental conditions of 12 g L(-1) and 363 K without stirring. The carbonation efficiency of petroleum coke fly ash based on reactive calcium species was within the range of carbonation efficiencies reported by several authors. In addition, carbonation by petroleum coke fly ash follows a pseudo-second order kinetic model. Copyright © 2014 Elsevier Ltd. All rights reserved.
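
    For concreteness, the pseudo-second order model mentioned above has the integrated form q(t) = qe^2*k*t / (1 + qe*k*t), where qe is the equilibrium uptake and k the rate constant. A fitting sketch with synthetic uptake data (not the paper's measurements):

      import numpy as np
      from scipy.optimize import curve_fit

      def pso(t, qe, k):
          # Integrated pseudo-second-order uptake curve.
          return qe ** 2 * k * t / (1 + qe * k * t)

      t = np.array([5, 10, 20, 40, 60, 90, 120.0])        # min
      q = np.array([0.9, 1.5, 2.1, 2.6, 2.8, 2.95, 3.0])  # illustrative uptake

      (qe, k), _ = curve_fit(pso, t, q, p0=(3.0, 0.01))
      print(f"qe = {qe:.2f}, k = {k:.4f} (per uptake unit per min)")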

  10. An efficient transport solver for tokamak plasmas

    DOE PAGES

    Park, Jin Myung; Murakami, Masanori; St. John, H. E.; ...

    2017-01-03

    A simple approach to efficiently solve a coupled set of 1-D diffusion-type transport equations with a stiff transport model for tokamak plasmas is presented based on the 4th order accurate Interpolated Differential Operator scheme along with a nonlinear iteration method derived from a root-finding algorithm. Here, numerical tests using the Trapped Gyro-Landau-Fluid model show that the presented high order method provides an accurate transport solution using a small number of grid points with robust nonlinear convergence.
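
    The abstract couples a discretized stiff diffusion equation to a root-finding algorithm for the nonlinear iteration. The sketch below shows that general pattern with a plain backward-Euler finite-difference step, not the paper's 4th-order IDO scheme; the grid, diffusivity, and boundary conditions are illustrative assumptions.

```python
# A minimal sketch (not the paper's method) of advancing a 1-D diffusion
# equation u_t = (D(u) u_x)_x one implicit step by casting the discretized
# equations as a root-finding problem, in the spirit of the nonlinear
# iteration described above.
import numpy as np
from scipy.optimize import root

nx, dt = 50, 1e-3
dx = 1.0 / (nx - 1)
x = np.linspace(0.0, 1.0, nx)
u_old = 1.0 + 0.5 * np.cos(np.pi * x)          # illustrative initial profile

def D(u):
    return 1.0 + u**2                           # stiff, solution-dependent diffusivity

def residual(u):
    """Backward-Euler residual with zero-flux boundaries."""
    r = np.empty_like(u)
    flux = D(0.5 * (u[1:] + u[:-1])) * (u[1:] - u[:-1]) / dx
    r[1:-1] = (u[1:-1] - u_old[1:-1]) / dt - (flux[1:] - flux[:-1]) / dx
    r[0] = u[0] - u[1]                          # u_x = 0 at left boundary
    r[-1] = u[-1] - u[-2]                       # u_x = 0 at right boundary
    return r

u_new = root(residual, u_old).x                # Newton-type nonlinear iteration
print(u_new[:5])
```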

  11. Translation elicits a growth rate-dependent, genome-wide, differential protein production in Bacillus subtilis.

    PubMed

    Borkowski, Olivier; Goelzer, Anne; Schaffer, Marc; Calabre, Magali; Mäder, Ulrike; Aymerich, Stéphane; Jules, Matthieu; Fromion, Vincent

    2016-05-17

    Complex regulatory programs control cell adaptation to environmental changes by setting condition-specific proteomes. In balanced growth, bacterial protein abundances depend on the dilution rate, transcript abundances and transcript-specific translation efficiencies. We revisited the current theory claiming the invariance of bacterial translation efficiency. By integrating genome-wide transcriptome datasets and datasets from a library of synthetic gfp-reporter fusions, we demonstrated that translation efficiencies in Bacillus subtilis decreased up to fourfold from slow to fast growth. The translation initiation regions elicited a growth rate-dependent, differential production of proteins without regulators, hence revealing a unique, hard-coded, growth rate-dependent mode of regulation. We combined model-based data analyses of transcript and protein abundances genome-wide and revealed that this global regulation is extensively used in B. subtilis. We eventually developed a knowledge-based, three-step translation initiation model, experimentally challenged the model predictions and proposed that a growth rate-dependent drop in free ribosome abundance accounted for the differential protein production. © 2016 The Authors. Published under the terms of the CC BY 4.0 license.

  12. A Computationally-Efficient Inverse Approach to Probabilistic Strain-Based Damage Diagnosis

    NASA Technical Reports Server (NTRS)

    Warner, James E.; Hochhalter, Jacob D.; Leser, William P.; Leser, Patrick E.; Newman, John A

    2016-01-01

    This work presents a computationally-efficient inverse approach to probabilistic damage diagnosis. Given strain data at a limited number of measurement locations, Bayesian inference and Markov Chain Monte Carlo (MCMC) sampling are used to estimate probability distributions of the unknown location, size, and orientation of damage. Substantial computational speedup is obtained by replacing a three-dimensional finite element (FE) model with an efficient surrogate model. The approach is experimentally validated on cracked test specimens where full field strains are determined using digital image correlation (DIC). Access to full field DIC data allows for testing of different hypothetical sensor arrangements, facilitating the study of strain-based diagnosis effectiveness as the distance between damage and measurement locations increases. The ability of the framework to effectively perform both probabilistic damage localization and characterization in cracked plates is demonstrated and the impact of measurement location on uncertainty in the predictions is shown. Furthermore, the analysis time to produce these predictions is orders of magnitude less than a baseline Bayesian approach with the FE method by utilizing surrogate modeling and effective numerical sampling approaches.
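
    A stripped-down version of the sampling loop described above might look as follows: a random-walk Metropolis-Hastings chain whose likelihood calls a cheap surrogate instead of the finite element model. The surrogate function, the damage parameterization, and the noise level are illustrative assumptions, not the paper's.

```python
# A minimal Metropolis-Hastings sketch in the spirit of the approach above:
# a cheap surrogate replaces the FE model inside the likelihood.
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.05                                   # assumed measurement noise

def surrogate_strain(theta):
    """Stand-in for a trained surrogate mapping damage params -> strains."""
    x, size = theta
    return np.array([np.exp(-((s - x) ** 2)) * size for s in (0.2, 0.5, 0.8)])

data = surrogate_strain(np.array([0.6, 1.0])) + rng.normal(0, sigma, 3)

def log_post(theta):
    if not (0 <= theta[0] <= 1 and 0 < theta[1] < 2):
        return -np.inf                         # uniform prior bounds
    r = data - surrogate_strain(theta)
    return -0.5 * np.sum(r**2) / sigma**2

theta = np.array([0.5, 0.5])
chain = []
for _ in range(5000):
    prop = theta + rng.normal(0, 0.05, 2)      # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)
print(np.mean(chain[1000:], axis=0))           # posterior mean after burn-in
```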

  13. Efficiency enhancement of optimized Latin hypercube sampling strategies: Application to Monte Carlo uncertainty analysis and meta-modeling

    NASA Astrophysics Data System (ADS)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans

    2015-02-01

    The majority of literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has been previously made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then, by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this relative improvement decreases with increasing number of sample points and input parameter dimensions. Since the computational time and efforts for generating the sample designs in the two approaches are identical, the use of midpoint LHS as the initial design in OLHS is thus recommended.
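
    The contrast between the two initial designs can be made concrete in a few lines of code: both stratify each dimension into n intervals, but one draws a random point in each stratum while the other uses the stratum midpoint. The maximin inter-point distance used for comparison below is one common space-filling measure, chosen here for illustration rather than taken from the paper.

```python
# A minimal sketch contrasting random LHS and midpoint LHS initial designs,
# scored by the maximin distance criterion (an illustrative choice, not the
# paper's full optimization loop).
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)

def lhs(n, d, midpoint=False):
    """n-point Latin hypercube in d dimensions."""
    u = 0.5 if midpoint else rng.uniform(size=(n, d))
    perms = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (perms + u) / n

for label, mid in (("random LHS", False), ("midpoint LHS", True)):
    design = lhs(20, 2, midpoint=mid)
    print(label, "maximin distance:", pdist(design).min())
```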

  14. Using a Time-Driven Activity-Based Costing Model To Determine the Actual Cost of Services Provided by a Transgenic Core.

    PubMed

    Gerwin, Philip M; Norinsky, Rada M; Tolwani, Ravi J

    2018-03-01

    Laboratory animal programs and core laboratories often set service rates based on cost estimates. However, actual costs may be unknown, and service rates may not reflect the actual cost of services. Accurately evaluating the actual costs of services can be challenging and time-consuming. We used a time-driven activity-based costing (ABC) model to determine the cost of services provided by a resource laboratory at our institution. The time-driven approach is a more efficient approach to calculating costs than using a traditional ABC model. We calculated only 2 parameters: the time required to perform an activity and the unit cost of the activity based on employee cost. This method allowed us to rapidly and accurately calculate the actual cost of services provided, including microinjection of a DNA construct, microinjection of embryonic stem cells, embryo transfer, and in vitro fertilization. We successfully implemented a time-driven ABC model to evaluate the cost of these services and the capacity of labor used to deliver them. We determined how actual costs compared with current service rates. In addition, we determined that the labor supplied to conduct all services (10,645 min/wk) exceeded the practical labor capacity (8400 min/wk), indicating that the laboratory team was highly efficient and that additional labor capacity was needed to prevent overloading of the current team. Importantly, this time-driven ABC approach allowed us to establish a baseline model that can easily be updated to reflect operational changes or changes in labor costs. We demonstrated that a time-driven ABC model is a powerful management tool that can be applied to other core facilities as well as to entire animal programs, providing valuable information that can be used to set rates based on the actual cost of services and to improve operating efficiency.
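
    The two-parameter calculation described above reduces to multiplying each activity's time requirement by a unit labor cost, as in this sketch. The salary figure and per-service minutes are illustrative assumptions; the weekly labor supply and practical capacity figures come from the abstract.

```python
# A minimal sketch of time-driven ABC: cost = (time required) x (unit labor
# cost). All service times and the salary are illustrative assumptions.
cost_per_min = 50_000 / (48 * 5 * 8 * 60)      # assumed annual salary / work-min

services = {                                    # assumed minutes per service
    "DNA microinjection": 180,
    "ES cell microinjection": 240,
    "embryo transfer": 90,
    "in vitro fertilization": 300,
}

for name, minutes in services.items():
    print(f"{name}: ${minutes * cost_per_min:.2f} per unit")

weekly_demand = 10_645                          # min/wk supplied (from the abstract)
capacity = 8_400                                # practical capacity, min/wk
print(f"utilization: {weekly_demand / capacity:.0%}")  # > 100% -> overloaded team
```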

  15. A Taylor Expansion-Based Adaptive Design Strategy for Global Surrogate Modeling With Applications in Groundwater Modeling

    DOE PAGES

    Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; ...

    2017-12-27

    Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency in the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search informative samples, and a robust stopping criterion to terminate the sample search that guarantees that the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions with different dimensionality and complexity in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.
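
    In outline, an adaptive design of this family alternates surrogate fitting with scoring of candidate points, where the score rewards both distance from existing samples (exploration) and disagreement between the surrogate and its local Taylor extrapolation (exploitation). The sketch below follows that pattern in one dimension with an RBF surrogate; the exact TEAD score function and stopping criterion differ, so treat this as an illustrative assumption rather than the paper's algorithm.

```python
# A minimal adaptive-design sketch in the spirit of TEAD, using a 1-D RBF
# surrogate for brevity. The score below is an illustrative stand-in for the
# paper's hybrid score function.
import numpy as np
from scipy.interpolate import RBFInterpolator

def f(x):                                       # expensive model (illustrative)
    return np.sin(3 * x) + 0.3 * x**2

X = np.array([[0.0], [1.0], [2.0]])             # initial design
y = f(X[:, 0])
cand = np.linspace(0.0, 2.0, 201)[:, None]      # candidate pool

for _ in range(10):
    surr = RBFInterpolator(X, y)
    surr_vals = surr(cand)
    d = np.min(np.abs(cand - X[:, 0]), axis=1)          # distance to design
    grad = np.gradient(surr_vals, cand[:, 0])           # surrogate gradient
    resid = np.abs(grad) * d                             # 1st-order Taylor gap
    score = d / d.max() + resid / (resid.max() + 1e-12)  # explore + exploit
    x_new = cand[np.argmax(score)]
    X = np.vstack([X, x_new])
    y = np.append(y, f(x_new[0]))

print(f"final design size: {len(X)}")
```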

  16. Combining correlative and mechanistic habitat suitability models to improve ecological compensation.

    PubMed

    Meineri, Eric; Deville, Anne-Sophie; Grémillet, David; Gauthier-Clerc, Michel; Béchet, Arnaud

    2015-02-01

    Only a few studies have shown positive impacts of ecological compensation on species dynamics affected by human activities. We argue that this is due to inappropriate methods used to forecast required compensation in environmental impact assessments. These assessments are mostly descriptive and only valid at limited spatial and temporal scales. However, habitat suitability models developed to predict the impacts of environmental changes on potential species' distributions should provide rigorous science-based tools for compensation planning. Here we describe the two main classes of predictive models: correlative models and individual-based mechanistic models. We show how these models can be used alone or synoptically to improve compensation planning. While correlative models are easier to implement, they tend to ignore underlying ecological processes and lack accuracy. On the contrary, individual-based mechanistic models can integrate biological interactions, dispersal ability and adaptation. Moreover, among mechanistic models, those considering animal energy balance are particularly efficient at predicting the impact of foraging habitat loss. However, mechanistic models require more field data compared to correlative models. Hence we present two approaches which combine both methods for compensation planning, especially in relation to the spatial scale considered. We show how the availability of biological databases and software enabling fast and accurate population projections could be advantageously used to assess ecological compensation requirement efficiently in environmental impact assessments. © 2014 The Authors. Biological Reviews © 2014 Cambridge Philosophical Society.

  17. A Taylor Expansion-Based Adaptive Design Strategy for Global Surrogate Modeling With Applications in Groundwater Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing

    Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency in the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search informative samples, and a robust stopping criterion to terminate the sample search that guarantees that the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions with different dimensionality and complexity in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.

  18. AAC Intervention as an Immersion Model

    ERIC Educational Resources Information Center

    Dodd, Janet L.; Gorey, Megan

    2014-01-01

    Augmentative and alternative communication based interventions support individuals with complex communication needs in becoming effective and efficient communicators. However, there is often a disconnect between language models, communication opportunities, and desired intervention outcomes in the intervention process. This article outlines a…

  19. An Exact Model-Based Method for Near-Field Sources Localization with Bistatic MIMO System.

    PubMed

    Singh, Parth Raj; Wang, Yide; Chargé, Pascal

    2017-03-30

    In this paper, we propose an exact model-based method for near-field sources localization with a bistatic multiple input, multiple output (MIMO) radar system, and compare it with an approximated model-based method. The aim of this paper is to propose an efficient way to use the exact model of the received signals of near-field sources in order to eliminate the systematic error introduced by the use of approximated model in most existing near-field sources localization techniques. The proposed method uses parallel factor (PARAFAC) decomposition to deal with the exact model. Thanks to the exact model, the proposed method has better precision and resolution than the compared approximated model-based method. The simulation results show the performance of the proposed method.

  20. Parallel three-dimensional magnetotelluric inversion using adaptive finite-element method. Part I: theory and synthetic study

    NASA Astrophysics Data System (ADS)

    Grayver, Alexander V.

    2015-07-01

    This paper presents a distributed magnetotelluric inversion scheme based on the adaptive finite-element method (FEM). The key novel aspect of the introduced algorithm is the use of automatic mesh refinement techniques for both forward and inverse modelling. These techniques alleviate the tedious and subjective procedure of choosing a suitable model parametrization. To avoid overparametrization, meshes for the forward and inverse problems were decoupled. For calculation of accurate electromagnetic (EM) responses, an automatic mesh refinement algorithm based on a goal-oriented error estimator has been adopted. For further efficiency gain, EM fields for each frequency were calculated using independent meshes in order to account for the substantially different spatial behaviour of the fields over a wide range of frequencies. An automatic approach for efficient initial mesh design in inverse problems based on the linearized model resolution matrix was developed. To make this algorithm suitable for large-scale problems, it was proposed to use a low-rank approximation of the linearized model resolution matrix. In order to fill the gap between initial and true model complexities and better resolve emerging 3-D structures, an algorithm for adaptive inverse mesh refinement was derived. Within this algorithm, spatial variations of the imaged parameter are calculated and the mesh is refined in the neighborhoods of points with the largest variations. A series of numerical tests were performed to demonstrate the utility of the presented algorithms. Adaptive mesh refinement based on the model resolution estimates provides an efficient tool to derive initial meshes that account for arbitrary survey layouts, data types, frequency content and measurement uncertainties. Furthermore, the algorithm is capable of delivering meshes suitable for resolving features on multiple scales while keeping the number of unknowns low. However, such meshes exhibit a dependency on the initial model guess. Additionally, it is demonstrated that the adaptive mesh refinement can be particularly efficient in resolving complex shapes. The implemented inversion scheme was able to resolve a hemisphere object with sufficient resolution, starting from a coarse discretization and refining the mesh adaptively in a fully automatic process. The code is able to harness the computational power of modern distributed platforms and is shown to work with models consisting of millions of degrees of freedom. Significant computational savings were achieved by using locally refined decoupled meshes.

  1. A hydrological emulator for global applications - HE v1.0.0

    NASA Astrophysics Data System (ADS)

    Liu, Yaling; Hejazi, Mohamad; Li, Hongyi; Zhang, Xuesong; Leng, Guoyong

    2018-03-01

    While global hydrological models (GHMs) are very useful in exploring water resources and interactions between the Earth and human systems, their use often requires numerous model inputs, complex model calibration, and high computation costs. To overcome these challenges, we construct an efficient open-source and ready-to-use hydrological emulator (HE) that can mimic complex GHMs at a range of spatial scales (e.g., basin, region, globe). More specifically, we construct both a lumped and a distributed scheme of the HE based on the monthly abcd model to explore the tradeoff between computational cost and model fidelity. Model predictability and computational efficiency are evaluated in simulating global runoff from 1971 to 2010 with both the lumped and distributed schemes. The results are compared against the runoff product from the widely used Variable Infiltration Capacity (VIC) model. Our evaluation indicates that the lumped and distributed schemes present comparable results regarding annual total quantity, spatial pattern, and temporal variation of the major water fluxes (e.g., total runoff, evapotranspiration) across the 235 global basins (e.g., the correlation coefficient r between the annual total runoff from either of these two schemes and the VIC is > 0.96), except for several cold (e.g., Arctic, interior Tibet), dry (e.g., North Africa) and mountainous (e.g., Argentina) regions. Compared against the monthly total runoff product from the VIC (aggregated from daily runoff), the global mean Kling-Gupta efficiencies are 0.75 and 0.79 for the lumped and distributed schemes, respectively, with the distributed scheme better capturing spatial heterogeneity. Notably, the computational efficiency of the lumped scheme is 2 orders of magnitude higher than that of the distributed scheme and 7 orders of magnitude higher than that of the VIC model. A case study of uncertainty analysis for the 16 world basins with the largest annual streamflow, conducted using 100 000 model simulations, demonstrates the lumped scheme's extraordinary advantage in computational efficiency. Our results suggest that the revised lumped abcd model can serve as an efficient and reasonable HE for complex GHMs and is suitable for broad practical use, and the distributed scheme is also an efficient alternative if spatial heterogeneity is of more interest.
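
    The monthly abcd model at the emulator's core tracks just two stores (soil moisture and groundwater) with four parameters. A single time step under the usual Thomas (1981) formulation might be sketched as below; the parameter values and forcing numbers are illustrative assumptions, not the paper's calibration.

```python
# A minimal sketch of one monthly step of the abcd water-balance model
# (Thomas, 1981). All parameter and forcing values are illustrative.
import numpy as np

def abcd_step(P, PET, S, G, a=0.98, b=250.0, c=0.3, d=0.2):
    """One month: precip P, potential ET PET, soil store S, groundwater G (mm)."""
    W = P + S                                   # available water
    Y = (W + b) / (2 * a) - np.sqrt(((W + b) / (2 * a)) ** 2 - W * b / a)
    S_new = Y * np.exp(-PET / b)                # soil moisture carried over
    ET = Y - S_new                              # actual evapotranspiration
    recharge = c * (W - Y)                      # to groundwater store
    direct = (1 - c) * (W - Y)                  # direct runoff
    G_new = (G + recharge) / (1 + d)
    baseflow = d * G_new
    return S_new, G_new, ET, direct + baseflow  # total runoff

S, G = 100.0, 50.0
for P, PET in [(120, 60), (80, 90), (30, 110)]:
    S, G, ET, Q = abcd_step(P, PET, S, G)
    print(f"ET={ET:.1f} mm, runoff={Q:.1f} mm")
```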

  2. Comparative modeling of coevolution in communities of unicellular organisms: adaptability and biodiversity.

    PubMed

    Lashin, Sergey A; Suslov, Valentin V; Matushkin, Yuri G

    2010-06-01

    We propose an original program, "Evolutionary constructor", that is capable of computationally efficient modeling of both population-genetic and ecological problems, combining these directions in one model of the required detail level. We also present results of comparative modeling of stability, adaptability and biodiversity dynamics in populations of unicellular haploid organisms which form symbiotic ecosystems. The advantages and disadvantages of two evolutionary strategies of biota formation, one based on a few generalist taxa and the other based on biodiversity, are discussed.

  3. a Predator-Prey Model Based on the Fully Parallel Cellular Automata

    NASA Astrophysics Data System (ADS)

    He, Mingfeng; Ruan, Hongbo; Yu, Changliang

    We present a predator-prey lattice model containing movable wolves and sheep, which are characterized by Penna double bit strings. Sexual reproduction and child-care strategies are considered. To implement this model in an efficient way, we build a fully parallel cellular automaton based on a new definition of the neighborhood. We show the roles played by the initial densities of the populations, the mutation rate, and the linear size of the lattice in the evolution of this model.

  4. An Agent Based Collaborative Simplification of 3D Mesh Model

    NASA Astrophysics Data System (ADS)

    Wang, Li-Rong; Yu, Bo; Hagiwara, Ichiro

    Large-volume mesh models face challenges in fast rendering and transmission over the Internet. Mesh models obtained using three-dimensional (3D) scanning technology are usually very large in data volume. This paper develops a mobile-agent-based collaborative environment on the mobile-C development platform. Communication among distributed agents includes grabbing images of the visualized mesh model, annotating the grabbed images, and instant messaging. Remote, collaborative simplification can thus be conducted efficiently over the Internet.

  5. Small-kernel, constrained least-squares restoration of sampled image data

    NASA Technical Reports Server (NTRS)

    Hazra, Rajeeb; Park, Stephen K.

    1992-01-01

    Following the work of Park (1989), who extended a derivation of the Wiener filter based on the incomplete discrete/discrete model to a more comprehensive end-to-end continuous/discrete/continuous model, it is shown that a derivation of the constrained least-squares (CLS) filter based on the discrete/discrete model can also be extended to this more comprehensive continuous/discrete/continuous model. This results in an improved CLS restoration filter, which can be efficiently implemented as a small-kernel convolution in the spatial domain.

  6. Heterodyne efficiency of a coherent free-space optical communication model through atmospheric turbulence.

    PubMed

    Ren, Yongxiong; Dang, Anhong; Liu, Ling; Guo, Hong

    2012-10-20

    The heterodyne efficiency of a coherent free-space optical (FSO) communication model under the effects of atmospheric turbulence and misalignment is studied in this paper. To be more general, both the transmitted beam and the local oscillator beam are assumed to be partially coherent, based on the Gaussian Schell model (GSM). By using the derived analytical form of the cross-spectral function of a GSM beam propagating through atmospheric turbulence, a closed-form expression of heterodyne efficiency is derived, assuming that the propagation directions of the transmitted and local oscillator beams are slightly different. Then the impacts of atmospheric turbulence, the configuration of the two beams (namely, beam radius and spatial coherence width), detector radius, and misalignment angle on heterodyne efficiency are examined. Numerical results suggest that the beam radius of the two overlapping beams can be optimized to achieve a maximum heterodyne efficiency according to the turbulence conditions and the detector radius. It is also found that atmospheric turbulence conditions significantly degrade the efficiency of heterodyne detection, and that compared to fully coherent beams, partially coherent beams are less sensitive to changes in turbulence conditions and more robust against misalignment at the receiver.
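
    For reference, the fully coherent limit of the quantity studied above is the standard overlap-integral definition of heterodyne (mixing) efficiency over the detector area; the paper's closed-form expression generalizes this to partially coherent GSM beams in turbulence. The notation below is generic rather than the authors'.

```latex
% Heterodyne efficiency as the normalized overlap of the signal field E_s and
% the local-oscillator field E_LO over the detector area A_d (coherent limit).
\eta \;=\;
\frac{\left|\int_{A_d} E_s(\mathbf{r})\, E_{LO}^{*}(\mathbf{r})\, \mathrm{d}\mathbf{r}\right|^{2}}
     {\int_{A_d} |E_s(\mathbf{r})|^{2}\, \mathrm{d}\mathbf{r}\;
      \int_{A_d} |E_{LO}(\mathbf{r})|^{2}\, \mathrm{d}\mathbf{r}}
```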

  7. Optimizing Cubature for Efficient Integration of Subspace Deformations

    PubMed Central

    An, Steven S.; Kim, Theodore; James, Doug L.

    2009-01-01

    We propose an efficient scheme for evaluating nonlinear subspace forces (and Jacobians) associated with subspace deformations. The core problem we address is efficient integration of the subspace force density over the 3D spatial domain. Similar to Gaussian quadrature schemes that efficiently integrate functions that lie in particular polynomial subspaces, we propose cubature schemes (multi-dimensional quadrature) optimized for efficient integration of force densities associated with particular subspace deformations, particular materials, and particular geometric domains. We support generic subspace deformation kinematics, and nonlinear hyperelastic materials. For an r-dimensional deformation subspace with O(r) cubature points, our method is able to evaluate subspace forces at O(r2) cost. We also describe composite cubature rules for runtime error estimation. Results are provided for various subspace deformation models, several hyperelastic materials (St.Venant-Kirchhoff, Mooney-Rivlin, Arruda-Boyce), and multimodal (graphics, haptics, sound) applications. We show dramatically better efficiency than traditional Monte Carlo integration. PMID:19956777
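
    Schematically, the cubature idea above replaces the spatial integral of the subspace force density with a short, non-negatively weighted sum over n optimized points; the notation below is generic and meant only to fix ideas, not to reproduce the authors' exact formulation.

```latex
% Cubature approximation of the reduced subspace force at reduced coordinates
% q: a weighted sum of force-density evaluations g at n cubature points X_i,
% with non-negative weights w_i fitted to training data (generic notation).
\tilde{\mathbf{f}}(\mathbf{q}) \;\approx\; \sum_{i=1}^{n} w_i\,
\mathbf{g}\!\left(\mathbf{X}_i;\, \mathbf{q}\right), \qquad w_i \ge 0
```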

  8. A Dexterous Optional Randomized Response Model

    ERIC Educational Resources Information Center

    Tarray, Tanveer A.; Singh, Housila P.; Yan, Zaizai

    2017-01-01

    This article addresses the problem of estimating the proportion Pi[subscript S] of the population belonging to a sensitive group using optional randomized response technique in stratified sampling based on Mangat model that has proportional and Neyman allocation and larger gain in efficiency. Numerically, it is found that the suggested model is…

  9. Economic Modeling as a Component of Academic Strategic Planning.

    ERIC Educational Resources Information Center

    MacKinnon, Joyce; Sothmann, Mark; Johnson, James

    2001-01-01

    Computer-based economic modeling was used to enable a school of allied health to define outcomes, identify associated costs, develop cost and revenue models, and create a financial planning system. As a strategic planning tool, it assisted realistic budgeting and improved efficiency and effectiveness. (Contains 18 references.) (SK)

  10. A scalable plant-resolving radiative transfer model based on optimized GPU ray tracing

    USDA-ARS?s Scientific Manuscript database

    A new model for radiative transfer in participating media and its application to complex plant canopies is presented. The goal was to be able to efficiently solve complex canopy-scale radiative transfer problems while also representing sub-plant heterogeneity. In the model, individual leaf surfaces ...

  11. Evaluation of a watershed model for estimating daily flow using limited flow measurements

    USDA-ARS?s Scientific Manuscript database

    The Soil and Water Assessment Tool (SWAT) model was evaluated for estimation of continuous daily flow based on limited flow measurements in the Upper Oyster Creek (UOC) watershed. SWAT was calibrated against limited measured flow data and then validated. The Nash-Sutcliffe model Efficiency (NSE) and...
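
    The Nash-Sutcliffe efficiency used for this evaluation is simple enough to state in a few lines (1 is a perfect fit; values at or below 0 mean the simulation is no better than the observed mean). The flow values below are illustrative.

```python
# A minimal sketch of the Nash-Sutcliffe efficiency (NSE) for scoring
# simulated against observed flows. Arrays below are illustrative.
import numpy as np

def nse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

print(nse([1.2, 3.4, 2.8, 0.9], [1.0, 3.1, 3.0, 1.1]))
```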

  12. Diffusion lengths in irradiated N/P InP-on-Si solar cells

    NASA Technical Reports Server (NTRS)

    Wojtczuk, Steven; Colerico, Claudia; Summers, Geoffrey P.; Walters, Robert J.; Burke, Edward A.

    1996-01-01

    Indium phosphide (InP) solar cells were made on silicon (Si) wafers (InP/Si) to take advantage of both the radiation-hardness of the InP solar cell and the light weight and low cost of Si wafers. The InP/Si cells are intended for long-duration and/or high-radiation-orbit space missions. Spire has made N/P InP/Si cells of sizes up to 2 cm by 4 cm with beginning-of-life (BOL) AM0 efficiencies over 13% (one-sun, 28 C). These InP/Si cells have higher absolute efficiency and power density after a high radiation dose than gallium arsenide (GaAs) or silicon (Si) solar cells after a fluence of about 2e15 1 MeV electrons/sq. cm. In this work, we investigate the minority carrier (electron) base diffusion lengths in the N/P InP/Si cells. A quantum efficiency model was constructed for a 12% BOL AM0 N/P InP/Si cell, which agreed well with the absolutely measured quantum efficiency and the sun-simulator-measured AM0 photocurrent (30.1 mA/sq. cm). This model was then used to generate a table of AM0 photocurrents for a range of base diffusion lengths. AM0 photocurrents were then measured for irradiations up to 7.7e16 1 MeV electrons/sq. cm (the 12% BOL cell was 8% after the final irradiation). By comparing the measured photocurrents with the predicted photocurrents, base diffusion lengths were assigned at each fluence level. A damage coefficient K of 4e-8 and a starting (unirradiated) base electron diffusion length of 0.8 microns fit the data well. The quantum efficiency was measured again at the end of the experiment to verify that the photocurrent predicted by the model (25.5 mA/sq. cm) agreed with the simulator-measured photocurrent after irradiation (25.7 mA/sq. cm).
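
    The fit reported above is consistent with the standard diffusion-length degradation relation, in which fluence shortens the diffusion length from its unirradiated value through the damage coefficient. This is stated here in generic notation as a plausible reading of the abstract, not a quotation from it.

```latex
% Standard diffusion-length degradation under particle irradiation: fluence
% phi shortens the diffusion length L from its unirradiated value L_0 via the
% damage coefficient K_L (here L_0 ~ 0.8 um and K ~ 4e-8, per the abstract).
\frac{1}{L^{2}} \;=\; \frac{1}{L_{0}^{2}} \;+\; K_{L}\,\phi
```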

  13. Multiobjective optimization model of intersection signal timing considering emissions based on field data: A case study of Beijing.

    PubMed

    Kou, Weibin; Chen, Xumei; Yu, Lei; Gong, Huibo

    2018-04-18

    Most existing signal timing models aim to minimize the total delay and stops at intersections, without considering environmental factors. This paper analyzes the trade-off between vehicle emissions and traffic efficiency on the basis of field data. First, considering the different operating modes of cruising, acceleration, deceleration, and idling, field emissions and Global Positioning System (GPS) data are collected to estimate emission rates for heavy-duty and light-duty vehicles. Second, a multiobjective signal timing optimization model is established based on a genetic algorithm to minimize delay, stops, and emissions. Finally, a case study is conducted in Beijing. Nine scenarios are designed considering different weights of emissions and traffic efficiency. Comparison with results obtained using the Highway Capacity Manual (HCM) 2010 shows that signal timing optimized by the model proposed in this paper decreases vehicle delays and emissions more significantly. The optimization model can be applied in different cities, providing support for eco-signal design and development. Vehicle emissions are heavy at signalized intersections in urban areas. The multiobjective signal timing optimization model is proposed considering the trade-off between vehicle emissions and traffic efficiency on the basis of field data. The results indicate that signal timing optimized by the model proposed in this paper decreases vehicle emissions and delays more significantly.

  14. Influence of dislocation density on internal quantum efficiency of GaN-based semiconductors

    NASA Astrophysics Data System (ADS)

    Yu, Jiadong; Hao, Zhibiao; Li, Linsen; Wang, Lai; Luo, Yi; Wang, Jian; Sun, Changzheng; Han, Yanjun; Xiong, Bing; Li, Hongtao

    2017-03-01

    By considering the effects of stress fields arising from lattice distortion, as well as charge fields arising from line charges at edge dislocation cores, on the radiative recombination of excitons, a model of carriers' radiative and non-radiative recombination has been established for GaN-based semiconductors with a given dislocation density. Using the vector average of the stress fields and the charge fields, the relationship between dislocation density and internal quantum efficiency (IQE) is deduced. Combined with related experimental results, this relationship fits well the trend of IQEs of bulk GaN changing with screw and edge dislocation density, while its simplified form fits well the IQEs of AlGaN multiple-quantum-well LEDs with varied threading dislocation densities but the same emission wavelength. It is believed that this model, suitable for different epitaxy platforms such as MOCVD and MBE, can be used to predict to what extent the luminous efficiency of GaN-based semiconductors can be maintained as the dislocation density increases, so as to provide a reasonable rule of thumb for optimizing the epitaxial growth of GaN-based devices.

  15. A Novel Scheme for an Energy Efficient Internet of Things Based on Wireless Sensor Networks.

    PubMed

    Rani, Shalli; Talwar, Rajneesh; Malhotra, Jyoteesh; Ahmed, Syed Hassan; Sarkar, Mahasweta; Song, Houbing

    2015-11-12

    One of the emerging networking standards that bridges the gap between the physical world and the cyber one is the Internet of Things (IoT). In the Internet of Things, smart objects communicate with each other, data are gathered, and certain requests of users are satisfied by different queried data. The development of energy-efficient schemes for the IoT is a challenging issue; as the IoT becomes more complex due to its large scale, current wireless sensor network techniques cannot be applied directly to it. To achieve a green networked IoT, this paper addresses energy efficiency issues by proposing a novel deployment scheme. This scheme introduces: (1) a hierarchical network design; (2) a model for the energy-efficient IoT; (3) a minimum energy consumption transmission algorithm to implement the optimal model. The simulation results show that the new scheme is more energy efficient and flexible than traditional WSN schemes, and consequently it can be implemented for efficient communication in the IoT.

  16. A Novel Scheme for an Energy Efficient Internet of Things Based on Wireless Sensor Networks

    PubMed Central

    Rani, Shalli; Talwar, Rajneesh; Malhotra, Jyoteesh; Ahmed, Syed Hassan; Sarkar, Mahasweta; Song, Houbing

    2015-01-01

    One of the emerging networking standards that bridges the gap between the physical world and the cyber one is the Internet of Things (IoT). In the Internet of Things, smart objects communicate with each other, data are gathered, and certain requests of users are satisfied by different queried data. The development of energy-efficient schemes for the IoT is a challenging issue; as the IoT becomes more complex due to its large scale, current wireless sensor network techniques cannot be applied directly to it. To achieve a green networked IoT, this paper addresses energy efficiency issues by proposing a novel deployment scheme. This scheme introduces: (1) a hierarchical network design; (2) a model for the energy-efficient IoT; (3) a minimum energy consumption transmission algorithm to implement the optimal model. The simulation results show that the new scheme is more energy efficient and flexible than traditional WSN schemes, and consequently it can be implemented for efficient communication in the IoT. PMID:26569260

  17. Nonlinear model-order reduction for compressible flow solvers using the Discrete Empirical Interpolation Method

    NASA Astrophysics Data System (ADS)

    Fosas de Pando, Miguel; Schmid, Peter J.; Sipp, Denis

    2016-11-01

    Nonlinear model reduction for large-scale flows is an essential component in many fluid applications such as flow control, optimization, parameter space exploration and statistical analysis. In this article, we generalize the POD-DEIM method, introduced by Chaturantabut & Sorensen [1], to address nonlocal nonlinearities in the equations without loss of performance or efficiency. The nonlinear terms are represented by nested DEIM approximations using multiple expansion bases based on the Proper Orthogonal Decomposition. These extensions are imperative, for example, for applications of the POD-DEIM method to large-scale compressible flows. The efficient implementation of the presented model-reduction technique follows our earlier work [2] on linearized and adjoint analyses and takes advantage of the modular structure of our compressible flow solver. The efficacy of the nonlinear model-reduction technique is demonstrated on the flow around an airfoil and its acoustic footprint. We obtain an accurate and robust low-dimensional model that captures the main features of the full flow.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdelaziz, Omar; Qu, Ming; Sun, Xiao-Guang

    Separate sensible and latent cooling systems offer superior energy efficiency compared to conventional vapor compression air conditioning systems. In this paper we describe an innovative non-vapor-compression system that uses an electrochemical compressor (ECC) to pump hydrogen between two metal hydride reservoirs to provide the sensible cooling effect. The heat rejected during this process is used to regenerate the ionic liquid (IL) used for desiccant dehumidification. The overall system design is illustrated. The Xergy version 4C electrochemical compressor, while not designed as a high-pressure system, develops in excess of 2 MPa (300 psia) and pressure ratios > 30. The projected base efficiency improvement of the electrochemical compressor is expected to be ~20%, with higher efficiency when in low-capacity mode due to being throttleable to lower capacity. The IL was tailored to maximize the absorption/desorption rate of water vapor at moderate regeneration temperature. This IL, namely [EMIm].OAc, is a hydrophilic IL with a working concentration range of 28.98% when operating between 25 and 75 C. The ECC metal hydride system is expected to show superior performance to typical vapor compression systems. As such, the combined efficiency gains from the use of ECC and separate sensible and latent cooling would offer significant potential savings over existing vapor compression cooling technology. A high-efficiency window air conditioner system based on this novel configuration is described, and its schematic is provided. Models compared well with actual operating data obtained by running the prototype system. Finally, a model of an LiCl desiccant system in conjunction with the ECC-based metal hydride heat exchangers is provided.

  19. How to Make Our Models More Physically-based

    NASA Astrophysics Data System (ADS)

    Savenije, H. H. G.

    2016-12-01

    Models that are generally called "physically-based" unfortunately only have a partial view of the physical processes at play in hydrology. Although the coupled partial differential equations in these models reflect the water balance equations and the flow descriptors at laboratory scale, they miss essential characteristics of what determines the functioning of catchments. The most important active agent in catchments is the ecosystem (and sometimes people). What these agents do is manipulate the substrate in a way that it supports the essential functions of survival and productivity: infiltration of water, retention of moisture, mobilization and retention of nutrients, and drainage. Ecosystems do this in the most efficient way, in agreement with the landscape, and in response to climatic drivers. In brief, our hydrological system is alive and has a strong capacity to adjust to prevailing and changing circumstances. Although most physically based models take Newtonian theory at heart, as best they can, what they generally miss is Darwinian thinking on how an ecosystem evolves and adjusts its environment to maintain crucial hydrological functions. If this active agent is not reflected in our models, then they miss essential physics. Through a Darwinian approach, we can determine the root zone storage capacity of ecosystems, as a crucial component of hydrological models, determining the partitioning of fluxes and the conservation of moisture to bridge periods of drought. Another crucial element of physical systems is the evolution of drainage patterns, both on and below the surface. On the surface, such patterns facilitate infiltration or surface drainage with minimal erosion; in the unsaturated zone, patterns facilitate efficient replenishment of moisture deficits and preferential drainage when there is excess moisture; in the groundwater, patterns facilitate the efficient and gradual drainage of groundwater, resulting in linear reservoir recession. Models that do not incorporate these patterns are not physical. The parameters in the equations may be adjusted to compensate for the lack of patterns, but this involves scale-dependent calibration. In contrast to what is widely believed, relatively simple conceptual models can accommodate these physical processes accurately and very efficiently.
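
    The "linear reservoir recession" invoked above corresponds to outflow proportional to storage, which yields exponential recession of baseflow once recharge stops; in generic notation:

```latex
% Linear reservoir: storage S drains at a rate proportional to itself, with
% timescale k, so discharge Q recedes exponentially from its initial value Q_0.
\frac{dS}{dt} \;=\; -\,\frac{S}{k}, \qquad
Q(t) \;=\; \frac{S(t)}{k} \;=\; Q_{0}\, e^{-t/k}
```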

  20. Mechanical Energy Harvesting Performance of Ferroelectric Polymer Nanowires Grown via Template‐Wetting

    PubMed Central

    Whiter, Richard A.; Boughey, Chess; Smith, Michael

    2018-01-01

    Nanowires of the ferroelectric co-polymer poly(vinylidenefluoride-co-trifluoroethylene) [P(VDF-TrFE)] are fabricated from solution within nanoporous templates of both "hard" anodic aluminium oxide (AAO) and "soft" polyimide (PI) through a facile and scalable template-wetting process. The confined geometry afforded by the pores of the templates leads directly to highly crystalline P(VDF-TrFE) nanowires in a macroscopic "poled" state that precludes the need for the external electrical poling procedure typically required for piezoelectric performance. The energy-harvesting performance of nanogenerators based on these template-grown nanowires is extensively studied and analyzed in combination with finite element modelling. Both experimental results and computational models probing the role of the templates in determining overall nanogenerator performance, including both materials and device efficiencies, are presented. It is found that although P(VDF-TrFE) nanowires grown in PI templates exhibit a lower material efficiency due to lower crystallinity compared to nanowires grown in AAO templates, the overall device efficiency is higher for the PI-template-based nanogenerator because of the lower stiffness of the PI template compared to the AAO template. This work provides a clear framework to assess the energy conversion efficiency of template-grown piezoelectric nanowires and paves the way towards the optimization of template-based nanogenerator devices.

  1. A structurally based analytic model for estimation of biomass and fuel loads of woodland trees

    Treesearch

    Robin J. Tausch

    2009-01-01

    Allometric/structural relationships in tree crowns are a consequence of the physical, physiological, and fluid conduction processes of trees, which control the distribution, efficient support, and growth of foliage in the crown. The structural consequences of these processes are used to develop an analytic model based on the concept of branch orders. A set of...

  2. Application of Ce3+ single-doped complexes as solar spectral downshifters for enhancing photoelectric conversion efficiencies of a-Si-based solar cells

    NASA Astrophysics Data System (ADS)

    Song, Pei; Jiang, Chun

    2013-05-01

    The effect of applying a solar spectral downshifter based on rare-earth Ce3+ single-doped complexes, including yttrium aluminum garnet Y3Al5O12 single crystals, nanostructured ceramics, microstructured ceramics and B2O3-SiO2-Gd2O3-BaO glass, on the photoelectric conversion efficiency of a-Si-based solar cells is studied. The photoluminescence excitation spectra in the region 360-460 nm convert effectively into photoluminescence emission spectra in the region 450-550 nm, where a-Si-based solar cells exhibit a higher spectral response. When these Ce3+ single-doped complexes are placed on top of an a-Si-based solar cell as precursors for solar spectral downshifting, the theoretical relative photoelectric conversion efficiencies of nc-Si:H and a-Si:H solar cells approach 1.09-1.13 and 1.04-1.07, respectively, according to AMPS-1D numerical modeling, indicating a potential photoelectric efficiency improvement for a-Si-based solar cells.

  3. Analytical model for effects of capsule shape on the healing efficiency in self-healing materials

    PubMed Central

    Li, Songpeng; Chen, Huisu

    2017-01-01

    The fundamental requirement for the autonomous capsule-based self-healing process to work is that cracks need to reach the capsules and break them such that the healing agent can be released. Ignoring all other aspects, the amount of healing agent released into the crack is essential to obtaining good healing. Meanwhile, from the perspective of capsule shape, spherical or elongated capsules (hollow tubes/fibres) are the main morphologies used in capsule-based self-healing materials. The focus of this contribution is the description of the effects of capsule shape on the efficiency of healing-agent release in capsule-based self-healing material within the framework of the theory of geometrical probability and integral geometry. Analytical models are developed to characterize the amount of healing agent released per crack area from capsules for an arbitrary crack intersecting with capsules of various shapes in a virtual capsule-based self-healing material. The average crack opening distance is chosen as a key parameter in defining the healing potential of individual cracks in the models. Furthermore, the accuracy of the developed models was verified by comparison to data from a published numerical simulation study. PMID:29095862

  4. The usability of the optical parametric amplification of light for high-angular-resolution imaging and fast astrometry

    NASA Astrophysics Data System (ADS)

    Kurek, A. R.; Stachowski, A.; Banaszek, K.; Pollo, A.

    2018-05-01

    High-angular-resolution imaging is crucial for many applications in modern astronomy and astrophysics. The fundamental diffraction limit constrains the resolving power of both ground-based and spaceborne telescopes. The recent idea of a quantum telescope based on the optical parametric amplification (OPA) of light aims to bypass this limit for the imaging of extended sources by an order of magnitude or more. We present an updated scheme of an OPA-based device and a more accurate model of the signal amplification by such a device. The semiclassical model that we present predicts that the noise in such a system will form so-called light speckles as a result of light interference in the optical path. Based on this model, we analysed the efficiency of OPA in increasing the angular resolution of the imaging of extended targets and the precise localization of a distant point source. According to our new model, OPA offers a gain in resolved imaging in comparison to classical optics. For a given time-span, we found that OPA can be more efficient in localizing a single distant point source than classical telescopes.

  5. High power diode laser Master Oscillator-Power Amplifier (MOPA)

    NASA Technical Reports Server (NTRS)

    Andrews, John R.; Mouroulis, P.; Wicks, G.

    1994-01-01

    High power multiple quantum well AlGaAs diode laser master oscillator - power amplifier (MOPA) systems were examined both experimentally and theoretically. For two pass operation, it was found that powers in excess of 0.3 W per 100 micrometers of facet length were achievable while maintaining diffraction-limited beam quality. Internal electrical-to-optical conversion efficiencies as high as 25 percent were observed at an internal amplifier gain of 9 dB. Theoretical modeling of multiple quantum well amplifiers was done using appropriate rate equations and a heuristic model of the carrier density dependent gain. The model gave a qualitative agreement with the experimental results. In addition, the model allowed exploration of a wider design space for the amplifiers. The model predicted that internal electrical-to-optical conversion efficiencies in excess of 50 percent should be achievable with careful system design. The model predicted that no global optimum design exists, but gain, efficiency, and optical confinement (coupling efficiency) can be mutually adjusted to meet a specific system requirement. A three quantum well, low optical confinement amplifier was fabricated using molecular beam epitaxial growth. Coherent beam combining of two high power amplifiers injected from a common master oscillator was also examined. Coherent beam combining with an efficiency of 93 percent resulted in a single beam having diffraction-limited characteristics. This beam combining efficiency is a world record result for such a system. Interferometric observations of the output of the amplifier indicated that spatial mode matching was a significant factor in the less than perfect beam combining. Finally, the system issues of arrays of amplifiers in a coherent beam combining system were investigated. Based upon experimentally observed parameters coherent beam combining could result in a megawatt-scale coherent beam with a 10 percent electrical-to-optical conversion efficiency.

  6. Development and Implementation of Efficiency-Improving Analysis Methods for the SAGE III on ISS Thermal Model

    NASA Technical Reports Server (NTRS)

    Liles, Kaitlin; Amundsen, Ruth; Davis, Warren; Scola, Salvatore; Tobin, Steven; McLeod, Shawn; Mannu, Sergio; Guglielmo, Corrado; Moeller, Timothy

    2013-01-01

    The Stratospheric Aerosol and Gas Experiment III (SAGE III) instrument is the fifth in a series of instruments developed for monitoring aerosols and gaseous constituents in the stratosphere and troposphere. SAGE III will be delivered to the International Space Station (ISS) via the SpaceX Dragon vehicle in 2015. A detailed thermal model of the SAGE III payload has been developed in Thermal Desktop (TD). Several novel methods have been implemented to facilitate efficient payload-level thermal analysis, including the use of a design of experiments (DOE) methodology to determine the worst-case orbits for SAGE III while on ISS, use of TD assemblies to move payloads from the Dragon trunk to the Enhanced Operational Transfer Platform (EOTP) to its final home on the Expedite the Processing of Experiments to Space Station (ExPRESS) Logistics Carrier (ELC)-4, incorporation of older models in varying unit sets, ability to change units easily (including hardcoded logic blocks), case-based logic to facilitate activating heaters and active elements for varying scenarios within a single model, incorporation of several coordinate frames to easily map to structural models with differing geometries and locations, and streamlined results processing using an Excel-based text file plotter developed in-house at LaRC. This document presents an overview of the SAGE III thermal model and describes the development and implementation of these efficiency-improving analysis methods.

  7. Space-based laser-driven MHD generator: Feasibility study

    NASA Technical Reports Server (NTRS)

    Choi, S. H.

    1986-01-01

    The feasibility of a laser-driven MHD generator, as a candidate receiver for a space-based laser power transmission system, was investigated. On the basis of reasonable parameters obtained in the literature, a model of the laser-driven MHD generator was developed with the assumptions of a steady, turbulent, two-dimensional flow. These assumptions were based on the continuous and steady generation of plasmas by the exposure of the continuous wave laser beam thus inducing a steady back pressure that enables the medium to flow steadily. The model considered here took the turbulent nature of plasmas into account in the two-dimensional geometry of the generator. For these conditions with the plasma parameters defining the thermal conductivity, viscosity, electrical conductivity for the plasma flow, a generator efficiency of 53.3% was calculated. If turbulent effects and nonequilibrium ionization are taken into account, the efficiency is 43.2%. The study shows that the laser-driven MHD system has potential as a laser power receiver for space applications because of its high energy conversion efficiency, high energy density and relatively simple mechanism as compared to other energy conversion cycles.

  8. Mars Propellant Liquefaction and Storage Performance Modeling using Thermal Desktop with an Integrated Cryocooler Model

    NASA Technical Reports Server (NTRS)

    Desai, Pooja; Hauser, Dan; Sutherlin, Steven

    2017-01-01

    NASA's current Mars architectures assume the production and storage of 23 tons of liquid oxygen on the surface of Mars over a duration of 500+ days. In order to do this in a mass-efficient manner, an energy-efficient refrigeration system will be required. Based on previous analysis, NASA has decided to do all liquefaction in the propulsion vehicle storage tanks. In order to allow for transient Martian environmental effects, a propellant liquefaction and storage system for a Mars Ascent Vehicle (MAV) was modeled using Thermal Desktop. The model consisted of a propellant tank containing a broad area cooling loop heat exchanger integrated with a reverse turbo-Brayton cryocooler. Cryocooler sizing and performance modeling were conducted using MAV diurnal heat loads and radiator rejection temperatures predicted from a previous thermal model of the MAV. A system was also sized and modeled using an alternative heat rejection system that relies on a forced convection heat exchanger. Cryocooler mass, input power, and heat rejection for both systems were estimated and compared against sizing based on non-transient estimates.

  9. Flexible Language Constructs for Large Parallel Programs

    DOE PAGES

    Rosing, Matt; Schnabel, Robert

    1994-01-01

    The goal of the research described in this article is to develop flexible language constructs for writing large data parallel numerical programs for distributed memory (multiple instruction multiple data [MIMD]) multiprocessors. Previously, several models have been developed to support synchronization and communication. Models for global synchronization include single instruction multiple data (SIMD), single program multiple data (SPMD), and sequential programs annotated with data distribution statements. The two primary models for communication include implicit communication based on shared memory and explicit communication based on messages. None of these models by themselves seem sufficient to permit the natural and efficient expression of the variety of algorithms that occur in large scientific computations. In this article, we give an overview of a new language that combines many of these programming models in a clean manner. This is done in a modular fashion such that different models can be combined to support large programs. Within a module, the selection of a model depends on the algorithm and its efficiency requirements. In this article, we give an overview of the language and discuss some of the critical implementation details.

  10. STEPS: efficient simulation of stochastic reaction-diffusion models in realistic morphologies.

    PubMed

    Hepburn, Iain; Chen, Weiliang; Wils, Stefan; De Schutter, Erik

    2012-05-10

    Models of cellular molecular systems are built from components such as biochemical reactions (including interactions between ligands and membrane-bound proteins), conformational changes and active and passive transport. A discrete, stochastic description of the kinetics is often essential to capture the behavior of the system accurately. Where spatial effects play a prominent role the complex morphology of cells may have to be represented, along with aspects such as chemical localization and diffusion. This high level of detail makes efficiency a particularly important consideration for software that is designed to simulate such systems. We describe STEPS, a stochastic reaction-diffusion simulator developed with an emphasis on simulating biochemical signaling pathways accurately and efficiently. STEPS supports all the above-mentioned features, and well-validated support for SBML allows many existing biochemical models to be imported reliably. Complex boundaries can be represented accurately in externally generated 3D tetrahedral meshes imported by STEPS. The powerful Python interface facilitates model construction and simulation control. STEPS implements the composition and rejection method, a variation of the Gillespie SSA, supporting diffusion between tetrahedral elements within an efficient search and update engine. Additional support for well-mixed conditions and for deterministic model solution is implemented. Solver accuracy is confirmed with an original and extensive validation set consisting of isolated reaction, diffusion and reaction-diffusion systems. Accuracy imposes upper and lower limits on tetrahedron sizes, which are described in detail. By comparing to Smoldyn, we show how the voxel-based approach in STEPS is often faster than particle-based methods, with increasing advantage in larger systems, and by comparing to MesoRD we show the efficiency of the STEPS implementation. STEPS simulates models of cellular reaction-diffusion systems with complex boundaries with high accuracy and high performance in C/C++, controlled by a powerful and user-friendly Python interface. STEPS is free for use and is available at http://steps.sourceforge.net/
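
    To illustrate the class of stochastic kinetics such simulators solve, a minimal direct-method Gillespie SSA for a single well-mixed reaction is sketched below. STEPS itself uses the faster composition-and-rejection variant and adds diffusion across tetrahedral elements; the rates and molecule counts here are illustrative assumptions.

```python
# A minimal direct-method Gillespie SSA for the reaction A + B -> C in a
# well-mixed volume. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(2)
k = 0.001                                       # stochastic rate constant
A, B, C, t = 100, 80, 0, 0.0

while t < 10.0:
    a = k * A * B                               # propensity of A + B -> C
    if a == 0:
        break                                   # no reactants left
    t += rng.exponential(1.0 / a)               # time to next reaction event
    A, B, C = A - 1, B - 1, C + 1               # fire the reaction
print(f"t={t:.2f}: A={A}, B={B}, C={C}")
```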

  11. STEPS: efficient simulation of stochastic reaction–diffusion models in realistic morphologies

    PubMed Central

    2012-01-01

    Background Models of cellular molecular systems are built from components such as biochemical reactions (including interactions between ligands and membrane-bound proteins), conformational changes and active and passive transport. A discrete, stochastic description of the kinetics is often essential to capture the behavior of the system accurately. Where spatial effects play a prominent role, the complex morphology of cells may have to be represented, along with aspects such as chemical localization and diffusion. This high level of detail makes efficiency a particularly important consideration for software that is designed to simulate such systems. Results We describe STEPS, a stochastic reaction–diffusion simulator developed with an emphasis on simulating biochemical signaling pathways accurately and efficiently. STEPS supports all the above-mentioned features, and well-validated support for SBML allows many existing biochemical models to be imported reliably. Complex boundaries can be represented accurately in externally generated 3D tetrahedral meshes imported by STEPS. The powerful Python interface facilitates model construction and simulation control. STEPS implements the composition and rejection method, a variation of the Gillespie SSA, supporting diffusion between tetrahedral elements within an efficient search and update engine. Additional support for well-mixed conditions and for deterministic model solution is implemented. Solver accuracy is confirmed with an original and extensive validation set consisting of isolated reaction, diffusion and reaction–diffusion systems. Accuracy imposes upper and lower limits on tetrahedron sizes, which are described in detail. By comparing to Smoldyn, we show how the voxel-based approach in STEPS is often faster than particle-based methods, with increasing advantage in larger systems, and by comparing to MesoRD we show the efficiency of the STEPS implementation. Conclusion STEPS simulates models of cellular reaction–diffusion systems with complex boundaries with high accuracy and high performance in C/C++, controlled by a powerful and user-friendly Python interface. STEPS is free for use and is available at http://steps.sourceforge.net/ PMID:22574658

  12. Hysteresis Analysis Based on the Ferroelectric Effect in Hybrid Perovskite Solar Cells.

    PubMed

    Wei, Jing; Zhao, Yicheng; Li, Heng; Li, Guobao; Pan, Jinlong; Xu, Dongsheng; Zhao, Qing; Yu, Dapeng

    2014-11-06

    The power conversion efficiency (PCE) of CH3NH3PbX3 (X = I, Br, Cl) perovskite solar cells has risen rapidly from 6.5% to 18% within 3 years. However, the anomalous hysteresis found in I-V measurements can cause an inaccurate estimation of the efficiency. We attribute this phenomenon to the ferroelectric effect and build a model based on the ferroelectric diode to explain it. The ferroelectric effect in CH3NH3PbI3-xClx is strongly suggested by characterization methods and the E-P (electric field-polarization) loop. The hysteresis in the I-V curves is found to depend strongly on the scan range as well as the scan rate, which is well explained by the ferroelectric diode model. We also find that the current signals decay exponentially within ∼10 s under prolonged stepwise measurements, and the anomalous hysteresis disappears when these stabilized current values are used. The experimental results accord well with the model based on ferroelectric properties and show that prolonged stepwise measurement is an effective way to evaluate the real efficiency of perovskite solar cells. Most importantly, this work provides a meaningful perspective: the ferroelectric effect (if it really exists) should be paid special attention in the optimization of perovskite solar cells.
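
    The prolonged stepwise measurement protocol suggests a simple post-processing step; the hedged sketch below fits the reported exponential decay I(t) = I_ss + ΔI·exp(−t/τ) to recover the stabilized current. The data, units, and initial guesses are invented, not taken from the paper.

    ```python
    # Sketch: extract the stabilized current from a stepwise measurement
    # by fitting the exponential decay I(t) = i_ss + d_i * exp(-t / tau)
    # described in the abstract; the data here are synthetic.
    import numpy as np
    from scipy.optimize import curve_fit

    def decay(t, i_ss, d_i, tau):
        # Stabilized current i_ss plus a transient that decays with tau.
        return i_ss + d_i * np.exp(-t / tau)

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 30.0, 200)                  # seconds
    i_meas = decay(t, 18.0, 4.0, 10.0) + rng.normal(0.0, 0.05, t.size)

    popt, _ = curve_fit(decay, t, i_meas, p0=(15.0, 5.0, 5.0))
    i_ss, _, tau = popt
    print(f"stabilized current ~ {i_ss:.2f} (a.u.), tau ~ {tau:.1f} s")
    ```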

  13. Novel electrical energy storage system based on reversible solid oxide cells: System design and operating conditions

    NASA Astrophysics Data System (ADS)

    Wendel, C. H.; Kazempoor, P.; Braun, R. J.

    2015-02-01

    Electrical energy storage (EES) is an important component of the future electric grid. Given that no other widely available technology meets all the EES requirements, reversible (or regenerative) solid oxide cells (ReSOCs) working in both fuel cell (power-producing) and electrolysis (fuel-producing) modes are envisioned as a technology capable of providing highly efficient and cost-effective EES. However, many challenges and questions, from cell materials development to system-level operation of ReSOCs, should still be addressed before widespread application. This paper presents a novel system based on ReSOCs that employs a thermal management strategy of promoting exothermic methanation within the ReSOC cell-stack to provide thermal energy for the endothermic steam/CO2 electrolysis reactions during the charging (fuel-producing) mode. This approach also serves to enhance the energy density of the stored gases. Modeling and parametric analysis of an energy storage concept are performed using a physically based ReSOC stack model coupled with thermodynamic system component models. Results indicate that roundtrip efficiencies greater than 70% can be achieved at intermediate stack temperature (680 °C) and elevated stack pressure (20 bar). The optimal operating condition arises from a tradeoff between stack efficiency and auxiliary power requirements from balance-of-plant hardware.
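
    As a back-of-the-envelope companion to the reported >70% figure, the sketch below shows the roundtrip-efficiency bookkeeping implied by the abstract: discharge energy delivered over charge energy consumed, with balance-of-plant (BOP) loads counted in both modes. All energy values are assumed, not taken from the paper.

    ```python
    # Toy roundtrip-efficiency bookkeeping for one storage cycle;
    # all energies (kWh) are assumed values, for illustration only.
    def roundtrip_efficiency(e_charge_stack, e_charge_bop,
                             e_discharge_stack, e_discharge_bop):
        e_in = e_charge_stack + e_charge_bop          # drawn while charging
        e_out = e_discharge_stack - e_discharge_bop   # delivered net of BOP
        return e_out / e_in

    eta = roundtrip_efficiency(100.0, 8.0, 85.0, 4.0)
    print(f"roundtrip efficiency = {eta:.1%}")        # 75.0% with these numbers
    ```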

  14. Algebraic model checking for Boolean gene regulatory networks.

    PubMed

    Tran, Quoc-Nam

    2011-01-01

    We present a computational method in which modular and Groebner basis (GB) computations in Boolean rings are used for solving problems in Boolean gene regulatory networks (BNs). In contrast to other known algebraic approaches, the degree of the intermediate polynomials during the calculation of Groebner bases with our method never grows, resulting in a significant improvement in running time and memory consumption. We also show how calculation in temporal logic for model checking can be done by means of our direct and efficient Groebner basis computation in Boolean rings. We present experimental results in finding attractors and control strategies of Boolean networks to illustrate our theoretical arguments. The results are promising: our algebraic approach is more efficient than the state-of-the-art model checker NuSMV on BNs. More importantly, our approach finds all solutions for the BN problems.
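
    For readers who want the attractor-finding problem made concrete, the sketch below finds the attractors of a tiny synchronous Boolean network by brute-force state enumeration; this is not the paper's Groebner-basis method (which scales far better), and the 3-gene update rules are an invented example.

    ```python
    # Brute-force attractor search for a tiny synchronous Boolean
    # network; the update rules below are assumed, not from the paper.
    from itertools import product

    def step(state):
        a, b, c = state
        return (b and not c,      # a' = b AND NOT c
                a or c,           # b' = a OR c
                a)                # c' = a

    attractors = set()
    for init in product([False, True], repeat=3):
        seen, s = [], init
        while s not in seen:      # iterate until a state repeats
            seen.append(s)
            s = step(s)
        cycle = tuple(seen[seen.index(s):])   # the repeating cycle
        attractors.add(frozenset(cycle))

    for att in attractors:
        print(sorted(tuple(int(x) for x in st) for st in att))
    ```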

  15. Prediction of Slot Shape and Slot Size for Improving the Performance of Microstrip Antennas Using Knowledge-Based Neural Networks.

    PubMed

    Khan, Taimoor; De, Asok

    2014-01-01

    In the last decade, artificial neural networks have become very popular techniques for computing different performance parameters of microstrip antennas. The proposed work illustrates a knowledge-based neural network model for predicting the appropriate shape and accurate size of the slot introduced on the radiating patch to achieve the desired level of resonance, gain, directivity, antenna efficiency, and radiation efficiency for dual-frequency operation. By incorporating prior knowledge in the neural model, the number of required training patterns is drastically reduced. Further, the neural model incorporating prior knowledge can be used for predicting the response in the extrapolation region beyond the training patterns. For validation, a prototype is also fabricated and its performance parameters are measured. Very good agreement is attained between measured, simulated, and predicted results.
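
    One common way to realize this prior-knowledge idea is the difference method, where an approximate analytical model supplies most of the mapping and a small network learns only the residual correction; the sketch below illustrates that scheme with an invented prior and synthetic data, not the paper's actual antenna model.

    ```python
    # Sketch of a knowledge-based (difference-method) neural model:
    # a coarse analytical prior does most of the work and a small net
    # learns only the correction, so fewer training patterns suffice.
    # The prior, units, and data below are assumed for illustration.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def prior_model(slot_len):
        # Crude physics-inspired prior: resonant frequency (GHz) falls
        # as the slot (mm) lengthens. Assumed analytical form.
        return 5.0 - 0.08 * slot_len

    rng = np.random.default_rng(2)
    slot = rng.uniform(2, 20, (40, 1))                       # slot lengths
    f_true = prior_model(slot[:, 0]) + 0.3 * np.sin(slot[:, 0])

    # Train the network on the residual only.
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                       random_state=0)
    net.fit(slot, f_true - prior_model(slot[:, 0]))

    query = np.array([[12.5]])
    f_pred = prior_model(query[0, 0]) + net.predict(query)[0]
    print(f"predicted resonance ~ {f_pred:.2f} GHz")
    ```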

  16. Prediction of Slot Shape and Slot Size for Improving the Performance of Microstrip Antennas Using Knowledge-Based Neural Networks

    PubMed Central

    Khan, Taimoor; De, Asok

    2014-01-01

    In the last decade, artificial neural networks have become very popular techniques for computing different performance parameters of microstrip antennas. The proposed work illustrates a knowledge-based neural network model for predicting the appropriate shape and accurate size of the slot introduced on the radiating patch to achieve the desired level of resonance, gain, directivity, antenna efficiency, and radiation efficiency for dual-frequency operation. By incorporating prior knowledge in the neural model, the number of required training patterns is drastically reduced. Further, the neural model incorporating prior knowledge can be used for predicting the response in the extrapolation region beyond the training patterns. For validation, a prototype is also fabricated and its performance parameters are measured. Very good agreement is attained between measured, simulated, and predicted results. PMID:27382616

  17. Flapping wing applied to wind generators

    NASA Astrophysics Data System (ADS)

    Colidiuc, Alexandra; Galetuse, Stelian; Suatean, Bogdan

    2012-11-01

    Changing international conditions for the distribution of energy sources and continuously increasing energy consumption call for alternative resources that keep the environment clean. This paper offers a new approach to a wind generator based on a theoretical aerodynamic model. The new model is used to test how performance changes when a bird airfoil replaces a conventional wind-generator airfoil, with the aim of calculating the efficiency of the new design. A representative way of using renewable energy is the transformation of wind energy into electrical energy with the help of wind turbines; the development of such systems leads to new solutions offering high efficiency, reduced costs, and suitability for the implementation conditions.
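
    The efficiency calculation referred to here normally reduces to the power coefficient Cp = P / (0.5·ρ·A·V³), bounded above by the Betz limit of roughly 0.593; the sketch below computes it for assumed values, not the paper's measurements.

    ```python
    # Sketch: wind-generator efficiency as the power coefficient
    # Cp = P / (0.5 * rho * A * V**3). All inputs are assumed values.
    def power_coefficient(p_out, rho, area, v_wind):
        p_wind = 0.5 * rho * area * v_wind**3   # kinetic power in the stream
        return p_out / p_wind

    cp = power_coefficient(p_out=2500.0,   # electrical output, W
                           rho=1.225,      # air density, kg/m^3
                           area=12.0,      # swept area, m^2
                           v_wind=9.0)     # wind speed, m/s
    print(f"Cp = {cp:.3f} (Betz limit ~ 0.593)")
    ```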

  18. Fuzzy model-based fault detection and diagnosis for a pilot heat exchanger

    NASA Astrophysics Data System (ADS)

    Habbi, Hacene; Kidouche, Madjid; Kinnaert, Michel; Zelmat, Mimoun

    2011-04-01

    This article addresses the design and real-time implementation of a fuzzy model-based fault detection and diagnosis (FDD) system for a pilot co-current heat exchanger. The design method is based on a three-step procedure which involves the identification of data-driven fuzzy rule-based models, the design of a fuzzy residual generator and the evaluation of the residuals for fault diagnosis using statistical tests. The fuzzy FDD mechanism has been implemented and validated on the real co-current heat exchanger, and has been proven to be efficient in detecting and isolating process, sensor and actuator faults.
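
    The third step of the procedure, statistical residual evaluation, can be illustrated with a simple sketch: compare measurements against a model prediction and raise an alarm when the windowed residual mean drifts beyond a threshold. The plant "model", signal, and injected fault below are placeholders, not the pilot heat exchanger's.

    ```python
    # Sketch of the residual-evaluation step of model-based FDD: flag
    # a fault when the windowed residual mean exceeds a statistical
    # threshold derived from an assumed fault-free startup period.
    import numpy as np

    def detect_fault(measured, predicted, window=20, n_sigma=3.0):
        residual = measured - predicted
        baseline = residual[:window]             # assumed fault-free start
        mu, sigma = baseline.mean(), baseline.std(ddof=1)
        alarms = []
        for k in range(window, residual.size):
            win_mean = residual[k - window + 1:k + 1].mean()
            # The mean of `window` samples has std sigma / sqrt(window).
            if abs(win_mean - mu) > n_sigma * sigma / np.sqrt(window):
                alarms.append(k)
        return alarms

    rng = np.random.default_rng(3)
    pred = np.full(200, 60.0)                    # predicted outlet temp, degC
    meas = pred + rng.normal(0.0, 0.2, 200)
    meas[120:] += 1.5                            # injected sensor fault
    print("first alarm at sample", detect_fault(meas, pred)[0])
    ```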

  19. Self-optimisation and model-based design of experiments for developing a C-H activation flow process.

    PubMed

    Echtermeyer, Alexander; Amar, Yehia; Zakrzewski, Jacek; Lapkin, Alexei

    2017-01-01

    A recently described C(sp3)–H activation reaction to synthesise aziridines was used as a model reaction to demonstrate the methodology of developing a process model using model-based design of experiments (MBDoE) and self-optimisation approaches in flow. The two approaches are compared in terms of experimental efficiency. The self-optimisation approach required the fewest experiments to reach the specified objectives of cost and product yield, whereas the MBDoE approach enabled rapid generation of a process model.
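
    As a hedged illustration of the self-optimisation idea, the sketch below closes the loop between an optimizer and a (here simulated) flow experiment; Nelder-Mead stands in for whichever algorithm the study actually used, and the objective surface combining cost and yield is invented.

    ```python
    # Sketch of a self-optimisation loop: an optimizer proposes
    # reaction conditions, a simulated "flow experiment" returns an
    # objective combining cost and yield, and the loop repeats.
    # The objective surface below is invented for illustration.
    import numpy as np
    from scipy.optimize import minimize

    def run_flow_experiment(x):
        temp, res_time = x                     # degC, minutes (assumed)
        yield_frac = np.exp(-((temp - 120) / 25)**2
                            - ((res_time - 8) / 4)**2)
        cost = 0.01 * temp + 0.05 * res_time   # toy operating cost
        return cost - yield_frac               # minimize cost, maximize yield

    result = minimize(run_flow_experiment, x0=[100.0, 5.0],
                      method="Nelder-Mead",
                      options={"xatol": 0.5, "fatol": 1e-3})
    print("best conditions:", result.x, "objective:", result.fun)
    ```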

  20. A high-resolution physically-based global flood hazard map

    NASA Astrophysics Data System (ADS)

    Kaheil, Y.; Begnudelli, L.; McCollum, J.

    2016-12-01

    We present the results from a physically-based global flood hazard model. The model uses a physically-based hydrologic model to simulate river discharges and a 2D hydrodynamic model to simulate inundation, and is set up to allow large-scale flood hazard application through efficient use of parallel computing. For hydrology, we use the Hillslope River Routing (HRR) model, which accounts for surface hydrology using Green-Ampt parameterization. The model is calibrated against observed discharge data from the Global Runoff Data Centre (GRDC) network, among other publicly-available datasets. The parallel-computing framework takes advantage of the river network structure to minimize cross-processor messages, and thus significantly increases computational efficiency. For inundation, we implemented a computationally-efficient 2D finite-volume model with wetting/drying. The approach consists of simulating floods along the river network by forcing the hydraulic model with the streamflow hydrographs simulated by HRR, scaled up to certain return levels, e.g. 100 years. The model is distributed such that each available processor takes the next simulation. Given an approximate cost criterion, the simulations are ordered from most-demanding to least-demanding to ensure that all processors finish almost simultaneously. Upon completion of all simulations, the maximum envelope of flood depth is taken to generate the final map. The model is applied globally, with selected results shown from different continents and regions. The maps depict flood depth and extent at different return periods. These maps, currently available at 3 arc-sec resolution (~90 m), can be produced at higher resolutions where high-resolution DEMs are available. The maps can be utilized by flood risk managers at the national, regional, and even local levels to further understand their flood risk exposure, exercise certain measures of mitigation, and/or transfer the residual risk financially through flood insurance programs.
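
    The Green-Ampt parameterization mentioned for HRR involves solving an implicit equation for cumulative infiltration; the sketch below does so by fixed-point iteration, with soil parameters assumed for illustration rather than taken from the model's calibration.

    ```python
    # Sketch of the Green-Ampt infiltration calculation used in
    # hydrologic models like HRR: solve the implicit equation
    #   F = K*t + psi*dtheta*ln(1 + F / (psi*dtheta))
    # for cumulative infiltration F by fixed-point iteration.
    # Parameter values are assumed (roughly a silty soil).
    import math

    def green_ampt_F(t, K=0.65, psi=16.7, dtheta=0.3, tol=1e-8):
        # t in hours, K in cm/h, psi (wetting-front suction) in cm.
        pd = psi * dtheta
        F = K * t if K * t > 0 else 1e-6      # initial guess
        while True:
            F_new = K * t + pd * math.log(1.0 + F / pd)
            if abs(F_new - F) < tol:
                return F_new
            F = F_new

    F = green_ampt_F(t=2.0)
    f_rate = 0.65 * (1.0 + (16.7 * 0.3) / F)  # rate f = K * (1 + pd / F)
    print(f"F(2 h) = {F:.2f} cm, infiltration rate = {f_rate:.2f} cm/h")
    ```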
