Sample records for high efficiency variable

  1. Phytoplankton plasticity drives large variability in carbon fixation efficiency

    NASA Astrophysics Data System (ADS)

    Ayata, Sakina-Dorothée; Lévy, Marina; Aumont, Olivier; Resplandy, Laure; Tagliabue, Alessandro; Sciandra, Antoine; Bernard, Olivier

    2014-12-01

    Phytoplankton C:N stoichiometry is highly flexible due to physiological plasticity, which could lead to large variations in carbon fixation efficiency (carbon consumption relative to nitrogen). However, the magnitude of this variability, as well as its spatial and temporal scales, remains poorly constrained. We used a high-resolution biogeochemical model resolving a wide range of spatial and temporal scales to quantify and better understand this variability. We find that the phytoplankton C:N ratio is highly variable at all spatial and temporal scales (5-12 molC/molN), from the mesoscale to the regional scale, and is mainly driven by nitrogen supply. Carbon fixation efficiency varies accordingly at all scales (±30%), with higher values under oligotrophic conditions and lower values under eutrophic conditions. Hence, phytoplankton plasticity may act as a buffer, attenuating variability in carbon sequestration. Our results have implications for in situ estimation of C:N ratios and for future predictions in a high-CO2 world.

  2. Determinants of energy efficiency across countries

    NASA Astrophysics Data System (ADS)

    Yao, Guolin

    With economic development, environmental concerns become more important. Economies cannot develop without energy consumption, which is the major source of greenhouse gas emissions. Higher energy efficiency is one means of reducing emissions, but what determines energy efficiency? In this research we attempt to answer this question using cross-sectional country data; that is, we examine a wide range of possible determinants of energy efficiency at the country level in an attempt to find the most important causal factors. All countries are divided into three income groups: high-income, middle-income, and low-income. Energy intensity is used as a measure of energy efficiency. The independent variables fall into two categories: quantitative and qualitative. Quantitative variables measure economic conditions, development indicators, and energy usage; qualitative variables mainly measure the political, societal, and economic strengths of a country. The three income groups have different economic and energy attributes, and each group has a different set of variables explaining energy efficiency. Energy prices and winter temperature are important in both high-income and middle-income countries. No qualitative variables appear in the model for high-income countries. Basic economic factors, such as institutions, political stability, urbanization level, and population density, are important in low-income countries. Besides similar variables, such as macroeconomic stability and an index of the rule of law, the share of hydroelectricity in total electricity generation is also a driver of energy efficiency in middle-income countries. These variables have different policy implications for each group of countries.
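
    The record's efficiency measure can be made concrete: energy intensity is simply energy use per unit of GDP, with lower values indicating higher efficiency. A minimal sketch, grouping countries by income level; all country figures below are hypothetical placeholders, not data from the study.

```python
# Energy intensity as an (inverse) efficiency proxy: energy use per unit GDP.
# The country figures are hypothetical, not taken from the study.

def energy_intensity(energy_use_mtoe, gdp_billion_usd):
    """Return Mtoe per billion USD of GDP (lower = more energy-efficient)."""
    return energy_use_mtoe / gdp_billion_usd

countries = {
    # name: (energy use in Mtoe, GDP in billion USD, income group)
    "A": (200.0, 4000.0, "high"),
    "B": (150.0, 1000.0, "middle"),
    "C": (30.0, 100.0, "low"),
}

by_group = {}
for name, (energy, gdp, group) in countries.items():
    by_group.setdefault(group, []).append(energy_intensity(energy, gdp))

for group, vals in sorted(by_group.items()):
    print(group, round(sum(vals) / len(vals), 3))
```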

  3. High-efficiency Gaussian key reconciliation in continuous variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Bai, ZengLiang; Wang, XuYang; Yang, ShenShen; Li, YongMin

    2016-01-01

    Efficient reconciliation is a crucial step in continuous variable quantum key distribution. The progressive-edge-growth (PEG) algorithm is an efficient method to construct relatively short block length low-density parity-check (LDPC) codes. The quasi-cyclic construction method can extend short block length codes and further eliminate the shortest cycle. In this paper, by combining the PEG algorithm and the quasi-cyclic construction method, we design long block length irregular LDPC codes with high error-correcting capacity. Based on these LDPC codes, we achieve high-efficiency Gaussian key reconciliation with slice reconciliation based on multilevel coding/multistage decoding, with an efficiency of 93.7%.
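
    Reconciliation efficiency figures like the 93.7% quoted here are conventionally defined as the ratio of the code rate actually achieved to the Shannon capacity of the Gaussian channel at the operating signal-to-noise ratio. A minimal sketch assuming that definition; the 0.4685 code rate is an illustrative value, not one from the paper.

```python
import math

def gaussian_capacity(snr):
    """Shannon capacity (bits per channel use) of an AWGN channel at a given SNR."""
    return 0.5 * math.log2(1.0 + snr)

def reconciliation_efficiency(code_rate, snr):
    """beta = R / C: fraction of the channel capacity the code actually extracts."""
    return code_rate / gaussian_capacity(snr)

# Illustrative: a rate-0.4685 code at SNR = 1, where the capacity is 0.5 bit/use.
print(round(reconciliation_efficiency(0.4685, 1.0), 3))  # → 0.937
```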

  4. Effect of solar loading on greenhouse containers used in transpiration efficiency screening

    USDA-ARS?s Scientific Manuscript database

    Earlier we described a simple high throughput method of screening sorghum for transpiration efficiency (TE). Subsequently it was observed that while results were consistent between lines exhibiting high and low TE, ranking between lines with similar TE was variable. We hypothesized that variable mic...

  5. High-efficiency reconciliation for continuous variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Bai, Zengliang; Yang, Shenshen; Li, Yongmin

    2017-04-01

    Quantum key distribution (QKD) is the most mature application of quantum information technology. Information reconciliation is a crucial step in QKD and significantly affects the final secret key rates shared between the two legitimate parties. We analyze and compare various construction methods of low-density parity-check (LDPC) codes and design high-performance irregular LDPC codes with a block length of 10^6. Starting from these good codes and exploiting the slice reconciliation technique based on multilevel coding and multistage decoding, we realize high-efficiency Gaussian key reconciliation with efficiency higher than 95% for signal-to-noise ratios above 1. Our demonstrated method can be readily applied in continuous variable QKD.
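
    Slice reconciliation, as used in this record, first quantizes each continuous Gaussian sample into a few bits ("slices"), and each bit level is then corrected by its own LDPC code (multilevel coding / multistage decoding). A minimal sketch of the slicing step only; the interval edges are illustrative, not the optimized thresholds such schemes actually use.

```python
import bisect

EDGES = [-1.0, 0.0, 1.0]   # 3 thresholds -> 4 quantization cells -> 2 bits/sample

def slice_bits(x, edges=EDGES):
    """Map a real sample to its cell index, returned as a fixed-width bit tuple (MSB first)."""
    cell = bisect.bisect_right(edges, x)   # cell index in 0..len(edges)
    width = len(edges).bit_length()        # bits needed to label len(edges)+1 cells
    return tuple((cell >> k) & 1 for k in reversed(range(width)))

samples = [-1.7, -0.3, 0.4, 2.2]
print([slice_bits(x) for x in samples])    # → [(0, 0), (0, 1), (1, 0), (1, 1)]
```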

  6. Design of quantum efficiency measurement system for variable doping GaAs photocathode

    NASA Astrophysics Data System (ADS)

    Chen, Liang; Yang, Kai; Liu, HongLin; Chang, Benkang

    2008-03-01

    Achieving high quantum efficiency and good stability has been a main direction of recent GaAs photocathode development. Earlier research proved that the variable doping structure is feasible and practical and has great potential. In order to optimize variable doping GaAs photocathode preparation techniques and study variable doping theory in depth, a real-time quantum efficiency measurement system for the GaAs photocathode has been designed. The system uses an FPGA (field-programmable gate array) device and a high-speed A/D converter in a high signal-to-noise ratio, high-speed data acquisition card. An ARM (Advanced RISC Machines) core processor (S3C2410) and a real-time embedded system are used to obtain and display measurement results. The measurement precision of the photocurrent reaches 1 nA, and the measurement range of the spectral response curve is 400-1000 nm. The GaAs photocathode preparation process can be monitored in real time using this system, which can easily be extended with other functions to show the physical changes of the photocathode during preparation more comprehensively in the future.

  7. Honeybee economics: optimisation of foraging in a variable world.

    PubMed

    Stabentheiner, Anton; Kovac, Helmut

    2016-06-20

    In honeybees, fast and efficient exploitation of nectar and pollen sources is achieved by persistent endothermy throughout the foraging cycle, which entails extremely high energy costs. The need for food promotes maximisation of the intake rate, and the high costs call for energetic optimisation. Experiments on how honeybees resolve this conflict have to consider that foraging takes place in a variable environment with respect to microclimate and food quality and availability. Here we report, from simultaneous measurements of energy costs, gains, intake rate, and efficiency, how honeybee foragers manage this challenge in their highly variable environment. If possible, during unlimited sucrose flow, they follow an 'investment-guided' ('time is honey') economic strategy promising increased returns. They maximise net intake rate by investing both their own heat production and solar heat to raise body temperature to a level which guarantees a high suction velocity. They switch to an 'economizing' ('save the honey') optimisation of energetic efficiency if the intake rate is restricted by the food source, when an increased body temperature would not guarantee a high intake rate. With this flexible and graded change between economic strategies, honeybees can both maximise colony intake rate and optimise foraging efficiency in reaction to environmental variation.

  8. High Efficiency Variable Speed Versatile Power Air Conditioning System

    DTIC Science & Technology

    2013-08-08

    Design concept applicable to a wide range of HVAC and refrigeration systems. One TXV size can be used for a wide range of cooling capacities. Versatile power: can run from AC and DC sources. Cooling-load adaptive, variable speed. Fully operable up to 140 degrees Fahrenheit. Subject terms: high-efficiency HVAC&R technology.

  9. Cost efficiency of university hospitals in the Nordic countries: a cross-country analysis.

    PubMed

    Medin, Emma; Anthun, Kjartan S; Häkkinen, Unto; Kittelsen, Sverre A C; Linna, Miika; Magnussen, Jon; Olsen, Kim; Rehnberg, Clas

    2011-12-01

    This paper estimates cost efficiency scores using the bootstrap bias-corrected procedure, including variables for teaching and research, for the performance of university hospitals in the Nordic countries. Previous research has shown that hospital provision of research and education interferes with patient care routines and inflates the costs of health care services, turning university hospitals into outliers in comparative productivity and efficiency analyses. The organisation of patient care, medical education and clinical research as well as available data at the university hospital level are highly similar in the Nordic countries, creating a data set of comparable decision-making units suitable for a cross-country cost efficiency analysis. The results demonstrate significant differences in university hospital cost efficiency when variables for teaching and research are entered into the analysis, both between and within the Nordic countries. The results of a second-stage analysis show that the most important explanatory variables are geographical location of the hospital and the share of discharges with a high case weight. However, a substantial amount of the variation in cost efficiency at the university hospital level remains unexplained.
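
    The bootstrap bias-corrected procedure used in this record subtracts the bootstrap estimate of an estimator's bias from the original score: corrected = estimate - (mean of bootstrap estimates - estimate). A minimal sketch of that correction step with a toy estimator and hypothetical efficiency scores, not the full DEA bootstrap used for hospital data.

```python
import random

def bias_corrected(estimate, bootstrap_estimates):
    """Bias-corrected score: theta - (mean(boot) - theta) = 2*theta - mean(boot)."""
    boot_mean = sum(bootstrap_estimates) / len(bootstrap_estimates)
    return 2.0 * estimate - boot_mean

random.seed(0)
scores = [0.82, 0.91, 0.77, 0.88, 0.95]   # hypothetical raw efficiency scores
theta = max(scores)                        # toy estimator: best observed score
boots = [max(random.choices(scores, k=len(scores))) for _ in range(1000)]
print(round(bias_corrected(theta, boots), 3))
```

Because the toy estimator (a maximum) is biased downward under resampling, the corrected score comes out at or above the raw one, which mirrors why bias correction matters when ranking units near the efficiency frontier.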

  10. High Efficiency Variable Speed Versatile Power Air Conditioning System for Military Vehicles

    DTIC Science & Technology

    2013-08-01

    Presented at the Power & Mobility (P&M) Mini-Symposium, August 21-22, 2013, Troy, Michigan: high efficiency variable speed versatile power air conditioning system for military vehicles. Power draw was measured using a calibrated Watt meter; the schematic of the setup is shown in Figure 5 and the setup itself in Figure 6. Testing took place in the Rocky Research environmental chamber. Cooling capacity was directly measured in Btu/hr or Watts via measuring the air flow velocity and the air ...

  11. An Efficient Variable Screening Method for Effective Surrogate Models for Reliability-Based Design Optimization

    DTIC Science & Technology

    2014-04-01

    Surrogate model generation is difficult for high-dimensional problems due to the curse of dimensionality. Variable screening methods have been ... a variable screening model was developed for the quasi-molecular treatment of ion-atom collision [16]. In engineering, a confidence interval of ... for high-level radioactive waste [18]. Moreover, the design sensitivity method can be extended to the variable screening method because vital ...

  12. Design study and performance analysis of a high-speed multistage variable-geometry fan for a variable cycle engine

    NASA Technical Reports Server (NTRS)

    Sullivan, T. J.; Parker, D. E.

    1979-01-01

    A design technology study was performed to identify a high-speed, multistage, variable-geometry fan configuration capable of achieving wide flow modulation with near-optimum efficiency at the important operating condition. A parametric screening study of the front and rear block fans was conducted in which the influence of major fan design features on weight and efficiency was determined. Key design parameters were varied systematically to determine the fan configuration most suited for a double-bypass, variable cycle engine. Two- and three-stage fans were considered for the front block. A single-stage, core-driven fan was studied for the rear block. Variable geometry concepts were evaluated to provide near-optimum off-design performance. A detailed aerodynamic design and a preliminary mechanical design were carried out for the selected fan configuration. Performance predictions were made for the front and rear block fans.

  13. Aerosol Drug Delivery During Noninvasive Positive Pressure Ventilation: Effects of Intersubject Variability and Excipient Enhanced Growth

    PubMed Central

    Walenga, Ross L.; Kaviratna, Anubhav; Hindle, Michael

    2017-01-01

    Abstract Background: Nebulized aerosol drug delivery during the administration of noninvasive positive pressure ventilation (NPPV) is commonly implemented. While studies have shown improved patient outcomes for this therapeutic approach, aerosol delivery efficiency is reported to be low with high variability in lung-deposited dose. Excipient enhanced growth (EEG) aerosol delivery is a newly proposed technique that may improve drug delivery efficiency and reduce intersubject aerosol delivery variability when coupled with NPPV. Materials and Methods: A combined approach using in vitro experiments and computational fluid dynamics (CFD) was used to characterize aerosol delivery efficiency during NPPV in two new nasal cavity models that include face mask interfaces. Mesh nebulizer and in-line dry powder inhaler (DPI) sources of conventional and EEG aerosols were both considered. Results: Based on validated steady-state CFD predictions, EEG aerosol delivery improved lung penetration fraction (PF) values by factors ranging from 1.3 to 6.4 compared with conventional-sized aerosols. Furthermore, intersubject variability in lung PF was very high for conventional aerosol sizes (relative differences between subjects in the range of 54.5%–134.3%) and was reduced by an order of magnitude with the EEG approach (relative differences between subjects in the range of 5.5%–17.4%). Realistic in vitro experiments of cyclic NPPV demonstrated similar trends in lung delivery to those observed with the steady-state simulations, but with lower lung delivery efficiencies. Reaching the lung delivery efficiencies reported with the steady-state simulations of 80%–90% will require synchronization of aerosol administration during inspiration and reducing the size of the EEG aerosol delivery unit. Conclusions: The EEG approach enabled high-efficiency lung delivery of aerosols administered during NPPV and reduced intersubject aerosol delivery variability by an order of magnitude. 
Use of an in-line DPI device that connects to the NPPV mask appears to be a convenient method to rapidly administer an EEG aerosol and synchronize the delivery with inspiration. PMID:28075194

  14. Design and control of a variable geometry turbofan with an independently modulated third stream

    NASA Astrophysics Data System (ADS)

    Simmons, Ronald J.

    Emerging 21st century military missions task engines to deliver the fuel efficiency of a high bypass turbofan while retaining the ability to produce the high specific thrust of a low bypass turbofan. This study explores the possibility of satisfying such competing demands by adding a second independently modulated bypass stream to the basic turbofan architecture. This third stream can be used for a variety of purposes, including providing a cool heat sink for dissipating aircraft heat loads, cooling turbine cooling air, and providing a readily available stream of constant-pressure-ratio air for lift augmentation. Furthermore, by modulating airflow to the second and third streams, it is possible to continuously match the engine's airflow demand to the inlet's airflow supply, thereby reducing spillage and increasing propulsive efficiency. This research begins with a historical perspective on variable cycle engines and shows a logical progression to the proposed architectures. A novel method for investigating optimal performance is then presented, which determines the most favorable on-design variable geometry settings, the most beneficial moment to terminate flow holding, and an optimal scheduling of variable features for fuel-efficient off-design operation. Mission analysis conducted across the three candidate missions verifies that these three-stream variable cycles can deliver fuel savings in excess of 30% relative to a year-2000 reference turbofan. This research concludes by evaluating the relative impact of each variable technology on the performance of adaptive engine architectures. The most promising technologies include modulated turbine cooling air, variable high pressure turbine inlet area, and variable third stream nozzle throat area. With just these few features it is possible to obtain nearly optimal performance, including 90% or more of the potential fuel savings, with far fewer variable features than are available in the study engine.
    It is abundantly clear that three-stream variable architectures can significantly outperform existing two-stream turbofans, both in fuel efficiency and at the vehicle system level, with only a modest increase in complexity and weight. Such engine architectures should be strongly considered for future military applications.

  15. Development of Permanent Magnet Reluctance Motor Suitable for Variable-Speed Drive for Electric Vehicle

    NASA Astrophysics Data System (ADS)

    Sakai, Kazuto; Takahashi, Norio; Shimomura, Eiji; Arata, Masanobu; Nakazawa, Yousuke; Tajima, Toshinobu

    Given environmental and energy concerns, increasing importance has been placed on energy saving in various systems, so it is desirable to increase the total efficiency of many types of equipment. Recently, hybrid electric vehicles (HEVs) and electric vehicles (EVs) have been developed, and the use of new technologies will eventually lead to the realization of new-generation vehicles with high efficiency. One such technology is variable-speed drive over a wide range of speeds: the motor drive systems of EVs and HEVs must operate over a variable-speed range of up to 1:5. This has created the need for a high-efficiency motor capable of operation over a wide speed range. In this paper, we describe the concept of a novel permanent magnet reluctance motor (PRM) and discuss its characteristics. We developed a PRM capable of operating over a wide speed range with high efficiency. The PRM has a rotor with salient poles, which generate magnetic anisotropy; in addition, the permanent magnets embedded in the rotor core counter the q-axis flux produced by the armature reaction, increasing the power density and the power factor. The PRM produces both reluctance torque and torque from the permanent magnet (PM) flux; the reluctance torque is 1 to 2 times larger than the PM torque. When the PRM operates over a constant-power speed range, the field component of the current is regulated to maintain a constant voltage. The output power of the developed PRMs is 8 to 250 kW. It is shown that the PRM operates over a wide variable-speed range (1:5) with high efficiency (92-97%) and has high performance over a wide constant-power speed range. In addition, the PRM is constructed with small PMs, which addresses the problem of cost. Thus, the PRM is a superior machine well suited for variable-speed drive applications.
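
    The split between reluctance torque and PM torque described above can be illustrated with the standard dq-frame torque equation for an interior-PM machine, T = (3/2) * p * (psi_pm * iq + (Ld - Lq) * id * iq). The equation is the textbook model, not one stated in the record, and all parameter values below are hypothetical rather than those of the developed PRM.

```python
# Torque split of an interior-PM machine in the dq frame (standard model).
# Negative d-axis current with Ld < Lq yields positive reluctance torque.
# All parameter values are hypothetical.

def torque_components(p, psi_pm, Ld, Lq, i_d, i_q):
    """Return (PM torque, reluctance torque) in N·m for pole-pair count p."""
    pm = 1.5 * p * psi_pm * i_q
    rel = 1.5 * p * (Ld - Lq) * i_d * i_q
    return pm, rel

pm, rel = torque_components(p=4, psi_pm=0.1, Ld=0.3e-3, Lq=1.2e-3,
                            i_d=-150.0, i_q=200.0)
print(round(pm, 1), round(rel, 1), round(rel / pm, 2))  # → 120.0 162.0 1.35
```

With these illustrative numbers the reluctance torque is about 1.35 times the PM torque, inside the 1-2x range the abstract reports.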

  16. NASA developments in solid state power amplifiers

    NASA Technical Reports Server (NTRS)

    Leonard, Regis F.

    1990-01-01

    Over the last ten years, NASA has undertaken an extensive program aimed at the development of solid state power amplifiers for space applications. Historically, the program may be divided into three phases. The first efforts were carried out in support of the advanced communications technology satellite (ACTS) program, which is developing an experimental version of a Ka-band commercial communications system; these first amplifiers used hybrid technology. The second phase was still targeted at ACTS frequencies but concentrated on monolithic implementations, while the current, third phase is a monolithic effort that focuses on frequencies appropriate for other NASA programs and stresses amplifier efficiency. The topics covered include: (1) 20 GHz hybrid amplifiers; (2) 20 GHz monolithic MESFET power amplifiers; (3) Texas Instruments' (TI) 20 GHz variable power amplifier; (4) TI 20 GHz high power amplifier; (5) high efficiency monolithic power amplifiers; (6) GHz high efficiency variable power amplifier; (7) TI 32 GHz monolithic power amplifier performance; (8) design goals for Hughes' 32 GHz variable power amplifier; and (9) performance goals for Hughes' pseudomorphic 60 GHz power amplifier.

  17. Enhanced spectral efficiency using bandwidth switchable SAW filtering for mobile satellite communications systems

    NASA Technical Reports Server (NTRS)

    Peach, Robert; Malarky, Alastair

    1990-01-01

    Currently proposed mobile satellite communications systems require a high degree of flexibility in assignment of spectral capacity to different geographic locations. Conventionally this results in poor spectral efficiency which may be overcome by the use of bandwidth switchable filtering. Surface acoustic wave (SAW) technology makes it possible to provide banks of filters whose responses may be contiguously combined to form variable bandwidth filters with constant amplitude and phase responses across the entire band. The high selectivity possible with SAW filters, combined with the variable bandwidth capability, makes it possible to achieve spectral efficiencies over the allocated bandwidths of greater than 90 percent, while retaining full system flexibility. Bandwidth switchable SAW filtering (BSSF) achieves these gains with a negligible increase in hardware complexity.

  18. PATTERN PREDICTION OF ACADEMIC SUCCESS.

    ERIC Educational Resources Information Center

    LUNNEBORG, CLIFFORD E.; LUNNEBORG, PATRICIA W.

    A technique of pattern analysis which emphasizes the development of more effective ways of scoring a given set of variables was formulated. To the original variables were successively added two-, three-, and four-variable patterns, and the increase in predictive efficiency assessed. Randomly selected high school seniors who had participated in the…

  19. A study of flux control for high-efficiency speed control of variable flux permanent magnet motor

    NASA Astrophysics Data System (ADS)

    Kim, Young Hyun; Lee, Seong Soo; Lee, Jung Ho

    2018-05-01

    In this study, we evaluate the performance of permanent magnets (PMs) and the efficiency of operation in the high-speed region of the variable flux memory motor (VFMM). The magnetic characteristics of the PMs are analyzed using second-quadrant (demagnetization) data under re- and de-magnetization. In addition, the study focuses on the evaluation of operational characteristics relative to the magnetizing directions according to the d-axis currents, using a finite element solution. The feasibility of the VFMM has been experimentally demonstrated.

  20. The Concept of Resource Use Efficiency as a Theoretical Basis for Promising Coal Mining Technologies

    NASA Astrophysics Data System (ADS)

    Mikhalchenko, Vadim

    2017-11-01

    The article is devoted to solving one of the most relevant problems of the coal mining industry - its low resource-use efficiency, which results in high environmental and economic costs for operating enterprises. It is shown that it is the low resource-use efficiency of traditional, historically developed coal production systems that generates a conflict between indicators of economic efficiency and indicators of resistance to the uncertainty and variability of market parameters. The traditional technological paradigm of exploiting coal deposits also predetermines high, technology-driven economic risks. A solution is presented and a real example of solving the problem is considered.

  1. What do foraging wasps optimize in a variable environment, energy investment or body temperature?

    PubMed

    Kovac, Helmut; Stabentheiner, Anton; Brodschneider, Robert

    2015-11-01

    Vespine wasps (Vespula sp.) are endowed with a pronounced ability for endothermic heat production. To show how they balance energetics and thermoregulation under variable environmental conditions, we measured the body temperature and respiration of sucrose foragers (1.5 M, unlimited flow) under variable ambient temperature (Ta = 20-35 °C) and solar radiation (20-570 W m^-2). The results revealed a graduated balancing of metabolic effort with thermoregulatory needs. The thoracic temperature in the shade depended on ambient temperature, increasing from ~37 to 39 °C. However, the wasps used solar heat gain to regulate their thorax temperature at a rather high level at low Ta (mean Tthorax ~ 39 °C); only at high Ta did they use solar heat to reduce their metabolic rate markedly. A high body temperature accelerated suction speed and shortened foraging time. As the costs of foraging strongly depended on duration, efficiency could be significantly increased with a high body temperature. Heat gain from solar radiation enabled the wasps to enhance foraging efficiency at high ambient temperature (Ta = 30 °C) by up to 63%. The well-balanced change of economic strategies in response to environmental conditions minimized the costs of foraging and optimized energetic efficiency.

  2. Airfoil Design and Optimization by the One-Shot Method

    NASA Technical Reports Server (NTRS)

    Kuruvila, G.; Taasan, Shlomo; Salas, M. D.

    1995-01-01

    An efficient numerical approach for the design of optimal aerodynamic shapes is presented in this paper. The objective of any optimization problem is to find the optimum of a cost function subject to a certain state equation (governing equation of the flow field) and certain side constraints. As in classical optimal control methods, the present approach introduces a costate variable (Lagrange multiplier) to evaluate the gradient of the cost function. High efficiency in reaching the optimum solution is achieved by using a multigrid technique and updating the shape in a hierarchical manner such that smooth (low-frequency) changes are done separately from high-frequency changes. Thus, the design variables are changed on a grid where their changes produce nonsmooth (high-frequency) perturbations that can be damped efficiently by the multigrid. The cost of solving the optimization problem is approximately two to three times the cost of the equivalent analysis problem.
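
    The costate idea described above can be shown on a scalar model problem: introduce a multiplier for the state equation, solve the adjoint relation for it, and assemble the total gradient without differentiating the state solve directly. A minimal sketch with a toy state equation and cost of my own choosing; the multigrid and hierarchical shape updates are omitted.

```python
# Costate (Lagrange-multiplier) gradient on a toy problem:
#   state equation g(x, u) = x - u**2 = 0, cost J(x, u) = x**2 + u.

def solve_state(u):
    return u ** 2                       # g(x, u) = 0  =>  x = u**2

def gradient_via_costate(u):
    x = solve_state(u)
    # adjoint relation: dJ/dx + lam * dg/dx = 0, with dJ/dx = 2x and dg/dx = 1
    lam = -2.0 * x
    # total derivative: dJ/du = dJ/du_explicit + lam * dg/du, with dg/du = -2u
    return 1.0 + lam * (-2.0 * u)

u = 1.5
analytic = 4.0 * u ** 3 + 1.0           # from J(u) = u**4 + u after eliminating x
print(gradient_via_costate(u), analytic)  # → 14.5 14.5
```

The costate value is obtained from one linear (adjoint) solve, so the gradient costs about as much as one extra analysis, which is why the paper's overall optimization costs only two to three analysis problems.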

  3. Airfoil optimization by the one-shot method

    NASA Technical Reports Server (NTRS)

    Kuruvila, G.; Taasan, Shlomo; Salas, M. D.

    1994-01-01

    An efficient numerical approach for the design of optimal aerodynamic shapes is presented in this paper. The objective of any optimization problem is to find the optimum of a cost function subject to a certain state equation (governing equation of the flow field) and certain side constraints. As in classical optimal control methods, the present approach introduces a costate variable (Lagrange multiplier) to evaluate the gradient of the cost function. High efficiency in reaching the optimum solution is achieved by using a multigrid technique and updating the shape in a hierarchical manner such that smooth (low-frequency) changes are done separately from high-frequency changes. Thus, the design variables are changed on a grid where their changes produce nonsmooth (high-frequency) perturbations that can be damped efficiently by the multigrid. The cost of solving the optimization problem is approximately two to three times the cost of the equivalent analysis problem.

  4. Relationships of efficiency to reproductive disorders in Danish milk production: a stochastic frontier analysis.

    PubMed

    Lawson, L G; Bruun, J; Coelli, T; Agger, J F; Lund, M

    2004-01-01

    Relationships of various reproductive disorders and milk production performance of Danish dairy farms were investigated. A stochastic frontier production function was estimated using data collected in 1998 from 514 Danish dairy farms. Measures of farm-level milk production efficiency relative to this production frontier were obtained, and relationships between milk production efficiency and the incidence risk of reproductive disorders were examined. There were moderate positive relationships between milk production efficiency and retained placenta, induction of estrus, uterine infections, ovarian cysts, and induction of birth. Inclusion of reproductive management variables showed that these moderate relationships disappeared, but directions of coefficients for almost all those variables remained the same. Dystocia showed a weak negative correlation with milk production efficiency. Farms that were mainly managed by young farmers had the highest average efficiency scores. The estimated milk losses due to inefficiency averaged 1142, 488, and 256 kg of energy-corrected milk per cow, respectively, for low-, medium-, and high-efficiency herds. It is concluded that the availability of younger cows, which enabled farmers to replace cows with reproductive disorders, contributed to high cow productivity in efficient farms. Thus, a high replacement rate more than compensates for the possible negative effect of reproductive disorders. The use of frontier production and efficiency/inefficiency functions to analyze herd data may enable dairy advisors to identify inefficient herds and to simulate the effect of alternative management procedures on the individual herd's efficiency.

  5. Development of high efficiency ball-bearing turbocharger

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miyashita, K.; Kurasawa, M.; Matsuoka, H.

    1987-01-01

    Turbochargers have become very popular on passenger cars since the first mass-produced turbocharged passenger cars were put on the market in Japan in 1979. Turbo lag has been one of the most serious problems since mass production started. Several new technologies, such as variable geometry turbochargers and ceramic turbochargers, have been introduced to improve acceleration performance. A variable geometry turbocharger changes the area of the gas flow passage and increases exhaust gas speed at low engine speed. A ceramic turbocharger reduces the moment of inertia of the turbine wheel and shaft. Turbocharger mechanical efficiency is as important as compressor efficiency and turbine efficiency. This paper describes the test results of ball-bearing turbochargers.

  6. An Integrated Method to Analyze Farm Vulnerability to Climatic and Economic Variability According to Farm Configurations and Farmers' Adaptations.

    PubMed

    Martin, Guillaume; Magne, Marie-Angélina; Cristobal, Magali San

    2017-01-01

The need to adapt to decrease farm vulnerability to adverse contextual events has been extensively discussed on a theoretical basis. We developed an integrated and operational method to assess farm vulnerability to multiple and interacting contextual changes and to explain how this vulnerability can best be reduced according to farm configurations and farmers' technical adaptations over time. Our method considers farm vulnerability as a function of the raw measurements of vulnerability variables (e.g., economic efficiency of production), the slope of the linear regression of these measurements over time, and the residuals of this linear regression. The last two are extracted from linear mixed models considering a random regression coefficient (an intercept common to all farms), a global trend (a slope common to all farms), a random deviation from the general mean for each farm, and a random deviation from the general trend for each farm. Among all possible combinations, the lowest farm vulnerability is obtained through a combination of high measurement values, a stable or increasing trend, and low variability for all vulnerability variables considered. Our method enables relating the measurements, trends, and residuals of vulnerability variables to explanatory variables that illustrate farm exposure to climatic and economic variability, initial farm configurations, and farmers' technical adaptations over time. We applied our method to 19 cattle (beef, dairy, and mixed) farms over the period 2008-2013. The selected vulnerability variables, i.e., farm productivity and economic efficiency, varied greatly among cattle farms and across years, with means ranging from 43.0 to 270.0 kg protein/ha and from 29.4 to 66.0% efficiency, respectively. No farm had a high level, a stable or increasing trend, and low residuals for both farm productivity and economic efficiency of production. Thus, the least vulnerable farms represented a compromise among measurement value, trend, and variability of both performances. No specific combination of farmers' practices emerged for reducing cattle farm vulnerability to climatic and economic variability. In the least vulnerable farms, the practices implemented (stocking rate, input use…) were more consistent with the objective of developing the targeted properties (efficiency, robustness…). Our method can be used to support farmers with sector-specific and local insights about the most promising farm adaptations.
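    The per-farm decomposition described in this abstract can be sketched numerically. The snippet below is a minimal illustration, with synthetic data rather than the study's, of the three components computed for each farm: the mean level of a vulnerability variable, its linear trend over years, and the residual variability around that trend.

    ```python
    import numpy as np

    # Minimal sketch of the per-farm vulnerability components described above:
    # mean level, linear trend over years, and residual variability.
    # The data below are synthetic, not the study's.
    rng = np.random.default_rng(0)
    years = np.arange(2008, 2014)

    def vulnerability_components(series, years):
        slope, intercept = np.polyfit(years, series, 1)
        residuals = series - (slope * years + intercept)
        return series.mean(), slope, residuals.std()

    # One hypothetical farm: efficiency (%) rising ~1 point/year with noise
    eff = 45 + 1.0 * (years - 2008) + rng.normal(0, 1.5, years.size)
    level, trend, variability = vulnerability_components(eff, years)
    # Least vulnerable profile: high level, stable/increasing trend, low variability
    ```

    In the paper these quantities come from a single linear mixed model fitted across all farms; the per-farm regression above is only the simplest stand-in for that idea.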

  8. Exploring high dimensional free energy landscapes: Temperature accelerated sliced sampling

    NASA Astrophysics Data System (ADS)

    Awasthi, Shalini; Nair, Nisanth N.

    2017-03-01

    Biased sampling of collective variables is widely used to accelerate rare events in molecular simulations and to explore free energy surfaces. However, computational efficiency of these methods decreases with increasing number of collective variables, which severely limits the predictive power of the enhanced sampling approaches. Here we propose a method called Temperature Accelerated Sliced Sampling (TASS) that combines temperature accelerated molecular dynamics with umbrella sampling and metadynamics to sample the collective variable space in an efficient manner. The presented method can sample a large number of collective variables and is advantageous for controlled exploration of broad and unbound free energy basins. TASS is also shown to achieve quick free energy convergence and is practically usable with ab initio molecular dynamics techniques.

  9. Investigating Runoff Efficiency in Upper Colorado River Streamflow Over Past Centuries

    NASA Astrophysics Data System (ADS)

    Woodhouse, Connie A.; Pederson, Gregory T.

    2018-01-01

With increasing concerns about the impact of warming temperatures on water resources, more attention is being paid to the relationship between runoff and precipitation, or runoff efficiency. Temperature is a key influence on Colorado River runoff efficiency, and warming temperatures are projected to reduce runoff efficiency. Here, we investigate the nature of runoff efficiency in the upper Colorado River basin (UCRB) over the past 400 years, with a specific focus on major droughts and pluvials, in order to contextualize the instrumental period. We first verify the feasibility of reconstructing runoff efficiency from tree-ring data. The reconstruction is then used to evaluate variability in runoff efficiency over periods of high and low flow, and its correspondence to a reconstruction of late runoff season UCRB temperature variability. Results indicate that runoff efficiency has played a consistent role in modulating the relationship between precipitation and streamflow over past centuries, and that temperature has likely been the key control. While negative runoff efficiency is most common during dry periods, and positive runoff efficiency during wet years, there are some instances of positive runoff efficiency moderating the impact of precipitation deficits on streamflow. Compared to past centuries, the 20th century has experienced twice as many high flow years with negative runoff efficiency, likely due to warm temperatures. These results suggest that warming temperatures will continue to reduce runoff efficiency in wet or dry years, and that future flows will be less than anticipated from precipitation alone.

  11. A fast chaos-based image encryption scheme with a dynamic state variables selection mechanism

    NASA Astrophysics Data System (ADS)

    Chen, Jun-xin; Zhu, Zhi-liang; Fu, Chong; Yu, Hai; Zhang, Li-bo

    2015-03-01

In recent years, a variety of chaos-based image cryptosystems have been investigated to meet the increasing demand for real-time secure image transmission. Most of them are based on a permutation-diffusion architecture, in which permutation and diffusion are two independent procedures with fixed control parameters. This property results in two flaws. (1) At least two chaotic state variables are required for encrypting one plain pixel, in the permutation and diffusion stages respectively; chaotic state variables, which are computationally expensive to produce, are therefore underused. (2) The key stream depends solely on the secret key, so the cryptosystem is vulnerable to known/chosen-plaintext attacks. In this paper, a fast chaos-based image encryption scheme with a dynamic state variables selection mechanism is proposed to enhance the security and improve the efficiency of chaos-based image cryptosystems. Experimental simulations and extensive cryptanalysis have been carried out, and the results demonstrate the superior security and high efficiency of the scheme.
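    The permutation-diffusion architecture this abstract refers to can be illustrated with a toy example. The sketch below is a generic, hypothetical construction (it is not the authors' scheme and omits their dynamic selection mechanism): a logistic map drives both a pixel permutation and an XOR diffusion keystream; the key values are arbitrary illustrative choices.

    ```python
    import numpy as np

    def logistic_stream(x0, r, n):
        """Iterate the logistic map x <- r*x*(1-x); return n chaotic samples."""
        out = np.empty(n)
        x = x0
        for i in range(n):
            x = r * x * (1 - x)
            out[i] = x
        return out

    def encrypt(plain, x0=0.3456, r=3.99):
        """Permutation stage (argsort of chaotic samples) + XOR diffusion."""
        n = plain.size
        s = logistic_stream(x0, r, 2 * n)
        perm = np.argsort(s[:n])                # scramble pixel positions
        ks = (s[n:] * 256).astype(np.uint8)     # keystream for diffusion
        return plain[perm] ^ ks, perm, ks

    def decrypt(cipher, perm, ks):
        plain = np.empty_like(cipher)
        plain[perm] = cipher ^ ks               # undo diffusion, then permutation
        return plain

    img = np.arange(64, dtype=np.uint8)         # toy flattened 8x8 "image"
    cipher, perm, ks = encrypt(img)
    recovered = decrypt(cipher, perm, ks)
    ```

    Note that this toy keystream depends only on the key, which is exactly flaw (2) above; the paper's contribution is to make the selection of state variables depend on the plaintext as well.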

  12. Precipitation Storage Efficiency During Fallow in Wheat-Fallow Systems

USDA-ARS's Scientific Manuscript database

    Wheat-fallow production systems arose in order to stabilize widely ranging wheat yields that resulted from highly variable precipitation in the Great Plains. Historically, precipitation storage efficiency (PSE) over the fallow period increased over time as inversion tillage systems used for weed con...

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bustamante, Mauricio; Heinze, Jonas; Winter, Walter

Gamma-ray bursts (GRBs) are promising as sources of neutrinos and cosmic rays. In the internal shock scenario, blobs of plasma emitted from a central engine collide within a relativistic jet and form shocks, leading to particle acceleration and emission. Motivated by present experimental constraints and sensitivities, we improve the predictions of particle emission by investigating time-dependent effects from multiple shocks. We produce synthetic light curves with different variability timescales that stem from properties of the central engine. For individual GRBs, qualitative conclusions about model parameters, neutrino production efficiency, and delays in high-energy gamma-rays can be deduced from inspection of the gamma-ray light curves. GRBs with fast time variability without additional prominent pulse structure tend to be efficient neutrino emitters, whereas GRBs with fast variability modulated by a broad pulse structure can be inefficient neutrino emitters and produce delayed high-energy gamma-ray signals. Our results can be applied to quantitative tests of the GRB origin of ultra-high-energy cosmic rays, and have the potential to impact current and future multi-messenger searches.

  14. Light valve based on nonimaging optics with potential application in cold climate greenhouses

    NASA Astrophysics Data System (ADS)

    Valerio, Angel A.; Mossman, Michele A.; Whitehead, Lorne A.

    2014-09-01

    We have evaluated a new concept for a variable light valve and thermal insulation system based on nonimaging optics. The system incorporates compound parabolic concentrators and can readily be switched between an open highly light transmissive state and a closed highly thermally insulating state. This variable light valve makes the transition between high thermal insulation and efficient light transmittance practical and may be useful in plant growth environments to provide both adequate sunlight illumination and thermal insulation as needed. We have measured light transmittance values exceeding 80% for the light valve design and achieved thermal insulation values substantially exceeding those of traditional energy efficient windows. The light valve system presented in this paper represents a potential solution for greenhouse food production in locations where greenhouses are not feasible economically due to high heating cost.

  15. Displacement Based Multilevel Structural Optimization

    NASA Technical Reports Server (NTRS)

Sobieszczanski-Sobieski, J.; Striz, A. G.

    1996-01-01

    In the complex environment of true multidisciplinary design optimization (MDO), efficiency is one of the most desirable attributes of any approach. In the present research, a new and highly efficient methodology for the MDO subset of structural optimization is proposed and detailed, i.e., for the weight minimization of a given structure under size, strength, and displacement constraints. Specifically, finite element based multilevel optimization of structures is performed. In the system level optimization, the design variables are the coefficients of assumed polynomially based global displacement functions, and the load unbalance resulting from the solution of the global stiffness equations is minimized. In the subsystems level optimizations, the weight of each element is minimized under the action of stress constraints, with the cross sectional dimensions as design variables. The approach is expected to prove very efficient since the design task is broken down into a large number of small and efficient subtasks, each with a small number of variables, which are amenable to parallel computing.

  16. Optimization of non-thermal plasma efficiency in the simultaneous elimination of benzene, toluene, ethyl-benzene, and xylene from polluted airstreams using response surface methodology.

    PubMed

    Najafpoor, Ali Asghar; Jonidi Jafari, Ahmad; Hosseinzadeh, Ahmad; Khani Jazani, Reza; Bargozin, Hasan

    2018-01-01

Non-thermal plasma (NTP) treatment is a new and effective technology recently applied to gas conversion for air pollution control. This research was initiated to optimize the application of the NTP process for benzene, toluene, ethyl-benzene, and xylene (BTEX) removal. The effects of four variables, namely temperature, initial BTEX concentration, voltage, and flow rate, on BTEX elimination efficiency were investigated using response surface methodology (RSM). The constructed model was evaluated by analysis of variance (ANOVA). Model goodness-of-fit and statistical significance were assessed using the determination coefficients (R² and adjusted R²) and the F-test. The results revealed that R² was greater than 0.96 for BTEX removal efficiency. The statistical analysis demonstrated that BTEX removal efficiency was significantly correlated with temperature, BTEX concentration, voltage, and flow rate. Voltage was the most influential variable, exerting a significant effect (p < 0.0001) on the response. According to these results, NTP can be applied as a progressive, cost-effective, and practical process for treating airstreams polluted with BTEX under conditions of low residence time and high pollutant concentrations.
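    The RSM fit this abstract describes can be sketched as an ordinary least-squares fit of a full second-order model in coded factors. The snippet below uses synthetic data; the factor roles (e.g., x[:, 2] standing in for voltage as the dominant term) are illustrative assumptions, not the study's measurements.

    ```python
    import numpy as np

    # Generic second-order response-surface fit (full quadratic model in four
    # coded factors, fitted by least squares). Data are synthetic.
    rng = np.random.default_rng(1)
    n = 30
    X = rng.uniform(-1, 1, (n, 4))                     # coded factor levels
    # Hypothetical response: "voltage" (x3) dominant, with curvature and noise
    y = 70 + 8 * X[:, 2] - 3 * X[:, 1] - 2 * X[:, 2] ** 2 + rng.normal(0, 1, n)

    # Design matrix: intercept, linear, pure quadratic, and two-way interactions
    cols = ([np.ones(n)]
            + [X[:, i] for i in range(4)]
            + [X[:, i] ** 2 for i in range(4)]
            + [X[:, i] * X[:, j] for i in range(4) for j in range(i + 1, 4)])
    D = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(D, y, rcond=None)
    yhat = D @ beta
    r2 = 1 - ((y - yhat) ** 2).sum() / ((y - y.mean()) ** 2).sum()
    ```

    The 15 columns (1 intercept + 4 linear + 4 squared + 6 interactions) are the standard full quadratic design for four factors; ANOVA and the F-test would then be applied to these fitted coefficients.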

  17. Non-tables look-up search algorithm for efficient H.264/AVC context-based adaptive variable length coding decoding

    NASA Astrophysics Data System (ADS)

    Han, Yishi; Luo, Zhixiao; Wang, Jianhua; Min, Zhixuan; Qin, Xinyu; Sun, Yunlong

    2014-09-01

    In general, context-based adaptive variable length coding (CAVLC) decoding in H.264/AVC standard requires frequent access to the unstructured variable length coding tables (VLCTs) and significant memory accesses are consumed. Heavy memory accesses will cause high power consumption and time delays, which are serious problems for applications in portable multimedia devices. We propose a method for high-efficiency CAVLC decoding by using a program instead of all the VLCTs. The decoded codeword from VLCTs can be obtained without any table look-up and memory access. The experimental results show that the proposed algorithm achieves 100% memory access saving and 40% decoding time saving without degrading video quality. Additionally, the proposed algorithm shows a better performance compared with conventional CAVLC decoding, such as table look-up by sequential search, table look-up by binary search, Moon's method, and Kim's method.
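    As a generic illustration of the table-free idea (not the authors' CAVLC algorithm): Exp-Golomb codes, used for other H.264 syntax elements, can be decoded arithmetically from the leading-zero count alone, with no lookup table or table memory access.

    ```python
    def decode_ue(bits):
        """Decode one unsigned Exp-Golomb codeword from a '0'/'1' string.

        Returns (value, remaining_bits). The value is computed directly from
        the leading-zero count -- no lookup table is needed.
        """
        zeros = bits.index("1")                     # count leading zero bits
        value = int(bits[zeros:2 * zeros + 1], 2) - 1
        return value, bits[zeros * 2 + 1:]

    # '00111' encodes 6: two leading zeros, then read 3 bits '111' = 7, minus 1
    v, rest = decode_ue("00111" + "010")
    ```

    CAVLC proper is context-adaptive and its tables are irregular, which is why replacing them with arithmetic rules (as the paper proposes) is harder than this example but pays off in memory accesses saved.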

  18. Interactions Between Mineral Surfaces, Substrates, Enzymes, and Microbes Result in Hysteretic Temperature Sensitivities and Microbial Carbon Use Efficiencies and Weaker Predicted Carbon-Climate Feedbacks

    NASA Astrophysics Data System (ADS)

    Riley, W. J.; Tang, J.

    2014-12-01

We hypothesize that the large observed variability in decomposition temperature sensitivity and carbon use efficiency arises from interactions between temperature, microbial biogeochemistry, and mineral surface sorptive reactions. To test this hypothesis, we developed a numerical model that integrates the Dynamic Energy Budget concept for microbial physiology, microbial trait-based community structure and competition, process-specific thermodynamically based temperature sensitivity, a non-linear mineral sorption isotherm, and enzyme dynamics. We show that, because mineral surfaces interact with substrates, enzymes, and microbes, both temperature sensitivity and microbial carbon use efficiency are hysteretic and highly variable. Further, by mimicking the traditional approach to interpreting soil incubation observations, we demonstrate that the conventional labile and recalcitrant substrate characterization for temperature sensitivity is flawed. In a 4 K temperature perturbation experiment, our fully dynamic model predicted more variable but weaker carbon-climate feedbacks than did the static temperature sensitivity and carbon use efficiency model when forced with yearly, daily, and hourly variable temperatures. These results imply that current earth system models likely over-estimate the response of soil carbon stocks to global warming.

  19. Genetic determinism of anatomical and hydraulic traits within an apple progeny.

    PubMed

    Lauri, Pierre-Éric; Gorza, Olivier; Cochard, Hervé; Martinez, Sébastien; Celton, Jean-Marc; Ripetti, Véronique; Lartaud, Marc; Bry, Xavier; Trottier, Catherine; Costes, Evelyne

    2011-08-01

The apple tree is known to have isohydric behaviour, maintaining a rather constant leaf water potential in soil with low water status and/or under high evaporative demand. However, little is known about xylem water transport from roots to leaves from the two perspectives of efficiency and safety, or about its genetic variability. We analysed 16 traits related to hydraulic efficiency and safety, and anatomical traits in apple stems, and the relationships between them. Most variables were found to be heritable, and we investigated the determinism underlying their genetic control through a quantitative trait loci (QTL) analysis on 90 genotypes from the same progeny. Principal component analysis (PCA) revealed that all traits related to efficiency, whether hydraulic conductivity, vessel number and area, or wood area, were included in the first PC, whereas the second PC included the safety variables, thus confirming the absence of a trade-off between these two sets of traits. Our results demonstrated that clustered variables were characterized by common genomic regions. Together with previous results on the same progeny, our study substantiated that hydraulic efficiency traits co-localized with traits identified for tree growth and fruit production.

  20. Variable Mach number design approach for a parallel waverider with a wide-speed range based on the osculating cone theory

    NASA Astrophysics Data System (ADS)

    Zhao, Zhen-tao; Huang, Wei; Li, Shi-Bin; Zhang, Tian-Tian; Yan, Li

    2018-06-01

    In the current study, a variable Mach number waverider design approach has been proposed based on the osculating cone theory. The design Mach number of the osculating cone constant Mach number waverider with the same volumetric efficiency of the osculating cone variable Mach number waverider has been determined by writing a program for calculating the volumetric efficiencies of waveriders. The CFD approach has been utilized to verify the effectiveness of the proposed approach. At the same time, through the comparative analysis of the aerodynamic performance, the performance advantage of the osculating cone variable Mach number waverider is studied. The obtained results show that the osculating cone variable Mach number waverider owns higher lift-to-drag ratio throughout the flight profile when compared with the osculating cone constant Mach number waverider, and it has superior low-speed aerodynamic performance while maintaining nearly the same high-speed aerodynamic performance.

  1. Technology Assessment for Large Vertical-Lift Transport Tiltrotors

    NASA Technical Reports Server (NTRS)

    Germanowski, Peter J.; Stille, Brandon L.; Strauss, Michael P.

    2010-01-01

    The technical community has identified rotor efficiency as a critical enabling technology for large vertical-lift transport (LVLT) rotorcraft. The size and performance of LVLT aircraft will be far beyond current aircraft capabilities, enabling a transformational change in cargo transport effectiveness. Two candidate approaches for achieving high efficiency were considered for LVLT applications: a variable-diameter tiltrotor (VDTR) and a variable-speed tiltrotor (VSTR); the former utilizes variable-rotor geometry and the latter utilizes variable-rotor speed. Conceptual aircraft designs were synthesized for the VDTR and VSTR and compared to a conventional tiltrotor (CTR). The aircraft were optimized to a common objective function and bounded by a set of physical- and requirements-driven constraints. The resulting aircraft were compared for weight, size, performance, handling qualities, and other attributes. These comparisons established a measure of the relative merits of the variable-diameter and -speed rotor systems as enabling technologies for LVLT capability.

  2. Optical Variability and Classification of High Redshift (3.5 < z < 5.5) Quasars on SDSS Stripe 82

    NASA Astrophysics Data System (ADS)

    AlSayyad, Yusra; McGreer, Ian D.; Fan, Xiaohui; Connolly, Andrew J.; Ivezic, Zeljko; Becker, Andrew C.

    2015-01-01

    Recent studies have shown promise in combining optical colors with variability to efficiently select and estimate the redshifts of low- to mid-redshift quasars in upcoming ground-based time-domain surveys. We extend these studies to fainter and less abundant high-redshift quasars using light curves from 235 sq. deg. and 10 years of Stripe 82 imaging reprocessed with the prototype LSST data management stack. Sources are detected on the i-band co-adds (5σ: i ~ 24) but measured on the single-epoch (ugriz) images, generating complete and unbiased lightcurves for sources fainter than the single-epoch detection threshold. Using these forced photometry lightcurves, we explore optical variability characteristics of high redshift quasars and validate classification methods with particular attention to the low signal limit. In this low SNR limit, we quantify the degradation of the uncertainties and biases on variability parameters using simulated light curves. Completeness/efficiency and redshift accuracy are verified with new spectroscopic observations on the MMT and APO 3.5m. These preliminary results are part of a survey to measure the z~4 luminosity function for quasars (i < 23) on Stripe 82 and to validate purely photometric classification techniques for high redshift quasars in LSST.

  3. Efficient Construction of Discrete Adjoint Operators on Unstructured Grids by Using Complex Variables

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Kleb, William L.

    2005-01-01

    A methodology is developed and implemented to mitigate the lengthy software development cycle typically associated with constructing a discrete adjoint solver for aerodynamic simulations. The approach is based on a complex-variable formulation that enables straightforward differentiation of complicated real-valued functions. An automated scripting process is used to create the complex-variable form of the set of discrete equations. An efficient method for assembling the residual and cost function linearizations is developed. The accuracy of the implementation is verified through comparisons with a discrete direct method as well as a previously developed handcoded discrete adjoint approach. Comparisons are also shown for a large-scale configuration to establish the computational efficiency of the present scheme. To ultimately demonstrate the power of the approach, the implementation is extended to high temperature gas flows in chemical nonequilibrium. Finally, several fruitful research and development avenues enabled by the current work are suggested.
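    The complex-variable formulation mentioned above rests on the complex-step derivative: for an analytic function f, f(x + ih) = f(x) + ih f'(x) + O(h²), so Im(f(x + ih))/h yields f'(x) with no subtractive cancellation, allowing h to be made tiny. A minimal sketch (the test function is the standard Squire-Trapp example, not taken from this paper):

    ```python
    import numpy as np

    def complex_step_derivative(f, x, h=1e-30):
        """d/dx f at x via the complex step: Im(f(x + ih)) / h."""
        return np.imag(f(x + 1j * h)) / h

    # Classic test function; compare against central finite differences,
    # which cannot use such a tiny step without catastrophic cancellation.
    f = lambda x: np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)
    x0 = 1.5
    d_cs = complex_step_derivative(f, x0)
    d_fd = (f(x0 + 1e-6) - f(x0 - 1e-6)) / 2e-6
    ```

    Because the step never appears in a subtraction, the complex-step result is accurate to machine precision, which is what makes mechanically "complexifying" an existing real-valued solver an attractive way to obtain exact linearizations for an adjoint.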

  5. Historical review of lung counting efficiencies for low energy photon emitters

    DOE PAGES

    Jeffers, Karen L.; Hickman, David P.

    2014-03-01

    This publication reviews the measured efficiency and variability over time of a high purity planar germanium in vivo lung count system for multiple photon energies using increasingly thick overlays with the Lawrence Livermore Torso Phantom. Furthermore, the measured variations in efficiency are compared with the current requirement for in vivo bioassay performance as defined by the American National Standards Institute Standard.

  6. Implementation of high slurry concentration and sonication to pack high-efficiency, meter-long capillary ultrahigh pressure liquid chromatography columns.

    PubMed

    Godinho, Justin M; Reising, Arved E; Tallarek, Ulrich; Jorgenson, James W

    2016-09-02

Slurry packing capillary columns for ultrahigh pressure liquid chromatography is complicated by many interdependent experimental variables. Previous results have suggested that the combination of high slurry concentration and sonication during packing would create homogeneous bed microstructures and yield highly efficient capillary columns. Herein, the effect of sonication while packing very high slurry concentrations is presented. A series of six 1 m × 75 μm internal diameter columns were packed with 200 mg/mL slurries of 2.02 μm bridged-ethyl hybrid silica particles. Three of the columns underwent sonication during packing and yielded highly efficient separations with reduced plate heights as low as 1.05.
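    To put the reported figure in context, a back-of-envelope calculation (assuming the standard relations h = H / d_p and H = L / N) converts the reduced plate height into a theoretical plate count:

    ```python
    # Implied plate count for the paper's best column, assuming h = H / d_p
    # with plate height H = L / N.
    L = 1.0        # column length, m
    dp = 2.02e-6   # particle diameter, m
    h = 1.05       # reduced plate height reported in the abstract
    N = L / (h * dp)
    print(round(N))   # roughly 4.7e5 theoretical plates over the meter-long column
    ```

    A reduced plate height near 1 is close to the practical efficiency limit for packed beds, which is what makes these columns notable.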

  7. Fred Hutchinson Cancer Research Center, Seattle, Washington: Laboratories for the 21st Century Case Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2001-12-01

This case study was prepared by participants in the Laboratories for the 21st Century program, a joint endeavor of the U.S. Environmental Protection Agency and the U.S. Department of Energy's Federal Energy Management Program. The goal of this program is to foster greater energy efficiency in new laboratory buildings for both the public and the private sectors. Retrofits of existing laboratories are also encouraged. The energy-efficient features of the laboratories in the Fred Hutchinson Cancer Research Center complex in Seattle, Washington, include extensive use of efficient lighting, variable-air-volume controls, variable-speed drives, motion sensors, and high-efficiency chillers and motors. With about 532,000 gross square feet, the complex is estimated to use 33% less electrical energy than most traditional research facilities consume because of its energy-efficient design and features.

  9. Novel high-frequency, high-power, pulsed oscillator based on a transmission line transformer.

    PubMed

    Burdt, R; Curry, R D

    2007-07-01

Recent analysis and experiments have demonstrated the potential for transmission line transformers to be employed as compact, high-frequency, high-power, pulsed oscillators with variable rise time, high output impedance, and high operating efficiency. A prototype system was fabricated and tested that generates a damped sinusoidal waveform at a center frequency of 4 MHz into a 200 Ω load, with operating efficiency above 90% and peak power on the order of 10 MW. The initial rise time of the pulse is variable, and two experiments were conducted to demonstrate initial rise times of 12 and 3 ns, corresponding to spectral content from 4 to 30 MHz and from 4 to 100 MHz, respectively. A SPICE model has been developed to accurately predict the circuit behavior, and scaling laws have been identified to allow for circuit design at higher frequencies and higher peak power. The applications, circuit analysis, test stand, experimental results, circuit modeling, and design of future systems are all discussed.

  10. Efficient Approaches for Propagating Hydrologic Forcing Uncertainty: High-Resolution Applications Over the Western United States

    NASA Astrophysics Data System (ADS)

    Hobbs, J.; Turmon, M.; David, C. H.; Reager, J. T., II; Famiglietti, J. S.

    2017-12-01

    NASA's Western States Water Mission (WSWM) combines remote sensing of the terrestrial water cycle with hydrological models to provide high-resolution state estimates for multiple variables. The effort includes both land surface and river routing models that are subject to several sources of uncertainty, including errors in the model forcing and model structural uncertainty. Computational and storage constraints prohibit extensive ensemble simulations, so this work outlines efficient but flexible approaches for estimating and reporting uncertainty. Calibrated by remote sensing and in situ data where available, we illustrate the application of these techniques in producing state estimates with associated uncertainties at kilometer-scale resolution for key variables such as soil moisture, groundwater, and streamflow.

  11. Sampling and modeling riparian forest structure and riparian microclimate

    Treesearch

    Bianca N.I. Eskelson; Paul D. Anderson; Hailemariam Temesgen

    2013-01-01

    Riparian areas are extremely variable and dynamic, and represent some of the most complex terrestrial ecosystems in the world. The high variability within and among riparian areas poses challenges in developing efficient sampling and modeling approaches that accurately quantify riparian forest structure and riparian microclimate. Data from eight stream reaches that are...

  12. A multichannel fiber optic photometer present performance and future developments

    NASA Technical Reports Server (NTRS)

    Barwig, H.; Schoembs, R.; Huber, G.

    1988-01-01

    A three-channel photometer for simultaneous multicolor observations was designed with the aim of enabling highly efficient photometry of fast variable objects such as cataclysmic variables. Experience with this instrument over a period of three years is presented. Aspects of the special techniques applied are discussed with respect to high precision photometry. In particular, the use of fiber optics is critically analyzed. Finally, the development of a new photometer concept is discussed.

  13. Security of a discretely signaled continuous variable quantum key distribution protocol for high rate systems.

    PubMed

    Zhang, Zheshen; Voss, Paul L

    2009-07-06

    We propose a continuous variable based quantum key distribution protocol that makes use of discretely signaled coherent light and reverse error reconciliation. We present a rigorous security proof against collective attacks with realistic lossy, noisy quantum channels, imperfect detector efficiency, and detector electronic noise. This protocol is promising for convenient, high-speed operation at link distances up to 50 km with the use of post-selection.

  14. Estimation of exciton reverse transfer for variable spectra and high efficiency in interlayer-based organic light-emitting devices

    NASA Astrophysics Data System (ADS)

    Liu, Shengqiang; Zhao, Juan; Huang, Jiang; Yu, Junsheng

    2016-12-01

    Organic light-emitting devices (OLEDs) with three different exciton adjusting interlayers (EALs), inserted between two complementary blue and yellow emitting layers, are fabricated to demonstrate the relationship between the EAL and device performance. The results show that variations in the type and thickness of the EAL provide different degrees of exciton adjustment and distribution control. However, we also find that reverse Dexter transfer of triplet excitons from the light-emitting layer to the EAL is an energy loss path, which detrimentally affects electroluminescent (EL) spectral performance and device efficiency in the different EAL-based devices. Based on exciton distribution and integration, an estimation of exciton reverse transfer is developed through a triplet energy level barrier to simulate the exciton behavior. Meanwhile, the estimation results also demonstrate the relationship between the EAL and device efficiency via a parameter of exciton reverse transfer probability. The estimation of exciton reverse transfer discloses a crucial role of the EALs in interlayer-based OLEDs in achieving variable EL spectra and high efficiency.

  15. Evaluation of range and distortion tolerance for high Mach number transonic fan stages. Task 2: Performance of a 1500-foot-per-second tip speed transonic fan stage with variable geometry inlet guide vanes and stator

    NASA Technical Reports Server (NTRS)

    Bilwakesh, K. R.; Koch, C. C.; Prince, D. C.

    1972-01-01

    A 0.5 hub/tip radius ratio compressor stage consisting of a 1500 ft/sec tip speed rotor, a variable camber inlet guide vane, and a variable stagger stator was designed and tested with undistorted inlet flow, flow with tip radial distortion, and flow with 90 degrees, one-per-rev, circumferential distortion. At the design speed and design IGV and stator setting, the design stage pressure ratio was achieved at a weight flow within 1% of the design flow. Analytical results on rotor tip shock structure, deviation angle, and part-span shroud losses at different operating conditions are presented. The variable geometry blading enabled efficient operation with adequate stall margin at the design condition and at 70% speed. Closing the inlet guide vanes to 40 degrees changed the speed-versus-weight-flow relationship along the stall line and thus provided the flexibility of operation at off-design conditions. Inlet flow distortion caused considerable losses in peak efficiency, efficiency on a constant throttle line through design pressure ratio at design speed, stall pressure ratio, and stall margin at the 0 degree IGV setting and high rotative speeds. The use of the 40 degree inlet guide vane setting enabled partial recovery of the stall margin over the standard constant throttle line.

  16. Multi-messenger Light Curves from Gamma-Ray Bursts in the Internal Shock Model

    NASA Astrophysics Data System (ADS)

    Bustamante, Mauricio; Heinze, Jonas; Murase, Kohta; Winter, Walter

    2017-03-01

    Gamma-ray bursts (GRBs) are promising as sources of neutrinos and cosmic rays. In the internal shock scenario, blobs of plasma emitted from a central engine collide within a relativistic jet and form shocks, leading to particle acceleration and emission. Motivated by present experimental constraints and sensitivities, we improve the predictions of particle emission by investigating time-dependent effects from multiple shocks. We produce synthetic light curves with different variability timescales that stem from properties of the central engine. For individual GRBs, qualitative conclusions about model parameters, neutrino production efficiency, and delays in high-energy gamma-rays can be deduced from inspection of the gamma-ray light curves. GRBs with fast time variability without additional prominent pulse structure tend to be efficient neutrino emitters, whereas GRBs with fast variability modulated by a broad pulse structure can be inefficient neutrino emitters and produce delayed high-energy gamma-ray signals. Our results can be applied to quantitative tests of the GRB origin of ultra-high-energy cosmic rays, and have the potential to impact current and future multi-messenger searches.

  17. Effect of trial-to-trial variability on optimal event-related fMRI design: Implications for Beta-series correlation and multi-voxel pattern analysis

    PubMed Central

    Abdulrahman, Hunar; Henson, Richard N.

    2016-01-01

    Functional magnetic resonance imaging (fMRI) studies typically employ rapid, event-related designs for behavioral reasons and for reasons associated with statistical efficiency. Efficiency is calculated from the precision of the parameters (Betas) estimated from a General Linear Model (GLM) in which trial onsets are convolved with a Hemodynamic Response Function (HRF). However, previous calculations of efficiency have ignored likely variability in the neural response from trial to trial, for example due to attentional fluctuations, or different stimuli across trials. Here we compare three GLMs in their efficiency for estimating average and individual Betas across trials as a function of trial variability, scan noise and Stimulus Onset Asynchrony (SOA): “Least Squares All” (LSA), “Least Squares Separate” (LSS) and “Least Squares Unitary” (LSU). Estimation of responses to individual trials in particular is important for both functional connectivity using “Beta-series correlation” and “multi-voxel pattern analysis” (MVPA). Our simulations show that the ratio of trial-to-trial variability to scan noise impacts both the optimal SOA and optimal GLM, especially for short SOAs < 5 s: LSA is better when this ratio is high, whereas LSS and LSU are better when the ratio is low. For MVPA, the consistency across voxels of trial variability and of scan noise is also critical. These findings not only have important implications for design of experiments using Beta-series regression and MVPA, but also statistical parametric mapping studies that seek only efficient estimation of the mean response across trials. PMID:26549299
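
    The contrast between LSA and LSU can be illustrated with a small simulation (a sketch under simplified assumptions: an invented gamma-shaped HRF, arbitrary SOA and noise levels, and LSS omitted for brevity; this is not the authors' code).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy haemodynamic response: a gamma-like kernel, normalized to unit peak.
t = np.arange(20.0)
hrf = t**5 * np.exp(-t)
hrf /= hrf.max()

n_trials, soa = 20, 8                    # stimulus onset asynchrony in scans
n_scans = n_trials * soa + len(hrf)
onsets = np.arange(n_trials) * soa

# "Least Squares All" (LSA): one convolved regressor per trial.
X_lsa = np.zeros((n_scans, n_trials))
for j, onset in enumerate(onsets):
    X_lsa[onset:onset + len(hrf), j] = hrf

# "Least Squares Unitary" (LSU): a single regressor for the mean response.
X_lsu = X_lsa.sum(axis=1, keepdims=True)

true_mean = 2.0
trial_betas = true_mean + rng.normal(0.0, 0.5, n_trials)  # trial-to-trial variability
y = X_lsa @ trial_betas + rng.normal(0.0, 0.2, n_scans)   # plus scan noise

b_lsa, *_ = np.linalg.lstsq(X_lsa, y, rcond=None)  # per-trial estimates
b_lsu, *_ = np.linalg.lstsq(X_lsu, y, rcond=None)  # mean response only
print(round(b_lsa.mean(), 2), round(b_lsu[0], 2))
```

    LSA recovers one Beta per trial, as needed for Beta-series correlation and MVPA, at the cost of a more collinear design at short SOAs; LSU estimates only the mean response across trials.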

  18. On scheduling task systems with variable service times

    NASA Astrophysics Data System (ADS)

    Maset, Richard G.; Banawan, Sayed A.

    1993-08-01

    Several strategies have been proposed for developing optimal and near-optimal schedules for task systems (jobs consisting of multiple tasks that can be executed in parallel). Most such strategies, however, implicitly assume deterministic task service times. We show that these strategies are much less effective when service times are highly variable. We then evaluate two strategies—one adaptive, one static—that have been proposed for retaining high performance despite such variability. Both strategies are extensions of critical path scheduling, which has been found to be efficient at producing near-optimal schedules. We found the adaptive approach to be quite effective.
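
    Critical path scheduling, mentioned above, prioritizes the longest chain of dependent tasks. A minimal sketch of computing that chain's length for a hypothetical task DAG (task names and deterministic service times invented for illustration):

```python
# Toy task DAG: task -> (service_time, predecessors).
tasks = {
    "a": (3, []),
    "b": (2, ["a"]),
    "c": (4, ["a"]),
    "d": (1, ["b", "c"]),
}

def earliest_finish(task):
    # Length of the longest dependency chain ending at `task`, inclusive.
    time, preds = tasks[task]
    return time + max((earliest_finish(p) for p in preds), default=0)

makespan = max(earliest_finish(t) for t in tasks)
print(makespan)  # a -> c -> d: 3 + 4 + 1 = 8
```

    With variable service times these deterministic estimates degrade, which is why an adaptive extension that recomputes priorities as actual times are observed can outperform a static schedule.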

  19. Scale model performance test investigation of exhaust system mixers for an Energy Efficient Engine /E3/ propulsion system

    NASA Technical Reports Server (NTRS)

    Kuchar, A. P.; Chamberlin, R.

    1980-01-01

    A scale model performance test was conducted as part of the NASA Energy Efficient Engine (E3) Program, to investigate the geometric variables that influence the aerodynamic design of exhaust system mixers for high-bypass, mixed-flow engines. Mixer configuration variables included lobe number, penetration and perimeter, as well as several cutback mixer geometries. Mixing effectiveness and mixer pressure loss were determined using measured thrust and nozzle exit total pressure and temperature surveys. Results provide a data base to aid the analysis and design development of the E3 mixed-flow exhaust system.

  20. Energy efficient fluid powered linear actuator with variable area

    DOEpatents

    Lind, Randall F.; Love, Lonnie J.

    2016-09-13

    Hydraulic actuation systems having variable displacements and energy recovery capabilities include cylinders with pistons disposed inside of barrels. When operating in energy consuming modes, high speed valves pressurize extension chambers or retraction chambers to provide enough force to meet or counteract an opposite load force. When operating in energy recovery modes, high speed valves return a working fluid from extension chambers or retraction chambers, which are pressurized by a load, to an accumulator for later use.

  1. A Drive Method of Permanent Magnet Synchronous Motor Using Torque Angle Estimation without Position Sensor

    NASA Astrophysics Data System (ADS)

    Tanaka, Takuro; Takahashi, Hisashi

    In some motor applications, it is very difficult to attach a position sensor to the motor within its housing. One example of such an application is the dental handpiece motor. In these designs, the motor must be driven with high efficiency at low speed and under variable load conditions without a position sensor. We developed a method to control a motor efficiently and smoothly at low speed without a position sensor. In this paper, a method in which a permanent magnet synchronous motor is controlled smoothly and efficiently by using torque angle control in synchronized operation is presented. Its usefulness is confirmed by experimental results. In conclusion, the proposed sensorless control method achieves highly efficient and smooth operation.

  2. Principle and Basic Characteristics of Variable-Magnetic-Force Memory Motors

    NASA Astrophysics Data System (ADS)

    Sakai, Kazuto; Yuki, Kazuaki; Hashiba, Yutaka; Takahashi, Norio; Yasui, Kazuya; Kovudhikulrungsri, Lilit

    A reduction in the power consumed by motors is required for energy saving in electrical appliances and electric vehicles (EVs). The motors used in these applications operate at variable speeds. Further, they operate with a small load in steady-state mode and a large load in start-up mode. A permanent magnet motor can operate at rated power with high efficiency. However, its efficiency is lower at small load or high speed because the large constant magnetic force results in substantial core loss. Furthermore, the flux-weakening current that depresses voltage at high speed leads to significant copper loss. Therefore, we have developed a new technique for controlling the magnetic force of the permanent magnet on the basis of the load or speed of the motor. In this paper, we propose a novel motor that can vary its magnetic flux and clarify its operating principle.

  3. Aerodynamic design and optimization in one shot

    NASA Technical Reports Server (NTRS)

    Ta'asan, Shlomo; Kuruvila, G.; Salas, M. D.

    1992-01-01

    This paper describes an efficient numerical approach for the design and optimization of aerodynamic bodies. As in classical optimal control methods, the present approach introduces a cost function and a costate variable (Lagrange multiplier) in order to achieve a minimum. High efficiency is achieved by using a multigrid technique to solve for all the unknowns simultaneously, while restricting work on a design variable to those grids on which its changes produce nonsmooth perturbations. Thus, the effort required to evaluate design variables that have nonlocal effects on the solution is confined to the coarse grids. However, if a variable has a nonsmooth local effect on the solution in some neighborhood, it is relaxed in that neighborhood on finer grids. The cost of solving the optimal control problem is shown to be approximately two to three times the cost of the equivalent analysis problem. Examples are presented to illustrate the application of the method to aerodynamic design and constraint optimization.

  4. Measuring Search Efficiency in Complex Visual Search Tasks: Global and Local Clutter

    ERIC Educational Resources Information Center

    Beck, Melissa R.; Lohrenz, Maura C.; Trafton, J. Gregory

    2010-01-01

    Set size and crowding affect search efficiency by limiting attention for recognition and attention against competition; however, these factors can be difficult to quantify in complex search tasks. The current experiments use a quantitative measure of the amount and variability of visual information (i.e., clutter) in highly complex stimuli (i.e.,…

  5. Comparison of Predicted Thermoelectric Energy Conversion Efficiency by Cumulative Properties and Reduced Variables Approaches

    NASA Astrophysics Data System (ADS)

    Linker, Thomas M.; Lee, Glenn S.; Beekman, Matt

    2018-06-01

    The semi-analytical methods of thermoelectric energy conversion efficiency calculation based on the cumulative properties approach and the reduced variables approach are compared for 21 high performance thermoelectric materials. Both approaches account for the temperature dependence of the material properties as well as the Thomson effect; thus, the predicted conversion efficiencies are generally lower than those based on the conventional thermoelectric figure of merit ZT for nearly all of the materials evaluated. The two methods also predict material energy conversion efficiencies that are in very good agreement with each other, even for large temperature differences (average percent difference of 4% with maximum observed deviation of 11%). The tradeoff between obtaining a reliable assessment of a material's potential for thermoelectric applications and the complexity of implementation of the three models, as well as the advantages of using more accurate modeling approaches in evaluating new thermoelectric materials, are highlighted.
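
    For reference, the conventional ZT-based estimate that the two methods improve upon is the constant-property formula eta = (dT/T_h) * (sqrt(1+ZT) - 1) / (sqrt(1+ZT) + T_c/T_h), with ZT usually evaluated at the mean temperature. A sketch with illustrative values, not taken from the paper:

```python
import math

def efficiency_zt(zt, t_hot, t_cold):
    # Conventional constant-property estimate of maximum thermoelectric efficiency.
    carnot = (t_hot - t_cold) / t_hot
    s = math.sqrt(1.0 + zt)
    return carnot * (s - 1.0) / (s + t_cold / t_hot)

# Example: ZT = 1 operating between 300 K and 800 K gives roughly 14.5%.
print(f"{efficiency_zt(1.0, 800.0, 300.0):.3f}")
```

    Because this formula ignores the Thomson effect and the temperature variation of properties, it tends to overestimate efficiency relative to the cumulative-properties and reduced-variables calculations compared above.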

  6. The role of environmental variables on the efficiency of water and sewerage companies: a case study of Chile.

    PubMed

    Molinos-Senante, María; Sala-Garrido, Ramón; Lafuente, Matilde

    2015-07-01

    This paper evaluates the efficiency of water and sewerage companies (WaSCs) by introducing the lack of service quality as undesirable outputs. It also investigates whether the production frontier of WaSCs exhibits overall constant returns to scale (CRS) or variable returns to scale (VRS) by using two different data envelopment analysis models. In a second-stage analysis, we study the influence of exogenous and endogenous variables on WaSC performance by applying non-parametric hypothesis tests. In a pioneering approach, the analysis covers 18 WaSCs from Chile, representing about 90% of the Chilean urban population. The results show that the technology of the sample studied is characterized overall by CRS. Peak water demand, the percentage of external workers, and the percentage of unbilled water are the factors affecting the efficiency of WaSCs. From a policy perspective, integrating undesirable outputs into the assessment of WaSC performance is crucial to avoid penalizing companies that provide high service quality to customers.
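
    In the single-input, single-output special case, CRS efficiency reduces to each unit's productivity ratio relative to the best observed unit; the general multi-input CCR/BCC models used in studies like this one instead require solving a linear program per unit. A minimal sketch with invented company data:

```python
# Hypothetical water companies: (input: staff, output: customers served, thousands).
units = {
    "A": (10, 100),
    "B": (20, 150),
    "C": (15, 180),
}

# Under CRS with one input and one output, efficiency is the unit's
# output/input ratio divided by the best ratio observed in the sample.
best = max(y / x for x, y in units.values())
eff = {name: (y / x) / best for name, (x, y) in units.items()}
print(eff)  # C defines the frontier (ratio 12), so eff["C"] == 1.0
```

    VRS (the BCC model) additionally constrains the frontier weights to sum to one, which lets scale-inefficient units appear efficient relative to peers of similar size.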

  7. Global map of solar power production efficiency, considering micro climate factors

    NASA Astrophysics Data System (ADS)

    Hassanpour Adeh, E.; Higgins, C. W.

    2017-12-01

    Natural resource degradation and greenhouse gas emissions are creating a global crisis. Renewable energy is the most reliable option to mitigate this environmental dilemma. The abundance of solar energy makes it a highly attractive source of electricity. The existing global spatial maps of available solar energy are created with various models that consider irradiation, latitude, cloud cover, elevation, shading, and aerosols, but neglect the influence of local meteorological conditions. In this research, the influences of microclimatological variables on solar energy productivity were investigated with an in-field study at the Rabbit Hills solar arrays near Oregon State University. The local studies were extended to a global level, where global maps of solar power were produced, taking the microclimate variables into account. These variables included temperature, relative humidity, wind speed, wind direction, and solar radiation. The energy balance approach was used to synthesize the data and compute the efficiencies. The results confirmed that solar power efficiency can be directly affected by air temperature and wind speed.

  8. The Technical Efficiency of Specialised Milk Farms: A Regional View

    PubMed Central

    Špička, Jindřich; Smutka, Luboš

    2014-01-01

    The aim of the article is to evaluate the production efficiency, and its determinants, of specialised dairy farming among the EU regions. In most European regions there is a relatively high significance of small specialised farms, including dairy farms. The DEA VRS method (data envelopment analysis with variable returns to scale) reveals efficient and inefficient regions, including the scale efficiency. In the next step, the two-sample t-test determines differences in economic and structural indicators between efficient and inefficient regions. The research reveals that substitution of labour by capital/contract work explains more than 30% of the variability of the farm net value added per AWU (annual work unit) income indicator. The significant economic determinants of production efficiency in specialised dairy farming are farm size, herd size, crop output per hectare, and productivity of energy and capital (at α = 0.01). Specialised dairy farms in efficient regions have significantly higher farm net value added per AWU than those in inefficient regions. Agricultural enterprises in inefficient regions have a more extensive structure and produce more noncommodity output (public goods). Specialised dairy farms in efficient regions have a slightly higher milk yield and specific livestock costs of feed, bedding, and veterinary services per livestock unit. PMID:25050408

  9. Building America Case Study: Impact of Improved Duct Insulation on Fixed-Capacity (SEER 13) and Variable-Capacity (SEER 22) Heat Pumps, Cocoa, Florida

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    C. Withers, J. Cummings, B. Nigusse, E. Martin

    A new generation of central, ducted variable-capacity heat pump systems has come on the market, promising very high cooling and heating efficiency. Instead of cycling on at full capacity and then cycling off when the thermostat is satisfied, they vary their cooling and heating output over a wide range (approximately 40 to 118% of nominal full capacity), thus staying 'on' for 60% to 100% more hours per day compared to fixed-capacity systems. Current Phase 4 experiments in an instrumented lab home with simulated occupancy evaluate the impact of duct R-value enhancement on the overall operating efficiency of the variable-capacity system compared to the fixed-capacity system.

  10. Detection of Nitrogen Content in Rubber Leaves Using Near-Infrared (NIR) Spectroscopy with Correlation-Based Successive Projections Algorithm (SPA).

    PubMed

    Tang, Rongnian; Chen, Xupeng; Li, Chuang

    2018-05-01

    Near-infrared spectroscopy is an efficient, low-cost technology that has potential as an accurate method for detecting the nitrogen content of natural rubber leaves. The successive projections algorithm (SPA) is a widely used variable selection method for multivariate calibration, which uses projection operations to select a variable subset with minimum multi-collinearity. However, due to the fluctuation of correlation between variables, high collinearity may still exist among non-adjacent variables of the subset obtained by basic SPA. Based on analysis of the correlation matrix of the spectral data, this paper proposes a correlation-based SPA (CB-SPA) that applies the successive projections algorithm in regions with consistent correlation. The results show that CB-SPA can select variable subsets with more valuable variables and less multi-collinearity. Meanwhile, models established on the CB-SPA subset outperform those on basic SPA subsets in predicting nitrogen content in terms of both cross-validation and external prediction. Moreover, CB-SPA is more computationally efficient: the time cost of its selection procedure is one-twelfth that of basic SPA.
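
    The projection step at the heart of basic SPA can be sketched as follows (a simplified illustration with synthetic data, not the CB-SPA variant): each iteration deflates the data matrix by the last selected column and picks the remaining column with the largest residual norm, which keeps the subset far from collinearity.

```python
import numpy as np

def spa_select(X, k, start=0):
    """Basic successive projections: greedily pick k columns of X with
    low mutual collinearity, starting from column `start`."""
    selected = [start]
    P = X.copy().astype(float)
    for _ in range(k - 1):
        v = P[:, selected[-1]]
        # Project every column onto the orthogonal complement of the last pick.
        P = P - np.outer(v, v @ P) / (v @ v)
        norms = np.linalg.norm(P, axis=0)
        norms[selected] = -1.0  # exclude already-selected columns
        selected.append(int(np.argmax(norms)))
    return selected

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 10))
X[:, 3] = X[:, 0] + 0.01 * rng.normal(size=50)  # column 3 nearly collinear with 0
picks = spa_select(X, 4, start=0)
print(picks)
```

    In this synthetic example, column 3 is built to be nearly collinear with column 0, so after column 0 is chosen its residual norm collapses and the selection avoids it.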

  11. Energy efficient fluid powered linear actuator with variable area and concentric chambers

    DOEpatents

    Lind, Randall F.; Love, Lonnie J.

    2016-11-15

    Hydraulic actuation systems having concentric chambers, variable displacements and energy recovery capabilities include cylinders with pistons disposed inside of barrels. When operating in energy consuming modes, high speed valves pressurize extension chambers or retraction chambers to provide enough force to meet or counteract an opposite load force. When operating in energy recovery modes, high speed valves return a working fluid from extension chambers or retraction chambers, which are pressurized by a load, to an accumulator for later use.

  12. Progress with variable cycle engines

    NASA Technical Reports Server (NTRS)

    Westmoreland, J. S.

    1980-01-01

    The evaluation of components of an advanced propulsion system for a future supersonic cruise vehicle is discussed. These components, a high performance duct burner for thrust augmentation and a low jet noise coannular exhaust nozzle, are part of the variable stream control engine. An experimental test program involving both isolated component and complete engine tests was conducted for the high performance, low emissions duct burner with excellent results. Nozzle model tests were completed which substantiate the inherent jet noise benefit associated with the unique velocity profile made possible by a coannular exhaust nozzle system on a variable stream control engine. Additional nozzle model performance tests have established high thrust efficiency levels at takeoff and supersonic cruise for this nozzle system. Large scale testing of these two critical components is conducted using an F100 engine as the testbed for simulating the variable stream control engine.

  13. A benchmarking method to measure dietary absorption efficiency of chemicals by fish.

    PubMed

    Xiao, Ruiyang; Adolfsson-Erici, Margaretha; Åkerman, Gun; McLachlan, Michael S; MacLeod, Matthew

    2013-12-01

    Understanding the dietary absorption efficiency of chemicals in the gastrointestinal tract of fish is important from both a scientific and a regulatory point of view. However, reported fish absorption efficiencies for well-studied chemicals are highly variable. In the present study, the authors developed and exploited an internal chemical benchmarking method that has the potential to reduce uncertainty and variability and, thus, to improve the precision of measurements of fish absorption efficiency. The authors applied the benchmarking method to measure the gross absorption efficiency for 15 chemicals with a wide range of physicochemical properties and structures. They selected 2,2',5,6'-tetrachlorobiphenyl (PCB53) and decabromodiphenyl ethane as absorbable and nonabsorbable benchmarks, respectively. Quantities of chemicals determined in fish were benchmarked to the fraction of PCB53 recovered in fish, and quantities of chemicals determined in feces were benchmarked to the fraction of decabromodiphenyl ethane recovered in feces. The performance of the benchmarking procedure was evaluated based on the recovery of the test chemicals and precision of absorption efficiency from repeated tests. Benchmarking did not improve the precision of the measurements; after benchmarking, however, the median recovery for 15 chemicals was 106%, and variability of recoveries was reduced compared with before benchmarking, suggesting that benchmarking could account for incomplete extraction of chemical in fish and incomplete collection of feces from different tests. © 2013 SETAC.

  14. AGT100 turbomachinery. [for automobiles

    NASA Technical Reports Server (NTRS)

    Tipton, D. L.; Mckain, T. F.

    1982-01-01

    High-performance turbomachinery components have been designed and tested for the AGT100 automotive engine. The required wide range of operation coupled with the small component size, compact packaging, and low cost of production provide significant aerodynamic challenges. Aerodynamic design and development testing of the centrifugal compressor and two radial turbines are described. The compressor achieved design flow, pressure ratio, and surge margin on the initial build. Variable inlet guide vanes have proven effective in modulating flow capacity and in improving part-speed efficiency. With optimum use of the variable inlet guide vanes, the initial efficiency goals have been demonstrated in the critical idle-to-70% gasifier speed range. The gasifier turbine exceeded initial performance goals and demonstrated good performance over a wide range. The radial power turbine achieved 'developed' efficiency goals on the first build.

  15. The efficiency of parameter estimation of latent path analysis using summated rating scale (SRS) and method of successive interval (MSI) for transformation of score to scale

    NASA Astrophysics Data System (ADS)

    Solimun; Fernandes, Adji Achmad Rinaldo; Arisoesilaningsih, Endang

    2017-12-01

    Research in various fields generally investigates systems that involve latent variables. One method to analyze a model representing such a system is path analysis. Latent variables measured using questionnaires with attitude-scale models yield data in the form of scores, which must be transformed into scale data before analysis. Path coefficients, the parameter estimators, are calculated from scale data obtained via the method of successive intervals (MSI) and the summated rating scale (SRS). This research identifies which data transformation method is better. Path coefficients with smaller variances are more efficient, so the transformation method whose scaled data yield path coefficients with smaller variances is the better one. Analysis of real data on the influence of the Attitude variable on Entrepreneurship Intention gives a relative efficiency (ER) of 1, indicating that the MSI and SRS transformations are equally efficient. On the other hand, for simulated data with high correlation between items (0.7-0.9), the MSI method is 1.3 times more efficient than the SRS method.

  16. Influence of dissolved organic matter concentration and composition on the removal efficiency of perfluoroalkyl substances (PFASs) during drinking water treatment.

    PubMed

    Kothawala, Dolly N; Köhler, Stephan J; Östlund, Anna; Wiberg, Karin; Ahrens, Lutz

    2017-09-15

    Drinking water treatment plants (DWTPs) are constantly adapting to a host of emerging threats, including the removal of micro-pollutants like perfluoroalkyl substances (PFASs), while concurrently considering how background levels of dissolved organic matter (DOM) influence their removal efficiency. Two adsorbents, namely anion exchange (AE) and granulated active carbon (GAC), have shown particular promise for PFAS removal, yet the influence of background levels of DOM remains poorly explored. Here we considered how the removal efficiency of 13 PFASs is influenced by two contrasting types of DOM at four concentrations, using both AE (Purolite A-600®) and GAC (Filtrasorb 400®). We placed emphasis on the pre-equilibrium conditions to gain better mechanistic insight into the dynamics between DOM, PFASs, and adsorbents. We found AE to be very effective at removing both PFASs and DOM, largely remaining resistant to even high levels of background DOM (8 mg carbon L⁻¹), and surprisingly found that smaller PFASs were removed slightly more efficiently than longer-chained counterparts. In contrast, PFAS removal efficiency with GAC was highly variable with PFAS chain length, often improving in the presence of DOM, but with variable response based on the type of DOM and PFAS chain length. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. The role of effort in moderating the anxiety-performance relationship: Testing the prediction of processing efficiency theory in simulated rally driving.

    PubMed

    Wilson, Mark; Smith, Nickolas C; Chattington, Mark; Ford, Mike; Marple-Horvat, Dilwyn E

    2006-11-01

    We tested some of the key predictions of processing efficiency theory using a simulated rally driving task. Two groups of participants were classified as either dispositionally high or low anxious based on trait anxiety scores and trained on a simulated driving task. Participants then raced individually on two similar courses under counterbalanced experimental conditions designed to manipulate the level of anxiety experienced. The effort exerted on the driving tasks was assessed through self-report (RSME), psychophysiological measures (pupil dilation) and visual gaze data. Efficiency was measured in terms of efficiency of visual processing (search rate) and driving control (variability of wheel and accelerator pedal) indices. Driving performance was measured as the time taken to complete the course. As predicted, increased anxiety had a negative effect on processing efficiency as indexed by the self-report, pupillary response and variability of gaze data. Predicted differences due to dispositional levels of anxiety were also found in the driving control and effort data. Although both groups of drivers performed worse under the threatening condition, the performance of the high trait anxious individuals was affected to a greater extent by the anxiety manipulation than the performance of the low trait anxious drivers. The findings suggest that processing efficiency theory holds promise as a theoretical framework for examining the relationship between anxiety and performance in sport.

  18. Efficiency Study of NLS Base-Year Design. RTI-22U-884-3.

    ERIC Educational Resources Information Center

    Moore, R. P.; And Others

    An efficiency study was conducted of the base-year design used for the National Longitudinal Study of the High School Class of 1972 (NLS). Finding the optimal design involved a search for the numbers of sample schools and students that would minimize the variance at a given cost. Twenty-one variables describing students' plans, attitudes,…

  19. Effects of process variables and kinetics on the degradation of 2,4-dichlorophenol using advanced reduction processes (ARP).

    PubMed

    Yu, Xingyue; Cabooter, Deirdre; Dewil, Raf

    2018-05-24

    This study aims at investigating the efficiency and kinetics of 2,4-DCP degradation via advanced reduction processes (ARP). Using UV light as the activation method, the highest degradation efficiency of 2,4-DCP was obtained when using sulphite as a reducing agent. The highest degradation efficiency was observed under alkaline conditions (pH = 10.0), for high sulphite dosage and UV intensity, and low 2,4-DCP concentration. For all process conditions, first-order reaction rate kinetics were applicable. A quadratic polynomial equation fitted by a Box-Behnken design was used as a statistical model and proved to be precise and reliable in describing the significance of the different process variables. The analysis of variance demonstrated that the experimental results were in good agreement with the predicted model (R² = 0.9343), and solution pH, sulphite dose and UV intensity were found to be key process variables in the sulphite/UV ARP. Consequently, the present study provides a promising approach for the efficient degradation of 2,4-DCP with fast degradation kinetics. Copyright © 2018 Elsevier B.V. All rights reserved.
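First-order kinetics of the kind reported above imply that ln(C₀/C) grows linearly with time, so the rate constant is the through-origin slope of that line. A minimal sketch with assumed, noise-free data (not the study's measurements):

```python
import math

# Hypothetical concentration-time data following first-order decay
# C(t) = C0 * exp(-k*t); values are illustrative, not from the study.
times = [0.0, 5.0, 10.0, 15.0, 20.0]   # minutes
c0, k_true = 20.0, 0.12                # assumed initial conc. and rate constant
concs = [c0 * math.exp(-k_true * t) for t in times]

# First-order fit: ln(C0/C) = k*t, i.e. a least-squares slope through the origin
xs = times
ys = [math.log(c0 / c) for c in concs]
k_est = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

print(round(k_est, 4))  # recovers the assumed rate constant, 0.12
```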

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    A new generation of central, ducted variable-capacity heat pump systems has come on the market, promising very high cooling and heating efficiency. Instead of cycling on at full capacity and then cycling off when the thermostat is satisfied, they vary their cooling and heating output over a wide range (approximately 40 to 118% of nominal full capacity), thus staying 'on' for 60% to 100% more hours per day compared to fixed-capacity systems. Current Phase 4 experiments in an instrumented lab home with simulated occupancy evaluate the impact of duct R-value enhancement on the overall operating efficiency of the variable-capacity system compared to the fixed-capacity system.

  1. Digital pre-compensation techniques enabling high-capacity bandwidth variable transponders

    NASA Astrophysics Data System (ADS)

    Napoli, Antonio; Berenguer, Pablo Wilke; Rahman, Talha; Khanna, Ginni; Mezghanni, Mahdi M.; Gardian, Lennart; Riccardi, Emilio; Piat, Anna Chiadò; Calabrò, Stefano; Dris, Stefanos; Richter, André; Fischer, Johannes Karl; Sommerkorn-Krombholz, Bernd; Spinnler, Bernhard

    2018-02-01

    Digital pre-compensation techniques are among the enablers for cost-efficient high-capacity transponders. In this paper we describe various methods to mitigate the impairments introduced by state-of-the-art components within modern optical transceivers. Numerical and experimental results validate their performance and benefits.

  2. Problems and programming for analysis of IUE high resolution data for variability

    NASA Technical Reports Server (NTRS)

    Grady, C. A.

    1981-01-01

    Observations of variability in stellar winds provide an important probe of their dynamics. It is crucial, however, to know that any variability seen in a data set can be clearly attributed to the star and not to instrumental or data processing effects. In the course of analysis of IUE high resolution data of alpha Cam and other O, B and Wolf-Rayet stars, several effects were found which cause spurious variability or spurious spectral features in our data. Programming was developed to partially compensate for these effects using the Interactive Data Language (IDL) on the LASP PDP 11/34. Use of an interactive language such as IDL is particularly suited to analysis of variability data, as it permits use of efficient programs coupled with the judgement of the scientist at each stage of processing.

  3. Rotordynamic Feasibility of a Conceptual Variable-Speed Power Turbine Propulsion System for Large Civil Tilt-Rotor Applications

    NASA Technical Reports Server (NTRS)

    Howard, Samuel

    2012-01-01

    A variable-speed power turbine concept is analyzed for rotordynamic feasibility in a Large Civil Tilt-Rotor (LCTR) class engine. Implementation of a variable-speed power turbine in a rotorcraft engine would enable high efficiency propulsion at the high forward velocities anticipated of large tilt-rotor vehicles. Therefore, rotordynamics is a critical issue for this engine concept. A preliminary feasibility study is presented herein to address this concern and identify if variable-speed is possible in a conceptual engine sized for the LCTR. The analysis considers critical speed placement in the operating speed envelope, stability analysis up to the maximum anticipated operating speed, and potential unbalance response amplitudes to determine that a variable-speed power turbine is likely to be challenging, but not impossible to achieve in a tilt-rotor propulsion engine.

  4. Service-oriented workflow to efficiently and automatically fulfill products in a highly individualized web and mobile environment

    NASA Astrophysics Data System (ADS)

    Qiao, Mu

    2015-03-01

    Service Oriented Architecture (SOA) is widely used in building flexible and scalable web sites and services. In most of the web and mobile photo book and gifting business space, the products ordered are highly variable, without a standard template into which texts or images can be substituted as in commercial variable data printing. In this paper, the author describes an SOA workflow in a multi-site, multi-product-line fulfillment system where three major challenges are addressed: utilization of hardware and equipment, high automation with fault recovery, and scalability and flexibility under order volume fluctuation.

  5. High efficiency and high-energy intra-cavity beam shaping laser

    NASA Astrophysics Data System (ADS)

    Yang, Hailong; Meng, Junqing; Chen, Weibiao

    2015-09-01

    We present an intra-cavity laser beam shaping technique, with theory and experiment, to obtain a flat-top-like beam with high pulse energy. A radial birefringent element (RBE) was used in a crossed Porro prism polarization output coupling resonator to modulate the phase delay radially. The reflectivity of a polarizer used as an output mirror varied radially. A flat-top-like beam with 72.5 mJ pulse energy and 11 ns duration at 20 Hz was achieved by a side-pumped Nd:YAG zigzag slab laser, and the optical-to-optical conversion efficiency was 17.3%.

  6. Application of several variable-valve-timing concepts to an LHR engine

    NASA Technical Reports Server (NTRS)

    Morel, T.; Keribar, R.; Sawlivala, M.; Hakim, N.

    1987-01-01

    The paper discusses advantages provided by electronically controlled hydraulically activated valves (ECVs) when applied to low heat rejection (LHR) engines. The ECV concept provides additional engine control flexibility by allowing for variable valve timing as a function of speed and load, or for a given transient condition. The results of a study carried out to assess the benefits that this flexibility can offer to an LHR engine indicated that, when judged on the benefits to BSFC, volumetric efficiency, and peak firing pressure, ECVs would provide only modest benefits in comparison to conventional valve profiles. It is noted, however, that once installed on the engine, the ECVs would permit a whole range of more sophisticated variable valve timing strategies not otherwise possible, such as high compression cranking, engine braking, cylinder cutouts, and volumetric efficiency tuning with engine speed.

  7. Heart failure in primary care: co-morbidity and utilization of health care resources.

    PubMed

    Carmona, Montserrat; García-Olmos, Luis M; García-Sagredo, Pilar; Alberquilla, Ángel; López-Rodríguez, Fernando; Pascual, Mario; Muñoz, Adolfo; Salvador, Carlos H; Monteagudo, José L; Otero-Puime, Ángel

    2013-10-01

    In order to ensure proper management of primary care (PC) services, the efficiency of the health professionals tasked with such services must be known. Patients with heart failure (HF) are characterized by advanced age, high co-morbidity and high resource utilization. To ascertain PC resource utilization by HF patients and variability in the management of such patients by GPs. Descriptive, cross-sectional study targeting a population attended by 129 GPs over the course of 1 year. All patients with diagnosis of HF in their clinical histories were included, classified using the Adjusted Clinical Group system and then grouped into six resource utilization bands (RUBs). Resource utilization and Efficiency Index were both calculated. One hundred per cent of patients with HF were ranked in RUBs 3, 4 and 5. The highest GP visit rate was 20 and the lowest in excess of 10 visits per year. Prescription drug costs for these patients ranged from €885 to €1422 per patient per year. Health professional efficiency varied notably, even after adjustment for co-morbidity (Efficiency Index Variation Ratio of 28.27 for visits and 404.29 for prescription drug cost). Patients with HF register a high utilization of resources, and there is great variability in the management of such patients by health professionals, which cannot be accounted for by the degree of case complexity.

  8. Sobol′ sensitivity analysis of NAPL-contaminated aquifer remediation process based on multiple surrogates

    NASA Astrophysics Data System (ADS)

    Luo, Jiannan; Lu, Wenxi

    2014-06-01

    Sobol′ sensitivity analyses based on different surrogates were performed on a trichloroethylene (TCE)-contaminated aquifer to assess the sensitivity of the design variables of remediation duration, surfactant concentration and injection rates at four wells to remediation efficiency. First, the surrogate models of a multi-phase flow simulation model were constructed by applying radial basis function artificial neural network (RBFANN) and Kriging methods, and the two models were then compared. Based on the developed surrogate models, the Sobol′ method was used to calculate the sensitivity indices of the design variables which affect the remediation efficiency. The coefficient of determination (R²) and the mean square error (MSE) of these two surrogate models demonstrated that both models had acceptable approximation accuracy; furthermore, the approximation accuracy of the Kriging model was slightly better than that of the RBFANN model. Sobol′ sensitivity analysis results demonstrated that the remediation duration was the most important variable influencing remediation efficiency, followed by rates of injection at wells 1 and 3, while rates of injection at wells 2 and 4 and the surfactant concentration had negligible influence on remediation efficiency. In addition, high-order sensitivity indices were all smaller than 0.01, which indicates that interaction effects of these six factors were practically insignificant. The proposed surrogate-based Sobol′ sensitivity analysis is an effective tool for calculating sensitivity indices, because it shows the relative contribution of the design variables (individually and in interaction) to the output performance variability with a limited number of runs of a computationally expensive simulation model. The sensitivity analysis results lay a foundation for optimization of the groundwater remediation process.
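First-order Sobol′ indices of the kind computed above can be estimated by Monte Carlo on paired sample matrices. A sketch using a cheap linear toy function in place of the study's surrogate (an assumption; the estimator is the standard pick-freeze form, and the analytic indices for this toy are 16/21, 4/21 and 1/21):

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy stand-in for the expensive simulator (assumption): output as a
    # linear function of three design variables on [0, 1].
    return 4.0 * x[:, 0] + 2.0 * x[:, 1] + 1.0 * x[:, 2]

n, d = 100_000, 3
A = rng.random((n, d))          # two independent sample matrices
B = rng.random((n, d))
fA, fB = model(A), model(B)
var = fA.var()

s1 = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]         # "pick-freeze": column i taken from B
    s1.append(float(np.mean(fB * (model(ABi) - fA)) / var))

# Analytic first-order indices for this toy: 16/21, 4/21, 1/21
print([round(s, 2) for s in s1])
```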

  9. Summary of NACA/NASA Variable-Sweep Research and Development Leading to the F-111 (TFX)

    NASA Technical Reports Server (NTRS)

    1966-01-01

    On November 24, 1962, the United States ushered in a new era of aircraft development when the Department of Defense placed an initial development contract for the world's first supersonic variable-sweep aircraft - the F-111 or so-called TFX (tactical fighter-experimental). The multimission performance potential of this concept is made possible by virtue of the variable-sweep wing - a research development of the NASA and its predecessor, the NACA. With the wing swept forward into the maximum span position, the aircraft configuration is ideal for efficient subsonic flight. This provides long-range combat and ferry mission capability, short-field landing and take-off characteristics, and compatibility with naval aircraft carrier operation. With the wing swept back to about 65° of sweep, the aircraft has optimum supersonic performance to accomplish high-altitude supersonic bombing or interceptor missions. With the wing folded still further back, the aircraft provides low drag and low gust loads during supersonic flight "on the deck" (altitudes under 1000 feet). The concept of wing variable sweep, of course, is not new. Initial studies were conducted at Langley as early as 1945, and two subsonic variable-sweep prototypes (Bell X-5 and Grumman XF-10F) were flown as early as 1951/52. These were subsonic aircraft, however, and the great advantage of variable sweep in improving supersonic flight efficiency could not be realized. Further, the structures of these early aircraft were complicated by the necessity for translating the wing fore and aft to achieve satisfactory longitudinal stability as the wing sweep was varied. Late in 1958 a research breakthrough at Langley provided the technology for designing a variable-sweep wing having satisfactory stability through a wide sweep angle range without the necessity for fore and aft translation of the wing.
In this same period there evolved within the military services an urgent requirement for a versatile fighter-bomber that could fly efficiently at subsonic and supersonic speeds at high altitude and "on the deck". The application of variable sweep to this mission requirement then became obvious.

  10. Identification of variables influencing pharmaceutical interventions to improve medication review efficiency.

    PubMed

    Cornuault, Lauriane; Mouchel, Victorine; Phan Thi, Thuy-Tan; Beaussier, Hélène; Bézie, Yvonnick; Corny, Jennifer

    2018-06-02

    Background Clinical pharmacists' involvement has improved patients' care by suggesting therapeutic optimizations. However, budget restrictions require a prioritization of these activities to focus resources on patients more at risk of medication errors. Objective The aim of our study was to identify variables influencing the formulation of pharmaceutical interventions, to improve medication review efficiency. Setting This study was conducted in the medical wards of a 643 acute-bed hospital in Paris, France. Methods All hospital medical prescriptions of all patients admitted within four medical wards (cardiology, rheumatology, neurology, vascular medicine) were analyzed. The study was conducted in each ward for 2 weeks, during 4 weeks. For each patient, variables prospectively collected were: age, gender, weight, emergency admission, number of high-alert medications and of total drugs prescribed, care unit, serum creatinine. Number of pharmaceutical interventions (PIs) and their type were reported. Main outcome measures Variables influencing the number of pharmaceutical interventions during medication review were identified using simple and multiple linear regressions. Results A total of 2328 drug prescriptions (303 patients, mean age 70.6 years) were analyzed. Mean number of hospital drug prescriptions was 7.9. A total of 318 PIs were formulated. Most frequent PIs were drug omission (n = 88, 27.7%), overdosing (n = 69, 21.7%), and underdosing (n = 51, 16.0%). Among the variables studied, age, serum creatinine level, number of high-alert medications prescribed and total number of drugs prescribed were significantly associated with the formulation of pharmaceutical interventions (adjusted R² = 0.34). Conclusions This study identified variables (age, serum creatinine level, number of high-alert medications, number of prescribed drugs) that may help institutions/pharmacists target their reviews towards patients most likely to require pharmacist interventions.
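Multiple linear regression of the kind used above amounts to ordinary least squares plus an adjusted R². A sketch on synthetic stand-in data that mirrors the study's predictors (all numbers and coefficients are assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: interventions per patient modeled from age,
# serum creatinine, high-alert medications, and total drugs prescribed.
n = 303
X = np.column_stack([
    rng.normal(70, 12, n),   # age (years)
    rng.normal(90, 25, n),   # serum creatinine (umol/L)
    rng.poisson(2, n),       # number of high-alert medications
    rng.poisson(8, n),       # total drugs prescribed
])
beta_true = np.array([0.01, 0.005, 0.3, 0.1])   # assumed effects
y = X @ beta_true + rng.normal(0, 0.5, n)       # outcome with noise

# Ordinary least squares with an intercept column
Xd = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
resid = y - Xd @ beta
r2 = 1 - resid.var() / y.var()
p = X.shape[1]
r2_adj = 1 - (1 - r2) * (n - 1) / (n - p - 1)   # adjusted R^2

print(round(r2_adj, 2))
```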

  11. Numerical investigation of the variable nozzle effect on the mixed flow turbine performance characteristics

    NASA Astrophysics Data System (ADS)

    Meziri, B.; Hamel, M.; Hireche, O.; Hamidou, K.

    2016-09-01

    There are various ways of matching a turbocharger to an engine; the variable nozzle turbine is the most significant. The turbine design must be economic, with high efficiency and large capacity over a wide range of operational conditions. These design intents serve to decrease the thermal load and improve the thermal efficiency of the engine. This paper presents an original design method for a variable nozzle vane for mixed flow turbines, developed from previous experimental and numerical studies. The new device is evaluated with a numerical simulation over a wide range of rotational speeds, pressure ratios, and vane angles. The compressible turbulent steady flow is solved using the ANSYS CFX software. The numerical results agree well with experimental data in the nozzleless configuration. In the variable nozzle case, the results show that the turbine performance characteristics are well accepted in different open positions and improved significantly in the low speed regime and at low pressure ratio.

  12. Cell-centered high-order hyperbolic finite volume method for diffusion equation on unstructured grids

    NASA Astrophysics Data System (ADS)

    Lee, Euntaek; Ahn, Hyung Taek; Luo, Hong

    2018-02-01

    We apply a hyperbolic cell-centered finite volume method to solve a steady diffusion equation on unstructured meshes. This method, originally proposed by Nishikawa using a node-centered finite volume method, reformulates the elliptic nature of viscous fluxes into a set of augmented equations that makes the entire system hyperbolic. We introduce an efficient and accurate solution strategy for the cell-centered finite volume method. To obtain high-order accuracy for both solution and gradient variables, we use a successive-order solution reconstruction: constant, linear, and quadratic (k-exact) reconstruction with an efficient reconstruction stencil, a so-called wrapping stencil. By virtue of the cell-centered scheme, the source term evaluation is greatly simplified regardless of the solution order. For uniform schemes, we obtain the same order of accuracy, i.e., first, second, and third orders, for both the solution and its gradient variables. For hybrid schemes, recycling the gradient variable information for solution variable reconstruction makes one additional order of accuracy, i.e., second, third, and fourth orders, possible for the solution variable with less computational work than needed for uniform schemes. In general, the hyperbolic method can be an effective solution technique for diffusion problems, but instability is also observed for discontinuous diffusion coefficient cases, which calls for further investigation of monotonicity-preserving hyperbolic diffusion methods.
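Order-of-accuracy claims like those above are typically verified by grid refinement: given errors e on two mesh sizes h, the observed order is p = log(e_coarse/e_fine) / log(h_coarse/h_fine). A small worked example with illustrative error values (consistent with a third-order scheme, not taken from the paper):

```python
import math

# Observed order of accuracy from errors on two mesh refinements.
def observed_order(e_coarse, e_fine, h_coarse, h_fine):
    return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)

# Illustrative values: error drops 8x when h halves -> third order
p = observed_order(e_coarse=8.0e-4, e_fine=1.0e-4, h_coarse=0.02, h_fine=0.01)
print(round(p, 2))  # → 3.0
```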

  13. Separation of natural product using columns packed with Fused-Core particles.

    PubMed

    Yang, Peilin; Litwinski, George R; Pursch, Matthias; McCabe, Terry; Kuppannan, Krishna

    2009-06-01

    Three HPLC columns packed with 3 µm, sub-2 µm, and 2.7 µm Fused-Core (superficially porous) particles were compared in separation performance using two natural product mixtures containing 15 structurally related components. The Ascentis Express C18 column packed with Fused-Core particles showed an 18% increase in column efficiency (theoretical plates), a 76% increase in plate number per meter, a 65% enhancement in separation speed and a 19% increase in back pressure compared to the Atlantis T3 C18 column packed with 3 µm particles. Column lot-to-lot variability for critical pairs in the natural product mixture was observed with both columns, with the Atlantis T3 column exhibiting a higher degree of variability. The Ascentis Express column was also compared with the Acquity BEH column packed with sub-2 µm particles. Although the peak efficiencies obtained by the Ascentis Express column were only about 74% of those obtained by the Acquity BEH column, the 50% lower back pressure and comparable separation speed allowed high-efficiency and high-speed separation to be performed using conventional HPLC instrumentation.
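Column efficiency (theoretical plates) and plates per meter, as compared above, follow from the standard half-height peak width formula N = 5.54 (t_R / w_½)². A sketch with illustrative chromatographic values (not the paper's measurements):

```python
# Theoretical plate count from retention time and peak width at half height,
# N = 5.54 * (t_R / w_half)^2. Numbers below are illustrative only.
def plates(t_r, w_half):
    return 5.54 * (t_r / w_half) ** 2

def plates_per_meter(n_plates, column_length_m):
    return n_plates / column_length_m

n = plates(t_r=4.8, w_half=0.05)            # both in minutes; ~51,000 plates
print(round(n))
print(round(plates_per_meter(n, 0.10)))     # for a 10 cm column
```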

  14. Design considerations of high-performance InGaAs/InP single-photon avalanche diodes for quantum key distribution.

    PubMed

    Ma, Jian; Bai, Bing; Wang, Liu-Jun; Tong, Cun-Zhu; Jin, Ge; Zhang, Jun; Pan, Jian-Wei

    2016-09-20

    InGaAs/InP single-photon avalanche diodes (SPADs) are widely used in practical applications requiring near-infrared photon counting, such as quantum key distribution (QKD). Photon detection efficiency and dark count rate are the intrinsic parameters of InGaAs/InP SPADs, because their performance cannot be improved using different quenching electronics under the same operating conditions. After modeling these parameters and developing a simulation platform for InGaAs/InP SPADs, we investigate the semiconductor structure design and optimization. Photon detection efficiency and dark count rate depend strongly on the absorption layer thickness, multiplication layer thickness, excess bias voltage, and temperature. By evaluating the decoy-state QKD performance, the variables for SPAD design and operation can be globally optimized. Such optimization from the perspective of specific applications can provide an effective approach to designing high-performance InGaAs/InP SPADs.

  15. Optimization of Dish Solar Collectors with and without Secondary Concentrators

    NASA Technical Reports Server (NTRS)

    Jaffe, L. D.

    1982-01-01

    Methods for optimizing parabolic dish solar collectors and the consequent effects of various optical, thermal, mechanical, and cost variables are examined. The most important performance optimization is adjusting the receiver aperture to maximize collector efficiency. Other parameters that can be adjusted to optimize efficiency include focal length, and, if a heat engine is used, the receiver temperature. The efficiency maxima associated with focal length and receiver temperature are relatively broad; it may, accordingly, be desirable to design somewhat away from the maxima. Performance optimization is sensitive to the slope and specularity errors of the concentrator. Other optical and thermal variables affecting optimization are the reflectance and blocking factor of the concentrator, the absorptance and losses of the receiver, and, if a heat engine is used, the shape of the engine efficiency versus temperature curve. Performance may sometimes be improved by use of an additional optical element (a secondary concentrator) or a receiver window if the errors of the primary concentrator are large or the receiver temperature is high.
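The receiver-aperture optimization described above balances intercepted flux against thermal loss from the aperture. A toy model (all parameters assumed, not from the paper): a Gaussian flux spot of rms width sigma at the focal plane, a thermal loss proportional to aperture area, and a brute-force scan for the efficiency-maximizing radius:

```python
import math

# Toy collector model (assumption): intercepted fraction of a Gaussian flux
# spot grows with aperture radius r, thermal loss grows with aperture area.
sigma = 0.02        # m, effective optical error spot size (assumed)
loss_coeff = 20.0   # relative thermal loss per m^2 of aperture (assumed)

def efficiency(r):
    intercept = 1.0 - math.exp(-r * r / (2.0 * sigma * sigma))
    return intercept - loss_coeff * math.pi * r * r

# Brute-force scan over aperture radii from 0.1 mm to ~0.2 m
best_r = max((k * 1e-4 for k in range(1, 2000)), key=efficiency)
print(round(best_r, 4), round(efficiency(best_r), 3))
```

For these assumed parameters the optimum sits near 2.4 sigma: opening the aperture further gains little extra flux but keeps adding thermal loss, which is why the efficiency maximum is relatively broad, as the abstract notes.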

  16. The diversity of ¹³C isotope discrimination in a Quercus robur full-sib family is associated with differences in intrinsic water use efficiency, transpiration efficiency, and stomatal conductance.

    PubMed

    Roussel, Magali; Dreyer, Erwin; Montpied, Pierre; Le-Provost, Grégoire; Guehl, Jean-Marc; Brendel, Oliver

    2009-01-01

    ¹³C discrimination in organic matter with respect to atmospheric CO₂ (Δ¹³C) is under tight genetic control in many plant species, including the pedunculate oak (Quercus robur L.) full-sib progeny used in this study. Δ¹³C is expected to reflect intrinsic water use efficiency, but this assumption requires confirmation due to potential interferences with mesophyll conductance to CO₂, or post-photosynthetic discrimination. In order to dissect the observed Δ¹³C variability in this progeny, six genotypes that have previously been found to display extreme phenotypic values of Δ¹³C [either very high ('high Δ') or low ('low Δ') phenotype] were selected, and transpiration efficiency (TE; accumulated biomass/transpired water), net CO₂ assimilation rate (A), stomatal conductance for water vapour (g_s), and intrinsic water use efficiency (W_i = A/g_s) were compared with Δ¹³C in bulk leaf matter, wood, and cellulose in wood. As expected, 'high Δ' displayed higher values of Δ¹³C not only in bulk leaf matter, but also in wood and cellulose. This confirmed the stability of the genotypic differences in Δ¹³C recorded earlier. 'High Δ' also displayed lower TE, lower W_i, and higher g_s. A small difference was detected in photosynthetic capacity but none in mesophyll conductance to CO₂. 'High Δ' and 'low Δ' displayed very similar leaf anatomy, except for higher stomatal density in 'high Δ'. Finally, diurnal courses of leaf gas exchange revealed a higher g_s in 'high Δ' in the morning than in the afternoon, when the difference decreased. The gene ERECTA, involved in the control of water use efficiency, leaf differentiation, and stomatal density, displayed higher expression levels in 'low Δ'. In this progeny, the variability of Δ¹³C correlated closely with that of W_i and TE. Genetic differences in Δ¹³C and W_i can be ascribed to differences in stomatal conductance and stomatal density, but not in photosynthetic capacity.

  17. Realistic and efficient 2D crack simulation

    NASA Astrophysics Data System (ADS)

    Yadegar, Jacob; Liu, Xiaoqing; Singh, Abhishek

    2010-04-01

    Although numerical algorithms for 2D crack simulation have been studied in Modeling and Simulation (M&S) and computer graphics for decades, realism and computational efficiency are still major challenges. In this paper, we introduce a high-fidelity, scalable, adaptive and efficient runtime 2D crack/fracture simulation system by applying the mathematically elegant Peano-Cesaro triangular meshing/remeshing technique to model the generation of shards/fragments. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level-of-detail. The generated binary decomposition tree also provides an efficient neighbor retrieval mechanism used for mesh element splitting and merging, with minimal memory requirements essential for realistic 2D fragment formation. Upon load impact/contact/penetration, a number of factors including impact angle, impact energy, and material properties are all taken into account to produce the criteria of crack initialization, propagation, and termination, leading to realistic fractal-like rubble/fragment formation. The aforementioned parameters are used as variables of probabilistic models of crack/shard formation, making the proposed solution highly adaptive by allowing machine learning mechanisms to learn the optimal values for the variables/parameters based on prior benchmark data generated by off-line physics-based simulation solutions that produce accurate fractures/shards, though at a highly non-real-time pace. Crack/fracture simulation has been conducted on various load impacts with different initial locations at various impulse scales. The simulation results demonstrate that the proposed system has the capability to realistically and efficiently simulate 2D crack phenomena (such as window shattering and shards generation) with diverse potential in military and civil M&S applications such as training and mission planning.

  18. Efficiency optimization of a closed indirectly fired gas turbine cycle working under two variable-temperature heat reservoirs

    NASA Astrophysics Data System (ADS)

    Ma, Zheshu; Wu, Jieer

    2011-08-01

    Indirectly or externally fired gas turbines (IFGT or EFGT) are interesting technologies under development for small and medium scale combined heat and power (CHP) supplies in combination with micro gas turbine technologies. The emphasis is primarily on the utilization of the waste heat from the turbine in a recuperative process and on the possibility of burning biomass or even "dirty" fuel by employing a high temperature heat exchanger (HTHE) to avoid the combustion gases passing through the turbine. In this paper, finite time thermodynamics is employed in the performance analysis of a class of irreversible closed IFGT cycles coupled to variable temperature heat reservoirs. Based on the derived analytical formulae for the dimensionless power output and efficiency, the efficiency optimization is performed in two aspects. The first is to search the optimum heat conductance distribution, corresponding to the efficiency optimization, among the hot- and cold-side heat reservoirs and the high temperature heat exchangers for a fixed total heat exchanger inventory. The second is to search the optimum thermal capacitance rate matching, corresponding to the maximum efficiency, between the working fluid and the high-temperature heat reservoir for a fixed ratio of the thermal capacitance rates of the two heat reservoirs. The influences of some design parameters on the optimum heat conductance distribution, the optimum thermal capacitance rate matching and the maximum power output, which include the inlet temperature ratio of the two heat reservoirs, the efficiencies of the compressor and the gas turbine, and the total pressure recovery coefficient, are provided by numerical examples. The power plant configuration under optimized operating conditions leads to a smaller size, including the compressor, turbine, two heat reservoirs and the HTHE.

  19. Review: Efficiency of Physical and Chemical Treatments on the Inactivation of Dairy Bacteriophages

    PubMed Central

    Guglielmotti, Daniela M.; Mercanti, Diego J.; Reinheimer, Jorge A.; Quiberoni, Andrea del L.

    2011-01-01

    Bacteriophages can cause great economic losses due to fermentation failure in dairy plants. Hence, physical and chemical treatments of raw material and/or equipment are mandatory to keep phage levels as low as possible. Regarding the thermal treatments used to kill pathogenic bacteria or achieve longer shelf-life of dairy products, neither low temperature long time nor high temperature short time pasteurization was able to inactivate most lactic acid bacteria (LAB) phages. Even though most phages did not survive 90°C for 2 min, some resisted 90°C for more than 15 min (conditions suggested by the International Dairy Federation for complete phage destruction). Among the biocides tested, ethanol showed variable effectiveness in phage inactivation, since only phages infecting dairy cocci and Lactobacillus helveticus were reasonably inactivated by this alcohol, whereas isopropanol was in all cases highly ineffective. In turn, peracetic acid has consistently proved to be very fast and efficient at inactivating dairy phages, whereas the efficiency of sodium hypochlorite was variable, even among different phages infecting the same LAB species. Both alkaline chloride foam and ethoxylated nonylphenol with phosphoric acid were remarkably efficient, a trait probably related to their highly alkaline or acidic pH values in solution, respectively. Photocatalysis using UV light and TiO2 has been recently reported as a feasible option to industrially inactivate phages infecting diverse LAB species. Processes involving high pressure have barely been used for phage inactivation, but until now most studied phages have revealed high resistance to these treatments. To conclude, and given the great phage diversity found in dairies, it is always advisable to combine different anti-phage treatments (biocides, heat, high pressure, photocatalysis), rather than using them separately at extreme conditions. PMID:22275912
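Inactivation efficiency of the kind surveyed above is usually reported as a log₁₀ reduction in phage titer. A minimal sketch with illustrative counts (not data from the review):

```python
import math

# Log10 reduction of phage titer after a treatment. Counts are illustrative,
# e.g. plaque-forming units per mL before and after treatment.
def log_reduction(n_before, n_after):
    return math.log10(n_before / n_after)

print(log_reduction(1e7, 1e3))  # → 4.0 (a 4-log reduction)
```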

  20. Factors influencing the ablative efficiency of high intensity focused ultrasound (HIFU) treatment for adenomyosis: A retrospective study.

    PubMed

    Gong, Chunmei; Yang, Bin; Shi, Yarong; Liu, Zhongqiong; Wan, Lili; Zhang, Hong; Jiang, Denghua; Zhang, Lian

    2016-08-01

Objectives The aim of this study was to investigate factors affecting the ablative efficiency of high intensity focused ultrasound (HIFU) for adenomyosis. Materials and methods In all, 245 patients with adenomyosis who underwent ultrasound guided HIFU (USgHIFU) were retrospectively reviewed. All patients underwent dynamic contrast-enhanced magnetic resonance imaging (MRI) before and after HIFU treatment. The non-perfused volume (NPV) ratio, energy efficiency factor (EEF) and greyscale change were set as dependent variables, while the factors possibly affecting ablation efficiency were set as independent variables. These variables were used to build multiple regression models. Results A total of 245 patients with adenomyosis successfully completed HIFU treatment. Enhancement type on T1 weighted image (WI), abdominal wall thickness, volume of adenomyotic lesion, the number of hyperintense points, location of the uterus, and location of adenomyosis all had a linear relationship with the NPV ratio. Distance from skin to the adenomyotic lesion's ventral side, enhancement type on T1WI, volume of adenomyotic lesion, abdominal wall thickness, and signal intensity on T2WI all had a linear relationship with EEF. Location of the uterus and abdominal wall thickness also both had a linear relationship with greyscale change. Conclusion The enhancement type on T1WI, signal intensity on T2WI, volume of adenomyosis, location of the uterus and adenomyosis, number of hyperintense points, abdominal wall thickness, and distance from the skin to the adenomyotic lesion's ventral side can all be used as predictors of the ablative efficiency of HIFU treatment for adenomyosis.

  1. Identifying the influential aquifer heterogeneity factor on nitrate reduction processes by numerical simulation

    NASA Astrophysics Data System (ADS)

    Jang, E.; He, W.; Savoy, H.; Dietrich, P.; Kolditz, O.; Rubin, Y.; Schüth, C.; Kalbacher, T.

    2017-01-01

Nitrate reduction reactions in groundwater systems are strongly influenced by various aquifer heterogeneity factors that affect the transport of chemical species, the spatial distribution of redox reactive substances and, as a result, the overall nitrate reduction efficiency. In this study, we investigated the influence of physical and chemical aquifer heterogeneity, with a focus on nitrate transport and redox transformation processes. A numerical modeling study simulating coupled hydrological-geochemical aquifer heterogeneity was conducted in order to improve our understanding of the influence of aquifer heterogeneity on nitrate reduction reactions and to identify the most influential aquifer heterogeneity factors throughout the simulation. Results show that the most influential aquifer heterogeneity factors can change over time. With an abundant presence of electron donors in the highly permeable zones (initial stage), physical aquifer heterogeneity significantly influences nitrate reduction, since it enables the preferential transport of nitrate to these zones and enhances mixing of reactive partners. Chemical aquifer heterogeneity plays a comparatively minor role. Increasing the spatial variability of the hydraulic conductivity also increases the nitrate removal efficiency of the system. However, ignoring chemical aquifer heterogeneity can lead to an underestimation of nitrate removal in the long term. As the spatial variability of the electron donor, i.e. chemical heterogeneity, increases, the number of "hot spots", i.e. zones with comparably higher reactivity, also increases. Hence, nitrate removal efficiencies will also be spatially variable, but overall removal efficiency will be sustained if longer time scales are considered and nitrate fronts reach these high-reactivity zones.

  2. A method for predicting optimized processing parameters for surfacing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dupont, J.N.; Marder, A.R.

    1994-12-31

Welding is used extensively for surfacing applications. To operate a surfacing process efficiently, the variables must be optimized to produce low levels of dilution with the substrate while maintaining high deposition rates. An equation for dilution in terms of the welding variables, thermal efficiency factors, and thermophysical properties of the overlay and substrate was developed by balancing energy and mass terms across the welding arc. To test the validity of the resultant dilution equation, the PAW, GTAW, GMAW, and SAW processes were used to deposit austenitic stainless steel onto carbon steel over a wide range of parameters. Arc efficiency measurements were conducted using a Seebeck arc welding calorimeter. Melting efficiency was determined based on knowledge of the arc efficiency. Dilution was determined for each set of processing parameters using a quantitative image analysis system. The pertinent equations indicate dilution is a function of arc power (corrected for arc efficiency), filler metal feed rate, melting efficiency, and thermophysical properties of the overlay and substrate. With the aid of the dilution equation, the effect of processing parameters on dilution is presented by a new processing diagram. A new method is proposed for determining dilution from welding variables. Dilution is shown to depend on the arc power, filler metal feed rate, arc and melting efficiency, and the thermophysical properties of the overlay and substrate. Calculated dilution levels were compared with measured values over a large range of processing parameters and good agreement was obtained. The results have been applied to generate a processing diagram which can be used to: (1) predict the maximum deposition rate for a given arc power while maintaining adequate fusion with the substrate, and (2) predict the resultant level of dilution with the substrate.
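The energy/mass balance behind the dilution equation can be illustrated with a small numerical sketch. The function below is a hypothetical rendering of the relationship the abstract describes (dilution as a function of arc power corrected for arc efficiency, melting efficiency, filler metal feed rate, and volumetric melting enthalpies); the variable names and example values are illustrative assumptions, not figures from the paper.

```python
def predicted_dilution(arc_power_w, arc_eff, melt_eff,
                       filler_rate_mm3_s, h_filler_j_mm3, h_substrate_j_mm3):
    """Dilution from an energy balance across the arc (sketch).

    The net melting power (arc power corrected for arc and melting
    efficiency) first melts the incoming filler metal; whatever power
    remains melts substrate.  Dilution is then the substrate fraction
    of the total melted volume.
    """
    melt_power = arc_eff * melt_eff * arc_power_w          # W available for melting
    filler_power = filler_rate_mm3_s * h_filler_j_mm3      # W consumed melting filler
    substrate_rate = (melt_power - filler_power) / h_substrate_j_mm3  # mm^3/s
    if substrate_rate <= 0.0:
        return 0.0  # not enough energy to melt substrate: lack of fusion
    return substrate_rate / (substrate_rate + filler_rate_mm3_s)

# Hypothetical case: 5 kW arc, 80% arc efficiency, 50% melting efficiency,
# 100 mm^3/s filler deposition, both alloys requiring ~10 J/mm^3 to melt.
d = predicted_dilution(5000.0, 0.80, 0.50, 100.0, 10.0, 10.0)
```

Raising the feed rate at fixed arc power lowers the predicted dilution toward the lack-of-fusion boundary, which is exactly the trade-off the processing diagram maps.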

  3. Optimal auxiliary-covariate-based two-phase sampling design for semiparametric efficient estimation of a mean or mean difference, with application to clinical trials.

    PubMed

    Gilbert, Peter B; Yu, Xuesong; Rotnitzky, Andrea

    2014-03-15

    To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semiparametric efficient estimator is applied. This approach is made efficient by specifying the phase two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. We perform simulations to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. We provide proofs and R code. The optimality results are developed to design an HIV vaccine trial, with objective to compare the mean 'importance-weighted' breadth (Y) of the T-cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere between absent and large efficiency gain (up to 24 % in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y | W] is important for realizing the efficiency gain, which is aided by an ample phase two sample and by using a robust fitting method. Copyright © 2013 John Wiley & Sons, Ltd.
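The optimal-allocation idea, sampling Y more heavily where it is conditionally more variable per unit cost, can be sketched in a toy simulation. Everything below (the data-generating model, a known E[Y | W], unit phase-two cost, and the budget) is an illustrative assumption; the estimator is a simplified augmented inverse-probability-weighted mean, not the authors' full semiparametric efficient estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n, budget, reps = 10000, 2000, 300

def pi_optimal(sd):
    # Neyman-type allocation: selection probability proportional to the
    # conditional SD of Y given W (unit cost), scaled to the budget.
    return np.minimum(1.0, budget * sd / sd.sum())

def pi_srs(sd):
    # Simple random sampling with the same expected phase-two cost.
    return np.full_like(sd, budget / n)

def estimate(pi):
    w = rng.normal(size=n)
    sd = np.exp(w)                      # assumed-known conditional SD of Y given W
    y = w + sd * rng.normal(size=n)     # toy model with E[Y | W] = W
    p = pi(sd)
    r = rng.random(n) < p               # phase-two selection indicators
    # Augmented estimator: model prediction everywhere, plus an
    # inverse-probability-weighted residual for sampled subjects.
    return np.mean(w + r * (y - w) / p)

var_opt = np.var([estimate(pi_optimal) for _ in range(reps)])
var_srs = np.var([estimate(pi_srs) for _ in range(reps)])
```

With this strongly heteroscedastic toy model the optimal allocation gives a clearly smaller Monte Carlo variance than simple random sampling at the same expected cost, mirroring the paper's point that gains grow with variability in the cost-standardized conditional variance of Y given W.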

  4. Optimal Auxiliary-Covariate Based Two-Phase Sampling Design for Semiparametric Efficient Estimation of a Mean or Mean Difference, with Application to Clinical Trials

    PubMed Central

    Gilbert, Peter B.; Yu, Xuesong; Rotnitzky, Andrea

    2014-01-01

    To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semi-parametric efficient estimator is applied. This approach is made efficient by specifying the phase-two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. Simulations are performed to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. Proofs and R code are provided. The optimality results are developed to design an HIV vaccine trial, with objective to compare the mean “importance-weighted” breadth (Y) of the T cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y, and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere between absent and large efficiency gain (up to 24% in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y∣W] is important for realizing the efficiency gain, which is aided by an ample phase-two sample and by using a robust fitting method. PMID:24123289

  5. System efficiency of a tap transformer based grid connection topology applied on a direct driven generator for wind power.

    PubMed

    Apelfröjd, Senad; Eriksson, Sandra

    2014-01-01

Results from experiments on a tap transformer based grid connection system for a variable speed vertical axis wind turbine are presented. The tap transformer based system topology consists of a passive diode rectifier, DC-link, IGBT inverter, LCL-filter, and tap transformer. Full range variable speed operation is enabled by using the different step-up ratios of a tap transformer. Simulations using MATLAB/Simulink have been performed in order to study the behavior of the system. A full experimental setup of the system has been used in the laboratory study, where a clone of the on-site generator was driven by an induction motor and the system was connected to a resistive load to better evaluate the performance. Furthermore, the system is run and evaluated for realistic wind speeds and variable speed operation. For a more complete picture of the system performance, a case study using real site Weibull parameters is done, comparing different tap selection options. The results show high system efficiency at nominal power and an increase in overall power output for full tap operation in comparison with the base case, a standard transformer. In addition, the loss distribution at different wind speeds is shown, which highlights the dominant losses at low and high wind speeds. Finally, means for further increasing the overall system efficiency are proposed.

  6. System Efficiency of a Tap Transformer Based Grid Connection Topology Applied on a Direct Driven Generator for Wind Power

    PubMed Central

    2014-01-01

Results from experiments on a tap transformer based grid connection system for a variable speed vertical axis wind turbine are presented. The tap transformer based system topology consists of a passive diode rectifier, DC-link, IGBT inverter, LCL-filter, and tap transformer. Full range variable speed operation is enabled by using the different step-up ratios of a tap transformer. Simulations using MATLAB/Simulink have been performed in order to study the behavior of the system. A full experimental setup of the system has been used in the laboratory study, where a clone of the on-site generator was driven by an induction motor and the system was connected to a resistive load to better evaluate the performance. Furthermore, the system is run and evaluated for realistic wind speeds and variable speed operation. For a more complete picture of the system performance, a case study using real site Weibull parameters is done, comparing different tap selection options. The results show high system efficiency at nominal power and an increase in overall power output for full tap operation in comparison with the base case, a standard transformer. In addition, the loss distribution at different wind speeds is shown, which highlights the dominant losses at low and high wind speeds. Finally, means for further increasing the overall system efficiency are proposed. PMID:25258733

  7. The variability puzzle in human memory.

    PubMed

    Kahana, Michael J; Aggarwal, Eash V; Phan, Tung D

    2018-04-26

    Memory performance exhibits a high level of variability from moment to moment. Much of this variability may reflect inadequately controlled experimental variables, such as word memorability, past practice and subject fatigue. Alternatively, stochastic variability in performance may largely reflect the efficiency of endogenous neural processes that govern memory function. To help adjudicate between these competing views, the authors conducted a multisession study in which subjects completed 552 trials of a delayed free-recall task. Applying a statistical model to predict variability in each subject's recall performance uncovered modest effects of word memorability, proactive interference, and other variables. In contrast to the limited explanatory power of these experimental variables, performance on the prior list strongly predicted current list recall. These findings suggest that endogenous factors underlying successful encoding and retrieval drive variability in performance. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  8. HDMR methods to assess reliability in slope stability analyses

    NASA Astrophysics Data System (ADS)

    Kozubal, Janusz; Pula, Wojciech; Vessia, Giovanna

    2014-05-01

Stability analyses of complex rock-soil deposits must account for the complex structure of discontinuities within the rock mass and embedded soil layers. These materials are characterized by high variability in physical and mechanical properties. Thus, to calculate the slope safety factor in stability analyses, two issues must be taken into account: (1) the uncertainties related to the structural setting of the rock-slope mass and (2) the variability in mechanical properties of soils and rocks. High Dimensional Model Representation (HDMR) (Chowdhury et al. 2009; Chowdhury and Rao 2010) can be used to compute the reliability index within complex rock-soil slopes when numerous random variables with high coefficients of variation are considered. HDMR implements inverse reliability analysis, meaning that the unknown design parameters are sought such that prescribed reliability index values are attained. This approach uses implicit response functions according to the Response Surface Method (RSM). The simple RSM can be applied efficiently when fewer than four random variables are considered; as the number of variables increases, the efficiency of reliability index estimation decreases due to the great amount of calculation required. Therefore, the HDMR method is used to improve computational accuracy. In this study, sliding mechanisms in the Polish Flysch Carpathian Mountains have been studied by means of HDMR. The southern part of Poland, where the Carpathian Mountains are located, is characterized by a rather complicated sedimentary pattern of flysch rocky-soil deposits that can be simplified into three main categories: (1) normal flysch, consisting of adjacent sandstone and shale beds of approximately equal thickness; (2) shale flysch, where shale beds are thicker than adjacent sandstone beds; and (3) sandstone flysch, where the opposite holds. Landslides occur in all flysch deposit types, so some configurations of possible unstable settings (within fractured rocky-soil masses) resulting in sliding mechanisms have been investigated in this study. The reliability index values drawn from the HDMR method have been compared with conventional approaches such as neural networks; the efficiency of HDMR is shown in the case studied. References Chowdhury R., Rao B.N. and Prasad A.M. 2009. High-dimensional model representation for structural reliability analysis. Commun. Numer. Meth. Engng, 25: 301-337. Chowdhury R. and Rao B. 2010. Probabilistic Stability Assessment of Slopes Using High Dimensional Model Representation. Computers and Geotechnics, 37: 876-884.

  9. Consequences of an uncertain mass mortality regime triggered by climate variability on giant clam population management in the Pacific Ocean.

    PubMed

    Van Wynsberge, Simon; Andréfouët, Serge; Gaertner-Mazouni, Nabila; Remoissenet, Georges

    2018-02-01

    Despite actions to manage sustainably tropical Pacific Ocean reef fisheries, managers have faced failures and frustrations because of unpredicted mass mortality events triggered by climate variability. The consequences of these events on the long-term population dynamics of living resources need to be better understood for better management decisions. Here, we use a giant clam (Tridacna maxima) spatially explicit population model to compare the efficiency of several management strategies under various scenarios of natural mortality, including mass mortality due to climatic anomalies. The model was parameterized by in situ estimations of growth and mortality and fishing effort, and was validated by historical and new in situ surveys of giant clam stocks in two French Polynesia lagoons. Projections on the long run (100 years) suggested that the best management strategy was a decrease of fishing pressure through quota implementation, regardless of the mortality regime considered. In contrast, increasing the minimum legal size of catch and closing areas to fishing were less efficient. When high mortality occurred due to climate variability, the efficiency of all management scenarios decreased markedly. Simulating El Niño Southern Oscillation event by adding temporal autocorrelation in natural mortality rates increased the natural variability of stocks, and also decreased the efficiency of management. These results highlight the difficulties that managers in small Pacific islands can expect in the future in the face of global warming, climate anomalies and new mass mortalities. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Efficiently sampling conformations and pathways using the concurrent adaptive sampling (CAS) algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, Surl-Hee; Grate, Jay W.; Darve, Eric F.

Molecular dynamics (MD) simulations are useful in obtaining thermodynamic and kinetic properties of biomolecules but are limited by the timescale barrier, i.e., we may be unable to efficiently obtain properties because we need to run microsecond or longer simulations using femtosecond time steps. While there are several existing methods to overcome this timescale barrier and efficiently sample thermodynamic and/or kinetic properties, problems remain in regard to being able to sample unknown systems, deal with high-dimensional spaces of collective variables, and focus the computational effort on slow timescales. Hence, a new sampling method, called the “Concurrent Adaptive Sampling (CAS) algorithm,” has been developed to tackle these three issues and efficiently obtain conformations and pathways. The method is not constrained to use only one or two collective variables, unlike most reaction coordinate-dependent methods. Instead, it can use a large number of collective variables and uses macrostates (a partition of the collective variable space) to enhance the sampling. The exploration is done by running a large number of short simulations, and a clustering technique is used to accelerate the sampling. In this paper, we introduce the new methodology and show results from two-dimensional models and biomolecules, such as penta-alanine and triazine polymer.
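The macrostate-based exploration strategy described above can be caricatured in one dimension: partition the collective variable into bins, advance many short trajectories, and split/merge walkers per occupied bin while conserving probability weight. This is a generic weighted-ensemble-style sketch, not the CAS algorithm itself; the bin count, walker target, and random-walk "dynamics" are all illustrative assumptions.

```python
import random

random.seed(1)
NBINS, TARGET, STEPS, ROUNDS = 20, 4, 10, 30

def bin_of(x):
    # Macrostate index: which of NBINS equal bins of [0, 1] contains x.
    return min(NBINS - 1, max(0, int(x * NBINS)))

def short_trajectory(x):
    # Stand-in for a short MD run: a reflected random walk on [0, 1].
    for _ in range(STEPS):
        x = min(1.0, max(0.0, x + random.gauss(0.0, 0.02)))
    return x

# Walkers start concentrated in one macrostate, sharing unit total weight.
walkers = [(0.05, 1.0 / TARGET) for _ in range(TARGET)]
for _ in range(ROUNDS):
    walkers = [(short_trajectory(x), wgt) for x, wgt in walkers]
    bins = {}
    for x, wgt in walkers:
        bins.setdefault(bin_of(x), []).append((x, wgt))
    # Resample every occupied macrostate back to TARGET walkers,
    # conserving the total probability weight within that macrostate.
    walkers = []
    for members in bins.values():
        total = sum(wgt for _, wgt in members)
        xs = [x for x, _ in members]
        walkers += [(random.choice(xs), total / TARGET) for _ in range(TARGET)]

total_weight = sum(wgt for _, wgt in walkers)
bins_visited = len({bin_of(x) for x, _ in walkers})
```

Because newly reached bins are immediately replenished with walkers, rarely visited regions of the collective variable keep getting computational effort, which is the sense in which such schemes "enhance the sampling."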

  11. High-Frequency ac Power-Distribution System

    NASA Technical Reports Server (NTRS)

    Hansen, Irving G.; Mildice, James

    1987-01-01

    Loads managed automatically under cycle-by-cycle control. 440-V rms, 20-kHz ac power system developed. System flexible, versatile, and "transparent" to user equipment, while maintaining high efficiency and low weight. Electrical source, from dc to 2,200-Hz ac converted to 440-V rms, 20-kHz, single-phase ac. Power distributed through low-inductance cables. Output power either dc or variable ac. Energy transferred per cycle reduced by factor of 50. Number of parts reduced by factor of about 5 and power loss reduced by two-thirds. Factors result in increased reliability and reduced costs. Used in any power-distribution system requiring high efficiency, high reliability, low weight, and flexibility to handle variety of sources and loads.

  12. Integration of Fixed and Flexible Route Public Transportation Systems, Phase I

    DOT National Transportation Integrated Search

    2012-01-01

To provide efficient public transportation services in areas with high demand variability over time, it may be desirable to switch vehicles between conventional services (with fixed routes and schedules) during peak periods and flexible route ser...

  13. On the Asymptotic Relative Efficiency of Planned Missingness Designs.

    PubMed

    Rhemtulla, Mijke; Savalei, Victoria; Little, Todd D

    2016-03-01

    In planned missingness (PM) designs, certain data are set a priori to be missing. PM designs can increase validity and reduce cost; however, little is known about the loss of efficiency that accompanies these designs. The present paper compares PM designs to reduced sample (RN) designs that have the same total number of data points concentrated in fewer participants. In 4 studies, we consider models for both observed and latent variables, designs that do or do not include an "X set" of variables with complete data, and a full range of between- and within-set correlation values. All results are obtained using asymptotic relative efficiency formulas, and thus no data are generated; this novel approach allows us to examine whether PM designs have theoretical advantages over RN designs removing the impact of sampling error. Our primary findings are that (a) in manifest variable regression models, estimates of regression coefficients have much lower relative efficiency in PM designs as compared to RN designs, (b) relative efficiency of factor correlation or latent regression coefficient estimates is maximized when the indicators of each latent variable come from different sets, and (c) the addition of an X set improves efficiency in manifest variable regression models only for the parameters that directly involve the X-set variables, but it substantially improves efficiency of most parameters in latent variable models. We conclude that PM designs can be beneficial when the model of interest is a latent variable model; recommendations are made for how to optimize such a design.

  14. A Study of Quasar Selection in the Supernova Fields of the Dark Energy Survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tie, S. S.; Martini, P.; Mudd, D.

In this paper, we present a study of quasar selection using the supernova fields of the Dark Energy Survey (DES). We used a quasar catalog from an overlapping portion of the SDSS Stripe 82 region to quantify the completeness and efficiency of selection methods involving color, probabilistic modeling, variability, and combinations of color/probabilistic modeling with variability. In all cases, we considered only objects that appear as point sources in the DES images. We examine color selection methods based on the Wide-field Infrared Survey Explorer (WISE) mid-IR W1-W2 color, a mixture of WISE and DES colors (g - i and i - W1), and a mixture of Vista Hemisphere Survey and DES colors (g - i and i - K). For probabilistic quasar selection, we used XDQSO, an algorithm that employs an empirical multi-wavelength flux model of quasars to assign quasar probabilities. Our variability selection uses the multi-band χ²-probability that sources are constant in the DES Year 1 griz-band light curves. The completeness and efficiency are calculated relative to an underlying sample of point sources that are detected in the required selection bands and pass our data quality and photometric error cuts. We conduct our analyses at two magnitude limits, i < 19.8 mag and i < 22 mag. For the subset of sources with W1 and W2 detections, the W1-W2 color or XDQSOz method combined with variability gives the highest completenesses of >85% for both i-band magnitude limits and efficiencies of >80% to the bright limit and >60% to the faint limit; however, the giW1 and giW1+variability methods give the highest quasar surface densities. The XDQSOz method and combinations of W1W2/giW1/XDQSOz with variability are among the better selection methods when both high completeness and high efficiency are desired. We also present the OzDES Quasar Catalog of 1263 spectroscopically confirmed quasars from three years of OzDES observation in the 30 deg² of the DES supernova fields. Finally, the catalog includes quasars with redshifts up to z ~ 4 and brighter than i = 22 mag, although the catalog is not complete up to this magnitude limit.
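The multi-band χ²-based variability selection can be sketched directly: in each band, test the light curve against a constant (inverse-variance weighted mean) model, then pool χ² over the bands. The reduced-χ² threshold and the toy light curves below are illustrative assumptions, not the DES cuts.

```python
def band_chi2(fluxes, errs):
    # Chi-square of one band's light curve about its inverse-variance
    # weighted mean, i.e. against the "constant source" hypothesis.
    weights = [1.0 / e ** 2 for e in errs]
    mean = sum(wt * f for wt, f in zip(weights, fluxes)) / sum(weights)
    chi2 = sum(((f - mean) / e) ** 2 for f, e in zip(fluxes, errs))
    return chi2, len(fluxes) - 1  # chi-square and degrees of freedom

def is_variable(light_curves, threshold=3.0):
    # Pool chi-square across bands; flag the source as variable when the
    # combined reduced chi-square exceeds an illustrative threshold.
    chi2_tot, dof_tot = 0.0, 0
    for fluxes, errs in light_curves:
        c, d = band_chi2(fluxes, errs)
        chi2_tot += c
        dof_tot += d
    return chi2_tot / dof_tot > threshold

# Toy single-band light curves (fluxes, errors) for a steady star and a
# strongly varying quasar-like source.
constant = [([1.0, 1.0, 1.0, 1.0], [0.1] * 4)]
quasar_like = [([1.0, 2.0, 1.0, 2.0], [0.1] * 4)]
```

In practice the pooled χ² would be converted to a probability of constancy and combined with the color or XDQSOz criteria, as in the combined selections the abstract compares.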

  15. A Study of Quasar Selection in the Supernova Fields of the Dark Energy Survey

    DOE PAGES

    Tie, S. S.; Martini, P.; Mudd, D.; ...

    2017-02-15

In this paper, we present a study of quasar selection using the supernova fields of the Dark Energy Survey (DES). We used a quasar catalog from an overlapping portion of the SDSS Stripe 82 region to quantify the completeness and efficiency of selection methods involving color, probabilistic modeling, variability, and combinations of color/probabilistic modeling with variability. In all cases, we considered only objects that appear as point sources in the DES images. We examine color selection methods based on the Wide-field Infrared Survey Explorer (WISE) mid-IR W1-W2 color, a mixture of WISE and DES colors (g - i and i - W1), and a mixture of Vista Hemisphere Survey and DES colors (g - i and i - K). For probabilistic quasar selection, we used XDQSO, an algorithm that employs an empirical multi-wavelength flux model of quasars to assign quasar probabilities. Our variability selection uses the multi-band χ²-probability that sources are constant in the DES Year 1 griz-band light curves. The completeness and efficiency are calculated relative to an underlying sample of point sources that are detected in the required selection bands and pass our data quality and photometric error cuts. We conduct our analyses at two magnitude limits, i < 19.8 mag and i < 22 mag. For the subset of sources with W1 and W2 detections, the W1-W2 color or XDQSOz method combined with variability gives the highest completenesses of >85% for both i-band magnitude limits and efficiencies of >80% to the bright limit and >60% to the faint limit; however, the giW1 and giW1+variability methods give the highest quasar surface densities. The XDQSOz method and combinations of W1W2/giW1/XDQSOz with variability are among the better selection methods when both high completeness and high efficiency are desired. We also present the OzDES Quasar Catalog of 1263 spectroscopically confirmed quasars from three years of OzDES observation in the 30 deg² of the DES supernova fields. Finally, the catalog includes quasars with redshifts up to z ~ 4 and brighter than i = 22 mag, although the catalog is not complete up to this magnitude limit.

  16. Traditional neuropsychological correlates and reliability of the automated neuropsychological assessment metrics-4 battery for Parkinson's disease.

    PubMed

    Hawkins, Keith A; Jennings, Danna; Vincent, Andrea S; Gilliland, Kirby; West, Adrienne; Marek, Kenneth

    2012-08-01

The Automated Neuropsychological Assessment Metrics-4 battery for Parkinson's disease (ANAM4-PD) offers the promise of a computerized approach to cognitive assessment. To assess its utility, the ANAM4-PD was administered to 72 PD patients and 24 controls along with a traditional battery. Reliability was assessed by retesting 26 patients. The cognitive efficiency score (CES; a global score) exhibited high reliability (r = 0.86). Constituent variables exhibited lower reliability. The CES correlated strongly with the traditional battery global score, but displayed weaker relationships to UPDRS scores than the traditional score. Multivariate analysis of variance revealed a significant difference between the patient and control groups in ANAM4-PD performance, with three ANAM4-PD tests, math, tower, and pursuit tracking, displaying sizeable differences. In discriminant analyses these variables were as effective as the total ANAM4-PD in classifying cases designated as impaired based on traditional variables. Principal components analyses uncovered fewer factors in the ANAM4-PD relative to the traditional battery. ANAM4-PD variables correlated at higher levels with traditional motor and processing speed variables than with untimed executive, intellectual or memory variables. The ANAM4-PD displays high global reliability, but variable subtest reliability. The battery assesses a narrower range of cognitive functions than traditional tests, and discriminates between patients and controls less effectively. Three ANAM4-PD tests, pursuit tracking, math, and tower, performed as well as the total ANAM4-PD in classifying patients as cognitively impaired. These findings could guide the refinement of the ANAM4-PD as an efficient method of screening for mild to moderate cognitive deficits in PD patients. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Financial performance monitoring of the technical efficiency of critical access hospitals: a data envelopment analysis and logistic regression modeling approach.

    PubMed

    Wilson, Asa B; Kerr, Bernard J; Bastian, Nathaniel D; Fulton, Lawrence V

    2012-01-01

From 1980 to 1999, rural designated hospitals closed at a disproportionately high rate. In response to this emergent threat to healthcare access in rural settings, the Balanced Budget Act of 1997 made provisions for the creation of a new rural hospital type--the critical access hospital (CAH). The conversion to CAH status and the associated cost-based reimbursement scheme significantly slowed the closure rate of rural hospitals. This work investigates which methods can ensure the long-term viability of small hospitals. This article uses a two-step design to focus on a hypothesized relationship between the technical efficiency of CAHs and a recently developed set of financial monitors for these entities. The goal is to identify the financial performance measures associated with efficiency. The first step uses data envelopment analysis (DEA) to differentiate efficient from inefficient facilities within a data set of 183 CAHs. DEA efficiency provides an a priori categorization of hospitals in the data set as efficient or inefficient. In the second step, DEA efficiency is the categorical dependent variable (efficient = 0, inefficient = 1) in a binary logistic regression (LR) model. A set of six financial monitors selected from the array of 20 measures served as the LR independent variables. We use the binary LR to test the null hypothesis that recently developed CAH financial indicators have no predictive value for categorizing a CAH as efficient or inefficient (i.e., that there is no relationship between DEA efficiency and fiscal performance).
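The first step of the two-step design can be illustrated in the special case of one input and one output, where input-oriented CCR DEA reduces to each unit's output/input ratio relative to the best ratio in the data set (the general multi-input, multi-output case requires solving a linear program per decision-making unit). The hospital figures below are hypothetical.

```python
def dea_ccr_scores(inputs, outputs):
    """Input-oriented CCR efficiency, single-input single-output case.

    With one input and one output, the DEA score of each decision-making
    unit is its output/input ratio divided by the best such ratio, so an
    efficient unit scores exactly 1.0.
    """
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical CAHs: input = operating expense, output = adjusted discharges.
expense = [2.0, 4.0, 5.0]
discharges = [4.0, 4.0, 10.0]
scores = dea_ccr_scores(expense, discharges)

# Categorical dependent variable for the second (logistic regression)
# step, coded as in the article: efficient = 0, inefficient = 1.
labels = [0 if abs(s - 1.0) < 1e-12 else 1 for s in scores]
```

These 0/1 labels are what the binary LR would then regress on the six financial monitors to test whether fiscal performance predicts DEA efficiency.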

  18. Estimating Latent Variable Interactions With Non-Normal Observed Data: A Comparison of Four Approaches

    PubMed Central

    Cham, Heining; West, Stephen G.; Ma, Yue; Aiken, Leona S.

    2012-01-01

    A Monte Carlo simulation was conducted to investigate the robustness of four latent variable interaction modeling approaches (Constrained Product Indicator [CPI], Generalized Appended Product Indicator [GAPI], Unconstrained Product Indicator [UPI], and Latent Moderated Structural Equations [LMS]) under high degrees of non-normality of the observed exogenous variables. Results showed that the CPI and LMS approaches yielded biased estimates of the interaction effect when the exogenous variables were highly non-normal. When the violation of non-normality was not severe (normal; symmetric with excess kurtosis < 1), the LMS approach yielded the most efficient estimates of the latent interaction effect with the highest statistical power. In highly non-normal conditions, the GAPI and UPI approaches with ML estimation yielded unbiased latent interaction effect estimates, with acceptable actual Type-I error rates for both the Wald and likelihood ratio tests of interaction effect at N ≥ 500. An empirical example illustrated the use of the four approaches in testing a latent variable interaction between academic self-efficacy and positive family role models in the prediction of academic performance. PMID:23457417

  19. The 4 phase VSR motor: The ideal prime mover for electric vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holling, G.H.; Yeck, M.M.

    1994-12-31

    4 phase variable switched reluctance motors are gaining acceptance in many applications due to their fault tolerant characteristics. A 4 phase variable switched reluctance motor (VSR) is modelled and its performance is predicted at several operating points for an electric vehicle application. The 4 phase VSR offers fault tolerance, high performance, and an excellent torque-to-weight ratio. The actual system performance was measured both on a test stand and on an actual vehicle. While the system described is used in a production electric motorscooter, the technology is equally applicable to high efficiency electric cars and buses. 4 refs.

  20. Thermionic modules

    DOEpatents

    King, Donald B.; Sadwick, Laurence P.; Wernsman, Bernard R.

    2002-06-18

    Modules of assembled microminiature thermionic converters (MTCs) having high energy-conversion efficiencies and variable operating temperatures manufactured using MEMS manufacturing techniques including chemical vapor deposition. The MTCs incorporate cathode to anode spacing of about 1 micron or less and use cathode and anode materials having work functions ranging from about 1 eV to about 3 eV. The MTCs also exhibit maximum efficiencies of just under 30%, and thousands of the devices and modules can be fabricated at modest costs.

  1. Scalable Motion Estimation Processor Core for Multimedia System-on-Chip Applications

    NASA Astrophysics Data System (ADS)

    Lai, Yeong-Kang; Hsieh, Tian-En; Chen, Lien-Fei

    2007-04-01

    In this paper, we describe a high-throughput and scalable motion estimation processor architecture for multimedia system-on-chip applications. The number of processing elements (PEs) is scalable according to the variable algorithm parameters and the performance required for different applications. By using the PE rings efficiently together with an intelligent memory-interleaving organization, the efficiency of the architecture can be increased. Moreover, efficient on-chip memories and a data management technique effectively decrease the power consumption and memory bandwidth. Techniques for reducing the number of interconnections and external memory accesses are also presented. Our results demonstrate that the proposed scalable PE-ringed architecture is a flexible and high-performance processor core for multimedia system-on-chip applications.
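
    The kernel such PE arrays accelerate is block matching: for each block of the current frame, search the reference frame for the displacement that minimizes a cost such as the sum of absolute differences (SAD). A scalar full-search sketch on toy frames (no PE parallelism; purely illustrative):

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized 2-D blocks."""
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b) for a, b in zip(ra, rb))

def block(frame, y, x, size):
    """Extract a size x size block whose top-left corner is (y, x)."""
    return [row[x:x + size] for row in frame[y:y + size]]

def full_search(cur, ref, y, x, size, search):
    """Exhaustive search: best (dy, dx) in [-search, search]^2 minimizing SAD."""
    target = block(cur, y, x, size)
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ry, rx = y + dy, x + dx
            if ry < 0 or rx < 0 or ry + size > len(ref) or rx + size > len(ref[0]):
                continue
            cost = sad(target, block(ref, ry, rx, size))
            if best is None or cost < best[0]:
                best = (cost, dy, dx)
    return best[1], best[2]

# Reference frame with a bright 2x2 patch; in the current frame it has moved.
ref = [[0] * 8 for _ in range(8)]
ref[2][2] = ref[2][3] = ref[3][2] = ref[3][3] = 9
cur = [[0] * 8 for _ in range(8)]
cur[3][4] = cur[3][5] = cur[4][4] = cur[4][5] = 9
print(full_search(cur, ref, 3, 4, 2, 2))  # displacement back to the reference
```

    A hardware PE ring evaluates many of these candidate displacements in parallel, which is where the memory-interleaving organization matters.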

  2. Plasmon resonance enhanced multicolour photodetection by graphene

    PubMed Central

    Liu, Yuan; Cheng, Rui; Liao, Lei; Zhou, Hailong; Bai, Jingwei; Liu, Gang; Liu, Lixin; Huang, Yu; Duan, Xiangfeng

    2012-01-01

    Graphene has the potential for high-speed, wide-band photodetection, but only with very low external quantum efficiency and no spectral selectivity. Here we report a dramatic enhancement of the overall quantum efficiency and spectral selectivity that enables multicolour photodetection, by coupling graphene with plasmonic nanostructures. We show that metallic plasmonic nanostructures can be integrated with graphene photodetectors to greatly enhance the photocurrent and external quantum efficiency by up to 1,500%. Plasmonic nanostructures of variable resonance frequencies selectively amplify the photoresponse of graphene to light of different wavelengths, enabling highly specific detection of multicolours. Being atomically thin, graphene photodetectors effectively exploit the local plasmonic enhancement effect to achieve a significant enhancement factor not normally possible with traditional planar semiconductor materials. PMID:22146398

  3. Modeling Responses of Dryland Spring Triticale, Proso Millet and Foxtail Millet to Initial Soil Water in the High Plains

    USDA-ARS?s Scientific Manuscript database

    Dryland farming strategies in the High Plains must make efficient use of limited and variable precipitation and stored water in the soil profile for stable and sustainable farm productivity. Current research efforts focus on replacing summer fallow in the region with more profitable and environmenta...

  4. Experimental Evaluation of a Low Emissions High Performance Duct Burner for Variable Cycle Engines (VCE)

    NASA Technical Reports Server (NTRS)

    Lohmann, R. P.; Mador, R. J.

    1979-01-01

    An evaluation was conducted with a three-stage Vorbix duct burner to determine the performance and emissions characteristics of the concept and to refine the configuration to provide acceptable durability and operational characteristics for its use in the variable cycle engine (VCE) testbed program. The tests were conducted at representative takeoff, transonic climb, and supersonic cruise inlet conditions for the VSCE-502B study engine. The test stand, the emissions sampling and analysis equipment, and the supporting flow visualization rigs are described. The performance parameters, including fuel-air ratio, combustion efficiency/exit temperature, thrust efficiency, and gaseous emissions calculations, are defined. The test procedures are reviewed and the results are discussed.

  5. Ku-band high efficiency GaAs MMIC power amplifiers

    NASA Technical Reports Server (NTRS)

    Tserng, H. Q.; Witkowski, L. C.; Wurtele, M.; Saunier, Paul

    1988-01-01

    The development of Ku-band high efficiency GaAs MMIC power amplifiers is examined. Three amplifier modules operating over the 13 to 15 GHz frequency range are to be developed. The first MMIC is a 1 W variable power amplifier (VPA) with 35 percent efficiency. On-chip digital gain control is to be provided. The second MMIC is a medium power amplifier (MPA) with an output power goal of 1 W and 40 percent power-added efficiency. The third MMIC is a high power amplifier (HPA) with a 4 W output power goal and 40 percent power-added efficiency. An output power of 0.36 W/mm with 49 percent efficiency was obtained on an ion implanted single gate MESFET at 15 GHz. On a dual gate MESFET, an output power of 0.42 W/mm with 27 percent efficiency was obtained. A mask set was designed that includes single stage, two stage, and three stage single gate amplifiers. A single stage 600 micron amplifier produced 0.4 W/mm output power with 40 percent efficiency at 14 GHz. A four stage dual gate amplifier generated 500 mW of output power with 20 dB gain at 17 GHz. A four-bit digital-to-analog converter was designed and fabricated which has an output swing of -3 V to +1 V.

  6. Variable velocity in solar external receivers

    NASA Astrophysics Data System (ADS)

    Rodríguez-Sánchez, M. R.; Sánchez-González, A.; Acosta-Iborra, A.; Santana, D.

    2017-06-01

    One of the major problems in solar external receivers is tube overheating, which increases the risk of receiver failure. It can be mitigated by implementing receivers with a high number of panels; however, this exponentially increases the pressure drop in the receiver and the parasitic power consumption of the Solar Power Tower (SPT), reducing the global efficiency of the SPT. A new concept of solar external receiver, named the variable velocity receiver, is able to adapt its configuration to different flux density distributions. A set of valves allows those panels in which the wall temperature is over the limit to be split into several independent panels, which increases the velocity of the heat transfer fluid (HTF) and hence its cooling capacity. This receiver not only reduces the wall temperature of the tubes, but also simplifies the control of the heliostat field and allows more efficient aiming strategies. In this study, the variable velocity receiver is shown to present clear advantages with respect to the traditional receiver. Nevertheless, more than two divisions per panel are not recommended, because the pressure drop would increase by over 70 bar. At the design point (12 h on the spring equinox), the use of a variable number of panels between 18 and 36 (two divisions per panel), in an SPT similar to Gemasolar, improves the power capacity of the SPT by 5.7%, with a pressure drop increment of 10 bar. Off-design, when the flux distribution is high and not symmetric (e.g., 10-11 h), the power generated by the variable velocity receiver is 18% higher than that generated by the traditional receiver; at these hours the pressure drop increases by almost 20 bar.
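
    A back-of-the-envelope toy model (not from the paper) shows why splitting a panel raises both cooling capacity and pressure drop: if half the parallel tubes carry the same mass flow, the HTF velocity doubles, and with the Darcy-Weisbach relation Δp = f·(L/D)·ρv²/2 the pressure drop grows with v². If the flow additionally traverses the two sub-panels in series, the total path doubles as well. All figures below are illustrative assumptions:

```python
def darcy_dp(f, L, D, rho, v):
    """Darcy-Weisbach pressure drop (Pa) for a tube of length L, diameter D."""
    return f * (L / D) * rho * v ** 2 / 2

# Toy numbers: friction factor, tube length/diameter, molten-salt density.
f, L, D, rho = 0.02, 10.0, 0.02, 1800.0
v1 = 2.0                              # m/s through the undivided panel
v2 = 2 * v1                           # two divisions: half the tubes, same flow
dp1 = darcy_dp(f, L, D, rho, v1)      # undivided panel
dp2 = 2 * darcy_dp(f, L, D, rho, v2)  # both sub-panels traversed in series
print(dp2 / dp1)
```

    Under these assumptions the pressure drop per division grows by a factor of 2·(v2/v1)² = 8, which is consistent with the abstract's warning against more than two divisions per panel.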

  7. Interrelationships between trait anxiety, situational stress and mental effort predict phonological processing efficiency, but not effectiveness.

    PubMed

    Edwards, Elizabeth J; Edwards, Mark S; Lyvers, Michael

    2016-08-01

    Attentional control theory (ACT) describes the mechanisms associated with the relationship between anxiety and cognitive performance. We investigated the relationship between cognitive trait anxiety, situational stress and mental effort on phonological performance using a simple (forward-) and complex (backward-) word span task. Ninety undergraduate students participated in the study. Predictor variables were cognitive trait anxiety, indexed using questionnaire scores; situational stress, manipulated using ego threat instructions; and perceived level of mental effort, measured using a visual analogue scale. Criterion variables (a) performance effectiveness (accuracy) and (b) processing efficiency (accuracy divided by response time) were analyzed in separate multiple moderated-regression analyses. The results revealed (a) no relationship between the predictors and performance effectiveness, and (b) a significant 3-way interaction on processing efficiency for both the simple and complex tasks, such that at higher effort, trait anxiety and situational stress did not predict processing efficiency, whereas at lower effort, higher trait anxiety was associated with lower efficiency at high situational stress, but not at low situational stress. Our results were in full support of the assumptions of ACT and implications for future research are discussed.

  8. A multiple-alignment based primer design algorithm for genetically highly variable DNA targets

    PubMed Central

    2013-01-01

    Background Primer design for highly variable DNA sequences is difficult, and experimental success requires attention to many interacting constraints. The advent of next-generation sequencing methods allows the investigation of rare variants otherwise hidden deep in large populations, but requires attention to population diversity and primer localization in relatively conserved regions, in addition to recognized constraints typically considered in primer design. Results Design constraints include degenerate sites to maximize population coverage, matching of melting temperatures, optimizing de novo sequence length, finding optimal bio-barcodes to allow efficient downstream analyses, and minimizing risk of dimerization. To facilitate primer design addressing these and other constraints, we created a novel computer program (PrimerDesign) that automates this complex procedure. We show its powers and limitations and give examples of successful designs for the analysis of HIV-1 populations. Conclusions PrimerDesign is useful for researchers who want to design DNA primers and probes for analyzing highly variable DNA populations. It can be used to design primers for PCR, RT-PCR, Sanger sequencing, next-generation sequencing, and other experimental protocols targeting highly variable DNA samples. PMID:23965160
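
    One of the constraints listed, matching of melting temperatures, can be illustrated with the simple Wallace rule, Tm ≈ 2(A+T) + 4(G+C) °C. This is a rough approximation valid only for short oligos, and is not PrimerDesign's actual thermodynamic model; the primer sequences below are hypothetical:

```python
def wallace_tm(primer):
    """Wallace-rule melting temperature for a short oligo: 2(A+T) + 4(G+C)."""
    p = primer.upper()
    return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

def tm_matched(fwd, rev, max_diff=2):
    """Check that the forward/reverse pair's Tm difference is within tolerance."""
    return abs(wallace_tm(fwd) - wallace_tm(rev)) <= max_diff

fwd = "ATGCGTACGTTAGC"   # hypothetical forward primer
rev = "GGATCCTTAAGCGT"   # hypothetical reverse primer
print(wallace_tm(fwd), wallace_tm(rev), tm_matched(fwd, rev))
```

    A real design tool would iterate such checks jointly with degeneracy, dimerization, and localization constraints over a population alignment.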

  9. Size variability of the unit building block of peripheral light-harvesting antennas as a strategy for effective functioning of antennas of variable size that is controlled in vivo by light intensity.

    PubMed

    Taisova, A S; Yakovlev, A G; Fetisova, Z G

    2014-03-01

    This work continues a series of studies devoted to discovering principles of organization of natural antennas in photosynthetic microorganisms that generate in vivo large and highly effective light-harvesting structures. The largest antenna is observed in green photosynthesizing bacteria, which are able to grow over a wide range of light intensities and adapt to low intensities by increasing the size of the peripheral BChl c/d/e antenna. However, increasing antenna size must inevitably cause structural changes needed to maintain high efficiency of its functioning. Our model calculations have demonstrated that aggregation of the light-harvesting antenna pigments represents one of the universal structural factors that optimize the functioning of any antenna and manage antenna efficiency. If the degree of aggregation of antenna pigments is a variable parameter, then the efficiency of the antenna increases with increasing size of a single aggregate of the antenna. This means that a change in the degree of pigment aggregation controlled by light-harvesting antenna size is biologically expedient. We showed in our previous work on the oligomeric chlorosomal BChl c superantenna of green bacteria of the Chloroflexaceae family that this principle of optimization of variable antenna structure, whose size is controlled by light intensity during growth of the bacteria, is actually realized in vivo. Studies of this phenomenon are continued in the present work, expanding the number of biological materials studied and investigating linear and nonlinear optical spectra of chlorosomes having different structures. We show for the oligomeric chlorosomal superantennas of green bacteria from two different families, Chloroflexaceae and Oscillochloridaceae, that a single BChl c aggregate is of small size, and that the degree of BChl c aggregation is a variable parameter controlled by the size of the entire BChl c superantenna, which in turn is controlled by light intensity in the course of cell culture growth.

  10. The risk-adjusted vision beyond casemix (DRG) funding in Australia. International lessons in high complexity and capitation.

    PubMed

    Antioch, Kathryn M; Walsh, Michael K

    2004-06-01

    Hospitals throughout the world using funding based on diagnosis-related groups (DRG) have incurred substantial budgetary deficits, despite high efficiency. We identify the limitations of DRG funding that lack risk (severity) adjustment for State-wide referral services. Methods to risk adjust DRGs are instructive. The average price in casemix funding in the Australian State of Victoria is policy based, not benchmarked. Average cost weights are too low for high-complexity DRGs relating to State-wide referral services such as heart and lung transplantation and trauma. Risk-adjusted specified grants (RASG) are required for five high-complexity respiratory, cardiology and stroke DRGs incurring annual deficits of $3.6 million due to high casemix complexity and government under-funding despite high efficiency. Five stepwise linear regressions, one for each DRG, excluded non-significant variables and assessed heteroskedasticity and multicollinearity. Cost per patient was the dependent variable. Significant independent variables were age, length-of-stay outliers, number of disease types, diagnoses, procedures and emergency status. Diagnosis and procedure severity markers were identified. The methodology and the work of the State-wide Risk Adjustment Working Group can facilitate risk adjustment of DRGs State-wide and for Treasury negotiations for expenditure growth. The Alfred Hospital previously negotiated RASG of $14 million over 5 years for three trauma and chronic DRGs. Some chronic diseases require risk-adjusted capitation funding models for Australian Health Maintenance Organizations as an alternative to casemix funding. The use of Diagnostic Cost Groups can facilitate State and Federal government reform via new population-based risk adjusted funding models that measure health need.

  11. Numerical solution of the wave equation with variable wave speed on nonconforming domains by high-order difference potentials

    NASA Astrophysics Data System (ADS)

    Britt, S.; Tsynkov, S.; Turkel, E.

    2018-02-01

    We solve the wave equation with variable wave speed on nonconforming domains with fourth order accuracy in both space and time. This is accomplished using an implicit finite difference (FD) scheme for the wave equation and solving an elliptic (modified Helmholtz) equation at each time step with fourth order spatial accuracy by the method of difference potentials (MDP). High-order MDP utilizes compact FD schemes on regular structured grids to efficiently solve problems on nonconforming domains while maintaining the design convergence rate of the underlying FD scheme. Asymptotically, the computational complexity of high-order MDP scales the same as that for FD.
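
    The paper's scheme is implicit, fourth-order, and handles nonconforming domains via difference potentials. As a much simpler baseline, the standard explicit second-order leapfrog scheme for u_tt = c²u_xx on a conforming 1-D grid can be sketched as follows; at Courant number cΔt/Δx = 1 it reproduces the standing-wave solution u = sin(πx)cos(πct) essentially exactly:

```python
import math

def leapfrog_wave(nx=50, steps=100, c=1.0):
    """Second-order leapfrog for u_tt = c^2 u_xx on [0,1] with u = 0 at both ends.
    Returns the max-norm error against the exact standing wave at the final time."""
    dx = 1.0 / nx
    dt = dx / c                       # Courant number r = c*dt/dx = 1
    r2 = (c * dt / dx) ** 2
    x = [j * dx for j in range(nx + 1)]
    u_prev = [math.sin(math.pi * xj) for xj in x]          # u(x, 0)
    # Taylor first step using u_t(x, 0) = 0:
    u = [u_prev[j] + 0.5 * r2 * (u_prev[j + 1] - 2 * u_prev[j] + u_prev[j - 1])
         if 0 < j < nx else 0.0 for j in range(nx + 1)]
    for _ in range(steps - 1):
        u_next = [0.0] * (nx + 1)
        for j in range(1, nx):
            u_next[j] = 2 * u[j] - u_prev[j] + r2 * (u[j + 1] - 2 * u[j] + u[j - 1])
        u_prev, u = u, u_next
    t = steps * dt
    exact = [math.sin(math.pi * xj) * math.cos(math.pi * c * t) for xj in x]
    return max(abs(a - b) for a, b in zip(u, exact))

print(leapfrog_wave())
```

    Variable wave speed, fourth-order accuracy, and curvilinear boundaries are precisely what break this simple picture and motivate the implicit MDP machinery.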

  12. Read margin analysis of crossbar arrays using the cell-variability-aware simulation method

    NASA Astrophysics Data System (ADS)

    Sun, Wookyung; Choi, Sujin; Shin, Hyungsoon

    2018-02-01

    This paper proposes a new concept of read margin analysis of crossbar arrays using cell-variability-aware simulation. The size of the crossbar array should be considered to predict the read margin characteristic of the crossbar array because the read margin depends on the number of word lines and bit lines. However, an excessively high CPU time is required to simulate large arrays using a commercial circuit simulator. A variability-aware MATLAB simulator that considers independent variability sources is developed to analyze the characteristics of the read margin according to the array size. The developed MATLAB simulator provides an effective method for reducing the simulation time while maintaining the accuracy of the read margin estimation in the crossbar array. The simulation is also highly efficient in analyzing the characteristics of the crossbar memory array considering the statistical variations in the cell characteristics.
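
    The array-size dependence of the read margin can be reproduced with a textbook worst-case sneak-path model (a crude approximation, not the paper's variability-aware simulator): reading one high-resistance (HRS) cell while every other cell sits in the low-resistance state (LRS), the sneak paths form three resistor groups in series, and the margin between reading HRS and LRS shrinks as the array grows. Resistance values below are hypothetical:

```python
def parallel(a, b):
    """Equivalent resistance of two resistors in parallel."""
    return a * b / (a + b)

def read_resistance(r_cell, r_lrs, m, n):
    """Selected cell in parallel with the worst-case sneak path through
    (n-1) LRS cells, then (n-1)(m-1) cells, then (m-1) cells, in series."""
    r_sneak = r_lrs / (n - 1) + r_lrs / ((n - 1) * (m - 1)) + r_lrs / (m - 1)
    return parallel(r_cell, r_sneak)

def read_margin(r_lrs, r_hrs, m, n):
    """Normalized resistance difference between reading an HRS and an LRS cell."""
    r_hi = read_resistance(r_hrs, r_lrs, m, n)
    r_lo = read_resistance(r_lrs, r_lrs, m, n)
    return (r_hi - r_lo) / r_hi

r_lrs, r_hrs = 1e4, 1e6          # hypothetical LRS/HRS resistances (ohms)
for size in (8, 32, 128):
    print(size, round(read_margin(r_lrs, r_hrs, size, size), 3))
```

    A variability-aware simulator layers statistical cell-to-cell variation on top of this deterministic picture, which is why Monte Carlo over large arrays becomes expensive.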

  13. Plasticity in physiological traits in conifers: implications for response to climate change in the western U.S.

    PubMed

    Grulke, N E

    2010-06-01

    Population variation in ecophysiological traits of four co-occurring montane conifers was measured on a large latitudinal gradient to quantitatively assess their potential for response to environmental change. White fir (Abies concolor) had the highest variability, gross photosynthetic rate (Pg), and foliar carbon (C) and nitrogen (N) content. Despite low water use efficiency (WUE), stomatal conductance (gs) of fir was the most responsive to unfavorable environmental conditions. Pinus lambertiana exhibited the least variability in Pg and WUE, and is likely to be the most vulnerable to environmental changes. Pinus ponderosa had an intermediate level of variability, and high needle growth at its higher elevational limits. Pinus jeffreyi also had intermediate variability, but high needle growth at its southern latitudinal and lower elevational limits. The attributes used to assess tree vigor were effective in predicting population vulnerability to abiotic (drought) and biotic (herbivore) stresses.

  14. Efficiently sampling conformations and pathways using the concurrent adaptive sampling (CAS) algorithm

    NASA Astrophysics Data System (ADS)

    Ahn, Surl-Hee; Grate, Jay W.; Darve, Eric F.

    2017-08-01

    Molecular dynamics simulations are useful in obtaining thermodynamic and kinetic properties of bio-molecules, but they are limited by the time scale barrier. That is, we may not obtain properties efficiently because we need to run microseconds or longer simulations using femtosecond time steps. To overcome this time scale barrier, we can use the weighted ensemble (WE) method, a powerful enhanced sampling method that efficiently samples thermodynamic and kinetic properties. However, the WE method requires an appropriate partitioning of phase space into discrete macrostates, which can be problematic when we have a high-dimensional collective space or when little is known a priori about the molecular system. Hence, we developed a new WE-based method, called the "Concurrent Adaptive Sampling (CAS) algorithm," to tackle these issues. The CAS algorithm is not constrained to use only one or two collective variables, unlike most reaction coordinate-dependent methods. Instead, it can use a large number of collective variables and adaptive macrostates to enhance the sampling in the high-dimensional space. This is especially useful for systems in which we do not know what the right reaction coordinates are, in which case we can use many collective variables to sample conformations and pathways. In addition, a clustering technique based on the committor function is used to accelerate sampling the slowest process in the molecular system. In this paper, we introduce the new method and show results from two-dimensional models and bio-molecules, specifically penta-alanine and a triazine trimer.
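
    The WE machinery the CAS algorithm builds on can be sketched in a few lines: walkers carry statistical weights, and each resampling step splits walkers in under-populated bins and merges them in over-populated ones while conserving total weight. This is a minimal one-bin sketch with a deterministic merge rule standing in for the usual weighted random selection, not the CAS adaptive-macrostate logic:

```python
def resample_bin(walkers, target):
    """Split/merge one bin's (position, weight) walkers to `target` count,
    conserving the bin's total weight."""
    walkers = sorted(walkers, key=lambda w: w[1])
    while len(walkers) > target:              # merge the two lightest walkers
        (x1, w1), (x2, w2) = walkers[0], walkers[1]
        keep = x1 if w1 >= w2 else x2         # deterministic stand-in for the
        walkers = walkers[2:] + [(keep, w1 + w2)]  # usual weighted random pick
        walkers.sort(key=lambda w: w[1])
    while len(walkers) < target:              # split the heaviest walker
        x, w = walkers.pop()
        walkers += [(x, w / 2), (x, w / 2)]
        walkers.sort(key=lambda w: w[1])
    return walkers

bin_a = [(0.1, 0.4), (0.2, 0.3), (0.15, 0.2), (0.12, 0.1)]  # 4 walkers -> 2
bin_b = [(0.9, 0.05)]                                       # 1 walker  -> 2
out_a = resample_bin(bin_a, 2)
out_b = resample_bin(bin_b, 2)
print(sum(w for _, w in out_a), sum(w for _, w in out_b))
```

    Weight conservation is what keeps the resampled ensemble an unbiased estimator of the underlying distribution; CAS adds adaptive bin placement in many collective variables on top of this step.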

  15. Impact of public transportation market share and other transportation and environmental policy variables on sustainable transportation.

    DOT National Transportation Integrated Search

    2015-01-01

    Policies that encourage reduced travel, such as traveling shorter distances, and increased use of more efficient transportation modes, such as public transportation and high-occupancy private automobiles, are often considered one of several possible ...

  16. High-throughput assay for optimising microbial biological control agent production and delivery

    USDA-ARS?s Scientific Manuscript database

    Lack of technologies to produce and deliver effective biological control agents (BCAs) is a major barrier to their commercialization. A myriad of variables associated with BCA cultivation, formulation, drying, storage, and reconstitution processes complicates agent quality maximization. An efficie...

  17. Variable Pitch Darrieus Water Turbines

    NASA Astrophysics Data System (ADS)

    Kirke, Brian; Lazauskas, Leo

    In recent years the Darrieus wind turbine concept has been adapted for use in water, either as a hydrokinetic turbine converting the kinetic energy of a moving fluid in open flow like an underwater wind turbine, or in a low head or ducted arrangement where flow is confined, streamtube expansion is controlled and efficiency is not subject to the Betz limit. Conventional fixed pitch Darrieus turbines suffer from two drawbacks, (i) low starting torque and (ii) shaking due to cyclical variations in blade angle of attack. Ventilation and cavitation can also cause problems in water turbines when blade velocities are high. Shaking can be largely overcome by the use of helical blades, but these do not produce large starting torque. Variable pitch can produce high starting torque and high efficiency, and by suitable choice of pitch regime, shaking can be minimized but not entirely eliminated. Ventilation can be prevented by avoiding operation close to a free surface, and cavitation can be prevented by limiting blade velocities. This paper summarizes recent developments in Darrieus water turbines, some problems and some possible solutions.

  18. Heteroleptic Copper(I)-Based Complexes for Photocatalysis: Combinatorial Assembly, Discovery, and Optimization.

    PubMed

    Minozzi, Clémentine; Caron, Antoine; Grenier-Petel, Jean-Christophe; Santandrea, Jeffrey; Collins, Shawn K

    2018-05-04

    A library of 50 copper-based complexes derived from bisphosphines and diamines was prepared and evaluated in three mechanistically distinct photocatalytic reactions. In all cases, a copper-based catalyst was identified to afford high yields, where new heteroleptic complexes derived from the bisphosphine BINAP displayed high efficiency across all reaction types. Importantly, the evaluation of the library of copper complexes revealed that even when photophysical data is available, it is not always possible to predict which catalyst structure will be efficient or inefficient in a given process, emphasizing the advantages for catalyst structures with high modularity and structural variability.

  19. High-efficiency cell concepts on low-cost silicon sheets

    NASA Technical Reports Server (NTRS)

    Bell, R. O.; Ravi, K. V.

    1985-01-01

    The limitations of sheet growth material in terms of defect structure and minority carrier lifetime are discussed, and the effects of various defects on performance are estimated. Given these limitations, designs for a sheet growth cell that make the best of the material characteristics are proposed, with the aim of achieving optimum synergy between base material quality and device processing variables. A strong coupling exists between material quality, the variables of crystal growth, and device processing variables. Two objectives are outlined: (1) optimization of this coupling for maximum performance at minimal cost; and (2) decoupling of materials from processing by improving base material quality so that it is less sensitive to processing variables.

  20. Microminiature thermionic converters

    DOEpatents

    King, Donald B.; Sadwick, Laurence P.; Wernsman, Bernard R.

    2001-09-25

    Microminiature thermionic converters (MTCs) having high energy-conversion efficiencies and variable operating temperatures. Methods of manufacturing those converters using semiconductor integrated circuit fabrication and micromachine manufacturing techniques are also disclosed. The MTCs of the invention incorporate cathode to anode spacing of about 1 micron or less and use cathode and anode materials having work functions ranging from about 1 eV to about 3 eV. Existing prior art thermionic converter technology has energy conversion efficiencies ranging from 5-15%. The MTCs of the present invention have maximum efficiencies of just under 30%, and thousands of the devices can be fabricated at modest costs.

  1. Multilayer dielectric diffraction gratings

    DOEpatents

    Perry, Michael D.; Britten, Jerald A.; Nguyen, Hoang T.; Boyd, Robert; Shore, Bruce W.

    1999-01-01

    The design and fabrication of dielectric grating structures with high diffraction efficiency used in reflection or transmission is described. By forming a multilayer structure of alternating index dielectric materials and placing a grating structure on top of the multilayer, a diffraction grating of adjustable efficiency, and variable optical bandwidth can be obtained. Diffraction efficiency into the first order in reflection varying between 1 and 98 percent has been achieved by controlling the design of the multilayer and the depth, shape, and material comprising the grooves of the grating structure. Methods for fabricating these gratings without the use of ion etching techniques are described.

  2. Multilayer dielectric diffraction gratings

    DOEpatents

    Perry, M.D.; Britten, J.A.; Nguyen, H.T.; Boyd, R.; Shore, B.W.

    1999-05-25

    The design and fabrication of dielectric grating structures with high diffraction efficiency used in reflection or transmission is described. By forming a multilayer structure of alternating index dielectric materials and placing a grating structure on top of the multilayer, a diffraction grating of adjustable efficiency, and variable optical bandwidth can be obtained. Diffraction efficiency into the first order in reflection varying between 1 and 98 percent has been achieved by controlling the design of the multilayer and the depth, shape, and material comprising the grooves of the grating structure. Methods for fabricating these gratings without the use of ion etching techniques are described. 7 figs.

  3. Tracking of Indels by DEcomposition is a Simple and Effective Method to Assess Efficiency of Guide RNAs in Zebrafish.

    PubMed

    Etard, Christelle; Joshi, Swarnima; Stegmaier, Johannes; Mikut, Ralf; Strähle, Uwe

    2017-12-01

    A bottleneck in CRISPR/Cas9 genome editing is the variable efficiency of in silico-designed gRNAs. We evaluated the sensitivity of the TIDE method (Tracking of Indels by DEcomposition) introduced by Brinkman et al. in 2014 for assessing the cutting efficiencies of gRNAs in zebrafish. We show that this simple method, which involves bulk polymerase chain reaction amplification and Sanger sequencing, is highly effective in tracking well-performing gRNAs in pools of genomic DNA derived from injected embryos. The method is equally effective for tracing INDELs in heterozygotes.
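
    The idea behind TIDE can be sketched as a small decomposition problem: the Sanger trace of an edited pool is modeled as a weighted mixture of the wild-type signal and copies of it shifted by each indel size, and the weights (indel fractions) are found by minimizing the squared residual. This is a toy 1-D grid search over a single 1-bp indel, not the published algorithm, and the trace values are invented:

```python
def shifted(signal, k):
    """Wild-type signal delayed by k positions (k >= 0), zero-padded at the front."""
    return [0.0] * k + signal[: len(signal) - k]

def mix(wt, fractions):
    """Weighted sum of shifted copies; fractions[k] is the indel-k fraction."""
    out = [0.0] * len(wt)
    for k, f in enumerate(fractions):
        for i, v in enumerate(shifted(wt, k)):
            out[i] += f * v
    return out

def fit_fraction(trace, wt, step=0.01):
    """Grid-search the 1-bp-indel fraction f minimizing the squared error of
    (1-f)*wt + f*shifted(wt, 1) against the observed trace."""
    best_f, best_err = 0.0, float("inf")
    f = 0.0
    while f <= 1.0:
        model = mix(wt, [1 - f, f])
        err = sum((m - t) ** 2 for m, t in zip(model, trace))
        if err < best_err:
            best_f, best_err = f, err
        f = round(f + step, 10)
    return best_f

wt = [1.0, 3.0, 2.0, 5.0, 4.0, 1.0, 2.0, 3.0]   # toy wild-type trace
observed = mix(wt, [0.7, 0.3])                   # 30% of reads carry a 1-bp indel
print(fit_fraction(observed, wt))
```

    The real method decomposes downstream-of-cut trace windows over a range of indel sizes by least squares and reports the significant fractions as the editing efficiency.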

  4. A simulation study of turbofan engine deterioration estimation using Kalman filtering techniques

    NASA Technical Reports Server (NTRS)

    Lambert, Heather H.

    1991-01-01

    Deterioration of engine components may cause off-normal engine operation. The result is an unnecessary loss of performance, because the fixed control schedules are designed to accommodate a wide range of engine health. These fixed control schedules may not be optimal for a deteriorated engine. This problem may be solved by including a measure of deterioration in determining the control variables. These engine deterioration parameters usually cannot be measured directly but can be estimated. A Kalman filter design is presented for estimating two performance parameters that account for engine deterioration: high and low pressure turbine delta efficiencies. The delta efficiency parameters model variations of the high and low pressure turbine efficiencies from nominal values. The filter has a design condition of Mach 0.90, 30,000 ft altitude, and 47 deg power level angle (PLA). It was evaluated using a nonlinear simulation of the F100 engine model derivative (EMD) engine, at the design Mach number and altitude over a PLA range of 43 to 55 deg. It was found that a known high pressure turbine delta efficiency of -2.5 percent and a low pressure turbine delta efficiency of -1.0 percent can be estimated with an accuracy of + or - 0.25 percent efficiency with a Kalman filter. If both the high and low pressure turbines are deteriorated, delta efficiencies of -2.5 percent on both turbines can be estimated with the same accuracy.
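
    In the simplest setting, the estimation problem reduces to tracking a nearly constant offset through noisy observations. A scalar Kalman filter sketch, with an illustrative random-walk state model and invented noise levels (not the F100 EMD filter design):

```python
import random

def kalman_constant(measurements, meas_var, process_var=1e-6, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a (nearly) constant state observed in noise."""
    x, p = x0, p0
    for z in measurements:
        p += process_var                 # predict: random-walk state model
        k = p / (p + meas_var)           # Kalman gain
        x += k * (z - x)                 # update with the innovation
        p *= (1 - k)
    return x

random.seed(1)
true_delta_hp = -2.5                     # percent; assumed deterioration level
noisy = [true_delta_hp + random.gauss(0, 0.5) for _ in range(200)]
estimate = kalman_constant(noisy, meas_var=0.5 ** 2)
print(round(estimate, 2))
```

    The flight filter is the multivariate analogue: the two delta efficiencies form the state vector, and the measurements are engine sensor outputs related to the state through a linearized engine model.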

  5. Evaluation of the potential of the Stirling engine for heavy duty application

    NASA Technical Reports Server (NTRS)

    Meijer, R. J.; Ziph, B.

    1981-01-01

    A 150 hp four cylinder heavy duty Stirling engine was evaluated. The engine uses a variable stroke power control system, swashplate drive and ceramic insulation. The sensitivity of the design to engine size and heater temperature is investigated. Optimization shows that, with porous ceramics, indicated efficiencies as high as 52% can be achieved. It is shown that the gain in engine efficiency becomes insignificant when the heater temperature is raised above 200 degrees F.

  6. Sleep variability and cardiac autonomic modulation in adolescents – Penn State Child Cohort (PSCC) study

    PubMed Central

    Rodríguez-Colón, Sol M.; He, Fan; Bixler, Edward O.; Fernandez-Mendoza, Julio; Vgontzas, Alexandros N.; Calhoun, Susan; Zheng, Zhi-Jie; Liao, Duanping

    2015-01-01

    Objective To investigate the effects of objectively measured habitual sleep patterns on cardiac autonomic modulation (CAM) in a population-based sample of adolescents. Methods We used data from 421 adolescents who completed the follow-up examination in the Penn State Child Cohort study. CAM was assessed by heart rate (HR) variability (HRV) analysis of beat-to-beat normal R-R intervals from a 39-h electrocardiogram, on a 30-min basis. The HRV indices included frequency domain (HF, LF, and LF/HF ratio) and time domain (SDNN, RMSSD, and heart rate or HR) variables. Actigraphy was used for seven consecutive nights to estimate nightly sleep duration and time in bed. The seven-night means (SDs) of sleep duration and sleep efficiency were used to represent sleep duration, duration variability, sleep efficiency, and efficiency variability, respectively. HF and LF were log-transformed for statistical analysis. Linear mixed-effect models were used to analyze the association between sleep patterns and CAM. Results After adjusting for major confounders, increased sleep duration variability and efficiency variability were significantly associated with lower HRV and higher HR during the 39-h recording, as well as separately for daytime and nighttime. For instance, a 1-h increase in sleep duration variability was associated with −0.14 (0.04), −0.12 (0.06), and −0.16 (0.05) ms² decreases in total, daytime, and nighttime HF, respectively. No associations were found between sleep duration or sleep efficiency and HRV. Conclusion Higher habitual sleep duration variability and efficiency variability are associated with lower HRV and higher HR, suggesting that an irregular sleep pattern has an adverse impact on CAM, even in healthy adolescents. PMID:25555635
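    The exposure definition above (seven-night mean for habitual level, seven-night SD for variability) is easy to make concrete. The sketch below uses simulated nightly sleep durations; the subject count and hours are illustrative assumptions, and sleep efficiency would be summarised the same way.

```python
import numpy as np

rng = np.random.default_rng(6)
n_subjects, n_nights = 5, 7
# simulated actigraphy-derived nightly sleep duration, in hours
hours = rng.normal(8.0, 0.8, size=(n_subjects, n_nights))

duration = hours.mean(axis=1)             # habitual sleep duration per subject
variability = hours.std(axis=1, ddof=1)   # night-to-night variability (SD)

for d, v in zip(duration, variability):
    print(f"mean {d:.2f} h, SD {v:.2f} h")
```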

  7. A stochastic frontier approach to study the relationship between gastrointestinal nematode infections and technical efficiency of dairy farms.

    PubMed

    van der Voort, Mariska; Van Meensel, Jef; Lauwers, Ludwig; Vercruysse, Jozef; Van Huylenbroeck, Guido; Charlier, Johannes

    2014-01-01

    The impact of gastrointestinal (GI) nematode infections in dairy farming has traditionally been assessed using partial productivity indicators, but such approaches ignore the impact of infection on the performance of the whole farm. In this study, efficiency analysis was used to study the association between exposure to the GI nematode Ostertagia ostertagi and the technical efficiency of dairy farms. Five years of accountancy data were linked to GI nematode infection data obtained from a longitudinal parasitic monitoring campaign. The level of exposure to GI nematodes was based on bulk-tank milk ELISA tests, which measure antibodies to O. ostertagi, and was expressed as an optical density ratio (ODR). Two unbalanced data panels were created for the period 2006 to 2010. The first data panel contained 198 observations from the Belgian Farm Accountancy Data Network (Brussels, Belgium) and the second contained 622 observations from the Boerenbond Flemish farmers' union (Leuven, Belgium) accountancy system (Tiber Farm Accounting System). We used the stochastic frontier analysis approach and defined inefficiency effect models specified with the Cobb-Douglas and transcendental logarithmic (Translog) functional forms. To assess the efficiency scores, milk production was considered the main output variable. Six input variables were used: concentrates, roughage, pasture, number of dairy cows, animal health costs, and labor. The ODR of each individual farm served as an explanatory variable of inefficiency. An increase in the level of exposure to GI nematodes was associated with a decrease in technical efficiency. Exposure to GI nematodes constrains the productivity of pasture, health, and labor but does not cause inefficiency in the use of concentrates, roughage, and dairy cows.
Lowering the level of infection in the interquartile range (0.271 ODR) was associated with an average milk production increase of 27, 19, and 9 L/cow per year for Farm Accountancy Data Network farms and 63, 49, and 23 L/cow per year for Tiber Farm Accounting System farms in the low- (0-90), medium- (90-95), and high- (95-99) efficiency score groups, respectively. The potential milk increase associated with reducing the level of infection was higher for highly efficient farms (6.7% of the total possible milk increase when becoming fully technically efficient) than for less efficient farms (3.8% of the total possible milk increase when becoming fully technically efficient). Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
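    The Cobb-Douglas production frontier named in this abstract can be illustrated with a minimal corrected-OLS (COLS) sketch: ln(output) = b0 + Σk bk·ln(xk) − u, with technical efficiency TE = exp(−u). COLS is a simplified stand-in for the maximum-likelihood stochastic frontier estimation the study actually uses, and the data, input count, and coefficients below are simulated assumptions, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# two illustrative inputs standing in for the study's six (e.g. cows, concentrates)
ln_x = rng.normal(0.0, 0.5, size=(n, 2))
u = rng.exponential(0.15, size=n)           # one-sided inefficiency term
ln_y = 1.0 + 0.6 * ln_x[:, 0] + 0.3 * ln_x[:, 1] - u   # log milk output

# OLS fit of the average production function
X = np.column_stack([np.ones(n), ln_x])
beta, *_ = np.linalg.lstsq(X, ln_y, rcond=None)

# COLS: shift the intercept so the frontier envelops the data from above
resid = ln_y - X @ beta
u_hat = resid.max() - resid                 # estimated inefficiency per farm
te = np.exp(-u_hat)                         # technical efficiency in (0, 1]

print(beta.round(2), round(float(te.mean()), 2))
```

    In the study, the farm-level ODR would then enter as an explanatory variable of the inefficiency term u.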

  8. Effect of task-oriented training and high-variability practice on gross motor performance and activities of daily living in children with spastic diplegia.

    PubMed

    Kwon, Hae-Yeon; Ahn, So-Yoon

    2016-10-01

    [Purpose] This study investigates how a task-oriented training and high-variability practice program affects gross motor performance and activities of daily living in children with spastic diplegia, and provides an effective and reliable clinical database for the future improvement of motor performance skills. [Subjects and Methods] This study randomly assigned seven children with spastic diplegia to each of three intervention groups: a control group, a task-oriented training group, and a high-variability practice group. The control group received only neurodevelopmental treatment for 40 minutes, while the other two intervention groups additionally implemented a task-oriented training and high-variability practice program for 8 weeks (twice a week, 60 min per session). To compare intra- and inter-group differences among the three groups, this study measured the gross motor performance measure (GMPM) and the functional independence measure for children (WeeFIM) before and after 8 weeks of training. [Results] There were statistically significant differences in the amount of change before and after training among the three intervention groups for the gross motor performance measure and the functional independence measure. [Conclusion] Applying high-variability practice in a task-oriented training course may be an efficient intervention method for improving motor performance skills, which can be tuned to the movements necessary for daily living through motor experience, the learning of new skills, and the transfer of learned tasks to complex environments or situations similar to high-variability practice.

  9. Statistical properties of mean stand biomass estimators in a LIDAR-based double sampling forest survey design.

    Treesearch

    H.E. Anderson; J. Breidenbach

    2007-01-01

    Airborne laser scanning (LIDAR) can be a valuable tool in double-sampling forest survey designs. LIDAR-derived forest structure metrics are often highly correlated with important forest inventory variables, such as mean stand biomass, and LIDAR-based synthetic regression estimators have the potential to be highly efficient compared to single-stage estimators, which...

  10. Developing an in silico model of the modulation of base excision repair using methoxyamine for more targeted cancer therapeutics.

    PubMed

    Gurkan-Cavusoglu, Evren; Avadhani, Sriya; Liu, Lili; Kinsella, Timothy J; Loparo, Kenneth A

    2013-04-01

    Base excision repair (BER) is a major DNA repair pathway involved in the processing of exogenous non-bulky base damages from certain classes of cancer chemotherapy drugs as well as ionising radiation (IR). Methoxyamine (MX) is a small molecule chemical inhibitor of BER that is shown to enhance chemotherapy and/or IR cytotoxicity in human cancers. In this study, the authors have analysed the inhibitory effect of MX on the BER pathway kinetics using a computational model of the repair pathway. The inhibitory effect of MX depends on the BER efficiency. The authors have generated variable efficiency groups using different sets of protein concentrations generated by Latin hypercube sampling, and they have clustered simulation results into high, medium and low efficiency repair groups. From analysis of the inhibitory effect of MX on each of the three groups, it is found that the inhibition is most effective for high efficiency BER, and least effective for low efficiency repair.
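    The sampling-and-clustering step described above can be sketched as follows: draw protein-concentration sets by Latin hypercube sampling, score each set with a repair-efficiency function, and split the results into high/medium/low groups. The three-protein setup, concentration range, and saturating-kinetics efficiency function are stand-in assumptions for illustration, not the authors' BER pathway model.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 300, 3                       # 300 samples of 3 protein concentrations

# Latin hypercube: one sample per equal-probability stratum, per dimension
lhs = (rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
       + rng.random((n, d))) / n    # each column is a permuted, jittered grid

conc = 0.1 + 9.9 * lhs              # scale to an assumed 0.1-10 uM range

def repair_efficiency(c):
    # stand-in saturating kinetics: efficiency limited by the scarcest protein
    return np.min(c / (1.0 + c), axis=1)

eff = repair_efficiency(conc)
lo, hi = np.quantile(eff, [1 / 3, 2 / 3])
group = np.where(eff < lo, "low", np.where(eff < hi, "medium", "high"))

print({g: int((group == g).sum()) for g in ("low", "medium", "high")})
```

    The inhibitor's effect would then be simulated separately within each efficiency group, as in the study.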

  11. Methodology for the optimal design of an integrated first and second generation ethanol production plant combined with power cogeneration.

    PubMed

    Bechara, Rami; Gomez, Adrien; Saint-Antonin, Valérie; Schweitzer, Jean-Marc; Maréchal, François

    2016-08-01

    The application of methodologies for the optimal design of integrated processes has seen increased interest in the literature. This article builds on previous works and applies a systematic methodology to an integrated first and second generation ethanol production plant with power cogeneration. The methodology comprises process simulation, heat integration, thermo-economic evaluation, multi-variable evolutionary optimization of exergy efficiency versus capital costs, and process selection via profitability maximization. Optimization generated Pareto solutions with exergy efficiency ranging between 39.2% and 44.4% and capital costs from 210 M$ to 390 M$. The Net Present Value was positive for only two scenarios, both low-efficiency, low-hydrolysis points. The minimum cellulosic ethanol selling price was sought to obtain a maximum NPV of zero for high-efficiency, high-hydrolysis alternatives. The obtained optimal configuration presented maximum exergy efficiency, hydrolyzed bagasse fraction, capital costs, and ethanol production rate, and minimum cooling water consumption and power production rate. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. [Multilevel analysis of the technical efficiency of hospitals in the Spanish National Health System by ownership and type of management].

    PubMed

    Pérez-Romero, Carmen; Ortega-Díaz, M Isabel; Ocaña-Riola, Ricardo; Martín-Martín, José Jesús

    2018-05-11

    To analyze the technical efficiency of general hospitals in the Spanish National Health System (2010-2012) by type of ownership and management, and to identify hospital and regional explanatory variables. 230 hospitals were analyzed by combining data envelopment analysis and fixed-effects multilevel linear models. Data envelopment analysis measured overall, technical and scale efficiency, and the analysis of explanatory factors was performed using multilevel models. The average overall technical efficiency of hospitals without legal personality is lower than that of hospitals with legal personality (0.691 and 0.876 in 2012). There is significant variability in efficiency under variable returns (TE) across direct, indirect and mixed forms of management. 29% of the variability in TE is attributable to the region. Legal personality increased the TE of hospitals by 11.14 points. Most forms of management (other than that of traditional hospitals) increased TE in varying percentages. At the regional level, according to the model considered, insularity and average annual income per household are explanatory variables of TE. Having legal personality favours technical efficiency. The regulatory and management framework of hospitals, more than public or private ownership, seems to explain technical efficiency. Regional characteristics explain the variability in TE. Copyright © 2018 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.
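    Data envelopment analysis under variable returns to scale, as used above, reduces to one linear program per hospital. The sketch below solves the input-oriented BCC model for a made-up dataset of 5 hospitals with 2 inputs and 1 output; it illustrates the LP, not the study's 230-hospital analysis.

```python
import numpy as np
from scipy.optimize import linprog

# hypothetical data: rows are hospitals
X = np.array([[20.0, 300], [30, 200], [40, 100], [35, 250], [50, 400]])  # inputs
Y = np.array([[100.0], [100], [100], [90], [120]])                       # outputs
n, m = X.shape
s = Y.shape[1]

def bcc_efficiency(o):
    # decision variables: theta, lambda_1..lambda_n ; minimize theta
    c = np.r_[1.0, np.zeros(n)]
    # X^T lambda - theta * x_o <= 0  (peer input use within theta * own inputs)
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    b_in = np.zeros(m)
    # -Y^T lambda <= -y_o  (peers produce at least own output)
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[o]
    # sum lambda = 1  (variable returns to scale)
    A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  A_eq=A_eq, b_eq=[1.0], bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

scores = np.array([bcc_efficiency(o) for o in range(n)])
print(scores.round(3))   # a score of 1.0 marks a technically efficient hospital
```

    Dropping the sum-to-one constraint gives the constant-returns (CCR) model; the ratio of the two scores is the scale efficiency the abstract mentions.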

  13. Poor phonetic perceivers are affected by cognitive load when resolving talker variability

    PubMed Central

    Antoniou, Mark; Wong, Patrick C. M.

    2015-01-01

    Speech training paradigms aim to maximise learning outcomes by manipulating external factors such as talker variability. However, not all individuals may benefit from such manipulations because subject-external factors interact with subject-internal ones (e.g., aptitude) to determine speech perception and/or learning success. In a previous tone learning study, high-aptitude individuals benefitted from talker variability, whereas low-aptitude individuals were impaired. Because increases in cognitive load have been shown to hinder speech perception in mixed-talker conditions, it has been proposed that resolving talker variability requires cognitive resources. This proposal leads to the hypothesis that low-aptitude individuals do not use their cognitive resources as efficiently as those with high aptitude. Here, high- and low-aptitude subjects identified pitch contours spoken by multiple talkers under high and low cognitive load conditions established by a secondary task. While high-aptitude listeners outperformed low-aptitude listeners across load conditions, only low-aptitude listeners were impaired by increased cognitive load. The findings suggest that low-aptitude listeners either have fewer available cognitive resources or are poorer at allocating attention to the signal. Therefore, cognitive load is an important factor when considering individual differences in speech perception and training paradigms. PMID:26328675

  14. Poor phonetic perceivers are affected by cognitive load when resolving talker variability.

    PubMed

    Antoniou, Mark; Wong, Patrick C M

    2015-08-01

    Speech training paradigms aim to maximise learning outcomes by manipulating external factors such as talker variability. However, not all individuals may benefit from such manipulations because subject-external factors interact with subject-internal ones (e.g., aptitude) to determine speech perception and/or learning success. In a previous tone learning study, high-aptitude individuals benefitted from talker variability, whereas low-aptitude individuals were impaired. Because increases in cognitive load have been shown to hinder speech perception in mixed-talker conditions, it has been proposed that resolving talker variability requires cognitive resources. This proposal leads to the hypothesis that low-aptitude individuals do not use their cognitive resources as efficiently as those with high aptitude. Here, high- and low-aptitude subjects identified pitch contours spoken by multiple talkers under high and low cognitive load conditions established by a secondary task. While high-aptitude listeners outperformed low-aptitude listeners across load conditions, only low-aptitude listeners were impaired by increased cognitive load. The findings suggest that low-aptitude listeners either have fewer available cognitive resources or are poorer at allocating attention to the signal. Therefore, cognitive load is an important factor when considering individual differences in speech perception and training paradigms.

  15. Effect of feeding silkworm on growth performance and feed efficiency of snakehead (Channa striata)

    NASA Astrophysics Data System (ADS)

    Firmani, U.; Lono

    2018-04-01

    The snakehead, Channa striata, is a carnivorous freshwater fish widely distributed in Asia. High demand for this fish has prompted many aquaculturists to culture C. striata. Feed is an important factor for fish growth. Silkworm has a high protein content and low fat, and can be used as a natural feed for finfish. This study investigates silkworm feed in C. striata. The treatments in this research were A (100% pellet); B (100% silkworm); C (75% pellet and 25% silkworm); D (50% pellet and 50% silkworm); and E (25% pellet and 75% silkworm). The variables measured in this study were relative growth, specific growth rate, feed efficiency, feed conversion ratio, and survival rate. The results show that silkworm gave higher growth performance, feed efficiency and survival rate of the snakehead (Channa striata) compared with the control.

  16. PP-SWAT: A Python-based computing software for efficient multiobjective calibration of SWAT

    USDA-ARS?s Scientific Manuscript database

    With enhanced data availability, distributed watershed models for large areas with high spatial and temporal resolution are increasingly used to understand water budgets and examine effects of human activities and climate change/variability on water resources. Developing parallel computing software...

  17. Impact of public transit market share and other transportation variables on GHG emissions : developing statistical models for aggregate predictions.

    DOT National Transportation Integrated Search

    2015-01-01

    Policies that encourage reduced travel, such as traveling shorter distances, and increased use of : more efficient transportation modes, such as public transportation and high-occupancy private : automobiles, are often considered one of several possi...

  18. Low vibration high numerical aperture automated variable temperature Raman microscope

    DOE PAGES

    Tian, Y.; Reijnders, A. A.; Osterhoudt, G. B.; ...

    2016-04-05

    Raman micro-spectroscopy is well suited for studying a variety of properties and has been applied to wide-ranging areas. Combined with tuneable temperature, Raman spectra can offer even more insights into the properties of materials. However, previous designs of variable temperature Raman microscopes have made it extremely challenging to measure samples with low signal levels due to thermal and positional instability as well as low collection efficiencies. Thus, contemporary Raman microscopes have found limited applicability to probing the subtle physics involved in phase transitions and hysteresis. This paper describes a new design of a closed-cycle Raman microscope with full polarization rotation. High collection efficiency and thermal and mechanical stability are ensured by deliberate optical, cryogenic, and mechanical design. Two samples, Bi2Se3 and V2O3, which are known to be challenging due to low thermal conductivities, low signal levels and/or hysteretic effects, are measured with previously undemonstrated temperature resolution.

  19. Low vibration high numerical aperture automated variable temperature Raman microscope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tian, Y.; Reijnders, A. A.; Osterhoudt, G. B.

    Raman micro-spectroscopy is well suited for studying a variety of properties and has been applied to wide-ranging areas. Combined with tuneable temperature, Raman spectra can offer even more insights into the properties of materials. However, previous designs of variable temperature Raman microscopes have made it extremely challenging to measure samples with low signal levels due to thermal and positional instability as well as low collection efficiencies. Thus, contemporary Raman microscopes have found limited applicability to probing the subtle physics involved in phase transitions and hysteresis. This paper describes a new design of a closed-cycle Raman microscope with full polarization rotation. High collection efficiency and thermal and mechanical stability are ensured by deliberate optical, cryogenic, and mechanical design. Two samples, Bi2Se3 and V2O3, which are known to be challenging due to low thermal conductivities, low signal levels and/or hysteretic effects, are measured with previously undemonstrated temperature resolution.

  20. A multi-fidelity framework for physics based rotor blade simulation and optimization

    NASA Astrophysics Data System (ADS)

    Collins, Kyle Brian

    New helicopter rotor designs are desired that offer increased efficiency, reduced vibration, and reduced noise. Rotor designers in industry need methods that allow them to use the most accurate simulation tools available to search for these optimal designs. Computer based rotor analysis and optimization have been advanced by the development of industry standard codes known as "comprehensive" rotorcraft analysis tools. These tools typically use table look-up aerodynamics, simplified inflow models and perform aeroelastic analysis using Computational Structural Dynamics (CSD). Due to the simplified aerodynamics, most design studies are performed varying structural related design variables like sectional mass and stiffness. The optimization of shape related variables in forward flight using these tools is complicated and results are viewed with skepticism because rotor blade loads are not accurately predicted. The most accurate methods of rotor simulation utilize Computational Fluid Dynamics (CFD) but have historically been considered too computationally intensive to be used in computer based optimization, where numerous simulations are required. An approach is needed where high fidelity CFD rotor analysis can be utilized in a shape variable optimization problem with multiple objectives. Any approach should be capable of working in forward flight in addition to hover. An alternative is proposed and founded on the idea that efficient hybrid CFD methods of rotor analysis are ready to be used in preliminary design. In addition, the proposed approach recognizes the usefulness of lower fidelity physics based analysis and surrogate modeling. Together, they are used with high fidelity analysis in an intelligent process of surrogate model building of parameters in the high fidelity domain. Closing the loop between high and low fidelity analysis is a key aspect of the proposed approach. 
This is done by using information from higher fidelity analysis to improve predictions made with lower fidelity models. This thesis documents the development of automated low and high fidelity physics based rotor simulation frameworks. The low fidelity framework uses a comprehensive code with simplified aerodynamics. The high fidelity model uses a parallel processor capable CFD/CSD methodology. Both low and high fidelity frameworks include an aeroacoustic simulation for prediction of noise. A synergistic process is developed that uses both the low and high fidelity frameworks together to build approximate models of important high fidelity metrics as functions of certain design variables. To test the process, a 4-bladed hingeless rotor model is used as a baseline. The design variables investigated include tip geometry and spanwise twist distribution. Approximation models are built for metrics related to rotor efficiency and vibration using the results from 60+ high fidelity (CFD/CSD) experiments and 400+ low fidelity experiments. Optimization using the approximation models found the Pareto Frontier anchor points, or the design having maximum rotor efficiency and the design having minimum vibration. Various Pareto generation methods are used to find designs on the frontier between these two anchor designs. When tested in the high fidelity framework, the Pareto anchor designs are shown to be very good designs when compared with other designs from the high fidelity database. This provides evidence that the process proposed has merit. Ultimately, this process can be utilized by industry rotor designers with their existing tools to bring high fidelity analysis into the preliminary design stage of rotors. In conclusion, the methods developed and documented in this thesis have made several novel contributions. First, an automated high fidelity CFD based forward flight simulation framework has been built for use in preliminary design optimization. 
The framework was built around an integrated, parallel processor capable CFD/CSD/AA process. Second, a novel method of building approximate models of high fidelity parameters has been developed. The method uses a combination of low and high fidelity results and combines Design of Experiments, statistical effects analysis, and aspects of approximation model management. And third, the determination of rotor blade shape variables through optimization using CFD based analysis in forward flight has been performed. This was done using the high fidelity CFD/CSD/AA framework and method mentioned above. While the low and high fidelity predictions methods used in the work still have inaccuracies that can affect the absolute levels of the results, a framework has been successfully developed and demonstrated that allows for an efficient process to improve rotor blade designs in terms of a selected choice of objective function(s). Using engineering judgment, this methodology could be applied today to investigate opportunities to improve existing designs. With improvements in the low and high fidelity prediction components that will certainly occur, this framework could become a powerful tool for future rotorcraft design work. (Abstract shortened by UMI.)
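    The "closing the loop" idea above, using high-fidelity results to correct low-fidelity predictions, can be sketched as an additive-correction multi-fidelity surrogate: evaluate a cheap model densely, train a low-order correction on a few expensive samples, and predict with the corrected model. Both model functions below are made-up one-dimensional stand-ins, not rotor analyses.

```python
import numpy as np

rng = np.random.default_rng(5)

def high_fidelity(x):             # expensive "truth" (assumed)
    return np.sin(2 * np.pi * x) + 0.3 * x

def low_fidelity(x):              # cheap, biased approximation (assumed)
    return np.sin(2 * np.pi * x)

x_hi = np.linspace(0, 1, 6)       # few costly high-fidelity runs
delta = high_fidelity(x_hi) - low_fidelity(x_hi)

# fit a low-order polynomial correction to the discrepancy
coef = np.polyfit(x_hi, delta, deg=2)

def surrogate(x):
    return low_fidelity(x) + np.polyval(coef, x)

x_test = rng.random(200)
err_raw = np.abs(low_fidelity(x_test) - high_fidelity(x_test)).max()
err_corrected = np.abs(surrogate(x_test) - high_fidelity(x_test)).max()
print(round(float(err_raw), 3), round(float(err_corrected), 6))
```

    In the thesis setting the "low fidelity" role is played by the comprehensive code and the "high fidelity" role by CFD/CSD, with design-of-experiments choosing where the expensive runs go.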

  1. Deposition efficiency of inhaled particles (15-5000 nm) related to breathing pattern and lung function: an experimental study in healthy children and adults.

    PubMed

    Rissler, Jenny; Gudmundsson, Anders; Nicklasson, Hanna; Swietlicki, Erik; Wollmer, Per; Löndahl, Jakob

    2017-04-08

    Exposure to airborne particles has a major impact on global health. The probability of these particles to deposit in the respiratory tract during breathing is essential for their toxic effects. Observations have shown that there is a substantial variability in deposition between subjects, not only due to respiratory diseases, but also among individuals with healthy lungs. The factors determining this variability are, however, not fully understood. In this study we experimentally investigate factors that determine individual differences in the respiratory tract depositions of inhaled particles for healthy subjects at relaxed breathing. The study covers particles of diameters 15-5000 nm and includes 67 subjects aged 7-70 years. A comprehensive examination of lung function was performed for all subjects. Principal component analyses and multiple regression analyses were used to explore the relationships between subject characteristics and particle deposition. A large individual variability in respiratory tract deposition efficiency was found. Individuals with high deposition of a certain particle size generally had high deposition for all particles <3500 nm. The individual variability was explained by two factors: breathing pattern, and lung structural and functional properties. The most important predictors were found to be breathing frequency and anatomical airway dead space. We also present a linear regression model describing the deposition based on four variables: tidal volume, breathing frequency, anatomical dead space and resistance of the respiratory system (the latter measured with impulse oscillometry). To understand why some individuals are more susceptible to airborne particles we must understand, and take into account, the individual variability in the probability of particles to deposit in the respiratory tract by considering not only breathing patterns but also adequate measures of relevant structural and functional properties.
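    A four-variable linear regression of the kind described above can be sketched with ordinary least squares. The simulated subjects, variable ranges, and coefficients below are illustrative assumptions; the paper's fitted model is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 67                                   # same subject count as the study

tidal_volume = rng.normal(0.7, 0.15, n)  # L (assumed range)
breath_freq = rng.normal(13, 3, n)       # breaths/min
dead_space = rng.normal(0.15, 0.03, n)   # L, anatomical dead space
resistance = rng.normal(0.35, 0.08, n)   # kPa*s/L, respiratory resistance

# assumed ground-truth relation for a synthetic deposition fraction
dep = (0.2 + 0.15 * tidal_volume - 0.008 * breath_freq
       - 0.4 * dead_space + 0.1 * resistance + rng.normal(0, 0.01, n))

X = np.column_stack([np.ones(n), tidal_volume, breath_freq,
                     dead_space, resistance])
coef, *_ = np.linalg.lstsq(X, dep, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((dep - pred) ** 2) / np.sum((dep - dep.mean()) ** 2)
print(coef.round(3), round(float(r2), 2))
```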

  2. Variability in clubhead presentation characteristics and ball impact location for golfers' drives.

    PubMed

    Betzler, Nils F; Monk, Stuart A; Wallace, Eric S; Otto, Steve R

    2012-01-01

    The purpose of the present study was to analyse the variability in clubhead presentation to the ball and the resulting ball impact location on the club face for a range of golfers of different ability. A total of 285 male and female participants hit multiple shots using one of four proprietary drivers. Self-reported handicap was used to quantify a participant's golfing ability. A bespoke motion capture system and user-written algorithms were used to track the clubhead just before and at impact, measuring clubhead speed, clubhead orientation, and impact location. A Doppler radar was used to measure golf ball speed. Generally, golfers of higher skill (lower handicap) generated increased clubhead speed and increased efficiency (ratio of ball speed to clubhead speed). Non-parametric statistical tests showed that low-handicap golfers exhibit significantly lower variability from shot to shot in clubhead speed, efficiency, impact location, attack angle, club path, and face angle compared with high-handicap golfers.

  3. Construction of nested maximin designs based on successive local enumeration and modified novel global harmony search algorithm

    NASA Astrophysics Data System (ADS)

    Yi, Jin; Li, Xinyu; Xiao, Mi; Xu, Junnan; Zhang, Lin

    2017-01-01

    Engineering design often involves different types of simulation, which results in expensive computational costs. Variable fidelity approximation-based design optimization approaches can realize effective simulation and efficiency optimization of the design space using approximation models with different levels of fidelity and have been widely used in different fields. As the foundations of variable fidelity approximation models, the selection of sample points of variable-fidelity approximation, called nested designs, is essential. In this article a novel nested maximin Latin hypercube design is constructed based on successive local enumeration and a modified novel global harmony search algorithm. In the proposed nested designs, successive local enumeration is employed to select sample points for a low-fidelity model, whereas the modified novel global harmony search algorithm is employed to select sample points for a high-fidelity model. A comparative study with multiple criteria and an engineering application are employed to verify the efficiency of the proposed nested designs approach.
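    The maximin criterion at the heart of the construction above can be illustrated with a minimal stand-in: generate many random Latin hypercube designs and keep the one with the largest minimum pairwise distance. This sketch shows only the objective being optimized; it does not implement the article's successive-local-enumeration or harmony-search algorithms, and the design size is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(4)

def random_lhd(n, d):
    # centred Latin hypercube: one point per stratum in every dimension
    return (np.argsort(rng.random((d, n)), axis=1).T + 0.5) / n

def min_pairwise_dist(pts):
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    return dist[np.triu_indices(len(pts), k=1)].min()

best, best_score = None, -np.inf
for _ in range(200):                 # crude random search over candidate designs
    cand = random_lhd(9, 2)          # 9 points in 2 dimensions
    score = min_pairwise_dist(cand)
    if score > best_score:
        best, best_score = cand, score

print(round(float(best_score), 3))   # larger = more space-filling
```

    A nested design additionally requires the low-fidelity point set to contain the high-fidelity one, which is the constraint the article's two-stage construction addresses.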

  4. Cooled variable-area radial turbine technology program

    NASA Technical Reports Server (NTRS)

    Large, G. D.; Meyer, L. J.

    1982-01-01

    The objective of this study was a conceptual evaluation and design analyses of a cooled variable-area radial turbine capable of maintaining nearly constant high efficiency when operated at a constant speed and pressure ratio over a range of flows corresponding to 50- to 100-percent maximum engine power. The results showed that a 1589K (2400 F) turbine was feasible that would satisfy a 4000-hour duty cycle life goal. The final design feasibility is based on 1988 material technology goals. A peak aerodynamic stage total efficiency of 0.88 was predicted at 100 percent power. Two candidate stators were identified: an articulated trailing-edge and a locally movable sidewall. Both concepts must be experimentally evaluated to determine the optimum configuration. A follow-on test program is proposed for this evaluation.

  5. Multi-disciplinary optimization of aeroservoelastic systems

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay

    1990-01-01

    Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.

  6. Introduction of Transplant Registry Unified Management Program 2 (TRUMP2): scripts for TRUMP data analyses, part I (variables other than HLA-related data).

    PubMed

    Atsuta, Yoshiko

    2016-01-01

    Collection and analysis of information on diseases and post-transplant courses of allogeneic hematopoietic stem cell transplant recipients have played important roles in improving therapeutic outcomes in hematopoietic stem cell transplantation. Efficient, high-quality data collection systems are essential. The introduction of the Second-Generation Transplant Registry Unified Management Program (TRUMP2) is intended to improve data quality and enable more efficient data management. The TRUMP2 system will also expand possible uses of the data, as it is capable of building a more complex relational database. Building an accessible system that lets researchers make adequate use of the data would promote greater research activity. Study approval and management processes and authorship guidelines also need to be organized within this context. Quality control of the processes for data manipulation and analysis will also affect study outcomes. Shared scripts that define variables according to standard definitions have been introduced to support quality control and improve the efficiency of registry studies using TRUMP data.

  7. Multidisciplinary optimization of aeroservoelastic systems using reduced-size models

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay

    1992-01-01

    Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.

  8. Exploring the Unknown: Detection of Fast Variability of Starlight (Abstract)

    NASA Astrophysics Data System (ADS)

    Stanton, R. H.

    2017-12-01

    (Abstract only) In previous papers the author described a photometer designed for observing high-speed events such as lunar and asteroid occultations, and for searching for new varieties of fast stellar variability. A significant challenge presented by such a system is how one deals with the large quantity of data generated in order to process it efficiently and reveal any hidden information that might be present. This paper surveys some of the techniques used to achieve this goal.

  9. Study of blade aspect ratio on a compressor front stage aerodynamic and mechanical design report

    NASA Technical Reports Server (NTRS)

    Burger, G. D.; Lee, D.; Snow, D. W.

    1979-01-01

    A single stage compressor was designed with the intent of demonstrating that, for a tip speed and hub-tip ratio typical of an advanced core compressor front stage, the use of low aspect ratio can permit high levels of blade loading to be achieved at an acceptable level of efficiency. The design pressure ratio is 1.8 at an adiabatic efficiency of 88.5 percent. Both rotor and stator have multiple-circular-arc airfoil sections. Variable IGV and stator vanes permit low speed matching adjustments. The design incorporates an inlet duct representative of an engine transition duct between fan and high pressure compressor.

  10. General statistical considerations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eberhardt, L L; Gilbert, R O

    From NAEG plutonium environmental studies program meeting; Las Vegas, Nevada, USA (2 Oct 1973). The high sampling variability encountered in environmental plutonium studies, along with high analytical costs, makes it very important that efficient soil sampling plans be used. However, efficient sampling depends on explicit and simple statements of the objectives of the study. When there are multiple objectives it may be difficult to devise a wholly suitable sampling scheme. Sampling for long-term changes in plutonium concentration in soils may also be complex and expensive. Further attention to problems associated with compositing samples is recommended, as is the consistent use of random sampling as a basic technique. (auth)

  11. Developing Clade-Specific Microsatellite Markers: A Case Study in the Filamentous Fungal Genus Aspergillus

    USDA-ARS?s Scientific Manuscript database

    Microsatellite markers are highly variable and very commonly used in population genetics studies. However, microsatellite loci are typically poorly conserved and cannot be used in distant related species. Thus, development of clade-specific microsatellite markers would increase efficiency and allow ...

  12. Treatment of high ethanol concentration wastewater by biological sand filters: enhanced COD removal and bacterial community dynamics.

    PubMed

    Rodriguez-Caballero, A; Ramond, J-B; Welz, P J; Cowan, D A; Odlare, M; Burton, S G

    2012-10-30

    Winery wastewater is characterized by its high chemical oxygen demand (COD), seasonal occurrence and variable composition, including periodic high ethanol concentrations. In addition, winery wastewater may contain insufficient inorganic nutrients for optimal biodegradation of organic constituents. Two pilot-scale biological sand filters (BSFs) were used to treat artificial wastewater: the first was amended with ethanol and the second with ethanol, inorganic nitrogen (N) and phosphorus (P). A number of biochemical parameters involved in the removal of pollutants through BSF systems were monitored, including effluent chemistry and bacterial community structures. The nutrient supplemented BSF showed efficient COD, N and P removal. Comparison of the COD removal efficiencies of the two BSFs showed that N and P addition enhanced COD removal efficiency by up to 16%. Molecular fingerprinting of BSF sediment samples using denaturing gradient gel electrophoresis (DGGE) showed that amendment with high concentrations of ethanol destabilized the microbial community structure, but that nutrient supplementation countered this effect. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. Has competition increased hospital technical efficiency?

    PubMed

    Lee, Keon-Hyung; Park, Jungwon; Lim, Seunghoo; Park, Sang-Chul

    2015-01-01

    Hospital competition and managed care have affected the hospital industry in various ways, including technical efficiency. Hospital efficiency has become an important topic, and it is important to measure hospital efficiency properly in order to evaluate the impact of policies on the hospital industry. The primary independent variable is hospital competition. Using the 2001-2004 inpatient discharge data from Florida, we calculate the degree of hospital competition in Florida for 4 years. Hospital efficiency scores are developed using Data Envelopment Analysis with selected input and output variables from the American Hospital Association's Annual Survey of Hospitals for acute care general hospitals in Florida. Using the hospital efficiency score as a dependent variable, we analyze the effects of hospital competition on hospital efficiency from 2001 to 2004 and find that when a hospital was located in a less competitive market in 2003, its technical efficiency score was lower than those of hospitals in more competitive markets.
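    The abstract does not name its competition measure, but a common choice in this literature is the Herfindahl-Hirschman Index (HHI) over hospital market shares, where a lower HHI means a more competitive market. A minimal sketch with made-up discharge counts:

```python
# Hypothetical sketch: HHI = sum of squared market shares, ranging from
# near 0 (many small competitors) to 1 (monopoly). Volumes are invented.

def hhi(volumes):
    """Herfindahl-Hirschman Index from raw volumes (e.g. discharges)."""
    total = sum(volumes)
    return sum((v / total) ** 2 for v in volumes)

competitive = hhi([100, 90, 110, 95, 105])   # five similar-sized hospitals
concentrated = hhi([400, 50, 50])            # one dominant hospital

print(round(competitive, 3), round(concentrated, 3))  # 0.201 0.66
```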

  14. Experimental evaluation of a translating nozzle sidewall radial turbine

    NASA Technical Reports Server (NTRS)

    Roelke, Richard J.; Rogo, Casimir

    1987-01-01

    Studies have shown that reduced specific fuel consumption of rotorcraft engines can be achieved with a variable capacity engine. A key component in such an engine is a high-work, high-temperature variable geometry gas generator turbine. An optimization study indicated that a radial turbine with a translating nozzle sidewall could produce high efficiency over a wide range of engine flows, but substantiating data were not available. An experimental program with Teledyne CAE, Toledo, Ohio, was undertaken to evaluate the moving sidewall concept. A variety of translating nozzle sidewall turbine configurations were evaluated. The effects of nozzle leakage and coolant flows were also investigated. Testing was done in warm air (121 C). The results of the contractual program are summarized.

  15. A data envelope analysis to assess factors affecting technical and economic efficiency of individual broiler breeder hens.

    PubMed

    Romero, L F; Zuidhof, M J; Jeffrey, S R; Naeima, A; Renema, R A; Robinson, F E

    2010-08-01

    This study evaluated the effect of feed allocation and energetic efficiency on technical and economic efficiency of broiler breeder hens using the data envelope analysis methodology and quantified the effect of variables affecting technical efficiency. A total of 288 Ross 708 pullets were placed in individual cages at 16 wk of age and assigned to 1 of 4 feed allocation groups. Three of them had feed allocated on a group basis with divergent BW targets: standard, high (standard x 1.1), and low (standard x 0.9). The fourth group had feed allocated on an individual bird basis following the standard BW target. Birds were classified in 3 energetic efficiency categories: low, average, and high, based on estimated maintenance requirements. Technical efficiency considered saleable chicks as output and cumulative ME intake and time as inputs. Economic efficiency of feed allocation treatments was analyzed under different cost scenarios. Birds with low feed allocation exhibited a lower technical efficiency (69.4%) than standard (72.1%), which reflected a reduced egg production rate. Feed allocation of the high treatment could have been reduced by 10% with the same chick production as the standard treatment. The low treatment exhibited reduced economic efficiency at greater capital costs, whereas high had reduced economic efficiency at greater feed costs. The average energetic efficiency hens had a lower technical efficiency in the low compared with the standard feed allocation. A 1% increment in estimated maintenance requirement changed technical efficiency by -0.23%, whereas a 1% increment in ME intake had a -0.47% effect. The negative relationship between technical efficiency and ME intake was counterbalanced by a positive correlation of ME intake and egg production. The negative relationship of technical efficiency and maintenance requirements was synergized by a negative correlation of hen maintenance and egg production. 
Economic efficiency methodologies are effective tools to assess the economic effect of selection and flock management programs because biological, allocative, and economic factors can be independently analyzed.

  16. Investigation on the Inertance Tubes of Pulse Tube Cryocooler Without Reservoir

    NASA Astrophysics Data System (ADS)

    Liu, Y. J.; Yang, L. W.; Liang, J. T.; Hong, G. T.

    2010-04-01

    Phase angle is of vital importance for high-efficiency pulse tube cryocoolers (PTCs). The inertance tube, as the main phase shifter, enables PTCs to obtain an appropriate phase angle. Experiments on inertance tubes without a reservoir were carried out at varying frequencies, inertance-tube lengths and diameters, and pressure amplitudes. In addition, the authors used DeltaEC, a computer program for predicting the performance of low-amplitude thermoacoustic engines, to simulate the behavior of inertance tubes without a reservoir. Comparison of the experiments with the theoretical simulations shows that the DeltaEC method is feasible and effective for directing and improving the design of inertance tubes.
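    As a rough illustration of why tube geometry and frequency set the phase angle, a short inertance tube can be approximated by a lumped series acoustic resistance R and inertance L, so the impedance phase is atan(ωL/R). This is a textbook acoustics approximation, not DeltaEC's distributed model, and the gas properties and dimensions below are invented (air rather than helium, for simplicity):

```python
import numpy as np

# Lumped-element sketch (standard acoustics; illustrative, invented values).
rho = 1.2          # kg/m^3 gas density (air; a real PTC would use helium)
mu = 1.8e-5        # Pa*s dynamic viscosity
length = 1.5       # m   tube length
radius = 1.5e-3    # m   tube radius
area = np.pi * radius**2

L = rho * length / area                       # acoustic inertance, kg/m^4
R = 8 * mu * length / (np.pi * radius**4)     # Poiseuille acoustic resistance

phases = []
for f in (30.0, 50.0, 70.0):                  # drive frequencies, Hz
    omega = 2 * np.pi * f
    phases.append(np.degrees(np.arctan2(omega * L, R)))
print([round(p, 1) for p in phases])          # phase grows with frequency
```

    The same qualitative trend, a more inertive (larger) phase angle at higher frequency and for longer, narrower tubes, is what the reported experiments vary the tube length, diameter, and frequency to explore.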

  17. Intraspecific Variation in Wood Anatomical, Hydraulic, and Foliar Traits in Ten European Beech Provenances Differing in Growth Yield

    PubMed Central

    Hajek, Peter; Kurjak, Daniel; von Wühlisch, Georg; Delzon, Sylvain; Schuldt, Bernhard

    2016-01-01

    In angiosperms, many studies have described the inter-specific variability of hydraulic-related traits, but little is known at the intra-specific level. This information is, however, necessary to assess the adaptive capacities of tree populations in the context of increasing drought frequency and severity. Ten 20-year-old European beech (Fagus sylvatica L.) provenances representing the entire distribution range throughout Europe and differing significantly in aboveground biomass increment (ABI) by a factor of up to four were investigated for branch wood anatomical, hydraulic, and foliar traits in a provenance trial located in Northern Europe. We quantified to what extent xylem hydraulic and leaf traits are under genetic control and tested whether the xylem hydraulic properties (hydraulic efficiency and safety) trade off with yield and with wood anatomical and leaf traits. Our results showed that only three out of 22 investigated ecophysiological traits showed significant genetic differentiation between provenances, namely vessel density (VD), the xylem pressure causing 88% loss of hydraulic conductance, and mean leaf size. Depending on the ecophysiological trait measured, genetic differentiation between populations explained 0–14% of total phenotypic variation, while intra-population variability was higher than inter-population variability. Most wood anatomical traits and some foliar traits were additionally related to the climate of provenance origin. The lumen-to-sapwood area ratio, vessel diameter, theoretical specific conductivity, and theoretical leaf-specific conductivity as well as the C:N ratio increased with climatic aridity at the place of origin, while the carbon isotope signature (δ13C) decreased. Contrary to our assumption, none of the wood anatomical traits were related to embolism resistance, but they were strong determinants of hydraulic efficiency. 
Although ABI was associated with both VD and δ13C, hydraulic efficiency and embolism resistance were unrelated to each other, disproving the assumed trade-off between hydraulic efficiency and safety. European beech seems to compensate for increasing water stress with growing size mainly by adjusting vessel number rather than vessel diameter. In conclusion, European beech has a high potential capacity to cope with climate change due to its high degree of intra-population genetic variability. PMID:27379112

  18. Intraspecific Variation in Wood Anatomical, Hydraulic, and Foliar Traits in Ten European Beech Provenances Differing in Growth Yield.

    PubMed

    Hajek, Peter; Kurjak, Daniel; von Wühlisch, Georg; Delzon, Sylvain; Schuldt, Bernhard

    2016-01-01

    In angiosperms, many studies have described the inter-specific variability of hydraulic-related traits, but little is known at the intra-specific level. This information is, however, necessary to assess the adaptive capacities of tree populations in the context of increasing drought frequency and severity. Ten 20-year-old European beech (Fagus sylvatica L.) provenances representing the entire distribution range throughout Europe and differing significantly in aboveground biomass increment (ABI) by a factor of up to four were investigated for branch wood anatomical, hydraulic, and foliar traits in a provenance trial located in Northern Europe. We quantified to what extent xylem hydraulic and leaf traits are under genetic control and tested whether the xylem hydraulic properties (hydraulic efficiency and safety) trade off with yield and with wood anatomical and leaf traits. Our results showed that only three out of 22 investigated ecophysiological traits showed significant genetic differentiation between provenances, namely vessel density (VD), the xylem pressure causing 88% loss of hydraulic conductance, and mean leaf size. Depending on the ecophysiological trait measured, genetic differentiation between populations explained 0-14% of total phenotypic variation, while intra-population variability was higher than inter-population variability. Most wood anatomical traits and some foliar traits were additionally related to the climate of provenance origin. The lumen-to-sapwood area ratio, vessel diameter, theoretical specific conductivity, and theoretical leaf-specific conductivity as well as the C:N ratio increased with climatic aridity at the place of origin, while the carbon isotope signature (δ(13)C) decreased. Contrary to our assumption, none of the wood anatomical traits were related to embolism resistance, but they were strong determinants of hydraulic efficiency. 
Although ABI was associated with both VD and δ(13)C, hydraulic efficiency and embolism resistance were unrelated to each other, disproving the assumed trade-off between hydraulic efficiency and safety. European beech seems to compensate for increasing water stress with growing size mainly by adjusting vessel number rather than vessel diameter. In conclusion, European beech has a high potential capacity to cope with climate change due to its high degree of intra-population genetic variability.

  19. Optimization on the impeller of a low-specific-speed centrifugal pump for hydraulic performance improvement

    NASA Astrophysics Data System (ADS)

    Pei, Ji; Wang, Wenjie; Yuan, Shouqi; Zhang, Jinfeng

    2016-09-01

    In order to widen the high-efficiency operating range of a low-specific-speed centrifugal pump, an optimization process considering efficiencies under 1.0Qd and 1.4Qd is proposed. Three parameters, namely the blade outlet width b2, blade outlet angle β2, and blade wrap angle φ, are selected as design variables. Impellers are generated using the optimal Latin hypercube sampling method. The pump efficiencies are calculated using the software CFX 14.5 at the two operating points selected as objectives. Surrogate models are also constructed to analyze the relationship between the objectives and the design variables. Finally, the particle swarm optimization algorithm is applied to the surrogate model to determine the best combination of the impeller parameters. The results show that the performance curve predicted by numerical simulation is in good agreement with the experimental results. Compared with the efficiencies of the original impeller, the hydraulic efficiencies of the optimized impeller are increased by 4.18% and 0.62% under 1.0Qd and 1.4Qd, respectively. A comparison of the inner flow between the original and optimized pumps illustrates the performance improvement. The optimization process can provide a useful reference for performance improvement of other pumps, and even for reduction of pressure fluctuations.
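    The pipeline in this abstract, Latin hypercube sampling of the design variables, a surrogate fitted to the sampled objective values, then particle swarm optimization of the surrogate, can be sketched end to end. The objective below is an invented smooth function standing in for the CFX evaluations, and the surrogate is a plain quadratic least-squares fit rather than whatever model the authors used:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-in for the expensive CFD objective: a smooth function of
# three normalized design variables (b2, beta2, phi) in [0, 1]^3.
def pseudo_efficiency(x):
    return -((x[..., 0] - 0.6)**2 + (x[..., 1] - 0.4)**2 + (x[..., 2] - 0.5)**2)

# 1) Latin hypercube sampling: one point per stratum in each dimension.
def lhs(n, dim):
    strata = np.stack([rng.permutation(n) for _ in range(dim)], axis=1)
    return (rng.random((n, dim)) + strata) / n

X = lhs(40, 3)
y = pseudo_efficiency(X)

# 2) Quadratic surrogate fitted by least squares (squares + interactions).
def features(P):
    x1, x2, x3 = P[:, 0], P[:, 1], P[:, 2]
    return np.column_stack([np.ones(len(P)), x1, x2, x3,
                            x1*x1, x2*x2, x3*x3, x1*x2, x1*x3, x2*x3])

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)
surrogate = lambda P: features(P) @ coef

# 3) Particle swarm optimization of the surrogate (maximization).
def pso(f, dim=3, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    pos = rng.random((n, dim))
    vel = np.zeros((n, dim))
    pbest, pval = pos.copy(), f(pos)
    g = pbest[np.argmax(pval)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        vel = w*vel + c1*r1*(pbest - pos) + c2*r2*(g - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)
        val = f(pos)
        better = val > pval
        pbest[better], pval[better] = pos[better], val[better]
        g = pbest[np.argmax(pval)].copy()
    return g

best = pso(surrogate)
print(np.round(best, 3))  # should land near the true optimum (0.6, 0.4, 0.5)
```

    Because the stand-in objective is itself quadratic, the surrogate fit is essentially exact here; with real CFD data the surrogate error, not the optimizer, is usually the limiting factor.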

  20. Electromagnetic machines with Nd-Fe-B magnets

    NASA Astrophysics Data System (ADS)

    Hanitsch, Rolf

    1989-08-01

    Permanent magnet motors are now becoming more accepted for general use in industrial fixed and variable speed drives. With the application of high-energy permanent magnets, such as Nd-Fe-B, the new motors offer higher efficiency and reduced size and weight compared with wound field energy converters of the same rating.

  1. Efficient multi-objective calibration of a computationally intensive hydrologic model with parallel computing software in Python

    USDA-ARS?s Scientific Manuscript database

    With enhanced data availability, distributed watershed models for large areas with high spatial and temporal resolution are increasingly used to understand water budgets and examine effects of human activities and climate change/variability on water resources. Developing parallel computing software...

  2. Predictive GT-Power Simulation for VNT Matching on a 1.6 L Turbocharged GDI Engine

    EPA Science Inventory

    The thermal efficiency benefits of low-pressure (LP) exhaust gas recirculation (EGR) in spark-ignition engine combustion are well known. One of the greatest barriers facing adoption of LP-EGR for high power-density applications is the challenge of boosting. Variable nozzle turbin...

  3. Relationship between glycaemic variability and hyperglycaemic clamp-derived functional variables in (impending) type 1 diabetes.

    PubMed

    Van Dalem, Annelien; Demeester, Simke; Balti, Eric V; Decochez, Katelijn; Weets, Ilse; Vandemeulebroucke, Evy; Van de Velde, Ursule; Walgraeve, An; Seret, Nicole; De Block, Christophe; Ruige, Johannes; Gillard, Pieter; Keymeulen, Bart; Pipeleers, Daniel G; Gorus, Frans K

    2015-12-01

    We examined whether measures of glycaemic variability (GV), assessed by continuous glucose monitoring (CGM) and self-monitoring of blood glucose (SMBG), can complement or replace measures of beta cell function and insulin action in detecting the progression of preclinical disease to type 1 diabetes. Twenty-two autoantibody-positive (autoAb(+)) first-degree relatives (FDRs) of patients with type 1 diabetes who were themselves at high 5-year risk (50%) for type 1 diabetes underwent CGM, a hyperglycaemic clamp test and OGTT, and were followed for up to 31 months. Clamp variables were used to estimate beta cell function (first-phase [AUC5-10 min] and second-phase [AUC120-150 min] C-peptide release) combined with insulin resistance (glucose disposal rate; M 120-150 min). Age-matched healthy volunteers (n = 20) and individuals with recent-onset type 1 diabetes (n = 9) served as control groups. In autoAb(+) FDRs, M 120-150 min below the 10th percentile (P10) of controls achieved 86% diagnostic efficiency in discriminating between normoglycaemic FDRs and individuals with (impending) dysglycaemia. M 120-150 min outperformed AUC5-10 min and AUC120-150 min C-peptide below P10 of controls, which were only 59-68% effective. Among GV variables, CGM above the reference range was better at detecting (impending) dysglycaemia than elevated SMBG (77-82% vs 73% efficiency). Combined CGM measures were equally efficient as M 120-150 min (86%). Daytime GV variables were inversely correlated with clamp variables, and more strongly with M 120-150 min than with AUC5-10 min or AUC120-150 min C-peptide. CGM-derived GV and the glucose disposal rate, reflecting both insulin secretion and action, outperformed SMBG and first- or second-phase AUC C-peptide in identifying FDRs with (impending) dysglycaemia or diabetes. Our results indicate the feasibility of developing minimally invasive CGM-based criteria for close metabolic monitoring and as outcome measures in trials.
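    The "diagnostic efficiency" figures above are read here as overall accuracy of a cutoff placed at the 10th percentile (P10) of healthy controls, flagging subjects who fall below it. A minimal sketch with synthetic numbers (these are not the study's data, and group sizes and distributions are invented):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic groups (invented means/SDs) for a clamp-style variable such as
# the glucose disposal rate M, which is lower in impaired subjects.
controls = rng.normal(10.0, 1.5, 20)
progressors = rng.normal(6.5, 1.5, 11)       # (impending) dysglycaemia
non_progressors = rng.normal(9.8, 1.5, 11)

# Cutoff at the 10th percentile (P10) of the healthy control group.
cutoff = np.percentile(controls, 10)

# Diagnostic efficiency = (true positives + true negatives) / all subjects.
tp = np.sum(progressors < cutoff)
tn = np.sum(non_progressors >= cutoff)
efficiency = (tp + tn) / (len(progressors) + len(non_progressors))
print(round(float(efficiency), 2))
```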

  4. Measuring the efficiencies of visiting nurse service agencies using data envelopment analysis.

    PubMed

    Kuwahara, Yuki; Nagata, Satoko; Taguchi, Atsuko; Naruse, Takashi; Kawaguchi, Hiroyuki; Murashima, Sachiyo

    2013-09-01

    This study develops a measure of the efficiency of visiting nurse (VN) agencies in Japan, examines issues related to the measurement of efficiency, and identifies the characteristics that influence efficiency. We employed a data envelopment analysis to measure the efficiency of 108 VN agencies, using the numbers of 5 types of staff as the input variables and the numbers of 3 types of visits as the output variables. The median efficiency scores of the VN agencies were found to be 0.80 and 1.00 according to the constant returns to scale (CRS) and variable returns to scale (VRS) models, respectively, and the median scale efficiency score was 0.95. This study supports using both the CRS and VRS models to measure the scale efficiency of VN service agencies. We also found that relatively efficient VN agencies filled at least 30% of staff positions with experienced workers, and so concluded that this characteristic has a direct influence on the length of visits.
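    The CRS/VRS machinery behind these scores is a small linear program per agency. A hedged sketch of input-oriented DEA with made-up data (two inputs, one output, four units, nothing from the actual survey): the VRS variant adds the convexity constraint sum(λ) = 1, and scale efficiency is the CRS score divided by the VRS score.

```python
import numpy as np
from scipy.optimize import linprog

# Invented data: X[j] = inputs (e.g. staff counts), Y[j] = outputs (visits).
X = np.array([[5.0, 2.0], [8.0, 3.0], [6.0, 6.0], [10.0, 4.0]])
Y = np.array([[60.0], [80.0], [90.0], [85.0]])

def dea_score(o, vrs=False):
    """Input-oriented efficiency of unit o: minimize theta such that some
    lambda-weighted composite of all units uses <= theta * inputs_o while
    producing >= outputs_o. vrs=True adds sum(lambda) = 1 (BCC model)."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                    # variables: [theta, lambda]
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])   # X^T.lam - theta*x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])    # -Y^T.lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[o]]
    A_eq = np.hstack([[[0.0]], np.ones((1, n))]) if vrs else None
    b_eq = [1.0] if vrs else None
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

crs = [dea_score(o) for o in range(len(X))]
vrs = [dea_score(o, vrs=True) for o in range(len(X))]
scale = [c / v for c, v in zip(crs, vrs)]          # scale efficiency = CRS/VRS
print([round(v, 3) for v in crs])
```

    Every CRS score is at most the corresponding VRS score, so scale efficiency never exceeds 1, matching the reported medians (0.80 CRS, 1.00 VRS, 0.95 scale).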

  5. Enhancement of dissolution and oral bioavailability of lacidipine via pluronic P123/F127 mixed polymeric micelles: formulation, optimization using central composite design and in vivo bioavailability study.

    PubMed

    Fares, Ahmed R; ElMeshad, Aliaa N; Kassem, Mohamed A A

    2018-11-01

    This study aims at preparing and optimizing lacidipine (LCDP) polymeric micelles using thin film hydration technique in order to overcome LCDP solubility-limited oral bioavailability. A two-factor three-level central composite face-centered design (CCFD) was employed to optimize the formulation variables to obtain LCDP polymeric micelles of high entrapment efficiency and small and uniform particle size (PS). Formulation variables were: Pluronic to drug ratio (A) and Pluronic P123 percentage (B). LCDP polymeric micelles were assessed for entrapment efficiency (EE%), PS and polydispersity index (PDI). The formula with the highest desirability (0.959) was chosen as the optimized formula. The values of the formulation variables (A and B) in the optimized polymeric micelles formula were 45% and 80%, respectively. Optimum LCDP polymeric micelles had entrapment efficiency of 99.23%, PS of 21.08 nm and PDI of 0.11. Optimum LCDP polymeric micelles formula was physically characterized using transmission electron microscopy. LCDP polymeric micelles showed saturation solubility approximately 450 times that of raw LCDP in addition to significantly enhanced dissolution rate. Bioavailability study of optimum LCDP polymeric micelles formula in rabbits revealed a 6.85-fold increase in LCDP bioavailability compared to LCDP oral suspension.

  6. Leaf photoacclimatory responses of the tropical seagrass Thalassia testudinum under mesocosm conditions: a mechanistic scaling-up study.

    PubMed

    Cayabyab, Napo M; Enríquez, Susana

    2007-01-01

    Here, the leaf photoacclimatory plasticity and efficiency of the tropical seagrass Thalassia testudinum were examined. Mesocosms were used to compare the variability induced by three light conditions, two leaf sections and the variability observed at the collection site. The study revealed an efficient photosynthetic light use at low irradiances, but limited photoacclimatory plasticity to increase maximum photosynthetic rates (P(max)) and saturation (E(k)) and compensation (E(c)) irradiances under high light irradiance. A strong, positive and linear association between the percentage of daylight hours above saturation and the relative maximum photochemical efficiency (F(V)/F(M)) reduction observed between basal and apical leaf sections was also found. The results indicate that T. testudinum leaves have a shade-adapted physiology. However, the large amount of heterotrophic biomass that this seagrass maintains may considerably increase plant respiratory demands and their minimum quantum requirements for growth (MQR). Although the MQR still needs to be quantified, it is hypothesized that the ecological success of this climax species in the oligotrophic and highly illuminated waters of the Caribbean may rely on the ability of the canopy to regulate the optimal leaf light environment and the morphological plasticity of the whole plant to enhance total leaf area and to reduce carbon respiratory losses.

  7. Sometimes processes don't matter: the general effect of short term climate variability on erosional systems.

    NASA Astrophysics Data System (ADS)

    Deal, Eric; Braun, Jean

    2017-04-01

    Climatic forcing undoubtedly plays an important role in shaping the Earth's surface. However, precisely how climate affects erosion rates, landscape morphology and the sedimentary record is highly debated. Recently there has been a focus on the influence of short-term variability in rainfall and river discharge on the relationship between climate and erosion rates. Here, we present a simple probabilistic argument, backed by modelling, that demonstrates that the way the Earth's surface responds to short-term climatic forcing variability is primarily determined by the existence and magnitude of erosional thresholds. We find that it is the ratio between the threshold magnitude and the mean magnitude of climatic forcing that determines whether variability matters or not, and in which way. This is a fundamental result that applies regardless of the nature of the erosional process. This means, for example, that we can understand the role that discharge variability plays in determining fluvial erosion efficiency despite doubts about the processes involved in fluvial erosion. We can use this finding to reproduce the main conclusions of previous studies on the role of discharge variability in determining long-term fluvial erosion efficiency. Many aspects of the landscape known to influence discharge variability are affected by human activity, such as land use and river damming. Another important control on discharge variability, rainfall intensity, is also expected to increase with warmer temperatures. Among many other implications, our findings help provide a general framework for understanding and predicting the response of the Earth's surface to changes in the mean and variability of rainfall and river discharge associated with anthropogenic activity. In addition, the process-independent nature of our findings suggests that previous work on river discharge variability and erosion thresholds can be applied to other erosional systems.
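    The probabilistic argument can be made concrete with a Monte Carlo sketch. Long-term erosion is the expectation of an above-threshold erosion law E ∝ max(Q - Qc, 0) over the discharge distribution; the gamma distribution, erosion law, and all parameter values below are illustrative choices, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(42)

def mean_erosion(mean_q, cv, qc, n=200_000):
    """Monte Carlo mean of max(Q - Qc, 0) for gamma-distributed discharge Q
    with the given mean and coefficient of variation cv (gamma: cv^2 = 1/shape)."""
    shape = 1.0 / cv**2
    q = rng.gamma(shape, mean_q / shape, n)
    return float(np.mean(np.maximum(q - qc, 0.0)))

# Same mean discharge, different variability, threshold well above the mean.
low_var = mean_erosion(1.0, cv=0.3, qc=2.0)
high_var = mean_erosion(1.0, cv=1.5, qc=2.0)

# With a high threshold, only rare large events erode, so the more variable
# regime erodes far more. With no threshold (qc = 0) the law is linear in Q
# and both regimes give the same mean erosion: variability stops mattering.
print(high_var > low_var)
```

    This reproduces the abstract's key ratio: when the threshold is small relative to the mean forcing, variability is irrelevant; when it is large, variability dominates, regardless of the detailed erosion process.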

  8. A Study of Effects of MultiCollinearity in the Multivariable Analysis

    PubMed Central

    Yoo, Wonsuk; Mayberry, Robert; Bae, Sejong; Singh, Karan; (Peter) He, Qinghua; Lillard, James W.

    2015-01-01

    A multivariable analysis is the most popular approach when investigating associations between risk factors and disease. However, the efficiency of a multivariable analysis depends highly on the correlation structure among the predictive variables. When the covariates in the model are not independent of one another, collinearity/multicollinearity problems arise in the analysis, leading to biased estimation. This work performs a simulation study with various scenarios of different collinearity structures to investigate the effects of collinearity under various correlation structures amongst predictive and explanatory variables, and compares these results with existing guidelines for deciding when collinearity is harmful. Three correlation scenarios among predictor variables are considered: (1) a bivariate collinear structure, as the simplest collinearity case; (2) a multivariate collinear structure, where an explanatory variable is correlated with two other covariates; (3) a more realistic scenario, in which an independent variable can be expressed by various functions involving the other variables. PMID:25664257

  9. A Study of Effects of MultiCollinearity in the Multivariable Analysis.

    PubMed

    Yoo, Wonsuk; Mayberry, Robert; Bae, Sejong; Singh, Karan; Peter He, Qinghua; Lillard, James W

    2014-10-01

    A multivariable analysis is the most popular approach when investigating associations between risk factors and disease. However, the efficiency of a multivariable analysis depends highly on the correlation structure among the predictive variables. When the covariates in the model are not independent of one another, collinearity/multicollinearity problems arise in the analysis, leading to biased estimation. This work performs a simulation study with various scenarios of different collinearity structures to investigate the effects of collinearity under various correlation structures amongst predictive and explanatory variables, and compares these results with existing guidelines for deciding when collinearity is harmful. Three correlation scenarios among predictor variables are considered: (1) a bivariate collinear structure, as the simplest collinearity case; (2) a multivariate collinear structure, where an explanatory variable is correlated with two other covariates; (3) a more realistic scenario, in which an independent variable can be expressed by various functions involving the other variables.
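    Scenario (2) above, one covariate built from two others, is easy to simulate, and the standard diagnostic for "harmful" collinearity is the variance inflation factor (VIF), with VIF > 10 a common rule of thumb. A sketch with invented coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Scenario (2): x3 is a noisy linear combination of x1 and x2, so the three
# covariates are multicollinear. Coefficients are arbitrary illustrations.
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
x3 = 0.9 * x1 + 0.9 * x2 + 0.1 * rng.normal(size=n)
X_collinear = np.column_stack([x1, x2, x3])
X_independent = rng.normal(size=(n, 3))        # control: independent predictors

def vif(X, j):
    """Variance inflation factor of column j: 1 / (1 - R^2), where R^2 comes
    from regressing column j on the remaining columns (with an intercept)."""
    y = X[:, j]
    A = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    r2 = 1.0 - (y - A @ beta).var() / y.var()
    return 1.0 / (1.0 - r2)

print(round(vif(X_collinear, 2), 1), round(vif(X_independent, 2), 2))
```

    The collinear column's VIF is far above the usual cutoff, while the independent control stays near 1, which is exactly the kind of contrast the simulation scenarios are designed to probe.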

  10. Integrating environmental gap analysis with spatial conservation prioritization: a case study from Victoria, Australia.

    PubMed

    Sharafi, Seyedeh Mahdieh; Moilanen, Atte; White, Matt; Burgman, Mark

    2012-12-15

    Gap analysis is used to analyse reserve networks and their coverage of biodiversity, thus identifying gaps in biodiversity representation that may be filled by additional conservation measures. Gap analysis has been used to identify priorities for species and habitat types. When it is applied to identify gaps in the coverage of environmental variables, it embodies the assumption that combinations of environmental variables are effective surrogates for biodiversity attributes. The question remains of how to fill gaps in conservation systems efficiently. Conservation prioritization software can identify those areas outside existing conservation areas that contribute to the efficient covering of gaps in biodiversity features. We show how environmental gap analysis can be implemented using high-resolution information about environmental variables and ecosystem condition with the publicly available conservation prioritization software, Zonation. Our method is based on the conversion of combinations of environmental variables into biodiversity features. We also replicated the analysis by using Species Distribution Models (SDMs) as biodiversity features to evaluate the robustness and utility of our environment-based analysis. We apply the technique to a planning case study of the state of Victoria, Australia. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. THE INFLUENCE OF VARIABLE TEMPERATURE AND HUMIDITY ON THE PREDATION EFFICIENCY OF P. PERSIMILIS, N. CALIFORNICUS AND N. FALLACIS.

    PubMed

    Audenaert, J; Vangansbeke, D; Verhoeven, R; De Clercq, P; Tirry, L; Gobin, B

    2014-01-01

    Predatory mites like Phytoseiulus persimilis Athias-Henriot, Neoseiulus californicus McGregor and N. fallacis (Garman) (Acari: Phytoseiidae) are essential in sustainable control strategies of the two-spotted spider mite Tetranychus urticae Koch (Acari: Tetranychidae) in warm greenhouse cultures to complement the limited available pesticides and to tackle emerging resistance. However, in response to high energy prices, greenhouse plant breeders have recently changed their greenhouse steering strategies, allowing more variation in temperature and humidity. The impact of these variations on biological control agents is poorly understood. Therefore, we constructed functional response models to demonstrate the impact of realistic climate variations on predation efficiency. First, two temperature regimes were compared at constant humidity (70%) and photoperiod (16L:8D): DIF0 (constant temperature) and DIF15 (variable temperature with a day-night difference of 15°C). At a mean temperature of 25°C, DIF15 had a negative influence on the predation efficiency of P. persimilis and N. californicus, as compared to DIF0. At a low mean temperature of 15°C, however, DIF15 showed a higher predation efficiency for P. persimilis and N. californicus. For N. fallacis no difference was observed at either 15°C or 25°C. Secondly, two humidity regimes were compared, at a mean temperature of 25°C (DIF0) and constant photoperiod (16L:8D): RHCTE (constant 70% humidity) and RHALT (alternating 40%L:70%D humidity). For P. persimilis and N. fallacis, RHCTE resulted in a higher predation efficiency than RHALT; for N. californicus the effect was the opposite. This shows that N. californicus is better adapted to dry climates than the other predatory mites. We conclude that variable greenhouse climates clearly affect the predation efficiency of P. persimilis, N. californicus and N. fallacis.
To obtain optimal control efficiency, the choice of predatory mites (including dose and application frequency) should be adapted to the actual greenhouse climate.
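
    A functional response model of the kind fitted here can be sketched with the classic Holling type II (disc) equation; the attack rate and handling time below are invented illustrations, not fitted values from this study:

```python
# Hedged sketch: Holling type II functional response, the standard model
# family for relating prey density to the number of prey attacked per predator.
# Parameters a (attack rate) and Th (handling time) are hypothetical.
def holling_type_ii(N, a, Th, T=1.0):
    """Prey eaten per predator over time T at prey density N."""
    return a * N * T / (1.0 + a * Th * N)

eaten = holling_type_ii(N=20, a=0.8, Th=0.05)
```

    Climate treatments such as DIF15 or RHALT would show up as shifts in the fitted a and Th, and hence in predation efficiency.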

  12. Simultaneous Optimization of Multiple Response Variables for the Gelatin-chitosan Microcapsules Containing Angelica Essential Oil.

    PubMed

    Li, Qiang; Sun, Li-Jian; Gong, Xian-Feng; Wang, Yang; Zhao, Xue-Ling

    2017-01-01

    Angelica essential oil (AO), a major pharmacologically active component of Angelica sinensis (Oliv.) Diels, possesses hemogenetic, analgesic, and sedative activities. The application of AO in pharmaceutical systems has been limited by its low oxidative stability. AO-loaded gelatin-chitosan microcapsules that protect the oil from oxidation were developed and optimized using response surface methodology. The effects of formulation variables (pH at complex coacervation, gelatin concentration, and core/wall ratio) on multiple response variables (yield, encapsulation efficiency, antioxidation rate, percent of drug released in 1 h, and time to 85% drug release) were systematically investigated. A desirability function that combined these five response variables was constructed. All response variables investigated were found to be highly dependent on the formulation variables, with strong interactions observed between the formulation variables. The optimum overall desirability of the AO microcapsules was obtained at pH 6.20, gelatin concentration 25.00%, and core/wall ratio 40.40%. The experimental values of the response variables agreed closely with the predicted values. The antioxidation rate of the optimum formulation was approximately 8 times higher than that of AO. The in vitro drug release from the microcapsules followed the Higuchi model with a super case-II transport mechanism.
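
    The "desirability function that combined these five response variables" step can be sketched as follows; the linear larger-is-better transform, the response values, and the acceptable ranges are illustrative assumptions, not the paper's fitted data:

```python
import math

# Hedged sketch of a Derringer-type desirability analysis: map each response
# onto [0, 1] and combine with a geometric mean. All numbers are hypothetical.
def d_larger_is_better(y, lo, hi):
    """0 at/below lo, 1 at/above hi, linear in between."""
    if y <= lo:
        return 0.0
    if y >= hi:
        return 1.0
    return (y - lo) / (hi - lo)

def overall_desirability(ds):
    """Geometric mean of individual desirabilities (0 if any response fails)."""
    if any(d == 0 for d in ds):
        return 0.0
    return math.exp(sum(math.log(d) for d in ds) / len(ds))

# e.g. yield, encapsulation efficiency, antioxidation rate (%-scaled, invented)
ds = [d_larger_is_better(y, lo, hi)
      for y, (lo, hi) in zip([72.0, 85.0, 60.0],
                             [(50, 90), (60, 95), (40, 80)])]
D = overall_desirability(ds)
```

    The optimizer then searches the formulation variables (pH, gelatin concentration, core/wall ratio) for the settings that maximize D.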

  13. Effects of visual feedback-induced variability on motor learning of handrim wheelchair propulsion.

    PubMed

    Leving, Marika T; Vegter, Riemer J K; Hartog, Johanneke; Lamoth, Claudine J C; de Groot, Sonja; van der Woude, Lucas H V

    2015-01-01

    It has been suggested that a higher intra-individual variability benefits the motor learning of wheelchair propulsion. The present study evaluated whether feedback-induced variability on wheelchair propulsion technique variables would also enhance the motor learning process. Learning was operationalized as an improvement in mechanical efficiency and propulsion technique, which are thought to be closely related during the learning process. Seventeen participants received visual feedback-based practice (feedback group) and 15 participants received regular practice (natural learning group). Both groups received an equal practice dose of 80 min, over 3 weeks, at 0.24 W/kg at a treadmill speed of 1.11 m/s. To compare the two groups, the pre- and post-tests were performed without feedback. The feedback group received real-time visual feedback on seven propulsion variables with the instruction to manipulate the presented variable to achieve the highest possible variability (1st 4-min block) and to optimize it in the prescribed direction (2nd 4-min block). To increase motor exploration the participants were unaware of the exact variable they received feedback on. Energy consumption and the propulsion technique variables with their respective coefficients of variation were calculated to evaluate the amount of intra-individual variability. The feedback group, which practiced with higher intra-individual variability, improved the propulsion technique between pre- and post-test to the same extent as the natural learning group. Mechanical efficiency improved between pre- and post-test in the natural learning group but remained unchanged in the feedback group. These results suggest that feedback-induced variability inhibited the improvement in mechanical efficiency.
Moreover, since both groups improved propulsion technique but only the natural learning group improved mechanical efficiency, it can be concluded that the improvement in mechanical efficiency and propulsion technique do not always appear simultaneously during the motor learning process. Their relationship is most likely modified by other factors such as the amount of the intra-individual variability.
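
    The intra-individual variability measure used here — the coefficient of variation of each propulsion-technique variable — can be sketched as follows; the push-time values are invented stand-ins:

```python
import statistics

# Hedged sketch: coefficient of variation (CV = sample SD / mean), the
# per-variable measure of intra-individual variability in the abstract.
push_times = [0.42, 0.45, 0.39, 0.47, 0.41, 0.44]  # seconds, hypothetical

mean = statistics.mean(push_times)
sd = statistics.stdev(push_times)      # sample standard deviation
cv_percent = 100.0 * sd / mean
```

    A higher CV across cycles indicates the more exploratory, variable technique that the feedback group was pushed toward.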

  14. Effects of Visual Feedback-Induced Variability on Motor Learning of Handrim Wheelchair Propulsion

    PubMed Central

    Leving, Marika T.; Vegter, Riemer J. K.; Hartog, Johanneke; Lamoth, Claudine J. C.; de Groot, Sonja; van der Woude, Lucas H. V.

    2015-01-01

    Background It has been suggested that a higher intra-individual variability benefits the motor learning of wheelchair propulsion. The present study evaluated whether feedback-induced variability on wheelchair propulsion technique variables would also enhance the motor learning process. Learning was operationalized as an improvement in mechanical efficiency and propulsion technique, which are thought to be closely related during the learning process. Methods 17 Participants received visual feedback-based practice (feedback group) and 15 participants received regular practice (natural learning group). Both groups received equal practice dose of 80 min, over 3 weeks, at 0.24 W/kg at a treadmill speed of 1.11 m/s. To compare both groups the pre- and post-test were performed without feedback. The feedback group received real-time visual feedback on seven propulsion variables with instruction to manipulate the presented variable to achieve the highest possible variability (1st 4-min block) and optimize it in the prescribed direction (2nd 4-min block). To increase motor exploration the participants were unaware of the exact variable they received feedback on. Energy consumption and the propulsion technique variables with their respective coefficient of variation were calculated to evaluate the amount of intra-individual variability. Results The feedback group, which practiced with higher intra-individual variability, improved the propulsion technique between pre- and post-test to the same extent as the natural learning group. Mechanical efficiency improved between pre- and post-test in the natural learning group but remained unchanged in the feedback group. Conclusion These results suggest that feedback-induced variability inhibited the improvement in mechanical efficiency. 
Moreover, since both groups improved propulsion technique but only the natural learning group improved mechanical efficiency, it can be concluded that the improvement in mechanical efficiency and propulsion technique do not always appear simultaneously during the motor learning process. Their relationship is most likely modified by other factors such as the amount of the intra-individual variability. PMID:25992626

  15. Spatial variations in drainage efficiency in a boreal wetland environment as a function of lidar and radar-derived deviations from the regional hydraulic gradient

    NASA Astrophysics Data System (ADS)

    Hopkinson, C.; Brisco, B.; Chasmer, L.; Devito, K.; Montgomery, J. S.; Patterson, S.; Petrone, R. M.

    2017-12-01

    The dense forest cover of the Western Boreal Plains of northern Alberta is underlain by a mix of glacial moraines, sandy outwash sediments and clay plains possessing spatially variable hydraulic conductivities. The region is also characterised by a large number of post-glacial surface depression wetlands that have seasonally and topographically limited surface connectivity. Consequently, drainage along shallow regional hydraulic gradients may be dominated either by variations in surface geology or by local variations in evapotranspiration (ET). Long-term government lake level monitoring is sparse in this region, but over a decade of hydrometeorological monitoring has taken place around the Utikuma Regional Study Area (URSA), a research site led by the University of Alberta. In situ lake and ground water level data are here combined with time series of airborne lidar and RadarSat II synthetic aperture radar (SAR) data to assess the spatial variability of water levels during the late summer period characterised by flow recession. Long-term lidar data were collected or obtained by the authors in August of 2002, 2008, 2011 and 2016, while seasonal SAR data were captured approximately every 24 days during the summers of 2015, 2016 and 2017. Water levels for wetlands exceeding 100 m² in area across a north-trending 20 km × 5 km topographic gradient north of Utikuma Lake were extracted directly from the lidar and indirectly from the SAR. The recent seasonal variability in spatial water levels was extracted from the SAR, while the lidar data illustrated longer-term trends associated with land use and riparian vegetation succession. All water level data collected in August were combined and averaged at multiple scales using a raster focal statistics function to generate a long-term spatial map of the regional hydraulic gradient and scale-dependent variations. 
Areas of indicated high and low drainage efficiency were overlain onto layers of landcover and surface geology to ascertain causal relationships. Areas associated with high spatial variability in water level illustrate reduced drainage connectivity, while areas of reduced variability indicate high surface connectivity and/or hydraulic conductivity. The hypothesis of surface geology controls on local wetland connectivity and landscape drainage efficiency is supported through this analysis.
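
    The raster focal-statistics averaging step can be sketched in miniature; the 3×3 mean window and the toy water-level grid below are assumptions for illustration, not the study's data or window size:

```python
# Hedged sketch: a focal mean filter replaces each raster cell with the mean
# of its 3x3 neighbourhood; edge cells average only the neighbours that exist.
def focal_mean(grid):
    rows, cols = len(grid), len(grid[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            vals = [grid[r][c]
                    for r in range(max(0, i - 1), min(rows, i + 2))
                    for c in range(max(0, j - 1), min(cols, j + 2))]
            out[i][j] = sum(vals) / len(vals)
    return out

levels = [[1.0, 2.0, 3.0],      # hypothetical water levels, m
          [4.0, 5.0, 6.0],
          [7.0, 8.0, 9.0]]
smooth = focal_mean(levels)
```

    Applying the filter at several window sizes produces the scale-dependent deviations from the regional gradient that the analysis maps.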

  16. Variable-Speed Induction Motor Drives for Aircraft Environmental Control Compressors

    NASA Technical Reports Server (NTRS)

    Mildice, J. W.; Hansen, I. G.; Schreiner, K. E.; Roth, M. E.

    1996-01-01

    New, more-efficient designs for aircraft jet engines are not capable of supplying the large quantities of bleed air necessary to provide pressurization and air conditioning for the environmental control systems (ECS) of the next generation of large passenger aircraft. System analysis and engineering have determined that electrically-driven ECS can help to maintain the improved fuel efficiencies; and electronic controllers and induction motors are now being developed in a NASA/NPD SBIR Program to drive both types of ECS compressors. Previous variable-speed induction motor/controller system developments and publications have primarily focused on field-oriented control, with large transient reserve power, for maximum acceleration and optimum response in actuator and robotics systems. The application area addressed herein is characterized by slowly-changing inputs and outputs, small reserve power capability for acceleration, and optimization for maximum efficiency. This paper therefore focuses on the differences between this case and the optimum response case, and shows the development of this new motor/controller approach. It starts with the creation of a new set of controller requirements. In response to those requirements, new control algorithms are being developed and implemented in an embedded computer, which is integrated into the motor controller closed loop. Buffered logic outputs are used to drive the power switches in a resonant-technology power processor/motor-controller, at switching/resonant frequencies high enough to support efficient high-frequency induction motor operation at speeds up to 50,000 rpm.

  17. Control approach development for variable recruitment artificial muscles

    NASA Astrophysics Data System (ADS)

    Jenkins, Tyler E.; Chapman, Edward M.; Bryant, Matthew

    2016-04-01

    This study characterizes hybrid control approaches for the variable recruitment of fluidic artificial muscles with double acting (antagonistic) actuation. Fluidic artificial muscle actuators have been explored by researchers due to their natural compliance, high force-to-weight ratio, and low cost of fabrication. Previous studies have attempted to improve system efficiency of the actuators through variable recruitment, i.e. using discrete changes in the number of active actuators. While current variable recruitment research utilizes manual valve switching, this paper details the current development of an online variable recruitment control scheme. By continuously controlling applied pressure and discretely controlling the number of active actuators, operation in the lowest possible recruitment state is ensured and working fluid consumption is minimized. Results provide insight into switching control scheme effects on working fluids, fabrication material choices, actuator modeling, and controller development decisions.
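
    The recruitment logic described — continuous pressure control plus a discrete choice of the number of active actuators, always preferring the lowest workable state — can be sketched as below. The linear force model, the pressure limit, and all numbers are hypothetical, not the paper's actuator characterisation:

```python
# Hedged sketch of online variable recruitment: pick the smallest number of
# active artificial muscles that can meet the force demand at full pressure,
# then command a continuous pressure for that state to minimise fluid use.
P_MAX = 620.0e3                 # max supply pressure, Pa (hypothetical)
F_PER_MUSCLE_AT_PMAX = 150.0    # N per muscle at P_MAX (hypothetical, linear)
N_MUSCLES = 4

def recruitment_command(f_demand):
    for k in range(1, N_MUSCLES + 1):           # lowest recruitment state first
        if k * F_PER_MUSCLE_AT_PMAX >= f_demand:
            pressure = P_MAX * f_demand / (k * F_PER_MUSCLE_AT_PMAX)
            return k, pressure                  # discrete state, continuous pressure
    return N_MUSCLES, P_MAX                     # saturate at full recruitment

state, p = recruitment_command(200.0)           # 200 N demand
```

    A real controller would add hysteresis around the switching thresholds to avoid chattering between recruitment states.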

  18. Dopaminergic Variants in Siblings at High Risk for Autism: Associations With Initiating Joint Attention

    PubMed Central

    Gangi, Devon N.; Messinger, Daniel S.; Martin, Eden R.; Cuccaro, Michael L.

    2016-01-01

    Younger siblings of children with autism spectrum disorder (ASD; high-risk siblings) exhibit lower levels of initiating joint attention (IJA; sharing an object or experience with a social partner through gaze and/or gesture) than low-risk siblings of children without ASD. However, high-risk siblings also exhibit substantial variability in this domain. The neurotransmitter dopamine is linked to brain areas associated with reward, motivation, and attention, and common dopaminergic variants have been associated with attention difficulties. We examined whether these common dopaminergic variants, DRD4 and DRD2, explain variability in IJA in high-risk (n = 55) and low-risk (n = 38) siblings. IJA was assessed in the first year during a semi-structured interaction with an examiner. DRD4 and DRD2 genotypes were coded according to associated dopaminergic functioning to create a gene score, with higher scores indicating more genotypes associated with less efficient dopaminergic functioning. Higher dopamine gene scores (indicative of less efficient dopaminergic functioning) were associated with lower levels of IJA in the first year for high-risk siblings, while the opposite pattern emerged in low-risk siblings. Findings suggest differential susceptibility—IJA was differentially associated with dopaminergic functioning depending on familial ASD risk. Understanding genes linked to ASD-relevant behaviors in high-risk siblings will aid in early identification of children at greatest risk for difficulties in these behavioral domains, facilitating targeted prevention and intervention. PMID:26990357

  19. Less efficient oculomotor performance is associated with increased incidence of head impacts in high school ice hockey.

    PubMed

    Kiefer, Adam W; DiCesare, Christopher; Nalepka, Patrick; Foss, Kim Barber; Thomas, Staci; Myer, Gregory D

    2018-01-01

    To evaluate associations between pre-season oculomotor performance on visual tracking tasks and in-season head impact incidence during high school boys ice hockey. Prospective observational study design. Fifteen healthy high school aged male hockey athletes (M = 16.50 ± 1.17 years) performed two 30 s blocks each of a prosaccade and self-paced saccade task, and two trials each of a slow, medium, and fast smooth pursuit task (90°/s; 180°/s; 360°/s) during the pre-season. Regular season in-game collision data were collected via helmet-mounted accelerometers. Simple linear regressions were used to examine relations between oculomotor performance measures and collision incidence at various impact thresholds. The variability of prosaccade latency was positively related to total collisions for the 20g force cutoff (p = 0.046, adjusted R² = 0.28). The average self-paced saccade velocity (p = 0.020, adjusted R² = 0.37) and variability of smooth pursuit gaze velocity (p = 0.012, adjusted R² = 0.47) were also positively associated with total collisions for the 50g force cutoff. These results provide preliminary evidence that less efficient oculomotor performance on three different oculomotor tasks is associated with increased incidence of head impacts during a competitive ice hockey season. The variability of prosaccade latency, the average self-paced saccade velocity and the variability of gaze velocity during predictable smooth pursuit all related to increased head impacts. Future work is needed to further understand player initiated collisions, but this is an important first step toward understanding strategies to reduce incidence of injury risk in ice hockey, and potentially contact sports more generally. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
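
    The adjusted R² values reported for each simple linear regression can be reproduced in principle as follows; the x/y data are invented stand-ins, not the study's oculomotor or collision measurements:

```python
# Hedged sketch: ordinary least-squares fit plus adjusted R^2, the statistic
# quoted for each regression in the abstract. All data here are hypothetical.
def adjusted_r2(y, y_hat, n_predictors):
    n = len(y)
    y_bar = sum(y) / n
    ss_res = sum((a - b) ** 2 for a, b in zip(y, y_hat))
    ss_tot = sum((a - y_bar) ** 2 for a in y)
    r2 = 1.0 - ss_res / ss_tot
    # The adjustment penalises model size, so it can fall as predictors are added.
    return 1.0 - (1.0 - r2) * (n - 1) / (n - n_predictors - 1)

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]    # e.g. prosaccade latency variability
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2]   # e.g. total collisions (hypothetical)
n = len(x)
xb, yb = sum(x) / n, sum(y) / n
slope = (sum((a - xb) * (b - yb) for a, b in zip(x, y))
         / sum((a - xb) ** 2 for a in x))
intercept = yb - slope * xb
y_hat = [intercept + slope * a for a in x]
adj = adjusted_r2(y, y_hat, n_predictors=1)
```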

  20. Probabilistic modeling of anatomical variability using a low dimensional parameterization of diffeomorphisms.

    PubMed

    Zhang, Miaomiao; Wells, William M; Golland, Polina

    2017-10-01

    We present an efficient probabilistic model of anatomical variability in a linear space of initial velocities of diffeomorphic transformations and demonstrate its benefits in clinical studies of brain anatomy. To overcome the computational challenges of the high dimensional deformation-based descriptors, we develop a latent variable model for principal geodesic analysis (PGA) based on a low dimensional shape descriptor that effectively captures the intrinsic variability in a population. We define a novel shape prior that explicitly represents principal modes as a multivariate complex Gaussian distribution on the initial velocities in a bandlimited space. We demonstrate the performance of our model on a set of 3D brain MRI scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Our model yields a more compact representation of group variation at substantially lower computational cost than the state-of-the-art method such as tangent space PCA (TPCA) and probabilistic principal geodesic analysis (PPGA) that operate in the high dimensional image space. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. PrimerDesign-M: A multiple-alignment based multiple-primer design tool for walking across variable genomes

    DOE PAGES

    Yoon, Hyejin; Leitner, Thomas

    2014-12-17

    Analyses of entire viral genomes or mtDNA require comprehensive design of many primers across the genome. In addition, simultaneous optimization of several DNA primer design criteria may improve overall experimental efficiency and downstream bioinformatic processing. To achieve these goals, we developed PrimerDesign-M. It includes several options for multiple-primer design, allowing researchers to efficiently design walking primers that cover long DNA targets, such as entire HIV-1 genomes, and it optimizes primers simultaneously, informed by genetic diversity in multiple alignments and by experimental design constraints given by the user. PrimerDesign-M can also design primers that include DNA barcodes and minimize primer dimerization. PrimerDesign-M finds optimal primers for highly variable DNA targets and facilitates design flexibility by suggesting alternative designs to adapt to experimental conditions.

  2. Method for modifying trigger level for adsorber regeneration

    DOEpatents

    Ruth, Michael J.; Cunningham, Michael J.

    2010-05-25

    A method for modifying a NOx adsorber regeneration triggering variable. Engine operating conditions are monitored until the regeneration triggering variable is met. The adsorber is regenerated and the adsorption efficiency of the adsorber is subsequently determined. The regeneration triggering variable is modified to correspond with the decline in adsorber efficiency. The adsorber efficiency may be determined using an empirically predetermined set of values or by using a pair of oxygen sensors to determine the oxygen response delay across the sensors.
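
    One plausible update rule matching this description can be sketched as below; the patent does not specify the exact functional form, so the proportional scaling, the function name, and the numbers are all hypothetical:

```python
# Hedged sketch: scale the regeneration trigger down in proportion to the
# measured decline in adsorber efficiency, so regeneration is initiated
# earlier as the adsorber ages. Units of the trigger are arbitrary.
def updated_trigger(trigger, nominal_eff, measured_eff):
    """Lower the triggering variable to track the efficiency decline."""
    return trigger * (measured_eff / nominal_eff)

trigger = updated_trigger(100.0, nominal_eff=0.95, measured_eff=0.76)
```

    In practice the measured efficiency would come from the lookup table or the oxygen-sensor delay measurement the patent describes.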

  3. Thermo electronic laser energy conversion

    NASA Technical Reports Server (NTRS)

    Hansen, L. K.; Rasor, N. S.

    1976-01-01

    The thermo electronic laser energy converter (TELEC) is described and compared to the Waymouth converter and the conventional thermionic converter. The electrical output characteristics and efficiency of TELEC operation are calculated for a variety of design variables. Calculations and results are briefly outlined. It is shown that the TELEC concept can potentially convert 25 to 50 percent of incident laser radiation into electric power at high power densities and high waste heat rejection temperatures.

  4. Performance Characterization of a Novel Plasma Thruster to Provide a Revolutionary Operationally Responsive Space Capability with Micro- and Nano-Satellites

    DTIC Science & Technology

    2011-03-24

    …and radiation resistance of rare earth permanent magnets for applications such as ion thrusters and high efficiency Stirling Radioisotope Generators…from Electron Transitioning, Discharge Current, Discharge Power, Discharge Voltage, Θ Divergence Angle, Earth's Gravity at Sea Level…Hall effect thruster, HIVAC High Voltage Hall Accelerator, LEO Low Earth Orbit, LDS Laser Displacement System, LVDT Linear variable differential

  5. Principle and Basic Characteristics of a Hybrid Variable-Magnetic-Force Motor

    NASA Astrophysics Data System (ADS)

    Sakai, Kazuto; Kuramochi, Satoru

    Reduction in the power consumed by motors is important for energy saving in the case of electrical appliances and electric vehicles (EVs). The motors used for operating these devices run at variable speeds. Further, the motors operate with a small load in the stationary mode and a large load in the starting mode. A permanent magnet motor can be operated at the rated power with high efficiency. However, the efficiency is low at a small load or at a high speed because the large constant magnetic force results in substantial core loss. Furthermore, the flux-weakening current that decreases the voltage at a high speed leads to significant copper loss and core loss. Therefore, we have developed a new technique for controlling the magnetic force of a permanent magnet on the basis of the load or speed of the motor. In this paper, we propose a novel motor that can vary the magnetic flux of a permanent magnet and clarify the principle and basic characteristics of the motor. The new motor has a permanent magnet that is magnetized by the magnetizing coil of the stator. The analysis results show that the magnetic flux linkage of the motor can be changed from 37% to 100% and that a high torque can be produced.

  6. Investigation of Recombination Processes In A Magnetized Plasma

    NASA Technical Reports Server (NTRS)

    Chavers, Greg; Chang-Diaz, Franklin; Rodgers, Stephen L. (Technical Monitor)

    2002-01-01

    Interplanetary travel requires propulsion systems that can provide high specific impulse (Isp), while also having sufficient thrust to rapidly accelerate large payloads. One such propulsion system is the Variable Specific Impulse Magneto-plasma Rocket (VASIMR), which creates, heats, and exhausts plasma to provide variable thrust and Isp, optimally meeting the mission requirements. A large fraction of the energy to create the plasma is frozen in the exhaust in the form of ionization energy. This loss mechanism is common to all electromagnetic plasma thrusters and has an impact on their efficiency. When the device operates at high Isp, where the exhaust kinetic energy is high compared to the ionization energy, the frozen flow component is of little consequence; however, at low Isp, the effect of the frozen flow may be important. If some of this energy could be recovered through recombination processes, and re-injected as neutral kinetic energy, the efficiency of VASIMR, in its low Isp/high thrust mode may be improved. In this operating regime, the ionization energy is a large portion of the total plasma energy. An experiment is being conducted to investigate the possibility of recovering some of the energy used to create the plasma. This presentation will cover the progress and status of the experiment involving surface recombination of the plasma.

  7. Analysis of recovery efficiency in high-temperature aquifer thermal energy storage: a Rayleigh-based method

    NASA Astrophysics Data System (ADS)

    Schout, Gilian; Drijver, Benno; Gutierrez-Neri, Mariene; Schotting, Ruud

    2014-01-01

    High-temperature aquifer thermal energy storage (HT-ATES) is an important technique for energy conservation. A controlling factor for the economic feasibility of HT-ATES is the recovery efficiency. Due to the effects of density-driven flow (free convection), HT-ATES systems applied in permeable aquifers typically have lower recovery efficiencies than conventional (low-temperature) ATES systems. For a reliable estimation of the recovery efficiency it is, therefore, important to take the effect of density-driven flow into account. A numerical evaluation of the prime factors influencing the recovery efficiency of HT-ATES systems is presented. Sensitivity runs evaluating the effects of aquifer properties, as well as operational variables, were performed to deduce the most important factors that control the recovery efficiency. A correlation was found between the dimensionless Rayleigh number (a measure of the relative strength of free convection) and the calculated recovery efficiencies. Based on a modified Rayleigh number, two simple analytical solutions are proposed to calculate the recovery efficiency, each one covering a different range of aquifer thicknesses. The analytical solutions accurately reproduce all numerically modeled scenarios with an average error of less than 3 %. The proposed method can be of practical use when considering or designing an HT-ATES system.
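
    The dimensionless number at the heart of the method can be sketched as a Rayleigh-Darcy number of the general porous-media form; the paper's specific modified Rayleigh number and its fitted recovery-efficiency relations are not reproduced here, and every parameter value below is an illustrative assumption:

```python
# Hedged sketch: Rayleigh-Darcy number Ra = g*beta*dT*K*H / (nu*alpha),
# a common measure of free-convection strength in a porous aquifer.
# All values are order-of-magnitude illustrations, not the study's inputs.
g = 9.81          # gravity, m/s^2
beta = 4.0e-4     # thermal expansion coefficient of water, 1/K (approx.)
dT = 60.0         # storage vs. ambient temperature difference, K
K = 1.0e-11       # intrinsic permeability, m^2
H = 20.0          # aquifer thickness, m
nu = 0.5e-6       # kinematic viscosity of warm water, m^2/s (approx.)
alpha = 1.5e-7    # effective thermal diffusivity, m^2/s (approx.)

Ra = g * beta * dT * K * H / (nu * alpha)
# Larger Ra -> stronger buoyancy-driven flow -> lower expected recovery efficiency.
```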

  8. The Use of Scale-Dependent Precision to Increase Forecast Accuracy in Earth System Modelling

    NASA Astrophysics Data System (ADS)

    Thornes, Tobias; Duben, Peter; Palmer, Tim

    2016-04-01

    At the current pace of development, it may be decades before the 'exa-scale' computers needed to resolve individual convective clouds in weather and climate models become available to forecasters, and such machines will incur very high power demands. But the resolution could be improved today by switching to more efficient, 'inexact' hardware with which variables can be represented in 'reduced precision'. Currently, all numbers in our models are represented as double-precision floating-point numbers - each requiring 64 bits of memory - to minimise rounding errors, regardless of spatial scale. Yet observational and modelling constraints mean that values of atmospheric variables are inevitably known less precisely on smaller scales, suggesting that this may be a waste of computer resources. More accurate forecasts might therefore be obtained by taking a scale-selective approach whereby the precision of variables is gradually decreased at smaller spatial scales to optimise the overall efficiency of the model. To study the effect of reducing precision to different levels on multiple spatial scales, we here introduce a new model atmosphere developed by extending the Lorenz '96 idealised system to encompass three tiers of variables - which represent large-, medium- and small-scale features - for the first time. In this chaotic but computationally tractable system, the 'true' state can be defined by explicitly resolving all three tiers. The abilities of low resolution (single-tier) double-precision models and similar-cost high resolution (two-tier) models in mixed-precision to produce accurate forecasts of this 'truth' are compared. The high resolution models outperform the low resolution ones even when small-scale variables are resolved in half-precision (16 bits). This suggests that using scale-dependent levels of precision in more complicated real-world Earth System models could allow forecasts to be made at higher resolution and with improved accuracy. 
If adopted, this new paradigm would represent a revolution in numerical modelling that could be of great benefit to the world.
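
    The precision loss that half precision entails can be demonstrated directly with the standard library; this is a generic IEEE 754 binary16 illustration, not the study's mixed-precision emulator:

```python
import struct

# Hedged sketch: round a double-precision value through IEEE 754 binary16
# ("half precision", 16 bits) using struct's 'e' format, to show the scale
# of rounding error accepted for small-scale variables.
x = 0.1                                            # not exactly representable
x_half = struct.unpack('<e', struct.pack('<e', x))[0]

rel_err = abs(x_half - x) / abs(x)
# binary16 keeps about 3 decimal digits (11-bit significand), so the relative
# error is around 1e-4, versus ~1e-17 for the float64 representation.
```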

  9. Shapley value-based multi-objective data envelopment analysis application for assessing academic efficiency of university departments

    NASA Astrophysics Data System (ADS)

    Abing, Stephen Lloyd N.; Barton, Mercie Grace L.; Dumdum, Michael Gerard M.; Bongo, Miriam F.; Ocampo, Lanndon A.

    2018-02-01

    This paper adopts a modified approach of data envelopment analysis (DEA) to measure the academic efficiency of university departments. In real-world case studies, conventional DEA models often identify too many decision-making units (DMUs) as efficient. This occurs when the number of DMUs under evaluation is not large enough compared to the total number of decision variables. To overcome this limitation and reduce the number of decision variables, multi-objective data envelopment analysis (MODEA) approach previously presented in the literature is applied. The MODEA approach applies Shapley value as a cooperative game to determine the appropriate weights and efficiency score of each category of inputs. To illustrate the performance of the adopted approach, a case study is conducted in a university in the Philippines. The input variables are academic staff, non-academic staff, classrooms, laboratories, research grants, and department expenditures, while the output variables are the number of graduates and publications. The results of the case study revealed that all DMUs are inefficient. DMUs with efficiency scores close to the ideal efficiency score may be emulated by other DMUs with least efficiency scores.
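
    For orientation, the baseline model that the MODEA approach modifies — a plain input-oriented CCR DEA in multiplier form — can be sketched as a linear program. This is not the paper's Shapley-value weighting scheme, and the single-input, single-output data (e.g. academic staff vs. graduates) are invented:

```python
import numpy as np
from scipy.optimize import linprog

# Hedged sketch: input-oriented CCR DEA (multiplier form). For each DMU o,
# maximise u.y_o subject to v.x_o = 1 and u.y_j - v.x_j <= 0 for all DMUs j.
X = np.array([[2.0], [4.0], [3.0]])   # inputs, one row per DMU (hypothetical)
Y = np.array([[4.0], [6.0], [3.0]])   # outputs, one row per DMU (hypothetical)
n, m = X.shape
s = Y.shape[1]

def ccr_efficiency(o):
    # Decision variables: output weights u (s of them), then input weights v (m).
    c = np.concatenate([-Y[o], np.zeros(m)])        # linprog minimises -u.y_o
    A_eq = [np.concatenate([np.zeros(s), X[o]])]    # v.x_o = 1 (normalisation)
    A_ub = np.hstack([Y, -X])                       # u.y_j - v.x_j <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0], bounds=(1e-6, None))
    return -res.fun

effs = [ccr_efficiency(o) for o in range(n)]
```

    With many inputs and outputs but few DMUs, many such LPs attain an efficiency of 1 — exactly the over-identification problem the Shapley-value MODEA variant is brought in to reduce.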

  10. Discrete and continuous variables for measurement-device-independent quantum cryptography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Feihu; Curty, Marcos; Qi, Bing

    In a recent Article in Nature Photonics, Pirandola et al.1 claim that the achievable secret key rates of discrete-variable (DV) measurement-device-independent (MDI) quantum key distribution (QKD) (refs 2,3) are “typically very low, unsuitable for the demands of a metropolitan network” and introduce a continuous-variable (CV) MDI QKD protocol capable of providing key rates which, they claim, are “three orders of magnitude higher” than those of DV MDI QKD. We believe, however, that the claims regarding low key rates of DV MDI QKD made by Pirandola et al.1 are too pessimistic. Here we show that the secret key rate of DV MDI QKD with commercially available high-efficiency single-photon detectors (SPDs) (for example, see http://www.photonspot.com/detectors and http://www.singlequantum.com) and good system alignment is typically rather high, and thus highly suitable not only for long-distance communication but also for metropolitan networks.

  11. Discrete and continuous variables for measurement-device-independent quantum cryptography

    DOE PAGES

    Xu, Feihu; Curty, Marcos; Qi, Bing; ...

    2015-11-16

    In a recent Article in Nature Photonics, Pirandola et al.1 claim that the achievable secret key rates of discrete-variable (DV) measurement-device-independent (MDI) quantum key distribution (QKD) (refs 2,3) are “typically very low, unsuitable for the demands of a metropolitan network” and introduce a continuous-variable (CV) MDI QKD protocol capable of providing key rates which, they claim, are “three orders of magnitude higher” than those of DV MDI QKD. We believe, however, that the claims regarding low key rates of DV MDI QKD made by Pirandola et al.1 are too pessimistic. Here we show that the secret key rate of DV MDI QKD with commercially available high-efficiency single-photon detectors (SPDs) (for example, see http://www.photonspot.com/detectors and http://www.singlequantum.com) and good system alignment is typically rather high, and thus highly suitable not only for long-distance communication but also for metropolitan networks.

  12. Structural identifiability of cyclic graphical models of biological networks with latent variables.

    PubMed

    Wang, Yulin; Lu, Na; Miao, Hongyu

    2016-06-13

    Graphical models have long been used to describe biological networks for a variety of important tasks, such as the determination of key biological parameters, and the structure of a graphical model ultimately determines whether such unknown parameters can be unambiguously obtained from experimental observations (i.e., the identifiability problem). Limited by resources or technical capacities, complex biological networks are usually only partially observed in experiments, which introduces latent variables into the corresponding graphical models. A number of previous studies have tackled the parameter identifiability problem for graphical models such as linear structural equation models (SEMs) with or without latent variables. However, the limited resolution and efficiency of existing approaches call for further development of novel structural identifiability analysis algorithms. An efficient structural identifiability analysis algorithm is developed in this study for a broad range of network structures. The proposed method adopts Wright's path coefficient method to generate identifiability equations in the form of symbolic polynomials, and then converts these symbolic equations into binary matrices (called identifiability matrices). Several matrix operations are introduced for identifiability matrix reduction while maintaining system equivalency. Based on the reduced identifiability matrices, the structural identifiability of each parameter is determined. A number of benchmark models are used to verify the validity of the proposed approach. Finally, the network module for influenza A virus replication is employed as a real example to illustrate the application of the proposed approach in practice. The proposed approach can deal with cyclic networks with latent variables. Its key advantage is that it intentionally avoids symbolic computation and is thus highly efficient. The method is also capable of determining the identifiability of each single parameter and thus offers higher resolution than many existing approaches. Overall, this study provides a basis for the systematic examination and refinement of graphical models of biological networks from the identifiability point of view, and it has significant potential to be extended to more complex network structures or high-dimensional systems.

  13. Progress on a Taylor weak statement finite element algorithm for high-speed aerodynamic flows

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Freels, J. D.

    1989-01-01

    A new finite element numerical Computational Fluid Dynamics (CFD) algorithm has matured to the point of efficiently solving two-dimensional high speed real-gas compressible flow problems in generalized coordinates on modern vector computer systems. The algorithm employs a Taylor Weak Statement classical Galerkin formulation, a variably implicit Newton iteration, and a tensor matrix product factorization of the linear algebra Jacobian under a generalized coordinate transformation. Allowing for a general two-dimensional conservation law system, the algorithm has been exercised on the Euler and laminar forms of the Navier-Stokes equations. Real-gas fluid properties are admitted, and numerical results verify solution accuracy, efficiency, and stability over a range of test problem parameters.

  14. A scalable diffraction-based scanning 3D colour video display as demonstrated by using tiled gratings and a vertical diffuser.

    PubMed

    Jia, Jia; Chen, Jhensi; Yao, Jun; Chu, Daping

    2017-03-17

    A high quality 3D display requires a high amount of optical information throughput, which needs an appropriate mechanism to distribute information in space uniformly and efficiently. This study proposes a front-viewing system which is capable of managing the required amount of information efficiently from a high bandwidth source and projecting 3D images with a decent size and a large viewing angle at video rate in full colour. It employs variable gratings to support a high bandwidth distribution. This concept is scalable and the system can be made compact in size. A horizontal parallax only (HPO) proof-of-concept system is demonstrated by projecting holographic images from a digital micro mirror device (DMD) through rotational tiled gratings before they are realised on a vertical diffuser for front-viewing.

  15. Very High Fuel Economy, Heavy Duty, Constant Speed, Truck Engine Optimized Via Unique Energy Recovery Turbines and Facilitated High Efficiency Continuously Variable Drivetrain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bahman Habibzadeh

    2010-01-31

    The project began under a cooperative agreement between Mack Trucks, Inc. and the Department of Energy starting September 1, 2005. The major objective of the four-year project was to demonstrate a 10% efficiency gain by operating a Volvo 13-litre heavy-duty diesel engine at a constant or narrow speed, coupled to a continuously variable transmission. The simulation work on the constant-speed engine started on October 1st. The initial simulations were aimed at providing a basic engine model for the VTEC vehicle simulations. Compressor and turbine maps were based upon existing maps and/or qualified, realistic estimations. The reference engine is a MD 13 US07 475 Hp. Phase I, completed in May 2006, determined that an increase in fuel efficiency of 10.5% over the OICA cycle, and 8.2% over a road cycle, was possible for the engine. The net increase in fuel efficiency would be 5% when coupled to a CVT and operated over simulated highway conditions. In Phase II an economic analysis was performed on the engine with turbocompound (TC) and a continuously variable transmission (CVT). The system was analyzed to determine the payback time needed for the added cost of the TC and CVT system. The analysis considered two production scenarios of 10,000 and 60,000 units annually. The cost estimate includes the turbocharger, the turbocompound unit, the interstage duct diffuser and installation details, the modifications necessary on the engine, and the CVT. Even with the cheapest fuel and the lowest improvement, the payback time is only slightly more than 12 months. A gear train is necessary between the engine crankshaft and the turbocompound unit; this is considered relatively straightforward, with no design problems.
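The payback analysis in Phase II reduces to a simple relation: the added hardware cost divided by the annual fuel saving. Every input value used below (hardware cost, fuel consumption, fuel price, net gain) is an illustrative assumption, not a figure from the report.

```python
def payback_months(added_cost, annual_fuel_gal, price_per_gal, efficiency_gain):
    """Months needed for fuel savings to repay the added TC + CVT cost.

    Generic payback relation: yearly saving = fuel burned * price * fraction
    of fuel no longer needed; payback = cost / monthly saving.
    """
    annual_savings = annual_fuel_gal * price_per_gal * efficiency_gain
    return 12.0 * added_cost / annual_savings

# e.g. $6,000 of added hardware, 20,000 gal/yr, $2.50/gal, 5% net gain
print(round(payback_months(6000, 20000, 2.50, 0.05), 1))  # -> 28.8
```

Plugging in the report's actual cost estimates and fuel prices would reproduce its finding that even the pessimistic case pays back in roughly a year.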

  16. Regional Patterns of Stress Transfer in the Ablation Zone of the Western Greenland Ice Sheet

    NASA Astrophysics Data System (ADS)

    Andrews, L. C.; Hoffman, M. J.; Neumann, T.; Catania, G. A.; Luethi, M. P.; Hawley, R. L.

    2016-12-01

    Current understanding of the subglacial system indicates that the seasonal evolution of ice flow is strongly controlled by the gradual upstream progression of an inefficient-to-efficient transition within the subglacial hydrologic system, followed by the reduction of melt and a downstream collapse of the efficient system. Using a spatiotemporally dense network of GPS-derived surface velocities from the Pâkitsoq region of the western Greenland Ice Sheet, we find that this pattern of subglacial development is complicated by heterogeneous bed topography, resulting in complex patterns of ice flow. Following low-elevation melt onset, early-melt-season strain rate anomalies are dominated by regional extension, which then gives way to spatially expansive compression. However, once daily minimum ice velocities fall below the observed winter background velocities, an alternating spatial pattern of extension and compression prevails. This pattern of strain rate anomalies is correlated with changing basal topography and differences in the magnitude of diurnal surface ice speeds. Along subglacial ridges, diurnal variability in ice speed is large, suggestive of a mature, efficient subglacial system. In regions of subglacial lows, diurnal variability in ice velocity is relatively low, likely associated with a less developed efficient subglacial system. The observed pattern suggests that borehole observations and modeling results demonstrating the importance of longitudinal stress transfer at a single field location are likely widely applicable in our study area and in other regions of the Greenland Ice Sheet with highly variable bed topography. Further, the complex pattern of ice flow and evidence of spatially extensive longitudinal stress transfer add to the body of work indicating that bed character plays an important role in the development of the subglacial system; closely matching diurnal ice velocity patterns with subglacial models may be difficult without coupling these models to higher-order ice flow models.

  17. Ultra-high-speed variable focus optics for novel applications in advanced imaging

    NASA Astrophysics Data System (ADS)

    Kang, S.; Dotsenko, E.; Amrhein, D.; Theriault, C.; Arnold, C. B.

    2018-02-01

    With the advancement of ultra-fast manufacturing technologies, high speed imaging with high 3D resolution has become increasingly important. Here we show the use of an ultra-high-speed variable focus optical element, the TAG Lens, to enable new ways to acquire 3D information from an object. The TAG Lens uses sound to adjust the index of refraction profile in a liquid and thereby can achieve focal scanning rates greater than 100 kHz. When combined with a high-speed pulsed LED and a high-speed camera, we can exploit this phenomenon to achieve high-resolution imaging through large depths. By combining the image acquisition with digital image processing, we can extract relevant parameters such as tilt and angle information from objects in the image. Due to the high speeds at which images can be collected and processed, we believe this technique can be used as an efficient method of industrial inspection and metrology for high throughput applications.

  18. YAMAT-seq: an efficient method for high-throughput sequencing of mature transfer RNAs

    PubMed Central

    Shigematsu, Megumi; Honda, Shozo; Loher, Phillipe; Telonis, Aristeidis G.; Rigoutsos, Isidore

    2017-01-01

    Besides translation, transfer RNAs (tRNAs) play many non-canonical roles in various biological pathways and exhibit highly variable expression profiles. To unravel the emerging complexities of tRNA biology and the molecular mechanisms underlying them, an efficient tRNA sequencing method is required. However, the rigid structure of tRNA has presented a challenge to the development of such methods. We report the development of Y-shaped Adapter-ligated MAture TRNA sequencing (YAMAT-seq), an efficient and convenient method for high-throughput sequencing of mature tRNAs. YAMAT-seq circumvents the issue of inefficient adapter ligation, a characteristic of conventional RNA sequencing methods for mature tRNAs, by employing the efficient and specific ligation of a Y-shaped adapter to mature tRNAs using T4 RNA Ligase 2. Subsequent cDNA amplification and next-generation sequencing successfully yield numerous mature tRNA sequences. YAMAT-seq has high specificity for mature tRNAs and high sensitivity, detecting most isoacceptors from minute amounts of total RNA. Moreover, YAMAT-seq shows quantitative capability to estimate expression levels of mature tRNAs, and has high reproducibility and broad applicability across various cell lines. YAMAT-seq thus provides a high-throughput technique for identifying tRNA profiles and their regulation in various transcriptomes, which could play important regulatory roles in translation and other biological processes. PMID:28108659

  19. Mathematical modelling of Bit-Level Architecture using Reciprocal Quantum Logic

    NASA Astrophysics Data System (ADS)

    Narendran, S.; Selvakumar, J.

    2018-04-01

    High-performance computing demands both speed and energy efficiency. Reciprocal Quantum Logic (RQL) is one technology that promises high speed with zero static power dissipation. RQL uses an AC power supply as input rather than a DC input, and it has three sets of basic gates. Series of reciprocal transmission lines are placed between the gates to avoid power loss and to achieve high speed. An analytical model of a bit-level architecture is developed using RQL. The major drawback of RQL is area: distributing the power supply properly requires splitters, which occupy a large area. Distributed arithmetic computes a vector-vector multiplication in which one vector is constant and the other is a signed variable; each word is treated as a binary number, and the bits are rearranged and combined to form the distributed system. Distributed arithmetic is widely used in convolution and other high-performance computational devices.
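The bit-level rearrangement that distributed arithmetic performs can be made concrete in software. The sketch below handles unsigned operands only, with invented example values; real DA hardware additionally handles the two's-complement sign bit with a final subtraction. The multipliers are replaced by a lookup table of partial sums of the constant vector:

```python
def da_dot(consts, xs, bits):
    """Bit-serial distributed-arithmetic inner product (unsigned operands).

    Precomputes a LUT of all 2**len(consts) partial sums of the constant
    vector, then accumulates one LUT lookup per bit plane of the variable
    vector: y = sum_b 2**b * LUT[bit slice b of xs].
    """
    n = len(consts)
    lut = [sum(c for m, c in enumerate(consts) if addr >> m & 1)
           for addr in range(1 << n)]
    acc = 0
    for b in range(bits):
        addr = 0
        for m, x in enumerate(xs):
            addr |= ((x >> b) & 1) << m      # gather bit b of every word
        acc += lut[addr] << b                # one lookup + shift per bit plane
    return acc

consts, xs = [3, 5, 7, 2], [9, 4, 13, 6]
print(da_dot(consts, xs, bits=4))              # -> 150
print(sum(c * x for c, x in zip(consts, xs)))  # -> 150 (direct check)
```

The point of the rearrangement is that the inner loop contains no multiplications at all, only table lookups, shifts and additions, which maps naturally onto logic families like RQL.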

  20. Structural optimisation of cage induction motors using finite element analysis

    NASA Astrophysics Data System (ADS)

    Palko, S.

    The current trend in motor design is towards highly efficient, low-noise, low-cost, modular motors with a high power factor. High-torque motors are useful in applications such as servo motors, lifts, cranes, and rolling mills. This report contains a detailed review of different optimization methods applicable to various design problems. Special attention is given to the performance of the different methods when they are used with finite element analysis (FEA) as an objective function, and to accuracy problems arising from the numerical simulations. An effective method for designing high-starting-torque, high-efficiency motors is also presented. The method described in this work combines FEA with algorithms for the optimization of the slot geometry; the optimization algorithm modifies the positions of the nodal points in the element mesh. The number of independent variables ranges from 14 to 140 in this work.

  1. [Analysis of the technical efficiency of hospitals in the Spanish National Health Service].

    PubMed

    Pérez-Romero, Carmen; Ortega-Díaz, M Isabel; Ocaña-Riola, Ricardo; Martín-Martín, José Jesús

    To analyse the technical efficiency and productivity of general hospitals in the Spanish National Health Service (NHS) (2010-2012) and identify explanatory hospital and regional variables, 230 NHS hospitals were analysed by data envelopment analysis for overall, technical and scale efficiency, and by the Malmquist index. The robustness of the analysis is contrasted against alternative input-output models. A fixed-effects multilevel cross-sectional linear model was used to analyse the explanatory efficiency variables. The average rate of overall technical efficiency (OTE) was 0.736 in 2012, with considerable variability by region. The Malmquist index (2010-2012) is 1.013. Some 23% of the variability in OTE is attributable to the region in question. Statistically significant exogenous variables (residents per 100 physicians, aging index, average annual income per household, essential public service expenditure and public health expenditure per capita) explain 42% of the OTE variability between hospitals and 64% between regions. At the hospital level, the number of residents showed a statistically significant relationship with OTE. As regards regions, there is a statistically significant direct linear association between OTE and annual income per capita and essential public service expenditure, and an indirect association with the aging index and annual public health expenditure per capita. The significant room for improvement in hospital efficiency is conditioned by region-specific characteristics, specifically the aging, wealth and public expenditure policies of each one. Copyright © 2016 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.

  2. Genetic variation in heat tolerance-related traits in a population of wheat multiple synthetic derivatives

    PubMed Central

    Elbashir, Awad A. E.; Gorafi, Yasir S. A.; Tahir, Izzat S. A.; Elhashimi, Ashraf. M. A.; Abdalla, Modather G. A.; Tsujimoto, Hisashi

    2017-01-01

    In wheat (Triticum aestivum L.), high temperature (≥30°C) during grain filling leads to a considerable reduction in grain yield. We studied 400 multiple synthetic derivative (MSD) lines to examine the genetic variability of heat stress-adaptive traits and to identify new sources of heat tolerance for use in wheat breeding programs. The experiment was arranged in an augmented randomized complete block design in four environments in Sudan. A wide range of genetic variability was found for most of the traits in all environments. For all traits examined, we found MSD lines that performed better than their parent ‘Norin 61’ and two adapted Sudanese cultivars. Using heat tolerance efficiency, we identified 13 highly heat-tolerant lines and several lines with intermediate heat tolerance and good yield potential. We also identified lines with alleles that can be used to increase wheat yield potential. Our study revealed that the use of the MSD population is an efficient way to explore the genetic variation in Ae. tauschii for wheat breeding and improvement. PMID:29398942

  3. Virtual reality microscope versus conventional microscope regarding time to diagnosis: an experimental study.

    PubMed

    Randell, Rebecca; Ruddle, Roy A; Mello-Thoms, Claudia; Thomas, Rhys G; Quirke, Phil; Treanor, Darren

    2013-01-01

    To create and evaluate a virtual reality (VR) microscope that is as efficient as the conventional microscope, seeking to support the introduction of digital slides into routine practice, a VR microscope was designed and implemented by combining ultra-high-resolution displays with VR technology, techniques for fast interaction, and high usability. It was evaluated using a mixed factorial experimental design with technology and task as within-participant variables and grade of histopathologist as a between-participant variable. Time to diagnosis was similar for the conventional and VR microscopes. However, there was a significant difference in the mean magnification used between the two technologies, with participants working at a higher level of magnification on the VR microscope. The results suggest that, with the right technology, efficient use of digital pathology in routine practice is a realistic possibility. Further work is required to explore what magnification is required on the VR microscope for histopathologists to identify diagnostic features, and the effect of the digital slide production process on this. © 2012 Blackwell Publishing Limited.

  4. Chemical vapor deposition techniques and related methods for manufacturing microminiature thermionic converters

    DOEpatents

    King, Donald B.; Sadwick, Laurence P.; Wernsman, Bernard R.

    2002-06-25

    Methods of manufacturing microminiature thermionic converters (MTCs) having high energy-conversion efficiencies and variable operating temperatures using MEMS manufacturing techniques including chemical vapor deposition. The MTCs made using the methods of the invention incorporate cathode to anode spacing of about 1 micron or less and use cathode and anode materials having work functions ranging from about 1 eV to about 3 eV. The MTCs also exhibit maximum efficiencies of just under 30%, and thousands of the devices can be fabricated at modest costs.

  5. An efficient and robust algorithm for two dimensional time dependent incompressible Navier-Stokes equations: High Reynolds number flows

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1991-01-01

    An algorithm is presented for unsteady two-dimensional incompressible Navier-Stokes calculations. This algorithm is based on the fourth order partial differential equation for incompressible fluid flow which uses the streamfunction as the only dependent variable. The algorithm is second order accurate in both time and space. It uses a multigrid solver at each time step. It is extremely efficient with respect to the use of both CPU time and physical memory. It is extremely robust with respect to Reynolds number.

  6. 25 MHz clock continuous-variable quantum key distribution system over 50 km fiber channel

    PubMed Central

    Wang, Chao; Huang, Duan; Huang, Peng; Lin, Dakai; Peng, Jinye; Zeng, Guihua

    2015-01-01

    In this paper, a practical continuous-variable quantum key distribution system is developed and run under real-world conditions at a 25 MHz clock rate. To reach a high rate, we employed a homodyne detector with bandwidth up to 300 MHz and an optimal high-efficiency error reconciliation algorithm with a processing speed of up to 25 Mbps. To optimize the stability of the system, several key techniques were developed, including a novel phase compensation algorithm, a polarization feedback algorithm, and related stabilization methods for the modulators. In practice, our system was tested for more than 12 hours, with a final secret key rate of 52 kbps over a 50 km transmission distance, the highest rate so far at such a distance. Our system may pave the way for practical broadband secure quantum communication with continuous variables under commercial conditions. PMID:26419413

  7. 25 MHz clock continuous-variable quantum key distribution system over 50 km fiber channel.

    PubMed

    Wang, Chao; Huang, Duan; Huang, Peng; Lin, Dakai; Peng, Jinye; Zeng, Guihua

    2015-09-30

    In this paper, a practical continuous-variable quantum key distribution system is developed and run under real-world conditions at a 25 MHz clock rate. To reach a high rate, we employed a homodyne detector with bandwidth up to 300 MHz and an optimal high-efficiency error reconciliation algorithm with a processing speed of up to 25 Mbps. To optimize the stability of the system, several key techniques were developed, including a novel phase compensation algorithm, a polarization feedback algorithm, and related stabilization methods for the modulators. In practice, our system was tested for more than 12 hours, with a final secret key rate of 52 kbps over a 50 km transmission distance, the highest rate so far at such a distance. Our system may pave the way for practical broadband secure quantum communication with continuous variables under commercial conditions.

  8. Hyperspectral and multispectral data fusion based on linear-quadratic nonnegative matrix factorization

    NASA Astrophysics Data System (ADS)

    Benhalouche, Fatima Zohra; Karoui, Moussa Sofiane; Deville, Yannick; Ouamri, Abdelaziz

    2017-04-01

    This paper proposes three multisharpening approaches to enhance the spatial resolution of urban hyperspectral remote sensing images. These approaches, related to linear-quadratic spectral unmixing techniques, use a linear-quadratic nonnegative matrix factorization (NMF) multiplicative algorithm. The methods begin by unmixing the observable high-spectral/low-spatial resolution hyperspectral and high-spatial/low-spectral resolution multispectral images. The obtained high-spectral/high-spatial resolution features are then recombined, according to the linear-quadratic mixing model, to obtain an unobservable multisharpened high-spectral/high-spatial resolution hyperspectral image. In the first approach, the hyperspectral and multispectral variables are independently optimized once they have been coherently initialized. In the second approach, these variables are alternately updated, and in the third they are jointly updated. Experiments using synthetic and real data are conducted to assess the efficiency, in the spatial and spectral domains, of the designed approaches and of linear NMF-based approaches from the literature. Experimental results show that the designed methods globally yield very satisfactory spectral and spatial fidelities for the multisharpened hyperspectral data, and that they significantly outperform the literature approaches used for comparison.
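For readers unfamiliar with multiplicative NMF updates, here is a minimal linear sketch; the paper's linear-quadratic variant augments the mixing model with pairwise product terms, and the dimensions and data below are synthetic.

```python
import numpy as np

def nmf(V, r, iters=300, eps=1e-9, seed=0):
    """Linear NMF, V ≈ W @ H, via Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # updates preserve nonnegativity
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(1)
V = rng.random((20, 3)) @ rng.random((3, 30))   # exactly rank-3, nonnegative
W, H = nmf(V, r=3)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(err)  # small relative reconstruction error for low-rank input
```

Because the updates multiply by nonnegative ratios, W and H stay nonnegative without any projection step, which is why multiplicative rules are popular for unmixing problems like this one.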

  9. Enhanced production of lovastatin by Omphalotus olearius (DC.) Singer in solid state fermentation.

    PubMed

    Atlı, Burcu; Yamaç, Mustafa; Yıldız, Zeki; Isikhuemnen, Omoanghe S

    2015-01-01

    Although lovastatin production has been reported for various microorganism species, there is limited information about lovastatin production by basidiomycetes. The optimization of culture parameters that enhance lovastatin production by Omphalotus olearius OBCC 2002 was investigated using statistically based experimental designs under solid state fermentation. A Plackett-Burman design was used in the first step to test the relative importance of the variables affecting lovastatin production; the amount and particle size of barley were identified as the significant variables. In the second step, the interactive effects of these variables were studied with a full factorial design. A maximum lovastatin yield of 139.47 mg/g substrate was achieved by the fermentation of 5 g of barley, 1-2 mm particle diameter, at 28°C. This study showed that O. olearius OBCC 2002 has a high capacity for lovastatin production, which can be enhanced by using solid state fermentation with novel and cost-effective substrates such as barley. Copyright © 2013 Revista Iberoamericana de Micología. Published by Elsevier España. All rights reserved.

  10. Size, Loading Efficiency, and Cytotoxicity of Albumin-Loaded Chitosan Nanoparticles: An Artificial Neural Networks Study.

    PubMed

    Baharifar, Hadi; Amani, Amir

    2017-01-01

    When designing nanoparticles for drug delivery, many variables such as size, loading efficiency, and cytotoxicity should be considered. Smaller particles are usually preferred in drug delivery because of their longer blood circulation time and their ability to escape the immune system, although smaller nanoparticles often show increased toxicity. Determining the parameters which affect particle size, loading efficiency, and cytotoxicity can therefore be very helpful in designing drug delivery systems. In this work, albumin-loaded chitosan nanoparticles (with albumin as a protein drug model) were prepared by the polyelectrolyte complexation method. The effects of four independent variables, namely chitosan and albumin concentrations, pH, and reaction time, on three dependent variables (size, loading efficiency, and cytotoxicity) were determined by artificial neural networks. Results showed that the concentrations of the initial materials are the most important factors affecting the dependent variables. Lowering the concentrations directly decreases particle size, but simultaneously decreases loading efficiency and increases cytotoxicity. Therefore, an optimization of the independent variables is required to obtain the most useful preparation. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
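The paper's network architecture is not given in the abstract, so the sketch below trains a minimal one-hidden-layer network in plain NumPy on synthetic formulation data; the generating rule, layer size, and learning rate are all invented for illustration of the input-to-output mapping idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 4 scaled formulation variables -> particle size (nm).
# The generating rule is invented purely for illustration.
X = rng.random((200, 4))            # chitosan conc., albumin conc., pH, time
y = (150 + 300 * X[:, 0] + 200 * X[:, 1] - 40 * X[:, 2]
     + 20 * np.sin(6 * X[:, 3]) + rng.normal(0, 5, 200))[:, None]

ym, ys = y.mean(), y.std()
t = (y - ym) / ys                   # standardise the target

# One hidden layer of 8 tanh units, full-batch gradient descent on 1/2 MSE.
W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)
    err = h @ W2 + b2 - t
    dh = (err @ W2.T) * (1 - h ** 2)            # backprop through tanh
    W2 -= lr * (h.T @ err) / len(X); b2 -= lr * err.mean(0)
    W1 -= lr * (X.T @ dh) / len(X); b1 -= lr * dh.mean(0)

pred = (np.tanh(X @ W1 + b1) @ W2 + b2) * ys + ym  # back to nanometres
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
print(rmse)  # training RMSE, in the units of the target
```

In practice one output head per dependent variable (size, loading efficiency, cytotoxicity) turns the same scheme into the multi-response model the study describes.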

  11. Geographical classification of apple based on hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Guo, Zhiming; Huang, Wenqian; Chen, Liping; Zhao, Chunjiang; Peng, Yankun

    2013-05-01

    The geographical origin of an apple is often recognized and appreciated by consumers, and it is usually an important factor in determining the price of a commercial product. In this work, hyperspectral imaging technology and supervised pattern recognition were used to discriminate apples according to geographical origin. Hyperspectral images of 207 Fuji apple samples were collected with a hyperspectral camera (400-1000 nm). Principal component analysis (PCA) was performed on the hyperspectral imaging data to determine the most informative wavelength images, and characteristic variables were then extracted from the dominant waveband images by texture analysis based on the gray-level co-occurrence matrix (GLCM). All characteristic variables were obtained by fusing the image data at the selected wavelengths. A support vector machine (SVM) was used to construct the classification model and showed excellent performance, with classification accuracies of 92.75% on the training set and 89.86% on the prediction set. The overall results demonstrate that hyperspectral imaging coupled with an SVM classifier can efficiently discriminate Fuji apples according to geographical origin.
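The GLCM texture step can be illustrated from scratch. The sketch below builds a co-occurrence matrix for a single offset and derives three common Haralick-style features; production code would typically use a library such as scikit-image, and the test image here is synthetic.

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Gray-level co-occurrence matrix plus three Haralick-style features."""
    # Quantise intensities into `levels` bins.
    q = np.floor(img.astype(float) / (img.max() + 1) * levels).astype(int)
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(max(0, -dy), h - max(0, dy)):
        for x in range(max(0, -dx), w - max(0, dx)):
            glcm[q[y, x], q[y + dy, x + dx]] += 1   # count the pixel pair
    p = glcm / glcm.sum()                           # normalise to probabilities
    i, j = np.indices(p.shape)
    return {'contrast': float(((i - j) ** 2 * p).sum()),
            'energy': float((p ** 2).sum()),
            'homogeneity': float((p / (1 + np.abs(i - j))).sum())}

img = np.tile(np.arange(16) % 2, (16, 1)) * 255   # synthetic vertical stripes
print(glcm_features(img))  # stripes give maximal contrast at this offset
```

Feature vectors like these, computed per wavelength image and concatenated, are the kind of fused characteristic variables that feed the SVM classifier.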

  12. Coapplication of Chicken Litter Biochar and Urea Only to Improve Nutrients Use Efficiency and Yield of Oryza sativa L. Cultivation on a Tropical Acid Soil

    PubMed Central

    Maru, Ali; Haruna, Osumanu Ahmed; Charles Primus, Walter

    2015-01-01

    The excessive use of nitrogen (N) fertilizers to sustain high rice yields, necessitated by N dynamics in tropical acid soils, is not only economically unsustainable but also causes environmental pollution. The objective of this study was to coapply biochar and urea to improve soil chemical properties and rice productivity. Biochar (5 t ha−1) and different rates of urea (100%, 75%, 50%, 25%, and 0% of the recommended N application) were evaluated in both pot and field trials. Selected soil chemical properties, rice plant growth variables, nutrient use efficiency, and yield were determined using standard procedures. Coapplication of biochar with 100% and 75% of the recommended urea rate significantly increased nutrient availability (especially P and K) and nutrient use efficiency in both pot and field trials. These treatments also significantly increased rice growth variables and grain yield. Coapplication of biochar with urea at 75% of the recommended rate can thus be used to improve soil chemical properties and productivity while reducing urea use by 25%. PMID:26273698

  13. An Efficient Biometric-Based Algorithm Using Heart Rate Variability for Securing Body Sensor Networks

    PubMed Central

    Pirbhulal, Sandeep; Zhang, Heye; Mukhopadhyay, Subhas Chandra; Li, Chunyue; Wang, Yumei; Li, Guanglin; Wu, Wanqing; Zhang, Yuan-Ting

    2015-01-01

    A Body Sensor Network (BSN) is a network of several associated sensor nodes on, inside, or around the human body that monitor vital signals such as the Electroencephalogram (EEG), Photoplethysmography (PPG), and Electrocardiogram (ECG). Each sensor node in a BSN delivers important information; it is therefore critical to provide data confidentiality and security. All existing approaches to securing BSNs are based on complex cryptographic key generation procedures, which not only demand high resource utilization and computation time but also consume large amounts of energy, power, and memory during data transmission. An energy-efficient and computationally less complex authentication technique for BSNs is therefore indispensable. In this paper, a novel biometric-based algorithm is proposed that utilizes Heart Rate Variability (HRV) in a simple key generation process to secure the BSN. The proposed algorithm is compared with three data authentication techniques, namely Physiological Signal based Key Agreement (PSKA), the Data Encryption Standard (DES), and Rivest-Shamir-Adleman (RSA). Simulations performed in Matlab suggest that the proposed algorithm is efficient in terms of transmission time, average remaining energy, and total power consumption. PMID:26131666

  14. An Efficient Biometric-Based Algorithm Using Heart Rate Variability for Securing Body Sensor Networks.

    PubMed

    Pirbhulal, Sandeep; Zhang, Heye; Mukhopadhyay, Subhas Chandra; Li, Chunyue; Wang, Yumei; Li, Guanglin; Wu, Wanqing; Zhang, Yuan-Ting

    2015-06-26

    A Body Sensor Network (BSN) is a network of several associated sensor nodes on, inside, or around the human body that monitor vital signals such as the Electroencephalogram (EEG), Photoplethysmography (PPG), and Electrocardiogram (ECG). Each sensor node in a BSN delivers important information; it is therefore critical to provide data confidentiality and security. All existing approaches to securing BSNs are based on complex cryptographic key generation procedures, which not only demand high resource utilization and computation time but also consume large amounts of energy, power, and memory during data transmission. An energy-efficient and computationally less complex authentication technique for BSNs is therefore indispensable. In this paper, a novel biometric-based algorithm is proposed that utilizes Heart Rate Variability (HRV) in a simple key generation process to secure the BSN. The proposed algorithm is compared with three data authentication techniques, namely Physiological Signal based Key Agreement (PSKA), the Data Encryption Standard (DES), and Rivest-Shamir-Adleman (RSA). Simulations performed in Matlab suggest that the proposed algorithm is efficient in terms of transmission time, average remaining energy, and total power consumption.
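
    The key idea, deriving a shared key from HRV that two sensors on the same body can reproduce independently, can be sketched as follows. The quantisation scheme and the sample inter-beat intervals are illustrative assumptions, not the paper's algorithm.

```python
import hashlib

def hrv_key(ibis_ms, step=8):
    # Coarse quantisation absorbs small per-sensor measurement differences,
    # so two sensors observing the same heart derive the same key bytes.
    quantised = bytes((ibi // step) % 256 for ibi in ibis_ms)
    return hashlib.sha256(quantised).hexdigest()

sensor_a = [812, 798, 805, 821, 790, 808]  # hypothetical inter-beat intervals (ms)
sensor_b = [813, 799, 804, 822, 791, 809]  # same heart, roughly 1 ms sensor noise
print(hrv_key(sensor_a) == hrv_key(sensor_b))   # prints True for these values
```

    The security of any real scheme rests on HRV being hard to predict remotely; this toy only shows why a shared physiological signal removes the need to transmit key material.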

  15. A new variable-resolution associative memory for high energy physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Annovi, A.; Amerio, S.; Beretta, M.

    2011-07-01

    We describe an important advancement for the Associative Memory device (AM). The AM is a VLSI processor for pattern recognition based on a Content Addressable Memory (CAM) architecture. The AM is optimized for on-line track finding in high-energy physics experiments. Pattern matching is carried out by finding track candidates in coarse resolution 'roads'. A large AM bank stores all trajectories of interest, called 'patterns', for a given detector resolution. The AM extracts roads compatible with a given event during detector read-out. Two important variables characterize the quality of the AM bank: its 'coverage' and the level of fake roads. The coverage, which describes the geometric efficiency of a bank, is defined as the fraction of tracks that match at least one pattern in the bank. Given a certain road size, the coverage of the bank can be increased simply by adding patterns, while the number of fakes, unfortunately, is roughly proportional to the number of patterns in the bank. Moreover, as the luminosity increases, the fake rate increases rapidly because of the increased silicon occupancy. To counter that, we must reduce the width of our roads. If we decrease the road width using the current technology, the system will become very large and extremely expensive. We propose an elegant solution to this problem: 'variable resolution patterns'. Each pattern and each detector layer within a pattern will be able to use the optimal width, but we will use a 'don't care' feature (inspired by ternary CAMs) to increase the width when that is more appropriate. In other words, we can use patterns of variable shape. As a result we reduce the number of fake roads, while keeping the efficiency high and avoiding excessive bank size due to the reduced width. We describe the idea, the implementation in the new AM design, and the implementation of the algorithm in the simulation. Finally, we show the effectiveness of the 'variable resolution patterns' idea using simulated high-occupancy events in the ATLAS detector.
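
    The 'don't care' matching at the heart of the variable-resolution patterns can be illustrated with a toy ternary-CAM-style sketch; the bit widths, patterns, and hit values below are invented for illustration.

```python
def matches(hit, pattern):
    """Ternary match: mask bits set to 0 are 'don't care'."""
    value, mask = pattern
    return (hit & mask) == (value & mask)

# Pattern A: full resolution (all 4 bits significant).
pattern_a = (0b1010, 0b1111)
# Pattern B: low 2 bits don't-care, i.e. a 4x wider road on that layer.
pattern_b = (0b1100, 0b1100)

print(matches(0b1010, pattern_a))   # exact hit matches
print(matches(0b1011, pattern_a))   # off by one significant bit: no match
print(matches(0b1101, pattern_b))   # low bits ignored: match
print(matches(0b1110, pattern_b))   # anywhere in the wider road: match
```

    Varying the mask per pattern and per layer is exactly what lets each road take the width it needs, trading fake rate against bank size.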

  16. A logical approach to optimize the nanostructured lipid carrier system of irinotecan: efficient hybrid design methodology

    NASA Astrophysics Data System (ADS)

    Mohan Negi, Lalit; Jaggi, Manu; Talegaonkar, Sushama

    2013-01-01

    Development of an effective formulation involves careful optimization of a number of excipient and process variables. Sometimes the number of variables is so large that even the most efficient optimization designs require a very large number of trials, which puts stress on both costs and time. A creative combination of design methods leads to a smaller number of trials. This study was aimed at the development of nanostructured lipid carriers (NLCs) by using a combination of different optimization methods. A total of 11 variables were first screened using the Plackett-Burman design for their effects on formulation characteristics like size and entrapment efficiency. Four of the 11 variables were found to have insignificant effects on the formulation parameters and hence were screened out. Of the remaining seven variables, four (concentration of tween-80, lecithin, sodium taurocholate, and total lipid) were found to have significant effects on the size of the particles, while the other three (phase ratio, drug to lipid ratio, and sonication time) had a higher influence on the entrapment efficiency. The first four variables were optimized for their effect on size using the Taguchi L9 orthogonal array. The optimized values of the surfactants and lipids were kept constant for the next stage, where the sonication time, phase ratio, and drug:lipid ratio were varied using the Box-Behnken design response surface method to optimize the entrapment efficiency. Finally, by performing only 38 trials, we optimized 11 variables for the development of NLCs with a size of 143.52 ± 1.2 nm, zeta potential of -32.6 ± 0.54 mV, and 98.22 ± 2.06% entrapment efficiency.
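
    The screening logic underlying a two-level design such as Plackett-Burman can be shown in miniature: each factor is coded -1/+1, and its main effect is the difference of mean responses between the two levels. The sketch below uses a small full-factorial design and synthetic responses, not the study's data.

```python
from itertools import product

import numpy as np

design = np.array(list(product([-1, 1], repeat=3)))   # 8 orthogonal runs
rng = np.random.default_rng(1)
true_effects = np.array([4.0, 0.0, -2.5])             # factor 2 is inert
response = design @ true_effects + rng.normal(scale=0.1, size=len(design))

# Main effect of factor j: mean response at +1 minus mean response at -1
# (twice the coded coefficient, since the design is orthogonal and balanced).
effects = np.array([
    response[design[:, j] == 1].mean() - response[design[:, j] == -1].mean()
    for j in range(3)
])
print("estimated main effects:", np.round(effects, 1))
```

    Factors with effects near zero (here, factor 2) are the ones a screening stage would drop before the later Taguchi and Box-Behnken stages.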

  17. Reaction wheels for kinetic energy storage

    NASA Astrophysics Data System (ADS)

    Studer, P. A.

    1984-11-01

    In contrast to all existing reaction wheel implementations, an order of magnitude increase in speed can be obtained efficiently if power to the actuators can be recovered. This allows a combined attitude control-energy storage system to be developed with structure mounted reaction wheels. The feasibility of combining reaction wheels with energy storage wheels is demonstrated. The power required for control torques is a function of wheel speed but this energy is not dissipated; it is stored in the wheel. The I(2)R loss resulting from a given torque is shown to be constant, independent of the design speed of the motor. What remains, in order to efficiently use high speed wheels (essential for energy storage) for control purposes, is to reduce rotational losses to acceptable levels. Progress was made in permanent magnet motor design for high speed operation. Variable field motors offer more control flexibility and efficiency over a broader speed range.

  18. Reaction wheels for kinetic energy storage

    NASA Technical Reports Server (NTRS)

    Studer, P. A.

    1984-01-01

    In contrast to all existing reaction wheel implementations, an order of magnitude increase in speed can be obtained efficiently if power to the actuators can be recovered. This allows a combined attitude control-energy storage system to be developed with structure mounted reaction wheels. The feasibility of combining reaction wheels with energy storage wheels is demonstrated. The power required for control torques is a function of wheel speed but this energy is not dissipated; it is stored in the wheel. The I(2)R loss resulting from a given torque is shown to be constant, independent of the design speed of the motor. What remains, in order to efficiently use high speed wheels (essential for energy storage) for control purposes, is to reduce rotational losses to acceptable levels. Progress was made in permanent magnet motor design for high speed operation. Variable field motors offer more control flexibility and efficiency over a broader speed range.

  19. Impact of the operation of cascade reservoirs in upper Yangtze River on hydrological variability of the mainstream

    NASA Astrophysics Data System (ADS)

    Changjiang, Xu; Dongdong, Zhang

    2018-06-01

    As the impacts of climate change and human activities intensify, variability may occur in a river's annual runoff as well as in its flood and low-water characteristics. To understand the characteristics of variability in a hydrological series, the variability must first be diagnosed and identified, i.e., whether variability occurred and where it began. In this paper, the mainstream of the Yangtze River was taken as the object of study. A model was established to simulate the impounding and operation of upstream cascade reservoirs so as to obtain the runoff of downstream hydrological control stations after regulation by upstream reservoirs in different level years. The Range of Variability Approach was utilized to analyze the impact of the operation of upstream reservoirs on the variability downstream. The results indicated that the overall hydrologic alterations of the Yichang hydrological station in the 2010 level year, the 2015 level year, and the forward level year were 68.4, 72.5 and 74.3% respectively, corresponding to high alteration in all three level years. The runoff series of mainstream hydrological stations presented variability in different degrees: the runoff series of the four hydrological stations including Xiangjiaba, Gaochang and Wulong belonged to high alteration in the three level years, while the runoff series of the Beibei hydrological station belonged to medium alteration in the 2010 level year and high alteration in the 2015 level year and the forward level year. The study of the impact of the operation of cascade reservoirs in the Upper Yangtze River on the hydrological variability of the mainstream has important practical significance for the sustainable utilization of water resources, disaster prevention and mitigation, the safe and efficient operation and management of water conservancy projects, and the stable development of the economy and society.
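
    The core computation of the Range of Variability Approach, comparing how often post-impact flows fall within the pre-impact 25th-75th percentile target range against the 50% expected, can be sketched with synthetic flow series; all numbers are illustrative, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)
pre = rng.normal(100, 20, size=50)    # synthetic pre-impoundment annual flows
post = rng.normal(70, 10, size=50)    # synthetic regulated flows, shifted lower

# RVA target range: the pre-impact 25th-75th percentile band.
lo, hi = np.percentile(pre, [25, 75])
expected = 0.5 * len(post)            # 50% of years expected inside the band
observed = np.sum((post >= lo) & (post <= hi))
alteration = abs(observed - expected) / expected
print(f"degree of hydrologic alteration: {alteration:.0%}")
```

    Because the regulated series sits mostly below the pre-impact band, the computed alteration is high, the same qualitative signal the study reports for the mainstream stations.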

  20. A multi-stage drop-the-losers design for multi-arm clinical trials.

    PubMed

    Wason, James; Stallard, Nigel; Bowden, Jack; Jennison, Christopher

    2017-02-01

    Multi-arm multi-stage trials can improve the efficiency of the drug development process when multiple new treatments are available for testing. A group-sequential approach can be used in order to design multi-arm multi-stage trials, using an extension to Dunnett's multiple-testing procedure. The actual sample size used in such a trial is a random variable that has high variability. This can cause problems when applying for funding as the cost will also be generally highly variable. This motivates a type of design that provides the efficiency advantages of a group-sequential multi-arm multi-stage design, but has a fixed sample size. One such design is the two-stage drop-the-losers design, in which a number of experimental treatments, and a control treatment, are assessed at a prescheduled interim analysis. The best-performing experimental treatment and the control treatment then continue to a second stage. In this paper, we discuss extending this design to have more than two stages, which is shown to considerably reduce the sample size required. We also compare the resulting sample size requirements to the sample size distribution of analogous group-sequential multi-arm multi-stage designs. The sample size required for a multi-stage drop-the-losers design is usually higher than, but close to, the median sample size of a group-sequential multi-arm multi-stage trial. In many practical scenarios, the disadvantage of a slight loss in average efficiency would be overcome by the huge advantage of a fixed sample size. We assess the impact of delay between recruitment and assessment as well as unknown variance on the drop-the-losers designs.
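
    The fixed-sample-size property of a drop-the-losers design can be seen in a toy two-stage simulation; the arm effects, stage sizes, and outcome model below are illustrative assumptions, not the paper's design.

```python
import random

random.seed(42)
arm_means = {"A": 0.1, "B": 0.4, "C": 0.2}   # hypothetical treatment effects
n_stage1, n_stage2 = 30, 60

def sample_mean(mu, n):
    return sum(random.gauss(mu, 1.0) for _ in range(n)) / n

# Stage 1: all experimental arms (plus, in a real trial, the control) are run.
stage1 = {arm: sample_mean(mu, n_stage1) for arm, mu in arm_means.items()}
best = max(stage1, key=stage1.get)           # only the winner continues

# Stage 2: winner and control only, so total sample size is fixed in advance,
# unlike a group-sequential design whose stopping time is random.
total_n = n_stage1 * (len(arm_means) + 1) + n_stage2 * 2
print(f"selected arm: {best}; fixed total sample size: {total_n}")
```

    Whatever the interim data show, `total_n` never changes, which is exactly the budgeting advantage the abstract describes.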

  1. Highly efficient CRISPR/HDR-mediated knock-in for mouse embryonic stem cells and zygotes.

    PubMed

    Wang, Bangmei; Li, Kunyu; Wang, Amy; Reiser, Michelle; Saunders, Thom; Lockey, Richard F; Wang, Jia-Wang

    2015-10-01

    The clustered regularly interspaced short palindromic repeat (CRISPR) gene editing technique, based on the non-homologous end-joining (NHEJ) repair pathway, has been used to generate gene knock-outs with variable sizes of small insertion/deletions with high efficiency. More precise genome editing, either the insertion or deletion of a desired fragment, can be done by combining the homology-directed-repair (HDR) pathway with CRISPR cleavage. However, HDR-mediated gene knock-in experiments are typically inefficient, and there have been no reports of successful gene knock-in with DNA fragments larger than 4 kb. Here, we describe the targeted insertion of large DNA fragments (7.4 and 5.8 kb) into the genomes of mouse embryonic stem (ES) cells and zygotes, respectively, using the CRISPR/HDR technique without NHEJ inhibitors. Our data show that CRISPR/HDR without NHEJ inhibitors can result in highly efficient gene knock-in, equivalent to CRISPR/HDR with NHEJ inhibitors. Although NHEJ is the dominant repair pathway associated with CRISPR-mediated double-strand breaks (DSBs), and biallelic gene knock-ins are common, NHEJ and biallelic gene knock-ins were not detected. Our results demonstrate that efficient targeted insertion of large DNA fragments without NHEJ inhibitors is possible, a result that should stimulate interest in understanding the mechanisms of high efficiency CRISPR targeting in general.

  2. In-tube electro-membrane extraction with a sub-microliter organic solvent consumption as an efficient technique for synthetic food dyes determination in foodstuff samples.

    PubMed

    Bazregar, Mohammad; Rajabi, Maryam; Yamini, Yadollah; Asghari, Alireza; Abdossalami asl, Yousef

    2015-09-04

    A simple and efficient extraction technique with sub-microliter organic solvent consumption, termed in-tube electro-membrane extraction (IEME), is introduced. The method is based upon the electro-kinetic migration of ionized compounds under an applied electrical potential difference. A thin polypropylene (PP) sheet placed inside a tube acts as a support for the membrane solvent, and 30 μL of an aqueous acceptor solution is separated by this solvent from 1.2 mL of an aqueous donor solution. The method yielded high extraction recoveries (63-81%) while consuming only 0.5 μL of organic solvent. The purification achieved is high, and handling of the organic solvent, used as a mediator, is simple and repeatable. The proposed method was evaluated by extraction of four synthetic food dyes (Amaranth, Ponceau 4R, Allura Red, and Carmoisine) as model analytes. The variables affecting the method were optimized to achieve the best extraction efficiency: the type of membrane solvent, applied extraction voltage, extraction time, pH range, and concentration of added salt. Under the optimized conditions, IEME-HPLC-UV provided good linearity in the range of 1.00-800 ng mL(-1), low limits of detection (0.3-1 ng mL(-1)), and good extraction repeatability (RSDs below 5.2%, n=5). The design appears suitable for automation of the method. Its sub-microliter organic solvent consumption, simplicity, high efficiency, and high purification bring it closer to the objectives of green chemistry. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Technical efficiency of teaching hospitals in Iran: the use of Stochastic Frontier Analysis, 1999–2011

    PubMed Central

    Goudarzi, Reza; Pourreza, Abolghasem; Shokoohi, Mostafa; Askari, Roohollah; Mahdavi, Mahdi; Moghri, Javad

    2014-01-01

    Background: Hospitals are highly resource-dependent settings, which spend a large proportion of healthcare financial resources. The analysis of hospital efficiency can provide insight into how scarce resources are used to create health values. This study examines the Technical Efficiency (TE) of 12 teaching hospitals affiliated with Tehran University of Medical Sciences (TUMS) between 1999 and 2011. Methods: The Stochastic Frontier Analysis (SFA) method was applied to estimate the efficiency of TUMS hospitals. A best-fit function relating the input and output parameters was estimated for the hospitals. The numbers of medical doctors, nurses, and other personnel, active beds, and outpatient admissions were considered as input variables and the number of inpatient admissions as the output variable. Results: The mean level of TE was 59% (ranging from 22 to 81%). During the study period the efficiency increased from 61 to 71%. Outpatient admissions, other personnel, and medical doctors significantly and positively affected production (P < 0.05). Concerning the Constant Return to Scale (CRS), an optimal production scale was found, implying that the production of the hospitals was approximately constant. Conclusion: The findings of this study show a remarkable waste of resources in the TUMS hospitals during the decade considered. This calls for policy-makers and top management at TUMS to take steps to improve the financial management of the university hospitals. PMID:25114947

  4. Estimation of the lower and upper bounds on the probability of failure using subset simulation and random set theory

    NASA Astrophysics Data System (ADS)

    Alvarez, Diego A.; Uribe, Felipe; Hurtado, Jorge E.

    2018-02-01

    Random set theory is a general framework which comprises uncertainty in the form of probability boxes, possibility distributions, cumulative distribution functions, Dempster-Shafer structures or intervals; in addition, the dependence between the input variables can be expressed using copulas. In this paper, the lower and upper bounds on the probability of failure are calculated by means of random set theory. In order to accelerate the calculation, a well-known and efficient probability-based reliability method known as subset simulation is employed. This method is especially useful for finding small failure probabilities in both low- and high-dimensional spaces, disjoint failure domains and nonlinear limit state functions. The proposed methodology represents a drastic reduction of the computational labor implied by plain Monte Carlo simulation for problems defined with a mixture of representations for the input variables, while delivering similar results. Numerical examples illustrate the efficiency of the proposed approach.
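
    For intuition, the lower and upper failure-probability bounds induced by an interval-valued input (a degenerate random set) can be estimated with plain Monte Carlo, which subset simulation then accelerates; the limit state and all numbers below are a toy example, not the paper's benchmarks.

```python
import random

random.seed(7)
n = 20_000
threshold = 3.0
a_interval = (0.5, 1.5)              # interval-valued (imprecise) input

fail_all = fail_some = 0
for _ in range(n):
    x = random.gauss(0, 1)           # precise probabilistic input
    # Limit state g(x, a) = threshold - a*x; failure when g < 0. Because g
    # is linear in a, checking the interval endpoints is sufficient.
    g = [threshold - a * x for a in a_interval]
    fail_all += max(g) < 0           # fails for every a in the interval: lower bound
    fail_some += min(g) < 0          # fails for some a in the interval: upper bound

print(f"failure probability bounds: [{fail_all / n:.4f}, {fail_some / n:.4f}]")
```

    The gap between the two estimates reflects the epistemic uncertainty carried by the interval; a plain Monte Carlo loop like this becomes prohibitively expensive for small failure probabilities, which is where subset simulation pays off.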

  5. Design studies of continuously variable transmissions for electric vehicles

    NASA Technical Reports Server (NTRS)

    Parker, R. J.; Loewenthal, S. H.; Fischer, G. K.

    1981-01-01

    Preliminary design studies were performed on four continuously variable transmission (CVT) concepts for use with a flywheel equipped electric vehicle of 1700 kg gross weight. Requirements of the CVT's were a maximum torque of 450 N-m (330 lb-ft), a maximum output power of 75 kW (100 hp), and a flywheel speed range of 28,000 to 14,000 rpm. Efficiency, size, weight, cost, reliability, maintainability, and controls were evaluated for each of the four concepts, which included a steel V-belt type, a flat rubber belt type, a toroidal traction type, and a cone roller traction type. All CVT's exhibited relatively high calculated efficiencies (68 percent to 97 percent) over a broad range of vehicle operating conditions. The estimated weights and sizes of these transmissions were comparable to or less than those of equivalent automatic transmissions. The design of each concept was carried through the design layout stage.

  6. Gas exchanges and water use efficiency in the selection of tomato genotypes tolerant to water stress.

    PubMed

    Borba, M E A; Maciel, G M; Fraga Júnior, E F; Machado Júnior, C S; Marquez, G R; Silva, I G; Almeida, R S

    2017-06-20

    Water stress can affect yield in tomato crops, yet few studies have aimed to select tomato genotypes resistant to water stress using physiological parameters. This experiment studied variables related to gas exchange and water use efficiency in the selection of tomato genotypes tolerant to water stress. It was conducted in a greenhouse measuring 7 x 21 m, in a randomized complete block design with four replications (blocks): five genotypes in the F2BC1 generation, previously obtained from an interspecific cross between Solanum pennellii and S. lycopersicum, and three check treatments, two susceptible [UFU-22 (pre-commercial line) and cultivar Santa Clara] and one resistant (S. pennellii). At the beginning of flowering, the plants were submitted to water stress through irrigation suspension. CO2 assimilation, internal CO2, stomatal conductance, transpiration, leaf temperature, instantaneous water use efficiency, intrinsic water use efficiency, instantaneous carboxylation efficiency, chlorophyll a and b, and leaf water potential (Ψf) were then measured. Almost all variables analyzed, except CO2 assimilation and instantaneous carboxylation efficiency, demonstrated the superiority of the wild accession, S. pennellii, over the susceptible check treatments. The high photosynthetic rate and the low stomatal conductance and transpiration presented by the UFU22/F2BC1#2 population allowed better water use efficiency. These physiological characteristics are therefore promising in the selection of tomato genotypes tolerant to water stress.

  7. Narrowing the agronomic yield gap with improved nitrogen use efficiency: a modeling approach.

    PubMed

    Ahrens, T D; Lobell, D B; Ortiz-Monasterio, J I; Li, Y; Matson, P A

    2010-01-01

    Improving nitrogen use efficiency (NUE) in the major cereals is critical for more sustainable nitrogen use in high-input agriculture, but our understanding of the potential for NUE improvement is limited by a paucity of reliable on-farm measurements. Limited on-farm data suggest that agronomic NUE (AE(N)) is lower and more variable than data from trials conducted at research stations, on which much of our understanding of AE(N) has been built. The purpose of this study was to determine the magnitude and causes of variability in AE(N) across an agricultural region, which we refer to as the achievement distribution of AE(N). The distribution of simulated AE(N) in 80 farmers' fields in an irrigated wheat system in the Yaqui Valley, Mexico, was compared with trials at a local research center (International Wheat and Maize Improvement Center; CIMMYT). An agroecosystem simulation model WNMM was used to understand factors controlling yield, AE(N), gaseous N emissions, and nitrate leaching in the region. Simulated AE(N) in the Yaqui Valley was highly variable, and mean on-farm AE(N) was 44% lower than trials with similar fertilization rates at CIMMYT. Variability in residual N supply was the most important factor determining simulated AE(N). Better split applications of N fertilizer led to almost a doubling of AE(N), increased profit, and reduced N pollution, and even larger improvements were possible with technologies that allow for direct measurement of soil N supply and plant N demand, such as site-specific nitrogen management.
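
    Agronomic nitrogen use efficiency is conventionally computed as the extra grain yield per unit of fertilizer N applied; a minimal sketch with hypothetical numbers (not the study's measurements):

```python
def agronomic_nue(yield_fert, yield_unfert, n_applied):
    """Extra grain yield (kg/ha) per kg of fertilizer N applied (kg/ha)."""
    return (yield_fert - yield_unfert) / n_applied

# Hypothetical field: 6.0 vs 4.2 t/ha grain with 150 kg N/ha applied.
ae_n = agronomic_nue(6000, 4200, 150)
print(f"AE_N = {ae_n:.1f} kg grain per kg N")   # 1800 / 150 = 12.0
```

    Variability in the unfertilized baseline (residual soil N supply) feeds directly into this ratio, which is why the study finds residual N supply to be the dominant driver of on-farm AE(N).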

  8. Influence of some formulation variables on the optimization of pH-dependent, colon-targeted, sustained-release mesalamine microspheres.

    PubMed

    El-Bary, Ahmed Abd; Aboelwafa, Ahmed A; Al Sharabi, Ibrahim M

    2012-03-01

    The aim of this work was to understand the influence of different formulation variables on the optimization of pH-dependent, colon-targeted, sustained-release mesalamine microspheres prepared by an O/O emulsion solvent evaporation method, employing the pH-dependent polymer Eudragit S and the hydrophobic, pH-independent polymer ethylcellulose. The formulation variables studied included the concentration of Eudragit S in the internal phase and the ratios of internal to external phase, of mesalamine (drug) to Eudragit S, and of Eudragit S to ethylcellulose. Prepared microspheres were evaluated by in vitro release studies and by determination of particle size, production yield, and encapsulation efficiency. In addition, the morphology of the microspheres was examined using optical and scanning electron microscopy. The emulsion solvent evaporation method was found to be sensitive to the studied formulation variables. Particle size and encapsulation efficiency increased with increasing Eudragit S concentration in the internal phase, ratio of internal to external phase, and ratio of Eudragit S to drug. Employing Eudragit S alone succeeded only in forming acid-resistant microspheres with a pulsatile release pattern at high pH. Eudragit S and ethylcellulose blend microspheres were able to control release under acidic conditions and to extend drug release at high pH. Stability studies carried out at 40°C/75% RH for 6 months proved the stability of the optimized formulation. From these results, microencapsulation of mesalamine in microspheres using a blend of Eudragit S and ethylcellulose could constitute a promising approach for site-specific, controlled delivery of the drug in the colon.

  9. [Survival strategy of photosynthetic organisms. 1. Variability of the extent of light-harvesting pigment aggregation as a structural factor optimizing the function of oligomeric photosynthetic antenna. Model calculations].

    PubMed

    Fetisova, Z G

    2004-01-01

    In accordance with our concept of rigorous optimization of the photosynthetic machinery by a functional criterion, this series of papers continues a purposeful search in natural photosynthetic units (PSUs) for the basic organizational principles that we predicted theoretically for optimal model light-harvesting systems. This approach allowed us to determine the basic principles for the organization of a PSU of any fixed size. The present paper deals with the structural optimization of a light-harvesting antenna of variable size, controlled in vivo by the light intensity during the growth of the organism; the optimization requirements become more stringent as the PSU increases in size. In this work, using mathematical modeling of the functioning of natural PSUs, we show that the aggregation of pigments in a model light-harvesting antenna, besides being a universal optimizing factor, also allows the antenna efficiency to be controlled if the extent of pigment aggregation is a variable parameter. In this case, the efficiency of the antenna increases with the size of the elementary antenna aggregate, ensuring high PSU efficiency irrespective of PSU size; i.e., variation in the extent of pigment aggregation, controlled by the size of the light-harvesting antenna, is biologically expedient.

  10. Optimization of controlled release nanoparticle formulation of verapamil hydrochloride using artificial neural networks with genetic algorithm and response surface methodology.

    PubMed

    Li, Yongqiang; Abbaspour, Mohammadreza R; Grootendorst, Paul V; Rauth, Andrew M; Wu, Xiao Yu

    2015-08-01

    This study was performed to optimize the formulation of polymer-lipid hybrid nanoparticles (PLN) for the delivery of an ionic water-soluble drug, verapamil hydrochloride (VRP), and to investigate the roles of formulation factors. Modeling and optimization were conducted based on a spherical central composite design. Three formulation factors, i.e., weight ratio of drug to lipid (X1), and concentrations of Tween 80 (X2) and Pluronic F68 (X3), were chosen as independent variables. Drug loading efficiency (Y1) and mean particle size (Y2) of PLN were selected as dependent variables. The predictive performance of artificial neural networks (ANN) and the response surface methodology (RSM) were compared. As ANN was found to exhibit better recognition and generalization capability over RSM, multi-objective optimization of PLN was then conducted based upon the validated ANN models and continuous genetic algorithms (GA). The optimal PLN possess a high drug loading efficiency (92.4%, w/w) and a small mean particle size (∼100 nm). The predicted response variables matched well with the observed results. The three formulation factors exhibited different effects on the properties of PLN. ANN in coordination with continuous GA represent an effective and efficient approach to optimize the PLN formulation of VRP with desired properties. Copyright © 2015 Elsevier B.V. All rights reserved.
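
    A continuous genetic algorithm of the kind described, with truncation selection, elitism, and Gaussian mutation over a surrogate objective, can be sketched as follows; the surrogate function here merely stands in for the trained ANN, and every parameter is an illustrative assumption.

```python
import random

random.seed(0)

def surrogate(x):                 # stand-in for the ANN's predicted quality
    return -(x[0] - 0.3) ** 2 - (x[1] - 0.7) ** 2

def evolve(pop_size=30, generations=60, sigma=0.05):
    pop = [[random.random(), random.random()] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=surrogate, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = [
            [min(1.0, max(0.0, g + random.gauss(0, sigma)))
             for g in random.choice(parents)]      # Gaussian mutation, clipped
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children                  # elitism keeps the best
    return max(pop, key=surrogate)

best = evolve()
print("optimum near (0.3, 0.7):", [round(g, 2) for g in best])
```

    In the study's setting the GA would search the real formulation variables (drug:lipid ratio, surfactant concentrations) against the trained ANN rather than this toy quadratic.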

  11. Computational Assessment of the Aerodynamic Performance of a Variable-Speed Power Turbine for Large Civil Tilt-Rotor Application

    NASA Technical Reports Server (NTRS)

    Welch, Gerard E.

    2011-01-01

    The main rotors of the NASA Large Civil Tilt-Rotor notional vehicle operate over a wide speed-range, from 100% at take-off to 54% at cruise. The variable-speed power turbine offers one approach by which to effect this speed variation. Key aero-challenges include high work factors at cruise and wide (40 to 60 deg.) incidence variations in blade and vane rows over the speed range. The turbine design approach must optimize cruise efficiency and minimize off-design penalties at take-off. The accuracy of the off-design incidence loss model is therefore critical to the turbine design. In this effort, 3-D computational analyses are used to assess the variation of turbine efficiency with speed change. The conceptual design of a 4-stage variable-speed power turbine for the Large Civil Tilt-Rotor application is first established at the meanline level. The design of 2-D airfoil sections and resulting 3-D blade and vane rows is documented. Three-dimensional Reynolds Averaged Navier-Stokes computations are used to assess the design and off-design performance of an embedded 1.5-stage portion (Rotor 1, Stator 2, and Rotor 2) of the turbine. The 3-D computational results yield the same efficiency versus speed trends predicted by meanline analyses, supporting the design choice to execute the turbine design at the cruise operating speed.

  12. Aircrew coordination and decisionmaking: Peer ratings of video tapes made during a full mission simulation

    NASA Technical Reports Server (NTRS)

    Murphy, M. R.; Awe, C. A.

    1986-01-01

    Six professionally active, retired captains rated the coordination and decisionmaking performances of sixteen aircrews while viewing videotapes of a simulated commercial air transport operation. The scenario featured a required diversion and a probable minimum fuel situation. Seven-point Likert-type scales were used in rating variables on the basis of a model of crew coordination and decisionmaking. The variables were based on concepts of, for example, decision difficulty, efficiency, and outcome quality, and on leader-subordinate concepts such as person- and task-oriented leader behavior and competency motivation of subordinate crewmembers. Five front-end variables of the model were in turn dependent variables for a hierarchical regression procedure. Decision efficiency, command reversal, and decision quality explained 46% of the variance in safety performance. Decision efficiency and the captain's quality of within-crew communications explained 60% of the variance in decision quality, an alternative substantive dependent variable to safety performance. The variances of decision efficiency, crew coordination, and command reversal were in turn explained 78%, 80%, and 60%, respectively, by small numbers of preceding independent variables. A principal-components, varimax-rotated factor analysis supported the model structure suggested by the regression analyses.

  13. High Efficiency, Low Distortion 3D Diffusion Tensor Imaging with Variable Density Spiral Fast Spin Echoes (3D DW VDS RARE)

    PubMed Central

    Frank, Lawrence R.; Jung, Youngkyoo; Inati, Souheil; Tyszka, J. Michael; Wong, Eric C.

    2009-01-01

    We present an acquisition and reconstruction method designed to acquire high-resolution 3D fast spin echo diffusion tensor images while mitigating the major sources of artifacts in DTI: field distortions, eddy currents, and motion. The resulting images, being 3D, are of high SNR, and, being fast spin echoes, exhibit greatly reduced field distortions. This sequence utilizes variable density spiral acquisition gradients, which allow for the implementation of a self-navigation scheme by which both eddy current and motion artifacts are removed. The result is that high-resolution 3D DTI images are produced without the need for eddy current compensating gradients or B0 field correction. In addition, a novel method for fast and accurate reconstruction of the non-Cartesian data is employed. Results are demonstrated in the brains of normal human volunteers. PMID:19778618

  14. Energy Efficient Engine: High-pressure compressor test hardware detailed design report

    NASA Technical Reports Server (NTRS)

    Howe, David C.; Marchant, R. D.

    1988-01-01

    The objective of the NASA Energy Efficient Engine program is to identify and verify the technology required to achieve significant reductions in fuel consumption and operating cost for future commercial gas turbine engines. The design and analysis is documented of the high pressure compressor which was tested as part of the Pratt and Whitney effort under the Energy Efficient Engine program. This compressor was designed to produce a 14:1 pressure ratio in ten stages with an adiabatic efficiency of 88.2 percent in the flight propulsion system. The corresponding expected efficiency for the compressor component test rig is 86.5 percent. Other performance goals are a surge margin of 20 percent, a corrected flow rate of 35.2 kg/sec (77.5 lb/sec), and a life of 20,000 missions and 30,000 hours. Low loss, highly loaded airfoils are used to increase efficiency while reducing the parts count. Active clearance control and case trenches in abradable strips over the blade tips are included in the compressor component design to further increase the efficiency potential. The test rig incorporates variable geometry stator vanes in all stages to permit maximum flexibility in developing stage-to-stage matching. This provision precluded active clearance control on the rear case of the test rig. Both the component and rig designs meet or exceed design requirements with the exception of life goals, which will be achievable with planned advances in materials technology.

  15. High-efficiency particulate air filter test stand and aerosol generator for particle loading studies

    NASA Astrophysics Data System (ADS)

    Arunkumar, R.; Hogancamp, Kristina U.; Parsons, Michael S.; Rogers, Donna M.; Norton, Olin P.; Nagel, Brian A.; Alderman, Steven L.; Waggoner, Charles A.

    2007-08-01

    This manuscript describes the design, characterization, and operational range of a test stand and high-output aerosol generator developed to evaluate the performance of 30 × 30 × 29 cm³ nuclear grade high-efficiency particulate air (HEPA) filters under variable, highly controlled conditions. The test stand system is operable at volumetric flow rates ranging from 1.5 to 12 standard m³/min. Relative humidity levels are controllable from 5%-90% and the temperature of the aerosol stream is variable from ambient to 150°C. Test aerosols are produced through spray drying source material solutions that are introduced into a heated stainless steel evaporation chamber through an air-atomizing nozzle. Regulation of the particle size distribution of the aerosol challenge is achieved by varying source solution concentrations and through the use of a postgeneration cyclone. The aerosol generation system is unique in that it facilitates the testing of standard HEPA filters at and beyond rated media velocities by consistently providing, into a nominal flow of 7 standard m³/min, high mass concentrations (~25 mg/m³) of dry aerosol streams having count mean diameters centered near the most penetrating particle size for HEPA filters (120-160 nm). Aerosol streams that have been generated and characterized include those derived from various concentrations of KCl, NaCl, and sucrose solutions. Additionally, a water insoluble aerosol stream in which the solid component is predominantly iron (III) has been produced. Multiple ports are available on the test stand for making simultaneous aerosol measurements upstream and downstream of the test filter. Types of filter performance related studies that can be performed using this test stand system include filter lifetime studies, filtering efficiency testing, media velocity testing, evaluations under high mass loading and high humidity conditions, and determination of the downstream particle size distributions.

  16. High-efficiency particulate air filter test stand and aerosol generator for particle loading studies.

    PubMed

    Arunkumar, R; Hogancamp, Kristina U; Parsons, Michael S; Rogers, Donna M; Norton, Olin P; Nagel, Brian A; Alderman, Steven L; Waggoner, Charles A

    2007-08-01

    This manuscript describes the design, characterization, and operational range of a test stand and high-output aerosol generator developed to evaluate the performance of 30 x 30 x 29 cm(3) nuclear grade high-efficiency particulate air (HEPA) filters under variable, highly controlled conditions. The test stand system is operable at volumetric flow rates ranging from 1.5 to 12 standard m(3)/min. Relative humidity levels are controllable from 5%-90% and the temperature of the aerosol stream is variable from ambient to 150 degrees C. Test aerosols are produced through spray drying source material solutions that are introduced into a heated stainless steel evaporation chamber through an air-atomizing nozzle. Regulation of the particle size distribution of the aerosol challenge is achieved by varying source solution concentrations and through the use of a postgeneration cyclone. The aerosol generation system is unique in that it facilitates the testing of standard HEPA filters at and beyond rated media velocities by consistently providing, into a nominal flow of 7 standard m(3)/min, high mass concentrations (approximately 25 mg/m(3)) of dry aerosol streams having count mean diameters centered near the most penetrating particle size for HEPA filters (120-160 nm). Aerosol streams that have been generated and characterized include those derived from various concentrations of KCl, NaCl, and sucrose solutions. Additionally, a water insoluble aerosol stream in which the solid component is predominantly iron (III) has been produced. Multiple ports are available on the test stand for making simultaneous aerosol measurements upstream and downstream of the test filter. Types of filter performance related studies that can be performed using this test stand system include filter lifetime studies, filtering efficiency testing, media velocity testing, evaluations under high mass loading and high humidity conditions, and determination of the downstream particle size distributions.

  17. A scalable diffraction-based scanning 3D colour video display as demonstrated by using tiled gratings and a vertical diffuser

    PubMed Central

    Jia, Jia; Chen, Jhensi; Yao, Jun; Chu, Daping

    2017-01-01

    A high quality 3D display requires a high amount of optical information throughput, which needs an appropriate mechanism to distribute information in space uniformly and efficiently. This study proposes a front-viewing system which is capable of managing the required amount of information efficiently from a high bandwidth source and projecting 3D images with a decent size and a large viewing angle at video rate in full colour. It employs variable gratings to support a high bandwidth distribution. This concept is scalable and the system can be made compact in size. A horizontal parallax only (HPO) proof-of-concept system is demonstrated by projecting holographic images from a digital micro mirror device (DMD) through rotational tiled gratings before they are realised on a vertical diffuser for front-viewing. PMID:28304371

  18. Continuous Flow in Labour-Intensive Manufacturing Process

    NASA Astrophysics Data System (ADS)

    Pacheco Eng., Jhonny; Carbajal MSc., Eduardo; Stoll-Ing., Cesar, Dr.

    2017-06-01

    Continuous-flow manufacturing represents the peak of standardized production and usually implies high output on a strict production line. Low-tech industries, however, are highly labour-intensive, and in this context the efficiency of the production line is tied to the job-shop organization. Labour-intensive manufacturing processes are a common characteristic of developing countries. This research proposes a methodology for production planning that fulfils a variable monthly production quota. The main idea is to use a clock as an orchestra director in order to synchronize the rate (takt time) of customer demand with the manufacturing time. In this way, the study is able to propose a stark reduction of work-in-process stock, over-processing, and unnecessary variability.
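    The synchronization idea above, pacing production to customer demand, can be illustrated with the standard takt-time arithmetic; all numbers below are hypothetical, not data from the study:

```python
import math

def takt_time(available_minutes, demand_units):
    """Takt time: available production time per unit of customer demand."""
    return available_minutes / demand_units

def stations_needed(work_content_minutes, takt):
    """Minimum number of workstations to keep the line paced at takt."""
    return math.ceil(work_content_minutes / takt)

# Hypothetical month: 22 working days x 480 min/day, quota of 5,280 units.
takt = takt_time(22 * 480, 5280)        # 2.0 minutes per unit
stations = stations_needed(30.0, takt)  # 30 min of work content -> 15 stations
```

    When the monthly quota changes, recomputing takt (and re-balancing stations against it) is exactly the clock-driven re-pacing the abstract describes.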

  19. A review of advanced turboprop transport aircraft

    NASA Astrophysics Data System (ADS)

    Lange, Roy H.

    The application of advanced technologies shows the potential for significant improvement in the fuel efficiency and operating costs of future transport aircraft envisioned for operation in the 1990s time period. One of the more promising advanced technologies is embodied in an advanced turboprop concept originated by Hamilton Standard and NASA and known as the propfan. The propfan concept features a highly loaded multibladed, variable pitch propeller geared to a high pressure ratio gas turbine engine. The blades have high sweepback and advanced airfoil sections to achieve 80 percent propulsive efficiency at M=0.80 cruise speed. Aircraft system studies have shown improvements in fuel efficiency of 15-20 percent for propfan advanced transport aircraft as compared to equivalent turbofan transports. Beginning with the Lockheed C-130 and Electra turboprop aircraft, this paper presents an overview of the evolution of propfan aircraft design concepts and system studies. These system studies include possible civil and military transport applications and data on the performance, community and far-field noise characteristics and operating costs of propfan aircraft design concepts. NASA Aircraft Energy Efficiency (ACEE) program propfan projects with industry are reviewed with respect to system studies of propfan aircraft and recommended flight development programs.

  20. Super Turbocharging the Direct Injection Diesel engine

    NASA Astrophysics Data System (ADS)

    Boretti, Albert

    2018-03-01

    The steady operation of a turbocharged diesel direct injection (TDI) engine featuring a variable-speed-ratio mechanism linking the turbocharger shaft to the crankshaft is modelled in the present study. Key parameters of the variable-speed-ratio mechanism are the range of speed ratios, efficiency, and inertia, in addition to the ability to control relative speed and flow of power. The device receives energy from, or delivers energy to, the crankshaft or the turbocharger. Thus, in addition to the pistons of the internal combustion engine (ICE), the turbocharger also contributes to the total mechanical power output of the engine. The energy supply from the crankshaft is mostly needed during sharp accelerations, to avoid turbo-lag, and to boost torque at low speeds. At low speeds, the maximum torque is drastically improved, radically expanding the load range. Additionally, by moving closer to the points of operation of a balanced turbocharger, it is also possible to improve, even if only to a minimal extent, both the efficiency η, defined as the ratio of the piston crankshaft power to the fuel flow power, and the total efficiency η*, defined as the ratio of the piston crankshaft power augmented by the power from the turbocharger shaft to the fuel flow power. The energy supply to the crankshaft is possible mostly at high speeds and high loads, where the turbine would otherwise have been waste-gated, and during decelerations. The use of turbine energy that would otherwise be waste-gated translates into improvements of the total fuel conversion efficiency η* more than of the efficiency η. Much smaller improvements are obtained for the maximum torque, again by moving closer to the points of operation of a balanced turbocharger. Adopting a much larger turbocharger (target displacement × speed 30% larger than a conventional turbocharger), better torque outputs and fuel conversion efficiencies η* and η are possible at every speed compared with the engine with a smaller, balanced turbocharger. This result motivates further studies of the mechanism, which may considerably benefit traditional powertrains based on diesel engines.
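    The two efficiency definitions above reduce to simple power ratios; a minimal sketch with hypothetical power figures (not values from the study):

```python
def efficiencies(p_crank_kw, p_turbo_kw, fuel_flow_kw):
    """eta: piston crankshaft power over fuel flow power.
    eta_star: crankshaft power plus turbocharger-shaft power over fuel
    flow power.  p_turbo_kw is positive when the variable-speed-ratio
    device delivers turbocharger power to the output and negative when
    the turbocharger draws power from the crankshaft."""
    eta = p_crank_kw / fuel_flow_kw
    eta_star = (p_crank_kw + p_turbo_kw) / fuel_flow_kw
    return eta, eta_star

# Hypothetical operating point: 100 kW at the crankshaft, 5 kW recovered
# from a turbine that would otherwise be waste-gated, 250 kW fuel power.
eta, eta_star = efficiencies(100.0, 5.0, 250.0)
```

    At high load the recovered turbine power raises η* above η, matching the trend described in the abstract; during a sharp acceleration p_turbo_kw would be negative and η* would fall below η.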

  1. Neural network configuration and efficiency underlies individual differences in spatial orientation ability.

    PubMed

    Arnold, Aiden E G F; Protzner, Andrea B; Bray, Signe; Levy, Richard M; Iaria, Giuseppe

    2014-02-01

    Spatial orientation is a complex cognitive process requiring the integration of information processed in a distributed system of brain regions. Current models on the neural basis of spatial orientation are based primarily on the functional role of single brain regions, with limited understanding of how interaction among these brain regions relates to behavior. In this study, we investigated two sources of variability in the neural networks that support spatial orientation, namely network configuration and efficiency, and assessed whether variability in these topological properties relates to individual differences in orientation accuracy. Participants with higher accuracy were shown to express greater activity in the right supramarginal gyrus, the right precentral cortex, and the left hippocampus, over and above a core network engaged by the whole group. Additionally, high-performing individuals had increased levels of global efficiency within a resting-state network composed of brain regions engaged during orientation and increased levels of node centrality in the right supramarginal gyrus, the right primary motor cortex, and the left hippocampus. These results indicate that individual differences in the configuration of task-related networks and their efficiency measured at rest relate to the ability to spatially orient. Our findings advance systems neuroscience models of orientation and navigation by providing insight into the role of functional integration in shaping orientation behavior.

  2. High performance reconciliation for continuous-variable quantum key distribution with LDPC code

    NASA Astrophysics Data System (ADS)

    Lin, Dakai; Huang, Duan; Huang, Peng; Peng, Jinye; Zeng, Guihua

    2015-03-01

    Reconciliation is a significant procedure in a continuous-variable quantum key distribution (CV-QKD) system. It is employed to extract a secure secret key from the correlated strings obtained through the quantum channel between two users. However, the efficiency and the speed of previous reconciliation algorithms are low. These problems limit the secure communication distance and the secure key rate of CV-QKD systems. In this paper, we propose a high-speed reconciliation algorithm employing a well-structured decoding scheme based on low-density parity-check (LDPC) codes. The complexity of the proposed algorithm is considerably reduced. By using a graphics processing unit (GPU) device, our method may reach a reconciliation speed of 25 Mb/s for a CV-QKD system, which is currently the highest reported level and paves the way to high-speed CV-QKD.
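    Reconciliation with an error-correcting code rests on syndrome decoding; as a toy illustration, the sketch below corrects a single flipped bit with a Hamming(7,4) code, a deliberately simplified stand-in for the large, sparse LDPC matrices of the paper, not the authors' decoding scheme:

```python
# Parity-check matrix of the Hamming(7,4) code: column j (1-based) is the
# binary representation of j, so the syndrome of a single-bit error equals
# the position of the flipped bit.
H = [[1, 0, 1, 0, 1, 0, 1],   # checks positions 1, 3, 5, 7
     [0, 1, 1, 0, 0, 1, 1],   # checks positions 2, 3, 6, 7
     [0, 0, 0, 1, 1, 1, 1]]   # checks positions 4, 5, 6, 7

def syndrome(word):
    """Parity checks of a received 7-bit word (all zero iff valid)."""
    return [sum(h * b for h, b in zip(row, word)) % 2 for row in H]

def correct_single_error(word):
    """Correct at most one flipped bit using the syndrome as an index."""
    s = syndrome(word)
    pos = s[0] + 2 * s[1] + 4 * s[2]   # syndrome read as a binary number
    fixed = list(word)
    if pos:                            # nonzero syndrome: flip that bit
        fixed[pos - 1] ^= 1
    return fixed

codeword = [0] * 7                     # the all-zero word is a codeword
received = list(codeword)
received[4] = 1                        # channel flips bit 5
decoded = correct_single_error(received)
```

    In CV-QKD reconciliation the same syndrome principle is applied iteratively (belief propagation) over much longer blocks, which is where the GPU speed-up cited above matters.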

  3. Forest type mapping of the Interior West

    Treesearch

    Bonnie Ruefenacht; Gretchen G. Moisen; Jock A. Blackard

    2004-01-01

    This paper develops techniques for the mapping of forest types in Arizona, New Mexico, and Wyoming. The methods involve regression-tree modeling using a variety of remote sensing and GIS layers along with Forest Inventory Analysis (FIA) point data. Regression-tree modeling is a fast and efficient technique of estimating variables for large data sets with high accuracy...
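    The core of regression-tree modeling is the recursive search for the split that minimizes squared error; a minimal single-split sketch is shown below, using a hypothetical predictor band rather than the FIA data:

```python
def best_split(xs, ys):
    """Find the single threshold on one predictor that minimizes the
    summed squared error of predicting each side by its mean -- one node
    of a regression tree (toy illustration, not the FIA mapping model)."""
    def sse(vals):
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)

    best = None
    for t in sorted(set(xs))[1:]:          # candidate thresholds
        left = [y for x, y in zip(xs, ys) if x < t]
        right = [y for x, y in zip(xs, ys) if x >= t]
        err = sse(left) + sse(right)
        if best is None or err < best[1]:
            best = (t, err, sum(left) / len(left), sum(right) / len(right))
    return best  # (threshold, error, left mean, right mean)

# Hypothetical spectral band: low reflectance -> one forest-type response,
# high reflectance -> another.
xs = [1, 2, 3, 4, 8, 9, 10, 11]
ys = [1.0, 1.0, 1.0, 1.0, 10.0, 10.0, 10.0, 10.0]
t, err, left_mean, right_mean = best_split(xs, ys)
```

    A full regression tree applies this search recursively to each side over all remote-sensing and GIS layers, which is what makes the method fast on large data sets.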

  4. Use of multi-temporal UAV-derived imagery for estimating individual tree growth in Pinus pinea stands

    Treesearch

    Juan Guerra-Hernández; Eduardo González-Ferreiro; Vicente Monleon; Sonia Faias; Margarida Tomé; Ramón Díaz-Varela

    2017-01-01

    High spatial resolution imagery provided by unmanned aerial vehicles (UAVs) can yield accurate and efficient estimation of tree dimensions and canopy structural variables at the local scale. We flew a low-cost, lightweight UAV over an experimental Pinus pinea L. plantation (290 trees distributed over 16 ha with different fertirrigation treatments)...

  5. Control of variable speed variable pitch wind turbine based on a disturbance observer

    NASA Astrophysics Data System (ADS)

    Ren, Haijun; Lei, Xin

    2017-11-01

    In this paper, a novel sliding mode controller based on a disturbance observer (DOB) is developed and analyzed to optimize the efficiency of a variable speed variable pitch (VSVP) wind turbine. Due to the high nonlinearity of the VSVP system, the model is linearized to obtain a state-space representation of the system. Then, a conventional sliding mode controller is designed and a DOB is added to estimate the wind speed. The proposed control strategy can successfully deal with the random nature of wind speed, the nonlinearity of the VSVP system, parameter uncertainty, and external disturbances. By adding the observer to the sliding mode controller, the chattering produced by the sliding mode switching gain can be greatly reduced. The simulation results show that the proposed control system is effective and robust.

  6. Analysis of Variability in HIV-1 Subtype A Strains in Russia Suggests a Combination of Deep Sequencing and Multitarget RNA Interference for Silencing of the Virus.

    PubMed

    Kretova, Olga V; Chechetkin, Vladimir R; Fedoseeva, Daria M; Kravatsky, Yuri V; Sosin, Dmitri V; Alembekov, Ildar R; Gorbacheva, Maria A; Gashnikova, Natalya M; Tchurikov, Nickolai A

    2017-02-01

    Any method for silencing the activity of the HIV-1 retrovirus should tackle the extremely high variability of HIV-1 sequences and mutational escape. We studied sequence variability in the vicinity of selected RNA interference (RNAi) targets from isolates of HIV-1 subtype A in Russia, and we propose that using artificial RNAi is a potential alternative to traditional antiretroviral therapy. We prove that using multiple RNAi targets overcomes the variability in HIV-1 isolates. The optimal number of targets critically depends on the conservation of the target sequences. The total number of targets, each conserved with a probability of 0.7-0.8, should be greater than 2. Combining deep sequencing and multitarget RNAi may provide an efficient approach to cure HIV/AIDS.
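    Under a simplifying independence assumption (not stated in the abstract), the required number of targets can be estimated from the per-target conservation probability; the sketch below reproduces the order of magnitude suggested above, i.e., more than two targets:

```python
def min_targets(p_conserved, coverage=0.99):
    """Smallest number of RNAi targets such that, with each target
    conserved with probability p_conserved, at least one target remains
    conserved in a given isolate with the requested probability.
    Assumes independence between targets -- a simplification."""
    n = 1
    while 1 - (1 - p_conserved) ** n < coverage:
        n += 1
    return n

# With per-target conservation of 0.7, four targets give >99% coverage;
# at 0.8 conservation, three targets suffice.
n_low = min_targets(0.7)
n_high = min_targets(0.8)
```

    Both values exceed 2, consistent with the abstract's conclusion that a cocktail of more than two conserved targets is needed.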

  7. Future consequences of decreasing marginal production efficiency in the high-yielding dairy cow.

    PubMed

    Moallem, U

    2016-04-01

    The objectives were to examine the gross and marginal production efficiencies in high-yielding dairy cows and the future consequences on dairy industry profitability. Data from 2 experiments were used in across-treatments analysis (n=82 mid-lactation multiparous Israeli-Holstein dairy cows). Milk yields, body weights (BW), and dry matter intakes (DMI) were recorded daily. In both experiments, cows were fed a diet containing 16.5 to 16.6% crude protein and net energy for lactation (NEL) at 1.61 Mcal/kg of dry matter (DM). The means of milk yield, BW, DMI, NEL intake, and energy required for maintenance were calculated individually over the whole study, and used to calculate gross and marginal efficiencies. Data were analyzed in 2 ways: (1) simple correlation between variables; and (2) cows were divided into 3 subgroups, designated low, moderate, and high DMI (LDMI, MDMI, and HDMI), according to actual DMI per day: ≤ 26 kg (n=27); >26 through 28.2 kg (n=28); and >28.2 kg (n=27). The phenotypic Pearson correlations among variables were analyzed, and the GLM procedure was used to test differences between subgroups. The relationships between milk and fat-corrected milk yields and the corresponding gross efficiencies were positive, whereas BW and gross production efficiency were negatively correlated. The marginal production efficiency from DM and energy consumed decreased with increasing DMI. The difference between BW gain as predicted by the National Research Council model (2001) and the present measurements increased with increasing DMI (r=0.68). The average calculated energy balances were 1.38, 2.28, and 4.20 Mcal/d (standard error of the mean=0.64) in the LDMI, MDMI, and HDMI groups, respectively. The marginal efficiency for milk yields from DMI or energy consumed was highest in LDMI, intermediate in MDMI, and lowest in HDMI. The predicted BW gains for the whole study period were 22.9, 37.9, and 75.8 kg for the LDMI, MDMI, and HDMI groups, respectively. 
The present study demonstrated that marginal production efficiency decreased with increasing feed intake. Because of the close association between production and intake, the principle of diminishing marginal productivity may explain why increasing milk production (and consequently increasing intake) does not always enhance profitability. To maintain high production efficiency in the future, more attention should be given to optimizing rather than maximizing feed intake, a goal that could be achieved by nutritional manipulations that would increase digestibility or by using a diet of denser nutrients that would provide all nutritional requirements from lower intake. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  8. How does a newly encountered face become familiar? The effect of within-person variability on adults' and children's perception of identity.

    PubMed

    Baker, Kristen A; Laurence, Sarah; Mondloch, Catherine J

    2017-04-01

    Adults and children aged 6 years and older easily recognize multiple images of a familiar face, but often perceive two images of an unfamiliar face as belonging to different identities. Here we examined the process by which a newly encountered face becomes familiar, defined as accurate recognition of multiple images that capture natural within-person variability in appearance. In Experiment 1 we examined whether exposure to within-person variability in appearance helps children learn a new face. Children aged 6-13 years watched a 10-min video of a woman reading a story; she was filmed on a single day (low variability) or over three days, across which her appearance and filming conditions (e.g., camera, lighting) varied (high variability). After familiarization, participants sorted a set of images comprising novel images of the target identity intermixed with distractors. Compared to participants who received no familiarization, children showed evidence of learning only in the high-variability condition, in contrast to adults who showed evidence of learning in both the low- and high-variability conditions. Experiment 2 highlighted the efficiency with which adults learn a new face; their accuracy was comparable across training conditions despite variability in duration (1 vs. 10 min) and type (video vs. static images) of training. Collectively, our findings show that exposure to variability leads to the formation of a robust representation of facial identity, consistent with perceptual learning in other domains (e.g., language), and that the development of face learning is protracted throughout childhood. We discuss possible underlying mechanisms. Copyright © 2016. Published by Elsevier B.V.

  9. Mental health and sleep of older wife caregivers for spouses with Alzheimer's disease and related disorders.

    PubMed

    Willette-Murphy, Karen; Todero, Catherine; Yeaworth, Rosalee

    2006-10-01

    This descriptive study examined sleep and mental health variables in 37 older wife caregivers for spouses with dementia compared to 37 age-matched controls. The relationships among selected caregiving variables (behavioral problems, caregiving hours, and years of caregiving), appraisal of burden, self-reported sleep efficiency for the past week, and mental health outcomes were examined. Lazarus and Folkman's stress and coping framework guided the study. Mental health and sleep were poorer for caregivers. Caregiving and appraisal of burden variables showed direct and indirect effects on mental health. However, caregiving and appraisal of burden variables were not significant for predicting sleep efficiency. Sleep efficiency was a good predictor of mental health in this sample of wife caregivers.

  10. Using the entire history in the analysis of nested case cohort samples.

    PubMed

    Rivera, C L; Lumley, T

    2016-08-15

    Countermatching designs can provide more efficient estimates than simple matching or case-cohort designs in certain situations such as when good surrogate variables for an exposure of interest are available. We extend pseudolikelihood estimation for the Cox model under countermatching designs to models where time-varying covariates are considered. We also implement pseudolikelihood with calibrated weights to improve efficiency in nested case-control designs in the presence of time-varying variables. A simulation study is carried out, which considers four different scenarios including a binary time-dependent variable, a continuous time-dependent variable, and the case including interactions in each. Simulation results show that pseudolikelihood with calibrated weights under countermatching offers large gains in efficiency if compared to case-cohort. Pseudolikelihood with calibrated weights yielded more efficient estimators than pseudolikelihood estimators. Additionally, estimators were more efficient under countermatching than under case-cohort for the situations considered. The methods are illustrated using the Colorado Plateau uranium miners cohort. Furthermore, we present a general method to generate survival times with time-varying covariates. Copyright © 2016 John Wiley & Sons, Ltd.

  11. Meta-analysis of the effects of essential oils and their bioactive compounds on rumen fermentation characteristics and feed efficiency in ruminants.

    PubMed

    Khiaosa-ard, R; Zebeli, Q

    2013-04-01

    The present study aimed at investigating the effects of essential oils and their bioactive compounds (EOBC) on rumen fermentation in vivo as well as on animal performance and feed efficiency in different ruminant species, using a meta-analysis approach. Ruminant species were classified into 3 classes consisting of beef cattle, dairy cattle, and small ruminants. Two datasets (i.e., rumen fermentation and animal performance) were constructed, according to the available dependent variables within each animal class, from 28 publications (34 experiments) comprising a total of 97 dietary treatments. In addition, changes in rumen fermentation parameters relative to controls (i.e., no EOBC supplementation) of all animal classes were computed. Data were statistically analyzed within each animal class to evaluate the EOBC dose effect, taking into account variations of other variables across experiments (e.g., diet, feeding duration). The dose effect of EOBC on relative changes in fermentation parameters was analyzed across all animal classes. The primary results were that EOBC at doses <0.75 g/kg diet DM acted as a potential methane inhibitor in the rumen as a result of a decreased acetate to propionate ratio. These responses were more pronounced in beef cattle (methane, P = 0.001; acetate to propionate ratio, P = 0.005) than in small ruminants (methane, P = 0.068; acetate to propionate ratio, P = 0.056) and in dairy cattle (P > 0.05), respectively. The analysis of relative changes in rumen fermentation variables suggests that EOBC affected protozoa numbers (P < 0.001), but only high doses (>0.20 g/kg DM) of EOBC had an inhibitory effect on this variable, whereas lower doses promoted their numbers. For performance data, because the numbers of observations in beef cattle and small ruminants were small, only those of dairy cattle (DMI, milk yield and milk composition, and feed efficiency) were analyzed. 
The results revealed no effect of EOBC dose on most parameters, except increased milk protein percentage (P < 0.001) and content (P = 0.006). It appears that EOBC supplementation can enhance rumen fermentation in a way (i.e., a decreased acetate to propionate ratio) that may favor beef production. High doses of EOBC do not necessarily modify rumen fermentation or improve animal performance and feed efficiency. Furthermore, additional attention should be paid to diet composition and supplementation period when evaluating the effects of EOBC in ruminants.

  12. An RNAi in silico approach to find an optimal shRNA cocktail against HIV-1

    PubMed Central

    2010-01-01

    Background HIV-1 can be inhibited by RNA interference in vitro through the expression of short hairpin RNAs (shRNAs) that target conserved genome sequences. In silico shRNA design for HIV has lacked a detailed study of virus variability, constituting a possible breaking point in a clinical setting. We designed shRNAs against HIV-1 considering the variability observed in naïve and drug-resistant isolates available at public databases. Methods A Bioperl-based algorithm was developed to automatically scan multiple sequence alignments of HIV, while evaluating the possibility of identifying dominant and subdominant viral variants that could be used as efficient silencing molecules. Student t-test and Bonferroni Dunn correction test were used to assess statistical significance of our findings. Results Our in silico approach identified the most common viral variants within highly conserved genome regions, with a calculated free energy of ≥ -6.6 kcal/mol, which is crucial for strand loading into the RISC complex, and with a predicted silencing efficiency score; used in combination, these criteria could achieve over 90% silencing. Resistant and naïve isolate variability revealed that the most frequent shRNA per region targets a maximum of 85% of viral sequences. Adding more divergent sequences maintained this percentage. Specific sequence features that have been found to be related to higher silencing efficiency were hardly attainable in conserved regions, even when lower entropy values correlated with better scores. We identified a conserved region among most HIV-1 genomes that meets the most sequence features for efficient silencing. Conclusions HIV-1 variability is an obstacle to achieving absolute silencing using shRNAs designed against a consensus sequence, mainly because there are many functional viral variants. Our shRNA cocktail could be truly effective at silencing dominant and subdominant naïve viral variants. 
Additionally, resistant isolates might be targeted under specific antiretroviral selective pressure, but in both cases these should be tested exhaustively prior to clinical use. PMID:21172023

  13. Memory-efficient table look-up optimized algorithm for context-based adaptive variable length decoding in H.264/advanced video coding

    NASA Astrophysics Data System (ADS)

    Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong

    2016-03-01

Table look-up plays a very important role in the decoding process of context-based adaptive variable length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-up operations can result in large table memory access and thus high table power consumption. Aiming to solve the problem of large table memory access in current methods, and thereby reduce power consumption, a memory-efficient table look-up optimized algorithm is presented for CAVLD. The contribution of this paper is the introduction of index search technology to reduce memory access for table look-up, and thereby reduce table power consumption. Specifically, in our scheme, we use index search technology to reduce memory access by reducing the searching and matching operations for code_word, taking advantage of the internal relationship among the length of zeros in code_prefix, the value of code_suffix, and code_length, thus saving the power consumed by table look-up. The experimental results show that our proposed table look-up algorithm based on index search can lower memory access consumption by about 60% compared with a sequential-search table look-up scheme, and thus save considerable power consumption for CAVLD in H.264/AVC.
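For intuition, the speed-up idea can be sketched on a toy unary-style variable-length code (illustrative only: the table, names, and code structure below are not the H.264/AVC CAVLC tables or the authors' scheme). Counting leading zeros once replaces a scan over every table entry.

```python
# Toy variable-length code: the symbol value equals the number of leading
# zeros in its codeword ("1" -> 0, "01" -> 1, "001" -> 2, "0001" -> 3).
TABLE = {"1": 0, "01": 1, "001": 2, "0001": 3}

def decode_sequential(bits):
    """Baseline: scan every table entry and test it as a prefix (O(table size))."""
    for code, sym in TABLE.items():
        if bits.startswith(code):
            return sym, len(code)  # (symbol, bits consumed)
    raise ValueError("no matching codeword")

def decode_indexed(bits):
    """Index-based lookup: count leading zeros once, no table scan."""
    zeros = len(bits) - len(bits.lstrip("0"))
    return zeros, zeros + 1  # symbol and codeword length follow directly
```

Both decoders agree on every valid bitstream prefix, but the indexed version touches no table memory, which is the essence of the power saving the abstract describes.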

  14. Distributed Monitoring of the R² Statistic for Linear Regression

    NASA Technical Reports Server (NTRS)

    Bhaduri, Kanishka; Das, Kamalika; Giannella, Chris R.

    2011-01-01

The problem of monitoring a multivariate linear regression model is relevant in studying the evolving relationship between a set of input variables (features) and one or more dependent target variables. This problem becomes challenging for large-scale data in a distributed computing environment when only a subset of instances is available at individual nodes and the local data changes frequently. Data centralization and periodic model recomputation can add high overhead to tasks like anomaly detection in such dynamic settings. Therefore, the goal is to develop techniques for monitoring and updating the model over the union of all nodes' data in a communication-efficient fashion. Correctness guarantees on such techniques are also often highly desirable, especially in safety-critical application scenarios. In this paper we develop DReMo, a distributed algorithm with very low resource overhead for monitoring the quality of a regression model in terms of its coefficient of determination (R² statistic). When the nodes collectively determine that R² has dropped below a fixed threshold, the linear regression model is recomputed via a network-wide convergecast and the updated model is broadcast back to all nodes. We show empirically, using both synthetic and real data, that our proposed method is highly communication-efficient and scalable, and also provide theoretical guarantees on correctness.
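A minimal single-node sketch of the R²-threshold trigger underlying this approach (function names and the threshold are hypothetical; the distributed convergecast/broadcast protocol of DReMo itself is not shown):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean_y) ** 2 for y in y_true)
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))
    return 1.0 - ss_res / ss_tot

def fit_simple_ols(x, y):
    """Closed-form slope/intercept for a single-feature linear model."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return slope, my - slope * mx

def needs_recompute(x, y, slope, intercept, threshold=0.8):
    """Flag model recomputation when R^2 on current data drops below threshold."""
    preds = [slope * a + intercept for a in x]
    return r_squared(y, preds) < threshold
```

In the paper's setting, each node would evaluate this condition on its local data and the nodes would decide collectively; here the check is local only.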

  15. A particle swarm optimization variant with an inner variable learning strategy.

    PubMed

    Wu, Guohua; Pedrycz, Witold; Ma, Manhao; Qiu, Dishan; Li, Haifeng; Liu, Jin

    2014-01-01

Although Particle Swarm Optimization (PSO) has demonstrated competitive performance in solving global optimization problems, it exhibits some limitations when dealing with optimization problems of high dimensionality and complex landscape. In this paper, we integrate problem-oriented knowledge into the design of a PSO variant. The resulting novel PSO algorithm with an inner variable learning strategy (PSO-IVL) is particularly efficient for optimizing functions with symmetric variables, i.e., variables that have to satisfy a certain quantitative relation. Based on this knowledge, the inner variable learning (IVL) strategy helps the particle to inspect the relation among its inner variables, determine the exemplar variable for all other variables, and then make each variable learn from the exemplar variable in terms of their quantitative relations. In addition, we design a new trap detection and jumping-out strategy to help particles escape from local optima. The trap detection operation is employed at the level of individual particles, whereas the trap jumping-out strategy is adaptive in nature. Experimental simulations on representative optimization functions demonstrate the excellent performance of PSO-IVL. The effectiveness of PSO-IVL underscores the usefulness of augmenting evolutionary algorithms with problem-oriented domain knowledge.
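For reference, a minimal sketch of the plain global-best PSO that such variants build on (this is the generic algorithm, not PSO-IVL; the inertia/acceleration parameters and bounds are illustrative):

```python
import random

def pso(f, dim, n_particles=20, iters=200, bounds=(-5.0, 5.0), seed=1):
    """Minimize f over R^dim with a basic global-best particle swarm."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    w, c1, c2 = 0.7, 1.5, 1.5                   # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

PSO-IVL adds the inner-variable-learning step and the trap detection/escape logic on top of this update loop.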

  16. YAMAT-seq: an efficient method for high-throughput sequencing of mature transfer RNAs.

    PubMed

    Shigematsu, Megumi; Honda, Shozo; Loher, Phillipe; Telonis, Aristeidis G; Rigoutsos, Isidore; Kirino, Yohei

    2017-05-19

Besides translation, transfer RNAs (tRNAs) play many non-canonical roles in various biological pathways and exhibit highly variable expression profiles. To unravel the emerging complexities of tRNA biology and the molecular mechanisms underlying them, an efficient tRNA sequencing method is required. However, the rigid structure of tRNA has presented a challenge to the development of such methods. We report the development of Y-shaped Adapter-ligated MAture TRNA sequencing (YAMAT-seq), an efficient and convenient method for high-throughput sequencing of mature tRNAs. YAMAT-seq circumvents the issue of inefficient adapter ligation, a characteristic of conventional RNA sequencing methods for mature tRNAs, by employing the efficient and specific ligation of a Y-shaped adapter to mature tRNAs using T4 RNA Ligase 2. Subsequent cDNA amplification and next-generation sequencing successfully yield numerous mature tRNA sequences. YAMAT-seq has high specificity for mature tRNAs and high sensitivity, detecting most isoacceptors from minute amounts of total RNA. Moreover, YAMAT-seq shows quantitative capability to estimate expression levels of mature tRNAs, and has high reproducibility and broad applicability across various cell lines. YAMAT-seq thus provides a high-throughput technique for identifying tRNA profiles and their regulation in various transcriptomes, which could play important regulatory roles in translation and other biological processes. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  17. MHC class I and MHC class II DRB gene variability in wild and captive Bengal tigers (Panthera tigris tigris).

    PubMed

    Pokorny, Ina; Sharma, Reeta; Goyal, Surendra Prakash; Mishra, Sudanshu; Tiedemann, Ralph

    2010-10-01

Bengal tigers are highly endangered and knowledge of adaptive genetic variation can be essential for efficient conservation and management. Here we present the first assessment of allelic variation in major histocompatibility complex (MHC) class I and MHC class II DRB genes for wild and captive tigers from India. We amplified, cloned, and sequenced the alpha-1 and alpha-2 domains of MHC class I and the beta-1 domain of MHC class II DRB genes in 16 tiger specimens of different geographic origin. We detected high variability in peptide-binding sites, presumably resulting from positive selection. Tigers exhibit a low number of MHC DRB alleles, similar to other endangered big cats. Our initial assessment (admittedly with limited geographic coverage and sample size) did not reveal significant differences between captive and wild tigers with regard to MHC variability. In addition, we successfully amplified MHC DRB alleles from scat samples. Our characterization of tiger MHC alleles forms a basis for further in-depth analyses of MHC variability in this illustrative threatened mammal.

  18. Momentum and Heat Flux Measurements in the Exhaust of VASIMR using Helium Propellant

    NASA Technical Reports Server (NTRS)

    Chavers, D. Gregory; Chang-Diaz, Franklin R.; Irvine, Claude; Squire, Jared P.

    2003-01-01

Interplanetary travel requires propulsion systems that can provide high specific impulse (Isp) while also having sufficient thrust to rapidly accelerate large payloads. One such propulsion system is the Variable Specific Impulse Magnetoplasma Rocket (VASIMR), which creates, heats, and ejects plasma to provide variable thrust and Isp, designed to optimally meet mission requirements. The fraction of the total energy invested in creating the plasma, as compared to the plasma's total kinetic energy, is an important factor in determining the overall system efficiency. In VASIMR, this 'frozen flow loss' is appreciable at high thrust but negligible at high Isp. The loss applies to other electric thrusters as well. If some of this energy could be recovered through recombination processes and reinjected as neutral kinetic energy, the efficiency of VASIMR in its low-Isp/high-thrust mode may be improved. An experiment is being conducted to investigate the possibility of recovering some of the energy used to create the plasma by studying the flow characteristics of the charged and neutral particles in the exhaust of the thruster. This paper covers the measurements of momentum flux and heat flux in the exhaust of the VASIMR test facility using helium as the propellant, where the heat flux comprises both kinetic and plasma recombination energy. The flux measurements also assist in diagnosing and verifying the plasma conditions in the existing experiment.

  19. Highway-runoff quality, and treatment efficiencies of a hydrodynamic-settling device and a stormwater-filtration device in Milwaukee, Wisconsin

    USGS Publications Warehouse

    Horwatich, Judy A.; Bannerman, Roger T.; Pearson, Robert

    2011-01-01

The treatment efficiencies of two prefabricated stormwater-treatment devices were tested at a freeway site in a high-density urban part of Milwaukee, Wisconsin. One treatment device is categorized as a hydrodynamic-settling device (HSD), which removes pollutants by sedimentation and flotation. The other treatment device is categorized as a stormwater-filtration device (SFD), which removes pollutants by filtration and sedimentation. During runoff events, flow measurements were recorded and water-quality samples were collected at the inlet and outlet of each device. Efficiency-ratio and summation-of-load (SOL) calculations were used to estimate the treatment efficiency of each device. Event-mean concentrations and loads that were decreased by passing through the HSD include total suspended solids (TSS), suspended sediment (SS), total phosphorus (TP), total copper (TCu), and total zinc (TZn). The efficiency ratios for these constituents were 42, 57, 17, 33, and 23 percent, respectively. The SOL removal rates for these constituents were 25, 49, 10, 27, and 16 percent, respectively. Event-mean concentrations and loads that increased by passing through the HSD include chloride (Cl), total dissolved solids (TDS), and dissolved zinc (DZn). The efficiency ratios for these constituents were -347, -177, and 20 percent, respectively. Four constituents are not included in the computed efficiency ratios and SOLs because concentrations in sampled inlet and outlet pairs were not significantly different: dissolved phosphorus (DP), chemical oxygen demand (COD), total polycyclic aromatic hydrocarbons (PAH), and dissolved copper (DCu). Event-mean concentrations and loads that decreased by passing through the SFD include TSS, SS, TP, DCu, TCu, DZn, TZn, and COD. The efficiency ratios for these constituents were 59, 90, 40, 21, 66, 23, 66, and 18 percent, respectively. The SOLs for these constituents were 50, 89, 37, 19, 60, 20, 65, and 21 percent, respectively. 
Two constituents, DP and PAH, are not included in the computed efficiency ratios and SOLs because concentrations in sampled inlet and outlet pairs were not significantly different. Similar to the HSD, the average efficiency ratios and SOLs for TDS and Cl were negative. Flow rates, high concentrations of SS, and particle-size distributions (PSD) can affect the treatment efficiencies of the two devices. Flow rates equal to or greater than the design flow rate of the HSD yielded minimal or negative removal efficiencies for TSS and SS loads. Similar TSS removal efficiencies were observed at the SFD, but SS was consistently removed throughout the flow regime. Removal efficiencies were high for both devices when concentrations of SS and TSS approached 200 mg/L. A small number of runoff events were analyzed for PSD; the average sand content at the HSD was 33 percent and at the SFD was 71 percent. The 71-percent sand content may reflect the 90-percent removal efficiency of SS at the SFD. Particles retained at the bottom of both devices were largely sand-size or greater.
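The two estimators named in this record can be sketched using the standard stormwater-monitoring definitions: the efficiency ratio compares average event-mean concentrations (EMCs), while the summation of loads compares total loads over all monitored events (a sketch under that assumption; values below are illustrative, not the study's data):

```python
def efficiency_ratio(inlet_emcs, outlet_emcs):
    """Percent reduction of the average event-mean concentration (EMC)."""
    mean_in = sum(inlet_emcs) / len(inlet_emcs)
    mean_out = sum(outlet_emcs) / len(outlet_emcs)
    return 100.0 * (mean_in - mean_out) / mean_in

def summation_of_loads(inlet_loads, outlet_loads):
    """Percent reduction of constituent loads summed over all runoff events."""
    total_in = sum(inlet_loads)
    return 100.0 * (total_in - sum(outlet_loads)) / total_in
```

Both statistics go negative when outlet concentrations or loads exceed inlet values, as reported here for chloride and TDS.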

  20. Enhancing sparsity of Hermite polynomial expansions by iterative rotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiu; Lei, Huan; Baker, Nathan A.

    2016-02-01

Compressive sensing has become a powerful addition to uncertainty quantification in recent years. This paper identifies new bases for random variables through linear mappings such that the representation of the quantity of interest is sparser in the basis functions associated with the new random variables. This sparsity increases both the efficiency and the accuracy of the compressive sensing-based uncertainty quantification method. Specifically, we consider rotation-based linear mappings which are determined iteratively for Hermite polynomial expansions. We demonstrate the effectiveness of the new method with applications in solving stochastic partial differential equations and high-dimensional (O(100)) problems.

  1. Rapid and inexpensive analysis of genetic variability in Arapaima gigas by PCR multiplex panel of eight microsatellites.

    PubMed

    Hamoy, I G; Santos, E J M; Santos, S E B

    2008-01-22

The aim of the present study was the development of a multiplex genotyping panel of eight previously described microsatellite markers of Arapaima gigas. Specific primer pairs were developed, each labeled with FAM-6, HEX, or NED. The amplification conditions for the new primers were standardized for a single reaction. The results obtained demonstrate high heterozygosity (average of 0.69) in a Lower Amazon population. The multiplex system described can thus be considered a fast, efficient and inexpensive method for the investigation of genetic variability in Arapaima populations.
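The reported average heterozygosity is conventionally computed with the standard expected-heterozygosity formula from allele frequencies; a sketch with illustrative frequencies (not the study's data):

```python
def expected_heterozygosity(allele_freqs):
    """Expected heterozygosity He = 1 - sum(p_i^2) over allele frequencies p_i."""
    assert abs(sum(allele_freqs) - 1.0) < 1e-9  # frequencies must sum to 1
    return 1.0 - sum(p * p for p in allele_freqs)

def mean_heterozygosity(loci):
    """Average He across loci, as reported for multi-locus microsatellite panels."""
    return sum(expected_heterozygosity(f) for f in loci) / len(loci)
```

A locus with many evenly represented alleles (typical of informative microsatellites) gives He near 1; a biallelic 50/50 locus gives 0.5.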

  2. High-resolution numerical approximation of traffic flow problems with variable lanes and free-flow velocities.

    PubMed

    Zhang, Peng; Liu, Ru-Xun; Wong, S C

    2005-05-01

    This paper develops macroscopic traffic flow models for a highway section with variable lanes and free-flow velocities, that involve spatially varying flux functions. To address this complex physical property, we develop a Riemann solver that derives the exact flux values at the interface of the Riemann problem. Based on this solver, we formulate Godunov-type numerical schemes to solve the traffic flow models. Numerical examples that simulate the traffic flow around a bottleneck that arises from a drop in traffic capacity on the highway section are given to illustrate the efficiency of these schemes.
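A minimal Godunov step for the classical LWR traffic model on a uniform road illustrates the scheme family used here (the paper's Riemann solver for spatially varying lanes and free-flow velocities is more involved; names and parameter values below are illustrative):

```python
def flux(rho, vmax=1.0, rho_max=1.0):
    """LWR flux: f(rho) = vmax * rho * (1 - rho/rho_max), concave in rho."""
    return vmax * rho * (1.0 - rho / rho_max)

def godunov_flux(rho_l, rho_r, vmax=1.0, rho_max=1.0):
    """Godunov interface flux via the demand/supply form for a concave flux."""
    rho_c = rho_max / 2.0  # density at which the flux peaks
    demand = flux(min(rho_l, rho_c), vmax, rho_max)
    supply = flux(max(rho_r, rho_c), vmax, rho_max)
    return min(demand, supply)

def step(rho, dx, dt, vmax=1.0, rho_max=1.0):
    """One conservative update with zero-gradient boundaries (CFL: dt/dx <= 1/vmax)."""
    ext = [rho[0]] + list(rho) + [rho[-1]]
    fluxes = [godunov_flux(ext[i], ext[i + 1], vmax, rho_max)
              for i in range(len(ext) - 1)]
    return [rho[i] - dt / dx * (fluxes[i + 1] - fluxes[i])
            for i in range(len(rho))]
```

Under the CFL condition the update keeps densities in [0, rho_max], and a high-density region downstream (a bottleneck queue) propagates upstream as a shock.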

  3. Optimization of composite tiltrotor wings with extensions and winglets

    NASA Astrophysics Data System (ADS)

    Kambampati, Sandilya

Tiltrotors suffer from an aeroelastic instability during forward flight called whirl flutter. Whirl flutter is caused by the whirling motion of the rotor, characterized by highly coupled wing-rotor-pylon modes of vibration. Whirl flutter is a major obstacle for tiltrotors in achieving high-speed flight. The conventional approach to assure adequate whirl flutter stability margins for tiltrotors is to design the wings with high torsional stiffness, typically using 23% thickness-to-chord ratio wings. However, the large aerodynamic drag associated with these high thickness-to-chord ratio wings decreases aerodynamic efficiency and increases fuel consumption. Wingtip devices such as wing extensions and winglets have the potential to improve the whirl flutter characteristics and the aerodynamic efficiency of a tiltrotor. However, wingtip devices can add more weight to the aircraft. In this study, multi-objective parametric and optimization methodologies for tiltrotor aircraft with wing extensions and winglets are investigated. The objectives are to maximize aircraft aerodynamic efficiency while minimizing the weight penalty due to extensions and winglets, subject to whirl flutter constraints. An aeroelastic model that predicts the whirl flutter speed and a wing structural model that computes strength and weight of a composite wing are developed. An existing aerodynamic model (which predicts aerodynamic efficiency) is merged with the developed structural and aeroelastic models for the purpose of conducting parametric and optimization studies. The variables of interest are the wing thickness and structural properties, and extension and winglet planform variables. The Bell XV-15 tiltrotor aircraft is chosen as the parent aircraft for this study. Parametric studies reveal that a wing extension of span 25% of the inboard wing increases the whirl flutter speed by 10% and also increases the aircraft aerodynamic efficiency by 8%. 
Structurally tapering the wing of a tiltrotor equipped with an extension and a winglet can increase the whirl flutter speed by 15% while reducing the wing weight by 7.5%. The baseline design for the optimization is the optimized wing with no extension or winglet. The optimization studies reveal that the optimum design for a cruise speed of 250 knots has an aerodynamic efficiency 7% higher than the baseline design for a weight penalty of only 3%, and thus a transport range 5.5% greater than the baseline. The optimal design for a cruise speed of 300 knots has an aerodynamic efficiency 5% higher, a weight penalty of 2.5%, and a transport range 3.5% greater than the baseline.

  4. High dimensional model representation method for fuzzy structural dynamics

    NASA Astrophysics Data System (ADS)

    Adhikari, S.; Chowdhury, R.; Friswell, M. I.

    2011-03-01

Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher-order variable correlations are weak, thereby permitting the input-output relationship to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters is used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising accuracy.
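The low-order expansion idea can be sketched with a first-order cut-HDMR approximation around a reference point (an illustrative sketch of the general HDMR truncation, not the paper's fuzzy α-cut implementation):

```python
def hdmr_first_order(f, x_ref, x):
    """First-order cut-HDMR: f(x) ~ f(x_ref) + sum_i [f(x_ref with x_i varied) - f(x_ref)].

    Exact for additive functions; higher-order interaction terms are dropped.
    """
    f0 = f(x_ref)
    approx = f0
    for i, xi in enumerate(x):
        point = list(x_ref)
        point[i] = xi          # vary one coordinate at a time ("cut" lines)
        approx += f(point) - f0
    return approx
```

For n variables this costs n + 1 evaluations of f per point instead of an exponential number, which is the polynomial scaling the abstract refers to; the approximation degrades only when variable interactions are strong.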

  5. Optimization design of wind turbine drive train based on Matlab genetic algorithm toolbox

    NASA Astrophysics Data System (ADS)

    Li, R. N.; Liu, X.; Liu, S. J.

    2013-12-01

In order to ensure the high efficiency of the whole flexible drive train of the front-end speed-adjusting wind turbine, the working principle of the main part of the drive train is analyzed. The rotating speed ratios of three planetary gear trains are selected as the critical research parameters. The mathematical model of the torque converter speed ratio is established based on these three critical variables, and the effect of key parameters on the efficiency of the hydraulic mechanical transmission is analyzed. Based on torque balance and energy balance, and with reference to the hydraulic mechanical transmission characteristics, the transmission efficiency expression of the whole drive train is established. The fitness function and constraint functions are established based on the drive train transmission efficiency and the torque converter rotating speed ratio range, respectively, and the optimization calculation is carried out using the MATLAB genetic algorithm toolbox. The optimization method and results provide an optimization program for the exact matching of wind turbine rotor, gearbox, hydraulic mechanical transmission, hydraulic torque converter, and synchronous generator, ensure that the drive train works with high efficiency, and give a reference for the selection of the torque converter and hydraulic mechanical transmission.

  6. An analytical study of the endoreversible Curzon-Ahlborn cycle for a non-linear heat transfer law

    NASA Astrophysics Data System (ADS)

    Páez-Hernández, Ricardo T.; Portillo-Díaz, Pedro; Ladino-Luna, Delfino; Ramírez-Rojas, Alejandro; Pacheco-Paez, Juan C.

    2016-01-01

In the present article, an endoreversible Curzon-Ahlborn engine is studied by considering a non-linear heat transfer law, particularly the Dulong-Petit heat transfer law, using the `componendo and dividendo' rule together with simple differentiation to obtain the Curzon-Ahlborn efficiency, as proposed by Agrawal in 2009. This rule is actually a change of variable that reduces a two-variable problem to a one-variable problem. From elementary calculus, we obtain analytical expressions for the efficiency and the power output. The efficiency is given only in terms of the temperatures of the reservoirs, as in both the Carnot and Curzon-Ahlborn cycles. We compare efficiencies measured in real power plants with the theoretical values from the analytical expressions obtained in this article and by several other authors in the literature. This comparison shows that the theoretical values of efficiency are close to the real efficiencies and, in some cases, exactly the same. Therefore, we can say that the Agrawal method gives a good approximation of thermal engine efficiencies.
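For context, the two classical closed-form efficiencies referenced above can be sketched as follows (the Curzon-Ahlborn expression below is the standard result for Newtonian heat transfer; the paper derives the analogous expression for the Dulong-Petit law, which differs):

```python
import math

def carnot_efficiency(t_cold, t_hot):
    """Reversible upper bound: eta_C = 1 - Tc/Th (absolute temperatures)."""
    return 1.0 - t_cold / t_hot

def curzon_ahlborn_efficiency(t_cold, t_hot):
    """Efficiency at maximum power for Newtonian heat transfer:
    eta_CA = 1 - sqrt(Tc/Th)."""
    return 1.0 - math.sqrt(t_cold / t_hot)
```

Both depend only on the reservoir temperatures, which is the property the abstract highlights, and eta_CA is always below eta_C for Tc < Th, typically lying much closer to measured power-plant efficiencies.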

  7. AVION: A detailed report on the preliminary design of a 79-passenger, high-efficiency, commercial transport aircraft

    NASA Technical Reports Server (NTRS)

    Mayfield, William; Perkins, Brett; Rogan, William; Schuessler, Randall; Stockert, Joe

    1990-01-01

    The Avion is the result of an investigation into the preliminary design for a high-efficiency commercial transport aircraft. The Avion is designed to carry 79 passengers and a crew of five through a range of 1,500 nm at 455 kts (M=0.78 at 32,000 ft). It has a gross take-off weight of 77,000 lb and an empty weight of 42,400 lb. Currently there are no American-built aircraft designed to fit the 60 to 90 passenger, short/medium range marketplace. The Avion gathers the premier engineering achievements of flight technology and integrates them into an aircraft which will challenge the current standards of flight efficiency, reliability, and performance. The Avion will increase flight efficiency through reduction of structural weight and the improvement of aerodynamic characteristics and propulsion systems. Its design departs from conventional aircraft design tradition with the incorporation of a three-lifting-surface (or tri-wing) configuration. Further aerodynamic improvements are obtained through modest main wing forward sweeping, variable incidence canards, aerodynamic coupling between the canard and main wing, leading edge extensions, winglets, an aerodynamic tailcone, and a T-tail empennage. The Avion is propelled by propfans, which are one of the most promising developments for raising propulsive efficiencies at high subsonic Mach numbers. Special attention is placed on overall configuration, fuselage layout, performance estimations, component weight estimations, and planform design. Leading U.S. technology promises highly efficient flight for the 21st century; the Avion will fulfill this promise to passenger transport aviation.

  8. Efficient Variable Selection Method for Exposure Variables on Binary Data

    NASA Astrophysics Data System (ADS)

    Ohno, Manabu; Tarumi, Tomoyuki

In this paper, we propose a new variable selection method for "robust" exposure variables. We define "robust" as the property that the same variables are selected from both the original data and perturbed data. There are few studies of effective methods for this selection. The problem of selecting exposure variables is almost the same as that of extracting correlation rules without the robustness requirement. [Brin 97] suggested that correlation rules can be extracted efficiently on binary data using the chi-squared statistic of a contingency table, which has monotone property. However, the chi-squared value itself does not have monotone property, so as the dimension increases, even a completely independent variable set is easily judged to be dependent, and the method is not usable for selecting robust exposure variables. To select robust independent variables, we assume anti-monotone property for independence and use the apriori algorithm. The apriori algorithm is one of the algorithms that find association rules from market basket data; it exploits the anti-monotone property of the support defined on association rules. Independence does not strictly have anti-monotone property on the AIC of the independent probability model, but the tendency toward anti-monotone property is strong. Therefore, variables selected under anti-monotone property on the AIC have robustness. Our method judges whether a certain variable is an exposure variable for the independent variables by prior comparison of AIC values. Our numerical experiments show that our method can select robust exposure variables efficiently and precisely.
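The chi-squared screening attributed to [Brin 97] can be sketched for a 2x2 contingency table of two binary variables (an illustrative sketch of the statistic itself, not the authors' AIC-based criterion):

```python
def chi_squared_2x2(n11, n10, n01, n00):
    """Pearson chi-squared statistic for a 2x2 contingency table.

    n_ab = count of observations with variable X = a and variable Y = b.
    Zero indicates exact independence of the observed counts.
    """
    n = n11 + n10 + n01 + n00
    row1, row0 = n11 + n10, n01 + n00   # marginals of X
    col1, col0 = n11 + n01, n10 + n00   # marginals of Y
    chi2 = 0.0
    for obs, r, c in ((n11, row1, col1), (n10, row1, col0),
                      (n01, row0, col1), (n00, row0, col0)):
        expected = r * c / n
        chi2 += (obs - expected) ** 2 / expected
    return chi2
```

Large values signal dependence between the two binary variables; the paper's point is that this statistic alone does not scale monotonically with dimension, motivating the apriori/AIC approach.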

  9. Ten-year variability in ecosystem water use efficiency in an oak-dominated temperate forest under a warming climate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Jing; Chen, Jiquan; Sun, Ge

The impacts of extreme weather events on water-carbon (C) coupling and ecosystem-scale water use efficiency (WUE) over the long term are poorly understood. We analyzed the changes in ecosystem WUE from 10 years of eddy-covariance measurements (2004-2013) over an oak-dominated temperate forest in Ohio, USA. The aim was to investigate the long-term response of ecosystem WUE to measured changes in site-biophysical conditions and ecosystem attributes. The oak forest produced new plant biomass of 2.5 +/- 0.2 gC per kg of water loss annually. Monthly evapotranspiration (ET) and gross ecosystem production (GEP) were tightly coupled over the 10-year study period (R² = 0.94). Daily WUE had a linear relationship with air temperature (Ta) in low-temperature months and a unimodal relationship with Ta in high-temperature months during the growing season. On average, daily WUE ceased to increase when Ta exceeded 22 degrees C in warm months for both wet and dry years. Monthly WUE had a strong positive linear relationship with leaf area index (LAI), net radiation (Rn), and Ta and a weak logarithmic relationship with water vapor pressure deficit (VPD) and precipitation (P) on a growing-season basis. When exploring the regulatory mechanisms on WUE within each season, spring LAI and P, summer Rn and Ta, and autumnal VPD and Rn were found to be the main explanatory variables for seasonal variation in WUE. The model developed in this study was able to capture 78% of growing-season variation in WUE on a monthly basis. The negative correlation between WUE and P in spring was mainly due to high precipitation amounts in spring decreasing GEP and WUE when LAI was still small, with ET observed to increase through high levels of evaporation as a result of high soil water content (SWC) in spring. 
Summer WUE had a significant decreasing trend across the 10 years, mainly due to the combined effect of seasonal drought and increasing potential and available energy, which increased ET but decreased GEP in summer. We concluded that the seasonal interchange between precipitation and the drought status of the system was an important variable controlling seasonal WUE in wet years. In contrast, despite the negative impacts of unfavorable warming, available groundwater and an early start of the growing season were important contributing variables to high seasonal GEP, and thus high seasonal WUE, in dry years. (C) 2015 Elsevier B.V. All rights reserved.
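The ecosystem WUE metric used throughout this record is the ratio of carbon gain (GEP) to water loss (ET); a sketch with illustrative numbers, not the study's data:

```python
def water_use_efficiency(gep, et):
    """Ecosystem WUE = GEP / ET, in gC per kg of water.

    With GEP in gC m^-2 and ET in mm, 1 mm of ET over 1 m^2 equals 1 kg of water,
    so the ratio is directly gC per kg H2O.
    """
    return gep / et

def monthly_series(gep_by_month, et_by_month):
    """WUE for each month of paired GEP/ET observations."""
    return [water_use_efficiency(g, e) for g, e in zip(gep_by_month, et_by_month)]
```

For example, an annual GEP-to-ET ratio of 250 gC m^-2 to 100 mm gives 2.5 gC per kg, the same units as the figure reported in the abstract.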

  10. Ten-year variability in ecosystem water use efficiency in an oak-dominated temperate forest under a warming climate

    DOE PAGES

    Xie, Jing; Chen, Jiquan; Sun, Ge; ...

    2016-01-07

The impacts of extreme weather events on water-carbon (C) coupling and ecosystem-scale water use efficiency (WUE) over the long term are poorly understood. We analyzed the changes in ecosystem WUE from 10 years of eddy-covariance measurements (2004-2013) over an oak-dominated temperate forest in Ohio, USA. The aim was to investigate the long-term response of ecosystem WUE to measured changes in site-biophysical conditions and ecosystem attributes. The oak forest produced new plant biomass of 2.5 +/- 0.2 gC per kg of water loss annually. Monthly evapotranspiration (ET) and gross ecosystem production (GEP) were tightly coupled over the 10-year study period (R² = 0.94). Daily WUE had a linear relationship with air temperature (Ta) in low-temperature months and a unimodal relationship with Ta in high-temperature months during the growing season. On average, daily WUE ceased to increase when Ta exceeded 22 degrees C in warm months for both wet and dry years. Monthly WUE had a strong positive linear relationship with leaf area index (LAI), net radiation (Rn), and Ta and a weak logarithmic relationship with water vapor pressure deficit (VPD) and precipitation (P) on a growing-season basis. When exploring the regulatory mechanisms on WUE within each season, spring LAI and P, summer Rn and Ta, and autumnal VPD and Rn were found to be the main explanatory variables for seasonal variation in WUE. The model developed in this study was able to capture 78% of growing-season variation in WUE on a monthly basis. The negative correlation between WUE and P in spring was mainly due to high precipitation amounts in spring decreasing GEP and WUE when LAI was still small, with ET observed to increase through high levels of evaporation as a result of high soil water content (SWC) in spring. 
Summer WUE had a significant decreasing trend across the 10 years, mainly due to the combined effect of seasonal drought and increasing potential and available energy, which increased ET but decreased GEP in summer. We concluded that the seasonal interchange between precipitation and the drought status of the system was an important variable controlling seasonal WUE in wet years. In contrast, despite the negative impacts of unfavorable warming, available groundwater and an early start of the growing season were important contributing variables to high seasonal GEP, and thus high seasonal WUE, in dry years. (C) 2015 Elsevier B.V. All rights reserved.

  11. Deep Blue Phosphorescent Organic Light-Emitting Diodes with CIEy Value of 0.11 and External Quantum Efficiency up to 22.5%.

    PubMed

    Li, Xiaoyue; Zhang, Juanye; Zhao, Zifeng; Wang, Liding; Yang, Hannan; Chang, Qiaowen; Jiang, Nan; Liu, Zhiwei; Bian, Zuqiang; Liu, Weiping; Lu, Zhenghong; Huang, Chunhui

    2018-03-01

    Organic light-emitting diodes (OLEDs) based on red and green phosphorescent iridium complexes have been successfully commercialized in displays and solid-state lighting. However, blue ones remain a challenge on account of their unsatisfactory Commission Internationale de l'Eclairage (CIE) coordinates and low efficiency. After analyzing the blue iridium complexes reported in the literature, a new deep-blue-emitting iridium complex with improved photoluminescence quantum yield is designed and synthesized. By rationally screening host materials with a high triplet energy level in neat film, as well as the OLED architecture, to balance electron and hole recombination, highly efficient deep-blue OLEDs with CIE coordinates of (0.15, 0.11) and a maximum external quantum efficiency (EQE) of up to 22.5% are demonstrated. Based on transition dipole moment vector measurements with a variable-angle spectroscopic ellipsometry method, the ultrahigh EQE is attributed to a preferred horizontal dipole orientation of the iridium complex in the doped film, which is beneficial for light extraction from the OLEDs. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Effective Subcritical Butane Extraction of Bifenthrin Residue in Black Tea.

    PubMed

    Zhang, Yating; Gu, Lingbiao; Wang, Fei; Kong, Lingjun; Qin, Guangyong

    2017-03-30

    As a natural and healthy beverage, tea is widely enjoyed; however, pesticide residues in tea leaves affect its quality and food safety. To develop a highly selective and efficient method for the facile removal of pesticide residues, the subcritical butane extraction (SBE) technique was employed, and three variables, temperature, time and number of extraction cycles, were studied. The optimum SBE conditions were found to be an extraction temperature of 45 °C, an extraction time of 30 min and a single extraction cycle; under these conditions the extraction efficiency reached 92%. Furthermore, the catechins, theanine, caffeine and aroma components, which determine the quality of the tea, fluctuated after SBE treatment. Compared with uncrushed leaves, pesticide residues were more easily removed from crushed leaves, for which the practical extraction efficiency reached 97%. These results indicate that SBE is a useful method to efficiently remove bifenthrin, and, as appearance is not relevant at this stage of the production process, tea leaves should first be crushed and then extracted so that residual pesticides are thoroughly removed.

  13. Prediction of Biological Motion Perception Performance from Intrinsic Brain Network Regional Efficiency

    PubMed Central

    Wang, Zengjian; Zhang, Delong; Liang, Bishan; Chang, Song; Pan, Jinghua; Huang, Ruiwang; Liu, Ming

    2016-01-01

    Biological motion perception (BMP) refers to the ability to perceive the moving form of a human figure from a limited amount of stimuli, such as from a few point lights located on the joints of a moving body. BMP is commonplace and important, but there is great inter-individual variability in this ability. This study used multiple regression model analysis to explore the association between BMP performance and intrinsic brain activity, in order to investigate the neural substrates underlying inter-individual variability of BMP performance. The resting-state functional magnetic resonance imaging (rs-fMRI) and BMP performance data were collected from 24 healthy participants, for whom intrinsic brain networks were constructed, and a graph-based network efficiency metric was measured. Then, a multiple linear regression model was used to explore the association between network regional efficiency and BMP performance. We found that the local and global network efficiency of many regions was significantly correlated with BMP performance. Further analysis showed that the local efficiency rather than global efficiency could be used to explain most of the BMP inter-individual variability, and the regions involved were predominately located in the Default Mode Network (DMN). Additionally, discrimination analysis showed that the local efficiency of certain regions such as the thalamus could be used to classify BMP performance across participants. Notably, the association pattern between network nodal efficiency and BMP was different from the association pattern of static directional/gender information perception. Overall, these findings show that intrinsic brain network efficiency may be considered a neural factor that explains BMP inter-individual variability. PMID:27853427
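The record cites graph-based local and global network efficiency without giving an implementation. As a rough illustration of the standard definitions only (this is not the authors' pipeline, and all names below are our own), the following pure-Python sketch computes global efficiency (mean inverse shortest-path length over node pairs) and local efficiency (the same quantity on the subgraph of a node's neighbours) for an unweighted graph stored as an adjacency dict:

```python
from collections import deque

def shortest_path_lengths(adj, source):
    """BFS hop counts from `source`; `adj` maps node -> list of neighbours."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def global_efficiency(adj):
    """Mean inverse shortest-path length over all ordered node pairs."""
    nodes = list(adj)
    n = len(nodes)
    if n < 2:
        return 0.0
    total = 0.0
    for u in nodes:
        dist = shortest_path_lengths(adj, u)
        total += sum(1.0 / d for v, d in dist.items() if v != u)
    return total / (n * (n - 1))

def local_efficiency(adj, node):
    """Global efficiency of the subgraph induced by `node`'s neighbours."""
    neigh = set(adj[node])
    sub = {u: [v for v in adj[u] if v in neigh] for u in neigh}
    return global_efficiency(sub)

# Toy example: in a triangle every pair is directly connected.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
```

Unreachable pairs simply contribute zero, which is one reason efficiency is preferred over raw path length on possibly disconnected thresholded brain networks.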

  14. Optimum design of high speed prop rotors including the coupling of performance, aeroelastic stability and structures

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Mccarthy, Thomas R.; Madden, John F., III

    1992-01-01

    An optimization procedure is developed for the design of high speed prop-rotors to be used in civil tiltrotor applications. The goal is to couple aerodynamic performance, aeroelastic stability, and structural design requirements inside a closed-loop optimization procedure. The objective is to minimize the gross weight and maximize the propulsive efficiency in high speed cruise. Constraints are imposed on the rotor aeroelastic stability in both hover and cruise and rotor figure of merit in hover. Both structural and aerodynamic design variables are used.

  15. Progress on Variable Cycle Engines

    NASA Technical Reports Server (NTRS)

    Westmoreland, J. S.; Howlett, R. A.; Lohmann, R. P.

    1979-01-01

    Progress in the development and future requirements of the Variable Stream Control Engine (VSCE) are presented. The two most critical components of this advanced system for future supersonic transports, the high-performance duct burner for thrust augmentation and the low-jet-noise coannular nozzle, were studied. Nozzle model tests substantiated the jet noise benefit associated with the unique velocity profile possible with a coannular nozzle system on a VSCE. Additional nozzle model performance tests have established high thrust efficiency levels at takeoff and supersonic cruise for this nozzle system. An experimental program involving both isolated-component and complete-engine tests has been conducted for the high-performance, low-emissions duct burner with good results, and large-scale testing of these two components is being conducted using an F100 engine as the testbed for simulating the VSCE. Future work includes application of computer programs for supersonic flow fields to coannular nozzle geometries, further experimental testing with the duct burner segment rig, and use of the Variable Cycle Engine (VCE) Testbed Program to evaluate the VSCE duct burner and coannular nozzle technologies.

  16. Projection Exposure with Variable Axis Immersion Lenses: A High-Throughput Electron Beam Approach to “Suboptical” Lithography

    NASA Astrophysics Data System (ADS)

    Pfeiffer, Hans

    1995-12-01

    IBM's high-throughput e-beam stepper approach PRojection Exposure with Variable Axis Immersion Lenses (PREVAIL) is reviewed. The PREVAIL concept combines technology building blocks of our probe-forming EL-3 and EL-4 systems with the exposure efficiency of pattern projection. The technology represents an extension of the shaped-beam approach toward massively parallel pixel projection. As demonstrated, the use of variable-axis lenses can provide large field coverage through reduction of off-axis aberrations which limit the performance of conventional projection systems. Subfield pattern sections containing 10^7 or more pixels can be electronically selected (mask plane), projected and positioned (wafer plane) at high speed. To generate the entire chip pattern, subfields must be stitched together sequentially in a combination of electronic and mechanical positioning of mask and wafer. The PREVAIL technology promises throughput levels competitive with those of optical steppers at superior resolution. The PREVAIL project is being pursued to demonstrate the viability of the technology and to develop an e-beam alternative to “suboptical” lithography.

  17. Brain Signal Variability Differentially Affects Cognitive Flexibility and Cognitive Stability.

    PubMed

    Armbruster-Genç, Diana J N; Ueltzhöffer, Kai; Fiebach, Christian J

    2016-04-06

    Recent research yielded the intriguing conclusion that, in healthy adults, higher levels of variability in neuronal processes are beneficial for cognitive functioning. Beneficial effects of variability in neuronal processing can also be inferred from neurocomputational theories of working memory, albeit this holds only for tasks requiring cognitive flexibility. However, cognitive stability, i.e., the ability to maintain a task goal in the face of irrelevant distractors, should suffer under high levels of brain signal variability. To directly test this prediction, we studied both behavioral and brain signal variability during cognitive flexibility (i.e., task switching) and cognitive stability (i.e., distractor inhibition) in a sample of healthy human subjects and developed an efficient and easy-to-implement analysis approach to assess BOLD-signal variability in event-related fMRI task paradigms. Results show a general positive effect of neural variability on task performance as assessed by accuracy measures. However, higher levels of BOLD-signal variability in the left inferior frontal junction area result in reduced error rate costs during task switching and thus facilitate cognitive flexibility. In contrast, variability in the same area has a detrimental effect on cognitive stability, as shown in a negative effect of variability on response time costs during distractor inhibition. This pattern was mirrored at the behavioral level, with higher behavioral variability predicting better task switching but worse distractor inhibition performance. Our data extend previous results on brain signal variability by showing a differential effect of brain signal variability that depends on task context, in line with predictions from computational theories. Recent neuroscientific research showed that the human brain signal is intrinsically variable and suggested that this variability improves performance. 
Computational models of prefrontal neural networks predict differential effects of variability for different behavioral situations requiring either cognitive flexibility or stability. However, this hypothesis has so far not been put to an empirical test. In this study, we assessed cognitive flexibility and cognitive stability, and, besides a generally positive effect of neural variability on accuracy measures, we show that neural variability in a prefrontal brain area at the inferior frontal junction is differentially associated with performance: higher levels of variability are beneficial for the effectiveness of task switching (cognitive flexibility) but detrimental for the efficiency of distractor inhibition (cognitive stability). Copyright © 2016 the authors 0270-6474/16/363978-10$15.00/0.

  18. Efficiency of converting nutrient dry matter to milk in Holstein herds.

    PubMed

    Britt, J S; Thomas, R C; Speer, N C; Hall, M B

    2003-11-01

    Production of milk from feed dry matter intakes (DMI), called dairy or feed efficiency, is not commonly measured in dairy herds as is feed conversion to weight gain in swine, beef, and poultry; however, it has relevance to conversion of purchased input to salable product and proportion of dietary nutrients excreted. The purpose of this study was to identify some readily measured factors that affect dairy efficiency. Data were collected from 13 dairy herds visited 34 times over a 14-mo period. Variables measured included cool or warm season (ambient temperature <21 degrees C or >21 degrees C, respectively), days in milk, DMI, milk yield, milk fat percent, herd size, dietary concentrations (DM basis) and kilograms of crude protein (CP), acid detergent fiber (ADF), neutral detergent fiber (NDF), and forage. Season, days in milk, CP % and forage % of diet DM, and kilograms of dietary CP affected dairy efficiency. When evaluated using a model containing the significant variables, dairy efficiency was lower in the warm season (1.31) than in the cool season (1.40). In terms of simple correlations, dairy efficiency was negatively correlated with days in milk (r = -0.529), DMI (r = -0.316), forage % (r = -0.430), NDF % (r = -0.308), and kilograms of forage (r = -0.516), NDF (r = -0.434), and ADF (r = -0.313), in the diet, respectively. Dairy efficiency was positively correlated with milk yield (r = 0.707). The same relative patterns of significance and correlation were noted for dairy efficiency calculated with 3.5% fat-corrected milk yield. Diets fed by the herds fell within such a small range of variation (mean +/- standard deviation) for CP % (16.3 +/- 0.696), NDF % (33.2 +/- 2.68), and forage % (46.9 +/- 5.56) that these would not be expected to be useful to evaluate the effect of excessive underfeeding or overfeeding of these dietary components. 
The negative relationships of dairy efficiency with increasing dietary fiber and forage may reflect the effect of decreased diet digestibility. The results of this study suggest that managing herd breeding programs to reduce average days in milk and providing a cooler environment for the cows may help to maximize dairy efficiency. The mechanisms for the effects of the dietary variables on dairy efficiency need to be understood and evaluated over a broader range of diets and conditions before more firm conclusions regarding their impact can be drawn.
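The r values quoted above are simple (Pearson) correlation coefficients. As a minimal sketch of that computation, with invented numbers rather than the study's data, the following computes r from paired herd-level measurements constructed so that efficiency rises with milk yield, as the study reports:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two
    equal-length sequences (the 'simple correlation' r above)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical herd-visit data, for illustration only.
milk_yield = [28, 32, 35, 30, 38, 25]            # kg/cow/day
efficiency = [1.25, 1.35, 1.42, 1.30, 1.48, 1.20]  # milk per kg DMI
r = pearson_r(milk_yield, efficiency)             # strongly positive
```

A positive r near 1 corresponds to the reported milk-yield relationship (r = 0.707 in the study); negative values arise when efficiency falls as the variable (e.g., days in milk) increases.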

  19. Unconditional security of time-energy entanglement quantum key distribution using dual-basis interferometry.

    PubMed

    Zhang, Zheshen; Mower, Jacob; Englund, Dirk; Wong, Franco N C; Shapiro, Jeffrey H

    2014-03-28

    High-dimensional quantum key distribution (HDQKD) offers the possibility of high secure-key rate with high photon-information efficiency. We consider HDQKD based on the time-energy entanglement produced by spontaneous parametric down-conversion and show that it is secure against collective attacks. Its security rests upon visibility data, obtained from Franson and conjugate-Franson interferometers, that probe photon-pair frequency correlations and arrival-time correlations. From these measurements, an upper bound can be established on the eavesdropper's Holevo information by translating the Gaussian-state security analysis for continuous-variable quantum key distribution so that it applies to our protocol. We show that visibility data from just the Franson interferometer provides a weaker, but nonetheless useful, secure-key rate lower bound. To handle multiple-pair emissions, we incorporate the decoy-state approach into our protocol. Our results show that over a 200-km transmission distance in optical fiber, time-energy entanglement HDQKD could permit a 700-bit/sec secure-key rate and a photon information efficiency of 2 secure-key bits per photon coincidence in the key-generation phase using receivers with a 15% system efficiency.

  20. New geospatial approaches for efficiently mapping forest biomass logistics at high resolution over large areas

    Treesearch

    John Hogland; Nathaniel Anderson; Woodam Chung

    2018-01-01

    Adequate biomass feedstock supply is an important factor in evaluating the financial feasibility of alternative site locations for bioenergy facilities and for maintaining profitability once a facility is built. We used newly developed spatial analysis and logistics software to model the variables influencing feedstock supply and to estimate and map two components of...

  1. The measurement of trace emissions and combustion characteristics for a mass fire [Chapter 32

    Treesearch

    Ronald A. Susott; Darold E. Ward; Ronald E. Babbitt; Don J. Latham

    1991-01-01

    Concerns increase about the effects of emissions from biomass burning on global climate. While the burning of biomass constitutes a large fraction of world emissions, there are insufficient data on the combustion efficiency, emission factors, and trace gases produced in these fires, and on how these factors depend on the highly variable chemistry and burning condition...

  2. Sampling strategies for efficient estimation of tree foliage biomass

    Treesearch

    Hailemariam Temesgen; Vicente Monleon; Aaron Weiskittel; Duncan Wilson

    2011-01-01

    Conifer crowns can be highly variable both within and between trees, particularly with respect to foliage biomass and leaf area. A variety of sampling schemes have been used to estimate biomass and leaf area at the individual tree and stand scales. Rarely has the effectiveness of these sampling schemes been compared across stands or even across species. In addition,...

  3. Meta-analysis of RNA-Seq data across cohorts in a multi-season feed efficiency study of crossbred beef steers accounts for biological and technical variability within season

    USDA-ARS's Scientific Manuscript database

    High-throughput sequencing is often used for studies of the transcriptome, particularly for comparisons between experimental conditions. Due to sequencing costs, a limited number of biological replicates are typically considered in such experiments, leading to low detection power for differential ex...

  4. Self-Heating Effects In Polysilicon Source Gated Transistors

    PubMed Central

    Sporea, R. A.; Burridge, T.; Silva, S. R. P.

    2015-01-01

    Source-gated transistors (SGTs) are thin-film devices which rely on a potential barrier at the source to achieve high gain, tolerance to fabrication variability, and low series voltage drop, relevant to a multitude of energy-efficient, large-area, cost effective applications. The current through the reverse-biased source barrier has a potentially high positive temperature coefficient, which may lead to undesirable thermal runaway effects and even device failure through self-heating. Using numerical simulations we show that, even in highly thermally-confined scenarios and at high current levels, self-heating is insufficient to compromise device integrity. Performance is minimally affected through a modest increase in output conductance, which may limit the maximum attainable gain. Measurements on polysilicon devices confirm the simulated results, with even smaller penalties in performance, largely due to improved heat dissipation through metal contacts. We conclude that SGTs can be reliably used for high gain, power efficient analog and digital circuits without significant performance impact due to self-heating. This further demonstrates the robustness of SGTs. PMID:26351099

  5. Efficient SRAM yield optimization with mixture surrogate modeling

    NASA Astrophysics Data System (ADS)

    Zhongjian, Jiang; Zuochang, Ye; Yan, Wang

    2016-12-01

    Largely repeated cells such as SRAM cells usually require an extremely low failure rate to ensure a moderate chip yield. Although fast Monte Carlo methods such as importance sampling and its variants can be used for yield estimation, they are still very expensive if one needs to perform optimization based on such estimations, because yield calculation typically requires a large number of SPICE circuit simulations, which account for the largest share of the computation time. In this paper, a new method is proposed to address this issue. The key idea is to establish an efficient mixture surrogate model over both the design variables and the process variables: SPICE simulation provides a set of sample points, and these points are used to train the mixture surrogate model with the lasso algorithm. Experimental results show that the proposed model calculates yield accurately and brings significant speed-ups to the calculation of the failure rate. Based on the model, we developed a further-accelerated algorithm to enhance the speed of yield calculation. The method is suitable for high-dimensional process variables and multi-performance applications.
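A minimal sketch of the surrogate workflow described above, with two deliberate simplifications: a toy quadratic stands in for a real SPICE simulation, and ordinary least squares stands in for the paper's lasso training. The point is the workflow, not the model: spend a small simulation budget fitting a surrogate, then evaluate the surrogate cheaply many times to estimate the failure rate.

```python
import random

random.seed(0)

def spice_metric(dvth):
    """Stand-in for one expensive SPICE run: a read-margin-like response
    to a threshold-voltage shift (purely illustrative, not a device model)."""
    return 0.5 - 1.8 * dvth - 0.9 * dvth * dvth

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def fit_quadratic(xs, ys):
    """Least-squares fit of y ~ a + b*x + c*x^2 via the normal equations."""
    s = [sum(x ** k for x in xs) for k in range(5)]
    A = [[s[0], s[1], s[2]], [s[1], s[2], s[3]], [s[2], s[3], s[4]]]
    b = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    return solve3(A, b)

# 1) Small budget of "expensive" simulations trains the surrogate...
train_x = [random.gauss(0.0, 0.05) for _ in range(200)]
train_y = [spice_metric(x) for x in train_x]
a, b, c = fit_quadratic(train_x, train_y)

# 2) ...which is then evaluated cheaply many times for yield estimation.
cheap = [random.gauss(0.0, 0.05) for _ in range(100000)]
fail_rate = sum(1 for x in cheap if a + b * x + c * x * x < 0.0) / len(cheap)
```

For the rare-failure regime the paper targets, the cheap surrogate samples would themselves be drawn with importance sampling rather than plain Monte Carlo; that refinement is omitted here.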

  6. Improving multi-objective reservoir operation optimization with sensitivity-informed dimension reduction

    NASA Astrophysics Data System (ADS)

    Chu, J.; Zhang, C.; Fu, G.; Li, Y.; Zhou, H.

    2015-08-01

    This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed method dramatically reduces the computational demands required for attaining high-quality approximations of optimal trade-off relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed dimension reduction and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform dimension reduction of optimization problems when solving complex multi-objective reservoir operation problems.
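The Sobol screening step described above can be sketched with a "pick-freeze" first-order estimator. The model below is a hypothetical linear response, not the reservoir model, chosen because its first-order indices are known analytically (S_i = a_i² / Σ a_j²); decision variables with a negligible index are the ones a sensitivity-informed method would fix at nominal values:

```python
import random

random.seed(1)

def model(x):
    """Hypothetical linear response standing in for a reservoir simulation;
    its exact first-order indices are a_i**2 / sum(a_j**2)."""
    a = (4.0, 2.0, 0.1)
    return sum(ai * xi for ai, xi in zip(a, x))

def sobol_first_order(f, dim, n):
    """Pick-freeze (Saltelli-style) estimator of first-order Sobol indices."""
    A = [[random.random() for _ in range(dim)] for _ in range(n)]
    B = [[random.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    S = []
    for i in range(dim):
        # AB_i takes every column from A except column i, which comes from B.
        fABi = [f(A[j][:i] + [B[j][i]] + A[j][i + 1:]) for j in range(n)]
        S.append(sum(fB[j] * (fABi[j] - fA[j]) for j in range(n)) / (n * var))
    return S

S = sobol_first_order(model, 3, 20000)
# Screening: variables with a negligible index can be fixed at nominal
# values, shrinking the optimization problem before the full search.
insensitive = [i for i, s in enumerate(S) if s < 0.05]
```

With a = (4, 2, 0.1) the exact indices are roughly 0.80, 0.20 and 0.0005, so only the third variable is screened out, mirroring the paper's finding that performance is controlled by a small subset of decision variables.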

  7. Improving multi-objective reservoir operation optimization with sensitivity-informed problem decomposition

    NASA Astrophysics Data System (ADS)

    Chu, J. G.; Zhang, C.; Fu, G. T.; Li, Y.; Zhou, H. C.

    2015-04-01

    This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce the computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed problem decomposition dramatically reduces the computational demands required for attaining high quality approximations of optimal tradeoff relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed problem decomposition and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform problem decomposition when solving the complex multi-objective reservoir operation problems.

  8. Objective Evaluation of Vergence Disorders and a Research-Based Novel Method for Vergence Rehabilitation

    PubMed Central

    Kapoula, Zoï; Morize, Aurélien; Daniel, François; Jonqua, Fabienne; Orssaud, Christophe; Brémond-Gignac, Dominique

    2016-01-01

    Purpose We performed video-oculography to evaluate vergence eye movement abnormalities in students diagnosed clinically with vergence disorders. We tested the efficiency of a novel rehabilitation method and evaluated its benefits with video-oculography cross-correlated with clinical tests and symptomatology. Methods A total of 19 students (20–27 years old) underwent ophthalmologic, orthoptic examination, and a vergence test coupled with video-oculography. Eight patients were diagnosed with vergence disorders with a high symptomatology score (CISS) and performed a 5-week session of vergence rehabilitation. Vergence and rehabilitation tasks were performed with a trapezoid surface of light emitting diodes (LEDs) and adjacent buzzers (US 8851669). We used a novel Vergence double-step (Vd-s) protocol: the target stepped to a second position before the vergence movement completion. Afterward the vergence test was repeated 1 week and 1 month later. Results Abnormally increased intertrial variability was observed for many vergence parameters (gain, duration, and speed) for the subjects with vergence disorders. High CISS scores were correlated with variability and increased latency. After the Vd-s, variability of all parameters dropped to normal or better levels. Moreover, the convergence and divergence latency diminished significantly to levels better than normal; benefits were maintained 1 month after completion of Vd-s. CISS scores dropped to normal level, which was maintained up to 1 year. Conclusions and Translational Relevance: Intertrial variability is the major marker of vergence disorders. The Vd-s research-based method leads to normalization of vergence properties and lasting removal of symptoms. The efficiency of the method is due to the spatiotemporal parameters of repetitive trials that stimulate neural plasticity. PMID:26981330

  9. Multi-scale variability and long-range memory in indoor Radon concentrations from Coimbra, Portugal

    NASA Astrophysics Data System (ADS)

    Donner, Reik V.; Potirakis, Stelios; Barbosa, Susana

    2014-05-01

    The presence or absence of long-range correlations in the variations of indoor Radon concentrations has recently attracted considerable interest. As a radioactive gas naturally emitted from the ground in certain geological settings, understanding environmental factors controlling Radon concentrations and their dynamics is important for estimating its effect on human health and the efficiency of possible measures for reducing the corresponding exposure. In this work, we re-analyze two high-resolution records of indoor Radon concentrations from Coimbra, Portugal, each of which spans several months of continuous measurements. In order to evaluate the presence of long-range correlations and fractal scaling, we utilize a multiplicity of complementary methods, including power spectral analysis, ARFIMA modeling, classical and multi-fractal detrended fluctuation analysis, and two different estimators of the signals' fractal dimensions. Power spectra and fluctuation functions reveal some complex behavior with qualitatively different properties on different time-scales: white noise in the high-frequency part, indications of some long-range correlated process dominating time scales of several hours to days, and pronounced low-frequency variability associated with tidal and/or meteorological forcing. In order to further decompose these different scales of variability, we apply two different approaches. On the one hand, applying multi-resolution analysis based on the discrete wavelet transform allows separately studying contributions on different time scales and characterize their specific correlation and scaling properties. On the other hand, singular system analysis (SSA) provides a reconstruction of the essential modes of variability. 
Specifically, by considering only the first leading SSA modes, we achieve an efficient de-noising of our environmental signals, highlighting the low-frequency variations together with some distinct scaling on sub-daily time-scales resembling the properties of a long-range correlated process.
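Among the methods listed, detrended fluctuation analysis has a compact generic form. The sketch below is a textbook first-order DFA, not the authors' code: integrate the centered signal, detrend it in non-overlapping windows of size s, and read the scaling exponent alpha off the log-log slope of fluctuation versus scale.

```python
import math
import random

def dfa(signal, scales):
    """First-order detrended fluctuation analysis. Returns the exponent
    alpha: ~0.5 for white noise, ~1.0 for 1/f noise, ~1.5 for a random walk."""
    mean = sum(signal) / len(signal)
    profile, acc = [], 0.0
    for x in signal:
        acc += x - mean
        profile.append(acc)                 # integrated (cumulative) series
    log_s, log_f = [], []
    for s in scales:
        n_seg = len(profile) // s
        msq = 0.0
        for k in range(n_seg):
            seg = profile[k * s:(k + 1) * s]
            tm, sm = (s - 1) / 2.0, sum(seg) / s
            denom = sum((t - tm) ** 2 for t in range(s))
            slope = sum((t - tm) * (y - sm) for t, y in enumerate(seg)) / denom
            # residual variance after removing the local linear trend
            msq += sum((y - sm - slope * (t - tm)) ** 2
                       for t, y in enumerate(seg)) / s
        log_s.append(math.log(s))
        log_f.append(math.log(math.sqrt(msq / n_seg)))
    xm, ym = sum(log_s) / len(log_s), sum(log_f) / len(log_f)
    return (sum((a - xm) * (b - ym) for a, b in zip(log_s, log_f))
            / sum((a - xm) ** 2 for a in log_s))

# Synthetic check: white noise versus its running sum (a random walk),
# mimicking the uncorrelated high-frequency vs. correlated regimes above.
random.seed(2)
noise = [random.gauss(0.0, 1.0) for _ in range(4096)]
walk, acc = [], 0.0
for x in noise:
    acc += x
    walk.append(acc)
a_noise = dfa(noise, (16, 32, 64, 128, 256))
a_walk = dfa(walk, (16, 32, 64, 128, 256))
```

A crossover between regimes, as reported for the Radon records, would show up as different slopes over small-scale and large-scale subsets of `scales`.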

  10. Integrative analysis of gene expression and copy number alterations using canonical correlation analysis.

    PubMed

    Soneson, Charlotte; Lilljebjörn, Henrik; Fioretos, Thoas; Fontes, Magnus

    2010-04-15

    With the rapid development of new genetic measurement methods, several types of genetic alterations can be quantified in a high-throughput manner. While the initial focus has been on investigating each data set separately, there is an increasing interest in studying the correlation structure between two or more data sets. Multivariate methods based on Canonical Correlation Analysis (CCA) have been proposed for integrating paired genetic data sets. The high dimensionality of microarray data imposes computational difficulties, which have been addressed for instance by studying the covariance structure of the data, or by reducing the number of variables prior to applying the CCA. In this work, we propose a new method for analyzing high-dimensional paired genetic data sets, which mainly emphasizes the correlation structure and still permits efficient application to very large data sets. The method is implemented by translating a regularized CCA to its dual form, where the computational complexity depends mainly on the number of samples instead of the number of variables. The optimal regularization parameters are chosen by cross-validation. We apply the regularized dual CCA, as well as a classical CCA preceded by a dimension-reducing Principal Components Analysis (PCA), to a paired data set of gene expression changes and copy number alterations in leukemia. Using the correlation-maximizing methods, regularized dual CCA and PCA+CCA, we show that without pre-selection of known disease-relevant genes, and without using information about clinical class membership, an exploratory analysis singles out two patient groups, corresponding to well-known leukemia subtypes. Furthermore, the variables showing the highest relevance to the extracted features agree with previous biological knowledge concerning copy number alterations and gene expression changes in these subtypes. 
Finally, the correlation-maximizing methods are shown to yield results which are more biologically interpretable than those resulting from a covariance-maximizing method, and provide different insight compared to when each variable set is studied separately using PCA. We conclude that regularized dual CCA as well as PCA+CCA are useful methods for exploratory analysis of paired genetic data sets, and can be efficiently implemented also when the number of variables is very large.

  11. Proposal and Development of a High Voltage Variable Frequency Alternating Current Power System for Hybrid Electric Aircraft

    NASA Technical Reports Server (NTRS)

    Sadey, David J.; Taylor, Linda M.; Beach, Raymond F.

    2016-01-01

    The development of ultra-efficient commercial vehicles and the transition to low-carbon-emission propulsion are seen as thrust paths within NASA Aeronautics. A critical enabler for these paths comes in the form of hybrid-electric propulsion systems. For megawatt-class systems, the best power system topology for these hybrid-electric propulsion systems is debatable. Current proposals within NASA and the aero community suggest using a combination of AC and DC for power transmission. This paper, supported by the Convergent Aeronautics Solutions (CAS) Project, proposes an alternative to the current thought model: a primarily high voltage AC power generation, transmission, and distribution system. This system relies heavily on the use of dual-fed induction machines, which provide high power densities, minimal power conversion, and variable-speed operation. The paper presents background on the project along with the system architecture, development status, and preliminary results.

  12. VizieR Online Data Catalog: RR Lyrae in SDSS Stripe 82 (Suveges+, 2012)

    NASA Astrophysics Data System (ADS)

    Suveges, M.; Sesar, B.; Varadi, M.; Mowlavi, N.; Becker, A. C.; Ivezic, Z.; Beck, M.; Nienartowicz, K.; Rimoldini, L.; Dubath, P.; Bartholdi, P.; Eyer, L.

    2013-05-01

    We propose a robust principal component analysis framework for the exploitation of multiband photometric measurements in large surveys. Period search results are improved using the time-series of the first principal component due to its optimized signal-to-noise ratio. The presence of correlated excess variations in the multivariate time-series enables the detection of weaker variability. Furthermore, the direction of the largest variance differs for certain types of variable stars. This can be used as an efficient attribute for classification. The application of the method to a subsample of Sloan Digital Sky Survey Stripe 82 data yielded 132 high-amplitude delta Scuti variables. We also found 129 new RR Lyrae variables, complementary to the catalogue of Sesar et al., extending the halo area mapped by Stripe 82 RR Lyrae stars towards the Galactic bulge. The sample also comprises 25 multiperiodic or Blazhko RR Lyrae stars. (8 data files).
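    The gain in signal-to-noise from taking the first principal component of correlated multiband time series can be illustrated with synthetic data (a minimal sketch, not the catalogue's actual pipeline):

```python
import numpy as np

# Simulate a variable star observed in 5 photometric bands: a shared
# sinusoidal signal with band-dependent amplitude, plus independent noise.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 50, 300))
signal = np.sin(2 * np.pi * t / 0.6)              # illustrative 0.6-day period
amps = np.array([1.0, 0.9, 0.8, 0.7, 0.6])        # per-band amplitude
mags = signal[:, None] * amps + 0.5 * rng.normal(size=(300, 5))

# PCA via SVD of the centered multiband matrix; the first principal
# component concentrates the correlated (shared) variability.
centered = mags - mags.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
pc1 = centered @ Vt[0]                            # first-PC time series
explained = s[0] ** 2 / np.sum(s ** 2)

# The PC1 time series tracks the underlying signal better than any single band.
snr_band0 = np.abs(np.corrcoef(mags[:, 0], signal)[0, 1])
snr_pc1 = np.abs(np.corrcoef(pc1, signal)[0, 1])
```

    Because the signal is correlated across bands while the noise is not, the first principal component has a better signal-to-noise ratio than the best single band, which is what improves the subsequent period search.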

  13. Integrating Variable Renewable Energy - Russia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    To foster sustainable, low-emission development, many countries are establishing ambitious renewable energy targets for their electricity supply. Because solar and wind tend to be more variable and uncertain than conventional sources, meeting these targets will involve changes to power system planning and operations. Grid integration is the practice of developing efficient ways to deliver variable renewable energy (VRE) to the grid. Good integration methods maximize the cost-effectiveness of incorporating VRE into the power system while maintaining or increasing system stability and reliability. When considering grid integration, policy makers, regulators, and system operators consider a variety of issues, which can be organized into four broad topics: New Renewable Energy Generation, New Transmission, Increased System Flexibility, and Planning for a High RE Future. This is a Russian-language translation of Integrating Variable Renewable Energy into the Grid: Key Issues, Greening the Grid, originally published in English in May 2015.

  14. Color visualization for fluid flow prediction

    NASA Technical Reports Server (NTRS)

    Smith, R. E.; Speray, D. E.

    1982-01-01

    High-resolution raster-scan color graphics allow variables to be presented as a continuum in a color-coded picture that is referenced to a geometry such as a flow-field grid or a boundary surface. Software is used to map a scalar variable such as pressure or temperature, defined on a two-dimensional slice of a flow field. The geometric shape is preserved in the resulting picture, and the relative magnitude of the variable is color-coded onto the geometric shape. The primary numerical process for color coding is an efficient search along a raster scan line to locate the quadrilateral block in the grid that bounds each pixel on the line. Tension-spline interpolation is performed relative to the grid for specific values of the scalar variable, which is then color coded. When all pixels for the field of view are color-defined, a picture is played back from a memory device onto a television screen.
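    The color-coding step can be illustrated as follows. This is a simplified stand-in that maps a normalized scalar directly to a blue-to-red palette on a regular grid, whereas the paper performs a quadrilateral-cell search and tension-spline interpolation before color coding:

```python
import numpy as np

def color_code(field, n_colors=256):
    """Map a 2-D scalar field (e.g. pressure on a flow-field slice) to an
    RGB image: normalize to [0, 1], quantize to the palette size, then
    apply a blue-to-red ramp (red = high values, blue = low values)."""
    lo, hi = field.min(), field.max()
    t = (field - lo) / (hi - lo) if hi > lo else np.zeros_like(field)
    t = np.round(t * (n_colors - 1)) / (n_colors - 1)        # quantize
    rgb = np.stack([t, np.zeros_like(t), 1.0 - t], axis=-1)  # R, G, B channels
    return (rgb * 255).astype(np.uint8)

# Example: a pressure-like Gaussian bump on a 64 x 64 two-dimensional slice.
y, x = np.mgrid[0:64, 0:64]
p = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 200.0)
img = color_code(p)
```

    The highest value (the center of the bump) maps to pure red and the lowest (the far corner) to pure blue, preserving the geometric shape while encoding relative magnitude as color.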

  15. Integrating Variable Renewable Energy into the Grid: Key Issues, Greening the Grid (Spanish Version)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    This is the Spanish version of 'Greening the Grid - Integrating Variable Renewable Energy into the Grid: Key Issues'. To foster sustainable, low-emission development, many countries are establishing ambitious renewable energy targets for their electricity supply. Because solar and wind tend to be more variable and uncertain than conventional sources, meeting these targets will involve changes to power system planning and operations. Grid integration is the practice of developing efficient ways to deliver variable renewable energy (VRE) to the grid. Good integration methods maximize the cost-effectiveness of incorporating VRE into the power system while maintaining or increasing system stability and reliability. When considering grid integration, policy makers, regulators, and system operators consider a variety of issues, which can be organized into four broad topics: New Renewable Energy Generation, New Transmission, Increased System Flexibility, and Planning for a High RE Future.

  16. Merging National Forest and National Forest Health Inventories to Obtain an Integrated Forest Resource Inventory – Experiences from Bavaria, Slovenia and Sweden

    PubMed Central

    Kovač, Marko; Bauer, Arthur; Ståhl, Göran

    2014-01-01

    Backgrounds, Material and Methods To meet the demands of sustainable forest management and international commitments, European nations have designed a variety of forest-monitoring systems for specific needs. While the majority of countries are committed to independent, single-purpose inventorying, a minority of countries have merged their single-purpose forest inventory systems into integrated forest resource inventories. The statistical efficiencies of the Bavarian, Slovene and Swedish integrated forest resource inventory designs are investigated using statistical parameters of three variables: growing stock volume, share of damaged trees, and deadwood volume. The parameters are derived using the estimators for the given inventory designs. The required sample sizes are derived via the general formula for non-stratified independent samples and via statistical power analyses. The cost effectiveness of the designs is compared via two simple cost-effectiveness ratios. Results In terms of precision, the most illustrative parameters of the variables are relative standard errors; their values range between 1% and 3% when the variables' variations are low (s%<80%) and are higher in the case of higher variations. A comparison of the actual and required sample sizes shows that the actual sample sizes were deliberately set high to provide precise estimates for the majority of variables and strata. In turn, the successive inventories are statistically efficient, because they allow mean changes of variables to be detected with power higher than 90%; the highest precision is attained for changes of growing stock volume and the lowest for changes of the share of damaged trees. Two indicators of cost effectiveness also show that the time input spent measuring one variable decreases with the complexity of inventories.
Conclusion There is an increasing need for credible information on forest resources to be used for decision making and national and international policy making. Such information can be cost-efficiently provided through integrated forest resource inventories. PMID:24941120

  17. Effective prediction of biodiversity in tidal flat habitats using an artificial neural network.

    PubMed

    Yoo, Jae-Won; Lee, Yong-Woo; Lee, Chang-Gun; Kim, Chang-Soo

    2013-02-01

    Accurate predictions of benthic macrofaunal biodiversity greatly benefit the efficient planning and management of habitat restoration efforts in tidal flat habitats. Artificial neural network (ANN) prediction models for such biodiversity were developed and tested based on 13 biophysical variables collected from 50 tidal-flat sites along the coast of Korea during 1991-2006. The developed model showed high predictive accuracy during training, cross-validation, and testing. Beyond the training and testing procedures, an independent dataset from a different time period (2007-2010) was used to test the robustness and practical usage of the model. High predictive accuracy on the independent dataset (r = 0.84) validated the network's proper learning of the predictive relationship and its generality. Key influential variables identified by follow-up sensitivity analyses were related to topographic dimension, environmental heterogeneity, and water-column properties. The study demonstrates the successful application of ANNs for accurate prediction of benthic macrofaunal biodiversity and for understanding the dynamics of candidate variables. Copyright © 2012 Elsevier Ltd. All rights reserved.
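    A minimal version of such an ANN regression model, trained with plain gradient descent on synthetic data, looks like the sketch below. The architecture (one hidden layer) and the four stand-in predictors are assumptions for illustration; the paper's network and its 13 real biophysical variables are not reproduced here.

```python
import numpy as np

# Synthetic "site" data: 4 stand-in predictors and a diversity-like target.
rng = np.random.default_rng(5)
X = rng.normal(size=(200, 4))
y = np.tanh(X[:, 0] - 0.5 * X[:, 1]) + 0.3 * X[:, 2]

# One-hidden-layer network with tanh activation.
W1 = 0.5 * rng.normal(size=(4, 8)); b1 = np.zeros(8)
W2 = 0.5 * rng.normal(size=(8,));   b2 = 0.0

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

losses, lr = [], 0.05
for _ in range(300):                      # plain batch gradient descent
    h, pred = forward(X)
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation (the factor 2 of the squared error is folded into lr).
    gW2 = h.T @ err / len(X); gb2 = err.mean()
    gh = np.outer(err, W2) * (1 - h ** 2)
    gW1 = X.T @ gh / len(X); gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
```

    The training loss decreases steadily; a real application would add the cross-validation and independent-dataset testing the abstract describes.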

  18. Multiple-input multiple-output causal strategies for gene selection.

    PubMed

    Bontempi, Gianluca; Haibe-Kains, Benjamin; Desmedt, Christine; Sotiriou, Christos; Quackenbush, John

    2011-11-25

    Traditional strategies for selecting variables in high-dimensional classification problems aim to find sets of maximally relevant variables able to explain the target variations. While these techniques may be effective in terms of generalization accuracy, they often do not reveal direct causes, essentially because high correlation (or relevance) does not imply causation. In this study, we show how to efficiently incorporate causal information into gene selection by moving from a single-input single-output to a multiple-input multiple-output setting. We show in a synthetic case study that a better prioritization of causal variables can be obtained by considering a relevance score that incorporates a causal term. In addition, we show, in a meta-analysis of six publicly available breast cancer microarray datasets, that the improvement also occurs in terms of accuracy. The biological interpretation of the results confirms the potential of a causal approach to gene selection. Integrating causal information into gene selection algorithms is effective both in terms of prediction accuracy and biological interpretation.

  19. High-efficiency in situ resonant inelastic x-ray scattering (iRIXS) endstation at the Advanced Light Source

    NASA Astrophysics Data System (ADS)

    Qiao, Ruimin; Li, Qinghao; Zhuo, Zengqing; Sallis, Shawn; Fuchs, Oliver; Blum, Monika; Weinhardt, Lothar; Heske, Clemens; Pepper, John; Jones, Michael; Brown, Adam; Spucces, Adrian; Chow, Ken; Smith, Brian; Glans, Per-Anders; Chen, Yanxue; Yan, Shishen; Pan, Feng; Piper, Louis F. J.; Denlinger, Jonathan; Guo, Jinghua; Hussain, Zahid; Chuang, Yi-De; Yang, Wanli

    2017-03-01

    An endstation with two high-efficiency soft x-ray spectrographs was developed at Beamline 8.0.1 of the Advanced Light Source, Lawrence Berkeley National Laboratory. The endstation is capable of performing soft x-ray absorption spectroscopy, emission spectroscopy, and, in particular, resonant inelastic soft x-ray scattering (RIXS). Two slit-less variable line-spacing grating spectrographs are installed at different detection geometries. The endstation covers the photon energy range from 80 to 1500 eV. For studying transition-metal oxides, the large detection energy window allows a simultaneous collection of x-ray emission spectra with energies ranging from the O K-edge to the Ni L-edge without moving any mechanical components. The record-high efficiency enables the recording of comprehensive two-dimensional RIXS maps with good statistics within a short acquisition time. By virtue of the large energy window and high throughput of the spectrographs, partial fluorescence yield and inverse partial fluorescence yield signals could be obtained for all transition metal L-edges including Mn. Moreover, the different geometries of these two spectrographs (parallel and perpendicular to the horizontal polarization of the beamline) provide contrasts in RIXS features with two different momentum transfers.

  20. Real-time transmission of digital video using variable-length coding

    NASA Technical Reports Server (NTRS)

    Bizon, Thomas P.; Shalkhauser, Mary Jo; Whyte, Wayne A., Jr.

    1993-01-01

    Huffman coding is a variable-length lossless compression technique where data with a high probability of occurrence is represented with short codewords, while 'not-so-likely' data is assigned longer codewords. Compression is achieved when the high-probability levels occur so frequently that their benefit outweighs any penalty paid when a less likely input occurs. One instance where Huffman coding is extremely effective occurs when data is highly predictable and differential coding can be applied (as with a digital video signal). For that reason, it is desirable to apply this compression technique to digital video transmission; however, special care must be taken in order to implement a communication protocol utilizing Huffman coding. This paper addresses several of the issues relating to the real-time transmission of Huffman-coded digital video over a constant-rate serial channel. Topics discussed include data rate conversion (from variable to a fixed rate), efficient data buffering, channel coding, recovery from communication errors, decoder synchronization, and decoder architectures. A description of the hardware developed to execute Huffman coding and serial transmission is also included. Although this paper focuses on matters relating to Huffman-coded digital video, the techniques discussed can easily be generalized for a variety of applications which require transmission of variable-length data.
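    The codeword-assignment step described above can be sketched in a few lines of Python (illustrative only; the paper's implementation is in hardware):

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Build a Huffman code: frequent symbols get short codewords, rare
    symbols get long ones; the resulting code is prefix-free and lossless."""
    freq = Counter(data)
    if len(freq) == 1:                       # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # Heap items: (frequency, tiebreaker, {symbol: partial codeword}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)      # merge the two rarest subtrees,
        f2, _, c2 = heapq.heappop(heap)      # prepending a branch bit to each
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (f1 + f2, count, merged)); count += 1
    return heap[0][2]

code = huffman_code("aaaabbc")               # 'a' is most frequent
bits = "".join(code[s] for s in "aaaabbc")   # variable-length encoding
```

    For this input the frequent symbol 'a' gets a 1-bit codeword while 'b' and 'c' get 2 bits, so the message encodes in 10 bits instead of the 14 a fixed 2-bit code would need; the variable-rate output is exactly what makes the buffering and rate-conversion issues in the paper necessary.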

  1. A modular approach to large-scale design optimization of aerospace systems

    NASA Astrophysics Data System (ADS)

    Hwang, John T.

    Gradient-based optimization and the adjoint method form a synergistic combination that enables the efficient solution of large-scale optimization problems. Though the gradient-based approach struggles with non-smooth or multi-modal problems, the capability to efficiently optimize up to tens of thousands of design variables provides a valuable design tool for exploring complex tradeoffs and finding unintuitive designs. However, the widespread adoption of gradient-based optimization is limited by the implementation challenges for computing derivatives efficiently and accurately, particularly in multidisciplinary and shape design problems. This thesis addresses these difficulties in two ways. First, to deal with the heterogeneity and integration challenges of multidisciplinary problems, this thesis presents a computational modeling framework that solves multidisciplinary systems and computes their derivatives in a semi-automated fashion. This framework is built upon a new mathematical formulation developed in this thesis that expresses any computational model as a system of algebraic equations and unifies all methods for computing derivatives using a single equation. The framework is applied to two engineering problems: the optimization of a nanosatellite with 7 disciplines and over 25,000 design variables; and simultaneous allocation and mission optimization for commercial aircraft involving 330 design variables, 12 of which are integer variables handled using the branch-and-bound method. In both cases, the framework makes large-scale optimization possible by reducing the implementation effort and code complexity. The second half of this thesis presents a differentiable parametrization of aircraft geometries and structures for high-fidelity shape optimization. Existing geometry parametrizations are not differentiable, or they are limited in the types of shape changes they allow. 
This is addressed by a novel parametrization that smoothly interpolates aircraft components, providing differentiability. An unstructured quadrilateral mesh generation algorithm is also developed to automate the creation of detailed meshes for aircraft structures, and a mesh convergence study is performed to verify that the quality of the mesh is maintained as it is refined. As a demonstration, high-fidelity aerostructural analysis is performed for two unconventional configurations with detailed structures included, and aerodynamic shape optimization is applied to the truss-braced wing, which finds and eliminates a shock in the region bounded by the struts and the wing.
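    The synergy between gradient-based optimization and the adjoint method rests on obtaining derivatives at the cost of one extra linear solve, independent of the number of design variables. A minimal sketch on a generic linear model problem (not the thesis framework), with a finite-difference check:

```python
import numpy as np

# Objective f = c^T u, with state u defined by the linear system A(x) u = b,
# where the scalar design variable x enters through A(x) = A0 + x * dA.
def solve_f(x, A0, dA, b, c):
    A = A0 + x * dA
    u = np.linalg.solve(A, b)
    return c @ u, A, u

rng = np.random.default_rng(4)
n = 6
A0 = np.eye(n) * 4 + 0.1 * rng.normal(size=(n, n))  # well-conditioned base matrix
dA = 0.1 * rng.normal(size=(n, n))
b, c = rng.normal(size=n), rng.normal(size=n)
x = 0.3

f, A, u = solve_f(x, A0, dA, b, c)
lam = np.linalg.solve(A.T, c)            # adjoint solve: A^T lambda = c
grad_adjoint = -lam @ (dA @ u)           # df/dx = -lambda^T (dA/dx) u

eps = 1e-6                               # central finite-difference check
f_p, _, _ = solve_f(x + eps, A0, dA, b, c)
f_m, _, _ = solve_f(x - eps, A0, dA, b, c)
grad_fd = (f_p - f_m) / (2 * eps)
```

    The same single adjoint solve would serve every additional design variable, which is why the approach scales to the tens of thousands of design variables mentioned above.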

  2. Analysis of a Channeled Centerbody Supersonic Inlet for F-15B Flight Research

    NASA Technical Reports Server (NTRS)

    Ratnayake, Nalin A.

    2010-01-01

    The Propulsion Flight Test Fixture at the NASA Dryden Flight Research Center is a unique test platform available for use on the NASA F-15B airplane, tail number 836, as a modular host for a variety of aerodynamics and propulsion research. The first experiment to be flown on the test fixture is the Channeled Centerbody Inlet Experiment. The objectives of this project at Dryden are twofold: 1) flight evaluation of an innovative new approach to variable geometry for high-speed inlets, and 2) flight validation of channeled inlet performance prediction by complex computational fluid dynamics codes. The inlet itself is a fixed-geometry version of a mixed-compression, variable-geometry supersonic inlet developed by TechLand Research, Inc. (North Olmsted, Ohio) to improve the efficiency of supersonic flight at off-nominal conditions. The concept utilizes variable channels in the centerbody section to vary the mass flow of the inlet, enabling efficient operation at a range of flight conditions. This study is particularly concerned with the starting characteristics of the inlet. Computational fluid dynamics studies were shown to align well with analytical predictions, showing the inlet to remain unstarted as designed at the primary test point of Mach 1.5 at an equivalent pressure altitude of 29,500 ft local conditions. Mass-flow-related concerns such as the inlet start problem, as well as inlet efficiency in terms of total pressure loss, are assessed using the flight test geometry.

  3. Balancing Area Coordination: Efficiently Integrating Renewable Energy Into the Grid, Greening the Grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katz, Jessica; Denholm, Paul; Cochran, Jaquelin

    2015-06-01

    Greening the Grid provides technical assistance to energy system planners, regulators, and grid operators to overcome challenges associated with integrating variable renewable energy into the grid. Coordinating balancing area operation can promote more cost and resource efficient integration of variable renewable energy, such as wind and solar, into power systems. This efficiency is achieved by sharing or coordinating balancing resources and operating reserves across larger geographic boundaries.

  4. Feasibility of Large High-Powered Solar Electric Propulsion Vehicles: Issues and Solutions

    NASA Technical Reports Server (NTRS)

    Capadona, Lynn A.; Woytach, Jeffrey M.; Kerslake, Thomas W.; Manzella, David H.; Christie, Robert J.; Hickman, Tyler A.; Schneidegger, Robert J.; Hoffman, David J.; Klem, Mark D.

    2012-01-01

    Human exploration beyond low Earth orbit will require the use of enabling technologies that are efficient, affordable, and reliable. Solar electric propulsion (SEP) has been proposed by NASA's Human Exploration Framework Team as an option to achieve human exploration missions to near Earth objects (NEOs) because of its favorable mass efficiency as compared to traditional chemical systems. This paper describes the unique challenges and technology hurdles associated with developing a large high-power SEP vehicle. A subsystem-level breakdown of factors contributing to the feasibility of SEP as a platform for future exploration missions to NEOs is presented, including overall mission feasibility, trip time variables, propellant management issues, solar array power generation, array structure issues, and other areas that warrant investment in additional technology or engineering development.

  5. Implementation of a SVWP-based laser beam shaping technique for generation of 100-mJ-level picosecond pulses.

    PubMed

    Adamonis, J; Aleknavičius, A; Michailovas, K; Balickas, S; Petrauskienė, V; Gertus, T; Michailovas, A

    2016-10-01

    We present the implementation of an energy-efficient and flexible laser-beam-shaping technique in a high-power, high-energy laser amplifier system. The beam shaping is based on a spatially variable wave plate (SVWP) fabricated by femtosecond laser nanostructuring of glass. We reshaped the initially Gaussian beam into a super-Gaussian (SG) of the 12th order with an efficiency of about 50%. The 12th-order SG beam provided the best compromise between a large fill factor, low diffraction at the edges of the active media, and moderate intensity-distribution modification during free-space propagation. We obtained 150 mJ pulses of 532 nm radiation. The high energy, 85 ps pulse duration, and nearly flat-top spatial profile of the beam make it ideal for pumping optical parametric chirped-pulse amplification systems.

  6. From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation

    DOE PAGES

    Blazewicz, Marek; Hinder, Ian; Koppelman, David M.; ...

    2013-01-01

    Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high-performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.

  7. A flatter gallium profile for high-efficiency Cu(In,Ga)(Se,S)2 solar cell and improved robustness against sulfur-gradient variation

    NASA Astrophysics Data System (ADS)

    Huang, Chien-Yao; Lee, Wen-Chin; Lin, Albert

    2016-09-01

    Co-optimization of the gallium and sulfur profiles in penternary Cu(In,Ga)(Se,S)2 thin-film solar cells and its impact on device performance and variability are investigated in this work. An absorber formation method to modulate the gallium profile under low sulfur incorporation is disclosed, which solves the problem of Ga segregation during selenization. Flatter Ga-profiles, which lack experimental investigation to date, are explored, and an optimal Ga-profile achieving 17.1% conversion efficiency on a 30 cm × 30 cm sub-module without anti-reflection coating is presented. A flatter Ga-profile gives rise to a higher Voc × Jsc through improved bandgap matching to the solar spectrum, which is difficult to achieve with Ga-accumulation. However, a voltage-induced carrier collection loss is found, as evident from the measured voltage-dependent photocurrent characteristics based on a small-signal circuit model. The simulation results reveal that the loss is attributed to the synergistic effect of the detrimental gallium and sulfur gradients, which can deteriorate carrier collection, especially in the quasi-neutral region (QNR). Furthermore, the underlying physics is presented, providing a clear physical picture for the empirical trends in device performance, I-V characteristics, and voltage-dependent photocurrent, which cannot be explained by the standard solar-cell circuit model. The parameter "FGa" and the front sulfur-gradient are found to play critical roles in the trade-off between space-charge-region (SCR) recombination and QNR carrier collection. The co-optimized gallium and sulfur gradients are investigated, and a corresponding process modification for further efficiency enhancement is proposed. In addition, the performance impact of sulfur-gradient variation is studied, and a gallium design for suppressing the sulfur-induced variability is proposed. 
Device performance for varied Ga-profiles with front sulfur-gradients is simulated based on a compact device model. Finally, an exploratory path toward a 20% high-efficiency Ga-profile with robustness against sulfur-induced performance variability is presented.

  8. Multivariate localization methods for ensemble Kalman filtering

    NASA Astrophysics Data System (ADS)

    Roh, S.; Jun, M.; Szunyogh, I.; Genton, M. G.

    2015-05-01

    In ensemble Kalman filtering (EnKF), the small number of ensemble members that it is feasible to use in a practical data assimilation application leads to sampling variability in the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is localization of the covariance estimates. One family of localization techniques is based on taking the Schur (entry-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by discretizing a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables has seldom been considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested by assimilating simulated observations into the bivariate Lorenz 95 model.
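    For a single state variable, the Schur-product construction described above looks as follows. This is a sketch using the standard Gaspari-Cohn taper on a toy 1-D state; the paper's multivariate extensions build on this single-variable case.

```python
import numpy as np

def gaspari_cohn(dist, L):
    """Gaspari-Cohn 5th-order compactly supported correlation function;
    equals 1 at zero separation and is exactly zero beyond distance 2L."""
    r = np.abs(dist) / L
    f = np.zeros_like(r)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r < 2.0)
    f[m1] = (1 - (5/3) * r[m1]**2 + (5/8) * r[m1]**3
             + 0.5 * r[m1]**4 - 0.25 * r[m1]**5)
    f[m2] = (4 - 5 * r[m2] + (5/3) * r[m2]**2 + (5/8) * r[m2]**3
             - 0.5 * r[m2]**4 + (1/12) * r[m2]**5 - 2 / (3 * r[m2]))
    return f

# Small ensemble of a 1-D periodic-like state: the sample covariance is
# noisy and shows spurious long-range correlations.
n, members = 40, 10
rng = np.random.default_rng(2)
ens = rng.normal(size=(members, n))
P = np.cov(ens, rowvar=False)                 # n x n sample covariance

i = np.arange(n)
dist = np.abs(i[:, None] - i[None, :]).astype(float)
C = gaspari_cohn(dist, L=4.0)                 # taper with support radius 2L = 8
P_loc = C * P                                 # Schur (entry-wise) product
```

    The taper leaves the diagonal (variances) untouched and forces all covariances between grid points more than 8 apart to exactly zero, suppressing the spurious sampling correlations.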

  9. New generation of one-dimensional photonic crystal cavities as robust high-efficient frequency converter

    NASA Astrophysics Data System (ADS)

    Parvini, T. S.; Tehranchi, M. M.; Hamidi, S. M.

    2017-07-01

    An effective method is proposed to design finite one-dimensional photonic crystal cavities (PhCCs) as robust, high-efficiency frequency converters. For this purpose, we consider two groups of PhCCs constructed by stacking m nonlinear (LiNbO3) and n linear (air) layers with variable thicknesses. In the first group, the number of linear layers is one less than the number of nonlinear layers, and in the second group two less. The conversion efficiency is calculated as a function of the arrangement and thicknesses of the linear and nonlinear layers using the nonlinear transfer matrix method. Our numerical simulations show that for each group of PhCCs there is a structural formula by which the configurations with the highest efficiency can be constructed for any values of m and n (i.e. any number of layers). The efficient configurations are equivalent to Fabry-Pérot cavities that depend on the relationship between m and n, and the mirrors on the two sides of these cavities can be periodic or nonperiodic. The conversion efficiencies of these designed PhCCs are more than 5 orders of magnitude higher than those of the perfectly periodic structures that satisfy the photonic-bandgap-edge and quasi-phase-matching conditions. Moreover, the results reveal that the conversion efficiencies of Fabry-Pérot cavities with non-periodic mirrors are one order of magnitude higher than those with periodic mirrors. The major physical mechanisms of the enhancement are the quasi-phase-matching effect, the cavity effect induced by dispersive mirrors, and double resonance of the pump and harmonic fields in the defect state. We believe this method will be very beneficial for the design of high-efficiency compact optical frequency converters.

  10. Converting Constant Volume, Multizone Air Handling Systems to Energy Efficient Variable Air Volume Multizone Systems

    DTIC Science & Technology

    2017-10-26

    Final report: Converting Constant Volume, Multizone Air Handling Systems to Energy Efficient Variable Air Volume Multizone Systems. Energy and Water Projects, Project Number EW-201152, ERDC-CERL, 26 October 2017.

  11. A Comparative Analysis of the Efficiency of National Education Systems

    ERIC Educational Resources Information Center

    Thieme, Claudio; Gimenez, Victor; Prior, Diego

    2012-01-01

    The present study assesses the performance of 54 participating countries in PISA 2006. It employs efficiency indicators that relate result variables with resource variables used in the production of educational services. Desirable outputs of educational achievement and undesirable outputs of educational inequality are considered jointly as result…

  12. Preflight calibration of the Imaging Magnetograph eXperiment polarization modulation package based on liquid-crystal variable retarders.

    PubMed

    Uribe-Patarroyo, Néstor; Alvarez-Herrero, Alberto; Martínez Pillet, Valentín

    2012-07-20

    We present the study, characterization, and calibration of the polarization modulation package (PMP) of the Imaging Magnetograph eXperiment (IMaX) instrument, a successful Stokes spectropolarimeter on board the SUNRISE balloon project within the NASA Long Duration Balloon program. IMaX was designed to measure the Stokes parameters of incoming light with a signal-to-noise ratio of at least 10³, using as polarization modulators two nematic liquid-crystal variable retarders (LCVRs). An ad hoc calibration system that reproduced the optical and environmental characteristics of IMaX was designed, assembled, and aligned. The system recreates the optical beam that IMaX receives from SUNRISE with known polarization across the image plane, as well as an optical system with the same characteristics as IMaX. The system was used to calibrate the IMaX PMP in vacuum and at different temperatures, with a thermal control resembling the in-flight one. The efficiencies obtained were very high, near the theoretical maximum values: the total efficiency in vacuum calibration at nominal temperature was 0.972 (1 being the theoretical maximum). The condition number of the demodulation matrix in the same calibration was 0.522 (0.577 theoretical maximum). Some inhomogeneities of the LCVRs were clear during the pixel-by-pixel calibration of the PMP, but it can be concluded that the information from a pixel-by-pixel calibration is sufficient to maintain high efficiencies in spite of the inhomogeneities of the LCVRs.

  13. Identifying Nonprovider Factors Affecting Pediatric Emergency Medicine Provider Efficiency.

    PubMed

    Saleh, Fareed; Breslin, Kristen; Mullan, Paul C; Tillett, Zachary; Chamberlain, James M

    2017-10-31

    The aim of this study was to create a multivariable model of standardized relative value units per hour by adjusting for nonprovider factors that influence efficiency. We obtained productivity data, based on billing records measured in emergency relative value units, for (1) evaluation and management of visits and (2) procedures, for 16 pediatric emergency medicine providers with more than 750 hours worked per year. Eligible shifts were in an urban, academic pediatric emergency department (ED) with 2 sites: a tertiary care main campus and a satellite community site. We used multivariable linear regression to adjust for the impact of shift and pediatric ED characteristics on individual-provider efficiency and then removed variables with minimal effect on productivity from the model. There were 2998 eligible shifts for the 16 providers during a 3-year period. The resulting model included 4 variables when both ED sites were considered together: (1) number of procedures billed by provider, (2) season of the year, (3) shift start time, and (4) day of week. Results improved when we modeled each ED location separately. A 3-variable model using procedures billed by provider, shift start time, and season explained 23% of the variation in provider efficiency at the academic ED site. A 3-variable model using procedures billed by provider, patient arrivals per hour, and shift start time explained 45% of the variation in provider efficiency at the satellite ED site. Several nonprovider factors affect provider efficiency and should be considered when designing productivity-based incentives.
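    The modeling step can be sketched with ordinary least squares on synthetic shift data. All variables and coefficients below are invented for illustration and are not taken from the study:

```python
import numpy as np

# Regress provider efficiency (RVUs/hour) on nonprovider shift factors.
rng = np.random.default_rng(3)
n = 500
procedures = rng.poisson(3, n)            # procedures billed per shift
night = rng.integers(0, 2, n)             # shift start time: 1 = overnight
winter = rng.integers(0, 2, n)            # season indicator
weekend = rng.integers(0, 2, n)           # day of week: 1 = weekend
rvu_per_hr = (6 + 0.8 * procedures - 1.2 * night + 0.9 * winter
              + 0.4 * weekend + rng.normal(0, 1.0, n))

# Design matrix with intercept; ordinary least squares fit.
X = np.column_stack([np.ones(n), procedures, night, winter, weekend])
beta, *_ = np.linalg.lstsq(X, rvu_per_hr, rcond=None)
resid = rvu_per_hr - X @ beta
r2 = 1 - resid.var() / rvu_per_hr.var()   # fraction of variation explained
```

    The fitted coefficients recover the simulated effects (e.g. a negative overnight-shift effect), and r² plays the role of the "percent of variation in provider efficiency explained" reported in the abstract.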

  14. Dopaminergic variants in siblings at high risk for autism: Associations with initiating joint attention.

    PubMed

    Gangi, Devon N; Messinger, Daniel S; Martin, Eden R; Cuccaro, Michael L

    2016-11-01

    Younger siblings of children with autism spectrum disorder (ASD; high-risk siblings) exhibit lower levels of initiating joint attention (IJA; sharing an object or experience with a social partner through gaze and/or gesture) than low-risk siblings of children without ASD. However, high-risk siblings also exhibit substantial variability in this domain. The neurotransmitter dopamine is linked to brain areas associated with reward, motivation, and attention, and common dopaminergic variants have been associated with attention difficulties. We examined whether these common dopaminergic variants, DRD4 and DRD2, explain variability in IJA in high-risk (n = 55) and low-risk (n = 38) siblings. IJA was assessed in the first year during a semi-structured interaction with an examiner. DRD4 and DRD2 genotypes were coded according to associated dopaminergic functioning to create a gene score, with higher scores indicating more genotypes associated with less efficient dopaminergic functioning. Higher dopamine gene scores (indicative of less efficient dopaminergic functioning) were associated with lower levels of IJA in the first year for high-risk siblings, while the opposite pattern emerged in low-risk siblings. Findings suggest differential susceptibility-IJA was differentially associated with dopaminergic functioning depending on familial ASD risk. Understanding genes linked to ASD-relevant behaviors in high-risk siblings will aid in early identification of children at greatest risk for difficulties in these behavioral domains, facilitating targeted prevention and intervention. Autism Res 2016, 9: 1142-1150. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.

  15. A stochastic model for optimizing composite predictors based on gene expression profiles.

    PubMed

    Ramanathan, Murali

    2003-07-01

    This project was done to develop a mathematical model for optimizing composite predictors based on gene expression profiles from DNA arrays and proteomics. The problem was amenable to a formulation and solution analogous to the portfolio optimization problem in mathematical finance: it requires the optimization of a quadratic function subject to linear constraints. The performance of the approach was compared to that of neighborhood analysis using a data set containing cDNA array-derived gene expression profiles from 14 multiple sclerosis patients receiving intramuscular interferon-beta1a. The Markowitz portfolio model predicts that the covariance between genes can be exploited to construct an efficient composite. The model predicts that a composite is not needed for maximizing the mean value of a treatment effect: only a single gene is needed, but the usefulness of the effect measure may be compromised by high variability. The model optimized the composite to yield the highest mean for a given level of variability or the least variability for a given mean level. The choices that meet this optimization criterion lie on a curve in the plot of composite mean versus composite variability referred to as the "efficient frontier." When a composite is constructed using the model, it outperforms the composite constructed using the neighborhood analysis method. The Markowitz portfolio model may find potential applications in constructing composite biomarkers and in the pharmacogenomic modeling of treatment effects derived from gene expression endpoints.
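    The portfolio analogy can be made concrete: for a fixed gene set, the minimum-variance composite with weights summing to one has a closed form, w = S⁻¹1 / (1ᵀS⁻¹1). A sketch with made-up covariance values, not the study's cDNA data:

```python
import numpy as np

# Sketch of the Markowitz-style composite: minimize the variance of a
# weighted sum of gene-expression changes subject to weights summing to 1.
# The covariance matrix below is invented for illustration.
S = np.array([
    [0.9, 0.2, 0.1],
    [0.2, 1.1, 0.3],
    [0.1, 0.3, 0.8],
])
ones = np.ones(3)
w = np.linalg.solve(S, ones)   # proportional to S^{-1} 1
w /= ones @ w                  # normalize so the weights sum to 1

var_composite = w @ S @ w      # variance of the optimal composite
print(w, var_composite)
```

    Because any single gene is itself a feasible (one-hot) weighting, the optimized composite's variance can never exceed the variance of the best single gene, which is the efficiency gain the model exploits.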

  16. Multi-level emulation of complex climate model responses to boundary forcing data

    NASA Astrophysics Data System (ADS)

    Tran, Giang T.; Oliver, Kevin I. C.; Holden, Philip B.; Edwards, Neil R.; Sóbester, András; Challenor, Peter

    2018-04-01

    Climate model components involve both high-dimensional input and output fields. It is desirable to efficiently generate spatio-temporal outputs of these models for applications in integrated assessment modelling or to assess the statistical relationship between such sets of inputs and outputs, for example in uncertainty analysis. However, the need for efficiency often compromises the fidelity of output through the use of low complexity models. Here, we develop a technique which combines statistical emulation with a dimensionality reduction technique to emulate a wide range of outputs from an atmospheric general circulation model, PLASIM, as functions of the boundary forcing prescribed by the ocean component of a lower complexity climate model, GENIE-1. Although accurate and detailed spatial information on atmospheric variables such as precipitation and wind speed is well beyond the capability of GENIE-1's energy-moisture balance model of the atmosphere, this study demonstrates that the output of this model is useful in predicting PLASIM's spatio-temporal fields through multi-level emulation. Meaningful information from the fast model, GENIE-1, was extracted by utilising the correlation between variables of the same type in the two models and between variables of different types in PLASIM. We present here the construction and validation of several PLASIM variable emulators and discuss their potential use in developing a hybrid model with statistical components.
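    A minimal sketch of emulation with dimensionality reduction on synthetic data: project the high-dimensional output field onto its leading principal components, then fit a cheap map from inputs to PC scores. The linear emulator and toy sizes are illustrative; the study emulates PLASIM fields from GENIE-1 output with more sophisticated emulators:

```python
import numpy as np

# Synthetic ensemble stands in for climate-model runs: 40 runs, 3 forcing
# inputs, a 500-point output field that is (nearly) low-rank.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
B = rng.normal(size=(3, 500))
Y = X @ B + 0.01 * rng.normal(size=(40, 500))

# Dimension reduction: SVD of the centered output field.
Ym = Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Y - Ym, full_matrices=False)
k = 3
scores = U[:, :k] * s[:k]                    # PC scores of each run

# Emulator: linear least-squares map from (centered) inputs to PC scores.
Xm = X.mean(axis=0)
W, *_ = np.linalg.lstsq(X - Xm, scores, rcond=None)

# Emulate the full field for a new input and check the reconstruction.
x_new = np.array([[0.5, -1.0, 0.2]])
y_hat = ((x_new - Xm) @ W) @ Vt[:k] + Ym
y_true = x_new @ B
rel_err = np.linalg.norm(y_hat - y_true) / np.linalg.norm(y_true)
print(rel_err)
```

    The emulator never touches the 500-dimensional field directly; it predicts only k scores, which is what makes the approach affordable for spatio-temporal climate output.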

  17. Coolant Design System for Liquid Propellant Aerospike Engines

    NASA Astrophysics Data System (ADS)

    McConnell, Miranda; Branam, Richard

    2015-11-01

    Liquid propellant rocket engines burn at incredibly high temperatures, making it difficult to design an effective coolant system. These particular engines prove to be extremely useful by powering the rocket with a variable thrust that is ideal for space travel. When combined with aerospike engine nozzles, which provide maximum thrust efficiency, this class of rockets offers a promising future for rocketry. In order to troubleshoot the problems that high combustion chamber temperatures pose, this research took a computational approach to heat analysis. Chambers milled into the combustion chamber walls, lined by a copper cover, were tested for their efficiency in cooling the hot copper wall. We developed our own MATLAB code to explore the effect of various aspect ratios and coolants on the maximum wall temperature. The code uses a nodal temperature analysis with conduction and convection equations and assumes no internal heat generation. This heat transfer research shows that oxygen is a better coolant than water and that higher aspect ratios are less efficient at cooling. This project was funded by NSF REU Grant 1358991.
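    For a single wall element in steady state with no internal heat generation, the nodal balance reduces to a series of convection and conduction resistances. A sketch with illustrative numbers, not the study's geometry or operating conditions:

```python
# Steady-state energy balance across one wall node: hot combustion gas
# convects into a copper wall, heat conducts through it, and the coolant
# channel convects it away. All values below are illustrative.
T_gas, T_cool = 3200.0, 90.0    # K: gas and coolant temperatures
h_gas, h_cool = 2.0e4, 5.0e4    # W/(m^2 K): convection coefficients
k_cu, L = 390.0, 2.0e-3         # W/(m K) copper conductivity; 2 mm wall

# Series thermal resistances per unit area: convection, conduction, convection.
R = 1 / h_gas + L / k_cu + 1 / h_cool
q = (T_gas - T_cool) / R        # heat flux through the wall, W/m^2

# Hot-side wall temperature from the gas-side convection balance.
T_wall_hot = T_gas - q / h_gas
print(q, T_wall_hot)
```

    A full nodal analysis tiles the wall with many such balances and iterates them to convergence; the single-node version above shows why a higher coolant-side coefficient (more channel surface per unit width, i.e. aspect ratio) lowers the hot-wall temperature.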

  18. Research on Design of Tri-color Shift Device

    NASA Astrophysics Data System (ADS)

    Xu, Ping; Yuan, Xia; Huang, Haixuan; Yang, Tuo; Huang, Yanyan; Zhu, Tengfei; Tang, Shaotuo; Peng, Wenda

    2016-11-01

    An azimuth-tuned tri-color shift device based on an embedded subwavelength one-dimensional rectangular structure with a single period is proposed. High reflection efficiencies for both TE and TM polarizations can be achieved simultaneously. Under an oblique incidence of 60°, the reflection efficiencies can reach up to 85%, 86%, and 100% in the blue (azimuth of 24°), green (azimuth of 63°), and red (azimuth of 90°) wavebands, respectively. Furthermore, the influence of device period, groove depth, coating thickness, and incident angle on the reflection characteristics is investigated, and the feasibility of the device is demonstrated. The proposed device realizes tri-color shift for natural light using a simple structure. It exhibits high efficiency as well as good security. Such a device can be fabricated by existing embossing and coating techniques. These results go beyond the limits of bi-color shift anti-counterfeiting technology and have broad applications in the field of optically variable image security.

  19. Design and testing of a coil-unit barrel for helical coil electromagnetic launcher

    NASA Astrophysics Data System (ADS)

    Yang, Dong; Liu, Zhenxiang; Shu, Ting; Yang, Lijia; Ouyang, Jianming

    2018-01-01

    A coil-unit barrel for a helical coil electromagnetic launcher is described. It offers high structural strength and flexible adjustability. It is convenient to replace damaged coil units and easy to adjust the number of turns in the stator coils due to the modular design. In our experiments, the highest velocity measured for a 4.5-kg projectile is 47.3 m/s, and the mechanical reinforcement of the launcher could bear a 35 kA peak current. The relationship between the energy conversion efficiency and the inductance gradient of the launcher is also studied. In the region of low inductance gradient, the efficiency is positively correlated with the inductance gradient. However, in the region of high inductance gradient, inter-turn arc erosion becomes a major factor limiting the efficiency and velocity of the launcher. This modular barrel enables further studies of inter-turn arcing and of variable-inductance-gradient helical coil launchers.
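    The reported muzzle figures fix the projectile's kinetic energy; the energy conversion efficiency then depends on the stored electrical energy, which the abstract does not give. A quick check using an assumed (hypothetical) stored energy:

```python
# Muzzle kinetic energy from the reported figures: 4.5 kg at 47.3 m/s.
m, v = 4.5, 47.3
ke = 0.5 * m * v ** 2      # ~5.0 kJ at the muzzle

# The stored energy below is an assumption for illustration only; it is
# NOT taken from the paper, so this efficiency is purely hypothetical.
E_stored = 50e3            # J, assumed capacitor-bank energy
eff = ke / E_stored
print(ke, eff)
```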

  20. Design and testing of a coil-unit barrel for helical coil electromagnetic launcher.

    PubMed

    Yang, Dong; Liu, Zhenxiang; Shu, Ting; Yang, Lijia; Ouyang, Jianming

    2018-01-01

    A coil-unit barrel for a helical coil electromagnetic launcher is described. It offers high structural strength and flexible adjustability. It is convenient to replace damaged coil units and easy to adjust the number of turns in the stator coils due to the modular design. In our experiments, the highest velocity measured for a 4.5-kg projectile is 47.3 m/s, and the mechanical reinforcement of the launcher could bear a 35 kA peak current. The relationship between the energy conversion efficiency and the inductance gradient of the launcher is also studied. In the region of low inductance gradient, the efficiency is positively correlated with the inductance gradient. However, in the region of high inductance gradient, inter-turn arc erosion becomes a major factor limiting the efficiency and velocity of the launcher. This modular barrel enables further studies of inter-turn arcing and of variable-inductance-gradient helical coil launchers.

  1. Data-driven Climate Modeling and Prediction

    NASA Astrophysics Data System (ADS)

    Kondrashov, D. A.; Chekroun, M.

    2016-12-01

    Global climate models aim to simulate a broad range of spatio-temporal scales of climate variability with a state vector having many millions of degrees of freedom. On the other hand, while detailed weather prediction out to a few days requires high numerical resolution, it is fairly clear that a major fraction of large-scale climate variability can be predicted in a much lower-dimensional phase space. Low-dimensional models can simulate and predict this fraction of climate variability, provided they are able to account for linear and nonlinear interactions between the modes representing large scales of climate dynamics, as well as their interactions with a much larger number of modes representing fast and small scales. This presentation will highlight several new applications of the Multilayered Stochastic Modeling (MSM) framework [Kondrashov, Chekroun and Ghil, 2015], which has abundantly proven its efficiency in the modeling and real-time forecasting of various climate phenomena. MSM is a data-driven inverse modeling technique that aims to obtain a low-order nonlinear system of prognostic equations driven by stochastic forcing, and estimates both the dynamical operator and the properties of the driving noise from multivariate time series of observations or a high-end model's simulation. MSM leads to a system of stochastic differential equations (SDEs) involving hidden (auxiliary) variables of fast-small scales ranked by layers, which interact with the macroscopic (observed) variables of large-slow scales to model the dynamics of the latter, and thus convey memory effects. New MSM climate applications focus on development of computationally efficient low-order models by using data-adaptive decomposition methods that convey memory effects by time-embedding techniques, such as Multichannel Singular Spectrum Analysis (M-SSA) [Ghil et al. 2002] and the recently developed Data-Adaptive Harmonic (DAH) decomposition method [Chekroun and Kondrashov, 2016].
    In particular, new results from DAH-MSM modeling and prediction of Arctic sea ice, as well as decadal predictions of near-surface Earth temperatures, will be presented.
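    A minimal sketch of the multilayer idea behind MSM: the observed slow variable is driven by a hidden fast variable, itself a stochastic process coupled back to the observed one, so that the observed dynamics acquire memory. Coefficients below are invented; in MSM they are estimated from data:

```python
import numpy as np

# Two-layer linear stochastic model (Euler-Maruyama discretization):
#   main layer:   dx = (-a*x + r) dt
#   hidden layer: dr = (-g*r + b*x) dt + sigma dW
rng = np.random.default_rng(1)
dt, n = 0.01, 20000
a, g, b, sigma = 0.5, 2.0, 0.4, 0.3   # illustrative, stable coefficients

x = np.empty(n); r = np.empty(n)
x[0] = r[0] = 0.0
for t in range(n - 1):
    x[t + 1] = x[t] + dt * (-a * x[t] + r[t])            # observed layer
    r[t + 1] = (r[t] + dt * (-g * r[t] + b * x[t])
                + sigma * np.sqrt(dt) * rng.normal())    # hidden layer
print(x.std(), r.std())
```

    Integrating out r yields an equation for x alone with a memory kernel and correlated noise, which is exactly the effect the hidden layers of MSM are designed to capture.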

  2. Evolutionary search for new high-k dielectric materials: methodology and applications to hafnia-based oxides.

    PubMed

    Zeng, Qingfeng; Oganov, Artem R; Lyakhov, Andriy O; Xie, Congwei; Zhang, Xiaodong; Zhang, Jin; Zhu, Qiang; Wei, Bingqing; Grigorenko, Ilya; Zhang, Litong; Cheng, Laifei

    2014-02-01

    High-k dielectric materials are important as gate oxides in microelectronics and as potential dielectrics for capacitors. In order to enable computational discovery of novel high-k dielectric materials, we propose a fitness model (energy storage density) that includes the dielectric constant, bandgap, and intrinsic breakdown field. This model, used as a fitness function in conjunction with first-principles calculations and the global optimization evolutionary algorithm USPEX, efficiently leads to practically important results. We found a number of high-fitness structures of SiO2 and HfO2, some of which correspond to known phases and some of which are new. The results allow us to propose characteristics (genes) common to high-fitness structures: the coordination polyhedra and their degree of distortion. Our variable-composition searches in the HfO2-SiO2 system uncovered several high-fitness states. This hybrid algorithm opens up a new avenue for discovering novel high-k dielectrics with both fixed and variable compositions, and will speed up the process of materials discovery.
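    The fitness the authors describe combines dielectric constant, bandgap, and intrinsic breakdown field. A simplified sketch of the electrostatic energy-density part only (the bandgap term is omitted, and the candidate values are invented, not computed structures):

```python
# Energy storage density of a linear dielectric: F = 0.5 * eps0 * k * Ebd^2.
# It rewards both a high dielectric constant k and a high breakdown field.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def fitness(k, ebd_v_per_m):
    """Electrostatic energy density (J/m^3) at the breakdown field."""
    return 0.5 * EPS0 * k * ebd_v_per_m ** 2

# Illustrative trade-off: low-k/high-breakdown vs high-k/low-breakdown.
candidates = {
    "SiO2-like": fitness(3.9, 1.0e9),
    "HfO2-like": fitness(25.0, 0.4e9),
}
print(candidates)
```

    Because the breakdown field enters squared, a modest drop in breakdown strength can cancel a large gain in dielectric constant, which is why the search optimizes the combined fitness rather than k alone.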

  3. Rapid calibrated high-resolution hyperspectral imaging using tunable laser source

    NASA Astrophysics Data System (ADS)

    Nguyen, Lam K.; Margalith, Eli

    2009-05-01

    We present a novel hyperspectral imaging technique based on tunable laser technology. By replacing the broadband source and tunable filters of a typical NIR imaging instrument, several advantages are realized, including high spectral resolution, highly variable fields of view, fast scan rates, high signal-to-noise ratio, and the ability to use optical fiber for efficient and flexible sample illumination. With this technique, high-resolution, calibrated hyperspectral images over the NIR range can be acquired in seconds. The performance of the system's features will be demonstrated on two example applications: detecting melamine contamination in wheat gluten and separating bovine protein from wheat protein in cattle feed.

  4. Highly diverse variable number tandem repeat loci in the E. coli O157:H7 and O55:H7 genomes for high-resolution molecular typing.

    PubMed

    Keys, C; Kemper, S; Keim, P

    2005-01-01

    Evaluation of the Escherichia coli genome for variable number tandem repeat (VNTR) loci in order to provide a subtyping tool with greater discriminatory power and higher throughput. Twenty-nine putative VNTR loci were identified from the E. coli genomic sequence. Their variability was validated by characterizing the number of repeats at each locus in a set of 56 E. coli O157:H7/HN and O55:H7 isolates. An optimized multiplex assay system was developed to facilitate high-capacity analysis. Locus diversity values ranged from 0.23 to 0.95, while the number of alleles ranged from two to 29. These multiple-locus VNTR analysis (MLVA) data were used to describe genetic relationships among these isolates and were compared with PFGE (pulsed-field gel electrophoresis) data from a subset of the same strains. Genetic similarity values were highly correlated between the two approaches, though MLVA was capable of discriminating amongst closely related isolates even when PFGE similarity values were equal to 1.0. Highly variable VNTR loci exist in the E. coli O157:H7 genome and are excellent estimators of genetic relationships, in particular for closely related isolates. Escherichia coli O157:H7 MLVA offers a complementary analysis to the more traditional PFGE approach. Application of MLVA to an outbreak cluster could generate superior molecular epidemiology and result in a more effective public health response.
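    Locus diversity values of this kind are commonly computed as Nei's diversity index, D = 1 - Σp², over the observed allele frequencies: 0 for a monomorphic locus, approaching 1 when many alleles are equally common. A sketch on made-up repeat counts, not the study's isolates:

```python
from collections import Counter

# Nei's diversity index for one VNTR locus: D = 1 - sum_i p_i^2,
# where p_i is the frequency of the i-th observed allele (repeat count).
def nei_diversity(alleles):
    n = len(alleles)
    return 1.0 - sum((c / n) ** 2 for c in Counter(alleles).values())

# Hypothetical repeat counts observed at one locus in 10 isolates.
locus = [7, 7, 9, 9, 9, 12, 13, 13, 14, 7]
print(round(nei_diversity(locus), 3))  # 0.76
```

    A locus scoring near the paper's upper value of 0.95 would need many alleles at similar frequencies, which is what makes such loci powerful for discriminating closely related isolates.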

  5. Pregnancy care in Germany, France and Japan: an international comparison of quality and efficiency using structural equation modelling and data envelopment analysis.

    PubMed

    Rump, A; Schöffski, O

    2018-07-01

    Healthcare systems in developed countries may differ in financing and organisation. Maternity services and delivery are particularly influenced by culture and habits. In this study, we compared the pregnancy care quality and efficiency of the German, French and Japanese healthcare systems. Comparative healthcare data analysis. In an international comparison based mainly on Organisation for Economic Co-operation and Development (OECD) indicators, we analysed the health resources significantly affecting pregnancy care and quantified its quality using structural equation modelling. Pregnancy care efficiency was studied using data envelopment analysis. Pregnancy output was quantified overall or separately using indicators based on perinatal, neonatal or maternal mortality. The density of obstetricians, midwives and paediatricians and the average annual number of doctor's consultations were positively associated with pregnancy outcome, whereas the caesarean delivery rate was negatively associated. In the international comparison at an aggregate level, Japan ranked first for pregnancy care quality, whereas Germany and France were positioned in the second part of the ranking. Similarly, at an aggregate level, the Japanese system showed pure technical efficiency, whereas Germany and France revealed mediocre efficiency results. Perinatal, neonatal and maternal care quality and efficiency taken separately were quite similar and mediocre in Germany and France. In Japan, there was a marked difference between a highly effective and efficient care of the unborn and newborn baby, and a rather mediocre quality and efficiency of maternal care. Germany, France, and Japan have to struggle with quality and efficiency issues that are nevertheless different: in Germany and France, disappointing pregnancy care quality does not correspond to the high healthcare expenditures and leads to low technical efficiency. The Japanese system shows a high variability in outcomes and technical efficiency.
Maternal care quality during delivery seems to be a particular issue that could possibly be addressed by legally implementing quality assurance systems with stricter rules for reimbursement in obstetrics. Copyright © 2018 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
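    In the simplest single-input, single-output case, the data envelopment analysis efficiency score reduces to each unit's output-to-input ratio relative to the best ratio on the frontier. A sketch with invented numbers (full DEA solves a linear program per unit and handles multiple inputs and outputs):

```python
# Toy single-input/single-output DEA-style efficiency: each country's
# quality-per-spending ratio, scaled so the frontier unit scores 1.0.
# Spending indices and quality scores below are invented, not OECD data.
data = {                       # (input: spending index, output: quality score)
    "Japan":   (1.0, 0.95),
    "Germany": (1.3, 0.80),
    "France":  (1.2, 0.78),
}
ratios = {c: out / inp for c, (inp, out) in data.items()}
best = max(ratios.values())
efficiency = {c: r / best for c, r in ratios.items()}
print(efficiency)              # the frontier unit scores 1.0
```

    The toy frontier places one unit at efficiency 1.0 and scores the others against it, mirroring the paper's finding of a technically efficient Japanese system with less efficient peers.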

  6. Automatic design of basin-specific drought indexes for highly regulated water systems

    NASA Astrophysics Data System (ADS)

    Zaniolo, Marta; Giuliani, Matteo; Castelletti, Andrea Francesco; Pulido-Velazquez, Manuel

    2018-04-01

    Socio-economic costs of drought are progressively increasing worldwide due to ongoing alterations of hydro-meteorological regimes induced by climate change. Although drought management is largely studied in the literature, traditional drought indexes often fail at detecting critical events in highly regulated systems, where natural water availability is conditioned by the operation of water infrastructures such as dams, diversions, and pumping wells. Here, ad hoc index formulations are usually adopted based on empirical combinations of several, supposed-to-be significant, hydro-meteorological variables. These customized formulations, however, while effective in the design basin, can hardly be generalized and transferred to different contexts. In this study, we contribute FRIDA (FRamework for Index-based Drought Analysis), a novel framework for the automatic design of basin-customized drought indexes. In contrast to ad hoc empirical approaches, FRIDA is fully automated, generalizable, and portable across different basins. FRIDA builds an index representing a surrogate of the drought conditions of the basin, computed by combining all the relevant available information about the water circulating in the system, identified by means of a feature extraction algorithm. We used the Wrapper for Quasi-Equally Informative Subset Selection (W-QEISS), which features a multi-objective evolutionary algorithm to find Pareto-efficient subsets of variables by maximizing the wrapper accuracy, minimizing the number of selected variables, and optimizing relevance and redundancy of the subset. The preferred variable subset is selected among the efficient solutions and used to formulate the final index according to alternative model structures.
We apply FRIDA to the case study of the Jucar river basin (Spain), a drought-prone and highly regulated Mediterranean water resource system, where an advanced drought management plan relying on the formulation of an ad hoc state index is used for triggering drought management measures. The state index was constructed empirically with a trial-and-error process begun in the 1980s and finalized in 2007, guided by the experts from the Confederación Hidrográfica del Júcar (CHJ). Our results show that the automated variable selection outcomes align with CHJ's 25-year-long empirical refinement. In addition, the resultant FRIDA index outperforms the official State Index in terms of accuracy in reproducing the target variable and cardinality of the selected inputs set.
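    A greedy forward-selection loop on synthetic data illustrates the wrapper idea: repeatedly add the hydro-meteorological variable that most improves a simple model's fit to the target. W-QEISS itself uses a multi-objective evolutionary search and also scores relevance and redundancy; this sketch keeps only the accuracy objective:

```python
import numpy as np

# Synthetic "basin" data: 8 candidate variables, but only variables 0 and 3
# actually drive the target (e.g. a drought state variable).
rng = np.random.default_rng(2)
n, p = 200, 8
X = rng.normal(size=(n, p))
y = 2 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.normal(size=n)

def r2(cols):
    """Wrapper accuracy: R^2 of a linear fit on the chosen columns."""
    A = np.c_[np.ones(n), X[:, cols]]
    resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return 1 - resid.var() / y.var()

selected, remaining = [], list(range(p))
for _ in range(2):                       # greedily pick two variables
    best = max(remaining, key=lambda j: r2(selected + [j]))
    selected.append(best)
    remaining.remove(best)
print(sorted(selected))                  # recovers the two true drivers
```

    Keeping the number of selected variables small while preserving accuracy is exactly the trade-off the Pareto front in W-QEISS exposes.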

  7. Regularization Methods for High-Dimensional Instrumental Variables Regression With an Application to Genetical Genomics

    PubMed Central

    Lin, Wei; Feng, Rui; Li, Hongzhe

    2014-01-01

    In genetical genomics studies, it is important to jointly analyze gene expression data and genetic variants in exploring their associations with complex traits, where the dimensionality of gene expressions and genetic variants can both be much larger than the sample size. Motivated by such modern applications, we consider the problem of variable selection and estimation in high-dimensional sparse instrumental variables models. To overcome the difficulty of high dimensionality and unknown optimal instruments, we propose a two-stage regularization framework for identifying and estimating important covariate effects while selecting and estimating optimal instruments. The methodology extends the classical two-stage least squares estimator to high dimensions by exploiting sparsity using sparsity-inducing penalty functions in both stages. The resulting procedure is efficiently implemented by coordinate descent optimization. For the representative L1 regularization and a class of concave regularization methods, we establish estimation, prediction, and model selection properties of the two-stage regularized estimators in the high-dimensional setting where the dimensionalities of covariates and instruments are both allowed to grow exponentially with the sample size. The practical performance of the proposed method is evaluated by simulation studies and its usefulness is illustrated by an analysis of mouse obesity data. Supplementary materials for this article are available online. PMID:26392642
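    The classical two-stage least squares estimator that the paper extends can be sketched in low dimensions: stage 1 projects the endogenous covariate onto the instruments, stage 2 regresses the outcome on the projection. The high-dimensional version adds sparsity-inducing penalties in both stages; the simulation below is illustrative only:

```python
import numpy as np

# Simulated endogeneity: a hidden confounder u drives both x and y, so
# plain OLS of y on x is biased, but instruments z restore consistency.
rng = np.random.default_rng(3)
n = 5000
z = rng.normal(size=(n, 2))                    # instruments
u = rng.normal(size=n)                         # unobserved confounder
x = z @ np.array([1.0, -0.5]) + u + 0.5 * rng.normal(size=n)
y = 2.0 * x + u + 0.5 * rng.normal(size=n)     # true causal effect = 2

# Stage 1: project the endogenous covariate onto the instruments.
g, *_ = np.linalg.lstsq(z, x, rcond=None)
x_hat = z @ g

# Stage 2: regress the outcome on the projected covariate.
beta = (x_hat @ y) / (x_hat @ x_hat)
print(beta)                                    # close to 2 despite confounding
```

    When instruments and covariates number in the thousands, both least-squares stages become ill-posed, which is where the paper's L1 and concave penalties come in.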

  8. Electroscavenging and Inferred Effects on Precipitation Efficiency

    NASA Astrophysics Data System (ADS)

    Tinsley, B. A.

    2002-12-01

    The evaporation of charged droplets leaves charged aerosol particles that can act as cloud condensation nuclei and ice forming nuclei. New calculations of scavenging of such charged particles by droplets have been made, that now include the effects of inertia and variable particle density, and variable cloud altitudes ranging into the stratosphere. They show that the Greenfield Gap closes for particles of low density, or for high altitude clouds, or for a few hundred elementary charges on the particles. A few tens of elementary charges on the particles gives collision efficiencies typically an order of magnitude greater than that due to phoretic forces alone. The numerical integrations show that electroscavenging of ice forming nuclei leading to contact ice nucleation is competitive with deposition ice nucleation, for cloud top temperatures in the range 0 °C to -15 °C and droplet size distributions extending past 10-15 μm radius. This implies that for marine stratocumulus or nimbostratus clouds with tops just below freezing temperature, where precipitation is initiated by the Wegener-Bergeron-Findeisen process, the precipitation efficiency can be affected by the amount of charge on the ice-forming nuclei. This in turn depends on the extent of the (weak) electrification of the cloud. Similarly, electroscavenging of condensation nuclei can increase the average droplet size in successive cycles of cloud evaporation and formation, and can also affect precipitation efficiency.

  9. Harnessing high-dimensional hyperentanglement through a biphoton frequency comb

    NASA Astrophysics Data System (ADS)

    Xie, Zhenda; Zhong, Tian; Shrestha, Sajan; Xu, Xinan; Liang, Junlin; Gong, Yan-Xiao; Bienfang, Joshua C.; Restelli, Alessandro; Shapiro, Jeffrey H.; Wong, Franco N. C.; Wong, Chee Wei

    2015-08-01

    Quantum entanglement is a fundamental resource for secure information processing and communications, and hyperentanglement or high-dimensional entanglement has been separately proposed for its high data capacity and error resilience. The continuous-variable nature of the energy-time entanglement makes it an ideal candidate for efficient high-dimensional coding with minimal limitations. Here, we demonstrate the first simultaneous high-dimensional hyperentanglement using a biphoton frequency comb to harness the full potential in both the energy and time domain. Long-postulated Hong-Ou-Mandel quantum revival is exhibited, with up to 19 time-bins and 96.5% visibilities. We further witness the high-dimensional energy-time entanglement through Franson revivals, observed periodically at integer time-bins, with 97.8% visibility. This qudit state is observed to simultaneously violate the generalized Bell inequality by up to 10.95 standard deviations while observing recurrent Clauser-Horne-Shimony-Holt S-parameters up to 2.76. Our biphoton frequency comb provides a platform for photon-efficient quantum communications towards the ultimate channel capacity through energy-time-polarization high-dimensional encoding.
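    The quoted S-parameters refer to the CHSH form of the Bell inequality. For an ideal maximally entangled state, the two-photon correlation at analyzer angles (a, b) is E = cos(2(a - b)), and the standard settings give S = 2√2 ≈ 2.83, above the classical bound of 2; the experiment reports up to 2.76:

```python
import math

# CHSH S-parameter for an ideal maximally entangled photon pair.
E = lambda a, b: math.cos(2 * (a - b))       # quantum correlation
a, ap = 0.0, math.pi / 4                     # Alice's two analyzer settings
b, bp = math.pi / 8, 3 * math.pi / 8         # Bob's two analyzer settings

S = abs(E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp))
print(S)   # 2*sqrt(2) ~ 2.828, the Tsirelson bound
```

    Any local-hidden-variable model is limited to S <= 2, so measured values above 2, such as the paper's 2.76, certify the entanglement.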

  10. Multidrug Resistance among New Tuberculosis Cases: Detecting Local Variation through Lot Quality-Assurance Sampling

    PubMed Central

    Lynn Hedt, Bethany; van Leth, Frank; Zignol, Matteo; Cobelens, Frank; van Gemert, Wayne; Viet Nhung, Nguyen; Lyepshina, Svitlana; Egwaga, Saidi; Cohen, Ted

    2012-01-01

    Background Current methodology for multidrug-resistant TB (MDR TB) surveys endorsed by the World Health Organization provides estimates of MDR TB prevalence among new cases at the national level. On the aggregate, local variation in the burden of MDR TB may be masked. This paper investigates the utility of applying lot quality-assurance sampling to identify geographic heterogeneity in the proportion of new cases with multidrug resistance. Methods We simulated the performance of lot quality-assurance sampling by applying these classification-based approaches to data collected in the most recent TB drug-resistance surveys in Ukraine, Vietnam, and Tanzania. We explored three classification systems—two-way static, three-way static, and three-way truncated sequential sampling—at two sets of thresholds: low MDR TB = 2%, high MDR TB = 10%, and low MDR TB = 5%, high MDR TB = 20%. Results The lot quality-assurance sampling systems identified local variability in the prevalence of multidrug resistance in both high-resistance (Ukraine) and low-resistance settings (Vietnam). In Tanzania, prevalence was uniformly low, and the lot quality-assurance sampling approach did not reveal variability. The three-way classification systems provide additional information, but sample sizes may not be obtainable in some settings. New rapid drug-sensitivity testing methods may allow truncated sequential sampling designs and early stopping within static designs, producing even greater efficiency gains. Conclusions Lot quality-assurance sampling study designs may offer an efficient approach for collecting critical information on local variability in the burden of multidrug-resistant TB. Before this methodology is adopted, programs must determine appropriate classification thresholds, the most useful classification system, and appropriate weighting if unbiased national estimates are also desired. PMID:22249242
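    A two-way static LQAS rule and its operating characteristics follow directly from the binomial distribution: sample n new cases, test each for multidrug resistance, and classify the area as high-burden when the count reaches a decision threshold d. The n and d below are illustrative, not the surveys' design values:

```python
from math import comb

# Binomial upper tail: P(X >= d) for X ~ Binomial(n, p).
def p_at_least(n, d, p):
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(d, n + 1))

n, d = 50, 4                  # sample size and decision threshold (illustrative)
p_low, p_high = 0.02, 0.10    # the paper's 2% / 10% prevalence thresholds

alpha = p_at_least(n, d, p_low)       # risk of calling a low-burden area high
beta = 1 - p_at_least(n, d, p_high)   # risk of calling a high-burden area low
print(alpha, beta)
```

    Choosing n and d is a trade-off between the two misclassification risks; the truncated sequential designs mentioned above reduce the expected sample size by stopping as soon as the classification is determined.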

  11. Multidrug resistance among new tuberculosis cases: detecting local variation through lot quality-assurance sampling.

    PubMed

    Hedt, Bethany Lynn; van Leth, Frank; Zignol, Matteo; Cobelens, Frank; van Gemert, Wayne; Nhung, Nguyen Viet; Lyepshina, Svitlana; Egwaga, Saidi; Cohen, Ted

    2012-03-01

    Current methodology for multidrug-resistant tuberculosis (MDR TB) surveys endorsed by the World Health Organization provides estimates of MDR TB prevalence among new cases at the national level. On the aggregate, local variation in the burden of MDR TB may be masked. This paper investigates the utility of applying lot quality-assurance sampling to identify geographic heterogeneity in the proportion of new cases with multidrug resistance. We simulated the performance of lot quality-assurance sampling by applying these classification-based approaches to data collected in the most recent TB drug-resistance surveys in Ukraine, Vietnam, and Tanzania. We explored three classification systems (two-way static, three-way static, and three-way truncated sequential sampling) at two sets of thresholds: low MDR TB = 2%, high MDR TB = 10%, and low MDR TB = 5%, high MDR TB = 20%. The lot quality-assurance sampling systems identified local variability in the prevalence of multidrug resistance in both high-resistance (Ukraine) and low-resistance settings (Vietnam). In Tanzania, prevalence was uniformly low, and the lot quality-assurance sampling approach did not reveal variability. The three-way classification systems provide additional information, but sample sizes may not be obtainable in some settings. New rapid drug-sensitivity testing methods may allow truncated sequential sampling designs and early stopping within static designs, producing even greater efficiency gains. Lot quality-assurance sampling study designs may offer an efficient approach for collecting critical information on local variability in the burden of multidrug-resistant TB. Before this methodology is adopted, programs must determine appropriate classification thresholds, the most useful classification system, and appropriate weighting if unbiased national estimates are also desired.

  12. Efficient harvesting of marine Chlorella vulgaris microalgae utilizing cationic starch nanoparticles by response surface methodology.

    PubMed

    Bayat Tork, Mahya; Khalilzadeh, Rasoul; Kouchakzadeh, Hasan

    2017-11-01

    Harvesting accounts for nearly thirty percent of the total production cost of microalgae and thus needs to be done efficiently. Utilizing inexpensive and highly available biopolymer-based flocculants can be a solution for reducing harvest costs. Herein, the flocculation of Chlorella vulgaris microalgae using cationic starch nanoparticles (CSNPs) was evaluated and optimized through response surface methodology (RSM). pH, microalgae concentration and CSNPs concentration were considered as the main independent variables. Under the optimum conditions of microalgae concentration 0.75 g dry weight/L, CSNPs concentration 7.1 mg dry weight/L and pH 11.8, the maximum flocculation efficiency (90%) was achieved. A twenty percent increase in flocculation efficiency was observed with the use of CSNPs instead of non-particulate starch, which can be attributed to stronger electrostatic interactions between the cationic nanoparticles and the microalgae. Therefore, the synthesized CSNPs can be employed as a convenient and economical flocculant for the efficient harvest of Chlorella vulgaris microalgae at large scale. Copyright © 2017 Elsevier Ltd. All rights reserved.
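    The RSM step amounts to fitting a second-order response surface to designed-experiment points and locating its stationary point. A one-factor sketch with invented data points (the study optimizes pH, microalgae and CSNPs concentration jointly):

```python
import numpy as np

# Fit a quadratic response surface to flocculation efficiency vs CSNPs dose
# and find the dose at the vertex. Data below are illustrative only.
dose = np.array([2.0, 4.0, 6.0, 8.0, 10.0])   # mg/L (hypothetical design points)
eff  = np.array([55., 75., 88., 89., 80.])    # % flocculation efficiency

c2, c1, c0 = np.polyfit(dose, eff, 2)         # eff ~ c2*d^2 + c1*d + c0
d_opt = -c1 / (2 * c2)                        # stationary point of the parabola
print(d_opt, np.polyval([c2, c1, c0], d_opt))
```

    With several factors, the same idea generalizes: fit a full quadratic (including interaction terms) and solve the linear system from setting the gradient to zero.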

  13. Performance Indicators of the Top Basketball Players: Relations with Several Variables.

    PubMed

    Sindik, Josko

    2015-09-01

    The aim of this study was to determine the differences in performance indicators for top senior male basketball players with respect to several independent variables: position in the team, total situation-related efficiency, age, playing experience, and the time spent on the court within the game and during the championship season. The final sample of participants was selected from all teams in the A-1 Croatian men's basketball league. Significant differences were found according to the players' position in the team and total situation-related efficiency, and in the interactions of position in the team with total situation-related efficiency and of minutes spent on the court in a game with playing experience. No differences in situation-related efficiency were found according to the players' age or the number of games played. Further research can be directed towards deeper analysis of the influence of more complex differentiated variables (playing experience and time spent on the court in a game) on situation-related efficiency in basketball.

  14. Evaluating a Local Ensemble Transform Kalman Filter snow cover data assimilation method to estimate SWE within a high-resolution hydrologic modeling framework across Western US mountainous regions

    NASA Astrophysics Data System (ADS)

    Oaida, C. M.; Andreadis, K.; Reager, J. T., II; Famiglietti, J. S.; Levoe, S.

    2017-12-01

    Accurately estimating how much snow water equivalent (SWE) is stored in mountainous regions characterized by complex terrain and snowmelt-driven hydrologic cycles is not only greatly desirable, but also a big challenge. Mountain snowpack exhibits high spatial variability across a broad range of spatial and temporal scales due to a multitude of physical and climatic factors, making it difficult to observe or estimate in its entirety. Combining remotely sensed data and high-resolution hydrologic modeling through data assimilation (DA) has the potential to provide a spatially and temporally continuous SWE dataset at horizontal scales that capture sub-grid snow spatial variability and are also relevant to stakeholders such as water resource managers. Here, we present the evaluation of a new snow DA approach that uses a Local Ensemble Transform Kalman Filter (LETKF) in tandem with the Variable Infiltration Capacity macro-scale hydrologic model across the Western United States, at a daily temporal resolution and a horizontal resolution of 1.75 km x 1.75 km. The LETKF is chosen for its relative simplicity, ease of implementation, and computational efficiency and scalability. The modeling/DA system assimilates daily MODIS Snow Covered Area and Grain Size (MODSCAG) fractional snow cover observations, and has been developed to efficiently calculate SWE estimates over extended periods of time and over large regional-scale areas at relatively high spatial resolution, ultimately producing a snow reanalysis-type dataset. Here we focus on the assessment of SWE produced by the DA scheme over several basins in California's Sierra Nevada Mountain range where Airborne Snow Observatory data are available, during the last five water years (2013-2017), which include both one of the driest and one of the wettest years. Comparison against such a spatially distributed SWE observational product provides a greater understanding of the model's ability to estimate SWE and SWE spatial variability, and highlights the conditions under which snow cover DA can add value in estimating SWE.
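
    The LETKF itself performs a deterministic analysis in ensemble space; as a minimal illustration of the underlying ensemble Kalman update, here is a stochastic (perturbed-observation) EnKF step for a single scalar SWE state observed directly. All numbers are hypothetical and the scheme is a simpler cousin of the paper's LETKF, not its implementation.

```python
import random
import statistics

random.seed(0)

def enkf_update(ensemble, y_obs, obs_var):
    """One stochastic EnKF analysis step for a scalar state with H = 1.
    Each member is nudged toward its own perturbed copy of the observation,
    which keeps the analysis spread statistically consistent."""
    xb = statistics.mean(ensemble)
    pb = statistics.variance(ensemble)   # background error variance
    k = pb / (pb + obs_var)              # Kalman gain
    return [x + k * (y_obs + random.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]

# hypothetical prior SWE ensemble (mm) and a MODSCAG-derived pseudo-observation
prior = [random.gauss(100.0, 30.0) for _ in range(50)]
posterior = enkf_update(prior, 160.0, 15.0 ** 2)
```

    After the update the ensemble mean moves toward the observation and the spread shrinks; the LETKF achieves the same effect deterministically and localizes the update to nearby grid cells.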

  15. New type side weir discharge coefficient simulation using three novel hybrid adaptive neuro-fuzzy inference systems

    NASA Astrophysics Data System (ADS)

    Bonakdari, Hossein; Zaji, Amir Hossein

    2018-03-01

    In many hydraulic structures, side weirs play a critical role. Accurately predicting the discharge coefficient is one of the most important stages in the side weir design process. In the present paper, a new, highly efficient side weir is investigated. To simulate the discharge coefficient of these side weirs, three novel soft computing methods are used. The process involves modeling the discharge coefficient with the hybrid Adaptive Neuro-Fuzzy Inference System (ANFIS) combined with three optimization algorithms, namely Differential Evolution (ANFIS-DE), Genetic Algorithm (ANFIS-GA), and Particle Swarm Optimization (ANFIS-PSO). In addition, sensitivity analysis is performed to find the most efficient input variables for modeling the discharge coefficient of these types of side weirs. According to the results, the ANFIS method performs better when using simpler input variables. In addition, the ANFIS-DE, with an RMSE of 0.077, outperforms the ANFIS-GA and ANFIS-PSO methods, with RMSEs of 0.079 and 0.096, respectively.

  16. Effects of formulation variables and characterization of guaifenesin wax microspheres for controlled release.

    PubMed

    Mani, Narasimhan; Park, M O; Jun, H W

    2005-01-01

    Sustained-release wax microspheres of guaifenesin, a highly water-soluble drug, were prepared by the hydrophobic congealable disperse method using a salting-out procedure. The effects of formulation variables on the loading efficiency, particle properties, and in-vitro drug release from the microspheres were determined. The type of dispersant, the amount of wetting agent, and initial stirring time used affected the loading efficiency, while the volume of external phase and emulsification speed affected the particle size of the microspheres to a greater extent. The crystal properties of the drug in the wax matrix and the morphology of the microspheres were studied by differential scanning calorimetry (DSC), powder x-ray diffraction (XRD), and scanning electron microscopy (SEM). The DSC thermograms of the microspheres showed that the drug lost its crystallinity during the microencapsulation process, which was further confirmed by the XRD data. The electron micrographs of the drug-loaded microspheres showed well-formed spherical particles with a rough exterior.

  17. Reinforcement Learning Trees

    PubMed Central

    Zhu, Ruoqing; Zeng, Donglin; Kosorok, Michael R.

    2015-01-01

    In this paper, we introduce a new type of tree-based method, reinforcement learning trees (RLT), which exhibits significantly improved performance over traditional methods such as random forests (Breiman, 2001) in high-dimensional settings. The innovations are three-fold. First, the new method implements reinforcement learning at each selection of a splitting variable during the tree construction process. By splitting on the variable that brings the greatest future improvement in later splits, rather than choosing the one with the largest marginal effect from the immediate split, the constructed tree utilizes the available samples in a more efficient way. Moreover, such an approach enables linear combination cuts at little extra computational cost. Second, we propose a variable muting procedure that progressively eliminates noise variables during the construction of each individual tree. The muting procedure also takes advantage of reinforcement learning and prevents noise variables from being considered in the search for splitting rules, so that towards terminal nodes, where the sample size is small, the splitting rules are still constructed from only strong variables. Last, we investigate asymptotic properties of the proposed method under basic assumptions and discuss the rationale in general settings. PMID:26903687
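
    The muting idea (restricting later splits to strong variables) can be sketched with a single-split variance-reduction score. Everything below, including the function names, the 40% keep fraction, and the synthetic data, is illustrative and is not the RLT algorithm itself.

```python
import random

random.seed(1)

def split_gain(xcol, y):
    """Best single-split reduction in squared error for one variable."""
    order = sorted(range(len(y)), key=lambda i: xcol[i])
    ys = [y[i] for i in order]
    mu = sum(ys) / len(ys)
    total = sum((v - mu) ** 2 for v in ys)
    best = 0.0
    for cut in range(1, len(ys)):
        left, right = ys[:cut], ys[cut:]
        ml = sum(left) / len(left)
        mr = sum(right) / len(right)
        sse = (sum((v - ml) ** 2 for v in left)
               + sum((v - mr) ** 2 for v in right))
        best = max(best, total - sse)
    return best

def mute(X, y, keep_frac=0.4):
    """Keep only the strongest fraction of variables, muting the rest."""
    gains = sorted(((split_gain(col, y), j) for j, col in enumerate(X)),
                   reverse=True)
    keep = max(1, int(len(X) * keep_frac))
    return sorted(j for _, j in gains[:keep])

# variable 0 carries the signal; variables 1-4 are pure noise
n = 200
X = [[random.random() for _ in range(n)] for _ in range(5)]
y = [2.0 * (X[0][i] > 0.5) + 0.1 * random.gauss(0, 1) for i in range(n)]
kept = mute(X, y)
```

    In RLT this screening is applied recursively during tree construction, so near the leaves, where samples are scarce, only strong variables remain candidates for splitting.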

  18. Quantum cryptography with a predetermined key, using continuous-variable Einstein-Podolsky-Rosen correlations

    NASA Astrophysics Data System (ADS)

    Reid, M. D.

    2000-12-01

    Correlations of the type discussed by EPR in their original 1935 paradox for continuous variables exist for the quadrature phase amplitudes of two spatially separated fields. These correlations were first experimentally reported in 1992. We propose to use such EPR beams in quantum cryptography to transmit messages with high efficiency, in such a way that the receiver and sender may later determine whether eavesdropping has occurred. The merit of the new proposal is the possibility of transmitting a reasonably secure yet predetermined key. This would allow relay of a cryptographic key over long distances in the presence of lossy channels.

  19. Quiet Clean Short-haul Experimental Engine (QCSEE): Hamilton Standard cam/harmonic drive variable pitch fan actuation system detail design report

    NASA Technical Reports Server (NTRS)

    1976-01-01

    A variable pitch fan actuation system was designed which incorporates a remote nacelle-mounted blade angle regulator. The regulator drives a rotating fan-mounted mechanical actuator through a flexible shaft and differential gear train. The actuator incorporates a high ratio harmonic drive attached to a multitrack spherical cam which changes blade pitch through individual cam follower arms attached to each blade trunnion. Detail design parameters of the actuation system are presented. These include the following: design philosophies, operating limits, mechanical, hydraulic and thermal characteristics, mechanical efficiencies, materials, weights, lubrication, stress analyses, reliability and failure analyses.

  20. Data Processing Aspects of MEDLARS

    PubMed Central

    Austin, Charles J.

    1964-01-01

    The speed and volume requirements of MEDLARS necessitate the use of high-speed data processing equipment, including paper-tape typewriters, a digital computer, and a special device for producing photo-composed output. Input to the system is of three types: variable source data, including citations from the literature and search requests; changes to such master files as the medical subject headings list and the journal record file; and operating instructions such as computer programs and procedures for machine operators. MEDLARS builds two major stores of data on magnetic tape. The Processed Citation File includes bibliographic citations in expanded form for high-quality printing at periodic intervals. The Compressed Citation File is a coded, time-sequential citation store which is used for high-speed searching against demand request input. Major design considerations include converting variable-length, alphanumeric data to mechanical form quickly and accurately; serial searching by the computer within a reasonable period of time; high-speed printing that must be of graphic quality; and efficient maintenance of various complex computer files. PMID:14119287

  2. Variable transmittance electrochromic windows

    NASA Astrophysics Data System (ADS)

    Rauh, R. D.

    1983-11-01

    Electrochromic apertures based on RF sputtered thin films of WO3 are projected to have widely different sunlight attenuation properties when converted to MxWO3 (M = H, Li, Na, Ag, etc.), depending on the initial preparation conditions. Amorphous WO3, prepared at low temperature, has a coloration spectrum centered in the visible, while high-temperature crystalline WO3 attenuates infrared light most efficiently, but appears to become highly reflective at high values of x. The possibility therefore exists of producing variable light transmission apertures of the general form (a-MxWO3/FIC/c-WO3), where the FIC is an ion-conducting thin film, such as LiAlF4 (for M = Li). The attenuation of 90% of the solar spectrum requires an injected charge of 30 to 40 mC/sq cm in either amorphous or crystalline WO3, corresponding to 0.2 Wh/sq m per coloration cycle. In order to produce windows with very high solar transparency in the bleached form, new counter electrode materials must be found with electrochromism complementary to WO3.
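
    The quoted charge and energy figures can be cross-checked with E = QV. The ~2 V drive voltage that falls out of the arithmetic below is an inference for illustration, not a number stated in the abstract.

```python
# Back-of-the-envelope check of the abstract's figures:
# 30-40 mC/sq cm of injected charge vs 0.2 Wh/sq m per coloration cycle.
charge_per_cm2 = 35e-3                  # C/cm^2, midpoint of the quoted range
charge_per_m2 = charge_per_cm2 * 1e4    # 1 m^2 = 1e4 cm^2 -> 350 C/m^2
energy_per_m2 = 0.2 * 3600              # 0.2 Wh/m^2 -> 720 J/m^2

# E = Q * V, so the implied drive voltage is:
implied_voltage = energy_per_m2 / charge_per_m2
```

    The result (about 2 V) is in the range typically used to color WO3 films, so the two quoted numbers are mutually consistent.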

  3. Lightweight High Efficiency Electric Motors for Space Applications

    NASA Technical Reports Server (NTRS)

    Robertson, Glen A.; Tyler, Tony R.; Piper, P. J.

    2011-01-01

    Lightweight, high-efficiency electric motors are needed across a wide range of space applications, from thrust vector actuator control for launch and flight applications, to general vehicle, base camp habitat, and experiment control for various mechanisms, to robotics for various stationary and mobile space exploration missions. QM Power's Parallel Path Magnetic Technology motors have steadily proven themselves to be a leading motor technology in this area, winning a NASA Phase II SBIR for "Lightweight High Efficiency Electric Motors and Actuators for Low Temperature Mobility and Robotics Applications", a US Army Phase II SBIR for "Improved Robot Actuator Motors for Medical Applications", an NSF Phase II SBIR for "Novel Low-Cost Electric Motors for Variable Speed Applications", and a DOE Phase I SBIR for "High Efficiency Commercial Refrigeration Motors". Parallel Path Magnetic Technology obtains the benefits of using permanent magnets while minimizing the historical trade-offs and limitations found in conventional permanent magnet designs. The resulting devices are smaller, lower in weight, lower in cost, and more efficient than competitive permanent magnet and non-permanent magnet designs. QM Power's motors have been extensively tested and successfully validated by multiple commercial and aerospace customers and partners, such as Boeing Research and Technology. Prototypes have been made between 0.1 and 10 HP, and motors are being scaled to over 100 kW with development partners. In this paper, Parallel Path Magnetic Technology motors are discussed, specifically addressing their higher efficiency, higher power density, lighter weight, smaller physical size, higher low-end torque, wider power zone, cooler operating temperatures, and greater reliability, with lower cost and significant environmental benefit for the same peak output power compared to typical motors. A further discussion of the inherent redundancy of these motors for space applications is also provided.

  4. Effects of a Self-Exercise Program on Activities of Daily Living in Patients After Acute Stroke: A Propensity Score Analysis Based on the Japan Association of Rehabilitation Database.

    PubMed

    Shiraishi, Nariaki; Suzuki, Yusuke; Matsumoto, Daisuke; Jeong, Seungwon; Sugiyama, Motoya; Kondo, Katsunori

    2017-03-01

    To investigate whether self-exercise programs for patients after stroke contribute to improved activities of daily living (ADL) at hospital discharge. Retrospective, observational, propensity score (PS)-matched case-control study. General hospitals. Participants included patients after stroke (N=1560) hospitalized between January 3, 2006, and December 26, 2012, satisfying the following criteria: (1) data on age, sex, duration from stroke to hospital admission, length of stay, FIM score, modified Rankin Scale (mRS) score, Glasgow Coma Scale score, Japan Stroke Scale score, and self-exercise program participation were available; and (2) admitted within 7 days after stroke onset, length of stay was between 7 and 60 days, prestroke mRS score was ≤2, and not discharged because of FIM or mRS exacerbation. A total of 780 PS-matched pairs were selected for each of the self-exercise program and no-self-exercise program groups. Self-exercise program participation. At discharge, FIM motor score, FIM cognitive score, FIM motor score gain (discharge value - admission value), FIM motor score gain rate (gain/length of stay), a binary variable divided by the median FIM motor score gain rate (high efficiency or no-high efficiency), and mRS score. Patients were classified into a self-exercise program (n=780) or a no-self-exercise program (n=780) group. After matching, there were no significant between-group differences, except motor system variables. The receiver operating characteristic curve for PS had an area under the curve value of .71 with a 95% confidence interval of .68 to .73, and the model was believed to have a relatively favorable fit. A logistic regression analysis of PS-matched pairs suggested that the self-exercise program was effective, with an overall odds ratio for ADL (high efficiency or no-high efficiency) of 2.2 (95% confidence interval, 1.75-2.70). Self-exercise programs (SEPs) may contribute to improving ADL. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  5. A High-Order Finite Spectral Volume Method for Conservation Laws on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Wang, Z. J.; Liu, Yen; Kwak, Dochan (Technical Monitor)

    2001-01-01

    A time accurate, high-order, conservative, yet efficient method named Finite Spectral Volume (FSV) is developed for conservation laws on unstructured grids. The concept of a 'spectral volume' is introduced to achieve high-order accuracy in an efficient manner similar to spectral element and multi-domain spectral methods. In addition, each spectral volume is further sub-divided into control volumes (CVs), and cell-averaged data from these control volumes is used to reconstruct a high-order approximation in the spectral volume. Riemann solvers are used to compute the fluxes at spectral volume boundaries. Then cell-averaged state variables in the control volumes are updated independently. Furthermore, TVD (Total Variation Diminishing) and TVB (Total Variation Bounded) limiters are introduced in the FSV method to remove/reduce spurious oscillations near discontinuities. A very desirable feature of the FSV method is that the reconstruction is carried out only once, and analytically, and is the same for all cells of the same type, and that the reconstruction stencil is always non-singular, in contrast to the memory and CPU-intensive reconstruction in a high-order finite volume (FV) method. Discussions are made concerning why the FSV method is significantly more efficient than high-order finite volume and the Discontinuous Galerkin (DG) methods. Fundamental properties of the FSV method are studied and high-order accuracy is demonstrated for several model problems with and without discontinuities.
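
    The "reconstruct once, analytically" property can be illustrated in one dimension: for a spectral volume split into two control volumes, the linear reconstruction from the two cell averages has fixed coefficients. This is a toy sketch of the idea, not the paper's high-order unstructured-grid formulation.

```python
def reconstruct_linear(u1, u2):
    """Linear polynomial p(x) = a + b*x on the spectral volume [0, 1] whose
    averages over the control volumes [0, 0.5] and [0.5, 1] equal u1, u2.
    Averaging p over [0, 0.5] gives a + 0.25*b; over [0.5, 1], a + 0.75*b,
    so the coefficients follow from a fixed 2x2 solve."""
    b = 2.0 * (u2 - u1)      # (u2 - u1) / (0.75 - 0.25)
    a = u1 - 0.25 * b
    return a, b

# exact for any linear field, e.g. u(x) = 2 + 3x, whose CV averages
# are 2.75 and 4.25:
a, b = reconstruct_linear(2.75, 4.25)
```

    Because the coefficients depend only on the CV geometry, the same reconstruction matrix is reused for every spectral volume of the same type, which is the source of the efficiency advantage over per-cell finite-volume reconstruction.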

  6. Distributed Relaxation Multigrid and Defect Correction Applied to the Compressible Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Thomas, J. L.; Diskin, B.; Brandt, A.

    1999-01-01

    The distributed-relaxation multigrid and defect-correction methods are applied to the two-dimensional compressible Navier-Stokes equations. The formulation is intended for high Reynolds number applications and several applications are made at a laminar Reynolds number of 10,000. A staggered-grid arrangement of variables is used; the coupled pressure and internal energy equations are solved together with multigrid, requiring a block 2x2 matrix solution. Textbook multigrid efficiencies are attained for incompressible and slightly compressible simulations of the boundary layer on a flat plate. Textbook efficiencies are obtained for compressible simulations up to Mach numbers of 0.7 for a viscous wake simulation.

  7. Modified GMDH-NN algorithm and its application for global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Song, Shufang; Wang, Lu

    2017-11-01

    Global sensitivity analysis (GSA) is a very useful tool for evaluating the influence of input variables over their whole distribution range. The Sobol' method is the most commonly used of the variance-based methods, which are efficient and popular GSA techniques. High dimensional model representation (HDMR) is a popular way to compute Sobol' indices; however, its drawbacks cannot be ignored. We show that a modified GMDH-NN algorithm can calculate the coefficients of the metamodel efficiently, so this paper combines it with HDMR and proposes the GMDH-HDMR method. The new method shows higher precision and a faster convergence rate. Several numerical and engineering examples are used to confirm its advantages.
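
    For reference, first-order Sobol' indices can also be estimated directly by Monte Carlo; the pick-freeze scheme below is the standard estimator, and the linear test function is an arbitrary example chosen so the indices are known analytically, not a model from the paper.

```python
import random

random.seed(42)

def sobol_first_order(f, dim, n=20000):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices for f
    with independent Uniform(0, 1) inputs."""
    A = [[random.random() for _ in range(dim)] for _ in range(n)]
    B = [[random.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(a) for a in A]
    mean = sum(fA) / n
    var = sum((v - mean) ** 2 for v in fA) / (n - 1)
    S = []
    for i in range(dim):
        # evaluate f on B with column i "frozen" from A
        fABi = [f(b[:i] + [a[i]] + b[i + 1:]) for a, b in zip(A, B)]
        cov = sum(va * vb for va, vb in zip(fA, fABi)) / n - mean ** 2
        S.append(cov / var)
    return S

# f = 4*x1 + x2: analytically S1 = 16/17 ~ 0.94 and S2 = 1/17 ~ 0.06
S1, S2 = sobol_first_order(lambda x: 4 * x[0] + x[1], 2)
```

    Metamodel-based routes such as HDMR (and the GMDH-HDMR method above) exist precisely to avoid the large number of model evaluations this direct estimator needs.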

  8. Converting Constant Volume, Multizone Air Handling Systems to Energy Efficient Variable Air Volume Multizone Systems

    DTIC Science & Technology

    2017-10-26

    [Extraction artifact: only table-of-contents and figure-list fragments of this report were captured. Recoverable details: the final report concerns converting constant-volume, multizone air handling systems to energy-efficient variable air volume (VAV) multizone systems, and includes Energy Information Agency natural gas price data for the residential, commercial, and industrial market sectors.]

  9. Environmental impact efficiency of natural gas combined cycle power plants: A combined life cycle assessment and dynamic data envelopment analysis approach.

    PubMed

    Martín-Gamboa, Mario; Iribarren, Diego; Dufour, Javier

    2018-02-15

    The energy sector is still dominated by the use of fossil resources. In particular, natural gas represents the third most consumed resource, being a significant source of electricity in many countries. Since electricity production in natural gas combined cycle (NGCC) plants provides some benefits with respect to other non-renewable technologies, it is often seen as a transitional solution towards a future low-carbon power generation system. However, given the environmental profile and operational variability of NGCC power plants, their eco-efficiency assessment is required. In this respect, this article uses a novel combined Life Cycle Assessment (LCA) and dynamic Data Envelopment Analysis (DEA) approach in order to estimate, over the period 2010-2015, the environmental impact efficiencies of 20 NGCC power plants located in Spain. A three-step LCA+DEA method is applied, which involves data acquisition, calculation of environmental impacts through LCA, and the novel estimation of environmental impact efficiency (overall- and term-efficiency scores) through dynamic DEA. Although only 1 out of 20 NGCC power plants is found to be environmentally efficient, all plants show a relatively good environmental performance, with overall eco-efficiency scores above 60%. Regarding individual periods, 2011 was, on average, the year with the highest environmental impact efficiency (95%), accounting for 5 efficient NGCC plants. In this respect, a link between a high number of operating hours and high environmental impact efficiency is observed. Finally, preliminary environmental benchmarks are presented as an additional outcome in order to further support decision-makers on the path towards eco-efficiency in NGCC power plants. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Sparse PLS discriminant analysis: biologically relevant feature selection and graphical displays for multiclass problems.

    PubMed

    Lê Cao, Kim-Anh; Boitard, Simon; Besse, Philippe

    2011-06-22

    Variable selection on high-throughput biological data, such as gene expression or single nucleotide polymorphisms (SNPs), is essential for selecting relevant information and, therefore, for better characterizing diseases or assessing genetic structure. There are different ways to perform variable selection in large data sets. Statistical tests are commonly used to identify differentially expressed features for explanatory purposes, whereas Machine Learning wrapper approaches can be used for predictive purposes. In the case of multiple highly correlated variables, another option is to use multivariate exploratory approaches to give more insight into cell biology, biological pathways, or complex traits. A simple extension of a sparse PLS exploratory approach is proposed to perform variable selection in a multiclass classification framework. sPLS-DA has classification performance similar to other wrapper or sparse discriminant analysis approaches on public microarray and SNP data sets. More importantly, sPLS-DA is clearly competitive in terms of computational efficiency and superior in terms of interpretability of the results via valuable graphical outputs. sPLS-DA is available in the R package mixOmics, which is dedicated to the analysis of large biological data sets.
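
    The sparsity mechanism at the heart of sPLS-DA is an L1 soft-threshold applied to the PLS loading vector, which drives weak features to exactly zero. A minimal single-component sketch follows, with hypothetical data and penalty; it illustrates the idea only and is not the mixOmics implementation.

```python
def sparse_loading(X, y, penalty):
    """One sPLS-style sparse loading: the covariance of each feature with
    the response, soft-thresholded so weak features drop to exactly zero."""
    n = len(y)
    ybar = sum(y) / n
    loadings = []
    for col in X:
        xbar = sum(col) / n
        cov = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(col, y)) / n
        # soft-thresholding: shrink the magnitude toward zero, clip at zero
        shrunk = max(abs(cov) - penalty, 0.0)
        loadings.append(shrunk if cov >= 0 else -shrunk)
    return loadings

# feature 0 tracks the class label exactly; feature 1 is uninformative
X = [[0.0, 0.0, 1.0, 1.0], [0.5, 0.5, 0.5, 0.5]]
y = [0.0, 0.0, 1.0, 1.0]
w = sparse_loading(X, y, penalty=0.1)
```

    Features with zero loading are excluded from the component, which is what makes the selected variable list directly interpretable.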

  11. Evaluation of Working Fluids for Organic Rankine Cycle Based on Exergy Analysis

    NASA Astrophysics Data System (ADS)

    Setiawan, D.; Subrata, I. D. M.; Purwanto, Y. A.; Tambunan, A. H.

    2018-05-01

    One of the crucial aspects determining the performance of an Organic Rankine Cycle (ORC) is the selection of appropriate working fluids. This paper describes the simulated performance of several organic fluids and water as working fluids of an ORC, based on exergy analysis with a heat source from waste heat recovery. The simulation was conducted using the Engineering Equation Solver (EES). The effects of several parameters and thermodynamic properties of the working fluids were analyzed, and some of them were used as simulation variables in order to determine their influence on exergy efficiency. The results of this study showed that water is not appropriate as a working fluid at temperatures lower than 130 °C, because the expansion process falls in the saturated region. It was also found that benzene had the highest exergy efficiency, about 10.49%, among the dry-type working fluids. Increasing the turbine inlet temperature did not increase exergy efficiency when using organic working fluids with critical temperatures near the heat-source temperature. Meanwhile, exergy efficiency decreased linearly with increasing condenser inlet temperature. In addition, it was found that working fluids with high latent heat of vaporization and high specific heat yield high exergy efficiency.
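
    Exergy efficiency here means the net cycle work divided by the exergy content of the recovered heat, which is the heat input weighted by the Carnot factor relative to the dead state. The numbers below are hypothetical illustrations, not values from the paper.

```python
# Hypothetical waste-heat-recovery ORC figures (not from the paper)
T0 = 298.15          # dead-state (ambient) temperature, K
T_source = 403.15    # waste-heat source at 130 C, K
Q_in = 100.0         # heat recovered from the source, kW
W_net = 8.0          # net cycle power output, kW

# exergy content of the heat input: Ex = Q * (1 - T0 / T_source)
exergy_in = Q_in * (1.0 - T0 / T_source)
eta_exergy = W_net / exergy_in
```

    Because the exergy input is much smaller than the raw heat input, exergy efficiencies are always higher than the corresponding first-law (thermal) efficiencies for the same cycle.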

  12. Evaluation of removal efficiency of residual diclofenac in aqueous solution by nanocomposite tungsten-carbon using design of experiment.

    PubMed

    Salmani, M H; Mokhtari, M; Raeisi, Z; Ehrampoush, M H; Sadeghian, H A

    2017-09-01

    Wastewater containing residual pharmaceutical components must be treated before being discharged to the environment. This study investigated the efficiency of a tungsten-carbon nanocomposite for diclofenac removal using design of experiments (DOE). Twenty-seven batch adsorption experiments were performed by varying three effective parameters (pH, adsorbent dose, and initial concentration) at three levels each. The nanocomposite was prepared from tungsten oxide and activated carbon powder in a 1:4 mass ratio. The remaining concentration of diclofenac was measured by spectrometry after adding 2,2'-bipyridine and ferric chloride reagents. Analysis of variance (ANOVA) was applied to determine the main and interaction effects. The equilibrium time for the removal process was determined to be 30 min. pH was observed to have the lowest influence on the removal efficiency of diclofenac. The nanocomposite gave high removal at a low initial concentration: the maximum removal for an initial concentration of 5.0 mg/L was 88.0% at a contact time of 30 min. The ANOVA results showed that adsorbent mass was among the most influential variables. Using DOE as an efficient method revealed that the tungsten-carbon nanocomposite has high efficiency in removing residual diclofenac from aqueous solution.

  13. On-chip continuous-variable quantum entanglement

    NASA Astrophysics Data System (ADS)

    Masada, Genta; Furusawa, Akira

    2016-09-01

    Entanglement is an essential feature of quantum theory and the core of the majority of quantum information science and technologies. Quantum computing is one of the most important fruits of quantum entanglement and requires not only bipartite entangled states but also more complicated multipartite entanglement. In previous experimental work demonstrating various entanglement-based quantum information processing, light has been extensively used. Experiments utilizing such complicated states need highly complex optical circuits to propagate optical beams and a high level of spatial interference between different light beams to generate quantum entanglement or to efficiently perform balanced homodyne measurement. Current experiments have been performed in conventional free-space optics with large numbers of optical components and relatively large optical setups. Therefore, they are limited in stability and scalability. Integrated photonics offers new tools and additional capabilities for manipulating light in quantum information technology. Owing to integrated waveguide circuits, it is possible to stabilize and miniaturize complex optical circuits and achieve high interference of light beams. Integrated circuits were first developed for discrete-variable systems and then applied to continuous-variable systems. In this article, we review the currently developed scheme for generation and verification of continuous-variable quantum entanglement, such as Einstein-Podolsky-Rosen beams, using a photonic chip on which waveguide circuits are integrated. This includes balanced homodyne measurement of a squeezed state of light. As a simple example, we also review an experiment generating discrete-variable quantum entanglement using integrated waveguide circuits.

  14. 86% internal differential efficiency from 8 to 9 µm-emitting, step-taper active-region quantum cascade lasers.

    PubMed

    Kirch, Jeremy D; Chang, Chun-Chieh; Boyle, Colin; Mawst, Luke J; Lindberg, Don; Earles, Tom; Botez, Dan

    2016-10-17

    8.4 μm-emitting quantum cascade lasers (QCLs) have been designed to have, right from threshold, both carrier-leakage suppression and miniband-like carrier extraction. The slope-efficiency characteristic temperature T1, the signature of carrier-leakage suppression, is found to be 665 K. Resonant-tunneling carrier extraction from both the lower laser level (ll) and the level below it, coupled with highly effective ll-depopulation provide a very short ll lifetime (~0.12 ps). As a result the laser-transition differential efficiency reaches 89%, and the internal differential efficiency ηid, derived from a variable mirror-loss study, is found to be 86%, in good agreement with theory. A study of 8.8 μm-emitting QCLs also provides an ηid value of 86%. A corrected equation for the external differential efficiency is derived which leads to a fundamental limit of ~90% for the ηid values of mid-infrared QCLs. In turn, the fundamental wallplug-efficiency limits become ~34% higher than previously predicted.

  15. How reliable are efficiency measurements of perovskite solar cells? The first inter-comparison, between two accredited and eight non-accredited laboratories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunbar, Ricky B.; Duck, Benjamin C.; Moriarty, Tom E.

    Perovskite materials have generated significant interest from academia and industry as a potential component in next-generation, high-efficiency, low-cost, photovoltaic (PV) devices. The record efficiency reported for perovskite solar cells has risen rapidly, and is now more than 22%. However, due to their complex dynamic behaviour, the process of measuring the efficiency of perovskite solar cells appears to be much more complicated than for other technologies. It has long been acknowledged that this is likely to greatly reduce the reliability of reported efficiency measurements, but the quantitative extent to which this occurs has not been determined. To investigate this, we conduct the first major inter-comparison of this PV technology. The participants included two labs accredited for PV performance measurement (CSIRO and NREL) and eight PV research laboratories. We find that the inter-laboratory measurement variability can be almost ten times larger for a slowly responding perovskite cell than for a control silicon cell. We show that for such a cell, the choice of measurement method, far more so than measurement hardware, is the single-greatest cause for this undesirably large variability. We provide recommendations for identifying the most appropriate method for a given cell, depending on its stabilization and degradation behaviour. Moreover, the results of this study suggest that identifying a consensus technique for accurate and meaningful efficiency measurements of perovskite solar cells will lead to an immediate improvement in reliability. This, in turn, should assist device researchers to correctly evaluate promising new materials and fabrication methods, and further boost the development of this technology.

  16. How reliable are efficiency measurements of perovskite solar cells? The first inter-comparison, between two accredited and eight non-accredited laboratories

    DOE PAGES

    Dunbar, Ricky B.; Duck, Benjamin C.; Moriarty, Tom E.; ...

    2017-10-24

    Perovskite materials have generated significant interest from academia and industry as a potential component in next-generation, high-efficiency, low-cost, photovoltaic (PV) devices. The record efficiency reported for perovskite solar cells has risen rapidly, and is now more than 22%. However, due to their complex dynamic behaviour, the process of measuring the efficiency of perovskite solar cells appears to be much more complicated than for other technologies. It has long been acknowledged that this is likely to greatly reduce the reliability of reported efficiency measurements, but the quantitative extent to which this occurs has not been determined. To investigate this, we conduct the first major inter-comparison of this PV technology. The participants included two labs accredited for PV performance measurement (CSIRO and NREL) and eight PV research laboratories. We find that the inter-laboratory measurement variability can be almost ten times larger for a slowly responding perovskite cell than for a control silicon cell. We show that for such a cell, the choice of measurement method, far more so than measurement hardware, is the single-greatest cause for this undesirably large variability. We provide recommendations for identifying the most appropriate method for a given cell, depending on its stabilization and degradation behaviour. Moreover, the results of this study suggest that identifying a consensus technique for accurate and meaningful efficiency measurements of perovskite solar cells will lead to an immediate improvement in reliability. This, in turn, should assist device researchers to correctly evaluate promising new materials and fabrication methods, and further boost the development of this technology.

  17. Effects of Solar Ultraviolet Radiation on the Potential Efficiency of Photosystem II in Leaves of Tropical Plants1

    PubMed Central

    Krause, G. Heinrich; Schmude, Claudia; Garden, Hermann; Koroleva, Olga Y.; Winter, Klaus

    1999-01-01

    The effects of solar ultraviolet (UV)-B and UV-A radiation on the potential efficiency of photosystem II (PSII) in leaves of tropical plants were investigated in Panama (9°N). Shade-grown tree seedlings or detached sun leaves from the outer crown of mature trees were exposed for short periods (up to 75 min) to direct sunlight filtered through plastic or glass filters that absorbed either UV-B or UV-A+B radiation, or transmitted the complete solar spectrum. Persistent changes in potential PSII efficiency were monitored by means of the dark-adapted ratio of variable to maximum chlorophyll a fluorescence. In leaves of shade-grown tree seedlings, exposure to the complete solar spectrum resulted in a strong decrease in potential PSII efficiency, probably involving protein damage. A substantially smaller decline in the dark-adapted ratio of variable to maximum chlorophyll a fluorescence was observed when UV-B irradiation was excluded. The loss in PSII efficiency was further reduced by excluding both UV-B and UV-A light. The photoinactivation of PSII was reversible under shade conditions, but restoration of nearly full activity required at least 10 d. Repeated exposure to direct sunlight induced an increase in the pool size of xanthophyll cycle pigments and in the content of UV-absorbing vacuolar compounds. In sun leaves of mature trees, which contained high levels of UV-absorbing compounds, effects of UV-B on PSII efficiency were observed in several cases and varied with developmental age and acclimation state of the leaves. The results show that natural UV-B and UV-A radiation in the tropics may significantly contribute to photoinhibition of PSII during sun exposure in situ, particularly in shade leaves exposed to full sunlight. PMID:10594122
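    The fluorescence index used throughout this abstract can be computed directly from the dark-adapted minimum and maximum fluorescence. A minimal sketch (the ~0.83 benchmark in the comment is the commonly cited healthy-leaf value, not a number from this study):

```python
def potential_psii_efficiency(f0, fm):
    """Dark-adapted ratio of variable to maximum chlorophyll a
    fluorescence, Fv/Fm = (Fm - F0)/Fm, the standard index of potential
    PSII efficiency. Unstressed leaves typically give ~0.83; UV-induced
    photoinhibition shows up as a persistent decrease."""
    if fm <= 0 or f0 < 0 or f0 > fm:
        raise ValueError("require 0 <= F0 <= Fm and Fm > 0")
    return (fm - f0) / fm
```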

  18. Spatial pattern analysis of Cu, Zn and Ni and their interpretation in the Campania region (Italy)

    NASA Astrophysics Data System (ADS)

    Petrik, Attila; Albanese, Stefano; Jordan, Gyozo; Rolandi, Roberto; De Vivo, Benedetto

    2017-04-01

    The uniquely abundant Campanian topsoil dataset enabled us to perform a spatial pattern analysis of three potentially toxic elements: Cu, Zn and Ni. This study focuses on revealing the spatial texture and distribution of these elements by spatial point pattern and image processing analysis, such as lineament density and spatial variability index calculation. The application of these methods to geochemical data provides a new and efficient tool for understanding the spatial variation of concentrations and their background/baseline values. The determination and quantification of spatial variability is crucial to understanding how rapidly concentrations change in a given area and what processes might govern the variation. The spatial variability index calculation and image processing analysis, including lineament density, enable us to delineate homogeneous areas and analyse them with respect to lithology and land use. Spatial outliers and their patterns were also investigated by local spatial autocorrelation and image processing analysis, including the determination of local minima and maxima points and singularity index analysis. The spatial variability of Cu and Zn reveals the highest zone (Cu: 0.5 MAD, Zn: 0.8-0.9 MAD, Median Deviation Index) along the coast between Campi Flegrei and the Sorrento Peninsula, with the vast majority of statistically identified outliers and high-high spatially clustered points. The background/baseline maps of Cu and Zn reveal a moderate-to-high-variability (Cu: 0.3 MAD, Zn: 0.4-0.5 MAD) NW-SE oriented zone, including disrupted patches from Bisaccia to Mignano, following the alluvial plains of the Apennine rivers. This zone has a high abundance of anomalous concentrations identified using singularity analysis, and it also has a high density of lineaments. The spatial variability of Ni shows the highest-variability zone (0.6-0.7 MAD) around Campi Flegrei, where the majority of low outliers are concentrated. The variability of the Ni background/baseline map reveals an eastward shift of the highest-variability zones, coinciding with limestone outcrops. The highly segmented area between Mignano and Bisaccia partially follows the alluvial plains of the Apennine rivers, which seem to play a crucial role in the distribution and redistribution pattern of Cu, Zn and Ni in Campania. The high spatial variability zones of the latter elements are located in topsoils on volcanoclastic rocks and are mostly related to cultivated and urbanised areas.
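    A spatial variability index of the moving-window median-absolute-deviation kind described here can be sketched in one dimension (the edge handling and window size are simplifications of the 2-D map computation):

```python
from statistics import median

def mad_variability(values, window=3):
    """Sliding-window median absolute deviation (MAD), a robust local
    variability index: high values flag zones where concentrations
    change rapidly, low values delineate homogeneous areas."""
    half = window // 2
    out = []
    for i in range(len(values)):
        w = values[max(0, i - half):i + half + 1]
        m = median(w)
        out.append(median(abs(v - m) for v in w))
    return out
```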

  19. GWASinlps: Nonlocal prior based iterative SNP selection tool for genome-wide association studies.

    PubMed

    Sanyal, Nilotpal; Lo, Min-Tzu; Kauppi, Karolina; Djurovic, Srdjan; Andreassen, Ole A; Johnson, Valen E; Chen, Chi-Hua

    2018-06-19

    Multiple marker analysis of genome-wide association study (GWAS) data has gained ample attention in recent years. However, because of the ultra-high dimensionality of GWAS data, such analysis is challenging. Frequently used penalized regression methods often lead to a large number of false positives, whereas Bayesian methods are computationally very expensive. Motivated to ameliorate these issues simultaneously, we consider the novel approach of using nonlocal priors in an iterative variable selection framework. We develop a variable selection method, named iterative nonlocal prior-based selection for GWAS (GWASinlps), that combines, in an iterative variable selection framework, the computational efficiency of the screen-and-select approach based on some association learning and the parsimonious uncertainty quantification provided by the use of nonlocal priors. The hallmark of our method is the introduction of a 'structured screen-and-select' strategy, which considers hierarchical screening based not only on response-predictor associations but also on response-response associations, and concatenates variable selection within that hierarchy. Extensive simulation studies with SNPs having realistic linkage disequilibrium structures demonstrate the advantages of our computationally efficient method compared to several frequentist and Bayesian variable selection methods, in terms of true positive rate, false discovery rate, mean squared error, and effect size estimation error. Further, we provide empirical power analysis useful for study design. Finally, a real GWAS data application was considered with human height as phenotype. An R-package for implementing the GWASinlps method is available at https://cran.r-project.org/web/packages/GWASinlps/index.html. Supplementary data are available at Bioinformatics online.
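    The screen-and-select loop can be illustrated with a toy residual-based version (everything here is illustrative: the function names are not the package API, and GWASinlps selects with nonlocal-prior model comparison rather than the marginal correlation and plain least squares used below):

```python
def pearson(x, y):
    """Pearson correlation (returns 0.0 when either input is constant)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def screen_and_select(snps, phenotype, k_screen=10, n_iter=3):
    """Toy iterative screen-and-select: each iteration screens SNPs by
    marginal correlation with the current residual, selects the strongest,
    regresses it out, and repeats on the new residual."""
    residual = list(phenotype)
    selected = []
    for _ in range(n_iter):
        remaining = [j for j in range(len(snps)) if j not in selected]
        if not remaining:
            break
        # screen: keep the k_screen SNPs most correlated with the residual
        screened = sorted(remaining,
                          key=lambda j: -abs(pearson(snps[j], residual)))[:k_screen]
        best = screened[0]
        selected.append(best)
        # select: regress the chosen SNP out of the current residual
        x, n = snps[best], len(residual)
        mx, mr = sum(x) / n, sum(residual) / n
        var = sum((a - mx) ** 2 for a in x)
        beta = (sum((a - mx) * (b - mr) for a, b in zip(x, residual)) / var
                if var else 0.0)
        residual = [b - mr - beta * (a - mx) for a, b in zip(x, residual)]
    return selected
```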

  20. Large-scale maps of variable infection efficiencies in aquatic Bacteroidetes phage-host model systems: Variable phage-host infection interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holmfeldt, Karin; Solonenko, Natalie; Howard-Varona, Cristina

    Microbes drive ecosystem functioning and their viruses modulate these impacts through mortality, gene transfer and metabolic reprogramming. Despite the importance of virus-host interactions and likely variable infection efficiencies of individual phages across hosts, such variability is seldom quantified. In this paper, we quantify infection efficiencies of 38 phages against 19 host strains in aquatic Cellulophaga (Bacteroidetes) phage-host model systems. Binary data revealed that some phages infected only one strain while others infected 17, whereas quantitative data revealed that efficiency of infection could vary 10 orders of magnitude, even among phages within one population. This provides a baseline for understanding and modeling intrapopulation host range variation. Genus-specific host ranges were also informative. For example, the Cellulophaga Microviridae showed a markedly broader intra-species host range than previously observed in Escherichia coli systems. Further, one phage genus, Cba41, was examined to investigate nonheritable changes in plating efficiency and burst size that depended on which host strain it most recently infected. While consistent with host modification of phage DNA, no differences in nucleotide sequence or DNA modifications were detected, leaving the observation repeatable, but the mechanism unresolved. Overall, this study highlights the importance of quantitatively considering replication variations in studies of phage-host interactions.
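    The quantitative infection efficiencies behind such maps are conventionally expressed as efficiency of plating. A minimal sketch:

```python
def efficiency_of_plating(titer_on_test_host, titer_on_reference_host):
    """Efficiency of plating (EOP): the phage titer (PFU/mL) on a test
    host divided by the titer on the reference (most permissive) host.
    As in the maps above, values can span many orders of magnitude."""
    if titer_on_reference_host <= 0:
        raise ValueError("reference titer must be positive")
    return titer_on_test_host / titer_on_reference_host
```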

  1. Large-scale maps of variable infection efficiencies in aquatic Bacteroidetes phage-host model systems: Variable phage-host infection interactions

    DOE PAGES

    Holmfeldt, Karin; Solonenko, Natalie; Howard-Varona, Cristina; ...

    2016-06-28

    Microbes drive ecosystem functioning and their viruses modulate these impacts through mortality, gene transfer and metabolic reprogramming. Despite the importance of virus-host interactions and likely variable infection efficiencies of individual phages across hosts, such variability is seldom quantified. In this paper, we quantify infection efficiencies of 38 phages against 19 host strains in aquatic Cellulophaga (Bacteroidetes) phage-host model systems. Binary data revealed that some phages infected only one strain while others infected 17, whereas quantitative data revealed that efficiency of infection could vary 10 orders of magnitude, even among phages within one population. This provides a baseline for understanding and modeling intrapopulation host range variation. Genus-specific host ranges were also informative. For example, the Cellulophaga Microviridae showed a markedly broader intra-species host range than previously observed in Escherichia coli systems. Further, one phage genus, Cba41, was examined to investigate nonheritable changes in plating efficiency and burst size that depended on which host strain it most recently infected. While consistent with host modification of phage DNA, no differences in nucleotide sequence or DNA modifications were detected, leaving the observation repeatable, but the mechanism unresolved. Overall, this study highlights the importance of quantitatively considering replication variations in studies of phage-host interactions.

  2. Investigating the neural bases for intra-subject cognitive efficiency changes using functional magnetic resonance imaging

    PubMed Central

    Rao, Neena K.; Motes, Michael A.; Rypma, Bart

    2014-01-01

    Several fMRI studies have examined brain regions mediating inter-subject variability in cognitive efficiency, but none have examined regions mediating intra-subject variability in efficiency. Thus, the present study was designed to identify brain regions involved in intra-subject variability in cognitive efficiency via participant-level correlations between trial-level reaction time (RT) and trial-level fMRI BOLD percent signal change on a processing speed task. On each trial, participants indicated whether a digit-symbol probe-pair was present or absent in an array of nine digit-symbol probe-pairs while fMRI data were collected. Deconvolution analyses, using RT time-series models (derived from the proportional scaling of an event-related hemodynamic response function model by trial-level RT), were used to evaluate relationships between trial-level RTs and BOLD percent signal change. Although task-related patterns of activation and deactivation were observed in regions including bilateral occipital, bilateral parietal, portions of the medial wall such as the precuneus, default mode network regions including anterior cingulate, posterior cingulate, bilateral temporal, right cerebellum, and right cuneus, RT-BOLD correlations were observed in a more circumscribed set of regions. Positive RT-BOLD correlations, where fast RTs were associated with lower BOLD percent signal change, were observed in regions including bilateral occipital, bilateral parietal, and the precuneus. RT-BOLD correlations were not observed in the default mode network, indicating that a smaller set of regions is associated with intra-subject variability in cognitive efficiency. The results are discussed in terms of a distributed set of regions mediating the variability in cognitive efficiency that might underlie processing speed differences between individuals. PMID:25374527

  3. Investigating the neural bases for intra-subject cognitive efficiency changes using functional magnetic resonance imaging.

    PubMed

    Rao, Neena K; Motes, Michael A; Rypma, Bart

    2014-01-01

    Several fMRI studies have examined brain regions mediating inter-subject variability in cognitive efficiency, but none have examined regions mediating intra-subject variability in efficiency. Thus, the present study was designed to identify brain regions involved in intra-subject variability in cognitive efficiency via participant-level correlations between trial-level reaction time (RT) and trial-level fMRI BOLD percent signal change on a processing speed task. On each trial, participants indicated whether a digit-symbol probe-pair was present or absent in an array of nine digit-symbol probe-pairs while fMRI data were collected. Deconvolution analyses, using RT time-series models (derived from the proportional scaling of an event-related hemodynamic response function model by trial-level RT), were used to evaluate relationships between trial-level RTs and BOLD percent signal change. Although task-related patterns of activation and deactivation were observed in regions including bilateral occipital, bilateral parietal, portions of the medial wall such as the precuneus, default mode network regions including anterior cingulate, posterior cingulate, bilateral temporal, right cerebellum, and right cuneus, RT-BOLD correlations were observed in a more circumscribed set of regions. Positive RT-BOLD correlations, where fast RTs were associated with lower BOLD percent signal change, were observed in regions including bilateral occipital, bilateral parietal, and the precuneus. RT-BOLD correlations were not observed in the default mode network, indicating that a smaller set of regions is associated with intra-subject variability in cognitive efficiency. The results are discussed in terms of a distributed set of regions mediating the variability in cognitive efficiency that might underlie processing speed differences between individuals.
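    The core statistic of this design, a participant-level correlation between trial-level RTs and trial-level BOLD percent signal change, is just a Pearson correlation over trials; a minimal sketch:

```python
def rt_bold_correlation(rts, bold_psc):
    """Participant-level Pearson correlation between trial-level reaction
    times and trial-level BOLD percent signal change. A positive value
    (faster trials, lower signal) is the signature of intra-subject
    efficiency analyzed in this study."""
    n = len(rts)
    mr, mb = sum(rts) / n, sum(bold_psc) / n
    num = sum((r - mr) * (b - mb) for r, b in zip(rts, bold_psc))
    den = (sum((r - mr) ** 2 for r in rts) *
           sum((b - mb) ** 2 for b in bold_psc)) ** 0.5
    return num / den
```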

  4. Power Smoothing and MPPT for Grid-connected Wind Power Generation with Doubly Fed Induction Generator

    NASA Astrophysics Data System (ADS)

    Kai, Takaaki; Tanaka, Yuji; Kaneda, Hirotoshi; Kobayashi, Daichi; Tanaka, Akio

    Recently, doubly fed induction generators (DFIGs) and synchronous generators have mostly been applied to wind power generation, and variable-speed control and power-factor control are used to achieve high efficiency in wind-energy capture and high quality of the power-system voltage. In variable-speed control, the wind speed or the generator speed is used for maximum power point tracking. However, the wind-generation power fluctuations caused by wind-speed variation have not yet been investigated for these controls. The authors discuss power smoothing by these controls for a DFIG inter-connected to a 6.6 kV distribution line. The performance is verified with the power-system simulation software PSCAD/EMTDC using actual wind-speed data, and is examined with an approximate equation relating wind-generation power fluctuation to wind-speed variation.
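    The two control ideas discussed above can be sketched compactly: on the optimal MPPT curve the power reference scales with the cube of rotor speed, and smoothing can be caricatured as a first-order low-pass filter (the gain `k_opt` and filter constant below are purely illustrative, not values from the paper):

```python
def mppt_power_reference(rotor_speed, k_opt=0.5):
    """Variable-speed MPPT reference: on the optimal power curve,
    P_ref = k_opt * omega**3, where k_opt depends on blade aerodynamics
    and air density (illustrative default here)."""
    return k_opt * rotor_speed ** 3

def smooth_power(power_series, alpha=0.1):
    """First-order low-pass (exponential) filter, a minimal stand-in for
    the power smoothing discussed for the grid-connected DFIG."""
    smoothed, s = [], power_series[0]
    for p in power_series:
        s += alpha * (p - s)
        smoothed.append(s)
    return smoothed
```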

  5. The dynamic ocean biological pump: Insights from a global compilation of particulate organic carbon, CaCO3, and opal concentration profiles from the mesopelagic

    NASA Astrophysics Data System (ADS)

    Lam, Phoebe J.; Doney, Scott C.; Bishop, James K. B.

    2011-09-01

    We have compiled a global data set of 62 open ocean profiles of particulate organic carbon (POC), CaCO3, and opal concentrations collected by large volume in situ filtration in the upper 1000 m over the last 30 years. We define concentration-based metrics for the strength (POC concentration at depth) and efficiency (attenuation of POC with depth in the mesopelagic) of the biological pump. We show that the strength and efficiency of the biological pump are dynamic and are characterized by a regime of constant and high transfer efficiency at low to moderate surface POC and a bloom regime where the height of the bloom is characterized by a weak deep biological pump and low transfer efficiency. The variability in POC attenuation length scale manifests in a clear decoupling between the strength of the shallow biological pump (e.g., POC at the export depth) and the strength of the deep biological pump (POC at 500 m). We suggest that the paradigm of diatom-driven export production stems from too restrictive a perspective on upper mesopelagic dynamics. Indeed, our full mesopelagic analysis suggests that large, blooming diatoms have low transfer efficiency and thus may not export substantially to depth; rather, our analysis suggests that ecosystems characterized by smaller cells and moderately high %CaCO3 have a high mesopelagic transfer efficiency and can have higher POC concentrations in the deep mesopelagic even with relatively low surface or near-surface POC. This has negative implications for the carbon sequestration prospects of deliberate iron fertilization.
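    The attenuation and transfer-efficiency metrics discussed here are commonly framed with the Martin power law; a minimal sketch (b = 0.86 is the canonical open-ocean exponent, and the study's point is precisely that the effective attenuation varies between regimes):

```python
def martin_poc(poc_export, depth, export_depth=100.0, b=0.86):
    """Martin power-law attenuation, POC(z) = POC(z0) * (z/z0)**(-b),
    the classical description of POC loss through the mesopelagic."""
    return poc_export * (depth / export_depth) ** (-b)

def transfer_efficiency(poc_export, poc_deep):
    """Concentration-based efficiency metric: fraction of export-depth
    POC still present at depth (e.g. 500 m)."""
    return poc_deep / poc_export
```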

  6. Maximizing the performance of a multiple-stage variable-throat venturi scrubber for particle collection

    NASA Astrophysics Data System (ADS)

    Muir, D. M.; Akeredolu, F.

    The high collection efficiencies that are required nowadays to meet the stricter pollution control standards necessitate the use of high-energy scrubbers, such as the venturi scrubber, for the arrestment of fine particulate matter from exhaust gas streams. To achieve more energy-efficient particle collection, several venturi stages may be used in series. This paper is principally a theoretical investigation of the performance of a multiple-stage venturi scrubber, the main objective of the study being to establish the best venturi design configuration for any given set of operating conditions. A mathematical model is used to predict collection efficiency vs pressure drop relationships for particle sizes in the range 0.2-5.0 μm for one-, two-, three- and four-stage scrubbers. The theoretical predictions are borne out qualitatively by experimental work. The paper shows that the three-stage venturi produces the highest collection efficiencies over the normal operating range except for the collection of very fine particles at low pressure drops, when the single-stage venturi is best. The significant improvement in performance achieved by the three-stage venturi when compared with conventional single-stage operation increases as both the particle size and system pressure drop increase.
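    For stages operating in series, overall collection efficiency follows from multiplying the per-stage penetrations; a minimal sketch of that bookkeeping (the specific stage values in the test are illustrative):

```python
def overall_collection_efficiency(stage_efficiencies):
    """Collection efficiency of scrubber stages in series: penetrations
    (1 - eta_i) multiply, so eta_total = 1 - prod(1 - eta_i). This is why
    several lower-energy stages can out-collect a single high-energy
    stage at the same total pressure drop."""
    penetration = 1.0
    for eta in stage_efficiencies:
        penetration *= 1.0 - eta
    return 1.0 - penetration
```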

  7. Evaluating the causes of photovoltaics cost reduction: Why is PV different?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trancik, Jessika; McNerney, James; Kavlak, Goksin

    The goals of this project were to quantify sources of cost reduction in photovoltaics (PV), improve theories of technological evolution, develop new analytical methods, and formulate guidelines for continued cost reduction in photovoltaics. A number of explanations have been suggested for why photovoltaics have come down in cost rapidly over time, including increased production rates, significant R&D expenditures, heavy patenting activity, decreasing material and input costs, scale economies, reduced plant construction costs, and higher conversion efficiencies. We classified these proposed causes into low-level factors and high-level drivers. Low-level factors include technical characteristics, such as module efficiency or wafer area, which are easily posed in terms of variables of a cost equation. High-level factors include scale economies, research and development (R&D), and learning-by-doing.
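    A minimal example of a low-level cost-equation variable of the kind described (this identity is a standard PV accounting relation, not the project's actual cost model):

```python
def module_cost_per_watt(cost_per_m2, efficiency, stc_irradiance=1000.0):
    """Dollars per peak watt equal areal module cost divided by
    (conversion efficiency x standard-test irradiance, 1000 W/m^2),
    showing directly how an efficiency gain, one low-level factor,
    lowers $/W."""
    return cost_per_m2 / (efficiency * stc_irradiance)
```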

  8. Does intrinsic light heterogeneity in Ricinus communis L. monospecific thickets drive species' population dynamics?

    PubMed

    Goyal, Neha; Shah, Kanhaiya; Sharma, Gyan Prakash

    2018-06-19

    Ricinus communis L. colonizes heterogeneous urban landscapes as monospecific thickets. The ecological understanding on colonization success of R. communis population due to variable light availability is lacking. Therefore, to understand the effect of intrinsic light heterogeneity on species' population dynamics, R. communis populations exposed to variable light availability (low, intermediate, and high) were examined for performance strategies through estimation of key vegetative, eco-physiological, biochemical, and reproductive traits. Considerable variability existed in studied plant traits in response to available light. Individuals inhabiting high-light conditions exhibited high eco-physiological efficiency and reproductive performance that potentially confers population boom. Individuals exposed to low light showed poor performance in terms of eco-physiology and reproduction, which attribute to bust. However, individuals in intermediate light were observed to be indeterminate to light availability, potentially undergoing trait modulations with uncertainty of available light. Heterogeneous light availability potentially drives the boom and bust cycles in R. communis monospecific thickets. Such boom and bust cycles subsequently affect species' dominance, persistence, collapse, and/or resurgence as an aggressive colonizer in contrasting urban environments. The study fosters extensive monitoring of R. communis thickets to probe underlying mechanism(s) affecting expansions and/or collapses of colonizing populations.

  9. Population activity statistics dissect subthreshold and spiking variability in V1.

    PubMed

    Bányai, Mihály; Koman, Zsombor; Orbán, Gergő

    2017-07-01

    Response variability, as measured by fluctuating responses upon repeated performance of trials, is a major component of neural responses, and its characterization is key to interpreting high dimensional population recordings. Response variability and covariability display predictable changes upon changes in stimulus and cognitive or behavioral state, providing an opportunity to test the predictive power of models of neural variability. Still, there is little agreement on which model to use as a building block for population-level analyses, and the choice of variability model is often treated as a matter of convention. We investigate two competing models, the doubly stochastic Poisson (DSP) model assuming stochasticity at spike generation, and the rectified Gaussian (RG) model tracing variability back to membrane potential variance, to analyze stimulus-dependent modulation of both single-neuron and pairwise response statistics. Using a pair of model neurons, we demonstrate that the two models predict similar single-cell statistics. However, DSP and RG models have contradicting predictions on the joint statistics of spiking responses. To test the models against data, we build a population model to simulate stimulus change-related modulations in pairwise response statistics. We use single-unit data from the primary visual cortex (V1) of monkeys to show that while model predictions for variance are qualitatively similar to experimental data, only the RG model's predictions are compatible with joint statistics. These results suggest that models using Poisson-like variability might fail to capture important properties of response statistics. We argue that membrane potential-level modeling of stochasticity provides an efficient strategy to model correlations. NEW & NOTEWORTHY Neural variability and covariability are puzzling aspects of cortical computations. 
For efficient decoding and prediction, models of information encoding in neural populations hinge on an appropriate model of variability. Our work shows that stimulus-dependent changes in pairwise but not in single-cell statistics can differentiate between two widely used models of neuronal variability. Contrasting model predictions with neuronal data provides hints on the noise sources in spiking and provides constraints on statistical models of population activity. Copyright © 2017 the American Physiological Society.
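    The two competing single-neuron generative models can be sketched as samplers (a toy illustration of the model classes, not the paper's population model; the parameter values in the test are arbitrary):

```python
import random

def dsp_count(rate_mean, rate_sd, rng):
    """Doubly stochastic Poisson (DSP): draw a trial rate from a Gaussian
    (rectified at zero), then a Poisson count by inversion sampling
    (adequate for small rates); stochasticity sits at spike generation."""
    lam = max(0.0, rng.gauss(rate_mean, rate_sd))
    u, k = rng.random(), 0
    p = s = 2.718281828459045 ** (-lam)  # P(K=0) = exp(-lam)
    while u > s and p > 0.0:
        k += 1
        p *= lam / k
        s += p
    return k

def rg_count(v_mean, v_sd, gain, rng):
    """Rectified Gaussian (RG): the count is a thresholded, scaled
    'membrane potential' sample, so variability originates below
    threshold rather than at spike generation."""
    return round(gain * max(0.0, rng.gauss(v_mean, v_sd)))
```

Because a shared membrane-potential fluctuation propagates through the rectifier, the RG construction naturally produces the stimulus-dependent pairwise correlations that distinguish the models in the paper.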

  10. Complexity reduction in the H.264/AVC using highly adaptive fast mode decision based on macroblock motion activity

    NASA Astrophysics Data System (ADS)

    Abdellah, Skoudarli; Mokhtar, Nibouche; Amina, Serir

    2015-11-01

    The H.264/AVC video coding standard is used in a wide range of applications, from video conferencing to high-definition television, owing to its high compression efficiency. This efficiency is mainly acquired from the newly allowed prediction schemes, including variable block modes. However, these schemes incur high computational complexity in selecting the optimal mode. Consequently, complexity reduction in the H.264/AVC encoder has recently become a very challenging task in the video compression domain, especially when implementing the encoder in real-time applications. Fast mode decision algorithms play an important role in reducing the overall complexity of the encoder. In this paper, we propose an adaptive fast inter-mode algorithm based on motion activity, temporal stationarity, and spatial homogeneity. This algorithm predicts the motion activity of the current macroblock from its neighboring blocks and identifies temporally stationary regions and spatially homogeneous regions using adaptive threshold values based on video content features. Extensive experimental work has been done under the High profile, and results show that the proposed algorithm effectively reduces the computational complexity by 53.18% on average compared with the reference software encoder, while maintaining the high coding efficiency of H.264/AVC, incurring only a 0.097 dB loss in total peak signal-to-noise ratio and a 0.228% increase in total bit rate.
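    The decision logic can be illustrated with a toy early-termination rule in the spirit of the paper (the thresholds below are fixed placeholders, whereas the paper derives them adaptively from video content):

```python
def candidate_modes(neighbor_mvs, sad, t_motion=2, t_sad=1000):
    """Sketch of a fast inter-mode decision: predict the current
    macroblock's motion activity from neighboring motion vectors; if it
    looks temporally stationary and spatially homogeneous (small SAD),
    test only the large partitions and skip the costly variable-block
    search over all H.264 partition sizes."""
    # predicted motion activity: largest neighboring motion-vector magnitude
    activity = max((abs(dx) + abs(dy) for dx, dy in neighbor_mvs), default=0)
    if activity <= t_motion and sad <= t_sad:
        return ["SKIP", "16x16"]                       # early termination
    return ["16x16", "16x8", "8x16", "8x8", "8x4", "4x8", "4x4"]
```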

  11. Efficient Reformulation of HOTFGM: Heat Conduction with Variable Thermal Conductivity

    NASA Technical Reports Server (NTRS)

    Zhong, Yi; Pindera, Marek-Jerzy; Arnold, Steven M. (Technical Monitor)

    2002-01-01

    Functionally graded materials (FGMs) have become one of the major research topics in the mechanics of materials community during the past fifteen years. FGMs are heterogeneous materials, characterized by spatially variable microstructure, and thus spatially variable macroscopic properties, introduced to enhance material or structural performance. The spatially variable material properties make FGMs challenging to analyze. The review of the various techniques employed to analyze the thermodynamical response of FGMs reveals two distinct and fundamentally different computational strategies, called uncoupled macromechanical and coupled micromechanical approaches by some investigators. The uncoupled macromechanical approaches ignore the effect of microstructural gradation by employing specific spatial variations of material properties, which are either assumed or obtained by local homogenization, thereby resulting in erroneous results under certain circumstances. In contrast, the coupled approaches explicitly account for the micro-macrostructural interaction, albeit at a significantly higher computational cost. The higher-order theory for functionally graded materials (HOTFGM) developed by Aboudi et al. is representative of the coupled approach. However, despite its demonstrated utility in applications where micro-macrostructural coupling effects are important, the theory's full potential is yet to be realized because the original formulation of HOTFGM is computationally intensive. This, in turn, limits the size of problems that can be solved due to the large number of equations required to mimic realistic material microstructures. Therefore, a basis for an efficient reformulation of HOTFGM, referred to as user-friendly formulation, is developed herein, and subsequently employed in the construction of the efficient reformulation using the local/global conductivity matrix approach. 
In order to extend HOTFGM's range of applicability, spatially variable thermal conductivity capability at the local level is incorporated into the efficient reformulation. Analytical solutions to validate both the user-friendly and efficient reformulations are also developed. Volume discretization sensitivity and validation studies, as well as a practical application of the developed efficient reformulation, are subsequently carried out. The presented results illustrate the accuracy and implementability of both the user-friendly formulation and the efficient reformulation of HOTFGM.

  12. Quantifying Impact of Chromosome Copy Number on Recombination in Escherichia coli.

    PubMed

    Reynolds, T Steele; Gill, Ryan T

    2015-07-17

    The ability to precisely and efficiently recombineer synthetic DNA into organisms of interest in a quantitative manner is a key requirement in genome engineering. Even though considerable effort has gone into the characterization of recombination in Escherichia coli, there is still substantial variability in reported recombination efficiencies. We hypothesized that this observed variability could, in part, be explained by the variability in chromosome copy number as well as the location of the replication forks relative to the recombination site. During rapid growth, E. coli cells may contain several pairs of open replication forks. While recombineered forks are resolving and segregating within the population, changes in apparent recombineering efficiency should be observed. In the case of dominant phenotypes, we predicted and then experimentally confirmed that the apparent recombination efficiency declined during recovery until complete segregation of recombineered and wild-type genomes had occurred. We observed the reverse trend for recessive phenotypes. The observed changes in apparent recombination efficiency were found to be in agreement with mathematical calculations based on our proposed mechanism. We also provide a model that can be used to estimate the total segregated recombination efficiency based on an initial efficiency and growth rate. These results emphasize the importance of employing quantitative strategies in the design of genome-scale engineering efforts.
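
The proposed mechanism lends itself to a toy simulation (an illustrative sketch under assumed parameters, not the authors' published model): each cell carries several genome copies, one of which is recombineered, and random partitioning of replicated copies at each division gradually segregates edited and wild-type genomes, so the apparent dominant-marker efficiency declines toward the true edited-genome fraction.

```python
import random

def simulate_segregation(n_copies=4, n_cells=2000, generations=6, seed=1):
    """Toy segregation model: each cell carries n_copies genomes, initially
    one edited; every generation each genome is duplicated and the resulting
    2*n_copies copies are partitioned at random into two daughters of
    n_copies each.  Returns, per generation, the apparent (dominant-marker)
    efficiency, i.e. the fraction of cells with at least one edited genome,
    alongside the true fraction of edited genomes, which stays near
    1/n_copies in expectation."""
    random.seed(seed)
    cells = [[True] + [False] * (n_copies - 1) for _ in range(n_cells)]
    history = []
    for _ in range(generations):
        apparent = sum(any(c) for c in cells) / len(cells)
        true_frac = sum(sum(c) for c in cells) / (n_copies * len(cells))
        history.append((apparent, true_frac))
        next_gen = []
        for c in cells:
            pool = c * 2            # replicate every genome copy
            random.shuffle(pool)    # random partition into two daughters
            next_gen.append(pool[:n_copies])
            next_gen.append(pool[n_copies:])
        cells = next_gen
        if len(cells) > n_cells:    # subsample to keep the run cheap
            cells = random.sample(cells, n_cells)
    return history
```

With four copies per cell, the apparent efficiency starts at 100% and settles toward the true 25% as segregation completes, matching the declining trend the abstract reports for dominant phenotypes.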

  13. Chapter 13: Assessing Persistence and Other Evaluation Issues Cross-Cutting Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W; Violette, Daniel M.

Addressing other evaluation issues that have been raised in the context of energy efficiency programs, this chapter focuses on methods used to address the persistence of energy savings, which is an important input to the benefit/cost analysis of energy efficiency programs and portfolios. In addition to discussing 'persistence' (which refers to the stream of benefits over time from an energy efficiency measure or program), this chapter provides a summary treatment of these issues: synergies across programs, rebound, dual baselines, and errors in variables (the measurement and/or accuracy of input variables to the evaluation).

  14. On the use of LiF:Mg,Ti thermoluminescence dosemeters in space--a critical review.

    PubMed

    Horowitz, Y S; Satinger, D; Fuks, E; Oster, L; Podpalov, L

    2003-01-01

The use of LiF:Mg,Ti thermoluminescence dosemeters (TLDs) in space radiation fields is reviewed. It is demonstrated in the context of modified track structure theory and microdosimetric track structure theory that there is no unique correlation between the relative thermoluminescence (TL) efficiency of heavy charged particles, neutrons of all energies and linear energy transfer (LET). Many experimental measurements dating back more than two decades also demonstrate the multivalued, non-universal relationship between relative TL efficiency and LET. It is further demonstrated that the relative intensities of the dosimetric peaks, and especially the high-temperature structure, are dependent on a large number of variables, some controllable, some not. It is concluded that TL techniques employing the concept of LET (e.g. measurement of total dose, the high-temperature ratio (HTR) methods and other combinations of the relative TL efficiency of the various peaks used to estimate average Q or simulate Q-LET relationships) should be regarded as lacking a sound theoretical basis, as highly prone to error, and as lacking reproducibility/universality due to the absence of a standardised experimental protocol essential to reliable experimental methodology.

  15. Does price efficiency increase with trading volume? Evidence of nonlinearity and power laws in ETFs

    NASA Astrophysics Data System (ADS)

    Caginalp, Gunduz; DeSantis, Mark

    2017-02-01

    Whether efficiency increases with increasing volume is an important issue that may illuminate trader strategies and distinguish between market theories. This relationship is tested using 124,236 daily observations comprising 68 large and liquid U.S. equity exchange traded funds (ETFs). ETFs have the advantage that efficiency can be measured in terms of the deviation between the trading price and the underlying net asset value that is reported each day. Our findings support the hypothesis that the relationship between volume and efficiency is nonlinear. Indeed, efficiency increases as volume increases from low to moderately high levels, but then decreases as volume increases further. The first part tends to support the idea that higher volume simply facilitates transactions and maintains efficiency, while the latter part, i.e., even higher volumes, supports the ansatz that increased volume is associated with increased speculation that ignores valuation and decreases efficiency. The results are consistent with the hypothesis that valuation is only part of the motivation for traders. Our methodology accounts for fund heterogeneity and contemporaneous correlations. Similar results are obtained when daily price volatility is introduced as an additional independent variable.
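
The inverted-U relationship can be illustrated by fitting a concave quadratic and locating its turning point, where efficiency stops rising and starts falling with further volume. The data below are synthetic stand-ins, not the ETF panel used in the study, and the fit is a plain least-squares sketch rather than the authors' panel methodology.

```python
def quad_fit(xs, ys):
    """Ordinary least-squares fit of y = a + b*x + c*x**2 via the 3x3
    normal equations, solved by Gaussian elimination (stdlib only)."""
    # Power sums for the design matrix [1, x, x^2].
    s = [sum(x ** k for x in xs) for k in range(5)]
    A = [[s[0], s[1], s[2]],
         [s[1], s[2], s[3]],
         [s[2], s[3], s[4]]]
    rhs = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    # Gaussian elimination with partial pivoting.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        rhs[i], rhs[p] = rhs[p], rhs[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for col in range(i, 3):
                A[r][col] -= f * A[i][col]
            rhs[r] -= f * rhs[i]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        coef[i] = (rhs[i] - sum(A[i][c] * coef[c]
                                for c in range(i + 1, 3))) / A[i][i]
    return coef  # [a, b, c]

# Hypothetical illustration: efficiency peaks at a moderate log-volume.
xs = [v / 10 for v in range(1, 21)]               # e.g. log10(volume)
ys = [0.2 + 1.5 * x - 0.5 * x * x for x in xs]    # concave, peak at x = 1.5
a, b, c = quad_fit(xs, ys)
peak = -b / (2 * c)                               # turning-point volume
```

A negative quadratic coefficient with an interior turning point is exactly the signature of "efficiency rises, then falls" that the abstract describes.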

  16. Effects of Process-Oriented and Product-Oriented Worked Examples and Prior Knowledge on Learner Problem Solving and Attitude: A Study in the Domain of Microeconomics

    ERIC Educational Resources Information Center

    Brooks, Christopher Darren

    2009-01-01

    The purpose of this study was to investigate the effectiveness of process-oriented and product-oriented worked example strategies and the mediating effect of prior knowledge (high versus low) on problem solving and learner attitude in the domain of microeconomics. In addition, the effect of these variables on learning efficiency as well as the…

  17. TEPAPA: a novel in silico feature learning pipeline for mining prognostic and associative factors from text-based electronic medical records.

    PubMed

    Lin, Frank Po-Yen; Pokorny, Adrian; Teng, Christina; Epstein, Richard J

    2017-07-31

    Vast amounts of clinically relevant text-based variables lie undiscovered and unexploited in electronic medical records (EMR). To exploit this untapped resource, and thus facilitate the discovery of informative covariates from unstructured clinical narratives, we have built a novel computational pipeline termed Text-based Exploratory Pattern Analyser for Prognosticator and Associator discovery (TEPAPA). This pipeline combines semantic-free natural language processing (NLP), regular expression induction, and statistical association testing to identify conserved text patterns associated with outcome variables of clinical interest. When we applied TEPAPA to a cohort of head and neck squamous cell carcinoma patients, plausible concepts known to be correlated with human papilloma virus (HPV) status were identified from the EMR text, including site of primary disease, tumour stage, pathologic characteristics, and treatment modalities. Similarly, correlates of other variables (including gender, nodal status, recurrent disease, smoking and alcohol status) were also reliably recovered. Using highly-associated patterns as covariates, a patient's HPV status was classifiable using a bootstrap analysis with a mean area under the ROC curve of 0.861, suggesting its predictive utility in supporting EMR-based phenotyping tasks. These data support using this integrative approach to efficiently identify disease-associated factors from unstructured EMR narratives, and thus to efficiently generate testable hypotheses.
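
The statistical association-testing step of such a pipeline can be sketched with a Fisher exact test on a 2x2 pattern-by-outcome table. The abstract does not specify which test TEPAPA uses, so this is an assumption for illustration, and the counts in the usage example are made up.

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 contingency table
                     pattern present   pattern absent
        outcome+            a                b
        outcome-            c                d
    computed from the hypergeometric distribution (stdlib only)."""
    row1, col1, n = a + b, a + c, a + b + c + d

    def p_table(x):  # P(top-left cell == x) with all margins fixed
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p_table(a)
    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    # Sum probabilities of all tables at least as extreme as observed.
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs + 1e-12)
```

For example, a text pattern present in 8 of 10 HPV-positive but only 1 of 10 HPV-negative records gives `fisher_exact_p(8, 2, 1, 9)`, a small p-value flagging the pattern as a candidate covariate.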

  18. Processing Pipeline of Sugarcane Spectral Response to Characterize the Fallen Plants Phenomenon

    NASA Astrophysics Data System (ADS)

    Solano, Agustín; Kemerer, Alejandra; Hadad, Alejandro

    2016-04-01

Nowadays, in agronomic systems it is possible to manage inputs in a variable manner to improve the efficiency of the agronomic industry and optimize the logistics of the harvesting process. Accordingly, the use of remote sensing tools and computational methods was proposed for sugarcane culture to identify useful areas in the cultivated lands, with the objective of using these areas for variable management of the crop. When fallen stalks are present at the moment of harvest, extraneous material (vegetal or mineral) is collected along with them. This material is not millable, and when it enters the sugar mill it causes significant losses of efficiency in the sugar-extraction process and affects sugar quality. Considering this issue, the spectral response of sugarcane plants in aerial multispectral images was studied. The spectral response was analyzed in different bands of the electromagnetic spectrum. Then, the aerial images were segmented to obtain homogeneous regions useful for producers to make decisions on the use of inputs and resources according to the variability of the system (existence of fallen cane and standing cane). The obtained segmentation results were satisfactory: it was possible to identify regions with fallen cane and regions with standing cane with high precision rates.
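
A simplified stand-in for the segmentation step is thresholding a vegetation index computed from red and near-infrared reflectances, on the premise that fallen cane mixed with soil reflects less like healthy vegetation. The band choice, index, and threshold here are illustrative assumptions, not the paper's method.

```python
def classify_pixels(pixels, ndvi_threshold=0.5):
    """Label pixels as 'standing' or 'fallen' cane from (red, NIR)
    reflectance pairs via an NDVI threshold.  NDVI = (NIR - red) /
    (NIR + red); dense standing canopy scores high, fallen cane mixed
    with exposed soil scores low."""
    labels = []
    for red, nir in pixels:
        ndvi = (nir - red) / (nir + red)
        labels.append('standing' if ndvi >= ndvi_threshold else 'fallen')
    return labels
```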

  19. Bioactive compounds and antioxidant activity exhibit high intraspecific variability in Pleurotus ostreatus mushrooms and correlate well with cultivation performance parameters.

    PubMed

    Koutrotsios, Georgios; Kalogeropoulos, Nick; Stathopoulos, Pantelis; Kaliora, Andriana C; Zervakis, Georgios I

    2017-05-01

Experimental data related to oyster mushroom production and nutritional properties usually derive from the examination of only one strain, and hence their representativeness/usefulness is questionable. This work aims at assessing intraspecific variability in Pleurotus ostreatus by studying 16 strains, under the same conditions, with respect to essential cultivation and mushroom quality aspects, and by defining the impact of intrinsic/genetic factors on such parameters. Hence, mushroom yield, earliness, crop length, biological efficiency, productivity, and their content in selected macro- and microconstituents (e.g. fatty acids, sterols, individual phenolic compounds, terpenic acids, glucans) as well as their antioxidant properties (i.e., antiradical activity, ferric reducing potential, inhibition of serum oxidation) were assayed. The effect of intrinsic/genetic factors was evident, especially as regards earliness, yield of each production flush and mushroom weight, whereas biological efficiency was not particularly influenced by the cultivated strain. Moreover, phenolics, ergosterol and antiradical activity demonstrated significant variability among strains, in contrast to what was observed for fatty acids, β-glucans and ferric reducing potential. The observed heterogeneity reveals the limitations of using a low number of strains for evaluating mushroom production and/or their content in bioactive compounds and, as evidenced, it is valuable for breeding and commercial purposes.

  20. Overcoming ecologic bias using the two-phase study design.

    PubMed

    Wakefield, Jon; Haneuse, Sebastien J-P A

    2008-04-15

    Ecologic (aggregate) data are widely available and widely utilized in epidemiologic studies. However, ecologic bias, which arises because aggregate data cannot characterize within-group variability in exposure and confounder variables, can only be removed by supplementing ecologic data with individual-level data. Here the authors describe the two-phase study design as a framework for achieving this objective. In phase 1, outcomes are stratified by any combination of area, confounders, and error-prone (or discretized) versions of exposures of interest. Phase 2 data, sampled within each phase 1 stratum, provide accurate measures of exposure and possibly of additional confounders. The phase 1 aggregate-level data provide a high level of statistical power and a cross-classification by which individuals may be efficiently sampled in phase 2. The phase 2 individual-level data then provide a control for ecologic bias by characterizing the within-area variability in exposures and confounders. In this paper, the authors illustrate the two-phase study design by estimating the association between infant mortality and birth weight in several regions of North Carolina for 2000-2004, controlling for gender and race. This example shows that the two-phase design removes ecologic bias and produces gains in efficiency over the use of case-control data alone. The authors discuss the advantages and disadvantages of the approach.

  1. Computational health economics for identification of unprofitable health care enrollees

    PubMed Central

    Rose, Sherri; Bergquist, Savannah L.; Layton, Timothy J.

    2017-01-01

Health insurers may attempt to design their health plans to attract profitable enrollees while deterring unprofitable ones. Such insurers would not be delivering socially efficient levels of care by providing health plans that maximize societal benefit, but rather intentionally distorting plan benefits to avoid high-cost enrollees, potentially to the detriment of health and efficiency. In this work, we focus on a specific component of health plan design at risk for health insurer distortion in the Health Insurance Marketplaces: the prescription drug formulary. We introduce an ensembled machine learning function to determine whether drug utilization variables are predictive of a new measure of enrollee unprofitability we derive, and thus vulnerable to distortions by insurers. Our implementation also contains a unique application-specific variable selection tool. This study demonstrates that super learning is effective in extracting the relevant signal for this prediction problem, and that a small number of drug variables can be used to identify unprofitable enrollees. The results are both encouraging and concerning. While risk adjustment appears to have been reasonably successful at weakening the relationship between therapeutic-class-specific drug utilization and unprofitability, some classes remain predictive of insurer losses. The vulnerable enrollees whose prescription drug regimens include drugs in these classes may need special protection from regulators in health insurance market design. PMID:28369273

  2. Analysis of a Temperature-Controlled Exhaust Thermoelectric Generator During a Driving Cycle

    NASA Astrophysics Data System (ADS)

    Brito, F. P.; Alves, A.; Pires, J. M.; Martins, L. B.; Martins, J.; Oliveira, J.; Teixeira, J.; Goncalves, L. M.; Hall, M. J.

    2016-03-01

Thermoelectric generators can be used in automotive exhaust energy recovery. As car engines operate under widely varying loads, it is a challenge to design a system that operates efficiently under these variable conditions. This means being able to avoid excessive thermal dilution under low engine loads and being able to operate under high-load, high-temperature events without the need to deflect the exhaust gases with bypass systems. The authors have previously proposed a thermoelectric generator (TEG) concept with temperature control based on the operating principle of the variable conductance heat pipe/thermosiphon. This strategy allows the TEG modules’ hot face to work under a constant, optimized temperature. The variable engine load will only affect the number of modules exposed to the heat source, not the heat transfer temperature. This prevents module overheating under high engine loads and avoids thermal dilution under low engine loads. The present work assesses the merit of the aforementioned approach by analysing the generator output during driving cycles simulated with an energy model of a light vehicle. For the baseline evaporator and condenser configuration, the driving cycle averaged electrical power outputs were approximately 320 W and 550 W for the type-approval Worldwide harmonized light vehicles test procedure Class 3 driving cycle and for a real-world highway driving cycle, respectively.
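
The operating principle, exposing only as many modules as the incoming exhaust heat can drive at the fixed optimal hot-face temperature, can be sketched as a simple saturation model. The module count and per-module figures below are illustrative assumptions, not values from the paper.

```python
def teg_output(q_exhaust_w, n_modules=40, q_module_w=250.0,
               p_module_w=12.0):
    """Toy model of the temperature-controlled TEG concept: the
    thermosiphon exposes just enough modules to absorb the incoming
    exhaust heat at a fixed, optimal hot-face temperature, so each
    active module produces a constant p_module_w and electrical
    output scales with the number of active modules, saturating
    once all n_modules are active."""
    n_active = min(n_modules, int(q_exhaust_w // q_module_w))
    return n_active * p_module_w
```

Low exhaust heat activates few modules (no thermal dilution of the active ones); very high exhaust heat simply saturates the generator instead of overheating it.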

  3. Effects of variable practice on the motor learning outcomes in manual wheelchair propulsion.

    PubMed

    Leving, Marika T; Vegter, Riemer J K; de Groot, Sonja; van der Woude, Lucas H V

    2016-11-23

Handrim wheelchair propulsion is a cyclic skill that needs to be learned during rehabilitation. It has been suggested that more variability in propulsion technique benefits the motor learning process of wheelchair propulsion. The purpose of this study was to determine the influence of variable practice on the motor learning outcomes of wheelchair propulsion in able-bodied participants. Variable practice was introduced in the form of wheelchair basketball practice and wheelchair-skill practice. Motor learning was operationalized as improvements in mechanical efficiency and propulsion technique. Eleven participants in the variable practice group and 12 participants in the control group performed an identical pre-test and a post-test. Pre- and post-test were performed in a wheelchair on a motor-driven treadmill (1.11 m/s) at a relative power output of 0.23 W/kg. Energy consumption and the propulsion technique variables with their respective coefficients of variation were calculated. Between the pre- and the post-test the variable practice group received 7 practice sessions. During each practice session participants performed one hour of variable practice, consisting of five wheelchair-skill tasks and a 30 min wheelchair basketball game. The control group did not receive any practice between the pre- and the post-test. Comparison of the pre- and the post-test showed that the variable practice group significantly improved mechanical efficiency (4.5 ± 0.6% → 5.7 ± 0.7%) in contrast to the control group (4.5 ± 0.6% → 4.4 ± 0.5%) (group × time interaction effect p < 0.001). With regard to propulsion technique, both groups significantly reduced the push frequency and increased the contact angle of the hand with the handrim (within-group time effect). No significant group × time interaction effects were found for propulsion technique.
With regard to propulsion variability, the variable practice group increased variability compared to the control group (interaction effect p < 0.001). Compared to the control group, variable practice resulted in increased mechanical efficiency and increased variability. Interestingly, the large relative improvement in mechanical efficiency was concomitant with only moderate improvements in propulsion technique, which were similar in the control group, suggesting that other factors besides propulsion technique contributed to the lower energy expenditure.

  4. Voltammetric and Mathematical Evidence for Dual Transport Mediation of Serotonin Clearance In Vivo

    PubMed Central

    Wood, Kevin M.; Zeqja, Anisa; Nijhout, H. Frederik; Reed, Michael C.; Best, Janet; Hashemi, Parastoo

    2014-01-01

    The neurotransmitter serotonin underlies many of the brain’s functions. Understanding serotonin neurochemistry is important for improving treatments for neuropsychiatric disorders such as depression. Antidepressants commonly target serotonin clearance via serotonin transporters (SERTs) and have variable clinical effects. Adjunctive therapies, targeting other systems including serotonin autoreceptors, also vary clinically and carry adverse consequences. Fast scan cyclic voltammetry (FSCV) is particularly well suited for studying antidepressant effects on serotonin clearance and autoreceptors by providing real-time chemical information on serotonin kinetics in vivo. However, the complex nature of in vivo serotonin responses makes it difficult to interpret experimental data with established kinetic models. Here, we electrically stimulated the mouse medial forebrain bundle (MFB) to provoke and detect terminal serotonin in the substantia nigra reticulata (SNr). In response to MFB stimulation we found three dynamically distinct serotonin signals. To interpret these signals we developed a computational model that supports two independent serotonin reuptake mechanisms (high affinity, low efficiency reuptake mechanism and low affinity, high efficiency reuptake system) and bolsters an important inhibitory role for the serotonin autoreceptors. Our data and analysis, afforded by the powerful combination of voltammetric and theoretical methods, gives new understanding of the chemical heterogeneity of serotonin dynamics in the brain. This diverse serotonergic matrix likely contributes to clinical variability of antidepressants. PMID:24702305
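
The dual-transport hypothesis corresponds to a clearance equation with two parallel Michaelis-Menten terms, one high-affinity/low-capacity (SERT-like) and one low-affinity/high-capacity. A minimal Euler-integration sketch follows; all parameter values are illustrative assumptions, not constants fitted in the paper.

```python
def serotonin_decay(s0=1.0, dt=0.001, t_end=5.0):
    """Euler integration of ds/dt = -V1*s/(K1+s) - V2*s/(K2+s):
    serotonin concentration s cleared by two parallel
    Michaelis-Menten transporters.  Units and parameter values are
    arbitrary, chosen only to separate the two regimes."""
    V1, K1 = 0.5, 0.05   # high affinity (small Km), low capacity
    V2, K2 = 2.0, 5.0    # low affinity (large Km), high capacity
    s, t, trace = s0, 0.0, []
    while t < t_end:
        trace.append(s)
        uptake = V1 * s / (K1 + s) + V2 * s / (K2 + s)
        s = max(0.0, s - dt * uptake)
        t += dt
    return trace
```

At high concentrations the low-affinity system dominates the clearance rate; near baseline the high-affinity system takes over, which is why a single-transporter model cannot reproduce all three signal shapes.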

  5. Pressure Pulsation in a High Head Francis Turbine Operating at Variable Speed

    NASA Astrophysics Data System (ADS)

    Sannes, D. B.; Iliev, I.; Agnalt, E.; Dahlhaug, O. G.

    2018-06-01

This paper presents the preliminary work of the master thesis of the author, written at the Norwegian University of Science and Technology. Today, many Francis turbines experience formation of cracks in the runner due to pressure pulsations. This can eventually cause failure. One way to reduce this effect is to change the operating point of the turbine by utilizing variable speed technology. This work presents the results from measurements of the Francis turbine at the Waterpower Laboratory at NTNU. Measurements of pressure pulsations and efficiency were done for the whole operating range of a high head Francis model turbine. The results will be presented in a diagram similar to the Hill chart, but with curves of constant peak-to-peak values instead of constant efficiency curves. This way, it is possible to find an optimal operating point for the same power production where the pressure pulsations are at their lowest. Six points were chosen for further analysis to investigate the effect of changing the speed by ±50 rpm. The analysis shows the best results for operation below BEP when the speed was reduced. The change in speed also introduced the possibility of other frequencies appearing in the system. It is therefore important to avoid runner speeds that can cause resonance in the system.

  6. A Bioinformatic Pipeline for Monitoring of the Mutational Stability of Viral Drug Targets with Deep-Sequencing Technology.

    PubMed

    Kravatsky, Yuri; Chechetkin, Vladimir; Fedoseeva, Daria; Gorbacheva, Maria; Kravatskaya, Galina; Kretova, Olga; Tchurikov, Nickolai

    2017-11-23

    The efficient development of antiviral drugs, including efficient antiviral small interfering RNAs (siRNAs), requires continuous monitoring of the strict correspondence between a drug and the related highly variable viral DNA/RNA target(s). Deep sequencing is able to provide an assessment of both the general target conservation and the frequency of particular mutations in the different target sites. The aim of this study was to develop a reliable bioinformatic pipeline for the analysis of millions of short, deep sequencing reads corresponding to selected highly variable viral sequences that are drug target(s). The suggested bioinformatic pipeline combines the available programs and the ad hoc scripts based on an original algorithm of the search for the conserved targets in the deep sequencing data. We also present the statistical criteria for the threshold of reliable mutation detection and for the assessment of variations between corresponding data sets. These criteria are robust against the possible sequencing errors in the reads. As an example, the bioinformatic pipeline is applied to the study of the conservation of RNA interference (RNAi) targets in human immunodeficiency virus 1 (HIV-1) subtype A. The developed pipeline is freely available to download at the website http://virmut.eimb.ru/. Brief comments and comparisons between VirMut and other pipelines are also presented.

  7. Box-Behnken study design for optimization of bicalutamide-loaded nanostructured lipid carrier: stability assessment.

    PubMed

    Kudarha, Ritu; Dhas, Namdev L; Pandey, Abhijeet; Belgamwar, Veena S; Ige, Pradum P

    2015-01-01

Bicalutamide (BCM) is an anti-androgen drug used to treat prostate cancer. In this study, nanostructured lipid carriers (NLCs) were chosen as a carrier for delivery of BCM, using a Box-Behnken (BB) design to optimize quality attributes such as particle size and entrapment efficiency, which are critical for efficient drug delivery and high therapeutic efficacy. Stability of the formulated NLCs was assessed with respect to storage stability, pH stability, hemolysis, protein stability, serum protein stability and accelerated stability. A hot high-pressure homogenizer was used to formulate the BCM-loaded NLCs. In the BB response surface methodology, total lipid, % liquid lipid and % soya lecithin were selected as independent variables, with particle size and %EE as dependent variables. Scanning electron microscopy (SEM) was used for morphological study of the NLCs. Differential scanning calorimetry and X-ray diffraction were used to study crystalline and amorphous behavior. Analysis of the design space showed that the process was robust, with particle size less than 200 nm and EE up to 78%. The stability studies showed that the carrier was stable under various storage conditions and at different pH values. From the above studies, it can be concluded that NLCs may be a suitable carrier for the delivery of BCM with respect to stability and quality attributes.
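
A three-factor Box-Behnken design, as used here, places runs at the midpoints of the cube edges plus replicated centre points. The sketch below generates the design matrix in coded units; mapping the -1/0/+1 levels onto the actual factor ranges (total lipid, % liquid lipid, % soya lecithin) is omitted, and three centre points are an assumed choice.

```python
from itertools import combinations

def box_behnken(n_factors=3, n_center=3):
    """Box-Behnken design matrix in coded units: for every pair of
    factors take all four (+/-1, +/-1) combinations with the remaining
    factors held at 0, then append n_center centre points.  For three
    factors this yields the classic 12 + n_center run design."""
    runs = []
    for i, j in combinations(range(n_factors), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * n_factors
                row[i], row[j] = a, b
                runs.append(row)
    runs.extend([[0] * n_factors for _ in range(n_center)])
    return runs
```

Each non-centre run varies exactly two factors at a time, which is what lets the design estimate quadratic response-surface terms with far fewer runs than a full factorial.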

  8. High-efficiency in situ resonant inelastic x-ray scattering (iRIXS) endstation at the Advanced Light Source

    DOE PAGES

    Qiao, Ruimin; Li, Qinghao; Zhuo, Zengqing; ...

    2017-03-17

In this paper, an endstation with two high-efficiency soft x-ray spectrographs was developed at Beamline 8.0.1 of the Advanced Light Source, Lawrence Berkeley National Laboratory. The endstation is capable of performing soft x-ray absorption spectroscopy, emission spectroscopy, and, in particular, resonant inelastic soft x-ray scattering (RIXS). Two slit-less variable line-spacing grating spectrographs are installed at different detection geometries. The endstation covers the photon energy range from 80 to 1500 eV. For studying transition-metal oxides, the large detection energy window allows a simultaneous collection of x-ray emission spectra with energies ranging from the O K-edge to the Ni L-edge without moving any mechanical components. The record-high efficiency enables the recording of comprehensive two-dimensional RIXS maps with good statistics within a short acquisition time. By virtue of the large energy window and high throughput of the spectrographs, partial fluorescence yield and inverse partial fluorescence yield signals could be obtained for all transition metal L-edges including Mn. Moreover, the different geometries of these two spectrographs (parallel and perpendicular to the horizontal polarization of the beamline) provide contrasts in RIXS features with two different momentum transfers.

  9. Application of magnetic ionomer for development of very fast and highly efficient uptake of triazo dye Direct Blue 71 form different water samples.

    PubMed

    Khani, Rouhollah; Sobhani, Sara; Beyki, Mostafa Hossein; Miri, Simin

    2018-04-15

This research focuses on removing Direct Blue 71 (DB 71) from aqueous solution in an efficient and very fast route using an ionic-liquid-mediated γ-Fe2O3 magnetic ionomer. 2-Hydroxyethylammonium sulphonate immobilized on γ-Fe2O3 nanoparticles (γ-Fe2O3-2-HEAS) was used for this purpose. Shaking time, medium pH, sorbent concentration and NaNO3 concentration were all found to greatly influence the extent of removal. The optimal removal conditions were determined by response surface methodology based on a four-variable central composite design, in order to obtain maximum removal efficiency and determine the significance and interaction effects of the variables on the removal of the target triazo dye. A removal of 98.2% was achieved under the optimum conditions. The adsorption kinetics and isotherms were well fitted by a pseudo-second-order model and the Freundlich model, respectively. Based on these models, a maximum dye adsorption capacity (Qm) of 47.60 mg g⁻¹ was obtained. Finally, the proposed nano-adsorbent was applied satisfactorily to the removal of the target triazo dye from different water samples. Copyright © 2017 Elsevier Inc. All rights reserved.
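
Pseudo-second-order parameters are conventionally estimated from the linearised form t/q_t = 1/(k2·qe²) + t/qe: regressing t/q_t on t gives qe from the slope and k2 from slope and intercept. The sketch below recovers both from synthetic data generated with the reported capacity qe = 47.6 mg/g and an assumed (not reported) rate constant k2.

```python
def pso_fit(ts, qts):
    """Estimate pseudo-second-order parameters from the linearised form
    t/q_t = 1/(k2*qe**2) + t/qe: simple least-squares regression of
    t/q_t on t, then qe = 1/slope and k2 = slope**2/intercept."""
    ys = [t / q for t, q in zip(ts, qts)]
    n = len(ts)
    mx, my = sum(ts) / n, sum(ys) / n
    slope = (sum((t - mx) * (y - my) for t, y in zip(ts, ys))
             / sum((t - mx) ** 2 for t in ts))
    intercept = my - slope * mx
    qe = 1.0 / slope
    k2 = slope * slope / intercept
    return qe, k2

# Synthetic uptake curve from the closed-form pseudo-second-order model
# q_t = k2*qe^2*t / (1 + k2*qe*t), with an assumed k2 value.
qe_true, k2_true = 47.6, 0.01
ts = [1, 2, 5, 10, 20, 40, 60]
qts = [k2_true * qe_true**2 * t / (1 + k2_true * qe_true * t) for t in ts]
qe_est, k2_est = pso_fit(ts, qts)
```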

  10. System solution to improve energy efficiency of HVAC systems

    NASA Astrophysics Data System (ADS)

    Chretien, L.; Becerra, R.; Salts, N. P.; Groll, E. A.

    2017-08-01

According to recent surveys, heating and air conditioning systems account for over 45% of the total energy usage in US households. Three main types of HVAC systems are available to homeowners: (1) fixed-speed systems, where the compressor cycles on and off to match the cooling load; (2) multi-speed (typically, two-speed) systems, where the compressor can operate at multiple cooling capacities, leading to reduced cycling; and (3) variable-speed systems, where the compressor speed is adjusted to match the cooling load of the household, thereby providing higher efficiency and comfort levels through better temperature and humidity control. While energy consumption could be reduced significantly by adopting variable-speed compressor systems, market penetration has been limited to less than 10% of total HVAC units, and the vast majority of systems installed in new construction remain single speed. A few reasons may explain this phenomenon, such as the complexity of the electronic circuitry required to vary compressor speed as well as the associated system cost. This paper outlines a system solution to boost the Seasonal Energy Efficiency Rating (SEER) of a traditional single-speed unit by using a low-power electronic converter that allows the compressor to operate at multiple low-capacity settings and is disabled at high compressor speeds.

  11. The relation between polysomnography and subjective sleep and its dependence on age - poor sleep may become good sleep.

    PubMed

    Åkerstedt, Torbjörn; Schwarz, Johanna; Gruber, Georg; Lindberg, Eva; Theorell-Haglöw, Jenny

    2016-10-01

    Women complain more about sleep than men, but polysomnography (PSG) seems to suggest worse sleep in men. This raises the question of how women (or men) perceive objective (PSG) sleep. The present study sought to investigate the relation between morning subjective sleep quality and PSG variables in older and younger women. A representative sample of 251 women was analysed in age groups above and below 51.5 years (median). PSG was recorded at home during one night. Perceived poor sleep was related to short total sleep time (TST), long wake within total sleep time (WTSP), low sleep efficiency and a high number of awakenings. The older women showed lower TST and sleep efficiency and higher WTSP for a rating of good sleep than did the younger women. For these PSG variables the values for good sleep in the older group were similar to the values for poor sleep in the young group. It was concluded that women perceive different levels of sleep duration, sleep efficiency and wake after sleep onset relatively well, but that older women adjust their objective criteria for good sleep downwards. It was also concluded that age is an important factor in the relation between subjective and objective sleep. © 2016 European Sleep Research Society.
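
The PSG variables related here to perceived sleep quality (TST, WTSP, sleep efficiency, number of awakenings) can all be derived from an epoch-scored hypnogram. A minimal sketch, assuming 30-second epochs and a simple wake-versus-sleep scoring ('W' = wake, anything else = sleep):

```python
def psg_summary(epochs, epoch_min=0.5):
    """Summary PSG variables from a hypnogram (list of epoch labels).
    TST  = total sleep time in minutes;
    WTSP = wake minutes between sleep onset and final awakening;
    SE   = sleep efficiency, TST / time in bed;
    awakenings = number of wake bouts inside the sleep period."""
    sleep_idx = [i for i, e in enumerate(epochs) if e != 'W']
    if not sleep_idx:
        return {'TST': 0.0, 'WTSP': 0.0, 'SE': 0.0, 'awakenings': 0}
    first, last = sleep_idx[0], sleep_idx[-1]
    tst = len(sleep_idx) * epoch_min
    period = epochs[first:last + 1]          # sleep onset to final waking
    wtsp = sum(1 for e in period if e == 'W') * epoch_min
    awakenings = sum(1 for i in range(1, len(period))
                     if period[i] == 'W' and period[i - 1] != 'W')
    return {'TST': tst,
            'WTSP': wtsp,
            'SE': tst / (len(epochs) * epoch_min),
            'awakenings': awakenings}
```

The study's finding can then be read as: older women rate a night "good" at lower TST/SE and higher WTSP values than younger women do.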

  12. Multi-Objective Aerodynamic Optimization of the Streamlined Shape of High-Speed Trains Based on the Kriging Model.

    PubMed

    Xu, Gang; Liang, Xifeng; Yao, Shuanbao; Chen, Dawei; Li, Zhiwei

    2017-01-01

    Minimizing the aerodynamic drag and the lift of the train coach remains a key issue for high-speed trains. With the development of computing technology and computational fluid dynamics (CFD) in the engineering field, CFD has been successfully applied to the design process of high-speed trains. However, developing a new streamlined shape for high-speed trains with excellent aerodynamic performance requires huge computational costs. Furthermore, relationships between multiple design variables and the aerodynamic loads are seldom obtained. In the present study, the Kriging surrogate model is used to perform a multi-objective optimization of the streamlined shape of high-speed trains, where the drag and the lift of the train coach are the optimization objectives. To improve the prediction accuracy of the Kriging model, the cross-validation method is used to construct the optimal Kriging model. The optimization results show that the two objectives are efficiently optimized, indicating that the optimization strategy used in the present study can greatly improve the optimization efficiency and meet the engineering requirements.
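    The workflow above (fit a Kriging surrogate to a handful of expensive samples, then use cross-validation to select the surrogate) can be sketched in plain numpy. The Gaussian kernel, the toy 1-D response standing in for a CFD drag evaluation, and the candidate length scales are illustrative assumptions, not the paper's settings:

    ```python
    import numpy as np

    def kriging_predict(X, y, Xq, length=1.0, noise=1e-6):
        # Simple Kriging / GP regression with a Gaussian (squared-exponential) kernel
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        K = np.exp(-d2 / (2 * length ** 2)) + noise * np.eye(len(X))
        dq2 = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-dq2 / (2 * length ** 2)) @ np.linalg.solve(K, y)

    def loo_error(X, y, length):
        # Leave-one-out cross-validation error, used to pick the kernel scale
        idx = np.arange(len(X))
        errs = [(kriging_predict(X[idx != i], y[idx != i], X[i:i + 1], length)[0]
                 - y[i]) ** 2 for i in idx]
        return float(np.mean(errs))

    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 5.0, (20, 1))                     # one toy design variable
    y = np.sin(X[:, 0]) + 0.01 * rng.standard_normal(20)   # stand-in "drag" response
    best = min([0.1, 0.5, 1.0, 2.0], key=lambda s: loo_error(X, y, s))
    ```

    The cross-validated surrogate can then be queried thousands of times by a multi-objective optimizer at negligible cost compared with rerunning CFD.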

  13. Perspectives to breed for improved baking quality wheat varieties adapted to organic growing conditions.

    PubMed

    Osman, Aart M; Struik, Paul C; van Bueren, Edith T Lammerts

    2012-01-30

    Northwestern European consumers like their bread to be voluminous and easy to chew. These attributes require a raw material that is rich in protein with, among other characteristics, a suitable ratio between gliadins and glutenins. Achieving this is a challenge for organic growers, because they lack cultivars that can realise high protein concentrations under the relatively low and variable availability of nitrogen during the grain-filling phase common in organic farming. Relatively low protein content in wheat grains thus needs to be compensated by a high proportion of high-quality protein. Organic farming therefore needs cultivars with genes encoding for optimal levels of glutenins and gliadins, a maximum ability for nitrogen uptake, a large storage capacity of nitrogen in the biomass, an adequate balance between vegetative and reproductive growth, a high nitrogen translocation efficiency for the vegetative parts into the grains during grain filling and an efficient conversion of nitrogen into high-quality proteins. In this perspective paper the options to breed and grow such varieties are discussed. Copyright © 2011 Society of Chemical Industry.

  14. Evaluation of three high abundance protein depletion kits for umbilical cord serum proteomics

    PubMed Central

    2011-01-01

    Background High abundance protein depletion is a major challenge in the study of serum/plasma proteomics. Prior to this study, most commercially available kits for depletion of highly abundant proteins had only been tested and evaluated in adult serum/plasma, while the depletion efficiency on umbilical cord serum/plasma had not been clarified. Structural differences between some adult and fetal proteins (such as albumin) make it likely that depletion approaches for adult and umbilical cord serum/plasma will be variable. Therefore, the primary purposes of the present study are to investigate the efficiencies of several commonly-used commercial kits during high abundance protein depletion from umbilical cord serum and to determine which kit yields the most effective and reproducible results for further proteomics research on umbilical cord serum. Results The immunoaffinity based kits (PROTIA-Sigma and 5185-Agilent) displayed higher depletion efficiency than the immobilized dye based kit (PROTBA-Sigma) in umbilical cord serum samples. Both the PROTIA-Sigma and 5185-Agilent kit maintained high depletion efficiency when used three consecutive times. Depletion by the PROTIA-Sigma Kit improved 2DE gel quality by reducing smeared bands produced by the presence of high abundance proteins and increasing the intensity of other protein spots. During image analysis using the identical detection parameters, 411 ± 18 spots were detected in crude serum gels, while 757 ± 43 spots were detected in depleted serum gels. Eight spots unique to depleted serum gels were identified by MALDI-TOF/TOF MS, seven of which were low abundance proteins. Conclusions The immunoaffinity based kits exceeded the immobilized dye based kit in high abundance protein depletion of umbilical cord serum samples and dramatically improved 2DE gel quality for detection of trace biomarkers. PMID:21554704

  15. Genetic fidelity and variability of micropropagated cassava plants (Manihot esculenta Crantz) evaluated using ISSR markers.

    PubMed

    Vidal, Á M; Vieira, L J; Ferreira, C F; Souza, F V D; Souza, A S; Ledo, C A S

    2015-07-14

    Molecular markers are efficient for assessing the genetic fidelity of various species of plants after in vitro culture. In this study, we evaluated the genetic fidelity and variability of micropropagated cassava plants (Manihot esculenta Crantz) using inter-simple sequence repeat markers. Twenty-two cassava accessions from the Embrapa Cassava & Fruits Germplasm Bank were used. For each accession, DNA was extracted from a plant maintained in the field and from 3 plants grown in vitro. For DNA amplification, 27 inter-simple sequence repeat primers were used, of which 24 generated 175 bands; 100 of those bands were polymorphic and were used to study genetic variability among accessions of cassava plants maintained in the field. Based on the genetic distance matrix calculated using the arithmetic complement of the Jaccard's index, genotypes were clustered using the unweighted pair group method using arithmetic averages. The number of bands per primer was 2-13, with an average of 7.3. For most micropropagated accessions, the fidelity study showed no genetic variation between plants of the same accessions maintained in the field and those maintained in vitro, confirming the high genetic fidelity of the micropropagated plants. However, genetic variability was observed among different accessions grown in the field, and clustering based on the dissimilarity matrix revealed 7 groups. Inter-simple sequence repeat markers were efficient for detecting the genetic homogeneity of cassava plants derived from meristem culture, demonstrating the reliability of this propagation system.
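    The clustering pipeline described above (binary band matrix, arithmetic complement of the Jaccard index, UPGMA) can be sketched with scipy; the band matrix below is a made-up toy, not the study's data:

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.cluster.hierarchy import fcluster, linkage

    # Hypothetical ISSR band matrix: rows = accessions, columns = bands (1 = present)
    bands = np.array([
        [1, 1, 0, 1, 0, 1],
        [1, 1, 0, 1, 0, 1],   # identical banding: genetic fidelity to accession 0
        [0, 1, 1, 0, 1, 0],
        [0, 1, 1, 0, 1, 1],
    ], dtype=bool)

    # Arithmetic complement of the Jaccard index = the Jaccard distance
    dist = pdist(bands, metric="jaccard")

    # UPGMA = average-linkage hierarchical clustering on the distance matrix
    tree = linkage(dist, method="average")
    groups = fcluster(tree, t=0.5, criterion="distance")
    ```

    Accessions with identical band profiles end up at zero distance and in the same cluster, which is exactly the fidelity signal the study looked for between field-grown and in vitro plants.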

  16. Transferability of species distribution models: a functional habitat approach for two regionally threatened butterflies.

    PubMed

    Vanreusel, Wouter; Maes, Dirk; Van Dyck, Hans

    2007-02-01

    Numerous models for predicting species distribution have been developed for conservation purposes. Most of them make use of environmental data (e.g., climate, topography, land use) at a coarse grid resolution (often kilometres). Such approaches are useful for conservation policy issues including reserve-network selection. The efficiency of predictive models for species distribution is usually tested on the area for which they were developed. Although highly interesting from the point of view of conservation efficiency, transferability of such models to independent areas is still under debate. We tested the transferability of habitat-based predictive distribution models for two regionally threatened butterflies, the green hairstreak (Callophrys rubi) and the grayling (Hipparchia semele), within and among three nature reserves in northeastern Belgium. We built predictive models based on spatially detailed maps of area-wide distribution and density of ecological resources. We used resources directly related to ecological functions (host plants, nectar sources, shelter, microclimate) rather than environmental surrogate variables. We obtained models that performed well with few resource variables. All models were transferable--although to different degrees--among the independent areas within the same broad geographical region. We argue that habitat models based on essential functional resources could transfer better in space than models that use indirect environmental variables. Because functional variables can easily be interpreted and even be directly affected by terrain managers, these models can be useful tools to guide species-adapted reserve management.

  17. Energy Efficiency and Performance Limiting Effects in Thermo-Osmotic Energy Conversion from Low-Grade Heat.

    PubMed

    Straub, Anthony P; Elimelech, Menachem

    2017-11-07

    Low-grade heat energy from sources below 100 °C is available in massive quantities around the world, but cannot be converted to electricity effectively using existing technologies due to variability in the heat output and the small temperature difference between the source and environment. The recently developed thermo-osmotic energy conversion (TOEC) process has the potential to harvest energy from low-grade heat sources by using a temperature difference to create a pressurized liquid flux across a membrane, which can be converted to mechanical work via a turbine. In this study, we perform the first analysis of energy efficiency and the expected performance of the TOEC technology, focusing on systems utilizing hydrophobic porous vapor-gap membranes and water as a working fluid. We begin by developing a framework to analyze realistic mass and heat transport in the process, probing the impact of various membrane parameters and system operating conditions. Our analysis reveals that an optimized system can achieve heat-to-electricity energy conversion efficiencies up to 4.1% (34% of the Carnot efficiency) with hot and cold working temperatures of 60 and 20 °C, respectively, and an operating pressure of 5 MPa (50 bar). Lower energy efficiencies, however, will occur in systems operating with high power densities (>5 W/m²) and with finite-sized heat exchangers. We identify that the most important membrane properties for achieving high performance are an asymmetric pore structure, high pressure resistance, a high porosity, and a thickness of 30 to 100 μm. We also quantify the benefits in performance from utilizing deaerated water streams, strong hydrodynamic mixing in the membrane module, and high heat exchanger efficiencies. Overall, our study demonstrates the promise of full-scale TOEC systems to extract energy from low-grade heat and identifies key factors for performance optimization moving forward.
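    The reported figures can be checked against the standard Carnot bound for the stated working temperatures:

    ```python
    # Carnot check for the abstract's numbers: 60 °C hot, 20 °C cold, 4.1% efficiency
    t_hot = 60.0 + 273.15     # hot reservoir, K
    t_cold = 20.0 + 273.15    # cold reservoir, K
    carnot = 1.0 - t_cold / t_hot        # ideal Carnot efficiency, about 12%
    reported = 0.041                     # heat-to-electricity efficiency from the abstract
    fraction_of_carnot = reported / carnot   # about 0.34, i.e. 34% of Carnot
    ```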

  18. Development and optimization of enteric coated mucoadhesive microspheres of duloxetine hydrochloride using 3(2) full factorial design.

    PubMed

    Setia, Anupama; Kansal, Sahil; Goyal, Naveen

    2013-07-01

    Microspheres constitute an important part of oral drug delivery systems by virtue of their small size and efficient carrier capacity. However, the success of these microspheres is limited due to their short residence time at the site of absorption. The objective of the present study was to formulate and systematically evaluate the in vitro performance of enteric coated mucoadhesive microspheres of duloxetine hydrochloride (DLX), an acid labile drug. DLX microspheres were prepared by a simple emulsification phase separation technique using chitosan as carrier and glutaraldehyde as a cross-linking agent. The microspheres prepared were coated with Eudragit L-100 using an oil-in-oil solvent evaporation method. Eudragit L-100 was used as the enteric coating polymer with the aim of releasing the drug in the small intestine. The microspheres prepared were characterized by particle size, entrapment efficiency, swelling index (SI), mucoadhesion time, in vitro drug release and surface morphology. A 3(2) full factorial design was employed to study the effect of the independent variables polymer-to-drug ratio (X1) and stirring speed (X2) on the dependent variables particle size, entrapment efficiency, SI, in vitro mucoadhesion and drug release up to 24 h (t24). Microspheres formed were discrete, spherical and free flowing. The microspheres exhibited good mucoadhesive properties and also showed high percentage entrapment efficiency. The microspheres were able to sustain the drug release up to 24 h. Thus, the prepared enteric coated mucoadhesive microspheres may prove to be a potential controlled release formulation of DLX for oral administration.
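    A 3(2) full factorial design simply enumerates every combination of two factors at three levels, giving nine runs. A minimal sketch (the coded levels -1/0/+1 are the usual convention; the factor names mirror the abstract but the levels are not the study's actual values):

    ```python
    from itertools import product

    # 3^2 full factorial: two independent variables, each at three coded levels
    levels = {
        "polymer_to_drug_ratio": [-1, 0, 1],   # X1: low / centre / high
        "stirring_speed": [-1, 0, 1],          # X2: low / centre / high
    }

    # One run per combination of levels: 3 * 3 = 9 formulation batches
    runs = [dict(zip(levels, combo)) for combo in product(*levels.values())]
    ```

    Each of the nine runs would then be formulated and its responses (particle size, entrapment efficiency, and so on) measured and fitted against X1 and X2.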

  19. The match-to-match variation of match-running in elite female soccer.

    PubMed

    Trewin, Joshua; Meylan, César; Varley, Matthew C; Cronin, John

    2018-02-01

    The purpose of this study was to examine the match-to-match variation of match-running in elite female soccer players utilising GPS, using full-match and rolling period analyses. Longitudinal study. Elite female soccer players (n=45) from the same national team were observed during 55 international fixtures across 5 years (2012-2016). Data was analysed using a custom built MS Excel spreadsheet as full-matches and using a rolling 5-min analysis period, for all players who played 90-min matches (files=172). Variation was examined using the coefficient of variation and 90% confidence limits, calculated following log transformation. Total distance per minute exhibited the smallest variation when both the full-match and peak 5-min running periods were examined (CV=6.8-7.2%). Sprint-efforts were the most variable during a full-match (CV=53%), whilst high-speed running per minute exhibited the greatest variation in the post-peak 5-min period (CV=143%). Peak running periods were observed as slightly more variable than full-match analyses, with the post-peak period very-highly variable. Variability of accelerations (CV=17%) and Player Load (CV=14%) was lower than that of high-speed actions. Positional differences were also present, with centre backs exhibiting the greatest variation in high-speed movements (CV=41-65%). Practitioners and researchers should account for within-player variability when examining match performances. Identification of peak running periods should be used to inform worst-case scenarios. Micro-sensor technology should be further examined as to its viable use within match analyses. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
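    The reliability statistic used above, a CV calculated following log transformation, can be sketched as below. The lognormal back-transformation shown is one common convention in this literature, and the sprint counts are made up for illustration:

    ```python
    import numpy as np

    def cv_percent_log(values):
        # CV (%) via log transformation: take the SD of the logged values,
        # then back-transform assuming a lognormal distribution.
        s = np.std(np.log(values), ddof=1)
        return 100.0 * np.sqrt(np.exp(s ** 2) - 1.0)

    # Hypothetical sprint counts for one player across repeated matches
    sprints = np.array([14.0, 22.0, 9.0, 17.0, 25.0, 11.0])
    cv_sprints = cv_percent_log(sprints)
    ```

    For small spreads the result is close to 100 times the SD of the logs, which is why log-transformed CVs behave well for strictly positive, right-skewed measures such as sprint counts.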

  20. Compact continuous-variable entanglement distillation.

    PubMed

    Datta, Animesh; Zhang, Lijian; Nunn, Joshua; Langford, Nathan K; Feito, Alvaro; Plenio, Martin B; Walmsley, Ian A

    2012-02-10

    We introduce a new scheme for continuous-variable entanglement distillation that requires only linear temporal and constant physical or spatial resources. Distillation is the process by which high-quality entanglement may be distributed between distant nodes of a network in the unavoidable presence of decoherence. The known versions of this protocol scale exponentially in space and doubly exponentially in time. Our optimal scheme therefore provides exponential improvements over existing protocols. It uses a fixed-resource module-an entanglement distillery-comprising only four quantum memories of at most 50% storage efficiency and allowing a feasible experimental implementation. Tangible quantum advantages are obtainable by using existing off-resonant Raman quantum memories outside their conventional role of storage.

  1. Information content of IRIS spectra. [from Nimbus 4 satellite

    NASA Technical Reports Server (NTRS)

    Price, J. C.

    1974-01-01

    Spectra from the satellite instrument IRIS (infrared interferometer spectrometer) were examined to find the number of independent variables needed to describe these broadband high spectral resolution data. The radiated power in the atmospheric window from 771 to 981/cm was the first parameter chosen for fitting observed spectra. At succeeding levels of analysis the residual variability (observed spectrum - best fit spectrum) in an ensemble of observations was partitioned into spectral eigenvectors. The eigenvector describing the largest fraction of this variability was examined for a strong spectral signature; the power in the corresponding spectral band was then used as the next fitting parameter. The measured power in nine spectral intervals, when inserted in the spectral fitting functions, was adequate to describe most spectra to within the noise level of IRIS. Considerations of relative signal strength and scales of atmospheric variability suggest a combination sounder (multichannel, broad field of view) scanner (window channel, small field of view) as an efficient observing instrument.
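    One level of the residual-partitioning idea above can be sketched in numpy: fit each spectrum with a single parameter, then eigen-decompose the covariance of the residuals to find the direction carrying the most remaining variability. The channel count and the choice of the mean level as the first fitting parameter are toy stand-ins, not IRIS values:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_obs, n_chan = 200, 40                        # toy ensemble of "spectra"
    spectra = rng.standard_normal((n_obs, n_chan))

    # Level 1: fit each spectrum with one parameter (here simply its mean level)
    fit = spectra.mean(axis=1, keepdims=True) * np.ones((1, n_chan))
    residual = spectra - fit                       # observed minus best-fit spectrum

    # Partition residual variability into spectral eigenvectors
    cov = np.cov(residual, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)             # eigenvalues in ascending order
    leading = evecs[:, -1]                         # largest share of residual variance
    explained = float(evals[-1] / evals.sum())
    ```

    In the paper's procedure, the band where `leading` has a strong signature would supply the next fitting parameter, and the loop repeats until residuals reach the noise floor.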

  2. Early warning of changing drinking water quality by trend analysis.

    PubMed

    Tomperi, Jani; Juuso, Esko; Leiviskä, Kauko

    2016-06-01

    Monitoring and control of water treatment plants play an essential role in ensuring high quality drinking water and avoiding health-related problems or economic losses. The most common quality variables, which can be used also for assessing the efficiency of the water treatment process, are turbidity and residual levels of coagulation and disinfection chemicals. In the present study, the trend indices are developed from scaled measurements to detect warning signs of changes in the quality variables of drinking water and some operating condition variables that strongly affect water quality. The scaling is based on monotonically increasing nonlinear functions, which are generated with generalized norms and moments. Triangular episodes are classified with the trend index and its derivative. Deviation indices are used to assess the severity of situations. The study shows the potential of the described trend analysis as a predictive monitoring tool, as it provides an advantage over the traditional manual inspection of variables by detecting changes in water quality and giving early warnings.
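    A much-simplified stand-in for such a trend index is the slope of a local linear fit over a sliding window; the paper's scaling with generalized norms and moments and its triangular-episode classification are not reproduced here, and the turbidity series is invented:

    ```python
    import numpy as np

    def trend_index(series, window=10):
        # Slope of a least-squares line over a sliding window: a crude
        # early-warning indicator of drifting water quality.
        t = np.arange(window)
        out = np.full(len(series), np.nan)
        for i in range(window, len(series) + 1):
            out[i - 1] = np.polyfit(t, series[i - window:i], 1)[0]
        return out

    # Toy turbidity record: stable, then drifting upward (a warning sign)
    turbidity = np.concatenate([np.full(30, 1.0), 1.0 + 0.05 * np.arange(20)])
    trend = trend_index(turbidity)
    ```

    A sustained positive trend value, well before turbidity crosses any fixed alarm limit, is the kind of early warning the study aims at.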

  3. Information content in Iris spectra. [Infrared Interferometer Spectrometer of Nimbus 4 satellite

    NASA Technical Reports Server (NTRS)

    Price, J. C.

    1975-01-01

    Spectra from the satellite instrument Iris (infrared interferometer spectrometer) were examined to find the number of independent variables needed to describe the broad-band high-resolution spectral data. The radiated power in the atmospheric window from 771 to 981 per cm was the first parameter chosen for fitting observed spectra. At succeeding levels of analysis, the residual variability (observed spectrum minus best-fit spectrum) in an ensemble of observations was partitioned into spectral eigenvectors. The eigenvector describing the largest fraction of this variability was examined for a strong spectral signature; the power in the corresponding spectral band was then used as the next fitting parameter. The measured power in nine spectral intervals, when it was inserted in the spectral-fitting functions, was adequate to describe most spectra to within the noise level of Iris. Considerations of relative signal strength and scales of atmospheric variability suggest a combination sounder (multichannel, broad field of view) scanner (window channel, small field of view) as an efficient observing instrument.

  4. Multivariate localization methods for ensemble Kalman filtering

    NASA Astrophysics Data System (ADS)

    Roh, S.; Jun, M.; Szunyogh, I.; Genton, M. G.

    2015-12-01

    In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (element-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables that exist at the same locations has been seldom considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested in experiments that assimilate simulated observations into the bivariate Lorenz 95 model.
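    The single-variable Schur-product localization described above can be sketched in numpy with the widely used Gaspari-Cohn correlation function; the ensemble, grid size, and localization radius below are toy choices, not the paper's experimental setup:

    ```python
    import numpy as np

    def gaspari_cohn(r):
        # Gaspari-Cohn compactly supported correlation function (zero beyond r = 2)
        r = np.abs(np.asarray(r, dtype=float))
        f = np.zeros_like(r)
        a = r <= 1.0
        b = (r > 1.0) & (r < 2.0)
        f[a] = 1 - 5/3*r[a]**2 + 5/8*r[a]**3 + 1/2*r[a]**4 - 1/4*r[a]**5
        f[b] = (4 - 5*r[b] + 5/3*r[b]**2 + 5/8*r[b]**3
                - 1/2*r[b]**4 + 1/12*r[b]**5 - 2/(3*r[b]))
        return f

    rng = np.random.default_rng(2)
    n, n_ens = 20, 10
    ens = rng.standard_normal((n_ens, n))          # toy ensemble on a 1-D grid
    B_sample = np.cov(ens, rowvar=False)           # noisy sample covariance

    grid = np.arange(n, dtype=float)
    dist = np.abs(grid[:, None] - grid[None, :]) / 4.0   # support = 8 grid points
    B_loc = B_sample * gaspari_cohn(dist)          # Schur (element-wise) product
    ```

    Entries between distant grid points are forced to zero, suppressing the spurious long-range correlations that a 10-member ensemble cannot estimate reliably. The multivariate question the paper addresses is how to choose such weights when several state variables share the same locations.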

  5. Improved first-pass spiral myocardial perfusion imaging with variable density trajectories.

    PubMed

    Salerno, Michael; Sica, Christopher; Kramer, Christopher M; Meyer, Craig H

    2013-11-01

    To develop and evaluate variable-density spiral first-pass perfusion pulse sequences for improved efficiency and off-resonance performance and to demonstrate the utility of an apodizing density compensation function (DCF) to improve signal-to-noise ratio (SNR) and reduce dark-rim artifact caused by cardiac motion and Gibbs Ringing. Three variable density spiral trajectories were designed, simulated, and evaluated in 18 normal subjects, and in eight patients with cardiac pathology on a 1.5T scanner. By using a DCF, which intentionally apodizes the k-space data, the sidelobe amplitude of the theoretical point spread function (PSF) is reduced by 68%, with only a 13% increase in the full-width at half-maximum of the main-lobe when compared with the same data corrected with a conventional variable-density DCF, and has an 8% higher resolution than a uniform density spiral with the same number of interleaves and readout duration. Furthermore, this strategy results in a greater than 60% increase in measured SNR when compared with the same variable-density spiral data corrected with a conventional DCF (P < 0.01). Perfusion defects could be clearly visualized with minimal off-resonance and dark-rim artifacts. Variable-density spiral pulse sequences using an apodized DCF produce high-quality first-pass perfusion images with minimal dark-rim and off-resonance artifacts, high SNR and contrast-to-noise ratio, and good delineation of resting perfusion abnormalities. Copyright © 2012 Wiley Periodicals, Inc.
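    The core claim about the apodizing DCF, that trading a slightly wider main lobe for much lower point-spread-function sidelobes suppresses ringing, can be illustrated on a 1-D k-space line. The Hamming window below is a generic apodizer standing in for the paper's actual DCF design:

    ```python
    import numpy as np

    def peak_sidelobe_db(weights, pad=4096):
        # PSF = Fourier transform of the k-space weighting; report its peak sidelobe
        psf = np.abs(np.fft.fftshift(np.fft.fft(weights, pad)))
        psf /= psf.max()
        i = int(np.argmax(psf))
        while psf[i + 1] < psf[i]:       # walk down the main lobe to the first null
            i += 1
        return 20 * np.log10(psf[i:i + 400].max())

    n = 256
    uniform = np.ones(n)                 # flat density compensation
    apodized = np.hamming(n)             # apodizing DCF (Hamming as a stand-in)
    ```

    The uniform weighting gives the familiar -13 dB sinc sidelobes responsible for Gibbs ringing and dark-rim artifact, while the apodized weighting pushes sidelobes below -40 dB at the cost of a modestly broader main lobe.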

  6. Variable range hopping electric and thermoelectric transport in anisotropic black phosphorus

    DOE PAGES

    Liu, Huili; Sung Choe, Hwan; Chen, Yabin; ...

    2017-09-05

    Black phosphorus (BP) is a layered semiconductor with a high mobility of up to ~1000 cm² V⁻¹ s⁻¹ and a narrow bandgap of ~0.3 eV, and shows potential applications in thermoelectrics. In stark contrast to most other layered materials, electrical and thermoelectric properties in the basal plane of BP are highly anisotropic. In order to elucidate the mechanism for such anisotropy, we fabricated BP nanoribbons (~100 nm thick) along the armchair and zigzag directions, and measured the transport properties. It is found that both the electrical conductivity and Seebeck coefficient increase with temperature, a behavior contradictory to that of traditional semiconductors. The three-dimensional variable range hopping model is adopted to analyze this abnormal temperature dependency of electrical conductivity and Seebeck coefficient. Furthermore, the hopping transport of the BP nanoribbons, attributed to high density of trap states in the samples, provides a fundamental understanding of the anisotropic BP for potential thermoelectric applications.
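    In the Mott three-dimensional variable-range-hopping model invoked above, conductivity follows sigma(T) = sigma0 * exp[-(T0/T)^(1/4)], so ln(sigma) is linear in T^(-1/4). A minimal sketch of testing data against this model (the sigma0 and T0 values are arbitrary, not fitted values from the paper):

    ```python
    import numpy as np

    def sigma_vrh(T, sigma0, T0):
        # Mott 3D variable-range hopping: ln(sigma) is linear in T^(-1/4)
        return sigma0 * np.exp(-(T0 / T) ** 0.25)

    T = np.linspace(100.0, 300.0, 30)                     # temperatures in K
    y = np.log(sigma_vrh(T, sigma0=5.0e4, T0=1.0e6))      # synthetic, noise-free data
    slope, intercept = np.polyfit(T ** -0.25, y, 1)
    T0_fit = slope ** 4                                   # recover the Mott temperature
    ```

    A straight line in these coordinates, together with conductivity rising with temperature, is the hopping signature the authors report, as opposed to the band-transport behavior of a conventional semiconductor.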

  7. Variable range hopping electric and thermoelectric transport in anisotropic black phosphorus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Huili; Sung Choe, Hwan; Chen, Yabin

    Black phosphorus (BP) is a layered semiconductor with a high mobility of up to ~1000 cm² V⁻¹ s⁻¹ and a narrow bandgap of ~0.3 eV, and shows potential applications in thermoelectrics. In stark contrast to most other layered materials, electrical and thermoelectric properties in the basal plane of BP are highly anisotropic. In order to elucidate the mechanism for such anisotropy, we fabricated BP nanoribbons (~100 nm thick) along the armchair and zigzag directions, and measured the transport properties. It is found that both the electrical conductivity and Seebeck coefficient increase with temperature, a behavior contradictory to that of traditional semiconductors. The three-dimensional variable range hopping model is adopted to analyze this abnormal temperature dependency of electrical conductivity and Seebeck coefficient. Furthermore, the hopping transport of the BP nanoribbons, attributed to high density of trap states in the samples, provides a fundamental understanding of the anisotropic BP for potential thermoelectric applications.

  8. Proposal and Development of a High Voltage Variable Frequency Alternating Current Power System for Hybrid Electric Aircraft

    NASA Technical Reports Server (NTRS)

    Sadey, David J.; Taylor, Linda M.; Beach, Raymond F.

    2017-01-01

    The development of ultra-efficient commercial vehicles and the transition to low-carbon emission propulsion are seen as strategic thrust paths within NASA Aeronautics. A critical enabler to these paths comes in the form of hybrid electric propulsion systems. For megawatt-class systems, the best power system topology for these hybrid electric propulsion systems is debatable. Current proposals within NASA and the Aero community suggest using a combination of alternating current (AC) and direct current (DC) for power generation, transmission, and distribution. This paper proposes an alternative to the current thought model through the use of a primarily high voltage AC power system, supported by the Convergent Aeronautics Solutions (CAS) Project. This system relies heavily on the use of doubly-fed induction machines (DFIMs), which provide high power densities, minimal power conversion, and variable speed operation. The paper presents background on the activity along with the system architecture, development status, and preliminary results.

  9. Variable high gradient permanent magnet quadrupole (QUAPEVA)

    NASA Astrophysics Data System (ADS)

    Marteau, F.; Ghaith, A.; N'Gotta, P.; Benabderrahmane, C.; Valléau, M.; Kitegi, C.; Loulergue, A.; Vétéran, J.; Sebdaoui, M.; André, T.; Le Bec, G.; Chavanne, J.; Vallerand, C.; Oumbarek, D.; Cosson, O.; Forest, F.; Jivkov, P.; Lancelot, J. L.; Couprie, M. E.

    2017-12-01

    Different applications such as laser plasma acceleration, colliders, and diffraction limited light sources require high gradient quadrupoles, with strength that can reach up to 200 T/m for a typical 10 mm bore diameter. We present here a permanent magnet based quadrupole (so-called QUAPEVA) composed of a Halbach ring surrounded by four permanent magnet cylinders. Its design, including the magnetic simulation modeling that enabled a gradient of 201 T/m with a variability of 45%, is reported together with the associated mechanical issues. Magnetic measurements of seven systems of different lengths are presented and confirmed the theoretical expectations. The variation of the magnetic center while changing the gradient strength is ±10 μm. A triplet of QUAPEVA magnets is used to efficiently focus a beam with large energy spread and high divergence that is generated by a Laser Plasma Acceleration source for a free electron laser demonstration, and has enabled us to perform beam based alignment and control the dispersion of the beam.

  10. Spatial analysis of participation in the Waterloo Residential Energy Efficiency Project

    NASA Astrophysics Data System (ADS)

    Song, Ge Bella

    Researchers are in broad agreement that energy-conserving actions produce economic as well as energy savings. Household energy rating systems (HERS) have been established in many countries to inform households of their house's current energy performance and to help reduce their energy consumption and greenhouse gas emissions. In Canada, the national EnerGuide for Houses (EGH) program is delivered by many local delivery agents, including non-profit green community organizations. Waterloo Region Green Solutions is the local non-profit that offers the EGH residential energy evaluation service to local households. The purpose of this thesis is to explore the determinants of household's participation in the residential energy efficiency program (REEP) in Waterloo Region, to explain the relationship between the explanatory variables and REEP participation, and to propose ways to improve this kind of program. A spatial (trend) analysis was conducted within a geographic information system (GIS) to determine the spatial patterns of the REEP participation in Waterloo Region from 1999 to 2006. The impact of sources of information on participation and relationships between participation rates and explanatory variables were identified. GIS proved successful in presenting a visual interpretation of spatial patterns of the REEP participation. In general, the participating households tend to be clustered in urban areas and scattered in rural areas. Different sources of information played significant roles in reaching participants in different years. Moreover, there was a relationship between each explanatory variable and the REEP participation rates. Statistical analysis was applied to obtain a quantitative assessment of relationships between hypothesized explanatory variables and participation in the REEP. The Poisson regression model was used to determine the relationship between hypothesized explanatory variables and REEP participation at the CDA level. 
The results show that all of the independent variables have a statistically significant positive relationship with REEP participation. These variables include level of education, average household income, employment rate, home ownership, population aged 65 and over, age of home, and number of eligible dwellings. The logistic regression model was used to assess the ability of the hypothesized explanatory variables to predict whether or not households would participate in a second follow-up evaluation after completing upgrades to their home. The results show that all the explanatory variables have significant relationships with the dependent variable: the increased rating score, average household income, aged population, and age of home are positively related to the dependent variable, while dwelling size and education are negatively related. In general, this work firstly provides a practical understanding of how the energy efficiency program operates, and insight into the type of variables that may be successful in bringing about changes in performance in the energy efficiency project in Waterloo Region. Secondly, future residential energy efficiency programs can use the information from this research and emulate or expand upon the efforts and lessons learned from the Residential Energy Efficiency Project in Waterloo Region case study. Thirdly, this research also contributes practical experience on how to integrate different datasets using GIS.
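    The Poisson regression used above models an area's participation count as Poisson-distributed with a log-linear mean. A self-contained sketch of fitting such a model by Newton-Raphson, on invented data with a single covariate standing in for the study's explanatory variables:

    ```python
    import numpy as np

    def fit_poisson(x, y, iters=25):
        # Poisson regression (log link) fitted by Newton-Raphson:
        # mean mu_i = exp(b0 + b1 * x_i); iterate beta += H^-1 * score.
        X1 = np.column_stack([np.ones(len(y)), x])
        beta = np.zeros(X1.shape[1])
        for _ in range(iters):
            mu = np.exp(X1 @ beta)                  # model mean
            grad = X1.T @ (y - mu)                  # score vector
            hess = X1.T @ (X1 * mu[:, None])        # Fisher information
            beta = beta + np.linalg.solve(hess, grad)
        return beta

    # Hypothetical data: participation counts per area rising with one covariate
    rng = np.random.default_rng(3)
    income = rng.uniform(0.0, 2.0, 500)               # standardised covariate
    counts = rng.poisson(np.exp(0.3 + 0.8 * income))  # true intercept 0.3, slope 0.8
    beta = fit_poisson(income, counts)
    ```

    A positive fitted slope corresponds to the positive income-participation relationship the study reports; exp(slope) gives the multiplicative change in expected participation per unit of the covariate.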

  11. Development of a Novel Brayton-Cycle Cryocooler and Key Component Technologies

    NASA Astrophysics Data System (ADS)

    Nieczkoski, S. J.; Mohling, R. A.

    2004-06-01

    Brayton-cycle cryocoolers are being developed to provide efficient cooling in the 6 K to 70 K temperature range. The cryocoolers are being developed for use in space and in terrestrial applications where combinations of long lifetime, high efficiency, compactness, low mass, low vibration, flexible interfacing, load variability, and reliability are essential. The key enabling technologies for these systems are a mesoscale expander and an advanced oil-free scroll compressor. Both these components are nearing completion of their prototype development phase. The emphasis on the component and system development has been on invoking fabrication processes and techniques that can be evolved to further reduction in scale tending toward cryocooler miniaturization.

  12. A modified JPEG-LS lossless compression method for remote sensing images

    NASA Astrophysics Data System (ADS)

    Deng, Lihua; Huang, Zhenghua

    2015-12-01

Like many variable-length source coders, JPEG-LS is highly vulnerable to the channel errors that occur in the transmission of remote sensing images. Error diffusion is one of the important factors that affect its robustness. The common method of improving the error resilience of JPEG-LS is to divide the image into many strips or blocks and then code each of them independently, but this reduces the coding efficiency. In this paper, a block-based JPEG-LS lossless compression method with an adaptive parameter is proposed. In the modified scheme, the threshold parameter RESET is adapted to each image, and the compression efficiency is close to that of conventional JPEG-LS.

  13. Numerical approximations for fractional diffusion equations via a Chebyshev spectral-tau method

    NASA Astrophysics Data System (ADS)

    Doha, Eid H.; Bhrawy, Ali H.; Ezz-Eldien, Samer S.

    2013-10-01

In this paper, a class of fractional diffusion equations with variable coefficients is considered. An accurate and efficient spectral tau technique for solving the fractional diffusion equations numerically is proposed. This method is based upon Chebyshev tau approximation together with the Chebyshev operational matrix of Caputo fractional differentiation. Such an approach has the advantage of reducing the problem to the solution of a system of algebraic equations, which may then be solved by any standard numerical technique. We apply this general method to solve four specific examples. In each of the examples considered, the numerical results show that the proposed method is highly accurate and efficient for solving the time-dependent fractional diffusion equations.

  14. Spectral methods in time for a class of parabolic partial differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ierley, G.; Spencer, B.; Worthing, R.

    1992-09-01

In this paper, we introduce a fully spectral solution for the partial differential equation u_t + u u_x + ν u_xx + μ u_xxx + λ u_xxxx = 0. For periodic boundary conditions in space, the use of a Fourier expansion in x admits a particularly efficient algorithm with respect to expansion of the time dependence in a Chebyshev series. Boundary conditions other than periodic may still be treated with reasonable, though lesser, efficiency. For all cases, very high accuracy is attainable at moderate computational cost relative to the expense of variable-order finite difference methods in time. 14 refs., 9 figs.

  15. Flexible Control of Safety Margins for Action Based on Environmental Variability.

    PubMed

    Hadjiosif, Alkis M; Smith, Maurice A

    2015-06-17

    To reduce the risk of slip, grip force (GF) control includes a safety margin above the force level ordinarily sufficient for the expected load force (LF) dynamics. The current view is that this safety margin is based on the expected LF dynamics, amounting to a static safety factor like that often used in engineering design. More efficient control could be achieved, however, if the motor system reduces the safety margin when LF variability is low and increases it when this variability is high. Here we show that this is indeed the case by demonstrating that the human motor system sizes the GF safety margin in proportion to an internal estimate of LF variability to maintain a fixed statistical confidence against slip. In contrast to current models of GF control that neglect the variability of LF dynamics, we demonstrate that GF is threefold more sensitive to the SD than the expected value of LF dynamics, in line with the maintenance of a 3-sigma confidence level. We then show that a computational model of GF control that includes a variability-driven safety margin predicts highly asymmetric GF adaptation between increases versus decreases in load. We find clear experimental evidence for this asymmetry and show that it explains previously reported differences in how rapidly GFs and manipulatory forces adapt. This model further predicts bizarre nonmonotonic shapes for GF learning curves, which are faithfully borne out in our experimental data. Our findings establish a new role for environmental variability in the control of action. Copyright © 2015 the authors 0270-6474/15/359106-16$15.00/0.
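    The variability-driven safety margin can be sketched numerically: grip force tracks the expected load force plus a multiple of its estimated SD. The load-force statistics below are invented, and the rule is a simplification of the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)

def grip_force(lf_history, sigma_level=3.0):
    """Variability-driven safety margin: grip force is set to the expected
    load force plus sigma_level times its estimated SD, maintaining a fixed
    statistical confidence against slip (3-sigma, as in the abstract)."""
    mu = np.mean(lf_history)
    sd = np.std(lf_history, ddof=1)
    return mu + sigma_level * sd

low_var = rng.normal(5.0, 0.1, 200)   # stable load-force dynamics (N)
high_var = rng.normal(5.0, 1.0, 200)  # variable load-force dynamics (N)

gf_low = grip_force(low_var)
gf_high = grip_force(high_var)
print(gf_low, gf_high)   # higher LF variability -> larger safety margin
```

    The same expected load force yields a much larger grip force when the load is variable, which is the efficiency argument the abstract makes against a static safety factor.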

  16. Individual differences and time-varying features of modular brain architecture.

    PubMed

    Liao, Xuhong; Cao, Miao; Xia, Mingrui; He, Yong

    2017-05-15

Recent studies have suggested that human brain functional networks are topologically organized into functionally specialized but interconnected modules to facilitate efficient information processing and highly flexible cognitive function. However, these studies have mainly focused on group-level network modularity analyses using "static" functional connectivity approaches. How these modular brain structures vary across individuals and spontaneously reconfigure over time remains largely unknown. Here, we employed multiband resting-state functional MRI data (N=105) from the Human Connectome Project and a graph-based modularity analysis to systematically investigate individual variability and dynamic properties in modular brain networks. We showed that the modular structures of brain networks vary dramatically across individuals, with higher modular variability primarily in the association cortex (e.g., fronto-parietal and attention systems) and lower variability in the primary systems. Moreover, brain regions spontaneously changed their module affiliations on a temporal scale of seconds, which cannot simply be attributed to head motion and sampling error. Interestingly, the spatial pattern of intra-subject dynamic modular variability largely overlapped with that of inter-subject modular variability, both of which were highly reproducible across repeated scanning sessions. Finally, the regions with remarkable individual/temporal modular variability were closely associated with network connectors and the number of cognitive components, suggesting a potential contribution to information integration and flexible cognitive function. Collectively, our findings highlight individual modular variability and the notable dynamic characteristics of large-scale brain networks, which enhance our understanding of the neural substrates underlying individual differences in a variety of cognitive functions and behaviors. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Analysis of the energy efficiency of an integrated ethanol processor for PEM fuel cell systems

    NASA Astrophysics Data System (ADS)

    Francesconi, Javier A.; Mussati, Miguel C.; Mato, Roberto O.; Aguirre, Pio A.

The aim of this work is to investigate the energy integration and to determine the maximum efficiency of an ethanol processor for hydrogen production and fuel cell operation. Ethanol, which can be produced from renewable feedstocks or agricultural residues, is an attractive option as feed to a fuel processor. The fuel processor investigated is based on steam reforming, followed by high- and low-temperature shift reactors and preferential oxidation, coupled to a polymeric fuel cell. Applying simulation techniques and using thermodynamic models, the performance of the complete system has been evaluated for a variety of operating conditions and possible reforming reaction pathways. These models involve mass and energy balances, chemical equilibrium, and feasible heat transfer conditions (ΔTmin). The main operating variables were determined for those conditions. The endothermic nature of the reformer has a significant effect on the overall system efficiency. The highest energy consumption is demanded by the reforming reactor, the evaporator, and the re-heater. To obtain an efficient integration, the heat exchanged between the outgoing streams of higher thermal level (reforming and combustion gases) and the feed stream should be maximized. Another process variable that affects the process efficiency is the water-to-fuel ratio fed to the reformer: large amounts of water imply large heat exchangers and the associated heat losses. A net electric efficiency of around 35% was calculated based on the ethanol HHV. The remaining 65% is accounted for by dissipation as heat in the PEMFC cooling system (38%), energy in the flue gases (10%), and irreversibilities in the compression and expansion of gases. In addition, it has been possible to determine the self-sufficient limit conditions and to analyze the effect on net efficiency of the inlet temperatures of the clean-up reactors, combustion preheating, the expander unit, and the use of crude ethanol as fuel.

  18. CRISPRscan: designing highly efficient sgRNAs for CRISPR/Cas9 targeting in vivo

    PubMed Central

    Moreno-Mateos, Miguel A.; Vejnar, Charles E.; Beaudoin, Jean-Denis; Fernandez, Juan P.; Mis, Emily K.; Khokha, Mustafa K.; Giraldez, Antonio J.

    2015-01-01

    CRISPR/Cas9 technology provides a powerful system for genome engineering. However, variable activity across different single guide RNAs (sgRNAs) remains a significant limitation. We have analyzed the molecular features that influence sgRNA stability, activity and loading into Cas9 in vivo. We observe that guanine enrichment and adenine depletion increase sgRNA stability and activity, while loading, nucleosome positioning and Cas9 off-target binding are not major determinants. We additionally identified truncated and 5′ mismatch-containing sgRNAs as efficient alternatives to canonical sgRNAs. Based on these results, we created a predictive sgRNA-scoring algorithm (CRISPRscan.org) that effectively captures the sequence features affecting Cas9/sgRNA activity in vivo. Finally, we show that targeting Cas9 to the germ line using a Cas9-nanos-3′-UTR fusion can generate maternal-zygotic mutants, increase viability and reduce somatic mutations. Together, these results provide novel insights into the determinants that influence Cas9 activity and a framework to identify highly efficient sgRNAs for genome targeting in vivo. PMID:26322839

  19. Wavepacket dynamics and the multi-configurational time-dependent Hartree approach

    NASA Astrophysics Data System (ADS)

    Manthe, Uwe

    2017-06-01

    Multi-configurational time-dependent Hartree (MCTDH) based approaches are efficient, accurate, and versatile methods for high-dimensional quantum dynamics simulations. Applications range from detailed investigations of polyatomic reaction processes in the gas phase to high-dimensional simulations studying the dynamics of condensed phase systems described by typical solid state physics model Hamiltonians. The present article presents an overview of the different areas of application and provides a comprehensive review of the underlying theory. The concepts and guiding ideas underlying the MCTDH approach and its multi-mode and multi-layer extensions are discussed in detail. The general structure of the equations of motion is highlighted. The representation of the Hamiltonian and the correlated discrete variable representation (CDVR), which provides an efficient multi-dimensional quadrature in MCTDH calculations, are discussed. Methods which facilitate the calculation of eigenstates, the evaluation of correlation functions, and the efficient representation of thermal ensembles in MCTDH calculations are described. Different schemes for the treatment of indistinguishable particles in MCTDH calculations and recent developments towards a unified multi-layer MCTDH theory for systems including bosons and fermions are discussed.

  20. Efficiency in the European agricultural sector: environment and resources.

    PubMed

    Moutinho, Victor; Madaleno, Mara; Macedo, Pedro; Robaina, Margarita; Marques, Carlos

    2018-04-22

This article intends to compute agriculture technical efficiency scores of 27 European countries during the period 2005-2012, using both data envelopment analysis (DEA) and stochastic frontier analysis (SFA) with a generalized cross-entropy (GCE) approach, for comparison purposes. Afterwards, using the scores as the dependent variable, we apply quantile regressions with a set of possible influencing variables within the agricultural sector able to explain technical efficiency scores. Results allow us to conclude that although DEA and SFA are quite distinct methodologies, and although the attained technical efficiency scores differ, both identify the worst and best countries analogously. They also suggest that it is important to include resource productivity and subsidies in determining technical efficiency, owing to their positive and significant influence.
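    For readers unfamiliar with DEA, a minimal input-oriented CCR formulation can be written as one linear program per decision-making unit (DMU) and solved with scipy; the toy input/output data below are invented for illustration, not the study's agricultural data:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR DEA efficiency scores.
    X: inputs, shape (m_inputs, n_dmus); Y: outputs, shape (s_outputs, n_dmus).
    For each DMU o: minimize theta subject to X @ lam <= theta * x_o,
    Y @ lam >= y_o, lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                 # decision vars: [theta, lam]
        A_in = np.hstack([-X[:, [o]], X])           # X @ lam - theta * x_o <= 0
        A_out = np.hstack([np.zeros((s, 1)), -Y])   # -(Y @ lam) <= -y_o
        b_ub = np.r_[np.zeros(m), -Y[:, o]]
        res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=b_ub,
                      bounds=[(0, None)] * (n + 1))
        scores.append(res.fun)
    return np.array(scores)

# Toy data: 2 inputs (e.g. land, labour) and 1 output for 4 countries
X = np.array([[2.0, 4.0, 8.0, 6.0],
              [4.0, 2.0, 3.0, 6.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
scores = dea_ccr_input(X, Y)
print(np.round(scores, 3))   # the first two DMUs lie on the frontier
```

    The first two DMUs score 1 (efficient); the other two score below 1, the radial distance to the frontier spanned by the efficient units.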

  1. A single-loop optimization method for reliability analysis with second order uncertainty

    NASA Astrophysics Data System (ADS)

    Xie, Shaojun; Pan, Baisong; Du, Xiaoping

    2015-08-01

    Reliability analysis may involve random variables and interval variables. In addition, some of the random variables may have interval distribution parameters owing to limited information. This kind of uncertainty is called second order uncertainty. This article develops an efficient reliability method for problems involving the three aforementioned types of uncertain input variables. The analysis produces the maximum and minimum reliability and is computationally demanding because two loops are needed: a reliability analysis loop with respect to random variables and an interval analysis loop for extreme responses with respect to interval variables. The first order reliability method and nonlinear optimization are used for the two loops, respectively. For computational efficiency, the two loops are combined into a single loop by treating the Karush-Kuhn-Tucker (KKT) optimal conditions of the interval analysis as constraints. Three examples are presented to demonstrate the proposed method.

  2. Human hair-derived high surface area porous carbon material for the adsorption isotherm and kinetics of tetracycline antibiotics.

    PubMed

    Ahmed, M J; Islam, Md Azharul; Asif, M; Hameed, B H

    2017-11-01

In this work, a human hair-derived high surface area porous carbon material (HHC) was prepared using potassium hydroxide activation. The morphology and textural properties of the HHC structure, along with its adsorption performance for tetracycline (TC) antibiotics, were evaluated. HHC showed a high surface area of 1505.11 m²/g and 68.34% microporosity. The effects of the most important variables, such as initial concentration (25-355 mg/L), solution pH (3-13), and temperature (30-50 °C), on the HHC adsorption performance were investigated. Isotherm data analysis revealed the favorable application of the Langmuir model, with maximum TC uptakes of 128.52, 162.62, and 210.18 mg/g at 30, 40, and 50 °C, respectively. The experimental data of TC uptake versus time were analyzed efficiently using a pseudo-first-order model. Porous HHC could be an efficient adsorbent for eliminating antibiotic pollutants in wastewater. Copyright © 2017 Elsevier Ltd. All rights reserved.
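    The Langmuir fit reported above can be reproduced in outline with scipy; the equilibrium data below are synthetic, generated around the paper's 30 °C capacity (128.52 mg/g) with an assumed affinity constant K_L = 0.05 L/mg:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_e, q_max, k_l):
    """Langmuir isotherm: uptake q_e (mg/g) at equilibrium conc. c_e (mg/L)."""
    return q_max * k_l * c_e / (1.0 + k_l * c_e)

# Synthetic equilibrium data (illustrative; K_L is an assumption)
rng = np.random.default_rng(3)
c_e = np.array([5.0, 20.0, 50.0, 100.0, 180.0, 300.0])   # mg/L
q_e = langmuir(c_e, 128.52, 0.05) * (1 + rng.normal(0, 0.02, c_e.size))

popt, _ = curve_fit(langmuir, c_e, q_e, p0=[100.0, 0.01])
print(popt)   # fitted (q_max, K_L)
```

    With concentrations spanning well past the saturation knee, the fitted q_max recovers the assumed monolayer capacity closely.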

  3. Processing-Structure-Property Relationships for Lignin-Based Carbonaceous Materials Used in Energy-Storage Applications

    DOE PAGES

    García-Negrón, Valerie; Phillip, Nathan D.; Li, Jianlin; ...

    2016-11-18

Lignin, an abundant organic polymer and a byproduct of pulp and biofuel production, has potential applications owing to its high carbon content and aromatic structure. Processing-structure relationships are difficult to predict because of the heterogeneity of lignin. Here, this work discusses the roles of unit operations in the carbonization process of softwood lignin and their resulting impacts on the material structure and electrochemical properties in application as the anode in lithium-ion cells. The processing variables include the lignin source, temperature, and duration of thermal stabilization, pyrolysis, and reduction. Materials are characterized at the atomic and microscales. High-temperature carbonization, at 2000 °C, produces larger graphitic domains than at 1050 °C, but results in a reduced capacity. Coulombic efficiencies over 98% are achieved for extended galvanostatic cycling. Consequently, a properly designed carbonization process for lignin is well suited for the generation of low-cost, high-efficiency electrodes.

  4. Classification of ROTSE Variable Stars using Machine Learning

    NASA Astrophysics Data System (ADS)

    Wozniak, P. R.; Akerlof, C.; Amrose, S.; Brumby, S.; Casperson, D.; Gisler, G.; Kehoe, R.; Lee, B.; Marshall, S.; McGowan, K. E.; McKay, T.; Perkins, S.; Priedhorsky, W.; Rykoff, E.; Smith, D. A.; Theiler, J.; Vestrand, W. T.; Wren, J.; ROTSE Collaboration

    2001-12-01

We evaluate several Machine Learning algorithms as potential tools for automated classification of variable stars. Using the ROTSE sample of ~1800 variables from a pilot study of 5% of the whole sky, we compare the effectiveness of a supervised technique (Support Vector Machines, SVM) versus unsupervised methods (K-means and Autoclass). There are 8 types of variables in the sample: RR Lyr AB, RR Lyr C, Delta Scuti, Cepheids, detached eclipsing binaries, contact binaries, Miras and LPVs. Preliminary results suggest a very high (~95%) efficiency of SVM in isolating a few best-defined classes against the rest of the sample, and good accuracy (~70-75%) for all classes considered simultaneously. This includes some degeneracies, irreducible with the information at hand. Supervised methods naturally outperform unsupervised methods in terms of final error rate, but unsupervised methods offer many advantages for large sets of unlabeled data. Therefore, both types of methods should be considered promising tools for mining vast variability surveys. We project that there are more than 30,000 periodic variables in the ROTSE-I database covering the entire local sky between V=10 and 15.5 mag. This sample size is already stretching the time capabilities of human analysts.
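    A toy version of the supervised approach: an SVM separating two synthetic variable-star classes by period and amplitude. The feature values are invented and only loosely mimic RR Lyrae versus Mira stars; they are not ROTSE data:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n = 200

# Synthetic (period [days], amplitude [mag]) features for two classes
rr = np.column_stack([rng.normal(0.55, 0.1, n), rng.normal(0.8, 0.2, n)])
mira = np.column_stack([rng.normal(300.0, 80.0, n), rng.normal(4.0, 1.0, n)])
X = np.vstack([rr, mira])
y = np.r_[np.zeros(n), np.ones(n)]

# Log-period compresses the dynamic range, which helps kernel SVMs
features = np.column_stack([np.log10(np.clip(X[:, 0], 1e-3, None)), X[:, 1]])
acc = cross_val_score(SVC(kernel="rbf"), features, y, cv=5).mean()
print(acc)
```

    On classes this well separated, cross-validated accuracy is near perfect; the hard part in practice is the overlapping classes responsible for the ~70-75% all-class accuracy quoted above.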

  5. Process configuration of Liquid-nitrogen Energy Storage System (LESS) for maximum turnaround efficiency

    NASA Astrophysics Data System (ADS)

    Dutta, Rohan; Ghosh, Parthasarathi; Chowdhury, Kanchan

    2017-12-01

The diverse power generation sector requires energy storage owing to the penetration of variable renewable energy sources and the use of CO2 capture plants with fossil fuel based power plants. Cryogenic energy storage, a large-scale, decoupled system capable of producing power in the range of megawatts, is one of the options. The drawback of these systems is low turnaround efficiency, because the liquefaction processes are highly energy intensive. In this paper, the scopes for improving the turnaround efficiency of such a plant based on liquid nitrogen were identified and some of them were addressed. A method using multiple stages of reheat and expansion was proposed, improving the turnaround efficiency from 22% to 47% with four such stages in the cycle. The novelty here is the application of reheating in a cryogenic system and the utilization of waste heat for that purpose. Based on the study, process conditions for a laboratory-scale setup were determined and are presented here.
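    The benefit of multiple reheat-and-expansion stages can be seen from an idealized calculation: n isentropic stages with equal pressure ratios, each reheated to the same temperature before expanding, approach the isothermal work limit. The numbers (ideal-gas nitrogen properties, a 100:1 pressure ratio, 300 K reheat) are assumptions for illustration, not the paper's cycle:

```python
import numpy as np

def expansion_work(n_stages, T_h=300.0, p_ratio=100.0, gamma=1.4, cp=1.04):
    """Ideal specific work (kJ/kg) from expanding a gas through n_stages
    equal isentropic stages, reheated to T_h before each stage."""
    k = (gamma - 1.0) / gamma
    r_stage = p_ratio ** (1.0 / n_stages)
    return n_stages * cp * T_h * (1.0 - r_stage ** (-k))

for n in (1, 2, 4):
    print(n, round(expansion_work(n), 1))

# Isothermal expansion is the upper bound: w = R * T * ln(p_ratio)
print(round(0.2968 * 300.0 * np.log(100.0), 1))
```

    Each added reheat stage recovers more work per kilogram of stored liquid, which is the mechanism behind the reported efficiency gain from 22% to 47% with four stages.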

  6. Apparatus and method for variable angle slant hole collimator

    DOEpatents

    Lee, Seung Joon; Kross, Brian J.; McKisson, John E.

    2017-07-18

    A variable angle slant hole (VASH) collimator for providing collimation of high energy photons such as gamma rays during radiological imaging of humans. The VASH collimator includes a stack of multiple collimator leaves and a means of quickly aligning each leaf to provide various projection angles. Rather than rotate the detector around the subject, the VASH collimator enables the detector to remain stationary while the projection angle of the collimator is varied for tomographic acquisition. High collimator efficiency is achieved by maintaining the leaves in accurate alignment through the various projection angles. Individual leaves include unique angled cuts to maintain a precise target collimation angle. Matching wedge blocks driven by two actuators with twin-lead screws accurately position each leaf in the stack resulting in the precise target collimation angle. A computer interface with the actuators enables precise control of the projection angle of the collimator.

  7. Health Care Provider Burnout in a United States Military Medical Center During a Period of War.

    PubMed

    Sargent, Paul; Millegan, Jeffrey; Delaney, Eileen; Roesch, Scott; Sanders, Martha; Mak, Heather; Mallahan, Leonard; Raducha, Stephanie; Webb-Murphy, Jennifer

    2016-02-01

    Provider burnout can impact efficiency, empathy, and medical errors. Our study examines burnout in a military medical center during a period of war. A survey including the Maslach Burnout Inventory (MBI), deployment history, and work variables was distributed to health care providers. MBI subscale means were calculated and associations between variables were analyzed. Approximately 60% of 523 respondents were active duty and 34% had deployed. MBI subscale means were 19.99 emotional exhaustion, 4.84 depersonalization, and 40.56 personal accomplishment. Frustration over administrative support was associated with high emotional exhaustion and depersonalization; frustration over life/work balance was associated with high emotional exhaustion. Levels of burnout in our sample were similar to civilian medical centers. Sources of frustration were related to administrative support and life/work balance. Deployment had no effect on burnout levels. Reprint & Copyright © 2016 Association of Military Surgeons of the U.S.

  8. Dynamically variable negative stiffness structures.

    PubMed

    Churchill, Christopher B; Shahan, David W; Smith, Sloan P; Keefe, Andrew C; McKnight, Geoffrey P

    2016-02-01

    Variable stiffness structures that enable a wide range of efficient load-bearing and dexterous activity are ubiquitous in mammalian musculoskeletal systems but are rare in engineered systems because of their complexity, power, and cost. We present a new negative stiffness-based load-bearing structure with dynamically tunable stiffness. Negative stiffness, traditionally used to achieve novel response from passive structures, is a powerful tool to achieve dynamic stiffness changes when configured with an active component. Using relatively simple hardware and low-power, low-frequency actuation, we show an assembly capable of fast (<10 ms) and useful (>100×) dynamic stiffness control. This approach mitigates limitations of conventional tunable stiffness structures that exhibit either small (<30%) stiffness change, high friction, poor load/torque transmission at low stiffness, or high power active control at the frequencies of interest. We experimentally demonstrate actively tunable vibration isolation and stiffness tuning independent of supported loads, enhancing applications such as humanoid robotic limbs and lightweight adaptive vibration isolators.

  9. Research on electricity consumption forecast based on mutual information and random forests algorithm

    NASA Astrophysics Data System (ADS)

    Shi, Jing; Shi, Yunli; Tan, Jian; Zhu, Lei; Li, Hu

    2018-02-01

Traditional power forecasting models cannot efficiently take various factors into account, nor identify the relevant factors. In this paper, mutual information from information theory and the artificial intelligence random forests algorithm are introduced into medium- and long-term electricity demand prediction. Mutual information can identify highly related factors based on the average mutual information between a variety of variables and electricity demand; different industries may be highly associated with different variables. The random forests algorithm was used to build forecasting models for the different industries according to their correlation factors. The electricity consumption data of Jiangsu Province is taken as a practical example, and the above methods are compared with methods that disregard mutual information and industry differences. The simulation results show that the above method is scientific, effective, and provides higher prediction accuracy.
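    The two-step pipeline (mutual-information screening, then a random forest on the retained factors) can be sketched as follows; the candidate drivers and their effects are invented for illustration and differ from the paper's industry-specific factors:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(5)
n = 400

# Hypothetical candidate drivers of electricity demand (names assumed)
gdp = rng.normal(100.0, 20.0, n)
temperature = rng.normal(15.0, 8.0, n)
irrelevant = rng.normal(0.0, 1.0, n)
demand = 2.0 * gdp + 3.0 * temperature + rng.normal(0.0, 5.0, n)

X = np.column_stack([gdp, temperature, irrelevant])
mi = mutual_info_regression(X, demand, random_state=0)

# Keep the two factors with the highest average mutual information
keep = np.argsort(mi)[-2:]
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:, keep], demand)
print(mi, sorted(int(i) for i in keep), model.score(X[:, keep], demand))
```

    The MI screen drops the unrelated factor before the forest is trained, which is the role the paper assigns to mutual information per industry.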

  10. VLSI implementation of a new LMS-based algorithm for noise removal in ECG signal

    NASA Astrophysics Data System (ADS)

    Satheeskumaran, S.; Sabrigiriraj, M.

    2016-06-01

Least mean square (LMS)-based adaptive filters are widely deployed for removing artefacts in the electrocardiogram (ECG) owing to their small number of computations, but they possess high mean square error (MSE) in noisy environments. The transform-domain variable step-size LMS algorithm reduces the MSE at the cost of computational complexity. In this paper, a variable step-size delayed LMS adaptive filter is used to remove artefacts from the ECG signal for improved feature extraction. Dedicated digital signal processors provide fast processing, but they are not flexible. By using field-programmable gate arrays, pipelined architectures can be used to enhance system performance. The pipelined architecture can enhance the operating efficiency of the adaptive filter and save power consumption. This technique provides a high signal-to-noise ratio and low MSE with reduced computational complexity; hence, it is a useful method for monitoring patients with heart-related problems.
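    A variable step-size LMS noise canceller can be sketched in a few lines; the step-size update below is one common VSS rule (error-energy driven), not the paper's exact delayed-LMS update, and the "ECG" is a stand-in sinusoid corrupted by a 50 Hz powerline artefact:

```python
import numpy as np

def vss_lms(d, x, n_taps=8, mu_min=1e-4, mu_max=0.05, alpha=0.97, gamma=1e-4):
    """Variable step-size LMS: the step size is inflated by the squared
    error and decays geometrically as the filter converges (an assumed
    VSS rule for illustration)."""
    w = np.zeros(n_taps)
    mu = mu_max
    e = np.zeros_like(d)
    for n in range(n_taps, len(d)):
        u = x[n - n_taps:n][::-1]     # most recent reference samples
        e[n] = d[n] - w @ u           # error = primary input - filter output
        mu = np.clip(alpha * mu + gamma * e[n] ** 2, mu_min, mu_max)
        w += 2.0 * mu * e[n] * u
    return e

fs = 500.0
t = np.arange(0.0, 4.0, 1.0 / fs)
clean = np.sin(2 * np.pi * 1.2 * t)                        # stand-in ECG trace
noisy = clean + 0.5 * np.sin(2 * np.pi * 50.0 * t + 0.3)   # powerline artefact
ref = np.sin(2 * np.pi * 50.0 * t)                         # reference noise input

e = vss_lms(noisy, ref)
print(np.mean((e[-500:] - clean[-500:]) ** 2))   # residual MSE after adaptation
```

    The large initial step speeds convergence while the decayed steady-state step keeps misadjustment (and hence residual MSE) low, which is the trade-off a fixed-step LMS cannot make.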

  11. Impact of bimodal textural heterogeneity and connectivity on flow and transport through unsaturated mine waste rock

    NASA Astrophysics Data System (ADS)

    Appels, Willemijn M.; Ireson, Andrew M.; Barbour, S. Lee

    2018-02-01

Mine waste rock dumps have highly variable flowpaths caused by the contrasting textures and geometry of materials laid down during the 'plug dumping' process. Numerical experiments were conducted to investigate how these characteristics control unsaturated-zone flow and transport. Hypothetical profiles of inner-lift structure were generated with multiple-point statistics and populated with the hydraulic parameters of a finer and a coarser material. Early arrival of water and solutes at the bottom of the lifts was observed after spring snowmelt. The leaching efficiency, a measure of the proportion of a resident solute that is flushed out of the rock via infiltrating snowmelt or rainfall, was consistently high, but modified by the structure and texture of the lift. Under high rates of net percolation during snowmelt, preferential flow was generated in the coarse-textured parts of the rock, and solutes in the fine-textured parts remained stagnant. Under lower rates of net percolation during the summer and fall, finer materials were flushed too, and the spatial variability of solute concentration in the lift was reduced. Layering of lifts leads to lower flow rates at depth, minimizing preferential flow and increasing the leaching of resident solutes. These findings highlight the limited role of large-scale connected geometries in focusing flow and transport under dynamic surface net percolation conditions. As such, our findings agree with recent numerical results from soil studies with Gaussian connected geometries as well as recent experimental findings, emphasizing the dominant role of matrix flow and high leaching efficiency in large waste rock dumps.

  12. Application of high temperature phase change materials for improved efficiency in waste-to-energy plants.

    PubMed

    Dal Magro, Fabio; Xu, Haoxin; Nardin, Gioacchino; Romagnoli, Alessandro

    2018-03-01

This study reports the thermal analysis of a novel thermal energy storage based on a high temperature phase change material (PCM) used to improve efficiency in waste-to-energy plants. Current waste-to-energy plant efficiency is limited by the steam generation cycle, which is carried out with boilers composed of water-walls (i.e. radiant evaporators), evaporators, economizers, and superheaters. Although well established, this technology is subject to limitations related to high temperature corrosion and fluctuation in steam production due to the non-homogeneous composition of solid waste; this leads to increased maintenance costs and limits plant availability and electrical efficiency. The solution proposed in this paper consists of replacing the typical refractory brick installed in the combustion chamber with a PCM-based refractory brick capable of storing a variable heat flux and releasing it on demand as a steady heat flux. By means of this technology it is possible to mitigate steam production fluctuation, to increase the temperature of superheated steam over current corrosion limits (450 °C) without using coated superheaters, and to increase the electrical efficiency beyond 34%. In the current paper a detailed thermo-mechanical analysis has been carried out to compare the performance of the PCM-based refractory brick against traditional alumina refractory bricks. The PCM considered in this paper is aluminium (and its alloys), whereas its container consists of high density ceramics (such as Al2O3, AlN and Si3N4); the different coefficients of linear thermal expansion of the different materials require a detailed thermo-mechanical analysis to ascertain the feasibility of the proposed technology. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Experimental evaluation of a translating nozzle sidewall radial turbine

    NASA Technical Reports Server (NTRS)

    Roelke, Richard J.; Rogo, Casimir

    1987-01-01

    An experimental performance evaluation was made of two movable sidewall variable area radial turbines. The turbine designs were representative of the gas generator turbine of a variable flow capacity rotorcraft engine. The first turbine was an uncooled design while the second turbine had a cooled nozzle but an uncooled rotor. The cooled nozzle turbine was evaluated both with and without coolant flow. The test results showed that the movable nozzle wall is a viable and efficient means to effectively control the flow capacity of a radial turbine. Peak efficiencies of the second turbine with and without nozzle coolant were 86.5 and 88 percent respectively. These values are comparable to pivoting vane variable geometry turbines; however, the decrease in efficiency as the flow was varied from the design value was much less for the movable wall turbine. Several design improvements which should increase the turbine efficiency one or two more points are identified. These design improvements include reduced leakage losses and relocation of the vane coolant ejection holes to reduce mainstream disturbance.

  14. Sap flow measurements combining sap-flux density radial profiles with punctual sap-flux density measurements in oak trees (Quercus ilex and Quercus pyrenaica) - water-use implications in a water-limited savanna-

    NASA Astrophysics Data System (ADS)

Reyes, J. Leonardo; Lubczynski, Maciek W.

    2010-05-01

    Sap flow measurement is a key aspect for understanding how plants use water and their impacts on the ecosystems. A variety of sensors have been developed to measure sap flow, each one with its unique characteristics. When the aim of a research is to have accurate tree water use calculations, with high temporal and spatial resolution (i.e. scaled), a sensor with high accuracy, high measurement efficiency, low signal-to-noise ratio and low price is ideal, but such has not been developed yet. Granier's thermal dissipation probes (TDP) have been widely used in many studies and various environmental conditions because of its simplicity, reliability, efficiency and low cost. However, it has two major flaws when is used in semi-arid environments and broad-stem tree species: it is often affected by high natural thermal gradients (NTG), which distorts the measurements, and it cannot measure the radial variability of sap-flux density in trees with sapwood thicker than two centimeters. The new, multi point heat field deformation sensor (HFD) is theoretically not affected by NTG, and it can measure the radial variability of the sap flow at different depths. However, its high cost is a serious limitation when simultaneous measurements are required in several trees (e.g. catchment-scale studies). The underlying challenge is to develop a monitoring schema in which HFD and TDP are combined to satisfy the needs of measurement efficiency and accuracy in water accounting. To assess the level of agreement between TDP and HFD methods in quantifying sap flow rates and temporal patterns on Quercus ilex (Q.i ) and Quercus pyrenaica trees (Q.p.), three measurement schemas: standard TDP, TDP-NTG-corrected and HFD were compared in dry season at the semi-arid Sardon area, near Salamanca in Spain in the period from June to September 2009. 
To correct TDP measurements for radial sap flow variability, a radial sap-flux density correction factor was applied and tested by adjusting TDP measurements using the HFD-measured radial profiles. The standard TDP daily mean sap-flux density was 95% higher than the 2 cm equivalent of the HFD for Q. ilex and 70% higher for Q. pyrenaica. The NTG-corrected TDP daily mean sap-flux density was 34% higher than HFD for Q. ilex and 47% lower for Q. pyrenaica. Regarding sap flow, the standard TDP sap flow was 81% higher than HFD sap flow for Q. ilex and 297% higher for Q. pyrenaica. The NTG-corrected TDP sap flow was 24% higher than HFD sap flow for Q. ilex and 23% higher for Q. pyrenaica. The radial correction, applied to NTG-corrected TDP sap-flux density, produced sap-flow measurements in good agreement with HFD, just slightly lower (-3% for Q.i. and -4% for Q.p.). The TDP-HFD sap flow data acquired in the dry season over the savanna-type, sparsely distributed oak trees (Q. ilex & Q. pyrenaica) showed that the TDP method must be corrected for NTG and for radial variability of sap-flux density in trees with sapwood thicker than 2 cm. If such corrections are not applied, the accounted amount of water used by the trees is prone to overestimation, especially for Quercus pyrenaica. The results also indicate that the combination of HFD and TDP leads to an efficient and accurate operational sap flow measurement schema, currently in the optimization stage.
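    The idea of a radial correction factor can be illustrated with a short numerical sketch. This is not the authors' code or data: the profile values, depths and function name below are hypothetical. A TDP probe samples only the outer 2 cm of sapwood, so one plausible correction scales the TDP reading by the ratio of the depth-averaged HFD flux density over the whole sapwood to the flux density in that outer band.

```python
import numpy as np

def radial_correction_factor(hfd_profile, depths_cm, probe_depth_cm=2.0):
    """Ratio of the depth-averaged sap-flux density over the full sapwood
    to the flux density in the outer band sampled by a TDP probe."""
    depths_cm = np.asarray(depths_cm, dtype=float)
    flux = np.asarray(hfd_profile, dtype=float)
    outer = flux[depths_cm <= probe_depth_cm].mean()  # what the TDP "sees"
    whole = flux.mean()                               # whole-sapwood mean
    return whole / outer

# Hypothetical HFD radial profile: flux density at increasing sensor depths (cm);
# flux typically declines toward the heartwood, so TDP overestimates the mean.
depths = [1, 2, 3, 4, 5, 6]
profile = [30.0, 28.0, 22.0, 15.0, 9.0, 4.0]

k = radial_correction_factor(profile, depths)  # < 1 for a declining profile
tdp_sapflux = 29.0                             # TDP reading, outer 2 cm only
corrected = k * tdp_sapflux                    # radially corrected estimate
```

With this made-up profile the outer-band mean (29) exceeds the whole-sapwood mean (18), so the factor shrinks the TDP value, mirroring the overestimation reported in the abstract.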

  15. Description and validation of the Simple, Efficient, Dynamic, Global, Ecological Simulator (SEDGES v.1.0)

    NASA Astrophysics Data System (ADS)

    Paiewonsky, Pablo; Elison Timm, Oliver

    2018-03-01

    In this paper, we present a simple dynamic global vegetation model whose primary intended use is auxiliary to the land-atmosphere coupling scheme of a climate model, particularly one of intermediate complexity. The model simulates not only important ecological variables but also some hydrological and surface energy variables that are typically either simulated by land surface schemes or else used as boundary data input for these schemes. The model formulations and their derivations are presented here in detail. The model includes some realistic and useful features for its level of complexity, including a photosynthetic dependency on light, full coupling of photosynthesis and transpiration through an interactive canopy resistance, and a soil organic carbon dependence for bare-soil albedo. We evaluate the model's performance by running it as part of a simple land surface scheme that is driven by reanalysis data. The evaluation against observational data includes net primary productivity, leaf area index, surface albedo, and diagnosed variables relevant for the closure of the hydrological cycle. In this setup, we find that the model gives an adequate to good simulation of basic large-scale ecological and hydrological variables. Of the variables analyzed in this paper, gross primary productivity is particularly well simulated. The results also reveal the current limitations of the model. The most significant deficiency is the excessive simulation of evapotranspiration in mid- to high northern latitudes during their winter to spring transition. The model has a relative advantage in situations that require some combination of computational efficiency, model transparency and tractability, and the simulation of large-scale vegetation and land surface characteristics under non-present-day conditions.

  16. Global patterns and climate drivers of water-use efficiency in terrestrial ecosystems deduced from satellite-based datasets and carbon cycle models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yan; Piao, Shilong; Huang, Mengtian

    Our aim is to investigate how ecosystem water-use efficiency (WUE) varies spatially under different climate conditions, and how spatial variations in WUE differ from those of transpiration-based water-use efficiency (WUEt) and transpiration-based inherent water-use efficiency (IWUEt). Location: Global terrestrial ecosystems. We investigated spatial patterns of WUE using two datasets of gross primary productivity (GPP) and evapotranspiration (ET) and four biosphere model estimates of GPP and ET. Spatial relationships between WUE and climate variables were further explored through regression analyses. Global WUE estimated by the two satellite-based datasets is 1.9 ± 0.1 and 1.8 ± 0.6 g C m^-2 mm^-1, lower than the simulations from the four process-based models (2.0 ± 0.3 g C m^-2 mm^-1) but comparable within the uncertainty of both approaches. In both satellite-based datasets and process models, precipitation is more strongly associated with spatial gradients of WUE for temperate and tropical regions, but temperature dominates north of 50 degrees N. WUE also increases with increasing solar radiation at high latitudes. The values of WUE from datasets and process-based models are systematically higher in wet regions (with higher GPP) than in dry regions. WUEt shows a lower precipitation sensitivity than WUE, which is contrary to leaf- and plant-level observations. IWUEt, the product of WUEt and water vapour deficit, is found to be rather conservative with spatially increasing precipitation, in agreement with leaf- and plant-level measurements. In conclusion, WUE, WUEt and IWUEt produce different spatial relationships with climate variables. In dry ecosystems, water losses from evaporation from bare soil, uncorrelated with productivity, tend to make WUE lower than in wetter regions. Yet canopy conductance is intrinsically efficient in those ecosystems and maintains a higher IWUEt. This suggests that the responses of each component flux of evapotranspiration should be analysed separately when investigating regional gradients in WUE, its temporal variability and its trends.
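    The three efficiency metrics compared in this record reduce to simple ratios of fluxes. A minimal sketch, with entirely made-up annual fluxes for one grid cell (the function names and values are illustrative, not from the study):

```python
def wue(gpp, et):
    """Ecosystem water-use efficiency: GPP per unit evapotranspiration
    (g C m-2 yr-1 over mm yr-1 gives g C m-2 mm-1)."""
    return gpp / et

def iwue_t(gpp, transpiration, vpd_kpa):
    """Transpiration-based inherent WUE: WUE_t multiplied by the
    vapour pressure deficit."""
    return (gpp / transpiration) * vpd_kpa

# Hypothetical annual fluxes
gpp = 1200.0   # g C m-2 yr-1, gross primary productivity
et = 600.0     # mm yr-1, total evapotranspiration
t = 400.0      # mm yr-1, transpiration component of ET
vpd = 1.2      # kPa, mean daytime vapour pressure deficit

wue_val = wue(gpp, et)       # 2.0 g C m-2 mm-1, in the range quoted above
iwue = iwue_t(gpp, t, vpd)   # WUE_t (3.0) scaled by VPD
```

Because ET includes bare-soil evaporation that is uncorrelated with productivity, WUE computed from ET is systematically lower than WUEt computed from transpiration alone, which is the contrast the abstract draws between dry and wet regions.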

  17. Global patterns and climate drivers of water-use efficiency in terrestrial ecosystems deduced from satellite-based datasets and carbon cycle models

    DOE PAGES

    Sun, Yan; Piao, Shilong; Huang, Mengtian; ...

    2015-12-23

    Our aim is to investigate how ecosystem water-use efficiency (WUE) varies spatially under different climate conditions, and how spatial variations in WUE differ from those of transpiration-based water-use efficiency (WUEt) and transpiration-based inherent water-use efficiency (IWUEt). Location: Global terrestrial ecosystems. We investigated spatial patterns of WUE using two datasets of gross primary productivity (GPP) and evapotranspiration (ET) and four biosphere model estimates of GPP and ET. Spatial relationships between WUE and climate variables were further explored through regression analyses. Global WUE estimated by the two satellite-based datasets is 1.9 ± 0.1 and 1.8 ± 0.6 g C m^-2 mm^-1, lower than the simulations from the four process-based models (2.0 ± 0.3 g C m^-2 mm^-1) but comparable within the uncertainty of both approaches. In both satellite-based datasets and process models, precipitation is more strongly associated with spatial gradients of WUE for temperate and tropical regions, but temperature dominates north of 50 degrees N. WUE also increases with increasing solar radiation at high latitudes. The values of WUE from datasets and process-based models are systematically higher in wet regions (with higher GPP) than in dry regions. WUEt shows a lower precipitation sensitivity than WUE, which is contrary to leaf- and plant-level observations. IWUEt, the product of WUEt and water vapour deficit, is found to be rather conservative with spatially increasing precipitation, in agreement with leaf- and plant-level measurements. In conclusion, WUE, WUEt and IWUEt produce different spatial relationships with climate variables. In dry ecosystems, water losses from evaporation from bare soil, uncorrelated with productivity, tend to make WUE lower than in wetter regions. Yet canopy conductance is intrinsically efficient in those ecosystems and maintains a higher IWUEt. This suggests that the responses of each component flux of evapotranspiration should be analysed separately when investigating regional gradients in WUE, its temporal variability and its trends.

  18. Virus removal efficiency of Cambodian ceramic pot water purifiers.

    PubMed

    Salsali, Hamidreza; McBean, Edward; Brunsting, Joseph

    2011-06-01

    Virus removal efficiency is described for three types of silver-impregnated ceramic water filters (CWFs) produced in Cambodia. The tests were completed using freshly scrubbed filters and de-ionized (DI) water to evaluate the removal efficiency of the virus in isolation, with no other interacting water quality variables. Removal efficiencies between 0.21 and 0.45 log were observed, which is significantly lower than results obtained in testing of similar filters by other investigators using surface or rain water and a less frequent cleaning regime; those experiments generally found virus removal efficiencies greater than 1.0 log. The difference may be due to the association of viruses with suspended solids, and the subsequent removal of these solids during filtration. Variability in virus removal efficiencies between pots from the same manufacturer, and observed flow rates outside the manufacturer's specifications, suggest that tighter quality control and consistency may be needed during production.
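    The log removal values quoted in this record convert to percent removal as 100·(1 − 10^−LRV), so the gap between a 0.45-log and a 1.0-log filter is larger than it looks. A short sketch (function names are illustrative):

```python
import math

def log_removal(c_in, c_out):
    """Log10 removal value (LRV): 1.0 log = 90 % removal, 2.0 log = 99 %."""
    return math.log10(c_in / c_out)

def percent_removal(lrv):
    """Convert an LRV back to percent of organisms removed."""
    return 100.0 * (1.0 - 10.0 ** -lrv)

low = percent_removal(0.45)   # upper end reported here: ~64.5 % removed
high = percent_removal(1.0)   # the >1.0-log results elsewhere: 90 % removed
```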

  19. High particle export over the continental shelf of the west Antarctic Peninsula

    NASA Astrophysics Data System (ADS)

    Buesseler, Ken O.; McDonnell, Andrew M. P.; Schofield, Oscar M. E.; Steinberg, Deborah K.; Ducklow, Hugh W.

    2010-11-01

    Drifting cylindrical traps and the flux proxy 234Th indicate more than an order of magnitude higher sinking fluxes of particulate carbon and 234Th in January 2009 than measured by a time-series conical trap used regularly on the shelf of the west Antarctic Peninsula (WAP). The higher fluxes measured in this study have several implications for our understanding of the WAP ecosystem. Larger sinking fluxes result in a revised export efficiency of at least 10% (C flux/net primary production) and a requisite lower regeneration efficiency in surface waters. High fluxes also result in a large supply of sinking organic matter to support subsurface and benthic food webs on the continental shelf. These new findings call into question the magnitude of seasonal and interannual variability in particle flux and reaffirm the difficulty of using moored conical traps as a quantitative flux collector in shallow waters.
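    The export efficiency (e-ratio) revised in this record is simply sinking carbon flux divided by net primary production. A toy calculation with hypothetical fluxes (not the study's numbers) illustrating the order-of-magnitude revision:

```python
def export_efficiency(c_flux, npp):
    """Export (e-)ratio: sinking POC flux divided by net primary production."""
    return c_flux / npp

# Hypothetical WAP-shelf values, chosen only to show the order-of-magnitude gap
npp = 1000.0          # mg C m-2 d-1, net primary production
conical_flux = 10.0   # mg C m-2 d-1, as a moored conical trap might record
drifting_flux = 100.0 # mg C m-2 d-1, an order of magnitude higher (drifting
                      # cylindrical traps / 234Th proxy)

e_conical = export_efficiency(conical_flux, npp)    # 0.01 -> 1 %
e_drifting = export_efficiency(drifting_flux, npp)  # 0.10 -> the >=10 % e-ratio
```

A tenfold increase in measured flux at fixed NPP raises the e-ratio tenfold and forces a correspondingly lower regeneration efficiency in surface waters, which is the budget argument the abstract makes.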

  20. Determination of azoxystrobin and chlorothalonil using a methacrylate-based polymer modified with gold nanoparticles as solid-phase extraction sorbent.

    PubMed

    Catalá-Icardo, Mónica; Gómez-Benito, Carmen; Simó-Alfonso, Ernesto Francisco; Herrero-Martínez, José Manuel

    2017-01-01

    This paper describes a novel and sensitive method for the extraction, preconcentration, and determination of two important and widely used fungicides, azoxystrobin and chlorothalonil. The methodology is based on solid-phase extraction (SPE) using a polymeric material functionalized with gold nanoparticles (AuNPs) as sorbent, followed by high-performance liquid chromatography (HPLC) with diode array detection (DAD). Several experimental variables that affect the extraction efficiency, such as the eluent volume, sample flow rate, and salt addition, were optimized. Under the optimal conditions, the sorbent provided satisfactory enrichment efficiency for both fungicides, high selectivity and excellent reusability (>120 re-uses). The proposed method allowed the detection of 0.05 μg L^-1 of the fungicides and gave satisfactory recoveries (75-95%) when applied to drinking and environmental water samples (river, well, tap, irrigation, spring, and sea waters).

  1. Study of Conical Pulsed Inductive Thruster with Multiple Modes of Operation

    NASA Technical Reports Server (NTRS)

    Miller, Robert; Eskridge, Richard; Martin, Adam; Rose, Frank

    2008-01-01

    An electrodeless, pulsed, inductively coupled thruster has several advantages over current electric propulsion designs. The efficiency of a pulsed inductive thruster depends on the pulse characteristics of the device; these thrusters are therefore throttleable over a wide range of thrust levels by varying the pulse rate without affecting thruster efficiency. In addition, by controlling the pulse energy and the mass bit together, the specific impulse (Isp) of the thruster can also be varied with minimal efficiency loss over a wide range of Isp levels. Pulsed inductive thrusters work with a multitude of propellants, including ammonia. Thus, a single pulsed inductive thruster could handle a multitude of mission needs, from high thrust to high Isp, with one propulsion solution that would be variable in flight. A conical pulsed inductive lab thruster has been built to study this form of electric propulsion in detail. This thruster incorporates many features meant to enable this technology as a viable space propulsion option, including solid-state switch technology for all switching needs of the thruster and pre-ionization of the propellant gas prior to acceleration. Pre-ionization will significantly improve coupling efficiency between the drive and bias fields and the plasma, enabling lower pulse energy levels without efficiency reduction, and it can be accomplished at a small fraction of the drive pulse energy.

  2. Horizontally rotating disc recirculated photoreactor with TiO2-P25 nanoparticles immobilized onto a HDPE plate for photocatalytic removal of p-nitrophenol.

    PubMed

    Behnajady, Mohammad A; Dadkhah, Hojjat; Eskandarloo, Hamed

    2018-04-01

    In this study, a horizontally rotating disc recirculated (HRDR) photoreactor equipped with two UV lamps (6 W) was designed and fabricated for photocatalytic removal of p-nitrophenol (PNP). Photocatalyst (TiO2) nanoparticles were immobilized onto a high-density polyethylene (HDPE) disc, and the PNP-containing solution was allowed to flow (flow rate of 310 mL min^-1) in the radial direction along the surface of the rotating disc illuminated with UV light. The efficiency of direct photolysis and photocatalysis and the effect of rotating speed on the removal of PNP were studied in the HRDR photoreactor. It was found that TiO2-P25 nanoparticles are needed for effective removal of PNP and that there is an optimum rotating speed (450 rpm) for efficient performance of the HRDR photoreactor. The effects of operational variables on the removal efficiency were then optimized using response surface methodology. The results showed that the predicted removal efficiencies are consistent with experimental results, with an R^2 of 0.9656. Maximum removal (82.6%) was achieved in the HRDR photoreactor at the optimum operational conditions. Finally, the reusability of the HRDR photoreactor was evaluated; the results showed high reusability and stability without any significant decrease in the photocatalytic removal efficiency.

  3. Numerical Calculation of Gravity-Capillary Interfacial Waves of Finite Amplitude,

    DTIC Science & Technology

    1980-02-26

    corresponding to n=2. The numerical scheme appears to be more efficient than the Padé table method since the... numerical work of Schwartz and Vanden-Broeck shows... waves are studied. A generalization of Wilton's ripples for interfacial waves is presented. I. INTRODUCTION ...that all variables become dimensionless. We... then recast these series as Padé approximants. High accuracy solutions were... irrotational. Thus, we define stream functions and potential...

  4. Fault and Defect Tolerant Computer Architectures: Reliable Computing with Unreliable Devices

    DTIC Science & Technology

    2006-08-31

    supply voltage, the delay of the inverter increases parabolically. 2.2.2.5 High Field Effects. A consequence of maintaining a higher Vdd than... be explained by disproportionate scaling of QCRIT with respect to collector efficiency. 78 Technology trends, then, indicate a moderate increase in... using clustered defects, a compounding procedure is used. Compounding considers λ as a random variable rather than a constant. Let l be this defect

  5. High Fidelity and Multiscale Algorithms for Collisional-radiative and Nonequilibrium Plasmas (Briefing Charts)

    DTIC Science & Technology

    2014-07-01

    of models for variable conditions: – Use implicit models to eliminate the constraint of a sequence of fast time scales: c, ve, – Price to pay: lack... collisions: – Elastic – Braginskii terms – Inelastic – warning! Rates depend on both T and relative velocity – Multi-fluid CR model from... merge/split for particle management, efficient sampling, inelastic collisions... – Level grouping schemes of electronic states, for dynamical coarse

  6. The effects of energy concentration in roughage and allowance of concentrates on performance, health and energy efficiency of pluriparous dairy cows during early lactation.

    PubMed

    Schmitz, Rolf; Schnabel, Karina; von Soosten, Dirk; Meyer, Ulrich; Spiekers, Hubert; Rehage, Jürgen; Dänicke, Sven

    2018-04-01

    The aim of this study was to investigate the effects of different energy supplies from roughage and concentrates on performance, health and energy efficiency during early lactation. For this purpose, an experiment was conducted with 64 pluriparous German Holstein cows from 3 weeks prepartum until 16 weeks postpartum. During the dry period all cows received the same dry-cow ration. After calving, cows were assigned in a 2 × 2 factorial arrangement to one of four groups, receiving either a moderate (MR, 6.0 MJ NEL) or a high (HR, 6.4 MJ NEL) energy concentration in roughage, combined with moderate (MC, 150 g/kg energy-corrected milk (ECM)) or high (HC, 250 g/kg ECM) amounts of concentrates on a dry matter (DM) basis, allocated from an automatic feeding system. Higher allocation of concentrates resulted in an increase of DM intake at the expense of roughage intake. HC cows had a higher milk yield than MC cows, whereas ECM was higher in HR cows due to a decrease of milk fat yield in the MR groups. Energy balance and body condition score were elevated in HC cows, but no differences occurred in the development of subclinical ketosis. Furthermore, energy efficiency variables were lower in the HC groups because the greater energy intake was not associated with a considerable elevation of milk yield. Consistency of faeces did not indicate digestive disorders in any of the treatment groups, although the faecal manure score was significantly lower in the HR groups. Our results underline the importance of a high energy uptake from roughage, which can contribute to adequate performance and beneficial efficiency, especially at lower amounts of concentrates in the ration. Feeding concentrates at an average amount of 9.4 kg/d compared to 6.4 kg/d on a DM basis improved the energy balance in our trial, but without consequences for metabolic blood variables and the general health of the cows.

  7. Al2O3/SiON stack layers for effective surface passivation and anti-reflection of high efficiency n-type c-Si solar cells

    NASA Astrophysics Data System (ADS)

    Thi Thanh Nguyen, Huong; Balaji, Nagarajan; Park, Cheolmin; Triet, Nguyen Minh; Le, Anh Huy Tuan; Lee, Seunghwan; Jeon, Minhan; Oh, Donhyun; Dao, Vinh Ai; Yi, Junsin

    2017-02-01

    Excellent surface passivation and anti-reflection properties of double-stack layers are a prerequisite for high efficiency of n-type c-Si solar cells. The high positive fixed charge (Qf) density of N-rich hydrogenated amorphous silicon nitride (a-SiNx:H) films makes them poorly suited to boron emitter passivation, and as the refractive index (n) of a-SiNx:H is decreased, its positive Qf increases further. Hydrogenated amorphous silicon oxynitride (SiON) films combine the properties of amorphous silicon oxide (a-SiOx) and a-SiNx:H, with variable n and less positive Qf compared with a-SiNx:H. In this study, we investigated the passivation and anti-reflection properties of Al2O3/SiON stacks. Initially, a SiON layer was deposited by plasma enhanced chemical vapor deposition with variable n, and its chemical composition was analyzed by Fourier transform infrared spectroscopy. Then, the SiON layer was deposited as a capping layer on a 10 nm thick Al2O3 layer, and the electrical and optical properties were analyzed. The SiON capping layer with n = 1.47 and a thickness of 70 nm resulted in an interface trap density of 4.74 × 10^10 cm^-2 eV^-1 and a Qf of -2.59 × 10^12 cm^-2, with a substantial improvement in lifetime to 1.52 ms after industrial firing. The incorporation of an Al2O3/SiON stack on the front side of the n-type solar cells results in an energy conversion efficiency of 18.34%, compared to 17.55% for cells with Al2O3/a-SiNx:H. The short circuit current density and open circuit voltage increase by up to 0.83 mA cm^-2 and 12 mV, respectively, compared to the Al2O3/a-SiNx:H stack on the front side, due to the good anti-reflection and front-side surface passivation.

  8. Prediction of BP reactivity to talking using hybrid soft computing approaches.

    PubMed

    Kaur, Gurmanik; Arora, Ajat Shatru; Jain, Vijender Kumar

    2014-01-01

    High blood pressure (BP) is associated with an increased risk of cardiovascular diseases; optimal precision in the measurement of BP is therefore appropriate in clinical and research studies. In this work, anthropometric characteristics including age, height, weight, body mass index (BMI), and arm circumference (AC) were used as independent predictor variables for the prediction of BP reactivity to talking. Principal component analysis (PCA) was fused with artificial neural network (ANN), adaptive neurofuzzy inference system (ANFIS), and least square-support vector machine (LS-SVM) models to remove the multicollinearity effect among the anthropometric predictor variables. Statistical tests in terms of the coefficient of determination (R^2), root mean square error (RMSE), and mean absolute percentage error (MAPE) revealed that the PCA-based LS-SVM (PCA-LS-SVM) model produced a more efficient prediction of BP reactivity than the other models. This assessment demonstrates the importance and advantages of PCA-fused prediction models for the prediction of biological variables.
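    The PCA-fusion step described here (decorrelate collinear anthropometric predictors, then regress on the component scores) can be sketched with synthetic data. This is a simplified illustration, not the study's pipeline: all data below are fabricated, and plain ridge regression on the principal-component scores stands in for the LS-SVM used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated anthropometric predictors: age, height, weight, BMI, arm
# circumference. Weight, BMI and AC are strongly collinear by construction.
n = 200
age = rng.uniform(20, 70, n)
height = rng.normal(170, 8, n)
weight = rng.normal(75, 12, n)
bmi = weight / (height / 100) ** 2
ac = 0.35 * weight + rng.normal(0, 1, n)
X = np.column_stack([age, height, weight, bmi, ac])
y = 0.3 * age + 0.2 * bmi + rng.normal(0, 2, n)  # synthetic "BP reactivity"

# 1) standardize, 2) PCA via SVD to remove multicollinearity
Xs = (X - X.mean(0)) / X.std(0)
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
k = 3                              # keep the leading components
scores = Xs @ Vt[:k].T             # decorrelated inputs for the regressor

# 3) ridge regression on the PC scores (linear stand-in for LS-SVM)
lam = 1.0
A = scores.T @ scores + lam * np.eye(k)
w = np.linalg.solve(A, scores.T @ (y - y.mean()))
pred = scores @ w + y.mean()
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

The PC scores are mutually orthogonal, so the downstream regressor no longer sees the collinearity among weight, BMI and AC; that is the multicollinearity removal the abstract refers to.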

  9. Gas engine heat pump cycle analysis. Volume 1: Model description and generic analysis

    NASA Astrophysics Data System (ADS)

    Fischer, R. D.

    1986-10-01

    The task has prepared performance and cost information to assist in evaluating the selection of high voltage alternating current components, values for component design variables, and system configurations and operating strategy. A steady-state computer model for performance simulation of engine-driven and electrically driven heat pumps was prepared and effectively used for parametric and seasonal performance analyses. Parametric analysis showed the effect of variables associated with design of recuperators, brine coils, domestic hot water heat exchanger, compressor size, engine efficiency, insulation on exhaust and brine piping. Seasonal performance data were prepared for residential and commercial units in six cities with system configurations closely related to existing or contemplated hardware of the five GRI engine contractors. Similar data were prepared for an advanced variable-speed electric unit for comparison purposes. The effect of domestic hot water production on operating costs was determined. Four fan-operating strategies and two brine loop configurations were explored.

  10. Homotopy method for optimization of variable-specific-impulse low-thrust trajectories

    NASA Astrophysics Data System (ADS)

    Chi, Zhemin; Yang, Hongwei; Chen, Shiyu; Li, Junfeng

    2017-11-01

    The homotopy method has been used as a useful tool in solving fuel-optimal trajectories with constant-specific-impulse low thrust. However, the specific impulse is often variable for many practical solar electric power-limited thrusters. This paper investigates the application of the homotopy method for optimization of variable-specific-impulse low-thrust trajectories. Difficulties arise when the two commonly-used homotopy functions are employed for trajectory optimization. The optimal power throttle level and the optimal specific impulse are coupled with the commonly-used quadratic and logarithmic homotopy functions. To overcome these difficulties, a modified logarithmic homotopy function is proposed to serve as a gateway for trajectory optimization, leading to decoupled expressions of both the optimal power throttle level and the optimal specific impulse. The homotopy method based on this homotopy function is proposed. Numerical simulations validate the feasibility and high efficiency of the proposed method.

  11. Optimal variable-grid finite-difference modeling for porous media

    NASA Astrophysics Data System (ADS)

    Liu, Xinxin; Yin, Xingyao; Li, Haishan

    2014-12-01

    Numerical modeling of poroelastic waves by the finite-difference (FD) method is more expensive than that of acoustic or elastic waves. To improve the accuracy and computational efficiency of seismic modeling, variable-grid FD methods have been developed. In this paper, we derive optimal staggered-grid finite-difference schemes with variable grid-spacing and time-step for seismic modeling in porous media. FD operators with small grid-spacing and time-step are adopted for low-velocity or small-scale geological bodies, while FD operators with large grid-spacing and time-step are adopted for high-velocity or large-scale regions. The dispersion relations of the FD schemes were derived based on plane wave theory, and the FD coefficients were obtained using Taylor expansion. Dispersion analysis and modeling results demonstrate that the proposed method achieves higher accuracy with lower computational cost for poroelastic wave simulation in heterogeneous reservoirs.
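    The Taylor-expansion route to staggered-grid FD coefficients mentioned above reduces to a small linear solve. The sketch below is the standard textbook construction, not code from this paper: matching Taylor terms of the centered staggered first-derivative stencil gives the system sum_m c_m (2m-1)^(2j-1) = δ_{j1} for j = 1..M.

```python
import numpy as np

def staggered_fd_coefficients(M):
    """Coefficients c_1..c_M of a 2M-point staggered-grid first-derivative
    operator, from Taylor-series matching. The derivative approximation is
        f'(x) ~ (1/h) * sum_m c_m [f(x + (2m-1)h/2) - f(x - (2m-1)h/2)].
    """
    # Row j enforces: sum_m c_m (2m-1)^(2j-1) = 1 for j=1, 0 for j=2..M
    A = np.array([[(2 * m - 1) ** (2 * j - 1) for m in range(1, M + 1)]
                  for j in range(1, M + 1)], dtype=float)
    b = np.zeros(M)
    b[0] = 1.0
    return np.linalg.solve(A, b)

c = staggered_fd_coefficients(2)  # classic 4th-order values: [9/8, -1/24]
```

Solving this system for each local operator length is what lets a variable-grid scheme use short, cheap stencils on the fine grid and longer, more accurate ones on the coarse grid.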

  12. Improvement of two-way continuous-variable quantum key distribution with virtual photon subtraction

    NASA Astrophysics Data System (ADS)

    Zhao, Yijia; Zhang, Yichen; Li, Zhengyu; Yu, Song; Guo, Hong

    2017-08-01

    We propose a method to improve the performance of two-way continuous-variable quantum key distribution protocol by virtual photon subtraction. The virtual photon subtraction implemented via non-Gaussian post-selection not only enhances the entanglement of two-mode squeezed vacuum state but also has advantages in simplifying physical operation and promoting efficiency. In two-way protocol, virtual photon subtraction could be applied on two sources independently. Numerical simulations show that the optimal performance of renovated two-way protocol is obtained with photon subtraction only used by Alice. The transmission distance and tolerable excess noise are improved by using the virtual photon subtraction with appropriate parameters. Moreover, the tolerable excess noise maintains a high value with the increase in distance so that the robustness of two-way continuous-variable quantum key distribution system is significantly improved, especially at long transmission distance.

  13. Lognormal kriging for the assessment of reliability in groundwater quality control observation networks

    USGS Publications Warehouse

    Candela, L.; Olea, R.A.; Custodio, E.

    1988-01-01

    Groundwater quality observation networks are examples of discontinuous sampling of variables presenting spatial continuity and highly skewed frequency distributions. Anywhere in the aquifer, lognormal kriging provides estimates of the variable being sampled and a standard error of the estimate. The average and the maximum standard error within the network can be used to dynamically improve the network sampling efficiency or to find a design able to assure a given reliability level. The approach does not require the formulation of any physical model for the aquifer or any actual sampling of hypothetical configurations. A case study is presented using the network monitoring salty water intrusion into the Llobregat delta confined aquifer, Barcelona, Spain. The variable used to trace the intrusion, chloride concentration, exhibits sudden changes within short distances, which makes the standard error fairly invariant to changes in sampling pattern and to substantial fluctuations in the number of wells. © 1988.

  14. Improving actuation efficiency through variable recruitment hydraulic McKibben muscles: modeling, orderly recruitment control, and experiments.

    PubMed

    Meller, Michael; Chipka, Jordan; Volkov, Alexander; Bryant, Matthew; Garcia, Ephrahim

    2016-11-03

    Hydraulic control systems have become increasingly popular as the means of actuation for human-scale legged robots and assistive devices. One of the biggest limitations to these systems is their run time untethered from a power source. One way to increase endurance is by improving actuation efficiency. We investigate reducing servovalve throttling losses by using a selective recruitment artificial muscle bundle comprised of three motor units. Each motor unit is made up of a pair of hydraulic McKibben muscles connected to one servovalve. The pressure and recruitment state of the artificial muscle bundle can be adjusted to match the load in an efficient manner, much like the firing rate and total number of recruited motor units is adjusted in skeletal muscle. A volume-based effective initial braid angle is used in the model of each recruitment level. This semi-empirical model is utilized to predict the efficiency gains of the proposed variable recruitment actuation scheme versus a throttling-only approach. A real-time orderly recruitment controller with pressure-based thresholds is developed. This controller is used to experimentally validate the model-predicted efficiency gains of recruitment on a robot arm. The results show that utilizing variable recruitment allows for much higher efficiencies over a broader operating envelope.
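    A pressure-threshold recruitment rule of the kind described can be sketched in a few lines. The thresholds, unit count and function name below are illustrative, not the authors' controller: a motor unit is recruited when the commanded pressure fraction saturates high and derecruited when it falls low, so the bundle's capacity tracks the load instead of throttling a single large actuator.

```python
def recruit(level, pressure, p_up=0.9, p_down=0.4, n_units=3):
    """One step of an orderly recruitment rule: `pressure` is the commanded
    pressure as a fraction of the currently recruited units' capacity."""
    if pressure >= p_up and level < n_units:
        return level + 1   # load near saturation: recruit another unit
    if pressure <= p_down and level > 1:
        return level - 1   # load light: derecruit to cut throttling losses
    return level

level, history = 1, [1]
for p in [0.5, 0.95, 0.95, 0.3, 0.2]:
    level = recruit(level, p)
    history.append(level)
# history: [1, 1, 2, 3, 2, 1]
```

Keeping each recruited unit near its efficient pressure range is the analogue of skeletal muscle adjusting firing rate and the number of active motor units, which is the efficiency argument the abstract makes.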

  15. Variable transmittance electrochromic windows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rauh, R.D.

    1983-11-01

    Electrochromic apertures based on RF sputtered thin films of WO3 are projected to have widely different sunlight attenuation properties when converted to MxWO3 (M = H, Li, Na, Ag, etc.), depending on the initial preparation conditions. Amorphous WO3, prepared at low temperature, has a coloration spectrum centered in the visible, while high temperature crystalline WO3 attenuates infrared light most efficiently but appears to become highly reflective at high values of x. The possibility therefore exists of producing variable light transmission apertures of the general form (a-MxWO3/FIC/c-WO3), where the FIC is an ion conducting thin film, such as LiAlF4 (for M = Li). The attenuation of 90% of the solar spectrum requires an injected charge of 30 to 40 mcoul/sq cm in either amorphous or crystalline WO3, corresponding to 0.2 Whr/sq m per coloration cycle. In order to produce windows with very high solar transparency in the bleached form, new counter electrode materials must be found with complementary electrochromism to WO3.

  16. A FPGA-Based, Granularity-Variable Neuromorphic Processor and Its Application in a MIMO Real-Time Control System.

    PubMed

    Zhang, Zhen; Ma, Cheng; Zhu, Rong

    2017-08-23

    Artificial Neural Networks (ANNs), including Deep Neural Networks (DNNs), have become the state-of-the-art methods in machine learning and have achieved remarkable success in speech recognition, visual object recognition, and many other domains. There are several hardware platforms for developing accelerated implementations of ANN models. Since Field Programmable Gate Array (FPGA) architectures are flexible and can provide high performance per watt of power consumption, they have attracted a number of applications from scientists. In this paper, we propose an FPGA-based, granularity-variable neuromorphic processor (FBGVNP). The traits of FBGVNP can be summarized as granularity variability, scalability, integrated computing, and addressing ability: first, the number of neurons is variable rather than constant in one core; second, the multi-core network scale can be extended in various forms; third, the neuron addressing and computing processes are executed simultaneously. These make the processor more flexible and better suited for different applications. Moreover, a neural network-based controller is mapped to FBGVNP and applied in a multi-input, multi-output (MIMO) real-time temperature-sensing and control system. Experiments validate the effectiveness of the neuromorphic processor. The FBGVNP provides a new scheme for building ANNs that is flexible, highly energy-efficient, and applicable in many areas.

  17. A FPGA-Based, Granularity-Variable Neuromorphic Processor and Its Application in a MIMO Real-Time Control System

    PubMed Central

    Zhang, Zhen; Zhu, Rong

    2017-01-01

    Artificial Neural Networks (ANNs), including Deep Neural Networks (DNNs), have become the state-of-the-art methods in machine learning and have achieved remarkable success in speech recognition, visual object recognition, and many other domains. There are several hardware platforms for developing accelerated implementations of ANN models. Since Field Programmable Gate Array (FPGA) architectures are flexible and can provide high performance per watt of power consumption, they have attracted many applications from scientists. In this paper, we propose an FPGA-based, granularity-variable neuromorphic processor (FBGVNP). The traits of FBGVNP can be summarized as granularity variability, scalability, integrated computing, and addressing ability: first, the number of neurons is variable rather than constant in one core; second, the multi-core network scale can be extended in various forms; third, the neuron addressing and computing processes are executed simultaneously. These traits make the processor more flexible and better suited for different applications. Moreover, a neural network-based controller is mapped to FBGVNP and applied in a multi-input, multi-output (MIMO) real-time temperature-sensing and control system. Experiments validate the effectiveness of the neuromorphic processor. The FBGVNP provides a new scheme for building ANNs, which is flexible, highly energy-efficient, and can be applied in many areas. PMID:28832522

  18. Variable intertidal temperature explains why disease endangers black abalone

    USGS Publications Warehouse

    Ben-Horin, Tal; Lenihan, Hunter S.; Lafferty, Kevin D.

    2013-01-01

    Epidemiological theory suggests that pathogens will not cause host extinctions because agents of disease should fade out when the host population is driven below a threshold density. Nevertheless, infectious diseases have threatened species with extinction on local scales by maintaining high incidence and the ability to spread efficiently even as host populations decline. Intertidal black abalone (Haliotis cracherodii), but not other abalone species, went extinct locally throughout much of southern California following the emergence of a Rickettsiales-like pathogen in the mid-1980s. The rickettsial disease, a condition known as withering syndrome (WS), and associated mortality occur at elevated water temperatures. We measured abalone body temperatures in the field and experimentally manipulated intertidal environmental conditions in the laboratory, testing the influence of mean temperature and daily temperature variability on key epizootiological processes of WS. Daily temperature variability increased the susceptibility of black abalone to infection, but disease expression occurred only at warm water temperatures and was independent of temperature variability. These results imply that high thermal variation of the marine intertidal zone allows the pathogen to readily infect black abalone, but infected individuals remain asymptomatic until water temperatures periodically exceed thresholds modulating WS. Mass mortalities can therefore occur before pathogen transmission is limited by density-dependent factors.

  19. Hybrid stochastic simulations of intracellular reaction-diffusion systems.

    PubMed

    Kalantzis, Georgios

    2009-06-01

    With the observation that stochasticity is important in biological systems, stochastic chemical kinetics has begun to receive wider interest. While Monte Carlo discrete-event simulations most accurately capture the variability of molecular species, they become computationally costly for complex reaction-diffusion systems with large populations of molecules. On the other hand, continuous-time models are computationally efficient but fail to capture any variability in the molecular species. In this study a hybrid stochastic approach is introduced for simulating reaction-diffusion systems. We developed an adaptive partitioning strategy in which processes with high frequency are simulated with deterministic rate-based equations, and those with low frequency using the exact stochastic algorithm of Gillespie. The stochastic behavior of cellular pathways is therefore preserved while the method remains applicable to large populations of molecules. We describe our method and demonstrate its accuracy and efficiency compared with the Gillespie algorithm for two different systems: first, a model of intracellular viral kinetics with two steady states, and second, a compartmental model of the postsynaptic spine head for studying the dynamics of Ca2+ and NMDA receptors.
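
    The exact stochastic algorithm of Gillespie referenced above can be illustrated with a minimal sketch. The birth-death system and rate constants below are hypothetical, chosen only to show the direct method's two random draws per step (an exponential waiting time and a reaction channel picked in proportion to its propensity):

```python
import math
import random

def gillespie_ssa(x0, rates, stoich, t_end, seed=1):
    """Gillespie's direct method for a well-mixed reaction system.
    `rates(x)` returns the propensity of each reaction channel;
    `stoich[j]` lists (species_index, change) pairs for channel j."""
    random.seed(seed)
    t, x = 0.0, list(x0)
    history = [(t, tuple(x))]
    while t < t_end:
        a = rates(x)
        a0 = sum(a)
        if a0 == 0.0:          # no reaction can fire; system is frozen
            break
        t += -math.log(random.random()) / a0   # exponential waiting time
        # choose a channel with probability proportional to its propensity
        r, cum = random.random() * a0, 0.0
        for j, aj in enumerate(a):
            cum += aj
            if r < cum:
                for i, dx in stoich[j]:
                    x[i] += dx
                break
        history.append((t, tuple(x)))
    return history

# Hypothetical birth-death process: 0 -> A at rate k1, A -> 0 at rate k2*A
k1, k2 = 10.0, 0.1
traj = gillespie_ssa(
    x0=[0],
    rates=lambda x: [k1, k2 * x[0]],
    stoich=[[(0, +1)], [(0, -1)]],
    t_end=50.0,
)
print(len(traj), traj[-1])
```

In a hybrid scheme of the kind the abstract describes, a step like this would be applied only to the low-frequency channels, with the fast channels advanced by deterministic rate equations between stochastic events.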

  20. A Neuron-Based Screening Platform for Optimizing Genetically-Encoded Calcium Indicators

    PubMed Central

    Schreiter, Eric R.; Hasseman, Jeremy P.; Tsegaye, Getahun; Fosque, Benjamin F.; Behnam, Reza; Shields, Brenda C.; Ramirez, Melissa; Kimmel, Bruce E.; Kerr, Rex A.; Jayaraman, Vivek; Looger, Loren L.; Svoboda, Karel; Kim, Douglas S.

    2013-01-01

    Fluorescent protein-based sensors for detecting neuronal activity have been developed largely based on non-neuronal screening systems. However, the dynamics of neuronal state variables (e.g., voltage, calcium, etc.) are typically very rapid compared to those of non-excitable cells. We developed an electrical stimulation and fluorescence imaging platform based on dissociated rat primary neuronal cultures. We describe its use in testing genetically-encoded calcium indicators (GECIs). Efficient neuronal GECI expression was achieved using lentiviruses containing a neuronal-selective gene promoter. Action potentials (APs) and thus neuronal calcium levels were quantitatively controlled by electrical field stimulation, and fluorescence images were recorded. Images were segmented to extract fluorescence signals corresponding to individual GECI-expressing neurons, which improved sensitivity over full-field measurements. We demonstrate the superiority of screening GECIs in neurons compared with solution measurements. Neuronal screening was useful for efficient identification of variants with both improved response kinetics and high signal amplitudes. This platform can be used to screen many types of sensors with cellular resolution under realistic conditions where neuronal state variables are in relevant ranges with respect to timing and amplitude. PMID:24155972

  1. Root carboxylate exudation capacity under phosphorus stress does not improve grain yield in green gram.

    PubMed

    Pandey, Renu; Meena, Surendra Kumar; Krishnapriya, Vengavasi; Ahmad, Altaf; Kishora, Naval

    2014-06-01

    Genetic variability in carboxylate exudation capacity along with improved root traits was a key mechanism enabling a P-efficient green gram genotype to cope with P stress, but it did not increase grain yield. This study evaluates genotypic variability in green gram for total root carbon exudation under low phosphorus (P) using (14)C and its relationship with root-exuded carboxylates, growth, and yield potential in contrasting genotypes. Forty-four genotypes grown hydroponically with low (2 μM) and sufficient (100 μM) P concentrations were exposed to (14)CO2 to screen for total root carbon exudation. Contrasting genotypes were employed to study carboxylate exudation and their performance in soil at two P levels. Based on relative (14)C exudation and biomass, genotypes were categorized. Carboxylic acids were measured in exudates and root apices of contrasting genotypes belonging to efficient and inefficient categories. Oxalic and citric acids were released into the medium under low P. PDM-139 (efficient) was highly efficient in carboxylate exudation as compared to ML-818 (inefficient). In low soil P, the reduction in biomass was higher in ML-818 than in PDM-139. Total leaf area and photosynthetic rate, averaged across genotypes, increased by 71 and 41 %, respectively, with P fertilization. Significantly higher root surface area and volume were observed in PDM-139 under low soil P. Though the grain yield was higher in ML-818, the total plant biomass was significantly higher in PDM-139, indicating improved P uptake and its efficient conversion into biomass. The higher carboxylate exudation capacity and improved root traits of the latter genotype may be adaptive mechanisms for coping with P stress. However, higher root exudation does not necessarily result in higher grain yield.

  2. Solar Disinfection of Pseudomonas aeruginosa in Harvested Rainwater: A Step towards Potability of Rainwater

    PubMed Central

    Amin, Muhammad T.; Nawaz, Mohsin; Amin, Muhammad N.; Han, Mooyoung

    2014-01-01

    The efficiency of solar-based disinfection of Pseudomonas aeruginosa (P. aeruginosa) in rooftop-harvested rainwater was evaluated with the aim of making rainwater potable. The rainwater samples were exposed to direct sunlight for about 8–9 hours, and the effects of water temperature (°C), sunlight irradiance (W/m2), different rear surfaces of polyethylene terephthalate bottles, variable microbial concentrations, pH, and turbidity on P. aeruginosa inactivation were observed under different weather conditions. In simple solar disinfection (SODIS), complete inactivation of P. aeruginosa was obtained only under sunny weather conditions (>50°C and >700 W/m2) with an absorptive rear surface. The solar collector disinfection (SOCODIS) system, used to improve the efficiency of simple SODIS under mild and weak weather, completely inactivated P. aeruginosa under mild weather by enhancing the disinfection efficiency by about 20%. Both SODIS and SOCODIS systems, however, were found inefficient in weak weather. Different initial concentrations of P. aeruginosa and/or Escherichia coli had little effect on the disinfection efficiency, except for SODIS with the highest initial concentrations. The inactivation of P. aeruginosa increased by about 10–15% on lowering the initial pH from 10 to 3. A high initial turbidity, adjusted by adding kaolin, adversely affected the efficiency of both systems, and a decrease of about 15–25% in the inactivation of P. aeruginosa was observed. The kinetics of this study were investigated with the Geeraerd model to identify the best disinfection system based on the reaction rate constant. This detailed investigation of P. aeruginosa disinfection with sunlight-based disinfection systems under different weather conditions and variable parameters will help researchers understand and further improve the newly developed SOCODIS system. PMID:24595188

  3. Restriction digest screening facilitates efficient detection of site-directed mutations introduced by CRISPR in C. albicans UME6.

    PubMed

    Evans, Ben A; Smith, Olivia L; Pickerill, Ethan S; York, Mary K; Buenconsejo, Kristen J P; Chambers, Antonio E; Bernstein, Douglas A

    2018-01-01

    Introduction of point mutations to a gene of interest is a powerful tool for determining protein function. CRISPR-mediated genome editing allows for more efficient transfer of a desired mutation into a wide range of model organisms. Traditionally, PCR amplification and DNA sequencing are used to determine whether isolates contain the intended mutation. However, mutation efficiency is highly variable, potentially making sequencing costly and time consuming. To more efficiently screen for correct transformants, we have identified restriction enzyme sites that encode two identical amino acids or one or two stop codons. We used CRISPR to introduce these restriction sites directly upstream of the Candida albicans UME6 Zn2+-binding domain, a known regulator of C. albicans filamentation. While repair templates coding for different restriction sites were not equally successful at introducing mutations, restriction digest screening enabled us to rapidly identify isolates with the intended mutation in a cost-efficient manner. In addition, mutated isolates have clear defects in filamentation and virulence compared to wild-type C. albicans. Our data suggest restriction digest screening efficiently identifies point mutations introduced by CRISPR and streamlines the process of identifying residues important for a phenotype of interest.

  4. Evaluating Kuala Lumpur stock exchange oriented bank performance with stochastic frontiers

    NASA Astrophysics Data System (ADS)

    Baten, M. A.; Maznah, M. K.; Razamin, R.; Jastini, M. J.

    2014-12-01

    Banks play an essential role in economic development and need to be efficient; otherwise, they may create blockages in the process of development in any country. The efficiency of banks in Malaysia is important and should receive greater attention. This study formulated an appropriate stochastic frontier model to investigate the efficiency of banks traded on the Kuala Lumpur Stock Exchange (KLSE) during the period 2005-2009. The maximum likelihood method was used to estimate the parameters of the stochastic production frontier. Unlike earlier studies, which use balance sheet and income statement data, this study used market data as the input and output variables. It was observed that banks listed on the KLSE exhibited a commendable overall efficiency level of 96.2% during 2005-2009, suggesting minimal input waste of 3.8%. Among the banks, COMS (Cimb Group Holdings) is found to be highly efficient, with a score of 0.9715, and BIMB (Bimb Holdings) is noted to have the lowest efficiency, with a score of 0.9582. The results also show that the Cobb-Douglas stochastic frontier model with a truncated normal distributional assumption is preferable to the Translog stochastic frontier model.

  5. Technical efficiency of women's health prevention programs in Bucaramanga, Colombia: a four-stage analysis.

    PubMed

    Ruiz-Rodriguez, Myriam; Rodriguez-Villamizar, Laura A; Heredia-Pi, Ileana

    2016-10-13

    Primary Health Care (PHC) is an efficient strategy for improving health outcomes in populations. Nevertheless, studies of technical efficiency in health care have focused on hospitals, with very little on primary health care centers. The objective of the present study was to use Data Envelopment Analysis to estimate the technical efficiency of three women's health promotion and disease prevention programs offered by primary care centers in Bucaramanga, Colombia. Efficiency was measured using a four-stage data envelopment analysis with a series of Tobit regressions to account for the effect of quality outcomes and context variables. Input/output information was collected from the institutions' records, chart reviews, and personal interviews. Information about contextual variables was obtained from databases of the primary health program in the municipality. A jackknife analysis was used to assess the robustness of the results. The analysis was based on data from 21 public primary health care centers. The average efficiency scores, after adjusting for quality and context, were 92.4 %, 97.5 % and 86.2 % for the antenatal care (ANC), early detection of cervical cancer (EDCC) and family planning (FP) programs, respectively. In each program, 12 of the 21 (57.1 %) health centers were found to be technically efficient, lying on the best-practice frontier. Adjusting for context variables changed the scores and reference rankings of the three programs offered by the health centers. The performance of the women's health prevention programs offered by the centers was found to be heterogeneous. Adjusting for context and health care quality variables had a significant effect on the technical efficiency scores and rankings. The results can serve as a guide to strengthen management, organizational, and planning processes related to local primary care services operating within a market-based model such as the one in Colombia.

  6. Analysis of copy number variations in Holstein cows identify potential mechanisms contributing to differences in residual feed intake.

    PubMed

    Hou, Yali; Bickhart, Derek M; Chung, Hoyoung; Hutchison, Jana L; Norman, H Duane; Connor, Erin E; Liu, George E

    2012-11-01

    Genomic structural variation is an important and abundant source of genetic and phenotypic variation. In this study, we performed an initial analysis of copy number variations (CNVs) using BovineHD SNP genotyping data from 147 Holstein cows identified as having high or low feed efficiency as estimated by residual feed intake (RFI). We detected 443 candidate CNV regions (CNVRs) that represent 18.4 Mb (0.6 %) of the genome. To investigate the functional impacts of CNVs, we created two groups of 30 individual animals with extremely low or high estimated breeding values (EBVs) for RFI, and referred to these groups as low intake (LI; more efficient) or high intake (HI; less efficient), respectively. We identified 240 (~9.0 Mb) and 274 (~10.2 Mb) CNVRs from LI and HI groups, respectively. Approximately 30-40 % of the CNVRs were specific to the LI group or HI group of animals. The 240 LI CNVRs overlapped with 137 Ensembl genes. Network analyses indicated that the LI-specific genes were predominantly enriched for those functioning in the inflammatory response and immunity. By contrast, the 274 HI CNVRs contained 177 Ensembl genes. Network analyses indicated that the HI-specific genes were particularly involved in the cell cycle, and organ and bone development. These results relate CNVs to two key variables, namely immune response and organ and bone development. The data indicate that greater feed efficiency relates more closely to immune response, whereas cattle with reduced feed efficiency may have a greater capacity for organ and bone development.

  7. Digital relief generation from 3D models

    NASA Astrophysics Data System (ADS)

    Wang, Meili; Sun, Yu; Zhang, Hongming; Qian, Kun; Chang, Jian; He, Dongjian

    2016-09-01

    It is difficult to extend image-based relief generation to high-relief generation, as the images contain insufficient height information. To generate reliefs from three-dimensional (3D) models, it is necessary to extract the height fields from the model, but this can only generate bas-reliefs. To overcome this problem, an efficient method is proposed to generate bas-reliefs and high-reliefs directly from 3D meshes. To produce relief features that are visually appropriate, the 3D meshes are first scaled. 3D unsharp masking is used to enhance the visual features in the 3D mesh, and average smoothing and Laplacian smoothing are implemented to achieve better smoothing results. A nonlinear variable scaling scheme is then employed to generate the final bas-reliefs and high-reliefs. Using the proposed method, relief models can be generated from arbitrary viewing positions with different gestures and combinations of multiple 3D models. The generated relief models can be printed by 3D printers. The proposed method provides a means of generating both high-reliefs and bas-reliefs in an efficient and effective way under the appropriate scaling factors.

  8. Crystallization using reverse micelles and water-in-oil microemulsion systems: the highly selective tool for the purification of organic compounds from complex mixtures.

    PubMed

    Kljajic, Alen; Bester-Rogac, Marija; Klobcar, Andrej; Zupet, Rok; Pejovnik, Stane

    2013-02-01

    The active pharmaceutical ingredient orlistat is usually manufactured using a semi-synthetic procedure, producing crude product and complex mixtures of highly related impurities with minimal side-chain structure variability. It is therefore crucial for the overall success of industrial/pharmaceutical application to develop an effective purification process. In this communication, we present a newly developed crystallization process based on water-in-oil reversed micelles and microemulsion systems. Physicochemical properties of the crystallization media were varied through surfactant and water composition, and the impact of these two parameters on efficiency was measured. By precisely defining the properties of the dispersed water phase in the crystallization media, a highly efficient separation process in terms of selectivity and yield was developed. Small-angle X-ray scattering, high-performance liquid chromatography, mass spectrometry, and scanning electron microscopy were used to monitor and analyze the separation processes and the orlistat products obtained. Typical process characteristics, especially selectivity and yield with respect to reference examples, were compared and discussed. Copyright © 2012 Wiley Periodicals, Inc.

  9. Finding stability regions for preserving efficiency classification of variable returns to scale technology in data envelopment analysis

    NASA Astrophysics Data System (ADS)

    Zamani, P.; Borzouei, M.

    2016-12-01

    This paper addresses the sensitivity of the efficiency classification of variable returns to scale (VRS) technology, with the aim of enhancing the credibility of data envelopment analysis (DEA) results in practical applications when an additional decision making unit (DMU) needs to be added to the set being considered. It also develops a structured approach to assist practitioners in selecting an appropriate variation range for the inputs and outputs of the additional DMU so that this DMU is efficient and the efficiency classification of the VRS technology remains unchanged. This stability region is specified simply through the defining hyperplanes of the production possibility set of the VRS technology and the corresponding halfspaces. Furthermore, this study determines a stability region for the additional DMU within which, in addition to the efficiency classification, the efficiency score of a specific inefficient DMU is preserved; using a simulation method, a region in which some specific efficient DMUs become inefficient is also provided.
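
    The input-oriented VRS (BCC) efficiency score underlying this efficiency classification can be computed as a small linear program. The sketch below is a generic envelopment-form formulation, not the paper's own code, and uses scipy.optimize.linprog with a hypothetical three-DMU data set:

```python
import numpy as np
from scipy.optimize import linprog

def vrs_efficiency(X, Y, k):
    """Input-oriented BCC (VRS) DEA efficiency of unit k.
    X: (n_units, n_inputs), Y: (n_units, n_outputs).
    Solves: min theta  s.t.  X'lam <= theta*x_k,  Y'lam >= y_k,
    sum(lam) = 1 (the VRS convexity constraint), lam >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                    # variables: [theta, lam]
    A_in = np.hstack([-X[k].reshape(-1, 1), X.T])  # X'lam - theta*x_k <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])    # -Y'lam <= -y_k
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[k]]
    A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)   # sum(lam) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1))
    return res.fun

# Hypothetical units with one input and one output
X = np.array([[2.0], [4.0], [3.0]])
Y = np.array([[1.0], [2.0], [1.0]])
for k in range(3):
    print(f"unit {k}: efficiency = {vrs_efficiency(X, Y, k):.3f}")
```

Here units 0 and 1 lie on the VRS frontier (score 1.0), while unit 2 can radially contract its input to 2/3 of its observed level; dropping the convexity constraint would recover the CRS (CCR) score instead.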

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dempsey, Adam B.; Curran, Scott; Wagner, Robert M.

    Gasoline compression ignition concepts with the majority of the fuel being introduced early in the cycle are known as partially premixed combustion (PPC). Previous research on single- and multi-cylinder engines has shown that PPC has the potential for high thermal efficiency with low NOx and soot emissions. A variety of fuel injection strategies has been proposed in the literature. These injection strategies aim to create a partially stratified charge to simultaneously reduce NOx and soot emissions while maintaining some level of control over the combustion process through the fuel delivery system. The impact of the direct injection strategy to create a premixed charge of fuel and air has not previously been explored, and its impact on engine efficiency and emissions is not well understood. This paper explores the effect of sweeping the direct injected pilot timing from -91° to -324° ATDC, which is just after the exhaust valve closes for the engine used in this study. During the sweep, the pilot injection consistently contained 65% of the total fuel (based on command duration ratio), and the main injection timing was adjusted slightly to maintain combustion phasing near top dead center. A modern four-cylinder, 1.9 L diesel engine with a variable geometry turbocharger, high-pressure common rail injection system, wide included angle injectors, and variable swirl actuation was used in this study. The pistons were modified to an open bowl configuration suitable for highly premixed combustion modes. The stock diesel injection system was unmodified, and the gasoline fuel was doped with a lubricity additive to protect the high-pressure fuel pump and the injectors. The study was conducted at a fixed speed/load condition of 2000 rpm and 4.0 bar brake mean effective pressure (BMEP). The pilot injection timing sweep was conducted at different intake manifold pressures, swirl levels, and fuel injection pressures. The gasoline used in this study has relatively high fuel reactivity, with a research octane number of 68. The results of this experimental campaign indicate that the highest brake thermal efficiency and lowest emissions are achieved simultaneously with the earliest pilot injection timings (i.e., during the intake stroke).

  11. Exploring efficacy of residential energy efficiency programs in Florida

    NASA Astrophysics Data System (ADS)

    Taylor, Nicholas Wade

    Electric utilities, government agencies, and private interests in the U.S. have committed and continue to invest substantial resources in the pursuit of energy efficiency and conservation through demand-side management (DSM) programs. Program investments, and the demand for impact evaluations that accompany them, are projected to grow in coming years due to increased pressure from state-level energy regulation, costs and challenges of building additional production capacity, fuel costs and potential carbon or renewable energy regulation. This dissertation provides detailed analyses of ex-post energy savings from energy efficiency programs in three key sectors of residential buildings: new, single-family, detached homes; retrofits to existing single-family, detached homes; and retrofits to existing multifamily housing units. Each of the energy efficiency programs analyzed resulted in statistically significant energy savings at the full program group level, yet savings for individual participants and participant subgroups were highly variable. Even though savings estimates were statistically greater than zero, those energy savings did not always meet expectations. Results also show that high variability in energy savings among participant groups or subgroups can negatively impact overall program performance and can undermine marketing efforts for future participation. Design, implementation, and continued support of conservation programs based solely on deemed or projected savings is inherently counter to the pursuit of meaningful energy conservation and reductions in greenhouse gas emissions. To fully understand and optimize program impacts, consistent and robust measurement and verification protocols must be instituted in the design phase and maintained over time. Furthermore, marketing for program participation must target those who have the greatest opportunity for savings. 
In most utility territories it is not possible to gain access to the type of large scale datasets that would facilitate robust program analysis. Along with measuring and optimizing energy conservation programs, utilities should provide public access to historical consumption data. Open access to data, program optimization, consistent measurement and verification and transparency in reported savings are essential to reducing energy use and its associated environmental impacts.

  12. Thermoelectric power generator for variable thermal power source

    DOEpatents

    Bell, Lon E; Crane, Douglas Todd

    2015-04-14

    Traditional power generation systems using thermoelectric power generators are designed to operate most efficiently for a single operating condition. The present invention provides a power generation system in which the characteristics of the thermoelectrics, the flow of the thermal power, and the operational characteristics of the power generator are monitored and controlled such that higher operating efficiencies and/or higher output powers can be maintained with variable thermal power input. Such a system is particularly beneficial for variable thermal power sources, such as recovering power from the waste heat in the exhaust of combustion engines.

  13. Chapter 22: Compressed Air Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W; Benton, Nathanael; Burns, Patrick

    Compressed-air systems are used widely throughout industry for many operations, including pneumatic tools, packaging and automation equipment, conveyors, and other industrial process operations. Compressed-air systems are defined as a group of subsystems composed of air compressors, air treatment equipment, controls, piping, pneumatic tools, pneumatically powered machinery, and process applications using compressed air. A compressed-air system has three primary functional subsystems: supply, distribution, and demand. Air compressors are the primary energy consumers in a compressed-air system and are the primary focus of this protocol. The two compressed-air energy efficiency measures specifically addressed in this protocol are: a high-efficiency/variable speed drive (VSD) compressor replacing a modulating, load/unload, or constant-speed compressor; and a compressed-air leak survey and repairs. This protocol provides direction on how to reliably verify savings from these two measures using a consistent approach for each.

  14. Crack Damage Detection Method via Multiple Visual Features and Efficient Multi-Task Learning Model.

    PubMed

    Wang, Baoxian; Zhao, Weigang; Gao, Po; Zhang, Yufeng; Wang, Zhe

    2018-06-02

    This paper proposes an effective and efficient model for concrete crack detection. The presented work consists of two modules: multi-view image feature extraction and multi-task crack region detection. Specifically, multiple visual features (such as texture, edge, etc.) of image regions are calculated, which can suppress various background noises (such as illumination, pockmark, stripe, blurring, etc.). With the computed multiple visual features, a novel crack region detector is advocated using a multi-task learning framework, which involves restraining the variability for different crack region features and emphasizing the separability between crack region features and complex background ones. Furthermore, the extreme learning machine is utilized to construct this multi-task learning model, thereby leading to high computing efficiency and good generalization. Experimental results of the practical concrete images demonstrate that the developed algorithm can achieve favorable crack detection performance compared with traditional crack detectors.
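
    The extreme learning machine mentioned in the abstract owes its computing efficiency to training only the output layer: hidden weights stay random and fixed, and the output weights are solved in closed form. A minimal single-task sketch (with synthetic two-feature data standing in for the crack/background region features, which are assumptions of this example) is:

```python
import numpy as np

def elm_train(X, Y, n_hidden=50, seed=0):
    """Extreme learning machine: random, fixed hidden layer; only the
    output weights are fitted, via a least-squares solve."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # random feature map
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)  # closed-form output layer
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy binary "crack vs. background" labels on two synthetic features
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)
W, b, beta = elm_train(X, y, n_hidden=40)
acc = np.mean((elm_predict(X, W, b, beta) > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```

Because no iterative backpropagation is needed, training reduces to one matrix factorization, which is the source of the speed advantage the paper exploits; the multi-task extension adds coupled regularization terms to the same least-squares problem.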

  15. The Cluster AgeS Experiment (CASE). Detecting Aperiodic Photometric Variability with the Friends of Friends Algorithm

    NASA Astrophysics Data System (ADS)

    Rozyczka, M.; Narloch, W.; Pietrukowicz, P.; Thompson, I. B.; Pych, W.; Poleski, R.

    2018-03-01

    We adapt the friends of friends algorithm to the analysis of light curves and show that it can be successfully applied to searches for transient phenomena in large photometric databases. As a test case we search OGLE-III light curves for known dwarf novae. A single combination of control parameters allows us to narrow the search to 1% of the data while reaching a ≈90% detection efficiency. A search involving ≈2% of the data and three combinations of control parameters can be significantly more effective; in our case a 100% efficiency is reached. The method can also quite efficiently detect semi-regular variability. In particular, 28 new semi-regular variables have been found in the field of the globular cluster M22, which was examined earlier with the help of periodicity-searching algorithms.
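
    In one dimension (for instance, the epochs of measurements flagged as bright outliers in a light curve), the friends of friends rule reduces to linking neighbors closer than the linking length, with group membership propagating through chains of friends. A minimal sketch with hypothetical epochs and linking length:

```python
def friends_of_friends(points, link):
    """Group 1-D points so that any two points within `link` of each
    other, directly or through a chain of friends, share a group."""
    groups = []
    for p in sorted(points):
        if groups and p - groups[-1][-1] <= link:
            groups[-1].append(p)   # friend of the previous point: same group
        else:
            groups.append([p])     # gap exceeds the linking length: new group
    return groups

# Hypothetical epochs (days) of outlying bright measurements; epochs
# closer than 5 days are linked into one candidate outburst.
epochs = [90, 1, 2, 3, 41, 40]
print(friends_of_friends(epochs, link=5))   # [[1, 2, 3], [40, 41], [90]]
```

A transient search of the kind described would then keep light curves whose groups satisfy the chosen control parameters (e.g., a minimum group size, so isolated noise spikes are rejected).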

  16. A Method to Determine Supply Voltage of Permanent Magnet Motor at Optimal Design Stage

    NASA Astrophysics Data System (ADS)

    Matustomo, Shinya; Noguchi, So; Yamashita, Hideo; Tanimoto, Shigeya

    Permanent magnet motors (PM motors) are widely used in electrical machinery such as air conditioners and refrigerators. In recent years, from the point of view of energy saving, it has become necessary to improve the efficiency of PM motors by optimization. However, the efficiency optimization of a PM motor involves many design variables and many constraints. In this paper, the efficiency optimization of a PM motor with many design variables was performed using voltage-driven finite element analysis with a rotating simulation of the motor and a genetic algorithm.

  17. The Influence of Fuel Properties on Combustion Efficiency and the Partitioning of Pyrogenic Carbon

    NASA Astrophysics Data System (ADS)

    Urbanski, S. P.; Baker, S. P.; Lincoln, E.; Richardson, M.

    2016-12-01

    The partitioning of volatilized pyrogenic carbon into CO2, CO, CH4, non-methane organic carbon, particulate organic carbon (POC), and elemental carbon (PEC) depends on the combustion characteristics of biomass fires, which are influenced by the moisture content, structure, and arrangement of the fuels. Flaming combustion is characterized by efficient conversion of volatilized carbon into CO2. In contrast, smoldering is less efficient and produces incomplete combustion products such as CH4 and carbonaceous particles. This paper presents a laboratory study that examined the relationship between the partitioning of volatilized pyrogenic carbon and specific fuel properties. The study focused on fuel beds composed of a simple fuel particle, ponderosa pine needles. Ponderosa pine was selected because its needles represent a common wildland fuel component, conifer needles, and can be easily arranged into fuel beds of variable structure (bulk density and depth) and moisture content that are both representative of natural conditions and easily replicated. Modified combustion efficiency (MCE, ΔCO2/[ΔCO2 + ΔCO]) and emission factors (EF) for CO2, CO, CH4, POC, and PEC were measured over a range of needle moisture contents and fuel bed bulk densities and depths representative of naturally occurring fuel beds. We found that, as expected, MCE decreased as fuel bed bulk density increased, while emissions of CO, CH4, PM2.5, and POC increased. However, fuel bed depth did not appear to have an effect on MCE or emission factors. Surprisingly, no consistent relationship between needle moisture content and emissions was identified. At high bulk densities, moisture content had a strong influence on MCE, which explained the variability in EFCH4. However, moisture content appeared to have an influence on EFPOC and EFPEC that was independent of MCE.
    These findings may have significant implications, since many models of biomass burning assume that litter fuels, such as ponderosa pine needles, burn almost exclusively via flaming combustion with high efficiency. Our results indicate that for fuel bed properties typical of many conifer forests, pollutant emissions from fires will be higher than those predicted using standard biomass burning models.
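    The MCE definition above (ΔCO2/[ΔCO2 + ΔCO]) can be evaluated directly from excess mixing ratios; a minimal sketch with illustrative values, not measurements from the study:

    ```python
    # Modified combustion efficiency from excess mixing ratios
    # (concentration above background); values are illustrative.
    d_co2 = 950.0   # excess CO2
    d_co = 50.0     # excess CO
    mce = d_co2 / (d_co2 + d_co)
    print(f"MCE = {mce:.2f}")   # 0.95
    ```

    Values near 0.99 indicate nearly pure flaming combustion, while values around 0.8 indicate substantial smoldering, so MCE serves as a single-number summary of the flaming/smoldering balance.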

  18. Concept and implementation of the Globalstar mobile satellite system

    NASA Technical Reports Server (NTRS)

    Schindall, Joel

    1995-01-01

    Globalstar is a satellite-based mobile communications system which provides quality wireless communications (voice and/or data) anywhere in the world except the polar regions. The Globalstar system concept is based upon technological advancements in Low Earth Orbit (LEO) satellite technology and in cellular telephone technology, including the commercial application of Code Division Multiple Access (CDMA) technologies. The Globalstar system uses elements of CDMA and Frequency Division Multiple Access (FDMA), combined with satellite Multiple Beam Antenna (MBA) technology and advanced variable-rate vocoder technology to arrive at one of the most efficient modulation and multiple access systems ever proposed for a satellite communications system. The technology used in Globalstar includes the following techniques in obtaining high spectral efficiency and affordable cost per channel: (1) CDMA modulation with efficient power control; (2) high efficiency vocoder with voice activity factor; (3) spot beam antenna for increased gain and frequency reuse; (4) weighted satellite antenna gain for broad geographic coverage; (5) multisatellite user links (diversity) to enhance communications reliability; and (6) soft hand-off between beams and satellites. Initial launch is scheduled in 1997 and the system is scheduled to be operational in 1998. The Globalstar system utilizes frequencies in L-, S- and C-bands which have the potential to offer worldwide availability with authorization by the appropriate regulatory agencies.

  19. An efficient variable projection formulation for separable nonlinear least squares problems.

    PubMed

    Gan, Min; Li, Han-Xiong

    2014-05-01

    We consider in this paper a class of nonlinear least squares problems in which the model can be represented as a linear combination of nonlinear functions. The variable projection algorithm projects the linear parameters out of the problem, leaving a nonlinear least squares problem involving only the nonlinear parameters. To implement the variable projection algorithm more efficiently, we propose a new variable projection functional based on matrix decomposition. The advantage of the proposed formulation is that the size of the decomposed matrix may be much smaller than in previous formulations. The Levenberg-Marquardt algorithm with a finite difference method is then applied to minimize the new criterion. Numerical results show that the proposed approach achieves a significant reduction in computing time.
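    The variable projection idea can be illustrated on a small separable model: for each trial value of the nonlinear parameter, the linear coefficients are eliminated by solving a linear least squares subproblem, leaving a reduced functional of the nonlinear parameter alone. A sketch using a coarse parameter scan in place of the paper's Levenberg-Marquardt iteration:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Separable model: y = c1 * exp(-alpha * t) + c2, with linear parameters
    # (c1, c2) and a single nonlinear parameter alpha.
    t = np.linspace(0, 4, 60)
    y = 2.0 * np.exp(-1.3 * t) + 0.5 + rng.normal(0, 0.01, t.size)

    def projected_residual(alpha):
        # Variable projection: eliminate the linear parameters by solving
        # the linear least squares subproblem for this alpha.
        Phi = np.column_stack([np.exp(-alpha * t), np.ones_like(t)])
        c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        r = y - Phi @ c
        return r @ r, c

    # Minimize the reduced functional over alpha alone (a coarse scan here;
    # the paper applies Levenberg-Marquardt to this functional instead).
    alphas = np.linspace(0.1, 3.0, 300)
    costs = [projected_residual(a)[0] for a in alphas]
    alpha_hat = alphas[int(np.argmin(costs))]
    _, c_hat = projected_residual(alpha_hat)
    print(f"alpha ~ {alpha_hat:.2f}, c ~ {c_hat.round(2)}")
    ```

    The payoff is that the search space shrinks to the nonlinear parameters only, and the linear subproblem is solved exactly at each step.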

  20. Modeling and optimization by particle swarm embedded neural network for adsorption of zinc (II) by palm kernel shell based activated carbon from aqueous environment.

    PubMed

    Karri, Rama Rao; Sahu, J N

    2018-01-15

    Zn(II) is one of the common heavy metal pollutants found in industrial effluents. Removal of pollutants from industrial effluents can be accomplished by various techniques, among which adsorption has been found to be efficient. The application of adsorption is limited, however, by the high cost of adsorbents. In this regard, a low-cost adsorbent produced from palm oil kernel shell, an agricultural waste, is examined for its efficiency in removing Zn(II) from wastewater and aqueous solution. The influence of independent process variables, such as initial concentration, pH, residence time, activated carbon (AC) dosage, and process temperature, on the removal of Zn(II) by palm kernel shell based AC in a batch adsorption process is studied systematically. Based on the design-of-experiments matrix, 50 experimental runs are performed with each process variable within its experimental range. The optimal values of the process variables for maximum removal efficiency are determined using response surface methodology (RSM) and artificial neural network (ANN) approaches. A quadratic model consisting of first-order and second-order regression terms is developed using analysis of variance within the RSM central composite design (CCD) framework. Particle swarm optimization (PSO), a meta-heuristic method, is embedded in the ANN architecture to optimize the network's search space. The optimized trained neural network fits the testing and validation data well, with R² equal to 0.9106 and 0.9279, respectively. The outcomes indicate the superiority of the ANN-PSO model predictions over the quadratic model predictions provided by RSM. Copyright © 2017 Elsevier Ltd. All rights reserved.
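    The PSO search over process conditions can be sketched with the trained ANN replaced by a toy analytic response surface with a known maximum; the variable names, ranges, and optimum below are hypothetical, not the paper's:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Stand-in objective: in the paper a trained ANN predicts removal
    # efficiency from the process variables; here a toy analytic surface
    # with a known maximum illustrates the PSO search itself.
    def removal_efficiency(x):
        opt = np.array([5.0, 0.8, 60.0])     # hypothetical optimal pH, dose, time
        scale = np.array([2.0, 0.3, 20.0])
        return 95.0 * np.exp(-np.sum(((x - opt) / scale) ** 2, axis=-1))

    lo = np.array([2.0, 0.1, 10.0])          # hypothetical variable bounds
    hi = np.array([9.0, 2.0, 120.0])
    n_particles, n_iter = 30, 100

    pos = rng.uniform(lo, hi, (n_particles, 3))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), removal_efficiency(pos)
    gbest = pbest[np.argmax(pbest_val)].copy()

    w, c1, c2 = 0.7, 1.5, 1.5        # inertia and acceleration coefficients
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 3))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        val = removal_efficiency(pos)
        better = val > pbest_val
        pbest[better], pbest_val[better] = pos[better], val[better]
        gbest = pbest[np.argmax(pbest_val)].copy()

    print("best conditions:", gbest.round(2),
          "predicted removal:", removal_efficiency(gbest).round(1), "%")
    ```

    In the actual workflow each call to the objective is a forward pass of the trained ANN, so the swarm explores the model's response surface rather than running new experiments.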
