Remote sensing using MIMO systems
Bikhazi, Nicolas; Young, William F; Nguyen, Hung D
2015-04-28
A technique for sensing a moving object within a physical environment using a MIMO communication link includes generating a channel matrix based upon channel state information of the MIMO communication link. The physical environment operates as a communication medium through which communication signals of the MIMO communication link propagate between a transmitter and a receiver. A spatial information variable is generated for the MIMO communication link based on the channel matrix. The spatial information variable includes spatial information about the moving object within the physical environment. A signature for the moving object is generated based on values of the spatial information variable accumulated over time. The moving object is identified based upon the signature.
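The patent abstract above leaves the functional form of the spatial information variable open; one simple choice is a singular-value summary of the channel matrix, tracked over time. The sketch below is a minimal illustration under that assumption (the function names and the simulated channel model are hypothetical, not taken from the patent):

```python
import numpy as np

rng = np.random.default_rng(0)

def spatial_information_variable(H):
    """Collapse a channel matrix into a scalar spatial descriptor.

    Here the largest singular value of H is used as the (hypothetical)
    spatial information variable; the patent does not prescribe a
    specific functional form.
    """
    return np.linalg.svd(H, compute_uv=False)[0]

def signature(channel_matrices):
    """Accumulate the spatial variable over time into a signature vector."""
    return np.array([spatial_information_variable(H) for H in channel_matrices])

# Simulated 4x4 MIMO link observed over 100 snapshots: a static channel
# plus a time-varying perturbation standing in for a moving object.
static = rng.standard_normal((4, 4))
snapshots = [static + 0.1 * np.sin(0.2 * t) * rng.standard_normal((4, 4))
             for t in range(100)]
sig = signature(snapshots)
print(sig.shape)      # one value per snapshot
print(sig.std() > 0)  # the moving object perturbs the signature over time
```

A real implementation would derive the channel matrices from measured channel state information rather than simulation, and would classify signatures against a library of known movement patterns.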
Effect of age on variability in the production of text-based global inferences.
Williams, Lynne J; Dunlop, Joseph P; Abdi, Hervé
2012-01-01
As we age, our differences in cognitive skills become more visible, an effect especially true for memory and problem-solving skills (i.e., fluid intelligence). However, in contrast with fluid intelligence, few studies have examined variability in measures that rely on one's world knowledge (i.e., crystallized intelligence). The current study investigated whether age increases the variability of text-based global inference generation, a measure of crystallized intelligence. Global inference generation requires the integration of textual information and world knowledge and can be expressed as a gist or lesson. Variability in generating two global inferences for a single text was examined in young-old (62 to 69 years), middle-old (70 to 76 years) and old-old (77 to 94 years) adults. The two older groups showed greater variability, with the middle-old group being most variable. These findings suggest that variability may be a characteristic of both fluid and crystallized intelligence in aging.
FUZZY LOGIC BASED INTELLIGENT CONTROL OF A VARIABLE SPEED CAGE MACHINE WIND GENERATION SYSTEM
The paper describes a variable-speed wind generation system where fuzzy logic principles are used to optimize efficiency and enhance performance control. A squirrel cage induction generator feeds the power to a double-sided pulse width modulated converter system which either pump...
FUZZY LOGIC BASED INTELLIGENT CONTROL OF A VARIABLE SPEED CAGE MACHINE WIND GENERATION SYSTEM
The report gives results of a demonstration of the successful application of fuzzy logic to enhance the performance and control of a variable-speed wind generation system. A squirrel cage induction generator feeds the power to either a double-sided pulse-width modulation converte...
NASA Astrophysics Data System (ADS)
WANG, Qingrong; ZHU, Changfeng
2017-06-01
Integration of distributed heterogeneous data sources is a key issue in big data applications. In this paper, the strategy of variable precision is introduced into the concept lattice, and a one-to-one mapping between the variable precision concept lattice and the ontology concept lattice is constructed to produce a local ontology by building a variable precision concept lattice for each subsystem. A distributed generation algorithm for variable precision concept lattices based on an ontology heterogeneous database is then proposed, drawing on the special relationship between concept lattices and ontology construction. Finally, taking the main concept lattice generated from the existing heterogeneous database as the standard, a case study was carried out to test the feasibility and validity of the algorithm, and the differences between the main concept lattice and the standard concept lattice were compared. The results show that the algorithm can automate the construction of distributed concept lattices over heterogeneous data sources.
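For readers unfamiliar with concept lattices, the core operation is the extent/intent closure over a formal context. The toy sketch below enumerates the formal concepts of a tiny hand-made context; it omits the paper's variable-precision and ontology-mapping machinery entirely and is only meant to show the basic closure step:

```python
from itertools import combinations

# Toy formal context mapping objects to attributes (an assumed example;
# the paper's distributed, variable-precision construction is more involved).
context = {
    "o1": {"a", "b"},
    "o2": {"a", "c"},
    "o3": {"a", "b", "c"},
}
attributes = {"a", "b", "c"}

def extent(attrs):
    """Objects possessing every attribute in attrs."""
    return {o for o, a in context.items() if attrs <= a}

def intent(objs):
    """Attributes shared by all objects in objs."""
    if not objs:
        return set(attributes)
    return set.intersection(*(context[o] for o in objs))

def concepts():
    """Enumerate formal concepts as (extent, intent) pairs via closure."""
    found = set()
    for r in range(len(attributes) + 1):
        for attrs in combinations(sorted(attributes), r):
            e = extent(set(attrs))
            found.add((frozenset(e), frozenset(intent(e))))
    return found

for e, i in sorted(concepts(), key=lambda c: (len(c[0]), sorted(c[1]))):
    print(sorted(e), sorted(i))
```

Each printed pair is closed: applying extent and intent again returns the same pair, which is what makes these pairs the nodes of the concept lattice.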
Optimal Solar PV Arrays Integration for Distributed Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Omitaomu, Olufemi A; Li, Xueping
2012-01-01
Solar photovoltaic (PV) systems hold great potential for distributed energy generation by installing PV panels on rooftops of residential and commercial buildings. Yet challenges arise from the variability and non-dispatchability of PV systems, which affect the stability of the grid and the economics of the PV system. This paper investigates the integration of PV arrays for distributed generation applications by identifying a combination of buildings that will maximize solar energy output and minimize system variability. In particular, we propose mean-variance optimization models to choose suitable rooftops for PV integration, based on the Markowitz mean-variance portfolio selection model. We further introduce quantity and cardinality constraints, resulting in a mixed integer quadratic programming problem. Case studies based on real data are presented. An efficient frontier is obtained for sample data that allows decision makers to choose a desired solar energy generation level with a comfortable variability tolerance level. Sensitivity analysis is conducted to show the tradeoffs between solar PV energy generation potential and variability.
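The Markowitz step described above can be sketched for a toy rooftop portfolio. The minimal version below solves the equality-constrained mean-variance problem in closed form (allowing fractional and negative weights, unlike the paper's mixed-integer formulation; the yield data are invented for illustration):

```python
import numpy as np

# Hypothetical annual energy yields (MWh) for three candidate rooftops,
# observed over five years; the actual study uses real building data.
yields = np.array([
    [120, 130, 125, 118, 127],
    [200, 180, 210, 190, 205],
    [ 90,  95,  92,  88,  94],
], dtype=float)

mu = yields.mean(axis=1)   # expected output per rooftop
cov = np.cov(yields)       # variability / co-variability across rooftops

def min_variance_weights(target):
    """Markowitz: minimise w' cov w  s.t.  w' mu = target and sum(w) = 1.

    Solved via the KKT (bordered) linear system; shorting is allowed in
    this simplified sketch, whereas the paper adds quantity and
    cardinality (integer) constraints.
    """
    n = len(mu)
    A = np.zeros((n + 2, n + 2))
    A[:n, :n] = 2 * cov
    A[:n, n] = mu
    A[:n, n + 1] = 1
    A[n, :n] = mu
    A[n + 1, :n] = 1
    b = np.zeros(n + 2)
    b[n] = target
    b[n + 1] = 1
    return np.linalg.solve(A, b)[:n]

w = min_variance_weights(target=mu.mean())
print(np.isclose(w.sum(), 1.0))       # weights form a portfolio
print(np.isclose(w @ mu, mu.mean()))  # portfolio hits the target output
```

Sweeping `target` over a range of values and recording `w @ cov @ w` traces out the efficient frontier mentioned in the abstract.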
Parametric vs. non-parametric daily weather generator: validation and comparison
NASA Astrophysics Data System (ADS)
Dubrovsky, Martin
2016-04-01
As climate models (GCMs and RCMs) fail to satisfactorily reproduce the real-world surface weather regime, various statistical methods are applied to downscale GCM/RCM outputs into site-specific weather series. Stochastic weather generators are among the most popular downscaling methods, capable of producing realistic (observation-like) meteorological inputs for agrological, hydrological and other impact models used in assessing the sensitivity of various ecosystems to climate change/variability. Among their advantages, the generators may (i) produce arbitrarily long multi-variate synthetic weather series representing both present and changed climates (in the latter case, the generators are commonly modified by GCM/RCM-based climate change scenarios); (ii) be run at various time steps and for multiple weather variables (the generators reproduce the correlations among variables); (iii) be interpolated (and run also for sites where no weather data are available to calibrate the generator). This contribution will compare two stochastic daily weather generators in terms of their ability to reproduce various features of daily weather series. M&Rfi is a parametric generator: a Markov chain model is used for precipitation occurrence, precipitation amount is modelled by the Gamma distribution, and a first-order autoregressive model is used to generate non-precipitation surface weather variables. The non-parametric GoMeZ generator is based on a nearest-neighbours resampling technique, making no assumptions about the distribution of the variables being generated. Various settings of both weather generators will be assumed in the present validation tests. The generators will be validated in terms of (a) extreme temperature and precipitation characteristics (annual and 30-year extremes, and maxima of the duration of hot/cold/dry/wet spells); and (b) selected validation statistics developed within the frame of the VALUE project.
The tests will be based on observational weather series from several European stations available from the ECA&D database.
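The parametric M&Rfi precipitation component described above (Markov-chain occurrence plus Gamma-distributed amounts) can be sketched as follows; the transition probabilities and Gamma parameters are illustrative placeholders, not calibrated values:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters (not M&Rfi's calibrated values): a first-order
# Markov chain governs wet/dry occurrence; wet-day amounts are Gamma.
p_wet_given_dry = 0.25
p_wet_given_wet = 0.60
gamma_shape, gamma_scale = 0.8, 6.0   # amounts in mm

def generate_precip(n_days):
    """Generate a synthetic daily precipitation series."""
    wet = False
    amounts = np.zeros(n_days)
    for t in range(n_days):
        p = p_wet_given_wet if wet else p_wet_given_dry
        wet = rng.random() < p
        if wet:
            amounts[t] = rng.gamma(gamma_shape, gamma_scale)
    return amounts

series = generate_precip(365 * 30)    # 30 synthetic years
wet_frac = (series > 0).mean()
print(0.3 < wet_frac < 0.5)           # near the chain's stationary wet fraction
```

With these transition probabilities the stationary wet-day fraction is 0.25 / (1 - 0.60 + 0.25) ≈ 0.38, which the long simulated series approaches; a full generator would add seasonality and the autoregressive non-precipitation variables.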
NASA Astrophysics Data System (ADS)
Jinxia, Feng; Zhenju, Wan; Yuanji, Li; Kuanshou, Zhang
2018-01-01
Continuous-variable quantum entanglement at the telecommunication wavelength of 1550 nm is experimentally generated using a single nondegenerate optical parametric amplifier based on a type-II periodically poled KTiOPO4 crystal. Triple resonance of the nondegenerate optical parametric amplifier is achieved by tuning the crystal temperature and tilting the orientation of the crystal in the optical cavity. Einstein-Podolsky-Rosen entangled beams with quantum correlations of 8.3 dB for both the amplitude and phase quadratures are experimentally generated. This system can be used for continuous-variable fibre-based quantum communication.
Accuracy of latent-variable estimation in Bayesian semi-supervised learning.
Yamazaki, Keisuke
2015-09-01
Hierarchical probabilistic models, such as Gaussian mixture models, are widely used for unsupervised learning tasks. These models consist of observable and latent variables, which represent the observable data and the underlying data-generation process, respectively. Unsupervised learning tasks, such as cluster analysis, are regarded as estimations of latent variables based on the observable ones. The estimation of latent variables in semi-supervised learning, where some labels are observed, will be more precise than in unsupervised learning, and one concern is to clarify the effect of the labeled data. However, there has not been sufficient theoretical analysis of the accuracy of latent-variable estimation. In a previous study, a distribution-based error function was formulated, and its asymptotic form was calculated for unsupervised learning with generative models. It has been shown that, for the estimation of latent variables, the Bayes method is more accurate than the maximum-likelihood method. The present paper reveals the asymptotic forms of the error function in Bayesian semi-supervised learning for both discriminative and generative models. The results show that the generative model, which uses all of the given data, performs better when the model is well specified. Copyright © 2015 Elsevier Ltd. All rights reserved.
Validation of two (parametric vs non-parametric) daily weather generators
NASA Astrophysics Data System (ADS)
Dubrovsky, M.; Skalak, P.
2015-12-01
As climate models (GCMs and RCMs) fail to satisfactorily reproduce the real-world surface weather regime, various statistical methods are applied to downscale GCM/RCM outputs into site-specific weather series. Stochastic weather generators are among the most popular downscaling methods, capable of producing realistic (observation-like) meteorological inputs for agrological, hydrological and other impact models used in assessing the sensitivity of various ecosystems to climate change/variability. Among their advantages, the generators may (i) produce arbitrarily long multi-variate synthetic weather series representing both present and changed climates (in the latter case, the generators are commonly modified by GCM/RCM-based climate change scenarios); (ii) be run at various time steps and for multiple weather variables (the generators reproduce the correlations among variables); (iii) be interpolated (and run also for sites where no weather data are available to calibrate the generator). This contribution will compare two stochastic daily weather generators in terms of their ability to reproduce various features of daily weather series. M&Rfi is a parametric generator: a Markov chain model is used for precipitation occurrence, precipitation amount is modelled by the Gamma distribution, and a first-order autoregressive model is used to generate non-precipitation surface weather variables. The non-parametric GoMeZ generator is based on a nearest-neighbours resampling technique, making no assumptions about the distribution of the variables being generated. Various settings of both weather generators will be assumed in the present validation tests. The generators will be validated in terms of (a) extreme temperature and precipitation characteristics (annual and 30-year extremes, and maxima of the duration of hot/cold/dry/wet spells); and (b) selected validation statistics developed within the frame of the VALUE project.
The tests will be based on observational weather series from several European stations available from the ECA&D database. Acknowledgements: The weather generator is developed and validated within the frame of projects WG4VALUE (sponsored by the Ministry of Education, Youth and Sports of CR), and VALUE (COST ES 1102 action).
A Method for Evaluation of Model-Generated Vertical Profiles of Meteorological Variables
2016-03-01
Contents: 2.1 RAOB Soundings and WRF Output for Profile Generation; 2.2 Height-Based Profiles; 2.3 Pressure-Based Profiles; 3. Comparisons. Figure caption fragments: ...indicated by a downward arrow; the blue lines represent sublayers, with sublayer means indicated by red triangles; circles indicate the observations or WRF output. Table 3: Sample of differences in listed variables derived from WRF and RAOB data.
Capacity expansion model of wind power generation based on ELCC
NASA Astrophysics Data System (ADS)
Yuan, Bo; Zong, Jin; Wu, Shengyu
2018-02-01
Capacity expansion is an indispensable prerequisite for power system planning and construction. A reasonable, efficient and accurate capacity expansion model (CEM) is crucial to power system planning. In most current CEMs, the capacity of wind power generation is treated as a boundary condition instead of a decision variable, which may lead to curtailment or over-construction of flexible resources, especially in high renewable-energy-penetration scenarios. This paper proposes a wind power generation capacity value (CV) calculation method based on effective load-carrying capability, and a CEM that co-optimizes wind power generation and conventional power sources. Wind power generation is a decision variable in this model, and the model accurately reflects the uncertain nature of wind power.
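An effective load-carrying capability (ELCC) calculation of the kind referred to above can be sketched with a simple bisection on a loss-of-load count; the hourly load and wind series below are synthetic, and a production-grade ELCC study would use probabilistic outage modeling rather than this deterministic count:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stylised hourly data for one year: load, firm capacity, and wind output.
hours = 8760
load = (800 + 150 * np.sin(2 * np.pi * np.arange(hours) / 24)
            + 50 * rng.standard_normal(hours))
firm_capacity = 1000.0
wind = 200 * rng.random(hours)        # variable wind output (MW)

def lole(load, capacity):
    """Loss-of-load hours: hours in which load exceeds available capacity."""
    return int(np.sum(load > capacity))

base = lole(load, firm_capacity)
with_wind = lole(load, firm_capacity + wind)

def elcc(load, firm, wind, tol=0.1):
    """Effective load-carrying capability: the constant load increase that
    restores the original loss-of-load count once wind is added."""
    target = lole(load, firm)
    lo, hi = 0.0, wind.max()
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if lole(load + mid, firm + wind) <= target:
            lo = mid          # can carry at least this much extra load
        else:
            hi = mid
    return lo

cv = elcc(load, firm_capacity, wind)
print(with_wind <= base)       # adding wind never increases loss-of-load hours
print(0 <= cv <= wind.max())   # capacity value is a fraction of nameplate
```

The ratio of `cv` to the wind fleet's nameplate capacity is the capacity value fraction that a CEM can then use when co-optimizing wind against conventional resources.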
NASA Astrophysics Data System (ADS)
Hayat, T.; Ullah, Siraj; Khan, M. Ijaz; Alsaedi, A.; Zaigham Zia, Q. M.
2018-03-01
Here, modeling and computations are presented to introduce the novel concept of Darcy-Forchheimer three-dimensional flow of water-based carbon nanotubes with nonlinear thermal radiation and heat generation/absorption. The flow is induced by a bidirectional stretching surface. Darcy's law is replaced by the Forchheimer relation. The Xue model is implemented for the nanoliquid transport mechanism. A nonlinear formulation based upon the conservation laws of mass, momentum and energy is first modeled and then solved by the optimal homotopy analysis technique. Optimal estimates of the auxiliary variables are obtained. The influence of the governing variables on the velocity and thermal fields is interpreted graphically. Moreover, velocity and temperature gradients are discussed and analyzed, and the physical interpretation of the influential variables is examined.
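For reference, the Forchheimer extension of Darcy's law mentioned above adds a quadratic drag term to the linear Darcy resistance; in commonly used notation (a sketch of the standard relation, not necessarily the exact form used in the paper):

```latex
-\nabla p \;=\; \frac{\mu}{K}\,\mathbf{v} \;+\; \frac{\rho\,C_b}{\sqrt{K}}\,\lvert\mathbf{v}\rvert\,\mathbf{v}
```

where μ is the dynamic viscosity, K the permeability, ρ the fluid density, and C_b the Forchheimer (inertial drag) coefficient; the quadratic term dominates at higher flow velocities, which is why Darcy's law alone is insufficient there.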
Maintaining Balance: The Increasing Role of Energy Storage for Renewable Integration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stenclik, Derek; Denholm, Paul; Chalamala, Babu
For nearly a century, global power systems have focused on three key functions: generating, transmitting, and distributing electricity as a real-time commodity. Physics requires that electricity generation always be in real-time balance with load, despite variability in load on time scales ranging from subsecond disturbances to multiyear trends. With the increasing role of variable generation from wind and solar, the retirement of fossil-fuel-based generation, and a changing consumer demand profile, grid operators are using new methods to maintain this balance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Yuping; Zheng, Qipeng P.; Wang, Jianhui
2014-11-01
This paper presents a two-stage stochastic unit commitment (UC) model, which integrates non-generation resources such as demand response (DR) and energy storage (ES) while including risk constraints to balance cost and system reliability under the fluctuation of variable generation such as wind and solar power. This paper uses conditional value-at-risk (CVaR) measures to model risks associated with the decisions in a stochastic environment. In contrast to chance-constrained models requiring extra binary variables, risk constraints based on CVaR only involve linear constraints and continuous variables, making the model more computationally attractive. The proposed models with risk constraints are able to avoid over-conservative solutions but still ensure system reliability, represented by loss of load. Numerical experiments are conducted to study the effects of non-generation resources on generator schedules and the difference in total expected generation costs with risk consideration. Sensitivity analysis based on reliability parameters is also performed to test the decision preferences of confidence levels and load-shedding loss allowances on generation cost reduction.
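The claim that CVaR needs only linear terms and continuous variables follows from the Rockafellar-Uryasev representation, in which the sample CVaR is the minimum over a free scalar of a sum of hinge terms. A small numerical check of that representation (with invented cost scenarios):

```python
import numpy as np

rng = np.random.default_rng(7)
losses = rng.gamma(2.0, 10.0, size=10_000)   # hypothetical cost scenarios
alpha = 0.95

def cvar(losses, alpha):
    """Rockafellar-Uryasev sample CVaR.

    CVaR_alpha = min over eta of  eta + E[max(loss - eta, 0)] / (1 - alpha).
    The minimiser is the alpha-quantile (VaR), so in an optimisation model
    the hinge terms become linear constraints with continuous variables --
    no binaries needed, as the abstract notes.
    """
    eta = np.quantile(losses, alpha)          # VaR at level alpha
    return eta + np.maximum(losses - eta, 0).mean() / (1 - alpha)

c = cvar(losses, alpha)
var = np.quantile(losses, alpha)
tail_mean = losses[losses >= var].mean()
print(c >= var)                  # CVaR is at least VaR
print(abs(c - tail_mean) < 1.0)  # and matches the empirical tail expectation
```

In a UC model, each `max(loss - eta, 0)` term is replaced by an auxiliary continuous variable with two linear inequalities, which is what keeps the risk-constrained formulation a (mixed-integer) linear program.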
Payn, R.A.; Gooseff, M.N.; McGlynn, B.L.; Bencala, K.E.; Wondzell, S.M.
2012-01-01
Relating watershed structure to streamflow generation is a primary focus of hydrology. However, comparisons of longitudinal variability in stream discharge with adjacent valley structure have been rare, resulting in poor understanding of the distribution of the hydrologic mechanisms that cause variability in streamflow generation along valleys. This study explores detailed surveys of stream base flow across a gauged, 23 km2 mountain watershed. Research objectives were (1) to relate spatial variability in base flow to fundamental elements of watershed structure, primarily topographic contributing area, and (2) to assess temporal changes in the spatial patterns of those relationships during a seasonal base flow recession. We analyzed spatiotemporal variability in base flow using (1) summer hydrographs at the study watershed outlet and 5 subwatershed outlets and (2) longitudinal series of discharge measurements every ~100 m along the streams of the 3 largest subwatersheds (1200 to 2600 m in valley length), repeated 2 to 3 times during base flow recession. Reaches within valley segments of 300 to 1200 m in length tended to demonstrate similar streamflow generation characteristics. Locations of transitions between these segments were consistent throughout the recession, and tended to be collocated with abrupt longitudinal transitions in valley slope or hillslope-riparian characteristics. Both within and among subwatersheds, correlation between the spatial distributions of streamflow and topographic contributing area decreased during the recession, suggesting a general decrease in the influence of topography on stream base flow contributions. As topographic controls on base flow evidently decreased, multiple aspects of subsurface structure were likely to have gained influence.
An evolution based biosensor receptor DNA sequence generation algorithm.
Kim, Eungyeong; Lee, Malrey; Gatton, Thomas M; Lee, Jaewan; Zang, Yupeng
2010-01-01
A biosensor is composed of a bioreceptor, an associated recognition molecule, and a signal transducer that can selectively detect target substances for analysis. DNA based biosensors utilize receptor molecules that allow hybridization with the target analyte. However, most DNA biosensor research uses oligonucleotides as the target analytes and does not address the potential problems of real samples. The identification of recognition molecules suitable for real target analyte samples is an important step towards further development of DNA biosensors. This study examines the characteristics of DNA used as bioreceptors and proposes a hybrid evolution-based DNA sequence generating algorithm, based on DNA computing, to identify suitable DNA bioreceptor recognition molecules for stable hybridization with real target substances. The Traveling Salesman Problem (TSP) approach is applied in the proposed algorithm to evaluate the safety and fitness of the generated DNA sequences. This approach improves efficiency and stability for enhanced and variable-length DNA sequence generation and allows extension to generation of variable-length DNA sequences with diverse receptor recognition requirements.
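A generic evolutionary sequence-generation loop of the kind described can be sketched as below; the fitness function here (GC content near 50%, short homopolymer runs) is a simplified stand-in for the paper's TSP-based safety and fitness evaluation:

```python
import random

random.seed(3)
BASES = "ACGT"

def fitness(seq):
    """Toy fitness standing in for the paper's TSP-based evaluation:
    reward ~50% GC content, penalise homopolymer runs longer than 3."""
    gc = sum(b in "GC" for b in seq) / len(seq)
    score = 1.0 - abs(gc - 0.5)
    run, worst = 1, 1
    for a, b in zip(seq, seq[1:]):
        run = run + 1 if a == b else 1
        worst = max(worst, run)
    return score - 0.2 * max(0, worst - 3)

def evolve(length=20, pop_size=50, generations=100):
    """Evolve candidate receptor sequences by selection, crossover, mutation."""
    pop = ["".join(random.choice(BASES) for _ in range(length))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)   # one-point crossover
            child = list(a[:cut] + b[cut:])
            i = random.randrange(length)        # point mutation
            child[i] = random.choice(BASES)
            children.append("".join(child))
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(len(best))
print(fitness(best) > 0.8)
```

Extending this to variable-length sequences, as the paper proposes, amounts to letting crossover cut points differ between parents and adding insertion/deletion mutations.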
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gallo, Giulia
Integrating increasingly high levels of variable generation in U.S. electricity markets requires addressing not only power system and grid modeling challenges but also an understanding of how market participants react and adapt to them. Key elements of current and future wholesale power markets can be modeled using an agent-based approach, which may prove to be a useful paradigm for researchers studying and planning for power systems of the future.
Computing Shapes Of Cascade Diffuser Blades
NASA Technical Reports Server (NTRS)
Tran, Ken; Prueger, George H.
1993-01-01
Computer program generates sizes and shapes of cascade-type blades for use in axial or radial turbomachine diffusers. Generates shapes of blades rapidly, incorporating extensive cascade data to determine optimum incidence and deviation angle for blade design based on the 65-series data base of the National Advisory Committee for Aeronautics (NACA). Allows great variability in blade profile through input variables. Also provides for design of three-dimensional blades by allowing variable blade stacking. Enables designer to obtain computed blade-geometry data in various forms: as input for blade-loading analysis; as input for quasi-three-dimensional analysis of flow; or as points for transfer to computer-aided design.
Adaptive EAGLE dynamic solution adaptation and grid quality enhancement
NASA Technical Reports Server (NTRS)
Luong, Phu Vinh; Thompson, J. F.; Gatlin, B.; Mastin, C. W.; Kim, H. J.
1992-01-01
In the effort described here, the elliptic grid generation procedure in the EAGLE grid code was separated from the main code into a subroutine, and a new subroutine which evaluates several grid quality measures at each grid point was added. The elliptic grid routine can now be called, either by a computational fluid dynamics (CFD) code to generate a new adaptive grid based on flow variables and quality measures through multiple adaptation, or by the EAGLE main code to generate a grid based on quality measure variables through static adaptation. Arrays of flow variables can be read into the EAGLE grid code for use in static adaptation as well. These major changes in the EAGLE adaptive grid system make it easier to convert any CFD code that operates on a block-structured grid (or single-block grid) into a multiple adaptive code.
Maintaining Balance: The Increasing Role of Energy Storage for Renewable Integration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stenclik, Derek; Denholm, Paul; Chalamala, Babu
2017-10-17
For nearly a century, global power systems have focused on three key functions: to generate, transmit, and distribute electricity as a real-time commodity. Physics requires that electricity generation always be in real-time balance with load, despite variability in load on timescales ranging from sub-second disturbances to multi-year trends. With the increasing role of variable generation from wind and solar, retirements of fossil fuel-based generation, and a changing consumer demand profile, grid operators are using new methods to maintain this balance.
Managing Financial Risk to Hydropower in Snow Dominated Systems: A Hetch Hetchy Case Study
NASA Astrophysics Data System (ADS)
Hamilton, A. L.; Characklis, G. W.; Reed, P. M.
2017-12-01
Hydropower generation in snow dominated systems is vulnerable to severe shortfalls in years with low snowpack. Meanwhile, generators are also vulnerable to variability in electricity demand and wholesale electricity prices, both of which can be impacted by factors such as temperature and natural gas price. Year-to-year variability in these underlying stochastic variables leads to financial volatility and the threat of low-revenue periods, which can be highly disruptive for generators with large fixed operating costs and debt service. In this research, the Hetch Hetchy Power system is used to characterize financial risk in a snow dominated hydropower system. Owned and operated by the San Francisco Public Utilities Commission, Hetch Hetchy generates power for its own municipal operations and sells excess power to irrigation districts, as well as on the wholesale market. This investigation considers the effects of variability in snowpack, temperature, and natural gas price on Hetch Hetchy Power's yearly revenues. This information is then used to evaluate the effectiveness of various financial risk management tools for hedging against revenue variability. These tools are designed to mitigate all three potential forms of financial risk (i.e., low hydropower generation, low electricity demand, and low/high electricity prices) and include temperature-based derivative contracts, natural gas price-based derivative contracts, and a novel form of snowpack-based index insurance contract. These are incorporated into a comprehensive risk management portfolio, along with self-insurance, in which the utility buffers yearly revenue volatility using a contingency fund. By adaptively managing the portfolio strategy, a utility can efficiently spread yearly risks over a multi-year time horizon.
The Borg Multiobjective Evolutionary Algorithm is used to generate a set of Pareto optimal portfolio strategies, which are used to compare the tradeoffs in objectives such as expected revenues, low revenues, revenue volatility, and portfolio complexity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopez, Anthony; Maclaurin, Galen; Roberts, Billy
Long-term variability of the solar resource is an important factor in planning a utility-scale photovoltaic (PV) generation plant, and annual generation for a given location can vary significantly from year to year. Based on multiple years of solar irradiance data, an exceedance probability is the amount of energy that could potentially be produced by a power plant in any given year. An exceedance probability accounts for long-term variability and climate cycles (e.g., monsoons or changes in aerosols), which ultimately impact PV energy generation. Study results indicate that a significant bias could be associated with relying solely on typical meteorological year (TMY) resource data to capture long-term variability. While the TMY tends to under-predict annual generation overall compared to the P50, there appear to be pockets of over-prediction as well.
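The exceedance-probability convention used above (e.g., P50, P90) can be computed directly from a multi-year sample of annual generation; the figures below are invented for illustration:

```python
import numpy as np

# Hypothetical annual PV generation (MWh) for one plant over 20 years,
# reflecting interannual resource variability.
annual = np.array([1520, 1480, 1610, 1395, 1550, 1470, 1505, 1630, 1440,
                   1585, 1498, 1525, 1460, 1570, 1415, 1540, 1490, 1600,
                   1450, 1515], dtype=float)

def exceedance(values, p):
    """P-level exceedance: generation exceeded in p% of years,
    i.e. the (100 - p)th percentile of the annual sample."""
    return np.percentile(values, 100 - p)

p50 = exceedance(annual, 50)
p90 = exceedance(annual, 90)
print(p90 < p50)   # P90 is the conservative (lower) estimate
```

Comparing a TMY-derived annual estimate against the P50 computed this way is one simple way to quantify the bias the study describes.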
Schumann, Anja; John, Ulrich; Ulbricht, Sabina; Rüge, Jeannette; Bischof, Gallus; Meyer, Christian
2008-11-01
This study examines tailored feedback letters of a smoking cessation intervention that is conceptually based on the transtheoretical model, from a content-based perspective. Data of 2 population-based intervention studies, both randomized controlled trials, with total N=1044 were used. The procedure of the intervention, the tailoring principle for the feedback letters, and the content of the intervention materials are described in detail. Theoretical and empirical frequencies of unique feedback letters are presented. The intervention system was able to generate a total of 1040 unique letters with normative feedback only, and almost half a million unique letters with normative and ipsative feedback. Almost every single smoker in contemplation, preparation, action, and maintenance had an empirically unique combination of tailoring variables and received a unique letter. In contrast, many smokers in precontemplation shared a combination of tailoring variables and received identical letters. The transtheoretical model provides an enormous theoretical and empirical variability of tailoring. However, tailoring for a major subgroup of smokers, i.e. those who do not intend to quit, needs improvement. Conceptual ideas for additional tailoring variables are discussed.
Parisi Kern, Andrea; Ferreira Dias, Michele; Piva Kulakowski, Marlova; Paulo Gomes, Luciana
2015-05-01
Reducing construction waste is becoming a key environmental issue in the construction industry. The quantification of waste generation rates in the construction sector is an invaluable management tool in supporting mitigation actions. However, the quantification of waste can be a difficult process because of the specific characteristics and the wide range of materials used in different construction projects. Large variations are observed in the methods used to predict the amount of waste generated, because of the range of variables involved in construction processes and the different contexts in which these methods are employed. This paper proposes a statistical model to determine the amount of waste generated in the construction of high-rise buildings by assessing the influence of the design process and the production system, often mentioned as the major culprits behind the generation of waste in construction. Multiple regression was used to conduct a case study based on multiple sources of data from eighteen residential buildings. The resulting statistical model relates the dependent variable (the amount of waste generated) to independent variables associated with the design and the production system used. The best regression model obtained from the sample data resulted in an adjusted R(2) value of 0.694, which means that it explains approximately 69% of the variability in waste generation in similar constructions. Most independent variables showed a low determination coefficient when assessed in isolation, which emphasizes the importance of assessing their joint influence on the response (dependent) variable. Copyright © 2015 Elsevier Ltd. All rights reserved.
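The adjusted R(2) reported above can be reproduced mechanically from an ordinary least-squares fit; the sketch below uses synthetic data with invented predictors standing in for the paper's design and production-system variables:

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic stand-in for the 18-building dataset: two hypothetical
# predictors and a waste response with noise (illustrative only).
n = 18
X = np.column_stack([np.ones(n),                # intercept
                     rng.uniform(5, 30, n),     # e.g. number of floors
                     rng.uniform(0.5, 2.0, n)]) # e.g. design-complexity index
beta_true = np.array([10.0, 2.5, 15.0])
y = X @ beta_true + rng.normal(0, 8, n)

# Ordinary least squares fit and coefficient of determination.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
ss_res = resid @ resid
ss_tot = ((y - y.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
k = X.shape[1] - 1                              # predictors excl. intercept
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)   # penalise model size

print(0 <= r2 <= 1)
print(adj_r2 <= r2)   # the adjustment always shrinks R-squared
```

The adjustment matters precisely in small samples like this one (n = 18): with few observations per predictor, raw R(2) overstates how much of the waste variation the model would explain on new buildings.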
Mathematical modeling to predict residential solid waste generation.
Benítez, Sara Ojeda; Lozano-Olvera, Gabriela; Morelos, Raúl Adalberto; Vega, Carolina Armijo de
2008-01-01
One of the challenges faced by waste management authorities is determining the amount of waste generated by households in order to establish waste management systems, charge rates compatible with principles applied worldwide, and design a fair payment system for households according to the amount of residential solid waste (RSW) they generate. The goal of this research was to establish mathematical models that correlate the generation of RSW per capita to the following variables: education, income per household, and number of residents. This work was based on data from a three-stage study on the generation, quantification and composition of residential waste in a Mexican city. In order to define prediction models, five variables were identified and included in the model. For each waste sampling stage a different mathematical model was developed, in order to find the model that showed the best linear relation for predicting residential solid waste generation. Models exploring combinations of the included variables were then established, and those showing a higher R(2) were selected. The tests applied were for normality, multicollinearity and heteroskedasticity. Another model, formulated with four variables, was generated, and the Durbin-Watson test was applied to it. Finally, a general mathematical model is proposed to predict residential waste generation, which accounts for 51% of the total variability.
Revenue Sufficiency and Reliability in a Zero Marginal Cost Future
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frew, Bethany A.
Features of existing wholesale electricity markets, such as administrative pricing rules and policy-based reliability standards, can distort market incentives from allowing generators sufficient opportunities to recover both fixed and variable costs. Moreover, these challenges can be amplified by other factors, including (1) inelastic demand resulting from a lack of price signal clarity, (2) low- or near-zero marginal cost generation, particularly arising from low natural gas fuel prices and variable generation (VG), such as wind and solar, and (3) the variability and uncertainty of this VG. As power systems begin to incorporate higher shares of VG, many questions arise about the suitability of the existing marginal-cost-based price formation, primarily within an energy-only market structure, to ensure the economic viability of resources that might be needed to provide system reliability. This article discusses these questions and provides a summary of completed and ongoing modelling-based work at the National Renewable Energy Laboratory to better understand the impacts of evolving power systems on reliability and revenue sufficiency.
Revenue Sufficiency and Reliability in a Zero Marginal Cost Future: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frew, Bethany A.; Milligan, Michael; Brinkman, Greg
Features of existing wholesale electricity markets, such as administrative pricing rules and policy-based reliability standards, can distort market incentives from allowing generators sufficient opportunities to recover both fixed and variable costs. Moreover, these challenges can be amplified by other factors, including (1) inelastic demand resulting from a lack of price signal clarity, (2) low- or near-zero marginal cost generation, particularly arising from low natural gas fuel prices and variable generation (VG), such as wind and solar, and (3) the variability and uncertainty of this VG. As power systems begin to incorporate higher shares of VG, many questions arise about the suitability of the existing marginal-cost-based price formation, primarily within an energy-only market structure, to ensure the economic viability of resources that might be needed to provide system reliability. This article discusses these questions and provides a summary of completed and ongoing modelling-based work at the National Renewable Energy Laboratory to better understand the impacts of evolving power systems on reliability and revenue sufficiency.
Optimal Interpolation scheme to generate reference crop evapotranspiration
NASA Astrophysics Data System (ADS)
Tomas-Burguera, Miquel; Beguería, Santiago; Vicente-Serrano, Sergio; Maneta, Marco
2018-05-01
We used an Optimal Interpolation (OI) scheme to generate a reference crop evapotranspiration (ETo) grid, the forcing meteorological variables, and their respective error variances for the Iberian Peninsula for the period 1989-2011. To perform the OI we used observational data from the Spanish Meteorological Agency (AEMET) and outputs from a physically-based climate model. To compute ETo we used five OI schemes to generate grids for the five observed climate variables necessary to compute ETo using the FAO-recommended form of the Penman-Monteith equation (FAO-PM). The granularity of the resulting grids is less sensitive to variations in the density and distribution of the observational network than that of grids generated by other interpolation methods. This is because our implementation of the OI method uses a physically-based climate model as prior background information about the spatial distribution of the climatic variables, which is critical for under-observed regions. This provides temporal consistency in the spatial variability of the climatic fields. We also show that increases in the density and improvements in the distribution of the observational network substantially reduce the uncertainty of the climatic and ETo estimates. Finally, a sensitivity analysis of observational uncertainties and network densification suggests the existence of a trade-off between quantity and quality of observations.
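A minimal sketch of the FAO-PM (FAO-56 Penman-Monteith) daily reference evapotranspiration computed from gridded variables of this kind; the numeric inputs are illustrative, and the function assumes daily-aggregated net radiation, 2 m wind speed, mean relative humidity and mean air temperature:

```python
import math

def fao_pm_eto(t_mean, rn, u2, rh_mean, g=0.0, pressure=101.3):
    """Daily reference crop evapotranspiration (mm/day), FAO-56 Penman-Monteith.
    t_mean: mean air temperature (degC); rn, g: net radiation and soil heat
    flux (MJ m-2 day-1); u2: wind speed at 2 m (m/s); rh_mean: relative
    humidity (%); pressure: atmospheric pressure (kPa)."""
    es = 0.6108 * math.exp(17.27 * t_mean / (t_mean + 237.3))  # saturation vapour pressure (kPa)
    ea = es * rh_mean / 100.0                                   # actual vapour pressure (kPa)
    delta = 4098.0 * es / (t_mean + 237.3) ** 2                 # slope of vapour pressure curve
    gamma = 0.000665 * pressure                                 # psychrometric constant
    num = 0.408 * delta * (rn - g) + gamma * 900.0 / (t_mean + 273.0) * u2 * (es - ea)
    return num / (delta + gamma * (1.0 + 0.34 * u2))

# Illustrative mid-latitude summer day
print(round(fao_pm_eto(t_mean=20.0, rn=15.0, u2=2.0, rh_mean=60.0), 2))
```

Because ETo is a nonlinear combination of the five interpolated fields, errors in any one gridded variable propagate into the ETo grid, which is why the per-variable error variances from the OI scheme matter.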
Eyler, Lauren; Hubbard, Alan; Juillard, Catherine
2016-10-01
Low and middle-income countries (LMICs) and the world's poor bear a disproportionate share of the global burden of injury. Data regarding disparities in injury are vital to inform injury prevention and trauma systems strengthening interventions targeted towards vulnerable populations, but are limited in LMICs. We aim to facilitate injury disparities research by generating a standardized methodology for assessing economic status in resource-limited country trauma registries where complex metrics such as income, expenditures, and wealth index are infeasible to assess. To address this need, we developed a cluster analysis-based algorithm for generating simple population-specific metrics of economic status using nationally representative Demographic and Health Surveys (DHS) household assets data. For a limited number of variables, g, our algorithm performs weighted k-medoids clustering of the population using all combinations of g asset variables and selects the combination of variables and number of clusters that maximize average silhouette width (ASW). In simulated datasets containing both randomly distributed variables and "true" population clusters defined by correlated categorical variables, the algorithm selected the correct variable combination and appropriate cluster numbers unless variable correlation was very weak. When used with 2011 Cameroonian DHS data, our algorithm identified twenty economic clusters with ASW 0.80, indicating well-defined population clusters. This economic model for assessing health disparities will be used in the new Cameroonian six-hospital centralized trauma registry. By describing our standardized methodology and algorithm for generating economic clustering models, we aim to facilitate measurement of health disparities in other trauma registries in resource-limited countries. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
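The selection idea, clustering households on asset variables and scoring partitions by average silhouette width (ASW), can be sketched as follows. This is a simplified, unweighted k-medoids with deterministic seeding, not the authors' weighted algorithm, and the two synthetic "economic" clusters are hypothetical:

```python
import numpy as np

def farthest_point_init(d, k):
    """Deterministic seeding: start at point 0, repeatedly add the point
    farthest from the medoids chosen so far."""
    medoids = [0]
    for _ in range(k - 1):
        medoids.append(int(np.argmax(d[:, medoids].min(axis=1))))
    return np.array(medoids)

def k_medoids(d, k, n_iter=100):
    """Voronoi-iteration k-medoids on a precomputed distance matrix d."""
    medoids = farthest_point_init(d, k)
    for _ in range(n_iter):
        labels = np.argmin(d[:, medoids], axis=1)
        new = medoids.copy()
        for j in range(k):
            idx = np.flatnonzero(labels == j)
            if idx.size:  # new medoid minimises total distance within its cluster
                new[j] = idx[np.argmin(d[np.ix_(idx, idx)].sum(axis=1))]
        if np.array_equal(new, medoids):
            break
        medoids = new
    return medoids, np.argmin(d[:, medoids], axis=1)

def average_silhouette(d, labels):
    """Average silhouette width: near 1 means well-separated clusters."""
    n = len(labels)
    s = np.zeros(n)
    for i in range(n):
        own = labels[i]
        same = (labels == own) & (np.arange(n) != i)
        a = d[i, same].mean() if same.any() else 0.0
        b = min(d[i, labels == c].mean() for c in np.unique(labels) if c != own)
        s[i] = (b - a) / max(a, b) if max(a, b) > 0 else 0.0
    return float(s.mean())

# Two well-separated synthetic clusters standing in for asset profiles
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, (15, 2)), rng.normal(10.0, 0.5, (15, 2))])
d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)

# Choose the cluster count that maximises ASW, as in the selection step above
scores = {k: average_silhouette(d, k_medoids(d, k)[1]) for k in (2, 3, 4)}
best_k = max(scores, key=scores.get)
print(best_k, round(scores[best_k], 2))
```

On real DHS data the same loop would also run over combinations of asset variables, selecting the variable set and cluster count jointly by ASW.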
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krad, Ibrahim; Gao, David Wenzhong; Ibanez, Eduardo
2016-12-01
The electric power system has continuously evolved in order to accommodate new technologies and operating strategies. As the penetration of integrated variable generation in the system increases, it is beneficial to develop strategies that can help mitigate their effect on the grid. Historically, power system operators have held excess capacity during the commitment and dispatch process to allow the system to handle unforeseen load ramping events. As variable generation resources increase, sufficient flexibility scheduled in the system is required to ensure that system performance is not deteriorated in the presence of additional variability and uncertainty. This paper presents a systematic comparison of various flexibility reserve strategies. Several of them are implemented and applied in a common test system, in order to evaluate their effect on economic and reliable operations. Furthermore, a three-stage reserve modifier algorithm is proposed and evaluated for its ability to improve system performance.
Research on grid connection control technology of double fed wind generator
NASA Astrophysics Data System (ADS)
Ling, Li
2017-01-01
The composition and working principle of a variable-speed constant-frequency doubly fed wind power generation system are discussed in this paper. On the basis of theoretical analysis and modeling of the control, a doubly fed wind power generation simulation control system was designed based on a TMS320F2407 digital signal processor (DSP), and a large amount of experimental research was carried out, mainly including variable-speed constant-frequency, constant-voltage, and grid-connected control experiments. The running results show that the design of the simulation control system is reasonable and can meet the needs of experimental research.
NASA Astrophysics Data System (ADS)
Grenier, P.
2017-12-01
Statistical post-processing techniques aim at generating plausible climate scenarios from climate simulations and observation-based reference products. These techniques are generally not physically-based, and consequently they remedy the problem of simulation biases at the risk of generating physical inconsistency (PI). Although this concern is often emphasized, it is rarely addressed quantitatively. Here, PI generated by quantile mapping (QM), a technique widely used in climatological and hydrological applications, is investigated using relative humidity (RH) and its parent variables, namely specific humidity (SH), temperature and pressure. PI is classified into two types: 1) inadequate value for an individual variable (e.g. RH > 100 %), and 2) breaking of an inter-variable relationship. Scenarios built for this study correspond to twelve sites representing a variety of climate types over North America. Data used are an ensemble of ten 3-hourly global (CMIP5) and regional (CORDEX-NAM) simulations, as well as the CFSR reanalysis. PI of type 1 is discussed in terms of frequency of occurrence and amplitude of unphysical cases for RH and SH variables. PI of type 2 is investigated with heuristic proxies designed to directly compare the physical inconsistency problem with the initial bias problem. Finally, recommendations are provided for an appropriate use of QM given the potential to generate physical inconsistency of types 1 and 2.
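Empirical quantile mapping of the kind examined here can be sketched in a few lines; the humidity series below are synthetic. Mapping each parent variable (SH, temperature, pressure) independently and then recomputing RH is what risks the type-1 inconsistencies (RH > 100 %) discussed above:

```python
import numpy as np

def quantile_map(sim_train, obs_train, sim_apply):
    """Empirical quantile mapping: replace each simulated value by the
    observed value occupying the same quantile of the training sample."""
    q = np.linspace(0.0, 1.0, 101)
    return np.interp(sim_apply, np.quantile(sim_train, q), np.quantile(obs_train, q))

rng = np.random.default_rng(2)
obs = np.clip(rng.normal(85.0, 10.0, 2000), 0.0, 100.0)  # reference RH (%)
sim = rng.normal(70.0, 12.0, 2000)                        # dry-biased simulated RH

corrected = quantile_map(sim, obs, sim)

# The bias is removed in distribution, and RH stays in the physical 0-100 %
# range here only because RH itself was mapped against physical observations.
print(round(float(np.median(sim)), 1), round(float(np.median(corrected)), 1))
```

The sketch corrects marginal distributions only; it does nothing to preserve the inter-variable relationship between RH and its parents, which is exactly the type-2 inconsistency the study quantifies.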
2014-04-01
surrogate model generation is difficult for high-dimensional problems, due to the curse of dimensionality. Variable screening methods have been...a variable screening model was developed for the quasi-molecular treatment of ion-atom collision [16]. In engineering, a confidence interval of...for high-level radioactive waste [18]. Moreover, the design sensitivity method can be extended to the variable screening method because vital
Liu, Kui; Guo, Jun; Cai, Chunxiao; Zhang, Junxiang; Gao, Jiangrui
2016-11-15
Multipartite entanglement is used for quantum information applications, such as building multipartite quantum communications. Generally, generation of multipartite entanglement is based on a complex beam-splitter network. Here, based on the spatial freedom of light, we experimentally demonstrated spatial quadripartite continuous variable entanglement among first-order Hermite-Gaussian modes using a single type II optical parametric oscillator operating below threshold with an HG02 mode pump beam oriented at 45°. The entanglement can be scalable for larger numbers of spatial modes by changing the spatial profile of the pump beam. In addition, spatial multipartite entanglement will be useful for future spatial multichannel quantum information applications.
Universal quantum computation with temporal-mode bilayer square lattices
NASA Astrophysics Data System (ADS)
Alexander, Rafael N.; Yokoyama, Shota; Furusawa, Akira; Menicucci, Nicolas C.
2018-03-01
We propose an experimental design for universal continuous-variable quantum computation that incorporates recent innovations in linear-optics-based continuous-variable cluster state generation and cubic-phase gate teleportation. The first ingredient is a protocol for generating the bilayer-square-lattice cluster state (a universal resource state) with temporal modes of light. With this state, measurement-based implementation of Gaussian unitary gates requires only homodyne detection. Second, we describe a measurement device that implements an adaptive cubic-phase gate, up to a random phase-space displacement. It requires a two-step sequence of homodyne measurements and consumes a (non-Gaussian) cubic-phase state.
Oğuz, Yüksel; Güney, İrfan; Çalık, Hüseyin
2013-01-01
The control strategy and design of an AC/DC/AC IGBT-PWM power converter for PMSG-based variable-speed wind energy conversion systems (VSWECS) operating in grid/load-connected mode are presented. A VSWECS consists of a PMSG connected to an AC-DC IGBT-based PWM rectifier and a DC/AC IGBT-based PWM inverter with an LCL filter. In a VSWECS, the AC/DC/AC power converter is employed to convert the variable-frequency, variable-speed generator output to the fixed-frequency, fixed-voltage grid. The DC/AC power conversion is carried out using an adaptive neurofuzzy-controlled inverter located at the output of the controlled AC/DC IGBT-based PWM rectifier. This study focuses on the dynamic performance and power quality of the proposed power converter connected to the grid/load through the output LCL filter. Dynamic modeling and control of the VSWECS with the proposed power converter are performed using MATLAB/Simulink. Simulation results show that the output voltage, power, and frequency of the VSWECS reach desirable operating values in a very short time. In addition, when the PMSG-based VSWECS operates continuously at a 4.5 kHz switching frequency, the voltage THD at the load terminal is 0.00672%. PMID: 24453905
Yao, Shuai-Lei; Luo, Jing-Jia; Huang, Gang
2016-01-01
Regional climate projections are challenging because of large uncertainty particularly stemming from unpredictable, internal variability of the climate system. Here, we examine the internal variability-induced uncertainty in precipitation and surface air temperature (SAT) trends during 2005-2055 over East Asia based on 40 member ensemble projections of the Community Climate System Model Version 3 (CCSM3). The model ensembles are generated from a suite of different atmospheric initial conditions using the same SRES A1B greenhouse gas scenario. We find that projected precipitation trends are subject to considerably larger internal uncertainty and hence have lower confidence, compared to the projected SAT trends in both the boreal winter and summer. Projected SAT trends in winter have relatively higher uncertainty than those in summer. Besides, the lower-level atmospheric circulation has larger uncertainty than that in the mid-level. Based on k-means cluster analysis, we demonstrate that a substantial portion of internally-induced precipitation and SAT trends arises from internal large-scale atmospheric circulation variability. These results highlight the importance of internal climate variability in affecting regional climate projections on multi-decadal timescales.
Operation ranges and dynamic capabilities of variable-speed pumped-storage hydropower
NASA Astrophysics Data System (ADS)
Mercier, Thomas; Olivier, Mathieu; Dejaeger, Emmanuel
2017-04-01
The development of renewable and intermittent power generation creates incentives for the development of both energy storage solutions and more flexible power generation assets. Pumped-storage hydropower (PSH) is the most established and mature energy storage technology, but recent developments in power electronics have created a renewed interest by providing PSH units with a variable-speed feature, thereby increasing their flexibility. This paper reviews technical considerations related to variable-speed PSH in link with the provision of primary frequency control, also referred to as frequency containment reserves (FCRs). Based on the detailed characteristics of a scale model pump-turbine, the variable-speed operation ranges in pump and turbine modes are precisely assessed and the implications for the provision of FCRs are highlighted. Modelling and control for power system studies are discussed, both for fixed- and variable-speed machines and simulation results are provided to illustrate the high dynamic capabilities of variable-speed PSH.
Sharma, Abhay
2015-01-01
Transgenerational epigenetic inheritance in mammals has been controversial due to inherent difficulties in its experimental demonstration. A recent report has, however, opened a new front in the ongoing debate by claiming that endocrine disrupting chemicals, contrary to previous findings, do not cause effects across generations. This claim is based on the observation that gene expression changes induced by these chemicals in the exposed and unexposed generations are mainly in the opposite direction. This analysis shows that the pattern of gene expression reported in the two generations is not expected by chance and is suggestive of transmission across generations. A meta-analysis of diverse data sets related to endocrine disruptor-induced transgenerational gene expression alterations, including the data provided in the said report, further suggests that effects of endocrine disrupting chemicals persist in unexposed generations. Based on the prior evidence of phenotypic variability and gene expression alterations in opposite direction between generations, it is argued here that calling evidence of mismatched directionality in gene expression in experiments testing potential of environmental agents in inducing epigenetic inheritance of phenotypic traits as negative is untenable. This is expected to settle the newly raised doubts over epigenetic inheritance in mammals.
New and Improved GLDAS and NLDAS Data Sets and Data Services at HDISC/NASA
NASA Technical Reports Server (NTRS)
Rui, Hualan; Beaudoing, Hiroko Kato; Mocko, David M.; Rodell, Matthew; Teng, William L.; Vollmer. Bruce
2010-01-01
Terrestrial hydrological variables are important in global hydrology, climate, and carbon cycle studies. Generating global fields of these variables, however, is still a challenge. The goal of a land data assimilation system (LDAS)is to ingest satellite-and ground-based observational data products, using advanced land surface modeling and data assimilation techniques, in order to generate optimal fields of land surface states and fluxes data and, thereby, facilitate hydrology and climate modeling, research, and forecast.
Bryce, Richard; Losada Carreno, Ignacio; Kumler, Andrew; ...
2018-04-05
The interannual variability of the solar irradiance and meteorological conditions are often ignored in favor of single-year data sets for modeling power generation and evaluating the economic value of photovoltaic (PV) power systems. Yet interannual variability significantly impacts the generation from one year to another of renewable power systems such as wind and PV. Consequently, the interannual variability of power generation corresponds to the interannual variability of capital returns on investment. The penetration of PV systems within the Hawaiian Electric Companies' portfolio has rapidly accelerated in recent years and is expected to continue to increase given the state's energy objectives laid out by the Hawaii Clean Energy Initiative. We use the National Solar Radiation Database (1998-2015) to characterize the interannual variability of the solar irradiance and meteorological conditions across the State of Hawaii. These data sets are passed to the National Renewable Energy Laboratory's System Advisory Model (SAM) to calculate an 18-year PV power generation data set to characterize the variability of PV power generation. We calculate the interannual coefficient of variability (COV) for annual average global horizontal irradiance (GHI) on the order of 2% and COV for annual capacity factor on the order of 3% across the Hawaiian archipelago. Regarding the interannual variability of seasonal trends, we calculate the COV for monthly average GHI values on the order of 5% and COV for monthly capacity factor on the order of 10%. We model residential-scale and utility-scale PV systems and calculate the economic returns of each system via the payback period and the net present value. We demonstrate that studies based on single-year data sets for economic evaluations reach conclusions that deviate from the true values realized by accounting for interannual variability.
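The interannual coefficient of variability is simply the cross-year standard deviation over the mean; a sketch with a hypothetical 18-year annual-average GHI series (not NSRDB data):

```python
import numpy as np

def interannual_cov(annual_values):
    """Interannual coefficient of variability (%): sample std / mean across years."""
    v = np.asarray(annual_values, dtype=float)
    return 100.0 * v.std(ddof=1) / v.mean()

# Hypothetical 18-year annual-average GHI series (kWh/m2/day)
ghi = np.array([5.50, 5.62, 5.41, 5.58, 5.47, 5.66, 5.39, 5.55, 5.60,
                5.44, 5.52, 5.63, 5.38, 5.57, 5.49, 5.61, 5.43, 5.54])
print(round(interannual_cov(ghi), 2))
```

The same statistic applied to annual capacity factors or monthly averages gives the 2-10% figures quoted in the abstract; a single-year study implicitly assumes this quantity is zero.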
Generation of noncircular gears for variable motion of the crank-slider mechanism
NASA Astrophysics Data System (ADS)
Niculescu, M.; Andrei, L.; Cristescu, A.
2016-08-01
The paper proposes a modified kinematics for the crank-slider mechanism of a nail-making machine. The variable rotational motion of the driven gear slows down the velocity of the slider in the head-forming phase and increases the period over which the forming forces are applied, improving the quality of the final product. The noncircular gears are designed based on a hybrid function for the gear transmission ratio whose parameters enable multiple variations of the noncircular driven gears and of the crank-slider mechanism kinematics, respectively. The AutoCAD graphical and programming facilities are used (i) to analyse and optimize the slider-crank mechanism output functions, in correlation with the predefined noncircular gear transmission ratio, (ii) to generate the noncircular centrodes using the kinematics hypothesis, (iii) to generate the variable geometry of the gear teeth profiles, based on the rolling method, and (iv) to produce solid virtual models of the gears. The study highlights the benefits/limits that the hybrid functions defining the noncircular gear transmission ratio have on both the crank-slider mechanism kinematics and the gear geometry.
Johnston, Christine; Magaret, Amalia; Roychoudhury, Pavitra; Greninger, Alexander L; Cheng, Anqi; Diem, Kurt; Fitzgibbon, Matthew P; Huang, Meei-Li; Selke, Stacy; Lingappa, Jairam R; Celum, Connie; Jerome, Keith R; Wald, Anna; Koelle, David M
2017-10-01
Understanding the variability in circulating herpes simplex virus type 2 (HSV-2) genomic sequences is critical to the development of HSV-2 vaccines. Genital lesion swabs containing ≥10^7 copies of HSV DNA collected from Africa, the USA, and South America underwent next-generation sequencing, followed by K-mer based filtering and de novo genomic assembly. Sites of heterogeneity within coding regions in the unique long and unique short (UL/US) regions were identified. Phylogenetic trees were created using maximum likelihood reconstruction. Among 46 samples from 38 persons, 1468 intragenic base-pair substitutions were identified. The maximum nucleotide distance between strains for concatenated UL/US segments was 0.4%. Phylogeny did not reveal geographic clustering. The most variable proteins had non-synonymous mutations in <3% of amino acids. Unenriched HSV-2 DNA can undergo next-generation sequencing to identify intragenic variability. The use of clinical swabs for sequencing expands the information that can be gathered directly from these specimens. Copyright © 2017 Elsevier Inc. All rights reserved.
CONSTRAINTS ON VARIABLES IN SYNTAX.
ERIC Educational Resources Information Center
ROSS, JOHN ROBERT
IN ATTEMPTING TO DEFINE "SYNTACTIC VARIABLE," THE AUTHOR BASES HIS DISCUSSION ON THE ASSUMPTION THAT SYNTACTIC FACTS ARE A COLLECTION OF TWO TYPES OF RULES--CONTEXT-FREE PHRASE STRUCTURE RULES (GENERATING UNDERLYING OR DEEP PHRASE MARKERS) AND GRAMMATICAL TRANSFORMATIONS, WHICH MAP UNDERLYING PHRASE MARKERS ONTO SUPERFICIAL (OR SURFACE) PHRASE…
The influence of an uncertain force environment on reshaping trial-to-trial motor variability.
Izawa, Jun; Yoshioka, Toshinori; Osu, Rieko
2014-09-10
Motor memory is updated to generate ideal movements in a novel environment. When the environment changes randomly from trial to trial, how does the brain incorporate this uncertainty into motor memory? To investigate how the brain adapts to an uncertain environment, we considered a reach adaptation protocol in which individuals practiced moving in a force field into which noise was injected. After they had adapted, we measured the trial-to-trial variability in the temporal profiles of the produced hand force. We found that the motor variability was significantly magnified by the adaptation to the random force field. Temporal profiles of the motor variance were significantly dissociable between the two different types of random force fields experienced. A model-based analysis suggests that the variability is generated by noise in the gains of the internal model. It further suggests that the trial-to-trial motor variability magnified by the adaptation in a random force field is generated by the uncertainty of the internal model formed in the brain as a result of the adaptation.
Forecasting generation of urban solid waste in developing countries--a case study in Mexico.
Buenrostro, O; Bocco, G; Vence, J
2001-01-01
Based on a study of the composition of urban solid waste (USW) and of socioeconomic variables in Morelia, Mexico, generation rates were estimated. In addition, the generation of residential solid waste (RSW) and nonresidential solid waste (NRSW) was forecasted by means of a multiple linear regression (MLR) analysis. For residential sources, the independent variables analyzed were monthly wages, persons per dwelling, age, and educational level of the heads of the household. For nonresidential sources, variables analyzed were number of employees, area of facilities, number of working days, and working hours per day. The forecasted values for residential waste were similar to those observed. This approach may be applied to areas in which available data are scarce, and in which there is an urgent need for the planning of adequate management of USW.
UDE-based control of variable-speed wind turbine systems
NASA Astrophysics Data System (ADS)
Ren, Beibei; Wang, Yeqin; Zhong, Qing-Chang
2017-01-01
In this paper, the control of a PMSG (permanent magnet synchronous generator)-based variable-speed wind turbine system with a back-to-back converter is considered. The uncertainty and disturbance estimator (UDE)-based control approach is applied to the regulation of the DC-link voltage and the control of the RSC (rotor-side converter) and the GSC (grid-side converter). For the rotor-side controller, the UDE-based vector control is developed for the RSC with PMSG control to facilitate the application of the MPPT (maximum power point tracking) algorithm for the maximum wind energy capture. For the grid-side controller, the UDE-based vector control is developed to control the GSC with the power reference generated by a UDE-based DC-link voltage controller. Compared with the conventional vector control, the UDE-based vector control can achieve reliable current decoupling control with fast response. Moreover, the UDE-based DC-link voltage regulation can achieve stable DC-link voltage under model uncertainties and external disturbances, e.g. wind speed variations. The effectiveness of the proposed UDE-based control approach is demonstrated through extensive simulation studies in the presence of coupled dynamics, model uncertainties and external disturbances under varying wind speeds. The UDE-based control is able to generate more energy, e.g. by 5% for the wind profile tested.
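The core UDE idea, estimating the lumped uncertainty and disturbance through a low-pass filter of the model residual and cancelling it in the control law, can be illustrated on a first-order toy plant. This is not the paper's PMSG/converter model; the plant, gains, and disturbance below are all illustrative:

```python
def simulate_ude(r=1.0, d_true=0.5, k=5.0, T=0.05, dt=1e-3, steps=5000):
    """UDE control of the toy plant x' = u + d (d unknown):
    u = -k*(x - r) - d_hat, where d_hat is a first-order low-pass
    filter (time constant T) of the model residual x' - u."""
    x, d_hat = 0.0, 0.0
    for _ in range(steps):
        u = -k * (x - r) - d_hat          # nominal feedback minus disturbance estimate
        x_new = x + dt * (u + d_true)     # Euler step of the true plant
        # update the estimate from the measured residual x' - u
        d_hat += (dt / T) * ((x_new - x) / dt - u - d_hat)
        x = x_new
    return x, d_hat

x_final, d_est = simulate_ude()
print(round(x_final, 3), round(d_est, 3))
```

The state converges to the reference and the estimate converges to the true disturbance without the controller ever knowing `d_true`, the same mechanism the paper applies to DC-link voltage regulation and current control under wind-speed variations.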
NASA Astrophysics Data System (ADS)
Hwang, Jai-Chan; Noh, Hyerim
2005-03-01
We present cosmological perturbation theory based on generalized gravity theories including string theory correction terms and a tachyonic complication. The classical evolution as well as the quantum generation processes in these varieties of gravity theories are presented in unified forms. These apply both to the scalar- and tensor-type perturbations. Analyses are made based on the curvature variable in two different gauge conditions often used in the literature in Einstein’s gravity; these are the curvature variables in the comoving (or uniform-field) gauge and the zero-shear gauge. Applications to generalized slow-roll inflation and its consequent power spectra are derived in unified forms which include a wide range of inflationary scenarios based on Einstein’s gravity and others.
ERIC Educational Resources Information Center
Blackburn, J. Joey; Robinson, J. Shane
2017-01-01
The purpose of this study was to determine if selected factors influenced the ability of students in school-based agricultural education programs to generate a correct hypothesis when troubleshooting small gasoline engines. Variables of interest included students' cognitive style, age, GPA, and content knowledge in small gasoline engines. Kirton's…
Variability in large-scale wind power generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kiviluoma, Juha; Holttinen, Hannele; Weir, David
2015-10-25
The paper demonstrates the characteristics of wind power variability and net load variability in multiple power systems based on real data from multiple years. Demonstrated characteristics include the probability distribution for different ramp durations, seasonal and diurnal variability and low net load events. The comparison shows regions with low variability (Sweden, Spain and Germany), medium variability (Portugal, Ireland, Finland and Denmark) and regions with higher variability (Quebec, Bonneville Power Administration and Electric Reliability Council of Texas in North America; Gansu, Jilin and Liaoning in China; and Norway and offshore wind power in Denmark). For regions with low variability, the maximum 1 h wind ramps are below 10% of nominal capacity, and for regions with high variability, they may be close to 30%. Wind power variability is mainly explained by the extent of geographical spread, but higher capacity factors also cause higher variability. It was also shown how wind power ramps are autocorrelated and dependent on the operating output level. When wind power was concentrated in a smaller area, there were outliers with large changes in wind output, which were not present in large areas with well-dispersed wind power.
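Ramp statistics of the kind reported (e.g. maximum 1 h ramps as a share of nominal capacity) reduce to quantiles of lagged differences of the per-unit output; a sketch on a synthetic mean-reverting series, not the real system data:

```python
import numpy as np

def ramp_quantiles(power_pu, duration_h=1, q=(0.01, 0.99)):
    """Low/high quantiles of power ramps (per unit of capacity) over a duration."""
    p = np.asarray(power_pu, dtype=float)
    ramps = p[duration_h:] - p[:-duration_h]
    return np.quantile(ramps, q)

# Synthetic hourly per-unit wind output for one year: an AR(1)-style
# mean-reverting process clipped to [0, 1]; not real wind data.
rng = np.random.default_rng(3)
x = np.zeros(8760)
for t in range(1, 8760):
    x[t] = 0.98 * x[t - 1] + rng.normal(0.0, 0.02)
power = np.clip(0.4 + x, 0.0, 1.0)

lo, hi = ramp_quantiles(power, duration_h=1)
print(round(float(lo), 3), round(float(hi), 3))
```

Varying `duration_h` reproduces the per-duration ramp distributions compared across regions in the paper; geographic smoothing shows up as a narrower ramp distribution.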
Means and extremes: building variability into community-level climate change experiments.
Thompson, Ross M; Beardall, John; Beringer, Jason; Grace, Mike; Sardina, Paula
2013-06-01
Experimental studies assessing climatic effects on ecological communities have typically applied static warming treatments. Although these studies have been informative, they have usually failed to incorporate either current or predicted future patterns of variability. Future climates are likely to include extreme events which have greater impacts on ecological systems than changes in means alone. Here, we review the studies which have used experiments to assess impacts of temperature on marine, freshwater and terrestrial communities, and classify them into a set of 'generations' based on how they incorporate variability. The majority of studies have failed to incorporate extreme events. In terrestrial ecosystems in particular, experimental treatments have reduced temperature variability, when most climate models predict increased variability. Marine studies have tended not to concentrate on changes in variability, likely in part because the thermal mass of oceans will moderate variation. In freshwaters, climate change experiments have a much shorter history than in the other ecosystems, and have tended to take a relatively simple approach. We propose a new 'generation' of climate change experiments using down-scaled climate models which incorporate predicted changes in climatic variability, and describe a process for generating data which can be applied as experimental climate change treatments. © 2013 John Wiley & Sons Ltd/CNRS.
Dual gait generative models for human motion estimation from a single camera.
Zhang, Xin; Fan, Guoliang
2010-08-01
This paper presents a general gait representation framework for video-based human motion estimation. Specifically, we want to estimate the kinematics of an unknown gait from image sequences taken by a single camera. This approach involves two generative models, called the kinematic gait generative model (KGGM) and the visual gait generative model (VGGM), which represent the kinematics and appearances of a gait by a few latent variables, respectively. The concept of gait manifold is proposed to capture the gait variability among different individuals by which KGGM and VGGM can be integrated together, so that a new gait with unknown kinematics can be inferred from gait appearances via KGGM and VGGM. Moreover, a new particle-filtering algorithm is proposed for dynamic gait estimation, which is embedded with a segmental jump-diffusion Markov Chain Monte Carlo scheme to accommodate the gait variability in a long observed sequence. The proposed algorithm is trained from the Carnegie Mellon University (CMU) Mocap data and tested on the Brown University HumanEva data with promising results.
NASA Astrophysics Data System (ADS)
Bohra, Murtaza
Legged rovers are often considered as viable solutions for traversing unknown terrain. This work addresses the optimal locomotion reconfigurability of quadruped rovers, which consists of obtaining optimal locomotion modes, and transitioning between them. A 2D sagittal plane rover model is considered, based on a domestic cat. Using a Genetic Algorithm, the gait, pose and control variables that minimize torque or maximize speed are found separately. The optimization approach takes into account the elimination of leg impact, while considering the entire variable spectrum. The optimal solutions are consistent with other works on gait optimization, and are similar to gaits found in quadruped animals as well. An online model-free gait planning framework based on Central Pattern Generators is also implemented. It is used to generate joint and control trajectories for any arbitrarily varying speed profile, and is shown to regulate locomotion transition and speed modulation, both endogenously and continuously.
Design of a variable width pulse generator feasible for manual or automatic control
NASA Astrophysics Data System (ADS)
Vegas, I.; Antoranz, P.; Miranda, J. M.; Franco, F. J.
2017-01-01
A variable width pulse generator featuring more than 4-V peak amplitude and less than 10-ns FWHM is described. In this design the width of the pulses is controlled by means of the control signal slope. Thus, a variable transition time control circuit (TTCC) is also developed, based on the charge and discharge of a capacitor by means of two tunable current sources. Additionally, it is possible to activate/deactivate the pulses when required, therefore allowing the creation of any desired pulse pattern. Furthermore, the implementation presented here can be electronically controlled. In conclusion, due to its versatility, compactness and low cost it can be used in a wide variety of applications.
Natural wind variability triggered drop in German redispatch volume and costs from 2015 to 2016.
Wohland, Jan; Reyers, Mark; Märker, Carolin; Witthaut, Dirk
2018-01-01
Avoiding dangerous climate change necessitates the decarbonization of electricity systems within the next few decades. In Germany, this decarbonization is based on an increased exploitation of variable renewable electricity sources such as wind and solar power. While system security has remained constantly high, the integration of renewables causes additional costs. In 2015, the costs of grid management reached an all-time high of about €1 billion. Despite the addition of renewable capacity, these costs dropped substantially in 2016. We thus investigate the effect of natural climate variability on grid management costs in this study. Focusing on redispatch as a main cost driver, we show that the decline was triggered by natural wind variability. In particular, we find that 2016 was a weak year in terms of wind generation averages and the occurrence of westerly circulation weather types. Moreover, we show that a simple model based on the wind generation time series is skillful in detecting redispatch events on timescales of weeks and beyond. As a consequence, alterations in annual redispatch costs on the order of hundreds of millions of euros need to be understood and communicated as a normal feature of the current system due to natural wind variability.
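The abstract does not specify the "simple model based on the wind generation time series"; the sketch below is a plausible stand-in that flags weeks of unusually high wind generation as candidate redispatch periods (the weekly aggregation and the 0.8 quantile are assumptions, not the authors' choices):

```python
import numpy as np

def detect_redispatch_weeks(wind_gen_daily, quantile=0.8):
    """Flag weeks whose mean wind generation exceeds a climatological
    quantile -- a hypothetical threshold detector in the spirit of the
    paper's simple time-series model."""
    x = np.asarray(wind_gen_daily, dtype=float)
    n_weeks = len(x) // 7
    weekly = x[:n_weeks * 7].reshape(n_weeks, 7).mean(axis=1)  # weekly means
    threshold = np.quantile(weekly, quantile)
    return weekly > threshold   # boolean flag per week
```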
Bisson, P.A.; Dunham, J.B.; Reeves, G.H.
2009-01-01
In spite of numerous habitat restoration programs in fresh waters with an aggregate annual funding of millions of dollars, many populations of Pacific salmon remain significantly imperiled. Habitat restoration strategies that address limited environmental attributes and partial salmon life-history requirements or approaches that attempt to force aquatic habitat to conform to idealized but ecologically unsustainable conditions may partly explain this lack of response. Natural watershed processes generate highly variable environmental conditions and population responses, i.e., multiple life histories, that are often not considered in restoration. Examples from several locations underscore the importance of natural variability to the resilience of Pacific salmon. The implication is that habitat restoration efforts will be more likely to foster salmon resilience if they consider processes that generate and maintain natural variability in fresh water. We identify three specific criteria for management based on natural variability: the capacity of aquatic habitat to recover from disturbance, a range of habitats distributed across stream networks through time sufficient to fulfill the requirements of diverse salmon life histories, and ecological connectivity. In light of these considerations, we discuss current threats to habitat resilience and describe how regulatory and restoration approaches can be modified to better incorporate natural variability. © 2009 by the author(s).
Liu, Yong; Gracia, Jose R.; King, Thomas J., Jr.; ...
2014-05-16
The U.S. Eastern Interconnection (EI) is one of the largest electric power grids in the world and is expected to have difficulties in dealing with frequency regulation and oscillation damping issues caused by the increasing wind power. On the other hand, variable-speed wind generators can actively engage in frequency regulation or oscillation damping with supplementary control loops. This paper creates a 5% wind power penetration simulation scenario based on the 16 000-bus EI system dynamic model and develops a user-defined wind electrical control model in PSS(R)E that incorporates additional frequency regulation and oscillation damping control loops. We evaluated the potential contributions of variable-speed wind generation to the EI system frequency regulation and oscillation damping, and simulation results demonstrate that current and future penetrations of wind power are promising for the EI system frequency regulation and oscillation damping.
Analysis on flood generation processes by means of a continuous simulation model
NASA Astrophysics Data System (ADS)
Fiorentino, M.; Gioia, A.; Iacobellis, V.; Manfreda, S.
2006-03-01
In the present research, we exploited continuous hydrological simulation to investigate the key variables responsible for flood peak formation. With this purpose, a distributed hydrological model (DREAM) is used in cascade with a rainfall generator (IRP, Iterated Random Pulse) to simulate a large number of extreme events, providing insight into the main controls of flood generation mechanisms. The investigated variables are those used in the theoretically derived probability distribution of floods based on the concept of partial contributing area (e.g. Iacobellis and Fiorentino, 2000). The continuous simulation model is used to investigate the hydrological losses occurring during extreme events, the variability of the source area contributing to the flood peak and its lag time. Results suggest interesting simplifications for the theoretical probability distribution of floods according to the different climatic and geomorphologic environments. The study is applied to two basins located in Southern Italy with different climatic characteristics.
NASA Astrophysics Data System (ADS)
Pezzani, Carlos M.; Bossio, José M.; Castellino, Ariel M.; Bossio, Guillermo R.; De Angelo, Cristian H.
2017-02-01
Condition monitoring in permanent magnet synchronous machines has gained interest due to their increasing use in applications such as electric traction and power generation. Particularly in wind power generation, non-invasive condition monitoring techniques are of great importance. Usually, in such applications the access to the generator is complex and costly, while unexpected breakdowns result in high repair costs. This paper presents a technique which allows using vibration analysis for bearing fault detection in permanent magnet synchronous generators used in wind turbines. Given that in wind power applications the generator rotational speed may vary during normal operation, it is necessary to use special sampling techniques to apply spectral analysis of mechanical vibrations. In this work, a resampling technique based on order tracking without measuring the rotor position is proposed. To synchronize sampling with rotor position, an estimation of the rotor position obtained from the angle of the voltage vector is proposed. This angle is obtained from a phase-locked loop synchronized with the generator voltages. The proposed strategy is validated by laboratory experimental results obtained from a permanent magnet synchronous generator. Results with single-point defects in the outer race of a bearing under variable speed and load conditions are presented.
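Order tracking by angular resampling, the core of the proposed technique, amounts to interpolating the uniformly time-sampled vibration signal onto a uniform shaft-angle grid. A sketch under the assumption that the rotor-angle estimate (obtained in the paper from the voltage-vector PLL) is already available; the function name and grid density are mine:

```python
import numpy as np

def resample_by_angle(signal, theta, samples_per_rev=64):
    """Order-tracking resampling: map a time-sampled vibration signal onto
    a uniform shaft-angle grid using an angle estimate theta (radians,
    monotonically increasing). After resampling, speed-dependent spectral
    components appear at fixed 'orders' (cycles per revolution)."""
    theta = np.unwrap(np.asarray(theta, dtype=float))
    n_rev = int((theta[-1] - theta[0]) // (2 * np.pi))   # whole revolutions covered
    angle_grid = theta[0] + np.arange(n_rev * samples_per_rev) * (2 * np.pi / samples_per_rev)
    return np.interp(angle_grid, theta, signal)
```

An FFT of the resampled signal then shows bearing fault signatures at constant orders even though the shaft speed varies.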
Chakraborty, Sutirtha
2018-05-26
RNA-Seq technology has revolutionized the face of gene expression profiling by generating read count data measuring the transcript abundances for each queried gene on multiple experimental subjects. But on the downside, the underlying technical artefacts and hidden biological profiles of the samples generate a wide variety of latent effects that may potentially distort the actual transcript/gene expression signals. Standard normalization techniques fail to correct for these hidden variables and lead to flawed downstream analyses. In this work I demonstrate the use of Partial Least Squares (built as an R package 'SVAPLSseq') to correct for the traces of extraneous variability in RNA-Seq data. A novel and thorough comparative analysis of the PLS based method is presented along with some of the other popularly used approaches for latent variable correction in RNA-Seq. Overall, the method is found to achieve a substantially improved estimation of the hidden effect signatures in the RNA-Seq transcriptome expression landscape compared to other available techniques. Copyright © 2017. Published by Elsevier Inc.
Fault Diagnosis Strategies for SOFC-Based Power Generation Plants
Costamagna, Paola; De Giorgi, Andrea; Gotelli, Alberto; Magistri, Loredana; Moser, Gabriele; Sciaccaluga, Emanuele; Trucco, Andrea
2016-01-01
The success of distributed power generation by plants based on solid oxide fuel cells (SOFCs) is hindered by reliability problems that can be mitigated through an effective fault detection and isolation (FDI) system. However, the numerous operating conditions under which such plants can operate and the random size of the possible faults make identifying damaged plant components starting from the physical variables measured in the plant very difficult. In this context, we assess two classical FDI strategies (model-based with fault signature matrix and data-driven with statistical classification) and the combination of them. For this assessment, a quantitative model of the SOFC-based plant, which is able to simulate regular and faulty conditions, is used. Moreover, a hybrid approach based on the random forest (RF) classification method is introduced to address the discrimination of regular and faulty situations due to its practical advantages. Working with a common dataset, the FDI performances obtained using the aforementioned strategies, with different sets of monitored variables, are observed and compared. We conclude that the hybrid FDI strategy, realized by combining a model-based scheme with a statistical classifier, outperforms the other strategies. In addition, the inclusion of two physical variables that should be measured inside the SOFCs can significantly improve the FDI performance, despite the actual difficulty in performing such measurements. PMID:27556472
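The model-based strategy with a fault signature matrix can be illustrated compactly: each fault maps to a boolean pattern of triggered residuals, and isolation is a pattern match. The fault names, residual ordering and thresholds below are illustrative, not taken from the paper:

```python
# Hypothetical signature matrix: which residuals (measured minus
# model-predicted variables) each fault is expected to trigger.
FAULT_SIGNATURES = {
    "reformer_degradation": (1, 0, 1),
    "air_leakage":          (0, 1, 1),
    "stack_degradation":    (1, 1, 0),
}

def isolate_fault(residuals, thresholds):
    """Threshold each residual into a boolean pattern, then match the
    pattern against the signature matrix; returns matching fault names."""
    pattern = tuple(int(abs(r) > t) for r, t in zip(residuals, thresholds))
    return [f for f, sig in FAULT_SIGNATURES.items() if sig == pattern]
```

The hybrid scheme in the paper would hand ambiguous or unmatched patterns to the statistical (random forest) classifier instead of failing silently.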
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prakash, A.; Song, J.; Hwang, H.
In order to obtain reliable multilevel cell (MLC) characteristics, resistance controllability between the different resistance levels is required, especially in resistive random access memory (RRAM), which is prone to resistance variability mainly due to the intrinsic random nature of defect generation and filament formation. In this study, we have thoroughly investigated the multilevel resistance variability in a TaOx-based nanoscale (<30 nm) RRAM operated in MLC mode. It is found that the resistance variability not only depends on the conductive filament size but also is a strong function of the oxygen vacancy concentration in it. Based on the insights gained through experimental observations and simulation, it is suggested that forming a thinner but denser conductive filament may greatly improve the temporal resistance variability even at low operation current, despite the inherent stochastic nature of the resistance switching process.
NASA Astrophysics Data System (ADS)
Lewis, Jared; Bodeker, Greg E.; Kremser, Stefanie; Tait, Andrew
2017-12-01
A method, based on climate pattern scaling, has been developed to expand a small number of projections of fields of a selected climate variable (X) into an ensemble that encapsulates a wide range of indicative model structural uncertainties. The method described in this paper is referred to as the Ensemble Projections Incorporating Climate model uncertainty (EPIC) method. Each ensemble member is constructed by adding contributions from (1) a climatology derived from observations that represents the time-invariant part of the signal; (2) a contribution from forced changes in X, where those changes can be statistically related to changes in global mean surface temperature (Tglobal); and (3) a contribution from unforced variability that is generated by a stochastic weather generator. The patterns of unforced variability are also allowed to respond to changes in Tglobal. The statistical relationships between changes in X (and its patterns of variability) and Tglobal are obtained in a training phase. Then, in an implementation phase, 190 simulations of Tglobal are generated using a simple climate model tuned to emulate 19 different global climate models (GCMs) and 10 different carbon cycle models. Using the generated Tglobal time series and the correlation between the forced changes in X and Tglobal, obtained in the training phase, the forced change in the X field can be generated many times using Monte Carlo analysis. A stochastic weather generator is used to generate realistic representations of weather which include spatial coherence. Because GCMs and regional climate models (RCMs) are less likely to correctly represent unforced variability compared to observations, the stochastic weather generator takes as input measures of variability derived from observations, but also responds to forced changes in climate in a way that is consistent with the RCM projections. This approach to generating a large ensemble of projections is many orders of magnitude more computationally efficient than running multiple GCM or RCM simulations. Such a large ensemble of projections permits a description of a probability density function (PDF) of future climate states rather than a small number of individual story lines within that PDF, which may not be representative of the PDF as a whole; the EPIC method largely corrects for such potential sampling biases. The method is useful for providing projections of changes in climate to users wishing to investigate the impacts and implications of climate change in a probabilistic way. A web-based tool, using the EPIC method to provide probabilistic projections of changes in daily maximum and minimum temperatures for New Zealand, has been developed and is described in this paper.
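The three additive contributions of an EPIC-style ensemble member can be sketched as follows; the linear pattern scaling and the Gaussian noise term are simplified stand-ins for the regression relationships and the stochastic weather generator described above:

```python
import numpy as np

def epic_member(climatology, pattern, t_global, rng, noise_sd=0.5):
    """One simplified ensemble member of variable X over (time, space):
    (1) time-invariant observed climatology
    (2) forced change = spatial pattern scaled by global mean temperature
    (3) stochastic unforced variability (here plain Gaussian noise).
    The scaling form and noise model are assumptions, not the paper's fits."""
    forced = np.outer(t_global, pattern)             # (n_time, n_space)
    unforced = rng.normal(0.0, noise_sd, forced.shape)
    return climatology[None, :] + forced + unforced
```

Repeating this for each of the 190 emulated Tglobal trajectories yields the large ensemble from which a PDF of future states can be built.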
PCA-LBG-based algorithms for VQ codebook generation
NASA Astrophysics Data System (ADS)
Tsai, Jinn-Tsong; Yang, Po-Yuan
2015-04-01
Vector quantisation (VQ) codebooks are generated by combining principal component analysis (PCA) algorithms with Linde-Buzo-Gray (LBG) algorithms. All training vectors are grouped according to the projected values of the principal components. The PCA-LBG-based algorithms include (1) PCA-LBG-Median, which selects the median vector of each group, (2) PCA-LBG-Centroid, which adopts the centroid vector of each group, and (3) PCA-LBG-Random, which randomly selects a vector of each group. The LBG algorithm then finds a codebook starting from the initial codebook of representative vectors supplied by the PCA step. The PCA performs an orthogonal transformation to convert a set of potentially correlated variables into a set of variables that are not linearly correlated. Because the orthogonal transformation efficiently distinguishes test image vectors, the proposed PCA-LBG-based algorithms are expected to outperform conventional algorithms in designing VQ codebooks. The experimental results confirm that the proposed PCA-LBG-based algorithms indeed obtain better results compared to existing methods reported in the literature.
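The PCA-LBG-Centroid initialization can be sketched as: project the training vectors on the first principal component, split them into k groups along that axis, and take each group's centroid as the initial codebook. The LBG refinement that follows in the paper is omitted, and the function name is mine:

```python
import numpy as np

def pca_centroid_codebook(vectors, k):
    """Initial VQ codebook in the spirit of PCA-LBG-Centroid: group the
    training vectors by their first-principal-component score and return
    each group's centroid (LBG iterations would refine these further)."""
    X = np.asarray(vectors, dtype=float)
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)  # PCA via SVD
    proj = Xc @ vt[0]                                  # scores on first PC
    order = np.argsort(proj)                           # sort along the PC axis
    groups = np.array_split(order, k)                  # k equal-size groups
    return np.array([X[g].mean(axis=0) for g in groups])
```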
Realistic simulated MRI and SPECT databases. Application to SPECT/MRI registration evaluation.
Aubert-Broche, Berengere; Grova, Christophe; Reilhac, Anthonin; Evans, Alan C; Collins, D Louis
2006-01-01
This paper describes the construction of simulated SPECT and MRI databases that account for realistic anatomical and functional variability. The data is used as a gold standard to evaluate four SPECT/MRI similarity-based registration methods. Simulation realism was accounted for using accurate physical models of data generation and acquisition. MRI and SPECT simulations were generated from three subjects to take into account inter-subject anatomical variability. Functional SPECT data were computed from six functional models of brain perfusion. Previous models of normal perfusion and of ictal perfusion observed in Mesial Temporal Lobe Epilepsy (MTLE) were considered to generate functional variability. We studied the impact that noise and intensity non-uniformity in MRI simulations, as well as SPECT scatter correction, may have on registration accuracy. We quantified the amount of registration error caused by anatomical and functional variability. Registration involving ictal data was less accurate than registration involving normal data. MR intensity non-uniformity was the main factor decreasing registration accuracy. The proposed simulated database is promising for evaluating many functional neuroimaging methods involving MRI and SPECT data.
Wind Velocity and Position Sensor-less Operation for PMSG Wind Generator
NASA Astrophysics Data System (ADS)
Senjyu, Tomonobu; Tamaki, Satoshi; Urasaki, Naomitsu; Uezato, Katsumi; Funabashi, Toshihisa; Fujita, Hideki
Electric power generation using non-conventional sources is receiving considerable attention throughout the world. Wind energy is one of the available non-conventional energy sources. Electrical power generation using wind energy is possible in two ways, viz. constant speed operation and variable speed operation using power electronic converters. Variable speed power generation is attractive, because maximum electric power can be generated at all wind velocities. However, this system requires a rotor speed sensor for vector control purposes, which increases the cost of the system. To alleviate the need for a rotor speed sensor in vector control, we propose a new sensor-less control of the PMSG (Permanent Magnet Synchronous Generator) based on the flux linkage. We can estimate the rotor position using the estimated flux linkage. We use a first-order lag compensator to obtain the flux linkage. Furthermore, we estimate wind velocity and rotation speed using an observer. The effectiveness of the proposed method is demonstrated through simulation results.
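The flux-linkage scheme can be sketched as a leaky (first-order lag) integration of the back-EMF followed by an arctangent of the flux components; the filter cutoff, the alpha/beta signal convention and the discretization below are my assumptions, not details from the paper:

```python
import numpy as np

def estimate_rotor_angle(v_ab, i_ab, R, dt, cutoff=5.0):
    """Sensor-less rotor electrical angle from stator quantities:
    flux linkage = leaky integral (first-order lag, stand-in for a pure
    integrator) of the back-EMF (v - R*i) in the alpha/beta frame,
    angle = atan2 of the flux components. Simplified sketch only."""
    emf = v_ab - R * i_ab                      # (n, 2): alpha, beta
    a = 1.0 - 2 * np.pi * cutoff * dt          # leak factor of the lag filter
    psi = np.zeros_like(emf)
    for k in range(1, len(emf)):
        psi[k] = a * psi[k - 1] + dt * emf[k]  # leaky integration
    return np.arctan2(psi[:, 1], psi[:, 0])
```

The lag filter introduces a constant phase offset relative to a true integrator, but the estimated angle still advances at the electrical frequency, which is what the vector controller needs.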
Computing the Power-Density Spectrum for an Engineering Model
NASA Technical Reports Server (NTRS)
Dunn, H. J.
1982-01-01
A computer program for calculating the power-density spectrum (PDS) from a data base generated by the Advanced Continuous Simulation Language (ACSL) uses an algorithm that employs the fast Fourier transform (FFT) to calculate the PDS of a variable. This is accomplished by first estimating the autocovariance function of the variable and then taking the FFT of the smoothed autocovariance function to obtain the PDS. The fast-Fourier-transform technique conserves computer resources.
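The described algorithm (estimate the autocovariance, smooth it, FFT the result) is a Blackman-Tukey-style spectral estimate; a sketch in which the Hann taper and the lag cutoff are my choices, not details of the ACSL program:

```python
import numpy as np

def power_density_spectrum(x, dt=1.0, max_lag=None):
    """Blackman-Tukey-style PDS: biased autocovariance estimate up to
    max_lag, tapered (smoothed) with a half-Hann window, then FFT of the
    symmetrically extended sequence."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    max_lag = max_lag or n // 4
    acov = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag)])
    window = np.hanning(2 * max_lag)[max_lag:]          # one-sided taper
    smoothed = acov * window
    # symmetric extension -> real-valued spectrum; keep one-sided PDS
    pds = 2 * dt * np.fft.rfft(np.concatenate([smoothed, smoothed[:0:-1]])).real
    freqs = np.fft.rfftfreq(2 * max_lag - 1, dt)
    return freqs, pds
```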
Leitch, Michael; Macefield, Vaughan G
2017-08-01
Ballistic contractions are induced by brief, high-frequency (60-100 Hz) trains of action potentials in motor axons. During ramp voluntary contractions, human motoneurons exhibit significant discharge variability of ∼20%, which has been shown to be advantageous to the neuromuscular system. We hypothesized that ballistic contractions incorporating discharge variability would generate greater isometric forces than regular trains with zero variability. High-impedance tungsten microelectrodes were inserted into the human fibular nerve, and single motor axons were stimulated with both irregular and constant-frequency stimuli at mean frequencies ranging from 57.8 to 68.9 Hz. Irregular trains generated significantly greater isometric peak forces than regular trains over identical mean frequencies. The high forces generated by ballistic contractions are not based solely on high frequencies, but rather on a combination of high firing rates and discharge irregularity. It appears that irregular ballistic trains take advantage of the "catchlike property" of muscle, allowing augmentation of force. Muscle Nerve 56: 292-297, 2017. © 2016 Wiley Periodicals, Inc.
Deblauwe, Vincent; Kennel, Pol; Couteron, Pierre
2012-01-01
Background: Independence between observations is a standard prerequisite of traditional statistical tests of association. This condition is, however, violated when autocorrelation is present within the data. In the case of variables that are regularly sampled in space (i.e. lattice data or images), such as those provided by remote-sensing or geographical databases, this problem is particularly acute. Because analytic derivation of the null probability distribution of the test statistic (e.g. Pearson's r) is not always possible when autocorrelation is present, we propose instead the use of a Monte Carlo simulation with surrogate data.
Methodology/Principal Findings: The null hypothesis that two observed mapped variables are the result of independent pattern generating processes is tested here by generating sets of random image data while preserving the autocorrelation function of the original images. Surrogates are generated by matching the dual-tree complex wavelet spectra (and hence the autocorrelation functions) of white noise images with the spectra of the original images. The generated images can then be used to build the probability distribution function of any statistic of association under the null hypothesis. We demonstrate the validity of a statistical test of association based on these surrogates with both actual and synthetic data and compare it with a corrected parametric test and three existing methods that generate surrogates (randomization, random rotations and shifts, and iterative amplitude adjusted Fourier transform). Type I error control was excellent, even with strong and long-range autocorrelation, which is not the case for alternative methods.
Conclusions/Significance: The wavelet-based surrogates are particularly appropriate in cases where autocorrelation appears at all scales or is direction-dependent (anisotropy). We explore the potential of the method for association tests involving a lattice of binary data and discuss its potential for validation of species distribution models. An implementation of the method in Java for the generation of wavelet-based surrogates is available online as supporting material. PMID:23144961
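The Monte Carlo logic of the surrogate test can be illustrated in one dimension with Fourier phase-randomized surrogates, which preserve the power spectrum (and hence the autocorrelation) of a series, much as the paper's wavelet-based surrogates do for 2D images. This is a simplified stand-in, not the authors' method:

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """1D surrogate preserving the power spectrum of x by randomizing
    Fourier phases (the paper instead matches dual-tree complex wavelet
    spectra for 2D lattices)."""
    spec = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(spec))
    phases[0] = 0.0    # keep the mean
    phases[-1] = 0.0   # keep the Nyquist bin real for even-length inputs
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=len(x))

def correlation_test(x, y, n_surrogates=199, rng=None):
    """Monte Carlo p-value for |Pearson r| between two autocorrelated
    series, using surrogates of x as the null ensemble."""
    if rng is None:
        rng = np.random.default_rng(0)
    r_obs = abs(np.corrcoef(x, y)[0, 1])
    r_null = [abs(np.corrcoef(phase_randomized_surrogate(x, rng), y)[0, 1])
              for _ in range(n_surrogates)]
    return (1 + sum(r >= r_obs for r in r_null)) / (1 + n_surrogates)
```

Because each surrogate keeps the autocorrelation of the original series, the null distribution of r is widened accordingly, which is exactly the Type I error control the paper evaluates.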
The Impact of ENSO on Extratropical Low Frequency Noise in Seasonal Forecasts
NASA Technical Reports Server (NTRS)
Schubert, Siegfried D.; Suarez, Max J.; Chang, Yehui; Branstator, Grant
2000-01-01
This study examines the uncertainty in forecasts of the January-February-March (JFM) mean extratropical circulation, and how that uncertainty is modulated by the El Nino/Southern Oscillation (ENSO). The analysis is based on ensembles of hindcasts made with an Atmospheric General Circulation Model (AGCM) forced with sea surface temperatures observed during the 1983 El Nino and 1989 La Nina events. The AGCM produces pronounced interannual differences in the magnitude of the extratropical seasonal mean noise (intra-ensemble variability). The North Pacific, in particular, shows extensive regions where the 1989 seasonal mean noise kinetic energy (SKE), which is dominated by a "PNA-like" spatial structure, is more than twice that of the 1983 forecasts. The larger SKE in 1989 is associated with a larger than normal barotropic conversion of kinetic energy from the mean Pacific jet to the seasonal mean noise. The generation of SKE due to sub-monthly transients also shows substantial interannual differences, though these are much smaller than the differences in the mean flow conversions. An analysis of the generation of monthly mean noise kinetic energy (NIKE) and its variability suggests that the seasonal mean noise is predominantly a statistical residue of variability resulting from dynamical processes operating on monthly and shorter time scales. A stochastically-forced barotropic model (linearized about the AGCM's 1983 and 1989 base states) is used to further assess the role of the basic state, submonthly transients, and tropical forcing in modulating the uncertainties in the seasonal AGCM forecasts. When forced globally with spatially-white noise, the linear model generates much larger variance for the 1989 base state, consistent with the AGCM results. The extratropical variability for the 1989 base state is dominated by a single eigenmode, and is strongly coupled with forcing over the tropical western Pacific and the Indian Ocean, again consistent with the AGCM results. Linear calculations that include forcing from the AGCM variance of the tropical forcing and submonthly transients show a small impact on the variability over the Pacific/North American region compared with that of the base state differences.
NASA Astrophysics Data System (ADS)
Zhang, Ya-feng; Wang, Xin-ping; Hu, Rui; Pan, Yan-xia
2016-08-01
Throughfall is known to be a critical component of the hydrological and biogeochemical cycles of forested ecosystems, with inherently temporal and spatial variability. Yet little is understood concerning the throughfall variability of shrubs and the associated controlling factors in arid desert ecosystems. Here we systematically investigated the variability of throughfall of two morphologically distinct xerophytic shrubs (Caragana korshinskii and Artemisia ordosica) within a re-vegetated arid desert ecosystem, and evaluated the effects of shrub structure and rainfall characteristics on throughfall based on heavily gauged throughfall measurements at the event scale. We found that morphological differences were not sufficient to generate a significant difference (P < 0.05) in throughfall between the two studied shrub species under the same rainfall and meteorological conditions in our study area, with a throughfall percentage of 69.7% for C. korshinskii and 64.3% for A. ordosica. We also observed a highly variable patchy pattern of throughfall beneath individual shrub canopies, but the spatial patterns appeared to be stable among rainfall events based on time stability analysis. Throughfall linearly increased with increasing distance from the shrub base for both shrubs, and radial direction beneath shrub canopies had a pronounced impact on throughfall. Throughfall variability, expressed as the coefficient of variation (CV) of throughfall, tended to decline with increases in rainfall amount, intensity and duration, and stabilized past a certain threshold. Our findings highlight the great variability of throughfall beneath the canopies of xerophytic shrubs and the time stability of throughfall patterns among rainfall events. The spatially heterogeneous and temporally stable throughfall is expected to generate a dynamic patchy distribution of soil moisture beneath shrub canopies within arid desert ecosystems.
Technique for fast and efficient hierarchical clustering
Stork, Christopher
2013-10-08
A fast and efficient technique for hierarchical clustering of samples in a dataset includes compressing the dataset to reduce a number of variables within each of the samples of the dataset. A nearest neighbor matrix is generated to identify nearest neighbor pairs between the samples based on differences between the variables of the samples. The samples are arranged into a hierarchy that groups the samples based on the nearest neighbor matrix. The hierarchy is rendered to a display to graphically illustrate similarities or differences between the samples.
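A minimal sketch of the pipeline the patent describes (compress the variables, derive nearest-neighbour distances, build a hierarchy); it substitutes a plain SVD projection and SciPy's single-linkage clustering for the patented technique, so treat it as an illustration only:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
# Two well-separated sample groups, 50 variables each
samples = np.vstack([rng.normal(0, 1, (10, 50)),
                     rng.normal(8, 1, (10, 50))])

# "Compress" the dataset: project onto the top principal components
centered = samples - samples.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
compressed = centered @ vt[:5].T          # 50 variables -> 5

# Pairwise distances play the role of the nearest-neighbour matrix;
# single linkage merges nearest-neighbour pairs first
tree = linkage(pdist(compressed), method="single")
labels = fcluster(tree, t=2, criterion="maxclust")
print(sorted(np.bincount(labels)[1:]))    # two groups of 10
```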
A Reserve-based Method for Mitigating the Impact of Renewable Energy
NASA Astrophysics Data System (ADS)
Krad, Ibrahim
The fundamental operating paradigm of today's power systems is undergoing a significant shift. This is partially motivated by the increased desire for incorporating variable renewable energy resources into generation portfolios. While these generating technologies offer clean energy at zero marginal cost, i.e. no fuel costs, they also offer unique operating challenges for system operators. Perhaps the biggest operating challenge these resources introduce is accommodating their intermittent fuel source availability. For this reason, these generators increase the system-wide variability and uncertainty. As a result, system operators are revisiting traditional operating strategies to more efficiently incorporate these generation resources to maximize the benefit they provide while minimizing the challenges they introduce. One way system operators have accounted for system variability and uncertainty is through the use of operating reserves. Operating reserves can be simplified as excess capacity kept online during real time operations to help accommodate unforeseen fluctuations in demand. With new generation resources, a new class of operating reserves has emerged that is generally known as flexibility, or ramping, reserves. This new reserve class is meant to better position systems to mitigate severe ramping in the net load profile. The best way to define this new requirement is still under investigation. Typical requirement definitions focus on the additional uncertainty introduced by variable generation and there is room for improvement regarding explicit consideration for the variability they introduce. An exogenous reserve modification method is introduced in this report that can improve system reliability with minimal impacts on total system wide production costs. Another potential solution to this problem is to formulate the problem as a stochastic programming problem. 
The unit commitment and economic dispatch problems are typically formulated as deterministic problems due to fast solution times and the solutions being sufficient for operations. Improvements in technical computing hardware have reignited interest in stochastic modeling. The variability of wind and solar naturally lends itself to stochastic modeling. The use of explicit reserve requirements in stochastic models is an area of interest for power system researchers. This report introduces a new reserve modification implementation based on previous results to be used in a stochastic modeling framework. With technological improvements in distributed generation technologies, microgrids are currently being researched and implemented. Microgrids are small power systems that have the ability to serve their demand with their own generation resources and may have a connection to a larger power system. As battery technologies improve, they are becoming a more viable option in these distributed power systems and research is necessary to determine the most efficient way to utilize them. This report will investigate several unique operating strategies for batteries in small power systems and analyze their benefits. These new operating strategies will help reduce operating costs and improve system reliability.
Cuq, Benoît; Blois, Shauna L; Wood, R Darren; Monteith, Gabrielle; Abrams-Ogg, Anthony C; Bédard, Christian; Wood, Geoffrey A
2018-06-01
Thrombin plays a central role in hemostasis and thrombosis. Calibrated automated thrombography (CAT), a thrombin generation assay, may be a useful test for hemostatic disorders in dogs. The objectives were to describe CAT results in a group of healthy dogs and to assess preanalytical variables and biological variability. Forty healthy dogs were enrolled. Lag time (Lag), time to peak (ttpeak), peak thrombin generation (peak), and endogenous thrombin potential (ETP) were measured. Direct jugular venipuncture and winged-needle catheter-assisted saphenous venipuncture were used to collect samples from each dog, and results were compared between methods. Sample stability at -80°C was assessed over 12 months in a subset of samples. Biological variability of CAT was assessed via nested ANOVA using samples obtained weekly from a subset of 9 dogs for 4 consecutive weeks. Samples for CAT were stable at -80°C over 12 months of storage. Samples collected via winged-needle catheter venipuncture showed poor repeatability compared to direct venipuncture samples; there was also poor agreement between the 2 sampling methods. Intra-individual variability of CAT parameters was below 25%; inter-individual variability ranged from 36.9% to 78.5%. Measurement of thrombin generation using CAT appears to be repeatable in healthy dogs, and samples are stable for at least 12 months when stored at -80°C. Direct venipuncture sampling is recommended for CAT. Low indices of individuality suggest that subject-based reference intervals are more suitable when interpreting CAT results. © 2018 American Society for Veterinary Clinical Pathology.
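The index of individuality implied by the last sentence (the ratio of intra- to inter-individual variability that justifies subject-based reference intervals) can be sketched as follows; the formula is simplified (analytical variability is ignored) and the peak-thrombin values are hypothetical:

```python
import numpy as np

def index_of_individuality(measurements):
    """measurements: rows = dogs, columns = repeated weekly samples.
    II = CV_intra / CV_inter; a low II (below roughly 0.6) favours
    subject-based reference intervals (simplified formula)."""
    m = np.asarray(measurements, dtype=float)
    cv_intra = np.mean(m.std(axis=1, ddof=1) / m.mean(axis=1)) * 100
    cv_inter = m.mean(axis=1).std(ddof=1) / m.mean() * 100
    return cv_intra / cv_inter

# Hypothetical peak-thrombin values: stable within dogs, spread between dogs
peaks = [[100, 104, 98, 102],
         [61, 59, 63, 60],
         [152, 148, 155, 150]]
print(index_of_individuality(peaks) < 0.6)  # True
```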
NASA Astrophysics Data System (ADS)
Qi, Bing; Lougovski, Pavel; Pooser, Raphael; Grice, Warren; Bobrek, Miljko
2015-10-01
Continuous-variable quantum key distribution (CV-QKD) protocols based on coherent detection have been studied extensively in both theory and experiment. In all the existing implementations of CV-QKD, both the quantum signal and the local oscillator (LO) are generated from the same laser and propagate through the insecure quantum channel. This arrangement may open security loopholes and limit the potential applications of CV-QKD. In this paper, we propose and demonstrate a pilot-aided feedforward data recovery scheme that enables reliable coherent detection using a "locally" generated LO. Using two independent commercial laser sources and a spool of 25-km optical fiber, we construct a coherent communication system. The variance of the phase noise introduced by the proposed scheme is measured to be 0.04 rad^2, which is small enough to enable secure key distribution. This technology also opens the door for other quantum communication protocols, such as the recently proposed measurement-device-independent CV-QKD, where independent light sources are employed by different users.
[Health care innovation from a territorial perspective: a call for a new approach].
Costa, Laís Silveira; Gadelha, Carlos Augusto Grabois; Maldonado, José
2012-12-01
Innovation plays an increasingly important role in health care, partly because it is responsible for a significant share of national investment in research and development, and partly because of its industrial and service provision base, which provides a conduit to future technology. The relationship between health care and development is also strengthened as a result of the leading role of health care in generating innovation. Nevertheless, Brazil's health care production base is persistently weak, hindering both universal provision of health care services and international competitiveness. This article, based on the theoretical framework of Political Economy and innovation systems, seeks to identify variables in subnational contexts that influence the dynamic of innovation generation in health care. To this end, the theoretical approach used rests on the assumption that innovation is a contextualized social process and that the production base in health care will remain weak if new variables involved in the dynamic of innovation are not taken into account.
Preliminary Results from the Application of Automated Adjoint Code Generation to CFL3D
NASA Technical Reports Server (NTRS)
Carle, Alan; Fagan, Mike; Green, Lawrence L.
1998-01-01
This report describes preliminary results obtained using an automated adjoint code generator for Fortran to augment a widely-used computational fluid dynamics flow solver to compute derivatives. These preliminary results with this augmented code suggest that, even in its infancy, the automated adjoint code generator can accurately and efficiently deliver derivatives for use in transonic Euler-based aerodynamic shape optimization problems with hundreds to thousands of independent design variables.
Grazhdani, Dorina
2016-02-01
Economic development, urbanization, and improved living standards increase the quantity and complexity of generated solid waste. Comprehensive study of the variables influencing household solid waste production and recycling rates is crucial and fundamental for exploring the generation mechanism and forecasting future dynamics of household solid waste. The present study focuses on the case of Prespa Park. A model based on the interrelationships of the economic, demographic, housing structure and waste management policy variables influencing the rates of solid waste generation and recycling is developed and employed. The empirical analysis is based on information derived from a field questionnaire survey conducted in Prespa Park villages for the year 2014. Another feature of this study is to test whether a household's waste generation can be decoupled from population growth. Descriptive statistics, bivariate correlation analysis and F-tests are used to examine the relationships between variables. One-way and two-way fixed-effects models are used to identify variables that determine the effectiveness of waste generation and recycling at the household level in the study area. The results reveal that households with heterogeneous characteristics, such as education level, mean building age and income, present different challenges for waste reduction goals. Numerically, an increase of 1% in the education level of the population corresponds to a waste reduction of 3 kg on an annual per capita basis. For a village with older buildings, each additional year of median building age corresponds to a waste generation increase of 12 kg.
Other economic and policy incentives, such as mean household income, pay-as-you-throw pricing, the percentage of the population with access to curbside recycling, the number of drop-off recycling facilities available per 1000 persons, and cumulative expenditures on recycling education per capita, are also found to be effective measures for waste reduction. The mean per capita expenditure on recycling education for years 2010 and 2014 is 12 and 14 cents, respectively, and varies from €0 to €1. For years 2010 and 2014, the mean percentage of the population with access to curbside recycling services is 38.6% and 40.3%, and the mean number of drop-off recycling centers per 1000 persons is 0.29 and 0.32, respectively. Empirical evidence suggests that population growth did not necessarily result in increases in waste generation. The results provided are useful when planning, changing or implementing sustainable municipal solid waste management. Copyright © 2015 Elsevier Ltd. All rights reserved.
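The one-way fixed-effects estimates reported above correspond to the standard within (demeaning) transformation; a sketch on synthetic village data (not the survey data) is:

```python
import numpy as np

def one_way_fixed_effects(y, x, groups):
    """Within estimator: demean y and x by group (e.g., village) means,
    then run OLS through the origin on the demeaned data."""
    y, x, groups = map(np.asarray, (y, x, groups))
    yd, xd = y.astype(float), x.astype(float)
    for g in np.unique(groups):
        mask = groups == g
        yd[mask] -= yd[mask].mean()
        xd[mask] -= xd[mask].mean()
    return (xd @ yd) / (xd @ xd)     # slope on the within-group variation

# Synthetic villages: waste = village effect - 3 * education (noise-free)
educ   = np.array([1.0, 2.0, 3.0, 1.0, 2.0, 3.0])
effect = np.array([100., 100., 100., 140., 140., 140.])
waste  = effect - 3.0 * educ
groups = np.array([0, 0, 0, 1, 1, 1])
print(one_way_fixed_effects(waste, educ, groups))  # -3.0
```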
ERIC Educational Resources Information Center
Graf, Edith Aurora
2014-01-01
In "How Task Features Impact Evidence from Assessments Embedded in Simulations and Games," Almond, Kim, Velasquez, and Shute have prepared a thought-provoking piece contrasting the roles of task model variables in a traditional assessment of mathematics word problems to their roles in "Newton's Playground," a game designed…
Systems and methods for controlling energy use during a demand limiting period
Wenzel, Michael J.; Drees, Kirk H.
2016-04-26
Systems and methods for limiting power consumption by a heating, ventilation, and air conditioning (HVAC) subsystem of a building are shown and described. A feedback controller is used to generate a manipulated variable based on an energy use setpoint and a measured energy use. The manipulated variable may be used for adjusting the operation of an HVAC device.
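The feedback loop described (a manipulated variable generated from an energy-use setpoint and a measured energy use) can be sketched as a discrete PI controller; the gains and the linear plant response below are placeholders, not the patented system:

```python
class EnergyUseController:
    """Discrete PI controller: drives measured energy use toward a setpoint
    by adjusting a manipulated variable (e.g., a temperature offset)."""
    def __init__(self, kp=0.05, ki=0.02):
        self.kp, self.ki = kp, ki
        self.integral = 0.0

    def update(self, setpoint_kw, measured_kw):
        error = setpoint_kw - measured_kw
        self.integral += error
        return self.kp * error + self.ki * self.integral

# Toy plant: energy use responds linearly to the manipulated variable
ctrl = EnergyUseController()
use = 120.0                      # kW, above the 100 kW demand limit
for _ in range(200):
    mv = ctrl.update(100.0, use)
    use = 120.0 + 5.0 * mv       # placeholder plant response
print(abs(use - 100.0) < 1.0)    # settles near the setpoint
```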
Moriguchi, Sachiko; Tominaga, Atsushi; Irwin, Kelly J; Freake, Michael J; Suzuki, Kazutaka; Goka, Koichi
2015-04-08
Batrachochytrium dendrobatidis (Bd) is the pathogen responsible for chytridiomycosis, a disease that is associated with a worldwide amphibian population decline. In this study, we predicted the potential distribution of Bd in East and Southeast Asia based on limited occurrence data. Our goal was to design an effective survey area where efforts to detect the pathogen can be focused. We generated ecological niche models using the maximum-entropy approach, with alleviation of multicollinearity and spatial autocorrelation. We applied eigenvector-based spatial filters as independent variables, in addition to environmental variables, to resolve spatial autocorrelation, and compared the model's accuracy and the degree of spatial autocorrelation with those of a model estimated using only environmental variables. We were able to identify areas of high suitability for Bd with accuracy. Among the environmental variables, factors related to temperature and precipitation were more effective in predicting the potential distribution of Bd than factors related to land use and cover type. Our study successfully predicted the potential distribution of Bd in East and Southeast Asia. This information should now be used to prioritize survey areas and generate a surveillance program to detect the pathogen.
Costs of solar and wind power variability for reducing CO2 emissions.
Lueken, Colleen; Cohen, Gilbert E; Apt, Jay
2012-09-04
We compare the power output from a year of electricity generation data from one solar thermal plant, two solar photovoltaic (PV) arrays, and twenty Electric Reliability Council of Texas (ERCOT) wind farms. The analysis shows that solar PV electricity generation is approximately one hundred times more variable at frequencies on the order of 10^-3 Hz than solar thermal electricity generation, and the variability of wind generation lies between that of solar PV and solar thermal. We calculate the cost of variability of the different solar power sources and wind by using the costs of ancillary services and the energy required to compensate for their variability and intermittency, and the cost of variability per unit of displaced CO2 emissions. We show that the costs of variability are highly dependent on both technology type and capacity factor. California emissions data were used to calculate the cost of variability per unit of displaced CO2 emissions. Variability cost is greatest for solar PV generation at $8-11 per MWh. The cost of variability for solar thermal generation is $5 per MWh, while that of wind generation in ERCOT was found to be on average $4 per MWh. Variability adds ~$15/tonne CO2 to the cost of abatement for solar thermal power, $25 for wind, and $33-$40 for PV.
Yu, Haitao; Dhingra, Rishi R; Dick, Thomas E; Galán, Roberto F
2017-01-01
Neural activity generally displays irregular firing patterns even in circuits with apparently regular outputs, such as motor pattern generators, in which the output frequency fluctuates randomly around a mean value. This "circuit noise" is inherited from the random firing of single neurons, which emerges from stochastic ion channel gating (channel noise), spontaneous neurotransmitter release, and its diffusion and binding to synaptic receptors. Here we demonstrate how to expand conductance-based network models that are originally deterministic to include realistic, physiological noise, focusing on stochastic ion channel gating. We illustrate this procedure with a well-established conductance-based model of the respiratory pattern generator, which allows us to investigate how channel noise affects neural dynamics at the circuit level and, in particular, to understand the relationship between the respiratory pattern and its breath-to-breath variability. We show that as the channel number increases, the duration of inspiration and expiration varies, and so does the coefficient of variation of the breath-to-breath interval, which attains a minimum when the mean duration of expiration slightly exceeds that of inspiration. For small channel numbers, the variability of the expiratory phase dominates over that of the inspiratory phase, and vice versa for large channel numbers. Among the four different cell types in the respiratory pattern generator, pacemaker cells exhibit the highest sensitivity to channel noise. The model shows that suppressing input from the pons leads to longer inspiratory phases, a reduction in breathing frequency, and larger breath-to-breath variability, whereas enhanced input from the raphe nucleus increases breathing frequency without changing its pattern. A major source of noise in neuronal circuits is the "flickering" of ion currents passing through the neurons' membranes (channel noise), which cannot be suppressed experimentally. 
Computational simulations are therefore the best way to investigate the effects of this physiological noise by manipulating its level at will. We investigate the role of noise in the respiratory pattern generator and show that endogenous, breath-to-breath variability is tightly linked to the respiratory pattern. Copyright © 2017 the American Physiological Society.
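The 1/sqrt(N) scaling that underlies the channel-noise results above can be illustrated with a toy model in which each channel opens independently; this demonstrates the principle only, not the authors' conductance-based respiratory network:

```python
import numpy as np

def current_cv(n_channels, p_open=0.3, n_samples=5000, seed=1):
    """CV of total membrane current when each of n_channels opens
    independently with probability p_open (unitary current set to 1)."""
    rng = np.random.default_rng(seed)
    open_counts = rng.binomial(n_channels, p_open, size=n_samples)
    return open_counts.std() / open_counts.mean()

# Channel noise scales roughly as 1/sqrt(N): more channels, less variability
print(current_cv(100) > current_cv(10000))  # True
```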
NASA Astrophysics Data System (ADS)
Gilev, S. D.; Prokopiev, V. S.
2017-07-01
A method of generation of electromagnetic energy and magnetic flux in a magnetic cumulation generator is proposed. The method is based on dynamic variation of the circuit coupling coefficient. This circuit is compared with other available circuits of magnetic energy generation with the help of magnetic cumulation (classical magnetic cumulation generator, generator with transformer coupling, and generator with a dynamic transformer). It is demonstrated that the proposed method allows obtaining high values of magnetic energy. The proposed circuit is found to be more effective than the known transformer circuit. Experiments on electromagnetic energy generation are performed, which demonstrate the efficiency of the proposed method.
Cai, Hong; Long, Christopher M.; DeRose, Christopher T.; ...
2017-01-01
We demonstrate a silicon photonic transceiver circuit for high-speed discrete variable quantum key distribution that employs a common structure for transmit and receive functions. The device is intended for use in polarization-based quantum cryptographic protocols, such as BB84. Our characterization indicates that the circuit can generate the four BB84 states (TE/TM/45°/135° linear polarizations) with >30 dB polarization extinction ratios and gigabit per second modulation speed, and is capable of decoding any polarization bases differing by 90° with high extinction ratios.
Cai, Hong; Long, Christopher M; DeRose, Christopher T; Boynton, Nicholas; Urayama, Junji; Camacho, Ryan; Pomerene, Andrew; Starbuck, Andrew L; Trotter, Douglas C; Davids, Paul S; Lentine, Anthony L
2017-05-29
We demonstrate a silicon photonic transceiver circuit for high-speed discrete variable quantum key distribution that employs a common structure for transmit and receive functions. The device is intended for use in polarization-based quantum cryptographic protocols, such as BB84. Our characterization indicates that the circuit can generate the four BB84 states (TE/TM/45°/135° linear polarizations) with >30 dB polarization extinction ratios and gigabit per second modulation speed, and is capable of decoding any polarization bases differing by 90° with high extinction ratios.
NASA Astrophysics Data System (ADS)
Bai, Guang-Fu; Hu, Lin; Jiang, Yang; Tian, Jing; Zi, Yue-Jiao; Wu, Ting-Wei; Huang, Feng-Qin
2017-08-01
In this paper, a photonic microwave waveform generator based on a dual-parallel Mach-Zehnder modulator is proposed and experimentally demonstrated. In the reported scheme, only one radio frequency (RF) signal is used to drive the dual-parallel Mach-Zehnder modulator. Meanwhile, dispersive elements or filters are not required, which makes the scheme simpler and more stable. In this way, six variables can be adjusted. Through different combinations of these variables, basic waveforms with full duty cycle and small duty cycle can be generated. Tunability of the generator can be achieved by adjusting the frequency of the RF signal and the optical carrier. The corresponding theoretical analysis and simulation have been conducted. Guided by the theory and simulation, proof-of-concept experiments were carried out. The basic waveforms, including Gaussian, saw-up, and saw-down waveforms, with full duty cycle and small duty cycle, were generated at a repetition rate of 2 GHz. The theoretical and simulation results agree well with the experimental results.
Rotorcraft Flight Simulation Computer Program C81 with DATAMAP interface. Volume I. User’s Manual
1981-10-01
any one of the RWAS tables to simulate the defined effect of that input, care must be exercised to assure that the table used is based on the correct... IMPROVED MANEUVER AUTOPILOT HAVE BEEN INSTALLED IN AGAPBO. A NEW LISTING OF THE CONTENTS OF THE ANALYTICAL DATA BASE WILL BE GENERATED DURING THE WEEK... of the program (Reference 1) has been improved by providing the capability to generate Postprocessing Data Blocks containing selected variables
NASA Astrophysics Data System (ADS)
Kwon, So Young
Using a quasi-experimental design, the researcher investigated the comparative effects of individually-generated and collaboratively-generated computer-based concept mapping on middle school science concept learning. Qualitative data were analyzed to explain quantitative findings. One hundred sixty-one students (74 boys and 87 girls) in eight seventh-grade science classes at a middle school in Southeast Texas completed the entire study. Using prior science performance scores to assure equivalence of student achievement across groups, the researcher assigned the teacher's classes to one of three experimental groups. The independent variable, group, consisted of three levels: 40 students in a control group, 59 students trained to individually generate concept maps on computers, and 62 students trained to collaboratively generate concept maps on computers. The dependent variables were science concept learning as demonstrated by comprehension test scores, and the quality of concept maps created by students in the experimental groups as demonstrated by rubric scores. Students in the experimental groups received concept mapping training and used their newly acquired concept mapping skills to individually or collaboratively construct computer-based concept maps during study time. The control group, the individually-generated concept mapping group, and the collaboratively-generated concept mapping group had equivalent learning experiences for 50 minutes during five days, except that students in the control group worked independently without concept mapping activities, students in the individual group worked individually to construct concept maps, and students in the collaborative group worked collaboratively to construct concept maps during their study time. Both collaboratively and individually generated computer-based concept mapping had a positive effect on seventh grade middle school science concept learning, but neither strategy was more effective than the other.
However, the students who collaboratively generated concept maps created significantly higher quality concept maps than those who individually generated concept maps. The researcher concluded that the concept mapping software, Inspiration(TM), fostered construction of students' concept maps individually or collaboratively for science learning and helped students capture their evolving creative ideas and organize them for meaningful learning. Students in both the individual and the collaborative concept mapping groups had positive attitudes toward concept mapping using Inspiration(TM) software.
NASA Astrophysics Data System (ADS)
Arya, Sabha Raj; Patel, Ashish; Giri, Ashutosh
2018-06-01
This paper deals with a wind-energy-based power generation system using a Permanent Magnet Synchronous Generator (PMSG). It is controlled using an advanced enhanced phase-locked loop for power quality features, with a distribution static compensator used to eliminate harmonics, provide kVAR compensation, and balance loads. It also maintains rated potential at the point of common interface under linear and non-linear loads. In order to achieve efficient and reliable operation of a PMSG driven by a wind turbine, it is necessary to analyze the governing equations of the wind turbine and PMSG under fixed and variable wind speed. For handling power quality problems, a power-electronics-based shunt-connected custom power device is used in a three-wire system. Simulations in the MATLAB/Simulink environment have been carried out to demonstrate this model and the control approach used for power quality enhancement. The results show adequate performance of the PMSG-based power generation system and the control algorithm.
Predictive Inference Using Latent Variables with Covariates*
Schofield, Lynne Steuerle; Junker, Brian; Taylor, Lowell J.; Black, Dan A.
2014-01-01
Plausible Values (PVs) are a standard multiple imputation tool for analysis of large education survey data that measures latent proficiency variables. When latent proficiency is the dependent variable, we reconsider the standard institutionally-generated PV methodology and find it applies with greater generality than shown previously. When latent proficiency is an independent variable, we show that the standard institutional PV methodology produces biased inference because the institutional conditioning model places restrictions on the form of the secondary analysts’ model. We offer an alternative approach that avoids these biases based on the mixed effects structural equations (MESE) model of Schofield (2008). PMID:25231627
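For the dependent-variable case above, the standard PV workflow is to fit the secondary model on each plausible value and combine the results with Rubin's rules; a sketch with invented slope estimates (the MESE model itself is not reproduced) is:

```python
import numpy as np

def combine_rubin(estimates, variances):
    """Rubin's rules for multiply-imputed (plausible-value) analyses."""
    estimates = np.asarray(estimates, float)
    variances = np.asarray(variances, float)
    m = len(estimates)
    qbar = estimates.mean()                  # combined point estimate
    within = variances.mean()                # average sampling variance
    between = estimates.var(ddof=1)          # variance across PVs
    total = within + (1 + 1/m) * between     # total variance
    return qbar, total

# Hypothetical regression slopes from 5 plausible values of proficiency
slopes = [0.42, 0.45, 0.40, 0.44, 0.43]
ses2 = [0.01, 0.011, 0.009, 0.010, 0.010]
est, var = combine_rubin(slopes, ses2)
print(round(est, 3))  # 0.428
```

The total variance exceeds the average within-imputation variance, reflecting the extra uncertainty from measuring proficiency with plausible values.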
Systems and methods for controlling energy use in a building management system using energy budgets
Wenzel, Michael J; Drees, Kirk H
2014-09-23
Systems and methods for limiting power consumption by a heating, ventilation, and air conditioning (HVAC) subsystem of a building are shown and described. A feedback controller is used to generate a manipulated variable based on an energy use setpoint and a measured energy use. The manipulated variable may be used for adjusting the operation of an HVAC device.
A modified priority list-based MILP method for solving large-scale unit commitment problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ke, Xinda; Lu, Ning; Wu, Di
This paper studies the typical pattern of unit commitment (UC) results in terms of generator cost and capacity. A method is then proposed that combines a modified priority list technique with mixed integer linear programming (MILP) for the UC problem. The proposed method consists of two steps. In the first step, a portion of the generators are predetermined to be online or offline within a look-ahead period (e.g., a week), based on the demand curve and the generator priority order. In the second step, for the generators whose on/off status is predetermined, the corresponding binary variables are removed from the UC MILP problem over the operational planning horizon (e.g., 24 hours). With a number of binary variables removed, the resulting problem can be solved much faster using off-the-shelf MILP solvers based on the branch-and-bound algorithm. In the modified priority list method, scale factors are designed to adjust the tradeoff between solution speed and level of optimality. It is found that the proposed method can significantly speed up solution of the UC problem with minor compromise in optimality when appropriate scale factors are selected.
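The first, prescreening step (fixing on/off status from the demand curve and a cost-based priority order so that those binary variables drop out of the MILP) might look roughly like the following; the fleet, demand curve, and reserve margin are illustrative, and the paper's scale-factor logic is omitted:

```python
def prescreen_units(units, demand, margin=0.2):
    """Fix unit status ahead of the MILP.
    units: list of (name, capacity_mw, marginal_cost).
    Cheap units needed even at minimum demand are fixed ON;
    units whose cheaper peers already cover peak demand plus a
    reserve margin are fixed OFF; the rest stay binary in the MILP."""
    ranked = sorted(units, key=lambda u: u[2])   # priority = ascending cost
    fixed_on, fixed_off, free = [], [], []
    cumulative = 0.0
    for name, cap, _ in ranked:
        if cumulative + cap <= min(demand):
            fixed_on.append(name)                # always needed
        elif cumulative >= max(demand) * (1 + margin):
            fixed_off.append(name)               # never needed
        else:
            free.append(name)                    # left to the solver
        cumulative += cap
    return fixed_on, fixed_off, free

units = [("coal1", 400, 20), ("gas1", 200, 40), ("gas2", 200, 45),
         ("peaker", 100, 90), ("oil1", 150, 120)]
weekly_demand = [500, 650, 700, 620, 540]        # MW, look-ahead period
on, off, free = prescreen_units(units, weekly_demand)
print(on, off, free)
```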
Analysis of a Temperature-Controlled Exhaust Thermoelectric Generator During a Driving Cycle
NASA Astrophysics Data System (ADS)
Brito, F. P.; Alves, A.; Pires, J. M.; Martins, L. B.; Martins, J.; Oliveira, J.; Teixeira, J.; Goncalves, L. M.; Hall, M. J.
2016-03-01
Thermoelectric generators can be used in automotive exhaust energy recovery. As car engines operate under wide variable loads, it is a challenge to design a system for operating efficiently under these variable conditions. This means being able to avoid excessive thermal dilution under low engine loads and being able to operate under high load, high temperature events without the need to deflect the exhaust gases with bypass systems. The authors have previously proposed a thermoelectric generator (TEG) concept with temperature control based on the operating principle of the variable conductance heat pipe/thermosiphon. This strategy allows the TEG modules’ hot face to work under constant, optimized temperature. The variable engine load will only affect the number of modules exposed to the heat source, not the heat transfer temperature. This prevents module overheating under high engine loads and avoids thermal dilution under low engine loads. The present work assesses the merit of the aforementioned approach by analysing the generator output during driving cycles simulated with an energy model of a light vehicle. For the baseline evaporator and condenser configuration, the driving cycle averaged electrical power outputs were approximately 320 W and 550 W for the type-approval Worldwide harmonized light vehicles test procedure Class 3 driving cycle and for a real-world highway driving cycle, respectively.
Maximum wind energy extraction strategies using power electronic converters
NASA Astrophysics Data System (ADS)
Wang, Quincy Qing
2003-10-01
This thesis focuses on maximum wind energy extraction strategies for achieving the highest energy output of variable speed wind turbine power generation systems. Power electronic converters and controls provide the basic platform to accomplish the research of this thesis in both hardware and software aspects. In order to send wind energy to a utility grid, a variable speed wind turbine requires a power electronic converter to convert a variable voltage variable frequency source into a fixed voltage fixed frequency supply. Generic single-phase and three-phase converter topologies, converter control methods for wind power generation, as well as the developed direct drive generator, are introduced in the thesis for establishing variable-speed wind energy conversion systems. Variable speed wind power generation system modeling and simulation are essential methods both for understanding the system behavior and for developing advanced system control strategies. Wind generation system components, including wind turbine, 1-phase IGBT inverter, 3-phase IGBT inverter, synchronous generator, and rectifier, are modeled in this thesis using MATLAB/SIMULINK. The simulation results have been verified by a commercial simulation software package, PSIM, and confirmed by field test results. Since the dynamic time constants for these individual models are much different, a creative approach has also been developed in this thesis to combine these models for entire wind power generation system simulation. An advanced maximum wind energy extraction strategy relies not only on proper system hardware design, but also on sophisticated software control algorithms. Based on literature review and computer simulation on wind turbine control algorithms, an intelligent maximum wind energy extraction control algorithm is proposed in this thesis. 
This algorithm has a unique on-line adaptation and optimization capability, which is able to achieve maximum wind energy conversion efficiency through continuously improving the performance of wind power generation systems. This algorithm is independent of wind power generation system characteristics, and does not need wind speed and turbine speed measurements. Therefore, it can be easily implemented into various wind energy generation systems with different turbine inertia and diverse system hardware environments. In addition to the detailed description of the proposed algorithm, computer simulation results are presented in the thesis to demonstrate the advantage of this algorithm. As a final confirmation of the algorithm's feasibility, the algorithm has been implemented inside a single-phase IGBT inverter and tested with a wind simulator system in a research laboratory. Test results were consistent with the simulation results. (Abstract shortened by UMI.)
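The thesis does not reproduce the algorithm here, but the family it belongs to, sensorless hill-climbing (perturb-and-observe) trackers that need neither wind-speed nor turbine-speed measurements, can be sketched as follows. The function and variable names are ours, not the thesis's, and the "plant" is a toy stand-in for a real converter.

```python
def perturb_and_observe(measure_power, d0=0.5, step=0.01, iters=200):
    """Generic hill-climb maximum-power tracker.

    Perturbs a single control variable (e.g. an inverter duty ratio or
    power reference) and keeps moving in whichever direction raises the
    measured electrical output power. No wind or shaft sensors needed.
    """
    d = d0
    p_prev = measure_power(d)
    direction = 1
    for _ in range(iters):
        d = min(1.0, max(0.0, d + direction * step))
        p = measure_power(d)
        if p < p_prev:            # output fell: reverse the perturbation
            direction = -direction
        p_prev = p
    return d

# Toy plant whose output peaks when the control variable equals 0.7.
d_star = perturb_and_observe(lambda d: 1.0 - (d - 0.7) ** 2)
```

Because the tracker reacts only to measured output power, it is indifferent to turbine inertia and hardware details, which is the portability property the abstract emphasizes.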
Seasonal variability of the Canary Current: A numerical study
NASA Astrophysics Data System (ADS)
Mason, Evan; Colas, Francois; Molemaker, Jeroen; Shchepetkin, Alexander F.; Troupin, Charles; McWilliams, James C.; Sangrà, Pablo
2011-06-01
A high-resolution numerical model study of the Canary Basin in the northeast subtropical Atlantic Ocean is presented. A long-term climatological solution from the Regional Oceanic Modeling System (ROMS) reveals mesoscale variability associated with the Azores and Canary Current systems, the northwest African coastal upwelling, and the Canary Island archipelago. The primary result concerns the Canary Current (CanC) which, in the solution, transports ~3 Sv southward, in line with observations. The simulated CanC has a well-defined path with pronounced seasonal variability. This variability is shown to be mediated by the westward passage of two large annually excited counterrotating anomalous structures that originate at the African coast. The anomalies have a sea surface expression, permitting their validation using altimetry, and travel at the phase speed of baroclinic planetary (Rossby) waves. The role of nearshore wind stress curl variability as a generating mechanism for the anomalies is confirmed through a sensitivity experiment forced by low-resolution winds. The resulting circulation is weak in comparison to the base run, but the propagating anomalies are still discernible, so we cannot discount a further role in their generation being played by annual reversals of the large-scale boundary flow that are known to occur along the African margin. An additional sensitivity experiment, where the Azores Current is removed by closing the Strait of Gibraltar, presents the same anomalies and CanC behavior as the base run, suggesting that the CanC is rather insensitive to upstream variability from the Azores Current.
Learning Physics-based Models in Hydrology under the Framework of Generative Adversarial Networks
NASA Astrophysics Data System (ADS)
Karpatne, A.; Kumar, V.
2017-12-01
Generative adversarial networks (GANs), which have been highly successful in a number of applications involving large volumes of labeled and unlabeled data, such as computer vision, offer huge potential for modeling the dynamics of physical processes that have traditionally been studied using simulations of physics-based models. While conventional physics-based models use labeled samples of input/output variables for model calibration (estimating the right parametric forms of relationships between variables) or data assimilation (identifying the most likely sequence of system states in dynamical systems), there is a greater opportunity to explore the full power of machine learning (ML) methods (e.g., GANs) for studying physical processes currently suffering from large knowledge gaps, e.g., ground-water flow. However, success in this endeavor requires a principled way of combining the strengths of ML methods with physics-based numerical models that are founded on a wealth of scientific knowledge. This is especially important in scientific domains like hydrology, where the number of data samples is small (relative to Internet-scale applications such as image recognition, where machine learning methods have found great success) and the physical relationships are complex (high-dimensional) and non-stationary. We will present a series of methods for guiding the learning of GANs using physics-based models, e.g., by using the outputs of physics-based models as input data to the generator-learner framework, and by using physics-based models as generators trained using validation data in the adversarial learning framework. These methods are being developed under the broad paradigm of theory-guided data science that we are developing to integrate scientific knowledge with data science methods for accelerating scientific discovery.
2016-03-30
lesson 8.4, "Wind Turbine Design Inquiry." The goal of her project was to combine art and science in project-based learning. Although part of an... challenged to design, test, and redesign wind turbine blades, defining variables and measuring performance. Their goal was to optimize performance through... hydroelectric. In each model there is more than one variable. For example, the wind farm activity enables the user to select the number of turbines
Development of weighting value for ecodrainage implementation assessment criteria
NASA Astrophysics Data System (ADS)
Andajani, S.; Hidayat, D. P. A.; Yuwono, B. E.
2018-01-01
This research aims to generate a weighting value for each factor and to find the most influential factors for identifying the implementation of the ecodrain concept, using factor loadings and Cronbach's alpha. Drainage problems, especially in urban areas, are becoming more complex and need to be handled as soon as possible. Flood and drought problems cannot be solved by the conventional drainage paradigm (draining runoff as fast as possible to the nearest drainage area). The new, environmentally based drainage paradigm called “ecodrain” can address both flood and drought problems. For optimal results, ecodrain should be applied from the smallest scale (domestic) up to the largest scale (city areas). It is therefore necessary to assess drainage conditions from an environmental standpoint. This research operationalizes the ecodrain concept through guidelines consisting of parameters and assessment criteria, yielding 2 variables, 7 indicators and 63 key factors drawn from previous research and related regulations. The research concludes that the most influential indicator on the technical management variable is the storage system, while on the non-technical management variable it is the government role.
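The abstract names Cronbach's alpha as its reliability tool; the standard computation over a (respondents x items) score matrix can be sketched as below. The response data are invented for illustration.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()      # variance of each item, summed
    total_var = scores.sum(axis=1).var(ddof=1)       # variance of respondents' totals
    return k / (k - 1) * (1.0 - item_var / total_var)

# Three perfectly consistent items: alpha should be 1.
responses = np.array([[1, 1, 1], [2, 2, 2], [4, 4, 4], [5, 5, 5]])
alpha = cronbach_alpha(responses)
```

In practice a threshold such as alpha >= 0.7 is commonly taken to indicate acceptable internal consistency of an indicator's items.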
A conceptual framework for evaluating variable speed generator options for wind energy applications
NASA Technical Reports Server (NTRS)
Reddoch, T. W.; Lipo, T. A.; Hinrichsen, E. N.; Hudson, T. L.; Thomas, R. J.
1995-01-01
Interest in variable speed generating technology has accelerated as greater emphasis is placed on overall efficiency and on superior dynamic and control properties in wind-electric generating systems. This paper reviews variable speed technology options, providing advantages and disadvantages of each. Furthermore, the dynamic properties of variable speed systems are contrasted with synchronous operation. Finally, control properties of variable speed systems are examined.
Vakorin, Vasily A.; Mišić, Bratislav; Krakovska, Olga; McIntosh, Anthony Randal
2011-01-01
Variability in source dynamics across the sources in an activated network may be indicative of how information is processed within the network. Information-theoretic tools allow one not only to characterize local brain dynamics but also to describe interactions between distributed brain activity. This study follows such a framework and explores the relations between signal variability and asymmetry in mutual interdependencies in a data-driven pipeline of non-linear analysis of neuromagnetic sources reconstructed from human magnetoencephalographic (MEG) data collected during a face recognition task. Asymmetry in non-linear interdependencies in the network was analyzed using transfer entropy, which quantifies predictive information transfer between the sources. Variability of the source activity was estimated using multi-scale entropy, quantifying the rate at which information is generated. The empirical results are supported by an analysis of synthetic data based on the dynamics of coupled systems with time delay in coupling. We found that the amount of information transferred from one source to another was correlated with the difference in variability between the dynamics of these two sources, with the directionality of net information transfer depending on the time scale at which the sample entropy was computed. The results based on synthetic data suggest that both time delay and strength of coupling can contribute to the relations between variability of brain signals and information transfer between them. Our findings support previous attempts to characterize the functional organization of the activated brain based on a combination of non-linear dynamics and temporal features of brain connectivity, such as time delay. PMID:22131968
Competency-Based, Time-Variable Education in the Health Professions: Crossroads.
Lucey, Catherine R; Thibault, George E; Ten Cate, Olle
2018-03-01
Health care systems around the world are transforming to align with the needs of 21st-century patients and populations. Transformation must also occur in the educational systems that prepare the health professionals who deliver care, advance discovery, and educate the next generation of physicians in these evolving systems. Competency-based, time-variable education, a comprehensive educational strategy guided by the roles and responsibilities that health professionals must assume to meet the needs of contemporary patients and communities, has the potential to catalyze optimization of educational and health care delivery systems. By designing educational and assessment programs that require learners to meet specific competencies before transitioning between the stages of formal education and into practice, this framework assures the public that every physician is capable of providing high-quality care. By engaging learners as partners in assessment, competency-based, time-variable education prepares graduates for careers as lifelong learners. While the medical education community has embraced the notion of competencies as a guiding framework for educational institutions, the structure and conduct of formal educational programs remain more aligned with a time-based, competency-variable paradigm. The authors outline the rationale behind this recommended shift to a competency-based, time-variable education system. They then introduce the other articles included in this supplement to Academic Medicine, which summarize the history of, theories behind, examples demonstrating, and challenges associated with competency-based, time-variable education in the health professions.
Generating a Simulated Fluid Flow over a Surface Using Anisotropic Diffusion
NASA Technical Reports Server (NTRS)
Rodriguez, David L. (Inventor); Sturdza, Peter (Inventor)
2016-01-01
A fluid-flow simulation over a computer-generated surface is generated using a diffusion technique. The surface is comprised of a surface mesh of polygons. A boundary-layer fluid property is obtained for a subset of the polygons of the surface mesh. A gradient vector is determined for a selected polygon, the selected polygon belonging to the surface mesh but not one of the subset of polygons. A maximum and minimum diffusion rate is determined along directions determined using the gradient vector corresponding to the selected polygon. A diffusion-path vector is defined between a point in the selected polygon and a neighboring point in a neighboring polygon. An updated fluid property is determined for the selected polygon using a variable diffusion rate, the variable diffusion rate based on the minimum diffusion rate, maximum diffusion rate, and the gradient vector.
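The patent describes interpolating a variable diffusion rate from a gradient vector. One plausible reading of a single update step between a polygon and one neighbour is sketched below; the function and parameter names are ours, and the interpolation rule (slow diffusion along the gradient, fast across it) is an illustrative choice, not the claim language.

```python
import numpy as np

def anisotropic_update(phi_p, phi_n, path_vec, grad, r_min, r_max, dt=0.1):
    """One explicit diffusion step of a boundary-layer property between a
    selected polygon (phi_p) and a neighbouring polygon (phi_n).

    The effective rate is interpolated between r_min (diffusion-path vector
    aligned with the gradient) and r_max (path perpendicular to it), so the
    property spreads preferentially along iso-lines of the field.
    """
    path = np.asarray(path_vec, dtype=float)
    g = np.asarray(grad, dtype=float)
    gn = np.linalg.norm(g)
    if gn == 0.0:                     # no gradient: diffuse at the maximum rate
        rate = r_max
    else:
        cos = abs(path @ g) / (np.linalg.norm(path) * gn)
        rate = r_max - (r_max - r_min) * cos
    return phi_p + dt * rate * (phi_n - phi_p)

# Path aligned with the gradient: rate collapses to r_min (here zero).
aligned = anisotropic_update(0.0, 1.0, (1.0, 0.0), (1.0, 0.0), r_min=0.0, r_max=1.0, dt=1.0)
# Path perpendicular to the gradient: rate is r_max.
perpendicular = anisotropic_update(0.0, 1.0, (1.0, 0.0), (0.0, 1.0), r_min=0.0, r_max=1.0, dt=1.0)
```

In a full solver this step would be applied over every polygon-neighbour pair of the surface mesh and iterated to steady state.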
Malaria control under unstable dynamics: reactive vs. climate-based strategies.
Baeza, Andres; Bouma, Menno J; Dhiman, Ramesh; Pascual, Mercedes
2014-01-01
In areas of the world where malaria prevails under unstable conditions, attacking the adult vector population through insecticide-based Indoor Residual Spraying (IRS) is the most common method for controlling epidemics. Defined in policy guidance, the use of Annual Parasitic Incidence (API) is an important tool for assessing the effectiveness of control and for planning new interventions. To investigate the consequences that a policy based on API in previous seasons might have on the population dynamics of the disease and on control itself in regions of low and seasonal transmission, we formulate a mathematical malaria model that couples epidemiologic and vector dynamics with IRS intervention. This model is parameterized for a low transmission and semi-arid region in northwest India, where epidemics are driven by high rainfall variability. We show that this type of feedback mechanism in control strategies can generate transient cycles in malaria even in the absence of environmental variability, and that this tendency to cycle can in turn limit the effectiveness of control in the presence of such variability. Specifically, for realistic rainfall conditions and over a range of control intensities, the effectiveness of such 'reactive' intervention is compared to that of an alternative strategy based on rainfall and therefore vector variability. Results show that the efficacy of intervention is strongly influenced by rainfall variability and the type of policy implemented. In particular, under an API 'reactive' policy, high vector populations can coincide more frequently with low control coverage, and in so doing generate large unexpected epidemics and decrease the likelihood of elimination. These results highlight the importance of incorporating information on climate variability, rather than previous incidence, in planning IRS interventions in regions of unstable malaria. 
These findings are discussed in the more general context of elimination and other low transmission regions such as highlands.
Generating variable and random schedules of reinforcement using Microsoft Excel macros.
Bancroft, Stacie L; Bourret, Jason C
2008-01-01
Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time. Generating schedule values for variable and random reinforcement schedules can be difficult. The present article describes the steps necessary to write macros in Microsoft Excel that will generate variable-ratio, variable-interval, variable-time, random-ratio, random-interval, and random-time reinforcement schedule values.
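The article targets Excel VBA macros; the same schedule values can be sketched in a few lines of Python (the function names and the spread rule are ours, not the article's). A variable-ratio list spreads values around a target mean, while a random-ratio list makes reinforcement equally probable (1/mean) after every response, giving geometrically distributed run lengths:

```python
import random

def variable_ratio(mean, n, spread=0.5, seed=0):
    """n variable-ratio values drawn uniformly around the target mean ratio."""
    rng = random.Random(seed)
    lo, hi = int(mean * (1 - spread)), int(mean * (1 + spread))
    return [rng.randint(max(1, lo), hi) for _ in range(n)]

def random_ratio(mean, n, seed=0):
    """n random-ratio values: constant 1/mean reinforcement probability
    per response, so each value is the count of responses until payoff."""
    rng = random.Random(seed)
    values = []
    for _ in range(n):
        count = 1
        while rng.random() > 1.0 / mean:
            count += 1
        values.append(count)
    return values

vr = variable_ratio(10, 20)       # 20 VR-10 schedule values
rr = random_ratio(10, 5000)       # 5000 RR-10 values; sample mean near 10
```

Variable-interval and random-interval values follow the same two patterns with seconds substituted for response counts.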
Robson, Andrew; Robson, Fiona
2015-01-01
To identify the combination of variables that explain nurses' continuation intention in the UK National Health Service. This alternative arena has permitted the replication of a private sector Australian study. This study provides understanding about the issues that affect nurse retention in a sector where employee attrition is a key challenge, further exacerbated by an ageing workforce. A quantitative study based on a self-completion survey questionnaire completed in 2010. Nurses employed in two UK National Health Service Foundation Trusts were surveyed and assessed using seven work-related constructs and various demographics including age generation. Through correlation, multiple regression and stepwise regression analysis, the potential combined effect of various explanatory variables on continuation intention was assessed, across the entire nursing cohort and in three age-generation groups. Three variables act in combination to explain continuation intention: work-family conflict, work attachment and importance of work to the individual. This combination of significant explanatory variables was consistent across the three generations of nursing employee. Work attachment was identified as the strongest marginal predictor of continuation intention. Work orientation has a greater impact on continuation intention compared with employer-directed interventions such as leader-member exchange, teamwork and autonomy. UK nurses are homogeneous across the three age-generations regarding explanation of continuation intention, with the significant explanatory measures being recognizably narrower in their focus and more greatly concentrated on the individual. This suggests that differentiated approaches to retention should perhaps not be pursued in this sectoral context. © 2014 John Wiley & Sons Ltd.
VARIABLE TIME-INTERVAL GENERATOR
Gross, J.E.
1959-10-31
This patent relates to a pulse generator and more particularly to a time interval generator wherein the time interval between pulses is precisely determined. The variable time generator comprises two oscillators with one having a variable frequency output and the other a fixed frequency output. A frequency divider is connected to the variable oscillator for dividing its frequency by a selected factor and a counter is used for counting the periods of the fixed oscillator occurring during a cycle of the divided frequency of the variable oscillator. This defines the period of the variable oscillator in terms of that of the fixed oscillator. A circuit is provided for selecting as a time interval a predetermined number of periods of the variable oscillator. The output of the generator consists of a first pulse produced by a trigger circuit at the start of the time interval and a second pulse marking the end of the time interval produced by the same trigger circuit.
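The patent's counting scheme can be illustrated numerically; the oscillator frequencies and divider value below are ours, chosen only for illustration.

```python
def counts_per_divided_cycle(f_fixed_hz, f_variable_hz, divider):
    """Whole fixed-oscillator periods counted during one cycle of the
    variable oscillator's divided-down output. This expresses the variable
    oscillator's period (times the divider) in units of the fixed period,
    which is what the patent's counter records."""
    gate_time = divider / f_variable_hz     # duration of one divided cycle
    return round(gate_time * f_fixed_hz)    # whole fixed periods in the gate

# A 1 MHz fixed reference gated by a 10 kHz variable oscillator divided by 100:
n = counts_per_divided_cycle(1e6, 1e4, 100)
```

Dividing the variable frequency first lengthens the gate, so the count resolves the variable period to one part in (divider x f_fixed / f_variable) rather than one part in (f_fixed / f_variable).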
Mechanisms of phosphene generation in ocular proton therapy as related to space radiation exposure
NASA Astrophysics Data System (ADS)
Chuard, D.; Anthonipillai, V.; Dendale, R.; Nauraye, C.; Khan, E.; Mabit, C.; De Marzi, L.; Narici, L.
2016-08-01
Particle therapy provides an opportunity to study the human response to space radiation in ground-based facilities. On this basis, a study of light flashes analogous to astronauts' phosphenes, reported by patients undergoing ocular proton therapy, has been undertaken. The influence of treatment parameters on phosphene generation was investigated for 430 patients treated for a choroidal melanoma at the proton therapy centre of the Institut Curie (ICPO) in Orsay, France, between 2008 and 2011. Of these patients, 60% report light flashes, which are predominantly (74%) blue. An analysis of variables describing the patient's physiology, properties of the tumour and the dose distribution shows that two groups of tumour and beam variables are correlated with phosphene occurrence. Physiology is found to have no influence on flash triggering. A detailed correlation study suggests a possible twofold mechanism of phosphene generation based on (i) indirect Cerenkov light in the bulk of the eye due to nuclear interactions and radioactive decay and (ii) direct excitation of the nerve fibres in the back of the eye and/or a radical excess near the retina.
Qi, Bing; Lougovski, Pavel; Pooser, Raphael C.; ...
2015-10-21
Continuous-variable quantum key distribution (CV-QKD) protocols based on coherent detection have been studied extensively in both theory and experiment. In all the existing implementations of CV-QKD, both the quantum signal and the local oscillator (LO) are generated from the same laser and propagate through the insecure quantum channel. This arrangement may open security loopholes and limit the potential applications of CV-QKD. In our paper, we propose and demonstrate a pilot-aided feedforward data recovery scheme that enables reliable coherent detection using a “locally” generated LO. Using two independent commercial laser sources and a spool of 25-km optical fiber, we construct a coherent communication system. The variance of the phase noise introduced by the proposed scheme is measured to be 0.04 rad², which is small enough to enable secure key distribution. This technology opens the door for other quantum communication protocols, such as the recently proposed measurement-device-independent CV-QKD, where independent light sources are employed by different users.
A global distributed basin morphometric dataset
NASA Astrophysics Data System (ADS)
Shen, Xinyi; Anagnostou, Emmanouil N.; Mei, Yiwen; Hong, Yang
2017-01-01
Basin morphometry is vital information for relating storms to hydrologic hazards, such as landslides and floods. In this paper we present the first comprehensive global dataset of distributed basin morphometry at 30 arc seconds resolution. The dataset includes nine prime morphometric variables; in addition we present formulas for generating twenty-one additional morphometric variables based on combinations of the prime variables. The dataset can aid different applications, including studies of land-atmosphere interaction and modelling of floods and droughts for sustainable water management. The validity of the dataset has been confirmed by successfully reproducing Hack's law.
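The paper's own derivation formulas are not reproduced here, but the pattern of combining prime variables into derived ones can be illustrated with two textbook morphometrics. The function names and figures are ours, for illustration only.

```python
def drainage_density(total_channel_length_km, basin_area_km2):
    """Drainage density (km of channel per km^2 of basin), derived from
    two prime variables: total channel length and basin area."""
    return total_channel_length_km / basin_area_km2

def relief_ratio(relief_m, basin_length_m):
    """Relief ratio (dimensionless): basin relief over maximum basin length."""
    return relief_m / basin_length_m

# A hypothetical basin: 120 km of channels over 80 km^2, 900 m of relief
# along a 30 km longest flow path.
dd = drainage_density(120.0, 80.0)      # km per km^2
rr = relief_ratio(900.0, 30000.0)       # dimensionless
```

In a distributed dataset such derived variables are computed per grid cell from the corresponding prime-variable rasters.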
Gaussian Mixture Model of Heart Rate Variability
Costa, Tommaso; Boccignone, Giuseppe; Ferraro, Mario
2012-01-01
Heart rate variability (HRV) is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have been made also with synthetic data generated from different physiologically based models showing the plausibility of the Gaussian mixture parameters. PMID:22666386
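The paper models HRV as a three-component Gaussian mixture; a self-contained expectation-maximization fitter in the same spirit is sketched below with plain NumPy. The data are synthetic and a two-component mixture is used for brevity; this is not the authors' fitting procedure.

```python
import numpy as np

def fit_gmm_1d(x, k=3, iters=200):
    """Plain EM for a one-dimensional Gaussian mixture.

    Returns (weights, means, variances); initialised from data quantiles
    so the run is deterministic.
    """
    x = np.asarray(x, dtype=float)
    w = np.full(k, 1.0 / k)
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)
    var = np.full(k, x.var())
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each sample
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances from responsibilities
        nk = resp.sum(axis=0)
        w = nk / x.size
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
    return w, mu, var

# Synthetic stand-in for an HRV-like bimodal sample.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0.0, 0.5, 500), rng.normal(10.0, 0.5, 500)])
w, mu, var = fit_gmm_1d(data, k=2)
```

With k=3 the fitted component means and weights give the kind of direct spectral interpretation the paper reports for the HRV power spectrum.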
Comparing Resource Adequacy Metrics and Their Influence on Capacity Value: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ibanez, E.; Milligan, M.
2014-04-01
Traditional probabilistic methods have been used to evaluate resource adequacy. The increasing presence of variable renewable generation in power systems presents a challenge to these methods because, unlike thermal units, variable renewable generation levels change over time as they are driven by meteorological events. Thus, capacity value calculations for these resources are often performed using simple rules of thumb. This paper follows the recommendations of the North American Electric Reliability Corporation's Integration of Variable Generation Task Force to include variable generation in the calculation of resource adequacy and compares different reliability metrics. Examples are provided using the Western Interconnection footprint under different variable generation penetrations.
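The probabilistic framework behind such metrics can be illustrated with a toy loss-of-load-expectation (LOLE) calculation that nets variable generation off the load before enumerating thermal outage states. The unit data and load series are invented, and brute-force state enumeration stands in for the convolution methods used on real systems.

```python
import itertools

def lole(loads, vg, units):
    """Loss-of-load expectation (hours) over an hourly load series,
    treating variable generation as a load modifier.

    units: list of (capacity_mw, forced_outage_rate) thermal units.
    """
    expectation = 0.0
    for load, renew in zip(loads, vg):
        net = load - renew                       # net load after variable gen
        lolp = 0.0
        # enumerate thermal unit up/down states (fine for a handful of units)
        for states in itertools.product([0, 1], repeat=len(units)):
            prob, cap = 1.0, 0.0
            for up, (c, fo) in zip(states, units):
                prob *= (1.0 - fo) if up else fo
                cap += c if up else 0.0
            if cap < net:
                lolp += prob
        expectation += lolp
    return expectation

# One 100 MW unit with a 10% forced outage rate, 10 hours of 50 MW load, no VG:
units = [(100.0, 0.1)]
hours_at_risk = lole([50.0] * 10, [0.0] * 10, units)
```

Capacity value (e.g. effective load carrying capability) is then found by asking how much extra load the system can serve with the variable resource added while holding LOLE constant.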
76 FR 3625 - Sunshine Act Meeting Notice
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-20
... Integration of Variable Renewable Generation. ELECTRIC E-1 RM04-7-009 Market-Based Rates for Wholesale Sales of Electric Energy, Capacity and Ancillary Services by Public Utilities. E-2 RM10-20-000 Market-Based..., Eagle Creek Water Resources, LLC, Eagle Creek Land Resources, LLC. CERTIFICATES C-1 CP10-496-000 Cameron...
Pirbhulal, Sandeep; Zhang, Heye; Mukhopadhyay, Subhas Chandra; Li, Chunyue; Wang, Yumei; Li, Guanglin; Wu, Wanqing; Zhang, Yuan-Ting
2015-01-01
Body Sensor Network (BSN) is a network of several associated sensor nodes on, inside or around the human body that monitor vital signals, such as electroencephalogram (EEG), photoplethysmography (PPG) and electrocardiogram (ECG) signals. Each sensor node in a BSN delivers important information; therefore, it is vital to provide data confidentiality and security. All existing approaches to securing BSNs are based on complex cryptographic key generation procedures, which not only demand high resource utilization and computation time but also consume large amounts of energy, power and memory during data transmission. It is therefore essential to develop an energy-efficient and computationally less complex authentication technique for BSNs. In this paper, a novel biometric-based algorithm is proposed that utilizes Heart Rate Variability (HRV) in a simple key generation process to secure BSNs. Our proposed algorithm is compared with three data authentication techniques, namely Physiological Signal based Key Agreement (PSKA), Data Encryption Standard (DES) and Rivest-Shamir-Adleman (RSA). Simulation is performed in Matlab and results suggest that the proposed algorithm is quite efficient in terms of transmission time, average remaining energy and total power consumption. PMID:26131666
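The core idea, deriving shared key material from heart-rate variability that every sensor on the same body can observe, can be caricatured by taking low-order bits of successive RR intervals. This is a toy sketch with invented data, not the authors' algorithm, and omits the reconciliation step real schemes need to tolerate measurement differences between sensors.

```python
def hrv_key(rr_intervals_ms, bits_per_interval=4, key_bits=128):
    """Toy key derivation: keep the least-significant bits of each RR
    interval (the high-variability part of the heartbeat timing)."""
    bits = []
    for rr in rr_intervals_ms:
        lsb = int(rr) & ((1 << bits_per_interval) - 1)
        bits.extend((lsb >> i) & 1 for i in range(bits_per_interval))
        if len(bits) >= key_bits:
            break
    return bits[:key_bits]

# Invented RR series (milliseconds); 32 intervals suffice for 128 bits.
rr_ms = [813, 799, 826, 804, 791, 818, 808, 797] * 5
key = hrv_key(rr_ms)
```

Because both sensors measure the same heartbeats, they derive the same bits without any key exchange, which is where the energy and computation savings over PSKA/DES/RSA-style schemes come from.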
A generator for unique quantum random numbers based on vacuum states
NASA Astrophysics Data System (ADS)
Gabriel, Christian; Wittmann, Christoffer; Sych, Denis; Dong, Ruifang; Mauerer, Wolfgang; Andersen, Ulrik L.; Marquardt, Christoph; Leuchs, Gerd
2010-10-01
Random numbers are a valuable component in diverse applications that range from simulations over gambling to cryptography. The quest for true randomness in these applications has engendered a large variety of different proposals for producing random numbers based on the foundational unpredictability of quantum mechanics. However, most approaches do not consider that a potential adversary could have knowledge about the generated numbers, so the numbers are not verifiably random and unique. Here we present a simple experimental setup based on homodyne measurements that uses the purity of a continuous-variable quantum vacuum state to generate unique random numbers. We use the intrinsic randomness in measuring the quadratures of a mode in the lowest energy vacuum state, which cannot be correlated to any other state. The simplicity of our source and its verifiably unique randomness are important attributes for achieving high-reliability, high-speed and low-cost quantum random number generators.
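The digitization stage of such a generator can be sketched as follows. In the real setup the quadrature values come from a homodyne detector measuring the vacuum state; here simulated Gaussian samples stand in for measured data, and simple median thresholding stands in for the randomness-extraction post-processing a deployed generator would use.

```python
import numpy as np

def quadrature_to_bits(samples):
    """Map quadrature samples to raw bits by thresholding at the sample
    median, a common first digitization step before randomness extraction."""
    samples = np.asarray(samples, dtype=float)
    med = np.median(samples)
    return (samples > med).astype(int)

# Stand-in for measured vacuum-state quadratures: zero-mean Gaussian noise.
rng = np.random.default_rng(42)
quadratures = rng.normal(0.0, 1.0, 10_000)
bits = quadrature_to_bits(quadratures)
```

Thresholding at the median guarantees balanced raw bits; extracting several bits per sample instead requires binning the Gaussian distribution and feeding the result through a randomness extractor.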
Statistical Compression of Wind Speed Data
NASA Astrophysics Data System (ADS)
Tagle, F.; Castruccio, S.; Crippa, P.; Genton, M.
2017-12-01
In this work we introduce a lossy compression approach that utilizes a stochastic wind generator based on a non-Gaussian distribution to reproduce the internal climate variability of daily wind speed as represented by the CESM Large Ensemble over Saudi Arabia. Stochastic wind generators, and stochastic weather generators more generally, are statistical models that aim to match certain statistical properties of the data on which they are trained. They have been used extensively in applications ranging from agricultural models to climate impact studies. In this novel context, the parameters of the fitted model can be interpreted as encoding the information contained in the original uncompressed data. The statistical model is fit to only 3 of the 30 ensemble members, and it adequately captures the variability of the ensemble in terms of seasonal and interannual variability of daily wind speed. To deal with such a large spatial domain, it is partitioned into nine regions, and the model is fit independently to each of these. We further discuss a recent refinement of the model, which relaxes this assumption of regional independence by introducing a large-scale component that interacts with the fine-scale regional effects.
Apelfröjd, Senad; Eriksson, Sandra
2014-01-01
Results from experiments on a tap transformer based grid connection system for a variable speed vertical axis wind turbine are presented. The tap transformer based system topology consists of a passive diode rectifier, DC-link, IGBT inverter, LCL-filter, and tap transformer. Full range variable speed operation is enabled by using the different step-up ratios of a tap transformer. Simulations using MATLAB/Simulink have been performed in order to study the behavior of the system. A full experimental set up of the system has been used in the laboratory study, where a clone of the on-site generator was driven by an induction motor and the system was connected to a resistive load to better evaluate the performance. Furthermore, the system is run and evaluated for realistic wind speeds and variable speed operation. For a more complete picture of the system performance, a case study using real site Weibull parameters is done, comparing different tap selection options. The results show high system efficiency at nominal power and an increase in overall power output for full tap operation in comparison with the base case, a standard transformer. In addition, the loss distribution at different wind speeds is shown, which highlights the dominant losses at low and high wind speeds. Finally, means for further increasing the overall system efficiency are proposed.
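The case study weights power output by a Weibull wind-speed distribution; that expectation can be sketched as a numerical integral of a power curve against the Weibull density. The shape/scale parameters, the flat power curve, and the integration settings below are invented for illustration, not the paper's site data.

```python
import math

def weibull_pdf(v, k, c):
    """Weibull wind-speed density with shape k and scale c (m/s)."""
    return (k / c) * (v / c) ** (k - 1) * math.exp(-((v / c) ** k))

def expected_power(power_curve, k, c, v_max=30.0, dv=0.1):
    """Average electrical power (W) of a machine with output power_curve(v)
    under Weibull-distributed wind speeds, by simple numerical integration."""
    v, total = dv, 0.0
    while v <= v_max:
        total += power_curve(v) * weibull_pdf(v, k, c) * dv
        v += dv
    return total

# Sanity check: a (fictional) machine producing a constant 1 kW at any wind
# speed should average ~1 kW under any Weibull distribution.
p_flat = expected_power(lambda v: 1000.0, k=2.0, c=7.0)
```

Comparing tap-selection options then amounts to evaluating this expectation with the measured system efficiency folded into each option's power curve.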
Coping with Variability in Model-Based Systems Engineering: An Experience in Green Energy
NASA Astrophysics Data System (ADS)
Trujillo, Salvador; Garate, Jose Miguel; Lopez-Herrejon, Roberto Erick; Mendialdua, Xabier; Rosado, Albert; Egyed, Alexander; Krueger, Charles W.; de Sosa, Josune
Model-Based Systems Engineering (MBSE) is an emerging engineering discipline whose driving motivation is to provide support throughout the entire system life cycle. MBSE not only addresses the engineering of software systems but also their interplay with physical systems. Quite frequently, successful systems need to be customized to cater for the concrete and specific needs of customers, end-users, and other stakeholders. To effectively meet this demand, it is vital to have in place mechanisms to cope with the variability, the capacity to change, that such customization requires. In this paper we describe our experience in modeling variability using SysML, a leading MBSE language, for developing a product line of wind turbine systems used for the generation of electricity.
Design Optimization of Gas Generator Hybrid Propulsion Boosters
NASA Technical Reports Server (NTRS)
Weldon, Vincent; Phillips, Dwight; Fink, Larry
1990-01-01
A methodology used in support of a study for NASA/MSFC to optimize the design of a gas generator hybrid propulsion booster for uprating the National Space Transportation System (NSTS) is presented. The objective was to compare alternative configurations for this booster approach, optimizing each candidate concept on different bases, in order to develop data for a trade table on which a final decision was based. The methodology is capable of processing a large number of independent and dependent variables, adjusting the overall subsystem characteristics to arrive at a best-compromise integrated design meeting various specific optimization criteria subject to selected constraints. For each system considered, a detailed weight statement was generated along with preliminary cost and reliability estimates.
A Comparison of Forecast Error Generators for Modeling Wind and Load Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Ning; Diao, Ruisheng; Hafen, Ryan P.
2013-12-18
This paper presents four algorithms to generate random forecast error time series, including a truncated-normal distribution model, a state-space based Markov model, a seasonal autoregressive moving average (ARMA) model, and a stochastic-optimization based model. The error time series are used to create real-time (RT), hour-ahead (HA), and day-ahead (DA) wind and load forecast time series that statistically match historically observed forecasting data sets used for variable generation integration studies. A comparison is made using historical DA load forecasts and actual load values to generate new sets of DA forecasts with similar statistical forecast error characteristics. This paper discusses and compares the capabilities of each algorithm to preserve the characteristics of the historical forecast data sets.
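The simplest of the four approaches, the truncated-normal error generator, can be sketched in a few lines. This is a generic illustration under assumed parameters, not the paper's implementation; the function names and the rejection-sampling choice are hypothetical.

```python
import numpy as np

def truncated_normal_errors(n, sigma, bound, seed=0):
    """Draw n forecast errors from a zero-mean normal distribution,
    truncated to +/- bound, by simple rejection sampling.  A generic
    sketch; the paper's Markov, ARMA, and stochastic-optimization
    models are richer than this."""
    rng = np.random.default_rng(seed)
    errors = np.empty(n)
    for i in range(n):
        e = rng.normal(0.0, sigma)
        while abs(e) > bound:   # resample until inside the truncation bounds
            e = rng.normal(0.0, sigma)
        errors[i] = e
    return errors

def synthetic_forecast(actual, sigma, bound, seed=0):
    """Create a synthetic forecast series by adding truncated-normal
    errors to an actual (observed) series."""
    actual = np.asarray(actual, dtype=float)
    return actual + truncated_normal_errors(len(actual), sigma, bound, seed)
```

Overlaying such errors on observed load or wind data yields forecast series with a controlled error distribution, which is the basic mechanism all four algorithms share.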
Turbo-generator control with variable valve actuation
Vuk, Carl T [Denver, IA
2011-02-22
An internal combustion engine incorporating a turbo-generator and one or more variably activated exhaust valves. The exhaust valves are adapted to variably release exhaust gases from a combustion cylinder during a combustion cycle to an exhaust system. The turbo-generator is adapted to receive exhaust gases from the exhaust system and rotationally harness energy therefrom to produce electrical power. A controller is adapted to command the exhaust valve to variably open in response to a desired output for the turbo-generator.
Climate variability has a stabilizing effect on the coexistence of prairie grasses
Adler, Peter B.; HilleRisLambers, Janneke; Kyriakidis, Phaedon C.; Guan, Qingfeng; Levine, Jonathan M.
2006-01-01
How expected increases in climate variability will affect species diversity depends on the role of such variability in regulating the coexistence of competing species. Despite theory linking temporal environmental fluctuations with the maintenance of diversity, the importance of climate variability for stabilizing coexistence remains unknown because of a lack of appropriate long-term observations. Here, we analyze three decades of demographic data from a Kansas prairie to demonstrate that interannual climate variability promotes the coexistence of three common grass species. Specifically, we show that (i) the dynamics of the three species satisfy all requirements of “storage effect” theory based on recruitment variability with overlapping generations, (ii) climate variables are correlated with interannual variation in species performance, and (iii) temporal variability increases low-density growth rates, buffering these species against competitive exclusion. Given that environmental fluctuations are ubiquitous in natural systems, our results suggest that coexistence based on the storage effect may be underappreciated and could provide an important alternative to recent neutral theories of diversity. Field evidence for positive effects of variability on coexistence also emphasizes the need to consider changes in both climate means and variances when forecasting the effects of global change on species diversity. PMID:16908862
Chen, Quan; Li, Yaoyu; Seem, John E
2015-09-01
This paper presents a self-optimizing robust control scheme that can maximize the power generation for a variable speed wind turbine with a Doubly-Fed Induction Generator (DFIG) operated in Region 2. A dual-loop control structure is proposed to synergize the conversion from aerodynamic power to rotor power and the conversion from rotor power to electrical power. The outer loop is an Extremum Seeking Control (ESC) based generator torque regulation via electric power feedback. The ESC can search for the optimal generator torque constant to maximize the rotor power without wind measurement or accurate knowledge of the power map. The inner loop is a vector-control based scheme that can both regulate the generator torque requested by the ESC and maximize the conversion from rotor power to grid power. An ℋ∞ controller is synthesized for the inner loop, with performance specifications defined based upon the spectrum of the rotor power obtained by the ESC. The controller is also designed to be robust against variations of some generator parameters. The proposed control strategy is validated via a simulation study based on the synergy of several software packages, including TurbSim and FAST (developed by NREL), Simulink, and SimPowerSystems. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
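The outer-loop idea of searching for the optimal torque constant without a power map can be caricatured with a quasi-static dithered hill-climb. Real extremum seeking uses a sinusoidal dither with high-pass filtering and integration; the finite-difference loop and the toy rotor-power map below are hypothetical stand-ins that only illustrate the principle.

```python
def extremum_seek(power_map, k0, dither, gain, iters):
    """Quasi-static sketch of extremum seeking: perturb the torque
    constant k, estimate the power gradient from the dithered responses,
    and climb it.  Not the paper's ESC implementation."""
    k = k0
    for _ in range(iters):
        grad = (power_map(k + dither) - power_map(k - dither)) / (2 * dither)
        k += gain * grad
    return k

def rotor_power(k):
    """Toy rotor-power map with its maximum at k = 0.8 (illustrative)."""
    return -(k - 0.8) ** 2 + 1.0
```

Because only measured power enters the update, the loop needs neither wind-speed measurement nor knowledge of where the optimum lies, which is the property the abstract highlights.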
Variable Frequency Operations of an Offshore Wind Power Plant with HVDC-VSC: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gevorgian, V.; Singh, M.; Muljadi, E.
2011-12-01
In this paper, a constant Volt/Hz operation is applied to the Type 1 wind turbine generator, and various control aspects of Type 1 generators at the plant level and at the turbine level are investigated. Based on a DOE study, wind power generation may reach 330 GW by 2030 at a penetration level of 20% of total energy production. Of this amount, 54 GW will be generated at offshore wind power plants. The deployment of offshore wind power plants requires power transmission from the plant to the load center inland. Since this transmission requires submarine cable, there is a need to use High-Voltage Direct Current (HVDC) transmission. Otherwise, if the power is transmitted via alternating current, the reactive power generated by the cable capacitance may cause an excessive overvoltage in the middle of the transmission distance, which would require an unnecessarily oversized cable voltage breakdown capability. HVDC is usually required for transmission distances longer than 50 kilometers of submarine cable to be economical. HVDC brings another advantage: it is capable of operating at variable frequency. While the inland substation is operated at 60 Hz, synchronized with the grid, the offshore substation can be operated at variable frequency, thus allowing the wind power plant to be operated at constant Volt/Hz.
Characteristics of tuberculosis patients who generate secondary cases.
Rodrigo, T; Caylà, J A; García de Olalla, P; Galdós-Tangüis, H; Jansà, J M; Miranda, P; Brugal, T
1997-08-01
To determine the characteristics of smear-positive tuberculosis (TB) patients who generate secondary TB cases. Those smear-positive TB patients detected by the Barcelona Tuberculosis Program between 1990 and 1993, and for whom contact studies had been performed, were studied. We analyzed the predictive role of the variables: age, sex, intravenous drug use (IVDU), the presence of the acquired immune deficiency syndrome (AIDS), human immunodeficiency virus (HIV) infection, radiology pattern, district of residence, history of imprisonment, alcoholism, smoking, history of TB, treatment compliance and the number of secondary cases generated. Statistical analysis was based on the logistic regression model, calculating the odds ratios (OR) with 95% confidence intervals (CI). Of the 1079 patients studied, 78 (7.2%) had generated only one secondary case, and 30 (2.8%) two or more. The variables associated with generating two or more secondary cases were: IVDU (P < 0.001; OR = 4.06; CI: 1.80-9.15), cavitary radiology pattern (P = 0.002; OR = 3.69; CI: 1.62-8.43), and age (P = 0.016; OR = 0.98; CI: 0.96-0.99). When we examined those who had generated one or more secondary cases, the following variables were significant: IVDU (P = 0.043; OR = 1.75; CI: 1.02-3.02), cavitary radiology pattern (P < 0.001; OR = 3.07; CI: 1.98-4.77) and age (P < 0.001; OR = 0.98; CI: 0.97-0.99). The study of the contacts of smear-positive TB patients allows us to detect a substantial number of secondary cases. Young adults, those with a cavitary radiology pattern, and IVDU are more likely to generate secondary cases.
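The reported ORs and CIs follow the standard logistic-regression construction: OR = exp(beta) and CI = exp(beta ± 1.96·SE). The coefficient and standard error below are illustrative values chosen to roughly reproduce the reported IVDU estimate, not numbers taken from the paper.

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and (approximate 95%) confidence interval from a
    logistic-regression coefficient beta and its standard error se."""
    return math.exp(beta), (math.exp(beta - z * se), math.exp(beta + z * se))

# Hypothetical coefficient giving an OR near the reported IVDU effect
# (OR about 4, CI about 1.8-9.2); beta and se are illustrative only.
or_ivdu, (ci_lo, ci_hi) = odds_ratio_ci(1.40, 0.415)
```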
Investigation on the possibility of extracting wave energy from the Texas coast
NASA Astrophysics Data System (ADS)
Haces-Fernandez, Francisco
Due to the large and growing energy demand in the Texas Coast area, the generation of electricity from ocean waves is considered very important. The combination of wave energy with offshore wind power is explored as a way to increase power output, obtain synergies, maximize the utilization of assigned marine zones and reduce variability. Previous literature has assessed wave energy generation, combined with wind, in different geographic locations such as California, Ireland and the Azores Islands. In this research project, electric power generation from ocean waves on the Texas Coast was investigated, assessing its potential from the meteorological data provided by five buoys from the National Data Buoy Center of the National Oceanic and Atmospheric Administration, considering the Pelamis 750 kW Wave Energy Converter (WEC) and the Vestas V90 3 MW wind turbine. The power output from wave energy was calculated for the year 2006 using Matlab, and the results at several locations were considered acceptable in terms of total power output, but with high temporal variability. To reduce this variability, wave energy was combined with wind energy, obtaining a significant reduction in the coefficient of variation of the power output. A Matlab-based interface was created to calculate power output and its variability considering data from longer periods of time.
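The variability-reduction argument is statistical: for weakly correlated sources, the coefficient of variation (CV) of the summed output is lower than that of either source alone. The synthetic hourly series below are hypothetical stand-ins (the study used buoy and turbine data); only the CV mechanics are being illustrated.

```python
import numpy as np

def coefficient_of_variation(x):
    """CV = standard deviation / mean, the variability metric the study uses."""
    return np.std(x) / np.mean(x)

rng = np.random.default_rng(42)
hours = 8760
# Hypothetical, loosely Pelamis- and V90-scaled hourly power series (kW);
# synthetic and uncorrelated, not real Texas-coast data.
wave = np.clip(rng.normal(300.0, 150.0, hours), 0.0, 750.0)
wind = np.clip(rng.normal(1200.0, 600.0, hours), 0.0, 3000.0)
combined = wave + wind

cv_wave = coefficient_of_variation(wave)
cv_wind = coefficient_of_variation(wind)
cv_comb = coefficient_of_variation(combined)
# For uncorrelated sources the combined CV falls below each single-source CV.
```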
DOE Office of Scientific and Technical Information (OSTI.GOV)
Milligan, Michael; Frew, Bethany A.; Bloom, Aaron
This paper discusses challenges that relate to assessing and properly incentivizing the resources necessary to ensure a reliable electricity system with growing penetrations of variable generation (VG). The output of VG (primarily wind and solar generation) varies over time and cannot be predicted precisely. Therefore, the energy from VG is not always guaranteed to be available at times when it is most needed. This means that its contribution towards resource adequacy can be significantly less than the contribution from traditional resources. Variable renewable resources also have near-zero variable costs, and with production-based subsidies they may even have negative offer costs. Because variable costs drive the spot price of energy, this can lead to reduced prices, sales, and therefore revenue for all resources within the energy market. The characteristics of VG can also result in increased price volatility as well as the need for more flexibility in the resource fleet in order to maintain system reliability. We explore both traditional and evolving electricity market designs in the United States that aim to ensure resource adequacy and sufficient revenues to recover costs when those resources are needed for long-term reliability. We also investigate how reliability needs may be evolving and discuss how VG may affect future electricity market designs.
Rodríguez-Lera, Francisco J; Matellán-Olivera, Vicente; Conde-González, Miguel Á; Martín-Rico, Francisco
2018-05-01
Generation of autonomous behavior for robots is a general unsolved problem. Users perceive robots as repetitive tools that do not respond to dynamic situations. This research deals with the generation of natural behaviors in assistive service robots for dynamic domestic environments; in particular, a motivational-oriented cognitive architecture to generate more natural behaviors in autonomous robots. The proposed architecture, called HiMoP, is based on three elements: a Hierarchy of needs to define robot drives; a set of Motivational variables connected to robot needs; and a Pool of finite-state machines to run robot behaviors. The first element is inspired by Alderfer's hierarchy of needs, which specifies the variables defined in the motivational component. The pool of finite-state machines implements the available robot actions, and those actions are dynamically selected taking into account the motivational variables and the external stimuli. Thus, the robot is able to exhibit different behaviors even under similar conditions. A customized version of the "Speech Recognition and Audio Detection Test," proposed by the RoboCup Federation, has been used to illustrate how the architecture works and how it dynamically adapts and activates robot behaviors taking into account internal variables and external stimuli.
Generating Variable and Random Schedules of Reinforcement Using Microsoft Excel Macros
Bancroft, Stacie L; Bourret, Jason C
2008-01-01
Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time. Generating schedule values for variable and random reinforcement schedules can be difficult. The present article describes the steps necessary to write macros in Microsoft Excel that will generate variable-ratio, variable-interval, variable-time, random-ratio, random-interval, and random-time reinforcement schedule values. PMID:18595286
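The same schedule values the article generates with Excel macros can be produced programmatically. The Python sketch below is not the article's macro code; the uniform draw for the variable-ratio schedule is one defensible choice among several, and the random-ratio schedule uses the constant per-response probability the abstract describes (i.e., geometric draws).

```python
import random

def variable_ratio(mean_ratio, n, seed=0):
    """n response requirements averaging (approximately) mean_ratio.
    Sketch only: each requirement is drawn uniformly from
    1..2*mean_ratio-1, so the expected value equals mean_ratio."""
    rng = random.Random(seed)
    return [rng.randint(1, 2 * mean_ratio - 1) for _ in range(n)]

def random_ratio(p, n, seed=0):
    """n requirements with a constant per-response reinforcement
    probability p (geometric draws), i.e., a random-ratio schedule
    whose mean requirement is 1/p."""
    rng = random.Random(seed)
    values = []
    for _ in range(n):
        count = 1
        while rng.random() >= p:
            count += 1
        values.append(count)
    return values
```

Interval and time schedules follow the same pattern with seconds in place of response counts.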
Sharpening method of satellite thermal image based on the geographical statistical model
NASA Astrophysics Data System (ADS)
Qi, Pengcheng; Hu, Shixiong; Zhang, Haijun; Guo, Guangmeng
2016-04-01
To improve the effectiveness of thermal sharpening in mountainous regions, paying more attention to the laws of land surface energy balance, a thermal sharpening method based on a geographical statistical model (GSM) is proposed. Explanatory variables were selected from the processes of land surface energy budget and thermal infrared electromagnetic radiation transmission; high spatial resolution (57 m) raster layers were then generated for these variables through spatial simulation or by using other raster data as proxies. Based on this, the locally adapted statistical relationship between brightness temperature (BT) and the explanatory variables, i.e., the GSM, was built at 1026-m resolution using the method of multivariate adaptive regression splines. Finally, the GSM was applied to the high-resolution (57-m) explanatory variables; thus, the high-resolution (57-m) BT image was obtained. This method produced a sharpening result with low error and good visual effect. The method can avoid the blind choice of explanatory variables and remove the dependence on synchronous imagery at visible and near-infrared bands. The influences of the explanatory variable combination, the sampling method, and the residual error correction on sharpening results were analyzed in detail, and their mechanisms are reported herein.
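The fit-at-coarse-resolution, apply-at-fine-resolution workflow can be sketched with a linear stand-in. The paper fits the GSM with multivariate adaptive regression splines; ordinary least squares is used here only to make the two-step structure concrete, and all variable names are hypothetical.

```python
import numpy as np

def fit_gsm(coarse_vars, coarse_bt):
    """Fit a linear stand-in for the GSM: BT ~ explanatory variables at
    coarse (e.g., 1026-m) resolution.  coarse_vars is a list of 1-D
    arrays, one per explanatory variable."""
    X = np.column_stack([np.ones(len(coarse_bt))] + list(coarse_vars))
    coeffs, *_ = np.linalg.lstsq(X, coarse_bt, rcond=None)
    return coeffs

def apply_gsm(coeffs, fine_vars):
    """Apply the fitted relationship to high-resolution (e.g., 57-m)
    explanatory layers to produce the sharpened BT field."""
    X = np.column_stack([np.ones(len(fine_vars[0]))] + list(fine_vars))
    return X @ coeffs
```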
Systematic Approach to Better Understanding Integration Costs: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stark, Gregory B.
2015-09-28
When someone mentions integration costs, thoughts of the costs of integrating renewable generation into an existing system come to mind. We think about how variability and uncertainty can increase power system cycling costs as increasing amounts of wind or solar generation are incorporated into the generation mix. However, seldom do we think about what happens to system costs when new baseload generation is added to an existing system or when generation self-schedules. What happens when a highly flexible combined-cycle plant is added? Do system costs go up, or do they go down? Are other, non-cycling, maintenance costs impacted? In this paper we investigate six technologies and operating practices--including VG, baseload generation, generation mix, gas prices, self-scheduling, and fast-start generation--and how changes in these areas can impact a system's operating costs. This paper provides a working definition of integration costs and four components of variable costs. It describes the study approach and how a production cost modeling-based method was used to determine the cost effects, and, as a part of the study approach section, it describes the test system and data used for the comparisons. Finally, it presents the research findings and, in closing, suggests three areas for future work.
Smart pitch control strategy for wind generation system using doubly fed induction generator
NASA Astrophysics Data System (ADS)
Raza, Syed Ahmed
A smart pitch control strategy for a variable speed doubly fed wind generation system is presented in this thesis. A complete dynamic model of the DFIG system is developed, consisting of the generator, wind turbine, aerodynamics and the converter system. The proposed strategy uses an adaptive neural network to generate optimized controller gains for pitch control. This involves generating the pitch controller parameters using the differential evolution intelligent technique. Training of the back-propagation neural network has been carried out for the development of an adaptive neural network, which tunes the weights of the network according to the system states in a variable wind speed environment. Four cases have been used to test the pitch controller, including step and sinusoidal changes in wind speed; the step changes comprise both step-up and step-down changes. The last case makes use of scaled wind data collected from the wind turbine installed at the King Fahd University beach front. Simulation studies show that the differential evolution based adaptive neural network is capable of generating the appropriate control to deliver the maximum possible aerodynamic power available from the wind to the generator in an efficient manner by minimizing the transients.
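Differential evolution, the gain-tuning technique named above, can be sketched in its classic rand/1/bin form. This is a bare-bones illustration of the algorithm, not the thesis's tuner; the toy cost function and the notion of a known optimum for the (kp, ki) pitch gains are hypothetical.

```python
import random

def differential_evolution(cost, bounds, pop_size=20, f=0.8, cr=0.9,
                           gens=150, seed=0):
    """Bare-bones differential evolution (rand/1/bin) minimizing cost
    over a box given by bounds = [(lo, hi), ...]."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    costs = [cost(ind) for ind in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # Pick three distinct partners and build a mutated trial vector.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            trial = list(pop[i])
            for d in range(dim):
                if rng.random() < cr:
                    v = pop[a][d] + f * (pop[b][d] - pop[c][d])
                    trial[d] = min(max(v, bounds[d][0]), bounds[d][1])
            tc = cost(trial)
            if tc < costs[i]:       # greedy one-to-one selection
                pop[i], costs[i] = trial, tc
    best = min(range(pop_size), key=costs.__getitem__)
    return pop[best], costs[best]

def toy_cost(g):
    """Toy tuning cost: distance of hypothetical (kp, ki) pitch gains
    from an assumed optimum at (2.0, 0.5)."""
    return (g[0] - 2.0) ** 2 + (g[1] - 0.5) ** 2
```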
A Lyapunov based approach to energy maximization in renewable energy technologies
NASA Astrophysics Data System (ADS)
Iyasere, Erhun
This dissertation describes the design and implementation of Lyapunov-based control strategies for the maximization of the power captured by renewable energy harnessing technologies such as (i) a variable speed, variable pitch wind turbine, (ii) a variable speed wind turbine coupled to a doubly fed induction generator, and (iii) a solar power generating system charging a constant voltage battery. First, a torque control strategy is presented to maximize wind energy captured in variable speed, variable pitch wind turbines at low to medium wind speeds. The proposed strategy applies control torque to the wind turbine pitch and rotor subsystems to simultaneously control the blade pitch and tip speed ratio, via the rotor angular speed, to an optimum point at which the capture efficiency is maximum. The control method allows for aerodynamic rotor power maximization without exact knowledge of the wind turbine model. A series of numerical results show that the wind turbine can be controlled to achieve maximum energy capture. Next, a control strategy is proposed to maximize the wind energy captured in a variable speed wind turbine, with an internal induction generator, at low to medium wind speeds. The proposed strategy controls the tip speed ratio, via the rotor angular speed, to an optimum point at which the efficiency constant (or power coefficient) is maximal for a particular blade pitch angle and wind speed by using the generator rotor voltage as a control input. This control method allows for aerodynamic rotor power maximization without exact wind turbine model knowledge. Representative numerical results demonstrate that the wind turbine can be controlled to achieve near maximum energy capture. Finally, a power system consisting of a photovoltaic (PV) array panel, dc-to-dc switching converter, charging a battery is considered wherein the environmental conditions are time-varying. 
A backstepping PWM controller is developed to maximize the power of the solar generating system. The controller tracks a desired array voltage, designed online using an incremental conductance extremum-seeking algorithm, by varying the duty cycle of the switching converter. The stability of the control algorithm is demonstrated by means of Lyapunov analysis. Representative numerical results demonstrate that the grid power system can be controlled to track the maximum power point of the photovoltaic array panel in varying atmospheric conditions. Additionally, the performance of the proposed strategy is compared to the typical maximum power point tracking (MPPT) method of perturb and observe (P&O), where the converter dynamics are ignored, and is shown to yield better results.
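The perturb-and-observe (P&O) baseline the dissertation compares against has a very small core: keep stepping the operating voltage in the direction that last increased power. The function and toy power curve below are hypothetical stand-ins, ignoring converter dynamics exactly as the P&O baseline does.

```python
def perturb_and_observe(power_fn, v0, step, iters):
    """Minimal P&O MPPT sketch.  power_fn stands in for the measured PV
    power at a given array voltage; the loop reverses its perturbation
    direction whenever power drops."""
    v, direction = v0, 1.0
    p_prev = power_fn(v)
    for _ in range(iters):
        v += direction * step
        p = power_fn(v)
        if p < p_prev:          # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

def pv_power(v):
    """Toy concave power curve with its maximum at 30 V (illustrative)."""
    return -(v - 30.0) ** 2 + 900.0
```

Note the characteristic limit cycle: once near the maximum power point, the operating voltage oscillates within one step of the optimum rather than settling, which is one of the drawbacks the Lyapunov-based controller avoids.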
Rojas, David; Kapralos, Bill; Dubrowski, Adam
2016-01-01
Next to practice, feedback is the most important variable in skill acquisition. Feedback can vary in content and in the way it is delivered. Health professions education research has extensively examined the effects of different feedback methodologies. In this paper we compared two types of knowledge of performance (KP) feedback: the first was video-based KP feedback, while the second consisted of computer-generated KP feedback. The results of this study showed that computer-generated performance feedback is more effective than video-based performance feedback, and that the combination of the two feedback methodologies provides trainees with a better understanding.
Chandra, Madhavaiah; Keller, Sascha; Gloeckner, Christian; Bornemann, Benjamin; Marx, Andreas
2007-01-01
The Watson-Crick base pairing of DNA is an advantageous phenomenon that can be exploited when using DNA as a scaffold for directed self-organization of nanometer-sized objects. Several reports have appeared in the literature that describe the generation of branched DNA (bDNA) with variable numbers of arms that self-assembles into predesigned architectures. These bDNA units are generated by using cleverly designed rigid crossover DNA molecules. Alternatively, bDNA can be generated by using synthetic branch points derived from either nucleoside or non-nucleoside building blocks. Branched DNA has scarcely been explored for use in nanotechnology or from self-assembling perspectives. Herein, we wish to report our results for the synthesis, characterization, and assembling properties of asymmetrical bDNA molecules that are able to generate linear and circular bDNA constructs. Our strategy for the generation of bDNA is based on a branching point that makes use of a novel protecting-group strategy. The bDNA units were generated by means of automated DNA synthesis methods and were used to generate novel objects by employing chemical and biological techniques. The entities generated might be useful building blocks for DNA-based nanobiotechnology.
Limits and Economic Effects of Distributed PV Generation in North and South Carolina
NASA Astrophysics Data System (ADS)
Holt, Kyra Moore
The variability of renewable sources, such as wind and solar, when integrated into the electrical system must be compensated by traditional generation sources in order to maintain the constant balance of supply and demand required for grid stability. The goal of this study is to analyze the effects of increasing levels of solar photovoltaic (PV) penetration (in terms of a percentage of annual energy production) on a test grid with similar characteristics to the Duke Energy Carolinas (DEC) and Progress Energy Carolinas (PEC) regions of North and South Carolina. PV production is modeled as entering the system at the distribution level, and regional PV capacity is based on household density. A gridded hourly global horizontal irradiance (GHI) dataset is used to capture the variable nature of PV generation. A unit commitment model (UCM) is then used to determine the hourly dispatch of generators, based on generator parameters and costs, to meet demand. Annual modeled results for six different scenarios are evaluated to determine the technical, environmental and economic effects of varying levels of distributed PV penetration on the system. This study finds that the main limiting factor for PV integration in the DEC and PEC balancing authority regions is the large generating capacity of base-load nuclear plants within the system. This threshold starts to affect system stability at integration levels of 5.7%. System errors, defined by imbalances caused by over- or under-generation with respect to demand, are identified in the model; however, the validity of these errors in a real-world context needs further examination due to the lack of high-frequency irradiance data and modeling limitations. Operational system costs decreased as expected with PV integration, although further research is needed to explore the impacts of the capital costs required to achieve the penetration levels found in this study.
PV system generation was found mainly to displace coal generation, creating a loss of revenue for generator owners. In all scenarios, CO2 emissions were reduced with PV integration. This reduction could be used to meet impending EPA state-specific CO2 emissions targets.
Ouared, Abderrahmane; Montagnon, Emmanuel; Cloutier, Guy
2015-10-21
A method based on adaptive torsional shear waves (ATSW) is proposed to overcome the strong attenuation of shear waves generated by a radiation force in dynamic elastography. During the inward propagation of ATSW, the magnitude of displacements is enhanced due to the convergence of shear waves and constructive interferences. The proposed method consists of generating ATSW fields from the combination of quasi-plane shear wavefronts by considering a linear superposition of displacement maps. Adaptive torsional shear waves were experimentally generated in homogeneous and heterogeneous tissue-mimicking phantoms, and compared to quasi-plane shear wave propagations. Results demonstrated that displacement magnitudes by ATSW could be up to 3 times higher than those obtained with quasi-plane shear waves, that the variability of shear wave speeds was reduced, and that the signal-to-noise ratio of displacements was improved. It was also observed that ATSW could cause mechanical inclusions to resonate in heterogeneous phantoms, which further increased the displacement contrast between the inclusion and the surrounding medium. This method opens a way for the development of new noninvasive tissue characterization strategies based on ATSW in the framework of our previously reported shear wave induced resonance elastography (SWIRE) method proposed for breast cancer diagnosis.
Generating partially correlated noise—A comparison of methods
Hartmann, William M.; Cho, Yun Jin
2011-01-01
There are three standard methods for generating two channels of partially correlated noise: the two-generator method, the three-generator method, and the symmetric-generator method. These methods allow an experimenter to specify a target cross correlation between the two channels, but actual generated noises show statistical variability around the target value. Numerical experiments were done to compare the variability for those methods as a function of the number of degrees of freedom. The results of the experiments quantify the stimulus uncertainty in diverse binaural psychoacoustical experiments: incoherence detection, perceived auditory source width, envelopment, noise localization/lateralization, and the masking level difference. The numerical experiments found that when the elemental generators have unequal powers, the different methods all have similar variability. When the powers are constrained to be equal, the symmetric-generator method has much smaller variability than the other two. PMID:21786899
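The two-generator method named above mixes two independent noises so that one channel is a weighted sum of the other channel and a fresh noise. A minimal sketch (Gaussian noise assumed; the paper's analysis also covers the realized-correlation variability this construction exhibits):

```python
import numpy as np

def two_generator_noise(n, rho, seed=0):
    """Two-generator method: from independent noises n1, n2, build two
    channels whose target cross-correlation is rho.  The realized sample
    correlation varies around rho, which is the variability the paper
    quantifies."""
    rng = np.random.default_rng(seed)
    n1, n2 = rng.standard_normal(n), rng.standard_normal(n)
    left = n1
    right = rho * n1 + np.sqrt(1.0 - rho ** 2) * n2
    return left, right
```

With finite n the sample correlation scatters around the target; shrinking n (fewer degrees of freedom) widens that scatter, which is the effect compared across the three methods.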
Demonstration of Automatically-Generated Adjoint Code for Use in Aerodynamic Shape Optimization
NASA Technical Reports Server (NTRS)
Green, Lawrence; Carle, Alan; Fagan, Mike
1999-01-01
Gradient-based optimization requires accurate derivatives of the objective function and constraints. These gradients may have previously been obtained by manual differentiation of analysis codes, symbolic manipulators, finite-difference approximations, or existing automatic differentiation (AD) tools such as ADIFOR (Automatic Differentiation in FORTRAN). Each of these methods has certain deficiencies, particularly when applied to complex, coupled analyses with many design variables. Recently, a new AD tool called ADJIFOR (Automatic Adjoint Generation in FORTRAN), based upon ADIFOR, was developed and demonstrated. Whereas ADIFOR implements forward-mode (direct) differentiation throughout an analysis program to obtain exact derivatives via the chain rule of calculus, ADJIFOR implements the reverse-mode counterpart of the chain rule to obtain exact adjoint form derivatives from FORTRAN code. Automatically-generated adjoint versions of the widely-used CFL3D computational fluid dynamics (CFD) code and an algebraic wing grid generation code were obtained with just a few hours processing time using the ADJIFOR tool. The codes were verified for accuracy and were shown to compute the exact gradient of the wing lift-to-drag ratio, with respect to any number of shape parameters, in about the time required for 7 to 20 function evaluations. The codes have now been executed on various computers with typical memory and disk space for problems with up to 129 x 65 x 33 grid points, and for hundreds to thousands of independent variables. These adjoint codes are now used in a gradient-based aerodynamic shape optimization problem for a swept, tapered wing. For each design iteration, the optimization package constructs an approximate, linear optimization problem, based upon the current objective function, constraints, and gradient values. 
The optimizer subroutines are called within a design loop employing the approximate linear problem until an optimum shape is found, the design loop limit is reached, or no further design improvement is possible due to active design variable bounds and/or constraints. The resulting shape parameters are then used by the grid generation code to define a new wing surface and computational grid. The lift-to-drag ratio and its gradient are computed for the new design by the automatically-generated adjoint codes. Several optimization iterations may be required to find an optimum wing shape. Results from two sample cases will be discussed. The reader should note that this work primarily represents a demonstration of the use of automatically-generated adjoint code within an aerodynamic shape optimization. As such, little significance is placed upon the actual optimization results, relative to the method for obtaining the results.
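The forward/reverse distinction described above can be illustrated with a toy reverse-mode differentiator. This is a minimal sketch, not ADJIFOR (which transforms FORTRAN source); it only shows the adjoint sweep of the chain rule on a tiny expression tree, with all names invented for illustration:

```python
import math

# Toy reverse-mode AD: each Var records its parents and the local partial
# derivatives; backward() sweeps the chain rule from output to inputs.
class Var:
    def __init__(self, value, parents=()):
        self.value = value        # primal value
        self.parents = parents    # pairs of (parent Var, local partial)
        self.adjoint = 0.0        # accumulated dF/d(this variable)

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

def sin(x):
    return Var(math.sin(x.value), [(x, math.cos(x.value))])

def backward(out):
    """Propagate adjoints back to every input (correct for expression
    trees; a full tool performs a topologically ordered sweep)."""
    out.adjoint = 1.0
    stack = [out]
    while stack:
        node = stack.pop()
        for parent, local in node.parents:
            parent.adjoint += node.adjoint * local
            stack.append(parent)

# f(x, y) = x*y + sin(x); exact gradients: df/dx = y + cos(x), df/dy = x
x, y = Var(2.0), Var(3.0)
f = x * y + sin(x)
backward(f)
```

One reverse sweep yields the derivative of the single output with respect to every input, which is why the adjoint form scales well to "hundreds to thousands of independent variables".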
Vogel, Michael W; Giorni, Andrea; Vegh, Viktor; Pellicer-Guridi, Ruben; Reutens, David C
2016-01-01
We studied the feasibility of generating the variable magnetic fields required for ultra-low field nuclear magnetic resonance relaxometry with dynamically adjustable permanent magnets. Our motivation was to replace traditional electromagnets with distributed permanent magnets, increasing system portability. The finite element method (COMSOL®) was employed for the numerical study of a small permanent magnet array to calculate achievable magnetic field strength, homogeneity, switching time and magnetic forces. A manually operated prototype was simulated and constructed to validate the numerical approach and to verify the generated magnetic field. A concentric small permanent magnet array can be used to generate strong sample pre-polarisation and variable measurement fields for ultra-low field relaxometry via simple prescribed magnet rotations. Using the array, it is possible to achieve a pre-polarisation field strength above 100 mT and variable measurement fields ranging from 20 to 50 μT with 200 ppm absolute field homogeneity within a field-of-view of 5 × 5 × 5 centimetres. A dynamic small permanent magnet array can generate multiple highly homogeneous magnetic fields required in ultra-low field nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI) instruments. This design can significantly reduce the volume and energy requirements of traditional systems based on electromagnets, improving portability considerably.
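The 200 ppm figure quoted above is an absolute homogeneity over the field-of-view. A minimal sketch of how such a figure could be computed from sampled field values (the sample values below are invented, not the array's actual measurements):

```python
# Hypothetical sketch: absolute field homogeneity in parts per million,
# taken as the peak-to-peak spread over the mean field magnitude.

def homogeneity_ppm(field_samples):
    """(max - min) / mean of the sampled field magnitudes, in ppm."""
    b_max, b_min = max(field_samples), min(field_samples)
    b_mean = sum(field_samples) / len(field_samples)
    return (b_max - b_min) / b_mean * 1e6

# e.g. a 30 uT measurement field sampled at a few points of the FOV (tesla);
# a 5 nT spread on 30 uT comes out near 170 ppm:
samples = [30.000e-6, 30.003e-6, 29.998e-6, 30.001e-6]
h = homogeneity_ppm(samples)
```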
DOE Office of Scientific and Technical Information (OSTI.GOV)
Broeer, Torsten; Fuller, Jason C.; Tuffner, Francis K.
2014-01-31
Electricity generation from wind power and other renewable energy sources is increasing, and their variability introduces new challenges to the power system. The emergence of smart grid technologies in recent years has seen a paradigm shift in redefining the electrical system of the future, in which controlled response of the demand side is used to balance fluctuations and intermittencies from the generation side. This paper presents a modeling framework for an integrated electricity system where loads become an additional resource. The agent-based model represents a smart grid power system integrating generators, transmission, distribution, loads and market. The model incorporates generator and load controllers, allowing suppliers and demanders to bid into a Real-Time Pricing (RTP) electricity market. The modeling framework is applied to represent a physical demonstration project conducted on the Olympic Peninsula, Washington, USA, and validation simulations are performed using actual dynamic data. Wind power is then introduced into the power generation mix illustrating the potential of demand response to mitigate the impact of wind power variability, primarily through thermostatically controlled loads. The results also indicate that effective implementation of Demand Response (DR) to assist integration of variable renewable energy resources requires a diversity of loads to ensure functionality of the overall system.
A new class of variable capacitance generators based on the dielectric fluid transducer
NASA Astrophysics Data System (ADS)
Duranti, Mattia; Righi, Michele; Vertechy, Rocco; Fontana, Marco
2017-11-01
This paper introduces the novel concept of the dielectric fluid transducer (DFT), an electrostatic variable capacitance transducer made of compliant electrodes, solid dielectrics and a dielectric fluid with variable volume and/or shape. The DFT can be employed in actuator mode and generator mode. In this work, DFTs are studied as electromechanical generators able to convert oscillating mechanical energy into direct current electricity. Besides illustrating the working principle of dielectric fluid generators (DFGs), we introduce different architectural implementations and provide considerations on limitations and best practices for their design. Additionally, the proposed concept is demonstrated in a preliminary experimental test campaign conducted on a first DFG prototype. During experimental tests, a maximum energy per cycle of 4.6 mJ and a maximum power of 0.575 mW were converted, with a conversion efficiency of up to 30%. These figures correspond to converted energy densities of 63.8 mJ/g with respect to the displaced dielectric fluid and 179.0 mJ/g with respect to the mass of the solid dielectric. This promising performance can be improved considerably through the optimization of device topology and dimensions, as well as by the adoption of higher-performance conductive and dielectric materials.
NASA Astrophysics Data System (ADS)
Khan, M. Ijaz; Hayat, Tasawar; Alsaedi, Ahmed
2018-02-01
This work models and computes viscous fluid flow with variable properties over a rotating stretchable disk. The rotating flow is generated by a nonlinearly stretching rotating surface. Nonlinear thermal radiation and heat generation/absorption are studied. The fluid is electrically conducting under a constant applied magnetic field; polarization effects and the induced magnetic field are neglected. Attention is focused on the entropy generation rate and the Bejan number, which depend on the velocity and thermal fields. The von Kármán approach is utilized to convert the partial differential expressions into ordinary ones. These expressions are non-dimensionalized, and numerical results are obtained for the flow variables. The effects of the magnetic parameter, Prandtl number, radiative parameter, heat generation/absorption parameter, and slip parameter on the velocity and temperature fields, as well as on the entropy generation rate and Bejan number, are discussed. Drag forces (radial and tangential) and heat transfer rates are calculated and discussed. Furthermore, the entropy generation rate is a decreasing function of the magnetic variable and the Reynolds number. The effect of the Bejan number on the entropy generation rate is opposite to that of the magnetic variable. Opposite behavior of the heat transfer is also observed for varying estimates of the radiative and slip variables.
Correction of microplate location effects improves performance of the thrombin generation test.
Liang, Yideng; Woodle, Samuel A; Shibeko, Alexey M; Lee, Timothy K; Ovanesov, Mikhail V
2013-07-05
The microplate-based thrombin generation test (TGT) is widely used as a clinical measure of global hemostatic potential and has become a useful tool for the control of drug potency and quality by drug manufacturers. However, the convenience of the microtiter plate technology can be deceiving: microplate assays are prone to location-based variability in different parts of the microtiter plate. In this report, we evaluated the well-to-well consistency of the TGT variant specifically applied to the quantitative detection of thrombogenic substances in an immune globulin product. We also studied the utility of previously described microplate layout designs in the TGT experiment. The location of the sample on the microplate (location effect) contributes to the variability of TGT measurements. Manual pipetting techniques and applications of the TGT to the evaluation of procoagulant enzymatic substances are especially sensitive. The effects were not sensitive to temperature or to the choice of microplate reader. The smallest location effects were observed with an automated dispenser-based calibrated thrombogram instrument. Even for an automated instrument, the use of a calibration curve resulted in up to 30% bias in thrombogenic potency assignment. A symmetrical version of the strip-plot layout was demonstrated to minimize location artifacts even under worst-case conditions. Strip-plot layouts are required for quantitative thrombin-generation based bioassays used in the biotechnological field.
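A location effect of the kind described can be made visible by placing identical replicates across the plate and comparing positional means. The sketch below is a generic illustration of that diagnostic, not the authors' analysis, and the plate readings are invented:

```python
# Sketch: estimate a per-column "location effect" from identical replicates
# spread across a microtiter plate, as percent deviation from the grand mean.

def column_effects(plate):
    grand = sum(sum(row) for row in plate) / sum(len(row) for row in plate)
    n_cols = len(plate[0])
    effects = []
    for c in range(n_cols):
        col_mean = sum(row[c] for row in plate) / len(plate)
        effects.append(100.0 * (col_mean - grand) / grand)
    return effects

# Identical samples in a 3-row x 4-column block; edge columns read high,
# mimicking an edge/location artifact (numbers invented):
plate = [[110, 100, 101, 112],
         [111, 99, 100, 113],
         [109, 101, 99, 111]]
effects = column_effects(plate)
```

A symmetrical strip-plot layout counters such artifacts by arranging each sample so that positional biases average out across its replicates.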
An Optimization-Based Approach to Injector Element Design
NASA Technical Reports Server (NTRS)
Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar; Turner, Jim (Technical Monitor)
2000-01-01
An injector optimization methodology, method i, is used to investigate optimal design points for gaseous oxygen/gaseous hydrogen (GO2/GH2) injector elements. A swirl coaxial element and an unlike impinging element (a fuel-oxidizer-fuel triplet) are used to facilitate the study. The elements are optimized in terms of design variables such as fuel pressure drop, DELTA P(sub f), oxidizer pressure drop, DELTA P(sub o), combustor length, L(sub comb), and full cone swirl angle, theta, (for the swirl element) or impingement half-angle, alpha, (for the impinging element) at a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency, ERE, wall heat flux, Q(sub w), injector heat flux, Q(sub inj), relative combustor weight, W(sub rel), and relative injector cost, C(sub rel), are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for both element types. Method i is then used to generate response surfaces for each dependent variable for both types of elements. Desirability functions based on dependent variable constraints are created and used to facilitate development of composite response surfaces representing the five dependent variables in terms of the input variables. Three examples illustrating the utility and flexibility of method i are discussed in detail for each element type. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after addition of each variable and the effect each variable has on the element design is illustrated. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Secondly, using the composite response surface that includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others.
Here, method i is used to enable objective trade studies on design issues such as component life and thrust to weight ratio. Finally, combining results from both elements to simulate a trade study, thrust-to-weight trends are illustrated and examined in detail.
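The desirability-function step used to merge several responses into one composite objective can be sketched in the common Derringer style: scale each response to [0, 1], then take a weighted geometric mean. This is a generic illustration under assumed bounds and weights, not the paper's actual functions or values:

```python
# Derringer-style desirability: scale a response to [0, 1], then combine
# several responses with a weighted geometric mean. Any response scoring
# zero (unacceptable) forces the composite score to zero.

def desirability_larger_is_better(y, low, high):
    """0 at or below `low`, 1 at or above `high`, linear in between."""
    if y <= low:
        return 0.0
    if y >= high:
        return 1.0
    return (y - low) / (high - low)

def composite(desirabilities, weights):
    total_w = sum(weights)
    prod = 1.0
    for d, w in zip(desirabilities, weights):
        if d == 0.0:
            return 0.0
        prod *= d ** (w / total_w)
    return prod

# e.g. energy-release efficiency weighted twice as heavily as a second,
# already-normalized response (bounds and weights invented):
d_ere = desirability_larger_is_better(0.97, 0.90, 0.99)
d_other = desirability_larger_is_better(0.60, 0.0, 1.0)
score = composite([d_ere, d_other], [2.0, 1.0])
```

Unequal weights shift the optimum toward the emphasized responses, which is exactly how the trade studies above are steered.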
Generating a Simulated Fluid Flow Over an Aircraft Surface Using Anisotropic Diffusion
NASA Technical Reports Server (NTRS)
Rodriguez, David L. (Inventor); Sturdza, Peter (Inventor)
2013-01-01
A fluid-flow simulation over a computer-generated aircraft surface is generated using a diffusion technique. The surface comprises a surface mesh of polygons. A boundary-layer fluid property is obtained for a subset of the polygons of the surface mesh. A pressure-gradient vector is determined for a selected polygon, the selected polygon belonging to the surface mesh but not to the subset of polygons. Maximum and minimum diffusion rates are determined along directions derived from the pressure-gradient vector corresponding to the selected polygon. A diffusion-path vector is defined between a point in the selected polygon and a neighboring point in a neighboring polygon. An updated fluid property is determined for the selected polygon using a variable diffusion rate, the variable diffusion rate based on the minimum diffusion rate, the maximum diffusion rate, and the angular difference between the diffusion-path vector and the pressure-gradient vector.
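The variable diffusion rate described in the claim can be sketched as a blend between the minimum and maximum rates governed by the angle between the diffusion-path vector and the pressure-gradient vector. The cosine-squared blend below is an assumption for illustration only, not the patent's actual formula:

```python
import math

# Anisotropic blend: assumed maximum diffusion rate along the pressure
# gradient, minimum rate perpendicular to it (illustrative choice only).

def variable_diffusion_rate(d_min, d_max, path_vec, grad_vec):
    dot = sum(p * g for p, g in zip(path_vec, grad_vec))
    norm_p = math.sqrt(sum(p * p for p in path_vec))
    norm_g = math.sqrt(sum(g * g for g in grad_vec))
    cos_theta = dot / (norm_p * norm_g)
    return d_min + (d_max - d_min) * cos_theta ** 2

# Path aligned with the gradient -> maximum rate; perpendicular -> minimum.
aligned = variable_diffusion_rate(0.1, 1.0, (1.0, 0.0), (2.0, 0.0))
perp = variable_diffusion_rate(0.1, 1.0, (0.0, 1.0), (2.0, 0.0))
```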
Real-time fuzzy inference based robot path planning
NASA Technical Reports Server (NTRS)
Pacini, Peter J.; Teichrow, Jon S.
1990-01-01
This project addresses the problem of adaptive trajectory generation for a robot arm. Conventional trajectory generation involves computing a path in real time to minimize a performance measure such as expended energy. This method can be computationally intensive, and it may yield poor results if the trajectory is weakly constrained. Typically some implicit constraints are known, but cannot be encoded analytically. The alternative approach used here is to formulate domain-specific knowledge, including implicit and ill-defined constraints, in terms of fuzzy rules. These rules utilize linguistic terms to relate input variables to output variables. Since the fuzzy rulebase is determined off-line, only high-level, computationally light processing is required in real time. Potential applications for adaptive trajectory generation include missile guidance and various sophisticated robot control tasks, such as automotive assembly, high speed electrical parts insertion, stepper alignment, and motion control for high speed parcel transfer systems.
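A fuzzy rulebase of the kind described, linguistic rules evaluated with cheap arithmetic suitable for real time, can be sketched in a few lines. The rules, membership shapes, and variable names below are invented for illustration; they are not the project's actual rulebase:

```python
# Toy fuzzy inference: triangular memberships, three linguistic rules, and
# Sugeno-style weighted-average defuzzification over the fired rules.

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# IF error IS negative THEN command = -1; IF zero THEN 0; IF positive THEN +1
rules = [(lambda e: tri(e, -2.0, -1.0, 0.0), -1.0),
         (lambda e: tri(e, -1.0, 0.0, 1.0), 0.0),
         (lambda e: tri(e, 0.0, 1.0, 2.0), 1.0)]

def infer(error):
    num = den = 0.0
    for membership, output in rules:
        w = membership(error)       # degree to which the rule fires
        num += w * output
        den += w
    return num / den if den else 0.0

command = infer(0.25)   # partially fires the "zero" and "positive" rules
```

Because the rulebase is fixed off-line, the on-line cost is a handful of multiplies and adds per rule, which is the computational lightness the abstract emphasizes.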
NASA Astrophysics Data System (ADS)
Francois, Baptiste; Martino, Sara; Tofte, Lena; Hingray, Benoit; Mo, Birger; Creutin, Jean-Dominique
2017-04-01
Thanks to its huge water storage capacity, Norway has an excess of energy generation at annual scale, although significant regional disparity exists. On average, the Mid-Norway region has an energy deficit and needs to import more electricity than it exports. We show that this energy deficit can be reduced with an increase in wind generation and transmission line capacity, even in future climate scenarios where both mean annual temperature and precipitation are changed. For the considered scenarios, the deficit observed in winter, i.e. when electricity consumption and prices are high, disappears. At the annual scale, the deficit behavior depends more on future changes in precipitation. Another consequence of changes in wind production and transmission capacity is the modification of electricity exchanges with neighboring regions, in terms of average, variability and seasonality. Keywords: Variable renewable energy, Wind, Hydro, Energy balance, Energy market
Model Predictive Control-based Optimal Coordination of Distributed Energy Resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayhorn, Ebony T.; Kalsi, Karanjit; Lian, Jianming
2013-01-07
Distributed energy resources, such as renewable energy resources (wind, solar), energy storage and demand response, can be used to complement conventional generators. The uncertainty and variability due to high penetration of wind makes reliable system operations and controls challenging, especially in isolated systems. In this paper, an optimal control strategy is proposed to coordinate energy storage and diesel generators to maximize wind penetration while maintaining system economics and normal operation performance. The goals of the optimization problem are to minimize fuel costs and maximize the utilization of wind while considering equipment life of generators and energy storage. Model predictive control (MPC) is used to solve a look-ahead dispatch optimization problem and the performance is compared to an open loop look-ahead dispatch problem. Simulation studies are performed to demonstrate the efficacy of the closed loop MPC in compensating for uncertainties and variability caused in the system.
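The dispatch problem being solved can be sketched with a greatly simplified one-pass heuristic: serve the load forecast with wind first, then storage, then diesel, over a short horizon. A real MPC re-solves an optimization at every step with updated forecasts; the merit order and all numbers below are invented for illustration:

```python
# Simplified stand-in for the look-ahead dispatch: wind first, then storage,
# then diesel; surplus wind charges the storage. All units are arbitrary.

def dispatch(load, wind, soc, soc_max, diesel_max):
    """One pass over the horizon; returns the diesel schedule and final SOC."""
    diesel_sched = []
    for l, w in zip(load, wind):
        net = l - w                        # residual demand after wind
        if net <= 0:
            soc = min(soc_max, soc - net)  # charge storage with surplus
            diesel_sched.append(0.0)
            continue
        from_storage = min(net, soc)       # discharge storage first
        soc -= from_storage
        diesel = min(net - from_storage, diesel_max)
        diesel_sched.append(diesel)
    return diesel_sched, soc

load = [5.0, 6.0, 7.0, 6.0]
wind = [7.0, 5.0, 2.0, 6.0]
diesel_sched, soc = dispatch(load, wind, soc=1.0, soc_max=4.0, diesel_max=10.0)
```

The closed-loop MPC in the paper improves on such an open-loop pass precisely because it can correct for forecast errors as they are revealed.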
Climate Change Impacts at Department of Defense
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotamarthi, Rao; Wang, Jiali; Zoebel, Zach
This project is aimed at providing the U.S. Department of Defense (DoD) with a comprehensive analysis of the uncertainty associated with generating climate projections at the regional scale that can be used by stakeholders and decision makers to quantify and plan for the impacts of future climate change at specific locations. The merits and limitations of commonly used downscaling models, ranging from simple to complex, are compared, and their appropriateness for application at installation scales is evaluated. Downscaled climate projections are generated at selected DoD installations using dynamic and statistical methods, with an emphasis on generating probability distributions of climate variables and their associated uncertainties. The selection of sites, variables, and parameters for downscaling was based on a comprehensive understanding of the current and projected roles that weather and climate play in operating, maintaining, and planning DoD facilities and installations.
Optimization of a GO2/GH2 Swirl Coaxial Injector Element
NASA Technical Reports Server (NTRS)
Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar
1999-01-01
An injector optimization methodology, method i, is used to investigate optimal design points for a gaseous oxygen/gaseous hydrogen (GO2/GH2) swirl coaxial injector element. The element is optimized in terms of design variables such as fuel pressure drop, DELTA P(sub f), oxidizer pressure drop, DELTA P(sub o), combustor length, L(sub comb), and full cone swirl angle, theta, for a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency, ERE, wall heat flux, Q(sub w), injector heat flux, Q(sub inj), relative combustor weight, W(sub rel), and relative injector cost, C(sub rel), are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for 180 combinations of input variables. Method i is then used to generate response surfaces for each dependent variable. Desirability functions based on dependent variable constraints are created and used to facilitate development of composite response surfaces representing some, or all, of the five dependent variables in terms of the input variables. Two examples illustrating the utility and flexibility of method i are discussed in detail. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after addition of each variable and the effect each variable has on the design is shown. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Secondly, using the composite response surface that includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others. Here, method i is used to enable objective trade studies on design issues such as component life and thrust to weight ratio.
Generating partially correlated noise--a comparison of methods.
Hartmann, William M; Cho, Yun Jin
2011-07-01
There are three standard methods for generating two channels of partially correlated noise: the two-generator method, the three-generator method, and the symmetric-generator method. These methods allow an experimenter to specify a target cross correlation between the two channels, but actual generated noises show statistical variability around the target value. Numerical experiments were done to compare the variability for those methods as a function of the number of degrees of freedom. The results of the experiments quantify the stimulus uncertainty in diverse binaural psychoacoustical experiments: incoherence detection, perceived auditory source width, envelopment, noise localization/lateralization, and the masking level difference. The numerical experiments found that when the elemental generators have unequal powers, the different methods all have similar variability. When the powers are constrained to be equal, the symmetric-generator method has much smaller variability than the other two. © 2011 Acoustical Society of America
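The two-generator method referenced above is commonly written as mixing two independent noises so that the target cross correlation is rho; the sample correlation of any finite realization then scatters around that target, which is the variability the paper quantifies. A minimal sketch of one common form of the construction (this is an illustration, not the paper's exact implementation):

```python
import math
import random

# Two-generator construction: y = rho*x + sqrt(1 - rho^2)*z with x, z
# independent unit-variance Gaussians gives E[corr(x, y)] = rho.

def correlated_pair(n, rho, rng):
    x = [rng.gauss(0.0, 1.0) for _ in range(n)]
    z = [rng.gauss(0.0, 1.0) for _ in range(n)]
    y = [rho * xi + math.sqrt(1.0 - rho * rho) * zi for xi, zi in zip(x, z)]
    return x, y

def sample_corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

rng = random.Random(0)
x, y = correlated_pair(20000, 0.6, rng)
r = sample_corr(x, y)   # close to, but not exactly, the 0.6 target
```

Repeating this with fewer degrees of freedom widens the scatter of `r` around the target, which is the stimulus uncertainty at issue in the binaural experiments listed above.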
Kupek, Emil
2006-03-15
Structural equation modelling (SEM) has been increasingly used in medical statistics for solving a system of related regression equations. However, a great obstacle for its wider use has been its difficulty in handling categorical variables within the framework of generalised linear models. A large data set with a known structure among two related outcomes and three independent variables was generated to investigate the use of Yule's transformation of odds ratio (OR) into Q-metric by (OR-1)/(OR+1) to approximate Pearson's correlation coefficients between binary variables whose covariance structure can be further analysed by SEM. Percent of correctly classified events and non-events was compared with the classification obtained by logistic regression. The performance of SEM based on Q-metric was also checked on a small (N = 100) random sample of the data generated and on a real data set. SEM successfully recovered the generated model structure. SEM of real data suggested a significant influence of a latent confounding variable which would have not been detectable by standard logistic regression. SEM classification performance was broadly similar to that of the logistic regression. The analysis of binary data can be greatly enhanced by Yule's transformation of odds ratios into estimated correlation matrix that can be further analysed by SEM. The interpretation of results is aided by expressing them as odds ratios which are the most frequently used measure of effect in medical statistics.
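The transformation at the heart of the paper is given explicitly: Q = (OR - 1)/(OR + 1). A minimal sketch of applying it to a 2x2 table (the table counts are invented for illustration):

```python
# Yule's Q: map an odds ratio into [-1, 1] as an approximation to the
# correlation between two binary variables, as used in the SEM approach above.

def yule_q(odds_ratio):
    return (odds_ratio - 1.0) / (odds_ratio + 1.0)

def odds_ratio(a, b, c, d):
    """OR = (a*d)/(b*c) for the 2x2 contingency table [[a, b], [c, d]]."""
    return (a * d) / (b * c)

# e.g. the table [[30, 10], [10, 30]] gives OR = 9 and Q = 0.8:
q = yule_q(odds_ratio(30, 10, 10, 30))
```

A matrix of such Q values over all pairs of binary variables can then stand in for a correlation matrix as input to the SEM fit.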
On-chip continuous-variable quantum entanglement
NASA Astrophysics Data System (ADS)
Masada, Genta; Furusawa, Akira
2016-09-01
Entanglement is an essential feature of quantum theory and the core of the majority of quantum information science and technologies. Quantum computing is one of the most important fruits of quantum entanglement and requires not only a bipartite entangled state but also more complicated multipartite entanglement. In previous experimental works to demonstrate various entanglement-based quantum information processing, light has been extensively used. Experiments utilizing such a complicated state need highly complex optical circuits to propagate optical beams and a high level of spatial interference between different light beams to generate quantum entanglement or to efficiently perform balanced homodyne measurement. Current experiments have been performed in conventional free-space optics with large numbers of optical components and a relatively large-sized optical setup. Therefore, they are limited in stability and scalability. Integrated photonics offers new tools and additional capabilities for manipulating light in quantum information technology. Owing to integrated waveguide circuits, it is possible to stabilize and miniaturize complex optical circuits and achieve high interference of light beams. Integrated circuits were first developed for discrete-variable systems and then applied to continuous-variable systems. In this article, we review the currently developed scheme for generation and verification of continuous-variable quantum entanglement such as Einstein-Podolsky-Rosen beams using a photonic chip where waveguide circuits are integrated. This includes balanced homodyne measurement of a squeezed state of light. As a simple example, we also review an experiment for generating discrete-variable quantum entanglement using integrated waveguide circuits.
Integrated Force Method Solution to Indeterminate Structural Mechanics Problems
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Hopkins, Dale A.; Halford, Gary R.
2004-01-01
Strength of materials problems have been classified into determinate and indeterminate problems. Determinate analysis, based primarily on the equilibrium concept, is well understood. Solutions of indeterminate problems require additional compatibility conditions, and comprehension of these conditions has been less complete. A solution to an indeterminate problem has traditionally been generated by manipulating the equilibrium concept, either by rewriting it in the displacement variables or through the cutting-and-closing-gap technique of the redundant force method. Such improvisation of compatibility has made analysis cumbersome. The authors have researched the compatibility theory, so that solutions can be generated with equal emphasis on the equilibrium and compatibility concepts. This technique is called the Integrated Force Method (IFM). Forces are the primary unknowns of IFM; displacements are back-calculated from forces. The IFM equations can be manipulated to obtain the Dual Integrated Force Method (IFMD), in which displacement is the primary variable and force is back-calculated. The subject is introduced through the response variables (force, deformation, displacement) and the underlying concepts (equilibrium equation, force-deformation relation, deformation-displacement relation, and compatibility condition). Mechanical load, temperature variation, and support settling are equally emphasized. The basic theory is discussed, and a set of examples illustrates the new concepts. IFM- and IFMD-based finite element methods are introduced for simple problems.
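The IFM idea, forces as primary unknowns determined from equilibrium plus compatibility, with displacements back-calculated, can be illustrated on the smallest indeterminate system: two parallel springs sharing one load. This is a textbook-level sketch with invented numbers, not the authors' finite element formulation:

```python
# One-redundant example solved in the IFM spirit: equilibrium and a
# compatibility condition together fix the member forces; displacement is
# then back-calculated from a force.

def two_parallel_springs(k1, k2, p):
    # Equilibrium:    f1 + f2 = p
    # Compatibility:  f1/k1 - f2/k2 = 0   (both springs stretch equally)
    # Solving the 2x2 system in closed form:
    f1 = p * k1 / (k1 + k2)
    f2 = p * k2 / (k1 + k2)
    displacement = f1 / k1      # back-calculated from the force
    return f1, f2, displacement

# e.g. k1 = 1000, k2 = 3000 (force/length), load p = 8 (force units):
f1, f2, u = two_parallel_springs(1000.0, 3000.0, 8.0)
```

Equilibrium alone (one equation, two unknown forces) cannot resolve this system; it is the compatibility condition that makes the force solution unique, which is the point the abstract stresses.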
Web-4D-QSAR: A web-based application to generate 4D-QSAR descriptors.
Ataide Martins, João Paulo; Rougeth de Oliveira, Marco Antônio; Oliveira de Queiroz, Mário Sérgio
2018-06-05
A web-based application is developed to generate 4D-QSAR descriptors using the LQTA-QSAR methodology, based on molecular dynamics (MD) trajectories and topology information retrieved from the GROMACS package. The LQTAGrid module calculates the intermolecular interaction energies at each grid point, considering probes and all aligned conformations resulting from MD simulations. These interaction energies are the independent variables or descriptors employed in a QSAR analysis. A friendly front end web interface, built using the Django framework and Python programming language, integrates all steps of the LQTA-QSAR methodology in a way that is transparent to the user, and in the backend, GROMACS and LQTAGrid are executed to generate 4D-QSAR descriptors to be used later in the process of QSAR model building. © 2018 Wiley Periodicals, Inc.
White, Peter A
2013-01-01
How accurate are explicit judgements about familiar forms of object motion, and how are they made? Participants judged the relations between force exerted in kicking a soccer ball and variables that define the trajectory of the ball: launch angle, maximum height attained, and maximum distance reached. Judgements tended to conform to a simple heuristic that judged force tends to increase as maximum height and maximum distance increase, with launch angle not being influential. Support was also found for the converse prediction, that judged maximum height and distance tend to increase as the amount of force described in the kick increases. The observed judgemental tendencies did not resemble the objective relations, in which force is a function of interactions between the trajectory variables. This adds to a body of research indicating that practical knowledge based on experiences of actions on objects is not available to the processes that generate judgements in higher cognition and that such judgements are generated by simple rules that do not capture the objective interactions between the physical variables.
A multiple-alignment based primer design algorithm for genetically highly variable DNA targets
2013-01-01
Background Primer design for highly variable DNA sequences is difficult, and experimental success requires attention to many interacting constraints. The advent of next-generation sequencing methods allows the investigation of rare variants otherwise hidden deep in large populations, but requires attention to population diversity and primer localization in relatively conserved regions, in addition to recognized constraints typically considered in primer design. Results Design constraints include degenerate sites to maximize population coverage, matching of melting temperatures, optimizing de novo sequence length, finding optimal bio-barcodes to allow efficient downstream analyses, and minimizing risk of dimerization. To facilitate primer design addressing these and other constraints, we created a novel computer program (PrimerDesign) that automates this complex procedure. We show its powers and limitations and give examples of successful designs for the analysis of HIV-1 populations. Conclusions PrimerDesign is useful for researchers who want to design DNA primers and probes for analyzing highly variable DNA populations. It can be used to design primers for PCR, RT-PCR, Sanger sequencing, next-generation sequencing, and other experimental protocols targeting highly variable DNA samples. PMID:23965160
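One of the constraints listed above, matching of melting temperatures, can be estimated at its crudest with the Wallace rule (2 °C per A/T base, 4 °C per G/C base). Real primer design tools, including those for variable targets, use nearest-neighbor thermodynamics instead; this sketch and its example sequences are illustrative only:

```python
# Wallace-rule melting temperature estimate and a simple Tm-matching check,
# one of several interacting constraints in primer design.

def wallace_tm(primer):
    """Tm estimate in Celsius: 2*(A+T) + 4*(G+C). Crude; short primers only."""
    primer = primer.upper()
    at = primer.count("A") + primer.count("T")
    gc = primer.count("G") + primer.count("C")
    return 2 * at + 4 * gc

def tm_matched(p1, p2, tolerance=2):
    """True if the two primers' estimated Tm values are within tolerance degC."""
    return abs(wallace_tm(p1) - wallace_tm(p2)) <= tolerance

tm = wallace_tm("ATGCATGCATGCATGCAT")   # an invented 18-mer
```

Degenerate sites, bio-barcodes, and dimerization risk add further constraints on top of this, which is why the paper automates the joint search.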
Design optimization of gas generator hybrid propulsion boosters
NASA Technical Reports Server (NTRS)
Weldon, Vincent; Phillips, Dwight U.; Fink, Lawrence E.
1990-01-01
A methodology used in support of a contract study for NASA/MSFC to optimize the design of a gas generator hybrid propulsion booster for uprating the National Space Transportation System (NSTS) is presented. The objective was to compare alternative configurations for this booster approach, optimizing each candidate concept on different bases, in order to develop data for a trade table on which a final decision was based. The methodology is capable of processing a large number of independent and dependent variables, adjusting the overall subsystem characteristics to arrive at a best-compromise integrated design that meets various specified optimization criteria subject to selected constraints. For each system considered, a detailed weight statement was generated along with preliminary cost and reliability estimates.
Moreno-Tapia, Sandra Veronica; Vera-Salas, Luis Alberto; Osornio-Rios, Roque Alfredo; Dominguez-Gonzalez, Aurelio; Stiharu, Ion; de Jesus Romero-Troncoso, Rene
2010-01-01
Computer numerically controlled (CNC) machines have evolved to adapt to increasing technological and industrial requirements. To cover these needs, new-generation machines have to perform monitoring strategies by incorporating multiple sensors. Since in most applications the online processing of the variables is essential, the use of smart sensors is necessary. The contribution of this work is the development of a wireless network platform of reconfigurable smart sensors for CNC machine applications, complying with the measurement requirements of new-generation CNC machines. Four different smart sensors are put under test in the network and their corresponding signal processing techniques are implemented in a Field Programmable Gate Array (FPGA)-based sensor node. PMID:22163602
Western Wind and Solar Integration Study Phase 2 (Presentation)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lew, D.; Brinkman, G.; Ibanez, E.
This presentation accompanies Phase 2 of the Western Wind and Solar Integration Study, a follow-on to Phase 1, which examined the operational impacts of high penetrations of variable renewable generation on the electric power system in the West and was one of the largest variable generation studies to date. High penetrations of variable generation can induce cycling of fossil-fueled generators. Cycling leads to wear-and-tear costs and changes in emissions. Phase 2 calculated these costs and emissions, and simulated grid operations for a year to investigate the detailed impact of variable generation on the fossil-fueled fleet. The presentation highlights the scope of the study and results.
Matrix Converter Interface for a Wind Energy Conversion System: Issues and Limitations
NASA Astrophysics Data System (ADS)
Patki, Chetan; Agarwal, Vivek
2009-08-01
Variable speed grid-connected wind energy systems sometimes involve an AC-AC power electronic interface between the generator and the grid. The matrix converter is an attractive option for such applications. Variable speed operation of the wind generator demands a variable-voltage, variable-frequency supply at the generator terminal, and a matrix converter is used in this work to generate such a supply. The matrix converter can also be appropriately controlled to compensate the grid for non-linear, reactive loads. However, any change of power factor on the grid side is reflected in the voltage magnitude on the wind generator side; it is highlighted that this may conflict with the maximum power point tracking control requirements. All the results of this work are presented.
Multiple and variable speed electrical generator systems for large wind turbines
NASA Technical Reports Server (NTRS)
Andersen, T. S.; Hughes, P. S.; Kirschbaum, H. S.; Mutone, G. A.
1982-01-01
A cost effective method to achieve increased wind turbine generator energy conversion and other operational benefits through variable speed operation is presented. Earlier studies of multiple and variable speed generators in wind turbines were extended for evaluation in the context of a specific large sized conceptual design. System design and simulation have defined the costs and performance benefits which can be expected from both two speed and variable speed configurations.
A grid spacing control technique for algebraic grid generation methods
NASA Technical Reports Server (NTRS)
Smith, R. E.; Kudlinski, R. A.; Everton, E. L.
1982-01-01
A technique which controls the spacing of grid points in algebraically defined coordinate transformations is described. The technique is based on the generation of control functions which map a uniformly distributed computational grid onto parametric variables defining the physical grid. The control functions are smoothed cubic splines. Sets of control points are input for each coordinate direction to outline the control functions. Smoothed cubic spline functions are then generated to approximate the input data. The technique works best in an interactive graphics environment where control inputs and grid displays are nearly instantaneous. The technique is illustrated with the two-boundary grid generation algorithm.
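The control-function idea can be sketched in a few lines. The control points here are invented, and a monotone PCHIP spline stands in for the paper's smoothed cubic splines: a control function maps a uniform computational coordinate xi onto the parametric variable s, clustering grid points where the function is flat.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

xi_ctrl = np.array([0.0, 0.25, 0.5, 0.75, 1.0])  # control-point inputs
s_ctrl = np.array([0.0, 0.05, 0.2, 0.55, 1.0])   # desired stretching
control = PchipInterpolator(xi_ctrl, s_ctrl)      # smooth, monotone fit

xi = np.linspace(0.0, 1.0, 21)  # uniformly distributed computational grid
s = control(xi)                 # parametric values defining the physical grid

print(s[1] - s[0] < s[-1] - s[-2])  # True: points cluster near s = 0
```

Because the spline interpolates the control points monotonically, the endpoints of the physical grid are preserved exactly while the interior spacing follows the outlined control function.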
NASA Astrophysics Data System (ADS)
Fatichi, S.; Ivanov, V. Y.; Caporali, E.
2013-04-01
This study extends a stochastic downscaling methodology to generation of an ensemble of hourly time series of meteorological variables that express possible future climate conditions at a point-scale. The stochastic downscaling uses general circulation model (GCM) realizations and an hourly weather generator, the Advanced WEather GENerator (AWE-GEN). Marginal distributions of factors of change are computed for several climate statistics using a Bayesian methodology that can weight GCM realizations based on the model relative performance with respect to a historical climate and a degree of disagreement in projecting future conditions. A Monte Carlo technique is used to sample the factors of change from their respective marginal distributions. As a comparison with traditional approaches, factors of change are also estimated by averaging GCM realizations. With either approach, the derived factors of change are applied to the climate statistics inferred from historical observations to re-evaluate parameters of the weather generator. The re-parameterized generator yields hourly time series of meteorological variables that can be considered to be representative of future climate conditions. In this study, the time series are generated in an ensemble mode to fully reflect the uncertainty of GCM projections, climate stochasticity, as well as uncertainties of the downscaling procedure. Applications of the methodology in reproducing future climate conditions for the periods of 2000-2009, 2046-2065 and 2081-2100, using the period of 1962-1992 as the historical baseline are discussed for the location of Firenze (Italy). The inferences of the methodology for the period of 2000-2009 are tested against observations to assess reliability of the stochastic downscaling procedure in reproducing statistics of meteorological variables at different time scales.
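The factor-of-change step can be caricatured in a few lines. AWE-GEN itself handles many statistics and a full hourly generator; the distribution, its parameters, and the historical value below are invented stand-ins, not values from the Firenze case study.

```python
import numpy as np

rng = np.random.default_rng(0)

hist_mean = 2.4  # historical climate statistic (e.g., mean precipitation), assumed value
factors = rng.normal(loc=1.08, scale=0.05, size=1000)  # assumed marginal distribution
future_means = hist_mean * factors  # ensemble of perturbed statistics

print(future_means.shape)  # (1000,)
```

Each sampled factor re-parameterizes the weather generator, so the resulting ensemble of hourly series reflects both GCM projection uncertainty (via the marginal distribution) and climate stochasticity (via the generator itself).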
Vibration control of an energy regenerative seat suspension with variable external resistance
NASA Astrophysics Data System (ADS)
Ning, Donghong; Sun, Shuaishuai; Du, Haiping; Li, Weihua; Zhang, Nong
2018-06-01
In this paper, an energy regenerative seat suspension with a variable external resistance is proposed and built, and a semi-active controller for its vibration control is also designed and validated. The energy regenerative seat suspension is built with a three-phase generator and a gear reducer, which are installed at the centre of the seat suspension's scissors structure, and the vibration energy is directly harvested from the rotary movement of the suspension's scissors structure. The electromagnetic torque of the semi-active seat suspension actuator is controlled by an external variable resistor. An integrated model including the seat suspension's kinematics and the generator is built and proven to match the test results very well. A simplified experimental phenomenon model is also built based on the test results for the controller design. A state feedback H∞ controller is proposed for the regenerative seat suspension's semi-active vibration control. The proposed regenerative seat suspension and its controller are validated with both simulations and experiments. A well-tuned passive seat suspension is applied to evaluate the regenerative seat's performance. Based on ISO 2631-1, the frequency-weighted root mean square (FW-RMS) acceleration of the proposed seat suspension shows a 22.84% reduction when compared with the passive one, which indicates the improvement of ride comfort. At the same time, the generated RMS power is 1.21 W. The proposed regenerative seat suspension can greatly improve the driver's ride comfort and has the potential to be developed into a self-powered semi-active system.
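The comfort metric reduces to an RMS over frequency-weighted acceleration. A toy sketch follows; the traces and amplitudes are invented, and the ISO 2631-1 weighting filter itself is omitted.

```python
import numpy as np

def rms(a):
    """Root mean square of an (already frequency-weighted) acceleration trace."""
    return float(np.sqrt(np.mean(np.square(a))))

t = np.linspace(0.0, 10.0, 1001)
passive = 0.9 * np.sin(2 * np.pi * 2.0 * t)  # passive seat, m/s^2 (made up)
regen = 0.7 * np.sin(2 * np.pi * 2.0 * t)    # regenerative semi-active seat

reduction = 100.0 * (1.0 - rms(regen) / rms(passive))
print(round(reduction, 2))  # 22.22 for these made-up amplitudes
```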
Thermoelectric power generator for variable thermal power source
Bell, Lon E; Crane, Douglas Todd
2015-04-14
Traditional power generation systems using thermoelectric power generators are designed to operate most efficiently for a single operating condition. The present invention provides a power generation system in which the characteristics of the thermoelectrics, the flow of the thermal power, and the operational characteristics of the power generator are monitored and controlled such that higher operating efficiencies and/or higher output powers can be maintained with variable thermal power input. Such a system is particularly beneficial in variable thermal power source systems, such as recovering power from the waste heat generated in the exhaust of combustion engines.
Variable Cycle Intake for Reverse Core Engine
NASA Technical Reports Server (NTRS)
Chandler, Jesse M (Inventor); Staubach, Joseph B (Inventor); Suciu, Gabriel L (Inventor)
2016-01-01
A gas generator for a reverse core engine propulsion system has a variable cycle intake for the gas generator, which variable cycle intake includes a duct system. The duct system is configured for being selectively disposed in a first position and a second position, wherein free stream air is fed to the gas generator when in the first position, and fan stream air is fed to the gas generator when in the second position.
Cai, Jing; Read, Paul W; Baisden, Joseph M; Larner, James M; Benedict, Stanley H; Sheng, Ke
2007-11-01
To evaluate the error in four-dimensional computed tomography (4D-CT) maximal intensity projection (MIP)-based lung tumor internal target volume determination using a simulation method based on dynamic magnetic resonance imaging (dMRI). Eight healthy volunteers and six lung tumor patients underwent a 5-min MRI scan in the sagittal plane to acquire dynamic images of lung motion. A MATLAB program was written to generate re-sorted dMRI using 4D-CT acquisition methods (RedCAM) by segmenting and rebinning the MRI scans. The maximal intensity projection images were generated from RedCAM and dMRI, and the errors in the MIP-based internal target area (ITA) from RedCAM (ε), compared with those from dMRI, were determined and correlated with the subjects' respiratory variability (ν). Maximal intensity projection-based ITAs from RedCAM were comparatively smaller than those from dMRI in both phantom studies (ε = -21.64% ± 8.23%) and lung tumor patient studies (ε = -20.31% ± 11.36%). The errors in MIP-based ITA from RedCAM correlated linearly (ε = -5.13ν - 6.71, r² = 0.76) with the subjects' respiratory variability. Because of the low temporal resolution and retrospective re-sorting, 4D-CT might not accurately depict the excursion of a moving tumor. Using a 4D-CT MIP image to define the internal target volume might therefore cause underdosing and an increased risk of subsequent treatment failure. Patient-specific respiratory variability might also be a useful predictor of the 4D-CT-induced error in MIP-based internal target volume determination.
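The reported linear fit between MIP-based ITA error and respiratory variability can be written directly as a predictor. The coefficients are taken from the abstract; the example input value is arbitrary.

```python
def predicted_error_pct(nu: float) -> float:
    """Reported fit: error (percent) = -5.13 * nu - 6.71, with r^2 = 0.76."""
    return -5.13 * nu - 6.71

print(round(predicted_error_pct(2.0), 2))  # -16.97
```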
An inverse approach to perturb historical rainfall data for scenario-neutral climate impact studies
NASA Astrophysics Data System (ADS)
Guo, Danlu; Westra, Seth; Maier, Holger R.
2018-01-01
Scenario-neutral approaches are being used increasingly for climate impact assessments, as they allow water resource system performance to be evaluated independently of climate change projections. An important element of these approaches is the generation of perturbed series of hydrometeorological variables that form the inputs to hydrologic and water resource assessment models, with most scenario-neutral studies to-date considering only shifts in the average and a limited number of other statistics of each climate variable. In this study, a stochastic generation approach is used to perturb not only the average of the relevant hydrometeorological variables, but also attributes such as the intermittency and extremes. An optimization-based inverse approach is developed to obtain hydrometeorological time series with uniform coverage across the possible ranges of rainfall attributes (referred to as the 'exposure space'). The approach is demonstrated on a widely used rainfall generator, WGEN, for a case study at Adelaide, Australia, and is shown to be capable of producing evenly-distributed samples over the exposure space. The inverse approach expands the applicability of the scenario-neutral approach in evaluating a water resource system's sensitivity to a wider range of plausible climate change scenarios.
USDA-ARS?s Scientific Manuscript database
Numerous soil erosion models compute concentrated flow hydraulics based on the Manning–Strickler equation (v = k_St · R^(2/3) · I^(1/2)), even though the range of application to rill flow is unclear. Unconfined rill morphologies generate local friction effects and consequently spatially variable rill roughn...
Rios Piedra, Edgar A; Taira, Ricky K; El-Saden, Suzie; Ellingson, Benjamin M; Bui, Alex A T; Hsu, William
2016-02-01
Brain tumor analysis is moving towards volumetric assessment of magnetic resonance imaging (MRI), providing a more precise description of disease progression to better inform clinical decision-making and treatment planning. While a multitude of segmentation approaches exist, inherent variability in the results of these algorithms may incorrectly indicate changes in tumor volume. In this work, we present a systematic approach to characterize variability in tumor boundaries that utilizes equivalence tests as a means to determine whether a tumor volume has significantly changed over time. To demonstrate these concepts, 32 MRI studies from 8 patients were segmented using four different approaches (statistical classifier, region-based, edge-based, knowledge-based) to generate different regions of interest representing tumor extent. We showed that across all studies, the average Dice coefficient for the superset of the different methods was 0.754 (95% confidence interval 0.701-0.808) when compared to a reference standard. We illustrate how variability obtained by different segmentations can be used to identify significant changes in tumor volume between sequential time points. Our study demonstrates that variability is an inherent part of interpreting tumor segmentation results and should be considered as part of the interpretation process.
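The agreement measure used above is the Dice coefficient between two binary segmentation masks. A minimal sketch with invented toy masks (1 = tumor, 0 = background):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient: 2|A∩B| / (|A| + |B|); 1.0 for two empty masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

m1 = np.array([[1, 1, 0], [1, 0, 0]])
m2 = np.array([[1, 1, 0], [0, 0, 0]])
print(dice(m1, m2))  # 0.8
```

Computing this pairwise across the four segmentation approaches gives the kind of variability band the authors use to decide whether a volume change between time points is significant.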
Method of operating a thermoelectric generator
Reynolds, Michael G; Cowgill, Joshua D
2013-11-05
A method for operating a thermoelectric generator supplying a variable-load component includes commanding the variable-load component to operate at a first output and determining a first load current and a first load voltage to the variable-load component while operating at the commanded first output. The method also includes commanding the variable-load component to operate at a second output and determining a second load current and a second load voltage to the variable-load component while operating at the commanded second output. The method includes calculating a maximum power output of the thermoelectric generator from the determined first load current and voltage and the determined second load current and voltage, and commanding the variable-load component to operate at a third output. The commanded third output is configured to draw the calculated maximum power output from the thermoelectric generator.
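A hedged sketch of the two-point estimate implied by the method above: model the generator as a Thevenin source (V = Voc - R·I), solve for Voc and R from two (current, voltage) load measurements, and take the matched-load maximum power Voc²/(4R). The measurement values are illustrative only, not from the patent.

```python
def max_power(i1: float, v1: float, i2: float, v2: float) -> float:
    r = (v1 - v2) / (i2 - i1)  # internal resistance from the two load points
    voc = v1 + r * i1          # extrapolated open-circuit voltage
    return voc * voc / (4.0 * r)

# Two measurements lying on the line Voc = 12 V, R = 2 ohm:
print(max_power(1.0, 10.0, 3.0, 6.0))  # 18.0
```

The third commanded output in the claimed method would then draw this calculated maximum power from the generator.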
Milligan, Michael; Frew, Bethany A.; Bloom, Aaron; ...
2016-03-22
This paper discusses challenges that relate to assessing and properly incentivizing the resources necessary to ensure a reliable electricity system with growing penetrations of variable generation (VG). The output of VG (primarily wind and solar generation) varies over time and cannot be predicted precisely. Therefore, the energy from VG is not always guaranteed to be available at times when it is most needed. This means that its contribution towards resource adequacy can be significantly less than the contribution from traditional resources. Variable renewable resources also have near-zero variable costs, and with production-based subsidies they may even have negative offer costs. Because variable costs drive the spot price of energy, this can lead to reduced prices, sales, and therefore revenue for all resources within the energy market. The characteristics of VG can also result in increased price volatility as well as the need for more flexibility in the resource fleet in order to maintain system reliability. Furthermore, we explore both traditional and evolving electricity market designs in the United States that aim to ensure resource adequacy and sufficient revenues to recover costs when those resources are needed for long-term reliability. We also investigate how reliability needs may be evolving and discuss how VG may affect future electricity market designs.
The experimental studies of operating modes of a diesel-generator set at variable speed
NASA Astrophysics Data System (ADS)
Obukhov, S. G.; Plotnikov, I. A.; Surkov, M. A.; Sumarokova, L. P.
2017-02-01
A diesel generator set working at variable speed to save fuel is studied. The results of experimental studies of the operating modes of an autonomous diesel generator set are presented. Areas for regulating operating modes are determined. It is demonstrated that transferring the diesel generator set to variable-speed operation of the diesel engine makes it possible to improve the energy efficiency of the autonomous generator source, as well as the environmental and ergonomic performance of the equipment, as compared with general industrial analogues.
Sáez, Carlos; Robles, Montserrat; García-Gómez, Juan M
2017-02-01
Biomedical data may be composed of individuals generated from distinct, meaningful sources. Due to possible contextual biases in the processes that generate data, there may exist an undesirable and unexpected variability among the probability distribution functions (PDFs) of the source subsamples, which, when uncontrolled, may lead to inaccurate or unreproducible research results. Classical statistical methods may have difficulty uncovering such variability when dealing with multi-modal, multi-type, multi-variate data. This work proposes two metrics for the analysis of stability among multiple data sources, robust to the aforementioned conditions, and defined in the context of data quality assessment. Specifically, a global probabilistic deviation and a source probabilistic outlyingness metric are proposed. The first provides a bounded degree of the global multi-source variability, designed as an estimator equivalent to the notion of normalized standard deviation of PDFs. The second provides a bounded degree of the dissimilarity of each source to a latent central distribution. The metrics are based on the projection of a simplex geometrical structure constructed from the Jensen-Shannon distances among the sources' PDFs. The metrics have been evaluated and demonstrated their correct behaviour on a simulated benchmark and with real multi-source biomedical data using the UCI Heart Disease data set. Biomedical data quality assessment based on the proposed stability metrics may improve the efficiency and effectiveness of biomedical data exploitation and research.
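A minimal sketch of the first computational step, pairwise Jensen-Shannon distances among source PDFs. The toy distributions are invented; the simplex projection and the two proposed metrics are not reproduced here.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

sources = np.array([
    [0.20, 0.30, 0.50],  # PDF of source 1
    [0.25, 0.25, 0.50],  # PDF of source 2 (close to source 1)
    [0.60, 0.20, 0.20],  # PDF of source 3 (the outlying source)
])
n = len(sources)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        # jensenshannon returns the JS distance (sqrt of the divergence)
        D[i, j] = D[j, i] = jensenshannon(sources[i], sources[j], base=2)

print(D[0, 1] < D[0, 2])  # True: sources 1 and 2 are the most alike
```

The symmetric distance matrix D is exactly the input the paper embeds as a simplex before deriving the global deviation and per-source outlyingness.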
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frew, Bethany A; Cole, Wesley J; Sun, Yinong
Capacity expansion models (CEMs) are widely used to evaluate the least-cost portfolio of electricity generators, transmission, and storage needed to reliably serve demand over the evolution of many years or decades. Various CEM formulations are used to evaluate systems ranging in scale from states or utility service territories to national or multi-national systems. CEMs can be computationally complex, and to achieve acceptable solve times, key parameters are often estimated using simplified methods. In this paper, we focus on two of these key parameters associated with the integration of variable generation (VG) resources: capacity value and curtailment. We first discuss common modeling simplifications used in CEMs to estimate capacity value and curtailment, many of which are based on a representative subset of hours that can miss important tail events or which require assumptions about the load and resource distributions that may not match actual distributions. We then present an alternate approach that captures key elements of chronological operation over all hours of the year without the computationally intensive economic dispatch optimization typically employed within more detailed operational models. The updated methodology characterizes (1) the contribution of VG to system capacity during high-load and net-load hours, (2) the curtailment level of VG, and (3) the potential reductions in curtailments enabled through deployment of storage and more flexible operation of select thermal generators. We apply this alternate methodology to an existing CEM, the Regional Energy Deployment System (ReEDS). Results demonstrate that this alternate approach provides more accurate estimates of capacity value and curtailments by explicitly capturing system interactions across all hours of the year. This approach could be applied more broadly to CEMs at many different scales where hourly resource and load data are available, greatly improving the representation of challenges associated with the integration of variable generation resources.
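A toy sketch of the two parameters discussed above, computed on invented hourly series: capacity value as the mean available VG in the highest net-load hours, and curtailment when available VG exceeds load minus an assumed must-run thermal minimum. None of the numbers are from ReEDS.

```python
import numpy as np

rng = np.random.default_rng(1)
hours = 8760
load = 800.0 + 200.0 * rng.random(hours)  # MW of demand
vg = 300.0 * rng.random(hours)            # MW of available wind + solar
must_run = 700.0                          # MW; assumed inflexible minimum

net_load = load - vg
top = np.argsort(net_load)[-100:]         # the 100 highest net-load hours
capacity_value = vg[top].mean()           # MW of VG credited toward adequacy

headroom = load - must_run                # room left for VG each hour
curtailed = np.maximum(vg - headroom, 0.0).sum()  # MWh curtailed per year
print(capacity_value < vg.mean())  # True: VG output is low at peak net load
```

Because high net-load hours are, by construction, hours when VG output is low, the chronological all-hours calculation credits VG with far less capacity than its annual average, which is the tail effect representative-hour methods can miss.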
Control system design for the MOD-5A 7.3 MW wind turbine generator
NASA Technical Reports Server (NTRS)
Barton, Robert S.; Hosp, Theodore J.; Schanzenbach, George P.
1995-01-01
This paper provides descriptions of the requirements analysis, hardware development and software development phases of the Control System design for the MOD-5A 7.3 MW Wind Turbine Generator. The system, designed by General Electric Company, Advanced Energy Programs Department, under contract DEN 3-153 with NASA Lewis Research Center and DOE, provides real time regulation of rotor speed by control of both generator torque and rotor torque. A variable speed generator system is used to provide both airgap torque control and reactive power control. The wind rotor is designed with segmented ailerons which are positioned to control blade torque. The central component of the control system, selected early in the design process, is a programmable controller used for sequencing, alarm monitoring, communication, and real time control. Development of requirements for use of aileron controlled blades and a variable speed generator required an analytical simulation that combined drivetrain, tower and blade elastic modes with wind disturbances and control behavior. An orderly two phase plan was used for controller software development. A microcomputer based turbine simulator was used to facilitate hardware and software integration and test.
NASA Technical Reports Server (NTRS)
Schoenman, L.
1983-01-01
A database was generated that relates candidate design variables, such as injector type, acoustic cavity configuration, chamber length, fuel film-cooling, etc., to operational characteristics such as combustion efficiency, combustion stability, carbon deposition, and chamber gas-side heat flux.
How Adolescents Counterargue Television Beer Advertisements: Implications for Education Efforts.
ERIC Educational Resources Information Center
Slater, Michael D.; Rouner, Donna; Domenech-Rodriguez, Melanie; Beauvais, Frederick; Murphy, Kevin; Estes, Emily
1998-01-01
Examined types of counterarguments generated by Anglo and Latino adolescents exposed to television beer ads, noting counterargument differences based on demographic and behavioral variables. Questionnaires and comments from the students indicated that without any cues, they responded with counterarguments, though counterarguments represented only…
How Nano are Nanocomposites (Preprint)
2007-02-01
with λ being the wavelength of the radiation. Based on this Fourier transform, r and q are conjugate variables. Although Eqs. (1) and (2) are...generation material is produced from porcellanite, a mineral rich in amorphous silica that is found in the Negev desert in southern Israel (Dimosil
NASA Astrophysics Data System (ADS)
Sato, Daiki; Saitoh, Hiroumi
This paper proposes a new control method for reducing fluctuation of power system frequency by smoothing the active power output of a wind farm. The proposal is based on modulating the rotational kinetic energy of variable speed wind power generators through the power converters between the permanent magnet synchronous generators (PMSG) and the transmission lines. In this paper, the proposed control is called Fluctuation Absorption by Flywheel Characteristics control (FAFC). The FAFC can be easily implemented by adding the wind farm output signal to the Maximum Power Point Tracking control signal through a feedback control loop. In order to verify the effectiveness of the FAFC control, a simulation study was carried out. In the study, it was assumed that a wind farm consisting of PMSG-type and induction-machine-type wind power generators is connected to a power system. The results of the study show that the FAFC control is a useful method for reducing the impacts of wind farm output fluctuation on system frequency without additional devices such as a secondary battery.
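A conceptual sketch of the smoothing effect: the grid sees a low-pass-filtered farm output while the difference is buffered as rotor kinetic energy through the converter. The time constant and the fluctuation model are assumptions for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, tau = 1.0, 30.0                     # s; assumed filter time constant
raw = 10.0 + rng.normal(0.0, 1.0, 600)  # MW; fluctuating farm output

alpha = dt / (tau + dt)
smoothed = np.empty_like(raw)
smoothed[0] = raw[0]
for k in range(1, len(raw)):            # first-order low-pass filter
    smoothed[k] = smoothed[k - 1] + alpha * (raw[k] - smoothed[k - 1])

energy_swing = np.cumsum(raw - smoothed) * dt  # MJ buffered in the rotors
print(smoothed.std() < raw.std())  # True: grid-side fluctuation is reduced
```

The cumulative difference between raw and smoothed power is what the generators' rotating masses must absorb or release, which is why the scheme needs no secondary battery.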
Classification of vegetation in an open landscape using full-waveform airborne laser scanner data
NASA Astrophysics Data System (ADS)
Alexander, Cici; Deák, Balázs; Kania, Adam; Mücke, Werner; Heilmeier, Hermann
2015-09-01
Airborne laser scanning (ALS) is increasingly being used for the mapping of vegetation, although the focus so far has been on woody vegetation, and ALS data have only rarely been used for the classification of grassland vegetation. In this study, we classified the vegetation of an open alkali landscape, characterized by two Natura 2000 habitat types: Pannonic salt steppes and salt marshes and Pannonic loess steppic grasslands. We generated 18 variables from an ALS dataset collected in the growing (leaf-on) season. Elevation is a key factor determining the patterns of vegetation types in the landscape, and hence 3 additional variables were based on a digital terrain model (DTM) generated from an ALS dataset collected in the dormant (leaf-off) season. We classified the vegetation into 24 classes based on these 21 variables, at a pixel size of 1 m. Two groups of variables with and without the DTM-based variables were used in a Random Forest classifier, to estimate the influence of elevation, on the accuracy of the classification. The resulting classes at Level 4, based on associations, were aggregated at three levels - Level 3 (11 classes), Level 2 (8 classes) and Level 1 (5 classes) - based on species pool, site conditions and structure, and the accuracies were assessed. The classes were also aggregated based on Natura 2000 habitat types to assess the accuracy of the classification, and its usefulness for the monitoring of habitat quality. The vegetation could be classified into dry grasslands, wetlands, weeds, woody species and man-made features, at Level 1, with an accuracy of 0.79 (Cohen's kappa coefficient, κ). The accuracies at Levels 2-4 and the classification based on the Natura 2000 habitat types were κ: 0.76, 0.61, 0.51 and 0.69, respectively. 
Levels 1 and 2 provide suitable information for nature conservationists and land managers, while Levels 3 and 4 are especially useful for ecologists, geologists and soil scientists as they provide high resolution data on species distribution, vegetation patterns, soil properties and on their correlations. Including the DTM-based variables increased the accuracy (κ) from 0.73 to 0.79 for Level 1. These findings show that the structural and spectral attributes of ALS echoes can be used for the classification of open landscapes, especially those where vegetation is influenced by elevation, such as coastal salt marshes, sand dunes, karst or alluvial areas; in these cases, ALS has a distinct advantage over other remotely sensed data.
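The with/without-DTM comparison described above can be sketched with scikit-learn. Everything below is an illustrative stand-in, not the study's data: the synthetic features, the label rule, and the variable counts merely mimic the 18 echo-based and 3 elevation-based predictors.

```python
# Sketch: Random Forest classification with and without DTM-based
# variables, compared via Cohen's kappa (synthetic, illustrative data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
echo_vars = rng.normal(size=(n, 18))   # stand-ins for ALS echo attributes
dtm_vars = rng.normal(size=(n, 3))     # stand-ins for elevation variables
# Synthetic labels that depend partly on elevation, so adding the DTM
# variables should raise kappa, as reported in the study.
labels = (echo_vars[:, 0] + dtm_vars[:, 0] > 0).astype(int) + \
         (dtm_vars[:, 1] > 1).astype(int)

def kappa_for(features):
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, labels, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_tr, y_tr)
    return cohen_kappa_score(y_te, clf.predict(X_te))

kappa_without = kappa_for(echo_vars)
kappa_with = kappa_for(np.hstack([echo_vars, dtm_vars]))
```

Kappa is used rather than raw accuracy because it discounts chance agreement, which matters when class frequencies are uneven, as in vegetation maps.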
Model-Driven Approach for Body Area Network Application Development.
Venčkauskas, Algimantas; Štuikys, Vytautas; Jusas, Nerijus; Burbaitė, Renata
2016-05-12
This paper introduces the sensor-networked IoT model as a prototype to support the design of Body Area Network (BAN) applications for healthcare. Using the model, we analyze the synergistic effect of the functional requirements (data collection from the human body and transferring it to the top level) and non-functional requirements (trade-offs between energy-security-environmental factors, treated as Quality-of-Service (QoS)). We use feature models to represent the requirements at the earliest stage for the analysis and describe a model-driven methodology to design the possible BAN applications. Firstly, we specify the requirements as the problem domain (PD) variability model for the BAN applications. Next, we introduce the generative technology (meta-programming as the solution domain (SD)) and the mapping procedure to map the PD feature-based variability model onto the SD feature model. Finally, we create an executable meta-specification that represents the BAN functionality to describe the variability of the problem domain through transformations. The meta-specification (along with the meta-language processor) is a software generator for multiple BAN-oriented applications. We validate the methodology with experiments and a case study to generate a family of programs for the BAN sensor controllers. This enables an adequate measure of QoS to be obtained efficiently through the interactive adjustment of the meta-parameter values and the re-generation process for the concrete BAN application.
Vegh, Viktor; Reutens, David C.
2016-01-01
Object: We studied the feasibility of generating the variable magnetic fields required for ultra-low field nuclear magnetic resonance relaxometry with dynamically adjustable permanent magnets. Our motivation was to substitute traditional electromagnets by distributed permanent magnets, increasing system portability. Materials and Methods: The finite element method (COMSOL®) was employed for the numerical study of a small permanent magnet array to calculate achievable magnetic field strength, homogeneity, switching time and magnetic forces. A manually operated prototype was simulated and constructed to validate the numerical approach and to verify the generated magnetic field. Results: A concentric small permanent magnet array can be used to generate strong sample pre-polarisation and variable measurement fields for ultra-low field relaxometry via simple prescribed magnet rotations. Using the array, it is possible to achieve a pre-polarisation field strength above 100 mT and variable measurement fields ranging from 20–50 μT with 200 ppm absolute field homogeneity within a field-of-view of 5 x 5 x 5 cubic centimetres. Conclusions: A dynamic small permanent magnet array can generate multiple highly homogeneous magnetic fields required in ultra-low field nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI) instruments. This design can significantly reduce the volume and energy requirements of traditional systems based on electromagnets, improving portability considerably. PMID:27271886
A TRMM/GPM retrieval of the total mean generator current for the global electric circuit
NASA Astrophysics Data System (ADS)
Peterson, Michael; Deierling, Wiebke; Liu, Chuntao; Mach, Douglas; Kalb, Christina
2017-09-01
A specialized satellite version of the passive microwave electric field retrieval algorithm (Peterson et al., 2015) is applied to observations from the Tropical Rainfall Measuring Mission (TRMM) and Global Precipitation Measurement (GPM) satellites to estimate the generator current for the Global Electric Circuit (GEC) and compute its temporal variability. By integrating retrieved Wilson currents from electrified clouds across the globe, we estimate a total mean current of between 1.4 kA (assuming the 7% fraction of electrified clouds producing downward currents measured by the ER-2 is representative) and 1.6 kA (assuming all electrified clouds contribute to the GEC). These current estimates come from all types of convective weather without preference, including Electrified Shower Clouds (ESCs). The diurnal distribution of the retrieved generator current is in excellent agreement with the Carnegie curve (RMS difference: 1.7%). The temporal variability of the total mean generator current ranges from 110% on semi-annual timescales (29% on an annual timescale) to 7.5% on decadal timescales, with notable responses to the Madden-Julian Oscillation and El Niño-Southern Oscillation. The geographical distribution of current includes significant contributions from oceanic regions in addition to the land-based tropical chimneys. The relative importance of the Americas and Asia chimneys compared to Africa is consistent with the best modern ground-based observations and further highlights the importance of ESCs for the GEC.
Forest fire risk zonation mapping using remote sensing technology
NASA Astrophysics Data System (ADS)
Chandra, Sunil; Arora, M. K.
2006-12-01
Forest fires cause major losses to forest cover and disturb the ecological balance in the region. Rise in temperature during the summer season causing increased dryness, increased human activity in the forest areas, and the type of forest cover in the Garhwal Himalayas are some of the reasons that lead to forest fires. Therefore, generation of forest fire risk maps becomes necessary so that preventive measures can be taken at the appropriate time. These maps delineate the very high, high, medium and low risk zones with regard to forest fire in the region. In this paper, an attempt has been made to generate forest fire risk maps based on remote sensing data and other geographical variables responsible for the occurrence of fire. These include altitude, temperature and soil variations. Key thematic data layers pertaining to these variables have been generated using various techniques. A rule-based approach has been used and implemented in a GIS environment to estimate fuel load and fuel index, leading to the derivation of a fire risk zonation index and subsequently to fire risk zonation maps. The fire risk maps thus generated have been validated on the ground for forest types as well as for forest fire risk areas. These maps would help the state forest departments in prioritizing their strategy for combating forest fires, particularly during the fire seasons.
Event-Based control of depth of hypnosis in anesthesia.
Merigo, Luca; Beschi, Manuel; Padula, Fabrizio; Latronico, Nicola; Paltenghi, Massimiliano; Visioli, Antonio
2017-08-01
In this paper, we propose the use of an event-based control strategy for the closed-loop control of the depth of hypnosis in anesthesia by using propofol administration and the bispectral index as a controlled variable. A new event generator with high noise-filtering properties is employed in addition to a PIDPlus controller. The tuning of the parameters is performed off-line by using genetic algorithms on a given data set of patients. The effectiveness and robustness of the method are verified in simulation by implementing a Monte Carlo method to address the intra-patient and inter-patient variability. A comparison with a standard PID control structure shows that the event-based control system achieves a reduction of the total variation of the manipulated variable of 93% in the induction phase and of 95% in the maintenance phase. The use of event-based automatic control in anesthesia yields a fast induction phase with bounded overshoot and an acceptable disturbance rejection. A comparison with a standard PID control structure shows that the technique effectively mimics the behavior of the anesthesiologist by providing a significant decrement of the total variation of the manipulated variable.
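A minimal numerical sketch of the idea: a PI controller on a first-order plant whose output is recomputed only when an event generator fires. The plant, gains, and threshold are invented for illustration, and the send-on-delta event rule used here is far simpler than the noise-filtering generator proposed in the paper.

```python
# Event-based PI control of a first-order plant dy/dt = -y + u.
# The control signal is updated only when the measurement has drifted
# more than `threshold` since the last update (send-on-delta rule).
def simulate(threshold, kp=2.0, ki=1.0, dt=0.05, steps=400):
    y, integ, u = 0.0, 0.0, 0.0
    last_y = None
    updates, total_variation = 0, 0.0
    for _ in range(steps):
        if last_y is None or abs(y - last_y) > threshold:
            e = 1.0 - y                 # setpoint = 1.0
            integ += e * dt             # integral advanced only at events
            new_u = kp * e + ki * integ
            total_variation += abs(new_u - u)
            u = new_u
            last_y = y
            updates += 1
        y += dt * (-y + u)              # Euler step of the plant
    return y, updates, total_variation

y_event, n_event, tv_event = simulate(threshold=0.05)
y_time, n_time, tv_time = simulate(threshold=0.0)  # ~time-triggered PI
```

With the threshold at zero the same loop degenerates to (approximately) time-triggered control, which makes the reduction in controller updates easy to measure.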
Dooley, Helen; Flajnik, Martin F; Porter, Andrew J
2003-09-01
The novel immunoglobulin isotype novel antigen receptor (IgNAR) is found in cartilaginous fish and is composed of a heavy-chain homodimer that does not associate with light chains. The variable regions of IgNAR function as independent domains similar to those found in the heavy-chain immunoglobulins of Camelids. Here, we describe the successful cloning and generation of a phage-displayed, single-domain library based upon the variable domain of IgNAR. Selection of such a library generated from nurse sharks (Ginglymostoma cirratum) immunized with the model antigen hen egg-white lysozyme (HEL) enabled the successful isolation of intact antigen-specific binders matured in vivo. The selected variable domains were shown to be functionally expressed in Escherichia coli, extremely stable, and bind to antigen specifically with an affinity in the nanomolar range. This approach can therefore be considered as an alternative route for the isolation of minimal antigen-binding fragments with favorable characteristics.
Market-Based Indian Grid Integration Study Options: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoltenberg, B.; Clark, K.; Negi, S. K.
2012-03-01
The Indian state of Gujarat is forecasting solar and wind generation expansion from 16% to 32% of installed generation capacity by 2015. Some states in India are already experiencing heavy wind power curtailment. Understanding how to integrate variable generation (VG) into the grid is of great interest to local transmission companies and India's Ministry of New and Renewable Energy. This paper describes the nature of a market-based integration study and how this approach, while new to Indian grid operation and planning, is necessary to understand how to operate and expand the grid to best accommodate the expansion of VG. Second, it discusses options in defining a study's scope, such as data granularity, generation modeling, and geographic scope. The paper also explores how Gujarat's method of grid operation and current system reliability will affect how an integration study can be performed.
Relating Solar Resource Variability to Cloud Type
NASA Astrophysics Data System (ADS)
Hinkelman, L. M.; Sengupta, M.
2012-12-01
Power production from renewable energy (RE) resources is rapidly increasing. Generation of renewable energy is quite variable since the solar and wind resources that form the inputs are, themselves, inherently variable. There is thus a need to understand the impact of renewable generation on the transmission grid. Such studies require estimates of high temporal and spatial resolution power output under various scenarios, which can be created from corresponding solar resource data. Satellite-based solar resource estimates are the best source of long-term solar irradiance data for the typically large areas covered by transmission studies. As satellite-based resource datasets are generally available at lower temporal and spatial resolution than required, there is, in turn, a need to downscale these resource data. Downscaling in both space and time requires information about solar irradiance variability, which is primarily a function of cloud types and properties. In this study, we analyze the relationship between solar resource variability and satellite-based cloud properties. One-minute resolution surface irradiance data were obtained from a number of stations operated by the National Oceanic and Atmospheric Administration (NOAA) under the Surface Radiation (SURFRAD) and Integrated Surface Irradiance Study (ISIS) networks as well as from NREL's Solar Radiation Research Laboratory (SRRL) in Golden, Colorado. Individual sites were selected so that a range of meteorological conditions would be represented. Cloud information at a nominal 4 km resolution and half hour intervals was derived from NOAA's Geostationary Operation Environmental Satellite (GOES) series of satellites. Cloud class information from the GOES data set was then used to select and composite irradiance data from the measurement sites. 
The irradiance variability for each cloud classification was characterized using general statistics of the fluxes themselves and their variability in time, as represented by ramps computed for time scales from 10 s to 0.5 hr. The statistical relationships derived using this method will be presented, comparing and contrasting the statistics computed for the different cloud types. The implications for downscaling irradiances from satellites or forecast models will also be discussed.
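Ramp statistics of the kind described above can be computed directly from an irradiance time series as differences over a sliding step. The synthetic one-minute day below (a clear-sky arch plus daytime fluctuations) is purely illustrative.

```python
# Ramp statistics of a surface irradiance series at several time scales.
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(1440)  # one day at 1-minute resolution
clear = np.clip(900 * np.sin(np.pi * (t - 360) / 720), 0, None)
# Add cloud-like fluctuations during daylight only (illustrative)
irr = np.clip(clear + rng.normal(0, 60, t.size) * (clear > 0), 0, None)

def ramp_stats(series, step):
    """Ramps over `step` minutes: x[i+step] - x[i]; return (std, max abs)."""
    ramps = series[step:] - series[:-step]
    return ramps.std(), np.abs(ramps).max()

stats = {step: ramp_stats(irr, step) for step in (1, 5, 30)}
```

Longer ramp windows pick up the deterministic clear-sky trend in addition to cloud-induced noise, so the ramp spread grows with the time scale, which is the kind of scale dependence the downscaling described above must reproduce.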
Entropy information of heart rate variability and its power spectrum during day and night
NASA Astrophysics Data System (ADS)
Jin, Li; Jun, Wang
2013-07-01
Physiologic systems generate complex fluctuations in their output signals that reflect the underlying dynamics. We employed the base-scale entropy method and power spectral analysis to study 24-hour heart rate variability (HRV) signals. The results show that profound circadian-, age- and pathology-dependent changes are accompanied by changes in base-scale entropy and power spectral distribution. Moreover, the base-scale entropy changes reflect the corresponding changes in the autonomic nerve outflow. With the suppression of vagal tone and dominance of sympathetic tone in congestive heart failure (CHF) subjects, there is more variability in the data fluctuation mode, so the base-scale entropy is higher in CHF subjects. With the decrease of sympathetic tone and respiratory sinus arrhythmia (RSA) becoming more pronounced with slower breathing during sleep, the base-scale entropy drops in CHF subjects. The HRV series of the two healthy groups have the same diurnal/nocturnal trend as the CHF series. The fluctuation dynamics of the data in the three groups can be described as an “HF effect”.
Accounting for estimated IQ in neuropsychological test performance with regression-based techniques.
Testa, S Marc; Winicki, Jessica M; Pearlson, Godfrey D; Gordon, Barry; Schretlen, David J
2009-11-01
Regression-based normative techniques account for variability in test performance associated with multiple predictor variables and generate expected scores based on algebraic equations. Using this approach, we show that estimated IQ, based on oral word reading, accounts for 1-9% of the variability beyond that explained by individual differences in age, sex, race, and years of education for most cognitive measures. These results confirm that adding estimated "premorbid" IQ to demographic predictors in multiple regression models can incrementally improve the accuracy with which regression-based norms (RBNs) benchmark expected neuropsychological test performance in healthy adults. It remains to be seen whether the incremental variance in test performance explained by estimated "premorbid" IQ translates to improved diagnostic accuracy in patient samples. We describe these methods, and illustrate the step-by-step application of RBNs with two cases. We also discuss the rationale, assumptions, and caveats of this approach. More broadly, we note that adjusting test scores for age and other characteristics might actually decrease the accuracy with which test performance predicts absolute criteria, such as the ability to drive or live independently.
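The regression-based norming procedure reduces to fitting a multiple regression in a healthy sample, then locating a new examinee's observed score relative to the model's expectation. The data, coefficients, and the new examinee below are synthetic stand-ins, not the study's normative sample.

```python
# Regression-based norms: expected score from demographics (+ estimated IQ),
# then a z-score for an observed score (synthetic, illustrative data).
import numpy as np

rng = np.random.default_rng(1)
n = 500
age = rng.uniform(20, 80, n)
educ = rng.uniform(8, 20, n)
iq = rng.normal(100, 15, n)
# Synthetic test score: depends on age, education, and modestly on IQ.
score = 50 - 0.2 * age + 0.8 * educ + 0.1 * iq + rng.normal(0, 3, n)

def fit_r2(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var(), beta

r2_demo, _ = fit_r2(np.column_stack([age, educ]), score)
r2_full, beta = fit_r2(np.column_stack([age, educ, iq]), score)

# Norm for a new examinee: expected score, then a z-score locating the
# observed score (72.0) relative to that expectation (SD illustrative).
new_x = np.array([1.0, 65.0, 12.0, 110.0])
expected = new_x @ beta
z = (72.0 - expected) / 3.0
```

The gap `r2_full - r2_demo` plays the role of the 1-9% incremental variance attributed to estimated premorbid IQ in the abstract.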
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huertas-Hernando, Daniel; Farahmand, Hossein; Holttinen, Hannele
2016-06-20
Hydro power is one of the most flexible sources of electricity production. Power systems with considerable amounts of flexible hydro power potentially offer easier integration of variable generation, e.g., wind and solar. However, there exist operational constraints to ensure mid-/long-term security of supply while keeping river flows and reservoir levels within permitted limits. In order to properly assess the effective available hydro power flexibility and its value for storage, a detailed assessment of hydro power is essential. Due to the inherent uncertainty of the weather-dependent hydrological cycle, regulation constraints on the hydro system, and uncertainty of internal load as well as variable generation (wind and solar), this assessment is complex. Hence, it requires proper modeling of all the underlying interactions between hydro power and the power system, with a large share of other variable renewables. A summary of existing experience of wind integration in hydro-dominated power systems clearly points to strict simulation methodologies. Recommendations include requirements for techno-economic models to correctly assess strategies for hydro power and pumped storage dispatch. These models are based not only on seasonal water inflow variations but also on variable generation, and all these are in time horizons from very short term up to multiple years, depending on the studied system. Another important recommendation is to include a geographically detailed description of hydro power systems, rivers' flows, and reservoirs as well as grid topology and congestion.
Real-time forecasts of dengue epidemics
NASA Astrophysics Data System (ADS)
Yamana, T. K.; Shaman, J. L.
2015-12-01
Dengue is a mosquito-borne viral disease prevalent in the tropics and subtropics, with an estimated 2.5 billion people at risk of transmission. In many areas with endemic dengue, disease transmission is seasonal but prone to high inter-annual variability with occasional severe epidemics. Predicting and preparing for periods of higher than average transmission is a significant public health challenge. Here we present a model of dengue transmission and a framework for optimizing model simulations with real-time observational data of dengue cases and environmental variables in order to generate ensemble-based forecasts of the timing and severity of disease outbreaks. The model-inference system is validated using synthetic data and dengue outbreak records. Retrospective forecasts are generated for a number of locations and the accuracy of these forecasts is quantified.
Portable Just-in-Time Specialization of Dynamically Typed Scripting Languages
NASA Astrophysics Data System (ADS)
Williams, Kevin; McCandless, Jason; Gregg, David
In this paper, we present a portable approach to JIT compilation for dynamically typed scripting languages. At runtime we generate ANSI C code and use the system's native C compiler to compile this code. The C compiler runs on a separate thread to the interpreter allowing program execution to continue during JIT compilation. Dynamic languages have variables which may change type at any point in execution. Our interpreter profiles variable types at both whole method and partial method granularity. When a frequently executed region of code is discovered, the compilation thread generates a specialized version of the region based on the profiled types. In this paper, we evaluate the level of instruction specialization achieved by our profiling scheme as well as the overall performance of our JIT.
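The profile-then-specialize loop can be caricatured in a few lines of Python. The real system emits and compiles ANSI C on a helper thread; everything here, including the 90% threshold and the stand-in "fast path", is invented for illustration.

```python
# Toy illustration of type-profile-driven specialization: count observed
# operand types for a hot operation, then install a specialized fast
# path once one type pair dominates the profile.
from collections import Counter

profile = Counter()

def generic_add(a, b):
    profile[(type(a), type(b))] += 1  # interpreter-style type profiling
    return a + b

def maybe_specialize():
    (ta, tb), hits = profile.most_common(1)[0]
    if ta is int and tb is int and hits / sum(profile.values()) > 0.9:
        # Stands in for the generated-and-compiled int-int C code.
        return lambda a, b: a + b
    return generic_add

for i in range(100):        # "frequently executed region"
    generic_add(i, 1)
fast_add = maybe_specialize()
```

A guard on the operand types would normally precede the fast path, falling back to the generic routine when the profiled assumption is violated; that deoptimization machinery is omitted here.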
Feng, Jinxia; Wan, Zhenju; Li, Yuanji; Zhang, Kuanshou
2017-09-01
The distribution of continuous variable (CV) Einstein-Podolsky-Rosen (EPR)-entangled beams at a telecommunication wavelength of 1550 nm over single-mode fibers is investigated. EPR-entangled beams with quantum entanglement of 8.3 dB are generated using a single nondegenerate optical parametric amplifier based on a type-II periodically poled KTiOPO4 crystal. When one beam of the generated EPR-entangled beams is distributed over 20 km of single-mode fiber, 1.02 dB quantum entanglement can still be measured. The degradation of CV quantum entanglement in a noisy fiber channel is theoretically analyzed considering the effect of depolarized guided acoustic wave Brillouin scattering in optical fibers. The theoretical prediction is in good agreement with the experimental results.
NASA Astrophysics Data System (ADS)
Hayat, Tasawar; Ahmed, Sohail; Muhammad, Taseer; Alsaedi, Ahmed
2017-10-01
This article examines homogeneous-heterogeneous reactions and internal heat generation in Darcy-Forchheimer flow of nanofluids with different base fluids. Flow is generated by a nonlinear stretchable surface of variable thickness. The characteristics of the nanofluid are explored using CNTs (single- and multi-walled carbon nanotubes). Equal diffusion coefficients are considered for both reactants and autocatalyst. The conversion of partial differential equations (PDEs) to ordinary differential equations (ODEs) is done via appropriate transformations. The optimal homotopy approach is implemented to develop solutions of the governing problems. Averaged squared residual errors are computed. The optimal solution expressions of velocity, temperature and concentration are explored through plots for several values of the physical parameters. Further, the skin friction coefficient and local Nusselt number are examined through graphs.
Using Agent-Based Approaches to Characterize Exposure Related Behavior
The Tox21 initiative is generating data on biological activity, toxicity, and chemical properties for over 8,000 substances. One of the goals for EPA’s National Exposure Research Lab (NERL) is to assess the magnitude and variability in the public’s exposures to these ...
Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant
Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa
2013-09-17
System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.
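A hedged one-dimensional sketch of the predict/constrain/correct cycle described in the patent. The actual system estimates a full IGCC plant model; the scalar dynamics, noise levels, and bounds below are invented purely to show where the preemptive constraining step sits relative to the EKF predict and measurement-correction steps.

```python
# Scalar EKF with preemptive constraining of the state estimate and
# covariance before the measurement correction (illustrative model).
import numpy as np

rng = np.random.default_rng(2)

def f(x, u):   # plant dynamics (illustrative; Jacobian = 0.9)
    return 0.9 * x + 0.5 * u

def h(x):      # measurement model (Jacobian = 1.0)
    return x

x_true, x_hat, P = 5.0, 0.0, 10.0
Q, R = 0.01, 0.25
for k in range(50):
    u = 1.0
    x_true = f(x_true, u) + rng.normal(0, np.sqrt(Q))
    z = h(x_true) + rng.normal(0, np.sqrt(R))
    # Predict
    x_hat = f(x_hat, u)
    P = 0.9 * P * 0.9 + Q
    # Preemptive constraining: keep estimate and covariance physical
    x_hat = max(x_hat, 0.0)            # e.g. a non-negative plant variable
    P = min(max(P, 1e-6), 100.0)       # covariance kept bounded and positive
    # Measurement correction
    K = P / (P + R)
    x_hat = x_hat + K * (z - h(x_hat))
    P = (1 - K) * P
```

Constraining before the correction step, rather than after, is what keeps the gain computation itself free of constraint violations, which is the point the abstract emphasizes.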
A New Integrated Weighted Model in SNOW-V10: Verification of Categorical Variables
NASA Astrophysics Data System (ADS)
Huang, Laura X.; Isaac, George A.; Sheng, Grant
2014-01-01
This paper presents the verification results for nowcasts of seven categorical variables from an integrated weighted model (INTW) and the underlying numerical weather prediction (NWP) models. Nowcasting, or short range forecasting (0-6 h), over complex terrain with sufficient accuracy is highly desirable but a very challenging task. A weighting, evaluation, bias correction and integration system (WEBIS) for generating nowcasts by integrating NWP forecasts and high frequency observations was used during the Vancouver 2010 Olympic and Paralympic Winter Games as part of the Science of Nowcasting Olympic Weather for Vancouver 2010 (SNOW-V10) project. Forecast data from the Canadian high-resolution deterministic NWP system with three nested grids (at 15-, 2.5- and 1-km horizontal grid-spacing) were selected as background gridded data for generating the integrated nowcasts. Seven forecast variables of temperature, relative humidity, wind speed, wind gust, visibility, ceiling and precipitation rate are treated as categorical variables for verifying the integrated weighted forecasts. By analyzing the verification of forecasts from INTW and the NWP models at 15 sites, the integrated weighted model was found to produce more accurate forecasts for the seven selected forecast variables, regardless of location. This is based on the multi-categorical Heidke skill scores for the test period 12 February to 21 March 2010.
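The multi-categorical Heidke skill score used for this verification can be computed from a forecast-observation contingency table: observed hit rate minus the hit rate expected by chance, scaled by the maximum possible improvement over chance. The 3-category table below is invented for illustration.

```python
# Multi-category Heidke skill score from a K x K contingency table.
import numpy as np

def heidke_skill_score(table):
    """HSS from a K x K table (rows = forecast, columns = observed)."""
    t = np.asarray(table, dtype=float)
    n = t.sum()
    hit_rate = np.trace(t) / n
    # Expected hit rate of a random forecast with the same marginals
    expected = (t.sum(axis=0) * t.sum(axis=1)).sum() / n**2
    return (hit_rate - expected) / (1 - expected)

# Illustrative 3-category table (e.g. light/moderate/heavy precipitation)
table = [[30, 5, 2],
         [4, 25, 6],
         [1, 7, 20]]
hss = heidke_skill_score(table)
```

HSS is 1 for a perfect forecast, 0 for one no better than chance, and negative for one worse than chance, which is why it is a reasonable common yardstick across the seven variables.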
Optoacoustic Monitoring of Physiologic Variables
Esenaliev, Rinat O.
2017-01-01
The optoacoustic (photoacoustic) technique is a novel diagnostic platform that can be used for noninvasive measurements of physiologic variables, functional imaging, and hemodynamic monitoring. This technique is based on generation and time-resolved detection of optoacoustic (thermoelastic) waves generated in tissue by short optical pulses. This provides probing of tissues and individual blood vessels with high optical contrast and ultrasound spatial resolution. Because the optoacoustic waves carry information on tissue optical and thermophysical properties, detection and analysis of the optoacoustic waves allow for measurements of physiologic variables with high accuracy and specificity. We proposed to use the optoacoustic technique for monitoring of a number of important physiologic variables including temperature, thermal coagulation, freezing, concentration of molecular dyes, nanoparticles, oxygenation, and hemoglobin concentration. In this review we present the origin of contrast and high spatial resolution in these measurements performed with optoacoustic systems developed and built by our group. We summarize data obtained in vitro, in experimental animals, and in humans on monitoring of these physiologic variables. Our data indicate that the optoacoustic technology may be used for monitoring of cerebral blood oxygenation in patients with traumatic brain injury and in neonatal patients, central venous oxygenation monitoring, total hemoglobin concentration monitoring, hematoma detection and characterization, and monitoring of temperature and of coagulation and freezing boundaries during thermotherapy. PMID:29311964
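The ultrasound-resolution depth localization mentioned above rests on a simple time-of-flight relation: an absorber at depth z produces an acoustic transient arriving at the surface after t = z / c. The speed of sound and the arrival time below are illustrative values, not measurements from the review.

```python
# Time-of-flight depth ranging for an optoacoustic transient.
c = 1540.0          # approximate speed of sound in soft tissue, m/s
arrival_us = 13.0   # measured arrival time, microseconds (illustrative)
depth_mm = c * arrival_us * 1e-6 * 1e3   # z = c * t, converted to mm
```

This is why time-resolved detection, rather than optical focusing, sets the spatial resolution of the measurements.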
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krad, Ibrahim; Ibanez, Eduardo; Ela, Erik
2015-10-19
The recent increased interest in utilizing variable generation (VG) resources such as wind and solar in power systems has motivated investigations into new operating procedures. Although these resources provide desirable value to a system (e.g., no fuel costs or emissions), interconnecting them provides unique challenges. Their variable, non-controllable nature in particular requires significant attention, because it directly results in increased power system variability and uncertainty. One way to handle this is via new operating reserve schemes. Operating reserves provide upward and downward generation and ramping capacity to counteract uncertainty and variability prior to their realization. For instance, uncertainty and variability in real-time dispatch can be accounted for in the hour-ahead unit commitment. New operating reserve methodologies that specifically account for the increased variability and uncertainty caused by VG are currently being investigated and developed by academia and industry. This paper examines one method inspired by the new operating reserve product being proposed by the California Independent System Operator. The method is based on examining the potential ramping requirements at any given time and enforcing those requirements via a reserve demand curve in the market-clearing optimization as an additional ancillary service product.
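The ramping-requirement idea can be sketched as percentiles of historical net-load ramps, which would then anchor a reserve demand curve in the market-clearing optimization. The synthetic series and the 95% coverage level below are illustrative, not the CAISO product's actual parameters.

```python
# Upward/downward ramping requirements as percentiles of observed
# 5-minute net-load ramps (synthetic, illustrative data).
import numpy as np

rng = np.random.default_rng(7)
net_load = 1000 + np.cumsum(rng.normal(0, 5, 2000))  # MW, 5-min steps
ramps = np.diff(net_load)                             # MW per 5 min

up_req = np.quantile(ramps, 0.95)     # covers 95% of upward ramps
down_req = -np.quantile(ramps, 0.05)  # covers 95% of downward ramps
```

In a real product the requirement would be conditioned on forecast conditions (time of day, VG output level) rather than pooled over all history, and the demand curve would price shortfalls below the requirement.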
Optimization of a GO2/GH2 Impinging Injector Element
NASA Technical Reports Server (NTRS)
Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar
2001-01-01
An injector optimization methodology, method i, is used to investigate optimal design points for a gaseous oxygen/gaseous hydrogen (GO2/GH2) impinging injector element. The unlike impinging element, a fuel-oxidizer-fuel (F-O-F) triplet, is optimized in terms of design variables such as fuel pressure drop, (Delta)P(sub f), oxidizer pressure drop, (Delta)P(sub o), combustor length, L(sub comb), and impingement half-angle, alpha, for a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency, ERE, wall heat flux, Q(sub w), injector heat flux, Q(sub inj), relative combustor weight, W(sub rel), and relative injector cost, C(sub rel), are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for 163 combinations of input variables. Method i is then used to generate response surfaces for each dependent variable. Desirability functions based on dependent variable constraints are created and used to facilitate development of composite response surfaces representing some, or all, of the five dependent variables in terms of the input variables. Three examples illustrating the utility and flexibility of method i are discussed in detail. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after addition of each variable and the effect each variable has on the design is shown. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Secondly, using the composite response surface which includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others. Here, method i is used to enable objective trade studies on design issues such as component life and thrust to weight ratio.
Finally, specific variable weights are further increased to illustrate the high marginal cost of realizing the last increment of injector performance and thruster weight.
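The composite-desirability idea described above can be sketched in a few lines. This is a generic Derringer-style desirability construction over hypothetical quadratic response surfaces, not the paper's "method i" or its actual correlations; the coefficients, response functions, and variable ranges are all made up for illustration.

```python
import numpy as np

# Hypothetical quadratic response surfaces for two dependent variables,
# e.g. energy release efficiency (ERE, to maximize) and wall heat flux
# (Qw, to minimize), over two normalized design variables in [0, 1]
# (standing in for dPf and alpha). Coefficients are illustrative only.
def ere(x):      # higher is better
    return 0.90 + 0.08 * x[0] - 0.05 * (x[0] - 0.6) ** 2 + 0.02 * x[1]

def qw(x):       # lower is better
    return 1.0 + 0.5 * x[0] ** 2 + 0.3 * x[1]

def desirability_max(y, lo, hi):
    """Desirability in [0, 1] for a response to be maximized."""
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0)

def desirability_min(y, lo, hi):
    """Desirability in [0, 1] for a response to be minimized."""
    return np.clip((hi - y) / (hi - lo), 0.0, 1.0)

def composite(x, w_ere=1.0, w_qw=1.0):
    """Weighted geometric mean of the individual desirabilities."""
    d1 = desirability_max(ere(x), 0.85, 1.0)
    d2 = desirability_min(qw(x), 1.0, 2.0)
    w = w_ere + w_qw
    return (d1 ** w_ere * d2 ** w_qw) ** (1.0 / w)

# Grid search over the normalized design space for the joint optimum.
grid = [(a, b) for a in np.linspace(0, 1, 101) for b in np.linspace(0, 1, 101)]
best = max(grid, key=composite)
print("optimum (normalized design vars):", best, "D =", round(composite(best), 3))
```

Reweighting `w_ere` versus `w_qw` shifts the optimum, which is the mechanism behind the trade studies (e.g. performance versus component life) discussed in the abstract.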
Generated effect modifiers (GEMs) in randomized clinical trials
Petkova, Eva; Tarpey, Thaddeus; Su, Zhe; Ogden, R. Todd
2017-01-01
In a randomized clinical trial (RCT), it is often of interest not only to estimate the effect of various treatments on the outcome, but also to determine whether any patient characteristic has a different relationship with the outcome, depending on treatment. In regression models for the outcome, if there is a non-zero interaction between treatment and a predictor, that predictor is called an “effect modifier”. Identification of such effect modifiers is crucial as we move towards precision medicine, that is, optimizing individual treatment assignment based on patient measurements assessed when presenting for treatment. In most settings, there will be several baseline predictor variables that could potentially modify the treatment effects. This article proposes optimal methods of constructing a composite variable (defined as a linear combination of pre-treatment patient characteristics) in order to generate an effect modifier in an RCT setting. Several criteria are considered for generating effect modifiers and their performance is studied via simulations. An example from an RCT is provided for illustration. PMID:27465235
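One simple way to build such a composite is to take the difference between arm-specific regression coefficients, so that the score Z = X·alpha carries the treatment-by-covariate interaction. This is a minimal sketch on simulated data, not the authors' proposed estimators or criteria:

```python
import numpy as np

# Simulated RCT: 5 baseline covariates, 1:1 randomization, and an outcome
# whose treatment effect is modified by a linear combination of covariates.
rng = np.random.default_rng(0)
n, p = 500, 5
X = rng.normal(size=(n, p))                  # baseline characteristics
trt = rng.integers(0, 2, size=n)             # treatment indicator
true_alpha = np.array([1.0, -0.5, 0.0, 0.0, 0.0])
y = 0.5 * trt + (X @ true_alpha) * trt + X[:, 2] + rng.normal(size=n)

def fit_ols(Xa, ya):
    """Least-squares slopes (intercept dropped)."""
    Xd = np.column_stack([np.ones(len(ya)), Xa])
    beta, *_ = np.linalg.lstsq(Xd, ya, rcond=None)
    return beta[1:]

# Difference of arm-specific slopes estimates the interaction direction;
# shared main effects (here X[:, 2]) cancel in the subtraction.
alpha_hat = fit_ols(X[trt == 1], y[trt == 1]) - fit_ols(X[trt == 0], y[trt == 0])
alpha_hat /= np.linalg.norm(alpha_hat)       # scale is arbitrary
Z = X @ alpha_hat                            # the generated effect modifier

print("estimated direction:", np.round(alpha_hat, 2))
```

The recovered direction aligns with `true_alpha` up to noise; in practice the single composite Z would then be used in place of p separate interaction terms.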
NASA Astrophysics Data System (ADS)
Pulido-Velazquez, David; Collados-Lara, Antonio-Juan; Alcalá, Francisco J.
2017-04-01
This research proposes and applies a method to assess potential impacts of future climatic scenarios on aquifer rainfall recharge over wide and varied regions. The territory of continental Spain was selected to demonstrate the application. The method requires generating future series of climatic variables (precipitation, temperature) for the system and simulating them within a hydrological model previously calibrated on the historical data. In a previous work, Alcalá and Custodio (2014) used the atmospheric chloride mass balance (CMB) method for the spatial evaluation of average aquifer recharge by rainfall over the whole of continental Spain, assuming long-term steady conditions of the balance variables. The distributed average CMB variables necessary to calculate recharge were estimated from available variable-length data series of variable quality and spatial coverage. The CMB variables were regionalized by ordinary kriging at the same 4976 nodes of a 10 km x 10 km grid. Two main sources of uncertainty affecting recharge estimates (given by the coefficient of variation, CV), induced by the inherent natural variability of the variables and by the mapping, were segregated. Based on these stationary results we define a simple empirical rainfall-recharge model. We consider the spatiotemporal variability of rainfall and temperature to be the most important climatic variables influencing potential aquifer recharge under the natural regime. Changes in these variables can be important in assessing future potential impacts of climatic scenarios on spatiotemporal renewable groundwater resources. For instance, if temperature increases, actual evapotranspiration (EA) will increase, reducing the water available for other groundwater balance components, including recharge.
For this reason, instead of defining an infiltration rate coefficient that relates precipitation (P) and recharge, we propose to define a transformation function that allows estimating the spatial distribution of recharge (both its average value and its uncertainty) from the difference between P and EA in each area. A complete analysis of potential short-term (2016-2045) future climate scenarios in continental Spain has been performed, considering different sources of uncertainty. It is based on the historical climatic data for the period 1976-2005 and the climatic model simulations (for the control [1976-2005] and future scenarios [2016-2045]) performed within the framework of the CORDEX EU project. The most pessimistic emission scenario (RCP8.5) has been considered. For the RCP8.5 scenario we have analyzed the time series generated by simulating with 5 regional climate models (CCLM4-8-17, RCA4, HIRHAM5, RACMO22E, and WRF331F) nested within 4 different General Circulation Models (GCMs). Two different conceptual approaches (bias correction and delta change techniques) have been applied to generate potential future climate scenarios from these data. Different ensembles of the obtained time series have been proposed to obtain more representative scenarios, considering either all the simulations or only those providing better approximations to the historical statistics based on a multicriteria analysis. This was a step towards analyzing future potential impacts on aquifer recharge by simulating the scenarios within a rainfall-recharge model. This research has been supported by the CGL2013-48424-C2-2-R (MINECO) and the PMAFI/06/14 (UCAM) projects.
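Of the two scenario-generation approaches mentioned, the delta change technique is the simpler one and can be sketched directly: monthly change factors between a model's control and future runs are applied to the observed series (multiplicative for precipitation, additive for temperature). The numbers below are synthetic placeholders, not CORDEX output:

```python
import numpy as np

# Synthetic 30-year observed monthly record.
rng = np.random.default_rng(1)
months = np.arange(360) % 12                     # month index for each record
p_obs = rng.gamma(2.0, 30.0, size=360)           # observed precipitation (mm)
t_obs = 10 + 8 * np.sin(2 * np.pi * months / 12) # observed temperature (degC)

# Monthly means of the model's control (1976-2005) and future (2016-2045)
# runs; illustrative values mimicking drier, warmer conditions.
p_ctrl = np.full(12, 60.0)
p_fut = p_ctrl * rng.uniform(0.8, 1.0, 12)       # reduced precipitation
t_ctrl = np.full(12, 10.0)
t_fut = t_ctrl + rng.uniform(0.5, 2.0, 12)       # warming

# Delta change: scale precipitation, shift temperature, month by month.
p_future = p_obs * (p_fut / p_ctrl)[months]
t_future = t_obs + (t_fut - t_ctrl)[months]

print("mean P change (%):", round(100 * (p_future.mean() / p_obs.mean() - 1), 1))
print("mean T change (degC):", round((t_future - t_obs).mean(), 2))
```

The resulting series keep the observed day-to-day variability but carry the model's climate change signal, and would then feed the rainfall-recharge transformation function described above.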
Gehring, Tobias; Händchen, Vitus; Duhme, Jörg; Furrer, Fabian; Franz, Torsten; Pacher, Christoph; Werner, Reinhard F; Schnabel, Roman
2015-10-30
Secret communication over public channels is one of the central pillars of a modern information society. Using quantum key distribution this is achieved without relying on the hardness of mathematical problems, which might be compromised by improved algorithms or by future quantum computers. State-of-the-art quantum key distribution requires composable security against coherent attacks for a finite number of distributed quantum states as well as robustness against implementation side channels. Here we present an implementation of continuous-variable quantum key distribution satisfying these requirements. Our implementation is based on the distribution of continuous-variable Einstein-Podolsky-Rosen entangled light. It is one-sided device independent, which means the security of the generated key is independent of any memory-free attacks on the remote detector. Since continuous-variable encoding is compatible with conventional optical communication technology, our work is a step towards practical implementations of quantum key distribution with state-of-the-art security based solely on telecom components.
NASA Astrophysics Data System (ADS)
Peres, David Johnny; Cancelliere, Antonino
2016-04-01
Assessment of shallow landslide hazard is important for appropriate planning of mitigation measures. Generally, the return period of slope instability is used as a quantitative metric to map landslide-triggering hazard over a catchment. The most commonly applied approach to estimate such a return period consists of coupling a physically based landslide triggering model (hydrological and slope stability) with rainfall intensity-duration-frequency (IDF) curves. Among the drawbacks of such an approach, the following assumptions may be mentioned: (1) prefixed initial conditions, with no regard to their probability of occurrence, and (2) constant-intensity hyetographs. In our work we propose the use of a Monte Carlo simulation approach to investigate the effects of the two above-mentioned assumptions. The approach is based on coupling a physically based hydrological and slope stability model with a stochastic rainfall time series generator. With this methodology a long series of synthetic rainfall data can be generated and given as input to a physically based landslide triggering model, in order to compute the return period of landslide triggering as the mean inter-arrival time of a factor of safety less than one. In particular, we couple the Neyman-Scott rectangular pulses model for hourly rainfall generation and the TRIGRS v.2 unsaturated model for the computation of the transient response to individual rainfall events. Initial conditions are computed by a water table recession model that links the initial conditions of a given event to the final response of the preceding event, thus taking into account the variable inter-arrival time between storms. One thousand years of synthetic hourly rainfall are generated to estimate return periods up to 100 years. Applications are first carried out to map landslide-triggering hazard in the Loco catchment, located in a highly landslide-prone area of the Peloritani Mountains, Sicily, Italy.
Then a set of additional simulations is performed in order to compare the results obtained by the traditional IDF-based method with the Monte Carlo ones. Results indicate that both the variability of initial conditions and that of intra-event rainfall intensity significantly affect return period estimation. In particular, the common assumption of an initial water table depth at the base of the pervious strata may lead in practice to an overestimation of the return period by up to one order of magnitude, while the assumption of constant-intensity hyetographs may yield an overestimation by a factor of two or three. Hence, it may be concluded that the analysed simplifications involved in the traditional IDF-based approach generally imply a non-conservative assessment of landslide-triggering hazard.
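The core Monte Carlo estimator (return period as the mean inter-arrival time of a factor of safety below one) can be illustrated with a toy event model. The storm generator and stability formula below are deliberate placeholders, not Neyman-Scott or TRIGRS:

```python
import numpy as np

# Toy synthetic record: a long sequence of storm events with random depth
# and random antecedent wetness (standing in for variable initial conditions).
rng = np.random.default_rng(42)
years = 1000
events_per_year = 20
n = years * events_per_year
rain = rng.exponential(20.0, size=n)         # event rainfall depth (mm)
wet0 = rng.uniform(0.2, 0.9, size=n)         # initial wetness state

# Placeholder factor of safety: decreases with rainfall and initial wetness.
fs = 2.0 - 0.008 * rain - 1.0 * wet0

# Times (in years) of triggering events, i.e. FS < 1.
t_trigger = np.flatnonzero(fs < 1.0) / events_per_year

# Return period = mean inter-arrival time between triggering events.
return_period = np.mean(np.diff(t_trigger))
print("estimated return period (years): %.2f" % return_period)
```

Because the antecedent wetness is sampled rather than prefixed, the estimator automatically weights initial conditions by their probability of occurrence, which is exactly the effect the abstract contrasts with the IDF-based approach.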
Ground-based telescope pointing and tracking optimization using a neural controller.
Mancini, D; Brescia, M; Schipani, P
2003-01-01
Neural network (NN) models have emerged as important components for applications of adaptive control theories. Their basic generalization capability, based on acquired knowledge, together with execution rapidity and the ability to correlate input stimuli, are basic attributes for considering NNs an extremely powerful tool for on-line control of complex systems. From a control-system point of view, not only accuracy and speed but also, in some cases, a high level of adaptation capability is required in order to match all working phases of the whole system during its lifetime. This is particularly remarkable for a new-generation ground-based telescope control system. In fact, strong changes in terms of system speed and instantaneous position error tolerance are necessary, especially in the case of trajectory disturbances induced by wind shake. The classical control scheme adopted in such a system is based on the proportional integral (PI) filter, already applied and implemented on a large number of new-generation telescopes and considered a standard in this technological environment. In this paper we introduce the concept of a new approach, the neural variable structure proportional integral (NVSPI) controller, related to the implementation of a standard multilayer perceptron network in new-generation ground-based Alt-Az telescope control systems. Its main purpose is to improve, in terms of flexibility and accuracy of the dynamic response range, the adaptive capability of the variable structure proportional integral (VSPI) model, an already innovative control scheme recently introduced by the authors [Proc SPIE (1997)] and based on a modified version of the classical PI control model, also in the presence of wind noise effects.
The realization of a powerful, well-tested and validated telescope model simulation system made it possible to directly compare the performance of the two control schemes on simulated tracking trajectories, revealing extremely encouraging results in terms of NVSPI control robustness and reliability.
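The general structure (a discrete PI position loop augmented by a perceptron correction term) can be sketched as follows. This is in the spirit of the NVSPI idea only: the plant is a trivial first-order model, the disturbance is a made-up wind term, and the "MLP" uses fixed illustrative weights standing in for a trained network:

```python
import numpy as np

# Loop constants and hypothetical network weights (illustrative, untrained).
dt, kp, ki = 0.01, 8.0, 2.0
w1 = np.array([[1.0, -0.5], [0.3, 0.8]])   # hidden-layer weights (assumed)
w2 = np.array([0.5, -0.2])                 # output weights (assumed)

def mlp_correction(err, err_prev):
    """One-hidden-layer perceptron acting on current and previous error."""
    h = np.tanh(w1 @ np.array([err, err_prev]))
    return float(w2 @ h)

pos, integ, err_prev = 0.0, 0.0, 0.0
target = lambda t: 0.1 * t                 # ramp tracking trajectory (rad)
for k in range(1000):
    t = k * dt
    err = target(t) - pos
    integ += err * dt
    u = kp * err + ki * integ + mlp_correction(err, err_prev)
    wind = 0.02 * np.sin(5 * t)            # wind-shake disturbance
    pos += (u + wind) * dt                 # first-order, unit-gain plant
    err_prev = err

final_err = target(10.0) - pos
print("final tracking error (rad): %.4f" % final_err)
```

With small errors the tanh units act almost linearly, so the network here adds a mild proportional-derivative-like term on top of the PI law; a trained network would instead shape this correction adaptively across working phases.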
Variable speed generator technology options for wind turbine generators
NASA Technical Reports Server (NTRS)
Lipo, T. A.
1995-01-01
The electrical system options for variable speed operation of a wind turbine generator are treated in this paper. The key operating characteristics of each system are discussed and the major advantages and disadvantages of each are identified.
2017-01-01
Background: Digital health social networks (DHSNs) are widespread, and the consensus is that they contribute to wellness by offering social support and knowledge sharing. The success of a DHSN is based on the number of participants and their consistent creation of externalities through the generation of new content. To promote network growth, it would be helpful to identify characteristics of superusers or actors who create value by generating positive network externalities. Objective: The aim of the study was to investigate the feasibility of developing predictive models that identify potential superusers in real time. This study examined associations between posting behavior, 4 demographic variables, and 20 indication-specific variables. Methods: Data were extracted from the custom structured query language (SQL) databases of 4 digital health behavior change interventions with DHSNs. Of these, 2 were designed to assist in the treatment of addictions (problem drinking and smoking cessation), and 2 for mental health (depressive disorder, panic disorder). To analyze posting behavior, 10 models were developed, and negative binomial regressions were conducted to examine associations between number of posts and demographic and indication-specific variables. Results: The DHSNs varied in number of days active (3658-5210), number of registrants (5049-52,396), number of actors (1085-8452), and number of posts (16,231-521,997). In the sample, all 10 models had low R2 values (.013-.086) with few statistically significant demographic and indication-specific variables. Conclusions: Very few variables were associated with social network engagement. Although some variables were statistically significant, they did not appear to be practically significant.
Based on the large number of study participants, variation in DHSN theme, and extensive time period, we did not find strong evidence that demographic characteristics or indication severity sufficiently explain the variability in the number of posts per actor. Researchers should investigate alternative models that identify superusers or other individuals who create social network externalities. PMID:28213340
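The count-regression setup described above can be sketched on simulated data. The study fitted negative binomial models; as a dependency-free stand-in, the snippet below fits a Poisson log-linear model by Newton's method, with weak covariate effects mimicking the reported low explanatory power (all variable names and effect sizes are made up):

```python
import numpy as np

# Simulated members: demographics plus one indication-specific score.
rng = np.random.default_rng(7)
n = 2000
age = rng.normal(40, 12, n)
female = rng.integers(0, 2, n)
severity = rng.normal(0, 1, n)               # hypothetical indication score
X = np.column_stack([np.ones(n), (age - 40) / 12, female, severity])
beta_true = np.array([0.5, 0.05, 0.0, 0.15]) # deliberately weak effects
posts = rng.poisson(np.exp(X @ beta_true))   # simulated post counts

# Newton-Raphson for the Poisson GLM (score and Fisher information).
beta = np.zeros(4)
for _ in range(25):
    mu = np.exp(X @ beta)
    grad = X.T @ (posts - mu)
    hess = X.T @ (X * mu[:, None])
    beta += np.linalg.solve(hess, grad)

print("fitted coefficients:", np.round(beta, 3))
```

Even with the effects recovered accurately, covariates this weak explain little of the count variability, which mirrors the low R2 values reported in the abstract.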
VARIABLES AFFECTING EMISSIONS OF PCDDS/FS FROM UNCONTROLLED COMBUSTION OF HOUSEHOLD WASTE IN BARRELS
The uncontrolled burning of household waste in barrels has recently been implicated as a major source of airborne emissions of polychlorinated dibenzo-p-dioxins and polychlorinated dibenzofurans (PCDDs/Fs). Based on the need to generate a more accurate emission factor for burn ba...
Gray correlation analysis and prediction models of living refuse generation in Shanghai city.
Liu, Gousheng; Yu, Jianguo
2007-01-01
A better understanding of the factors that affect the generation of municipal living refuse (MLF) and the accurate prediction of its generation are crucial for municipal planning projects and city management. Up to now, most design efforts have been based on rough predictions of MLF generation without empirical support. In this paper, based on published data on socioeconomic variables and MLF generation from 1990 to 2003 in the city of Shanghai, the main factors that affect MLF generation have been quantitatively studied using the gray correlation coefficient method. Several gray models, such as GM(1,1), GIM(1), GPPM(1) and GLPM(1), have been studied, and the predicted results are verified with a subsequent residual test. Results show that, among the seven selected factors, consumption of gas, water and electricity are the three factors most strongly affecting MLF generation, and GLPM(1) is the optimal model for predicting MLF generation. With this model, the predicted MLF generation in Shanghai in 2010 will be 7.65 million tons. The methods and results developed in this paper can provide valuable information for MLF management and related municipal planning projects.
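Of the gray models compared, GM(1,1) is the classical baseline and is compact enough to sketch: accumulate the series, fit the development and gray-input coefficients by least squares on the background values, and extrapolate. The series below is synthetic, not the Shanghai MLF data:

```python
import numpy as np

def gm11_forecast(x0, steps):
    """Classical GM(1,1): fit on series x0, return `steps` forecasts."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                               # accumulated series
    z = 0.5 * (x1[1:] + x1[:-1])                     # background values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0] # develop./gray coeffs
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)])
    return x0_hat[len(x0):]                          # forecasts beyond data

# Synthetic annual refuse series (Mt/yr, illustrative only).
series = np.array([4.9, 5.1, 5.4, 5.6, 5.9, 6.1, 6.4, 6.6])
print("3-step forecast (Mt):", np.round(gm11_forecast(series, 3), 2))
```

GM(1,1) effectively fits a single exponential trend, which is why the paper moves to the richer GIM/GPPM/GLPM variants when that trend assumption is too restrictive.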
Schnitzer, Mireille E.; Lok, Judith J.; Gruber, Susan
2015-01-01
This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low- and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios. PMID:26226129
Balancing Europe's wind power output through spatial deployment informed by weather regimes.
Grams, Christian M; Beerli, Remo; Pfenninger, Stefan; Staffell, Iain; Wernli, Heini
2017-08-01
As wind and solar power provide a growing share of Europe's electricity, understanding and accommodating their variability on multiple timescales remains a critical problem. On weekly timescales, variability is related to long-lasting weather conditions, called weather regimes, which can cause lulls with a loss of wind power across neighbouring countries. Here we show that weather regimes provide a meteorological explanation for multi-day fluctuations in Europe's wind power and can help guide new deployment pathways which minimise this variability. Mean generation during different regimes currently ranges from 22 GW to 44 GW and is expected to triple by 2030 with current planning strategies. However, balancing future wind capacity across regions with contrasting inter-regime behaviour - specifically deploying in the Balkans instead of the North Sea - would almost eliminate these output variations, maintain mean generation, and increase fleet-wide minimum output. Solar photovoltaics could balance low-wind regimes locally, but only by expanding current capacity tenfold. New deployment strategies based on an understanding of continent-scale wind patterns and pan-European collaboration could enable a high share of wind energy whilst minimising the negative impacts of output variability.
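The balancing argument reduces to a small optimization: given mean capacity factors per weather regime for regions with contrasting behaviour, choose a capacity split that flattens fleet output across regimes. The regime values below are illustrative toy numbers, not the paper's reanalysis-derived figures:

```python
import numpy as np

# Mean capacity factors under three stylized weather regimes.
# Columns: [North Sea, Balkans]; rows: regimes. Values are made up.
cf = np.array([
    [0.45, 0.20],   # regime 1: windy North Sea, calm Balkans
    [0.15, 0.40],   # regime 2: blocked North Sea, windy Balkans
    [0.30, 0.30],   # regime 3: mixed conditions
])

# Sweep the share of total capacity placed in the North Sea.
shares = np.linspace(0, 1, 101)
outputs = cf[:, 0][:, None] * shares + cf[:, 1][:, None] * (1 - shares)

# Pick the split minimizing the spread of output across regimes.
spread = outputs.max(axis=0) - outputs.min(axis=0)
best_share = shares[np.argmin(spread)]
print(f"balanced North Sea share: {best_share:.2f}")
```

With these toy numbers the anticorrelated regimes cancel at a 40/60 split, keeping mean generation while nearly eliminating inter-regime swings, which is the qualitative mechanism behind the Balkans-versus-North-Sea result.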
A respiratory alert model for the Shenandoah Valley, Virginia, USA
NASA Astrophysics Data System (ADS)
Hondula, David M.; Davis, Robert E.; Knight, David B.; Sitka, Luke J.; Enfield, Kyle; Gawtry, Stephen B.; Stenger, Phillip J.; Deaton, Michael L.; Normile, Caroline P.; Lee, Temple R.
2013-01-01
Respiratory morbidity (particularly COPD and asthma) can be influenced by short-term weather fluctuations that affect air quality and lung function. We developed a model to evaluate meteorological conditions associated with respiratory hospital admissions in the Shenandoah Valley of Virginia, USA. We generated ensembles of classification trees based on six years of respiratory-related hospital admissions (64,620 cases) and a suite of 83 potential environmental predictor variables. As our goal was to identify short-term weather linkages to high admission periods, the dependent variable was formulated as a binary classification of five-day moving average respiratory admission departures from the seasonal mean value. Accounting for seasonality removed the long-term apparent inverse relationship between temperature and admissions. We generated eight total models specific to the northern and southern portions of the valley for each season. All eight models demonstrate predictive skill (mean odds ratio = 3.635) when evaluated using a randomization procedure. The predictor variables selected by the ensembling algorithm vary across models, and both meteorological and air quality variables are included. In general, the models indicate complex linkages between respiratory health and environmental conditions that may be difficult to identify using more traditional approaches.
Variable Stars Observed in the Galactic Disk by AST3-1 from Dome A, Antarctica
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Lingzhi; Ma, Bin; Hu, Yi
AST3-1 is the second-generation wide-field optical photometric telescope dedicated to time-domain astronomy at Dome A, Antarctica. Here, we present the results of an i-band image survey from AST3-1 toward one Galactic disk field. Based on time-series photometry of 92,583 stars, 560 variable stars were detected with i magnitude ≤16.5 mag during eight days of observations; 339 of these are previously unknown variables. We tentatively classify the 560 variables as 285 eclipsing binaries (EW, EB, and EA), 27 pulsating variable stars (δ Scuti, γ Doradus, δ Cephei variable, and RR Lyrae stars), and 248 other types of variables (unclassified periodic, multiperiodic, and aperiodic variable stars). Of the eclipsing binaries, 34 show O'Connell effects. One of the aperiodic variables shows a plateau light curve and another variable shows a secondary maximum after peak brightness. We also detected a complex binary system with an RS CVn-like light-curve morphology; this object is being followed up spectroscopically using the Gemini South telescope.
Generating Variable Wind Profiles and Modeling Their Effects on Small-Arms Trajectories
2016-04-01
ARL-TR-7642 ● APR 2016. US Army Research Laboratory. Generating Variable Wind Profiles and Modeling Their Effects on Small-Arms Trajectories, by Timothy A Fargus, Weapons and Materials Research Directorate, ARL.
Enhancing Heart-Beat-Based Security for mHealth Applications.
Seepers, Robert M; Strydis, Christos; Sourdis, Ioannis; De Zeeuw, Chris I
2017-01-01
In heart-beat-based security, a security key is derived from the time difference between consecutive heart beats (the inter-pulse interval, IPI), which may, subsequently, be used to enable secure communication. While heart-beat-based security holds promise in mobile health (mHealth) applications, there currently exists no work that provides a detailed characterization of the delivered security in a real system. In this paper, we evaluate the strength of IPI-based security keys in the context of entity authentication. We investigate several aspects that should be considered in practice, including subjects with reduced heart-rate variability (HRV), different sensor-sampling frequencies, intersensor variability (i.e., how accurately each entity may measure heart beats), as well as average and worst-case authentication time. Contrary to the current state of the art, our evaluation demonstrates that authentication using multiple, less-entropic keys may actually increase the key strength by reducing the effects of intersensor variability. Moreover, we find that the maximal key strength of a 60-bit key varies between 29.2 bits and only 5.7 bits, depending on the subject's HRV. To improve security, we introduce the inter-multi-pulse interval (ImPI), a novel method of extracting entropy from the heart by considering the time difference between nonconsecutive heart beats. Given the same authentication time, using the ImPI for key generation increases key strength by up to 3.4× (+19.2 bits) for subjects with limited HRV, at the cost of an extended key-generation time of 4.8× (+45 s).
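The basic key-material extraction and the IPI-versus-ImPI distinction can be sketched as follows. This is a minimal illustration of the interval quantization idea only: real schemes additionally need synchronization, error correction, and privacy amplification, and the jitter model below is synthetic:

```python
import numpy as np

# Synthetic beat times: ~1 s rhythm plus heart-rate-variability jitter (ms).
rng = np.random.default_rng(3)
n_beats = 257
beat_times_ms = np.cumsum(1000 + rng.normal(0, 30, n_beats))

def interval_bits(times_ms, m=1, bits_per_interval=4):
    """LSBs of intervals between beats spaced m apart (m=1 gives the IPI,
    m>1 a non-overlapping inter-multi-pulse interval, ImPI)."""
    intervals = (times_ms[m::m] - times_ms[:-m:m]).astype(np.int64)
    key = []
    for iv in intervals:
        for b in range(bits_per_interval):
            key.append((iv >> b) & 1)       # least-significant bits first
    return np.array(key, dtype=np.uint8)

ipi_key = interval_bits(beat_times_ms, m=1)
impi_key = interval_bits(beat_times_ms, m=4)   # ImPI spanning 4 beats
print("IPI key bits:", len(ipi_key), " ImPI key bits:", len(impi_key))
```

Spanning m beats accumulates more timing jitter per interval at the cost of fewer intervals per unit time, which is the key-strength versus key-generation-time trade-off quantified in the abstract.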
A calculus based on a q-deformed Heisenberg algebra
Cerchiai, B. L.; Hinterding, R.; Madore, J.; ...
1999-04-27
We show how one can construct a differential calculus over an algebra where position variables x and momentum variables p have been defined. As the simplest example we consider the one-dimensional q-deformed Heisenberg algebra. This algebra has a subalgebra generated by x and its inverse, which we call the coordinate algebra. A physical field is considered to be an element of the completion of this algebra. We can construct a derivative which leaves the coordinate algebra invariant and so takes physical fields into physical fields. A generalized Leibniz rule for this algebra can be found. Based on this derivative, differential forms and an exterior differential calculus can be constructed.
Assessing risk based on uncertain avalanche activity patterns
NASA Astrophysics Data System (ADS)
Zeidler, Antonia; Fromm, Reinhard
2015-04-01
Avalanches may affect critical infrastructure and cause great economic losses. The planning horizon of infrastructure, e.g. hydropower generation facilities, reaches well into the future. Based on the results of previous studies on the effect of changing meteorological parameters (precipitation, temperature) on avalanche activity, we assume that the risk pattern will change in future. Decision makers need to understand what the future might bring in order to best formulate their mitigation strategies. We therefore explore commercial risk software to calculate risk for the coming years that might help in decision processes. The software, @RISK, is known to many larger companies, and we therefore explore its capability to include avalanche risk simulations in order to guarantee comparability of different risks. In a first step, we develop a model for a hydropower generation facility that reflects the problem of changing avalanche activity patterns in future by selecting relevant input parameters and assigning likely probability distributions. The uncertain input variables include the probability of avalanches affecting an object, the vulnerability of an object, the expected cost of repairing the object and the expected cost due to interruption. The crux is to find the distribution that best represents the input variables under changing meteorological conditions. Our focus is on including the uncertain probability of avalanches based on the analysis of past avalanche data and expert knowledge. In order to explore different likely outcomes we base the analysis on three different climate scenarios (likely, worst case, baseline). For some variables it is possible to fit a distribution to historical data, whereas in cases where the past dataset is insufficient or unavailable the software allows selecting from over 30 different distribution types.
The Monte Carlo simulation samples the probability distributions of the uncertain variables, using all valid combinations of the input variable values to simulate the possible outcomes. In our case the output is the expected risk (Euro/year) for each object considered (e.g. a water intake) and for the entire hydropower generation system. The output is again a distribution that is interpreted by the decision makers, as the final strategy depends on the needs and requirements of the end-user, which may be driven by personal preferences. In this presentation, we will show how we used the uncertain information on future avalanche activity in commercial risk software, thereby bringing the knowledge of natural hazard experts to decision makers.
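The simulation logic just described (per-object losses combining occurrence probability, vulnerability, and repair plus interruption costs, aggregated over the system) can be sketched with toy distributions. The object names, probabilities, and cost parameters below are hypothetical placeholders, not the study's fitted inputs:

```python
import numpy as np

rng = np.random.default_rng(11)
n_sim = 100_000

# Hypothetical objects of a hydropower system: annual hit probability and
# (mean, std) of repair and interruption costs in EUR.
objects = {
    "water intake": dict(p_hit=0.05, cost_repair=(2e5, 5e4), cost_int=(1e5, 3e4)),
    "penstock":     dict(p_hit=0.02, cost_repair=(8e5, 2e5), cost_int=(4e5, 1e5)),
}

total = np.zeros(n_sim)
for name, o in objects.items():
    hit = rng.random(n_sim) < o["p_hit"]            # avalanche reaches object
    vuln = rng.beta(2, 5, n_sim)                    # damaged fraction
    repair = rng.normal(*o["cost_repair"], n_sim).clip(0)
    interrupt = rng.normal(*o["cost_int"], n_sim).clip(0)
    loss = hit * vuln * (repair + interrupt)        # annual loss draws
    total += loss
    print(f"{name}: expected risk {loss.mean():,.0f} EUR/yr")

print(f"system: expected risk {total.mean():,.0f} EUR/yr")
```

Replacing `p_hit` with scenario-dependent values (likely, worst case, baseline) would reproduce the comparison of climate scenarios described above, with the full output distribution, not just the mean, available to the decision maker.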
NASA Astrophysics Data System (ADS)
Gerlitz, Lars; Gafurov, Abror; Apel, Heiko; Unger-Sayesteh, Katy; Vorogushyn, Sergiy; Merz, Bruno
2016-04-01
Statistical climate forecast applications typically utilize a small set of large-scale SST or climate indices, such as ENSO, PDO or AMO, as predictor variables. If the predictive skill of these large-scale modes is insufficient, specific predictor variables such as customized SST patterns are frequently included. Hence statistically based climate forecast models are either based on a fixed number of climate indices (and thus might not consider important predictor variables) or are highly site-specific and barely transferable to other regions. With the aim of developing an operational seasonal forecast model that is easily transferable to any region in the world, we present a generic data mining approach which automatically selects potential predictors from gridded SST observations and reanalysis-derived large-scale atmospheric circulation patterns and generates robust statistical relationships with posterior precipitation anomalies for user-selected target regions. Potential predictor variables are derived by means of a cellwise correlation analysis of precipitation anomalies with gridded global climate variables under consideration of varying lead times. Significantly correlated grid cells are subsequently aggregated into predictor regions by means of a variability-based cluster analysis. Finally, for every month and lead time, an individual random forest based forecast model is automatically calibrated and evaluated by means of the previously generated predictor variables. The model is exemplarily applied and evaluated for selected headwater catchments in Central and South Asia. Particularly for winter and spring precipitation (which is associated with westerly disturbances in the entire target domain) the model shows solid results, with correlation coefficients up to 0.7, although the variability of precipitation rates is highly underestimated.
Likewise, for the monsoonal precipitation amounts in the South Asian target areas a certain skill of the model could be detected. The skill of the model for the dry summer season in Central Asia and the transition seasons over South Asia is found to be low. A sensitivity analysis by means of well-known climate indices reveals the major large-scale controlling mechanisms for the seasonal precipitation climate of each target area. For the Central Asian target areas, both the El Nino Southern Oscillation (ENSO) and the North Atlantic Oscillation (NAO) are identified as important controlling factors for precipitation totals during the moist spring season. Drought conditions are found to be triggered by a warm ENSO phase in combination with a positive phase of the NAO. For the monsoonal summer precipitation amounts over South Asia, the model suggests a distinct negative response to El Nino events.
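The screening-and-aggregation step described above can be sketched with synthetic data. The grid size, the top-k selection used in place of a significance test, and the simple averaging used in place of the paper's variability-based cluster analysis are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 40 years of a target-region precipitation
# anomaly and a 10x10 grid of SST anomalies at some lead time.
n_years, ny, nx = 40, 10, 10
sst = rng.standard_normal((n_years, ny, nx))
# Make a 2x2 block of cells genuinely predictive of precipitation.
signal = sst[:, :2, :2].mean(axis=(1, 2))
precip = signal + 0.2 * rng.standard_normal(n_years)

# Cellwise Pearson correlation of precipitation with every grid cell.
sst_flat = sst.reshape(n_years, -1)
sst_z = (sst_flat - sst_flat.mean(0)) / sst_flat.std(0)
p_z = (precip - precip.mean()) / precip.std()
r = (sst_z * p_z[:, None]).mean(axis=0)

# Aggregate the most strongly correlated cells into one predictor region
# (a stand-in for the paper's variability-based cluster analysis);
# sign-align cells so negatively correlated cells do not cancel out.
top = np.argsort(-np.abs(r))[:5]
predictor = (np.sign(r[top]) * sst_flat[:, top]).mean(axis=1)

r_pred = np.corrcoef(predictor, precip)[0, 1]
print(f"aggregated predictor correlates at r = {r_pred:.2f}")
```

The aggregated predictor would then feed a per-month, per-lead-time regression model such as a random forest.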
NASA Astrophysics Data System (ADS)
Ahmadian, Radin
2010-09-01
This study investigated the relationship between the anthocyanin concentration of different organic fruit species and the output voltage and current in a TiO2 dye-sensitized solar cell (DSSC), and hypothesized that fruits with greater anthocyanin concentration produce a higher maximum power point (MPP), which would lead to higher current and voltage. Anthocyanin dye solution was made by crushing a group of fresh fruits with different anthocyanin contents in 2 mL of de-ionized water and filtering the mixture. Using these test fruit dyes, multiple DSSCs were assembled such that light enters through the TiO2 side of the cell. The full current-voltage (I-V) co-variations were measured using a 500 Ω potentiometer as a variable load. Point-by-point current and voltage data pairs were measured at various incremental resistance values. The maximum power point (MPP) generated by the solar cell was defined as the dependent variable and the anthocyanin concentration in the fruit used in the DSSC as the independent variable. A regression model was used to investigate the linear relationship between the study variables. Regression analysis showed a significant linear relationship between MPP and anthocyanin concentration, with a p-value of 0.007. Fruits like blueberry and black raspberry, with the highest anthocyanin content, generated higher MPP. In a DSSC, a linear model may predict MPP based on the anthocyanin concentration. This model is a first step toward finding organic anthocyanin sources in nature with the highest dye concentration for generating energy.
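As a worked sketch, the MPP extraction from an I-V sweep and the linear fit can be reproduced with invented numbers; the diode-like curve, the concentrations, and the current response below are assumptions, not the study's measurements:

```python
import numpy as np

# Idealized I-V curve for one dye-sensitized cell (illustrative model).
def iv_curve(isc, voc, v):
    """Current at load voltage v for a cell with short-circuit current
    isc and open-circuit voltage voc (simple exponential knee)."""
    return isc * (1.0 - np.expm1(3.0 * v / voc) / np.expm1(3.0))

# Hypothetical anthocyanin concentrations (mg/100 g) and the short-circuit
# currents they might produce; both are assumed values for illustration.
conc = np.array([25.0, 60.0, 120.0, 245.0, 365.0])  # e.g. strawberry .. blueberry
isc = 0.05 + 0.004 * conc                            # mA, assumed linear response
voc = 0.45                                           # V

# Maximum power point: sweep the load voltage, take the max of P = I * V.
v = np.linspace(0.0, voc, 200)
mpp = np.array([np.max(iv_curve(i, voc, v) * v) for i in isc])

# Least-squares line MPP ~ a * conc + b, as in the study's regression.
a, b = np.polyfit(conc, mpp, 1)
print(f"slope = {a:.5f} mW per unit concentration")
```

With this toy model the MPP scales linearly with short-circuit current, so the fitted slope is positive by construction, mirroring the reported trend.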
NASA Astrophysics Data System (ADS)
Pandey, Suraj
This study develops a spatial mapping of agro-ecological zones based on an earth-observation model using a regional MODIS dataset as a tool to identify key cropping-system areas and to target climate change strategies. The tool is applied to the Indo-Gangetic Plains of north India to delineate domains of bio-physical characteristics and socio-economics with respect to the changing climate in the region. It draws on secondary data for spatially explicit variables at the state/district level, which serve as indicators of climate variability based on the sustainable livelihood approach (natural, social and human capital). The study details the methodology used and generates spatial climate-risk maps for composite livelihood and vulnerability indices in the region.
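A composite index of this kind is typically a weighted sum of normalized indicators. A minimal sketch with invented district values and equal weights follows; the study's actual indicators and weighting scheme are not specified here:

```python
import numpy as np

# Hypothetical district-level indicators (rows: districts) for the three
# capital dimensions named in the abstract: natural, social, human.
indicators = np.array([
    # natural, social, human
    [0.62, 0.40, 0.55],
    [0.30, 0.75, 0.48],
    [0.81, 0.22, 0.35],
    [0.45, 0.58, 0.70],
])

# Min-max normalize each indicator to [0, 1] so units are comparable.
lo, hi = indicators.min(axis=0), indicators.max(axis=0)
norm = (indicators - lo) / (hi - lo)

# Equal-weight composite livelihood/vulnerability index per district.
weights = np.array([1 / 3, 1 / 3, 1 / 3])
index = norm @ weights
ranking = np.argsort(-index)  # highest to lowest composite index
print(index.round(3), ranking)
```

Mapped back onto district polygons, such an index yields the spatial climate-risk maps the abstract describes.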
Misut, P.
1995-01-01
Ninety shallow monitoring wells on Long Island, N.Y., were used to test the hypothesis that the correlation between the detection of volatile organic compounds (VOCs) at a well and explanatory variables representing land use, population density, and hydrogeologic conditions around the well is affected by the size and shape of the area defined as the contributing area. Explanatory variables are quantified through overlay of various specified contributing areas on 1:24 000-scale land-use and population-density geographic information system (GIS) coverages. Four methods of contributing-area delineation were used: (a) centering a circle of selected radius on the well site, (b) orienting a triangular area along the direction of horizontal ground-water flow to the well, (c) generating a shape based on the direction and magnitude of horizontal flow to the well, and (d) generating a shape based on three-dimensional particle pathlines backtracked from the well screen to the water table. The strongest correlations with VOC detections were obtained from circles of 400- to 1 000-meter radius. Improvement in correlation through delineations based on ground-water flow would require geographic overlay on more highly detailed GIS coverages than those used in the study.
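Method (a), the simplest of the four delineations, can be sketched as a raster overlay; the land-use grid and well position below are synthetic stand-ins for the 1:24 000-scale GIS coverages:

```python
import numpy as np

# Toy 100x100 land-use raster at 24 m cells: code 1 = developed,
# 0 = undeveloped (values invented for illustration).
rng = np.random.default_rng(1)
cell = 24.0  # metres per cell
landuse = (rng.random((100, 100)) < 0.3).astype(int)

def landuse_fraction(well_rc, radius_m):
    """Method (a): centre a circle of the given radius on the well and
    return the fraction of developed cells inside it."""
    rows, cols = np.indices(landuse.shape)
    dist = cell * np.hypot(rows - well_rc[0], cols - well_rc[1])
    inside = dist <= radius_m
    return landuse[inside].mean()

well = (50, 50)
for radius in (400.0, 1000.0):
    frac = landuse_fraction(well, radius)
    print(f"r = {radius:6.0f} m -> developed fraction {frac:.3f}")
```

The resulting fraction is the explanatory variable that would be correlated against VOC detections across the 90 wells.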
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guba, O.; Taylor, M. A.; Ullrich, P. A.
2014-11-27
We evaluate the performance of the Community Atmosphere Model's (CAM) spectral element method on variable-resolution grids using the shallow-water equations in spherical geometry. We configure the method as it is used in CAM, with dissipation of grid-scale variance implemented using hyperviscosity. Hyperviscosity is highly scale selective and grid independent, but does require a resolution-dependent coefficient. For the spectral element method with variable-resolution grids and highly distorted elements, we obtain the best results if we introduce a tensor-based hyperviscosity with tensor coefficients tied to the eigenvalues of the local element metric tensor. The tensor hyperviscosity is constructed so that, for regions of uniform resolution, it matches the traditional constant-coefficient hyperviscosity. With the tensor hyperviscosity, the large-scale solution is almost completely unaffected by the presence of grid refinement. This latter point is important for climate applications in which long-term climatological averages can be imprinted by stationary inhomogeneities in the truncation error. We also evaluate the robustness of the approach with respect to grid quality by considering unstructured conforming quadrilateral grids generated with a well-known grid-generating toolkit and grids generated by SQuadGen, a new open-source alternative which produces lower-valence nodes.
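A hedged sketch of the tensor coefficient construction: the abstract states only that the coefficients are tied to the eigenvalues of the local element metric tensor, so the scaling exponent `p`, the reference values, and the eigenvalue-to-spacing mapping below are illustrative assumptions, not CAM's actual tuning:

```python
import numpy as np

def tensor_hyperviscosity(M, nu0=1.0e15, dx_ref=1.0e5, p=3.2):
    """Anisotropic hyperviscosity coefficient for one element, built from
    the eigen-decomposition of its 2x2 metric tensor M (sketch only)."""
    evals, evecs = np.linalg.eigh(M)
    dx = np.sqrt(evals)                # local grid spacings along principal axes
    coeff = nu0 * (dx / dx_ref) ** p   # assumed resolution-dependent scaling
    return evecs @ np.diag(coeff) @ evecs.T

# Uniform-resolution element: recovers an isotropic (constant) coefficient.
M_uniform = np.diag([1.0e10, 1.0e10])  # dx = 1e5 m in both directions
nu_uniform = tensor_hyperviscosity(M_uniform)

# Refined, distorted element: anisotropic coefficient, smaller where finer.
M_refined = np.diag([2.5e9, 1.0e10])   # dx = 5e4 m along one axis
nu_refined = tensor_hyperviscosity(M_refined)
print(np.diag(nu_uniform), np.diag(nu_refined))
```

The design point the abstract emphasizes survives in the sketch: where resolution is uniform, the tensor collapses to the traditional constant coefficient.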
Gu, Linlin; Krendelchtchikova, Valentina; Krendelchtchikov, Alexandre; Farrow, Anitra L; Derdeyn, Cynthia A; Matthews, Qiana L
2016-01-01
Adenoviral (Ad) vectors in combination with the "Antigen Capsid-Incorporation" strategy have been applied in developing HIV-1 vaccines, owing to the vectors' ability to incorporate capsid-displayed antigens and induce immunity against them. Variable loop 2 (V2)-specific antibodies were suggested in the RV144 trial to correlate with reduced HIV-1 acquisition, which highlights the importance of developing novel HIV-1 vaccines that target the V2 loop. Therefore, the V2 loop of HIV-1 has been incorporated into the Ad capsid protein. We generated adenovirus serotype 5 (Ad5) vectors displaying variable loop 2 (V2) of HIV-1 gp120 with the "Antigen Capsid-Incorporation" strategy. To assess the incorporation capabilities of hexon hypervariable region 1 (HVR1) and protein IX (pIX), 20 aa or full-length (43 aa) V2 and V1V2 (67 aa) were incorporated, respectively. Immunizations with the recombinant vectors significantly generated antibodies against both linear and discontinuous V2 epitopes. The immunizations generated durable humoral immunity against V2. This study will lead to more stringent development of various serotypes of adenovirus-vectored V2 vaccine candidates, based on breakthroughs regarding the immunogenicity of V2. Copyright © 2015. Published by Elsevier Inc.
Reliability Assessment Approach for Stirling Convertors and Generators
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Schreiber, Jeffrey G.; Zampino, Edward; Best, Timothy
2004-01-01
Stirling power conversion is being considered for use in a Radioisotope Power System for deep-space science missions because it offers a multifold increase in the conversion efficiency of heat to electric power. Quantifying the reliability of a Radioisotope Power System that utilizes Stirling power conversion technology is important in developing and demonstrating the capability for long-term success. A description of the Stirling power convertor is provided, along with a discussion of some of the key components. Ongoing efforts to understand component life, design variables at the component and system levels, related sources, and the nature of uncertainties are discussed. The requirement for reliability also is discussed, and some of the critical areas of concern are identified. A section on the objectives of the performance-model development and a computation of reliability is included to highlight the goals of this effort. Also, a viable physics-based reliability plan to model the design-level variable uncertainties at the component and system levels is outlined, and potential benefits are elucidated. The plan involves the interaction of different disciplines, maintaining the physical and probabilistic correlations at all levels, and a verification process based on rational short-term tests. In addition, both top-down and bottom-up coherency are maintained to follow the physics-based design process and mission requirements. The outlined reliability assessment approach provides guidelines to improve the design and identifies governing variables to achieve high reliability in the Stirling Radioisotope Generator design.
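The physics-based plan of propagating component-level design-variable uncertainty up to a system reliability number can be illustrated by Monte Carlo sampling; the stress/strength limit state and every distribution below are invented for illustration, not taken from the Stirling design:

```python
import numpy as np

rng = np.random.default_rng(2)

# Sample component-level design-variable uncertainties.
n = 200_000
strength = rng.normal(300.0, 15.0, n)     # MPa, component capability
stress = rng.normal(220.0, 25.0, n)       # MPa, mission loading
degradation = rng.normal(0.01, 0.003, n)  # fractional strength loss per year
years = 14.0                              # assumed design mission life

# Limit state: end-of-life strength must still exceed the operating stress.
end_of_life_strength = strength * (1.0 - degradation * years)
failed = end_of_life_strength < stress
reliability = 1.0 - failed.mean()
print(f"estimated reliability over {years:.0f} y: {reliability:.4f}")
```

Identifying which sampled variable most often drives the failures is then the sensitivity step that flags the "governing variables" the abstract mentions.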
Price vs. Performance: The Value of Next Generation Fighter Aircraft
2007-03-01
forms. Both the semi-log and log-log forms were plagued with heteroskedasticity (according to the Breusch-Pagan/Cook-Weisberg test). The RDT&E models...from 1949-present were used to construct two models – one based on procurement costs and one based on research, development, test, and evaluation (RDT&E...fighter aircraft hedonic models include several different categories of variables. Aircraft procurement costs and research, development, test, and
Solar energy thermally powered electrical generating system
NASA Technical Reports Server (NTRS)
Owens, William R. (Inventor)
1989-01-01
A thermally powered electrical generating system for use in a space vehicle is disclosed. The rate of storage in a thermal energy storage medium is controlled, without exceeding a maximum quantity of heat, by varying the rates of generation and dissipation of electrical energy in a thermally powered electrical generating system that is powered from heat stored in the thermal energy storage medium. A control system (10) varies the rate at which electrical energy is generated by the electrical generating system and the rate at which electrical energy is consumed by a variable parasitic electrical load, so as to store an amount of thermal energy in the thermal energy storage system, at the end of a period of insolation, sufficient to satisfy the scheduled demand for electrical power to be generated during the next period of eclipse. The control system is based upon Kalman filter theory.
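A minimal scalar Kalman filter in the spirit of the disclosed control system, estimating stored thermal energy from noisy measurements while a known insolation/draw balance acts as the control input (all quantities are illustrative, not from the patent):

```python
import numpy as np

rng = np.random.default_rng(3)

q_true = 100.0        # MJ actually stored in the thermal medium
process_var = 0.5     # variance of unmodelled heat-flow disturbances
meas_var = 16.0       # variance of the temperature-derived measurement

q_est, p_est = 80.0, 100.0  # initial estimate and its error variance
for step in range(50):
    u = 2.0 - 1.5           # MJ/step: insolation input minus generation draw
    q_true += u + rng.normal(0.0, np.sqrt(process_var))
    # Predict: propagate the estimate through the known energy balance.
    q_est += u
    p_est += process_var
    # Update: blend in a noisy measurement of stored heat.
    z = q_true + rng.normal(0.0, np.sqrt(meas_var))
    k = p_est / (p_est + meas_var)  # Kalman gain
    q_est += k * (z - q_est)
    p_est *= (1.0 - k)

print(f"estimate {q_est:.1f} MJ vs true {q_true:.1f} MJ")
```

The controller would compare this filtered estimate against the eclipse-period demand and adjust the parasitic load accordingly.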
Variable context Markov chains for HIV protease cleavage site prediction.
Oğul, Hasan
2009-06-01
Deciphering the specificity of HIV protease and developing computational tools for detecting its cleavage sites in a protein polypeptide chain are very desirable for designing efficient and specific chemical inhibitors to prevent acquired immunodeficiency syndrome. In this study, we developed a generative model based on a generalization of variable-order Markov chains (VOMC) for peptide sequences and adapted the model for the prediction of their cleavability by certain proteases. The new method, called variable context Markov chains (VCMC), attempts to identify context equivalence based on the evolutionary similarities between individual amino acids. It was applied to the HIV-1 protease cleavage site prediction problem and shown to outperform existing methods in terms of prediction accuracy on a common dataset. In general, the method is a promising tool for predicting the cleavage sites of all proteases, and its use is also encouraged for any kind of peptide classification problem.
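The VOMC baseline that VCMC generalizes can be sketched as context counting with fallback to shorter contexts; the toy peptides below are invented, and the evolutionary context-equivalence step of VCMC is not reproduced:

```python
from collections import defaultdict

def train(seqs, k=3):
    """Count symbol occurrences after every context of length 0..k."""
    counts = defaultdict(lambda: defaultdict(int))
    for s in seqs:
        for i in range(len(s)):
            for order in range(min(k, i) + 1):
                counts[s[i - order:i]][s[i]] += 1
    return counts

def prob(counts, context, symbol, k=3):
    """Back off from the longest trained context to the empty one."""
    for order in range(min(k, len(context)), -1, -1):
        ctx = context[len(context) - order:]
        if ctx in counts and symbol in counts[ctx]:
            total = sum(counts[ctx].values())
            return counts[ctx][symbol] / total
    return 1e-6  # floor probability for symbols never seen in training

# Toy "cleavable" octamers (invented, not real HIV-1 substrate data).
cleavable = ["SQNYPIVQ", "ARVLAEAM", "SQNYPIVR"]
model = train(cleavable)
p_seen = prob(model, "SQN", "Y")    # context observed in training
p_unseen = prob(model, "SQN", "W")  # symbol never observed
print(p_seen, p_unseen)
```

Classification then scores a candidate site by the product (or log-sum) of such conditional probabilities under cleavable vs. non-cleavable models.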
Remote creation of hybrid entanglement between particle-like and wave-like optical qubits
NASA Astrophysics Data System (ADS)
Morin, Olivier; Huang, Kun; Liu, Jianli; Le Jeannic, Hanna; Fabre, Claude; Laurat, Julien
2014-07-01
The wave-particle duality of light has led to two different encodings for optical quantum information processing. Several approaches have emerged based either on particle-like discrete-variable states (that is, finite-dimensional quantum systems) or on wave-like continuous-variable states (that is, infinite-dimensional systems). Here, we demonstrate the generation of entanglement between optical qubits of these different types, located at distant places and connected by a lossy channel. Such hybrid entanglement, which is a key resource for a variety of recently proposed schemes, including quantum cryptography and computing, enables information to be converted from one Hilbert space to the other via teleportation and therefore the connection of remote quantum processors based upon different encodings. Beyond its fundamental significance for the exploration of entanglement and its possible instantiations, our optical circuit holds promise for implementations of heterogeneous networks, where discrete- and continuous-variable operations and techniques can be efficiently combined.
Teleportation-based continuous variable quantum cryptography
NASA Astrophysics Data System (ADS)
Luiz, F. S.; Rigolin, Gustavo
2017-03-01
We present a continuous variable (CV) quantum key distribution (QKD) scheme based on the CV quantum teleportation of coherent states that yields a raw secret key made up of discrete variables for both Alice and Bob. This protocol preserves the efficient detection schemes of current CV technology (no single-photon detection techniques) and, at the same time, has efficient error correction and privacy amplification schemes due to the binary modulation of the key. We show that for a certain type of incoherent attack, it is secure for almost any value of the transmittance of the optical line used by Alice to share entangled two-mode squeezed states with Bob (no 3 dB or 50% loss limitation characteristic of beam-splitting attacks). The present CVQKD protocol works deterministically (no postselection needed) with efficient direct reconciliation techniques (no reverse reconciliation) to generate a secure key, even beyond the 50% loss case, at the incoherent-attack level.
Pe'er, Guy; Zurita, Gustavo A.; Schober, Lucia; Bellocq, Maria I.; Strer, Maximilian; Müller, Michael; Pütz, Sandro
2013-01-01
Landscape simulators are widely applied in landscape ecology for generating landscape patterns. These models can be divided into two categories: pattern-based models that generate spatial patterns irrespective of the processes that shape them, and process-based models that attempt to generate patterns based on the processes that shape them. The latter often tend toward complexity in an attempt to obtain high predictive precision, but are rarely used for generic or theoretical purposes. Here we show that a simple process-based simulator can generate a variety of spatial patterns including realistic ones, typifying landscapes fragmented by anthropogenic activities. The model “G-RaFFe” generates roads and fields to reproduce the processes in which forests are converted into arable lands. For a selected level of habitat cover, three factors dominate its outcomes: the number of roads (accessibility), maximum field size (accounting for land ownership patterns), and maximum field disconnection (which enables fields to be detached from roads). We compared the performance of G-RaFFe to three other models: Simmap (neutral model), Qrule (fractal-based) and Dinamica EGO (with 4 model versions differing in complexity). A PCA-based analysis indicated G-RaFFe and Dinamica version 4 (most complex) to perform best in matching realistic spatial patterns, but an alternative analysis which considers model variability identified G-RaFFe and Qrule as performing best. We also found model performance to be affected by habitat cover and the actual land-uses, the latter reflecting on land ownership patterns. We suggest that simple process-based generators such as G-RaFFe can be used to generate spatial patterns as templates for theoretical analyses, as well as for gaining better understanding of the relation between spatial processes and patterns.
We suggest caution in applying neutral or fractal-based approaches, since spatial patterns that typify anthropogenic landscapes are often non-fractal in nature. PMID:23724108
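The three controls named above (number of roads, maximum field size, field placement relative to roads) suggest a minimal G-RaFFe-like generator; the rules below are a simplified sketch, not the published model:

```python
import numpy as np

rng = np.random.default_rng(4)

def generate(size=100, n_roads=4, max_field=12, target_forest=0.6):
    """Carve roads into a forest grid, then grow rectangular fields that
    start on a road, until forest cover falls to the target level."""
    forest = np.ones((size, size), dtype=bool)  # True = forest habitat
    road_rows = rng.choice(size, n_roads, replace=False)
    forest[road_rows, :] = False                # roads cross the landscape
    while forest.mean() > target_forest:
        # Start a field on a random road cell and extend it.
        r = rng.choice(road_rows)
        c = rng.integers(0, size)
        h = rng.integers(1, max_field + 1)
        w = rng.integers(1, max_field + 1)
        forest[r:r + h, c:c + w] = False
    return forest

landscape = generate()
print(f"forest cover: {landscape.mean():.2f}")
```

Varying `n_roads` and `max_field` reproduces the qualitative effect of the accessibility and ownership-pattern controls on the resulting fragmentation.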
A statistical shape model of the human second cervical vertebra.
Clogenson, Marine; Duff, John M; Luethi, Marcel; Levivier, Marc; Meuli, Reto; Baur, Charles; Henein, Simon
2015-07-01
Statistical shape and appearance models play an important role in reducing the segmentation processing time of a vertebra and in improving results for 3D model development. Here, we describe the different steps in generating a statistical shape model (SSM) of the second cervical vertebra (C2) and provide the shape model for general use by the scientific community. The main difficulties in its construction are the morphological complexity of the C2 and its variability in the population. The input dataset is composed of manually segmented anonymized patient computerized tomography (CT) scans. The alignment of the different datasets is done with Procrustes alignment on surface models, and the registration is then cast as a model-fitting problem using a Gaussian process. A principal component analysis (PCA)-based model is generated that captures the variability of the C2. The SSM was generated using 92 CT scans. The resulting SSM was evaluated for specificity, compactness and generalization ability. The SSM of the C2 is freely available to the scientific community in Slicer (an open source software for image analysis and scientific visualization) with a module created to visualize the SSM using Statismo, a framework for statistical shape modeling. The SSM of the vertebra allows the shape variability of the C2 to be represented. Moreover, the SSM will enable semi-automatic segmentation and 3D model generation of the vertebra, which would greatly benefit surgery planning.
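The PCA step of an SSM is compact enough to sketch: with aligned training shapes as rows, the model is the mean plus the leading principal components, and new shapes are sampled by weighting those modes. The landmarks below are random stand-ins, not C2 data:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic training set: 30 aligned shapes of 50 3-D landmarks each,
# generated as a mean plus two latent modes of variation plus noise.
n_shapes, n_points = 30, 50
mean_shape = rng.standard_normal(n_points * 3)
modes_true = rng.standard_normal((2, n_points * 3))
b = rng.standard_normal((n_shapes, 2)) * np.array([3.0, 1.5])
X = mean_shape + b @ modes_true + 0.05 * rng.standard_normal((n_shapes, n_points * 3))

# Build the SSM: mean and principal components of the centred data.
mu = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
var = s**2 / (n_shapes - 1)
explained = var[:2].sum() / var.sum()

# Synthesize a plausible new shape by weighting the first two modes.
coeffs = np.array([1.0, -0.5]) * np.sqrt(var[:2])
new_shape = mu + coeffs @ Vt[:2]
print(f"two modes explain {explained:.1%} of variance")
```

Compactness, one of the evaluation criteria named in the abstract, is exactly this explained-variance curve as a function of the number of retained modes.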
Implementation of continuous-variable quantum key distribution with discrete modulation
NASA Astrophysics Data System (ADS)
Hirano, Takuya; Ichikawa, Tsubasa; Matsubara, Takuto; Ono, Motoharu; Oguri, Yusuke; Namiki, Ryo; Kasai, Kenta; Matsumoto, Ryutaroh; Tsurumaru, Toyohiro
2017-06-01
We have developed a continuous-variable quantum key distribution (CV-QKD) system that employs discrete quadrature-amplitude modulation and homodyne detection of coherent states of light. We experimentally demonstrated automated secure key generation with a rate of 50 kbps when a quantum channel is a 10 km optical fibre. The CV-QKD system utilises a four-state and post-selection protocol and generates a secure key against the entangling cloner attack. We used a pulsed light source of 1550 nm wavelength with a repetition rate of 10 MHz. A commercially available balanced receiver is used to realise shot-noise-limited pulsed homodyne detection. We used a non-binary LDPC code for error correction (reverse reconciliation) and the Toeplitz matrix multiplication for privacy amplification. A graphical processing unit card is used to accelerate the software-based post-processing.
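The sifting logic of a four-state protocol with post-selection can be illustrated by a toy homodyne simulation; the displacement amplitude, threshold, and lossless channel below are assumptions chosen for clarity, not the experimental parameters:

```python
import numpy as np

rng = np.random.default_rng(6)

n, amp, cut = 100_000, 1.2, 0.8
alice_bits = rng.integers(0, 2, n)             # bit encoded in displacement sign
signal = np.where(alice_bits == 1, amp, -amp)  # one quadrature of the QAM states
x = signal + rng.standard_normal(n)            # homodyne outcome with unit shot noise

keep = np.abs(x) > cut                         # post-selection on outcome magnitude
bob_bits = (x > 0).astype(int)
qber = (bob_bits[keep] != alice_bits[keep]).mean()
sift = keep.mean()
print(f"kept {sift:.2f} of pulses, QBER {qber:.3f}")
```

Raising the post-selection threshold trades raw key rate for a lower error rate, which is the knob the real protocol tunes against channel loss.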
New-generation diabetes management: glucose sensor-augmented insulin pump therapy
Cengiz, Eda; Sherr, Jennifer L; Weinzimer, Stuart A; Tamborlane, William V
2011-01-01
Diabetes is one of the most common chronic disorders, with an increasing incidence worldwide. Technologic advances in the field of diabetes have provided new tools for clinicians to manage this challenging disease. For example, the development of continuous subcutaneous insulin infusion systems has allowed for refinement in the delivery of insulin, while continuous glucose monitors provide patients and clinicians with a better understanding of minute-to-minute glucose variability, leading to the titration of insulin delivery based on this variability when applicable. Merging of these devices has resulted in sensor-augmented insulin pump therapy, which became a major building block upon which the artificial pancreas (closed-loop systems) can be developed. This article summarizes the evolution of sensor-augmented insulin pump therapy until the present day and its future applications in new-generation diabetes management. PMID:21728731
Stagnation point flow of viscoelastic nanomaterial over a stretched surface
NASA Astrophysics Data System (ADS)
Hayat, T.; Kiyani, M. Z.; Ahmad, I.; Khan, M. Ijaz; Alsaedi, A.
2018-06-01
The present communication discusses magnetohydrodynamic (MHD) stagnation point flow of a Jeffrey nanofluid by a stretching cylinder. Modeling is based upon Brownian motion, thermophoresis, thermal radiation and heat generation. The problem is solved using the homotopy analysis method (HAM). Residual errors for the h-curves are plotted. Convergent solutions for velocity, temperature and concentration are obtained. The skin friction coefficient, local Nusselt number and Sherwood number are studied. It is found that the velocity field decays for larger values of the magnetic variable. Furthermore, the temperature and concentration fields are enhanced for a larger magnetic variable.
The sixth generation robot in space
NASA Technical Reports Server (NTRS)
Butcher, A.; Das, A.; Reddy, Y. V.; Singh, H.
1990-01-01
The knowledge-based simulator developed in the artificial intelligence laboratory has become a working test bed for experimenting with intelligent reasoning architectures. With this simulator, small experiments have recently been done with the aim of simulating robot behavior that avoids colliding paths. An automatic extension of such experiments to intelligently planning robots in space demands advanced reasoning architectures. One such architecture for general purpose problem solving is explored. The robot, seen as a knowledge-base machine, proceeds via a predesigned abstraction mechanism for problem understanding and response generation. The three phases in one such abstraction scheme are: abstraction for representation, abstraction for evaluation, and abstraction for resolution. Such abstractions require multimodality, which in turn requires the use of intensional variables to deal with beliefs in the system. Abstraction mechanisms help in synthesizing possible propagating lattices for such beliefs. The machine controller enters into a sixth generation paradigm.
Market Mechanism Design for Renewable Energy based on Risk Theory
NASA Astrophysics Data System (ADS)
Yang, Wu; Bo, Wang; Jichun, Liu; Wenjiao, Zai; Pingliang, Zeng; Haobo, Shi
2018-02-01
Generation trading between renewable energy and thermal power is an efficient market means of transforming the supply structure of electric power into a sustainable development pattern. But the trading is currently hampered by the output fluctuations of renewable energy and the cost differences between renewable energy and thermal power. In this paper, the external environmental cost (EEC) is defined and introduced into the generation cost. At the same time, incentive functions for renewable energy and low-emission thermal power are designed as decreasing functions of the EEC. On this basis, to address the market risks caused by the random variability of the EEC, a decision-making model of generation trading between renewable energy and thermal power is constructed according to risk theory. The feasibility and effectiveness of the proposed model are verified by simulation results.
Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao
2014-10-07
In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
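The WBMS-plus-shrinkage loop can be sketched as follows; the holdout-RMSE scoring and the simple frequency-based weight update are simplified stand-ins for VISSA's model population analysis machinery:

```python
import numpy as np

rng = np.random.default_rng(7)

# 100 samples, 20 candidate variables; only variables 0 and 3 are informative.
n, p = 100, 20
X = rng.standard_normal((n, p))
y = X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.standard_normal(n)
X_tr, X_te, y_tr, y_te = X[:70], X[70:], y[:70], y[70:]

weights = np.full(p, 0.5)  # inclusion probability per variable
for _ in range(10):
    # Weighted binary matrix sampling: one row per candidate sub-model.
    B = rng.random((200, p)) < weights
    B |= ~B.any(axis=1, keepdims=True)  # guard against empty subsets
    rmse = np.empty(len(B))
    for i, row in enumerate(B):
        beta, *_ = np.linalg.lstsq(X_tr[:, row], y_tr, rcond=None)
        rmse[i] = np.sqrt(np.mean((X_te[:, row] @ beta - y_te) ** 2))
    best = B[np.argsort(rmse)[:20]]  # keep the 20 best sub-models
    weights = best.mean(axis=0)      # shrink toward the informative space

selected = np.flatnonzero(weights > 0.5)
print("selected variables:", selected)
```

The two rules from the abstract appear directly: the sampled variable space shrinks each iteration, and it does so only toward sub-models that outperform the previous population.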
Key-Generation Algorithms for Linear Piece In Hand Matrix Method
NASA Astrophysics Data System (ADS)
Tadaki, Kohtaro; Tsujii, Shigeo
The linear Piece In Hand (PH, for short) matrix method with random variables was proposed in our former work. It is a general prescription which can be applied to any type of multivariate public-key cryptosystem for the purpose of enhancing its security. Actually, we showed, in an experimental manner, that the linear PH matrix method with random variables can certainly enhance the security of HFE against the Gröbner basis attack, where HFE is one of the major variants of multivariate public-key cryptosystems. In 1998 Patarin, Goubin, and Courtois introduced the plus method as a general prescription which aims to enhance the security of any given MPKC, just like the linear PH matrix method with random variables. In this paper we prove the equivalence between the plus method and the primitive linear PH matrix method, which was introduced in our previous work to explain the notion of the PH matrix method in general in an illustrative manner and not for practical use in enhancing the security of any given MPKC. Based on this equivalence, we show that the linear PH matrix method with random variables has a substantial advantage over the plus method with respect to the security enhancement. In the linear PH matrix method with random variables, three matrices, including the PH matrix, play a central role in the secret key and public key. In this paper, we clarify how to generate these matrices and thus present two probabilistic polynomial-time algorithms to generate them. In particular, the second one has a concise form, and is obtained as a byproduct of the proof of the equivalence between the plus method and the primitive linear PH matrix method.
Potential Occupational Exposures and Health Risks Associated with Biomass-Based Power Generation.
Rohr, Annette C; Campleman, Sharan L; Long, Christopher M; Peterson, Michael K; Weatherstone, Susan; Quick, Will; Lewis, Ari
2015-07-22
Biomass is increasingly being used for power generation; however, assessment of potential occupational health and safety (OH&S) concerns related to usage of biomass fuels in combustion-based generation remains limited. We reviewed the available literature on known and potential OH&S issues associated with biomass-based fuel usage for electricity generation at the utility scale. We considered three potential exposure scenarios: pre-combustion exposure to material associated with the fuel, exposure to combustion products, and post-combustion exposure to ash and residues. Testing of dust, fungal and bacterial levels at two power stations was also undertaken. Results indicated that dust concentrations within biomass plants can be extremely variable, with peak levels in some areas exceeding occupational exposure limits for wood dust and general inhalable dust. Fungal spore types, identified as common environmental species, were higher than in outdoor air. Our review suggests that pre-combustion risks, including bioaerosols and biogenic organics, should be considered further. Combustion and post-combustion risks appear similar to current fossil-based combustion. In light of limited available information, additional studies at power plants utilizing a variety of technologies and biomass fuels are recommended.
Fault Diagnosis for Rolling Bearings under Variable Conditions Based on Visual Cognition
Cheng, Yujie; Zhou, Bo; Lu, Chen; Yang, Chao
2017-01-01
Fault diagnosis for rolling bearings has attracted increasing attention in recent years. However, few studies have focused on fault diagnosis for rolling bearings under variable conditions. This paper introduces a fault diagnosis method for rolling bearings under variable conditions based on visual cognition. The proposed method includes the following steps. First, the vibration signal data are transformed into a recurrence plot (RP), which is a two-dimensional image. Second, inspired by the visual invariance characteristic of the human visual system (HVS), we utilize speeded-up robust features (SURF) to extract fault features from the two-dimensional RP and generate a 64-dimensional feature vector, which is invariant to image translation, rotation, scaling variation, etc. Third, based on the manifold perception characteristic of the HVS, isometric mapping, a manifold learning method that can reflect the intrinsic manifold embedded in the high-dimensional space, is employed to obtain a low-dimensional feature vector. Finally, a classical classification method, the support vector machine, is utilized to realize fault diagnosis. Verification data were collected from the Case Western Reserve University Bearing Data Center, and the experimental result indicates that the proposed fault diagnosis method based on visual cognition is highly effective for rolling bearings under variable conditions, thus providing a promising approach from the cognitive computing field. PMID:28772943
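As a minimal sketch of the first step above, a binary recurrence plot can be built directly from pairwise distances. This is a generic illustration rather than the authors' implementation; the one-dimensional signal and the threshold `eps` are illustrative (RP construction commonly uses delay-embedded state vectors rather than raw samples):

```python
import numpy as np

def recurrence_plot(signal, eps):
    """Binary recurrence plot: R[i, j] = 1 where |x_i - x_j| <= eps."""
    x = np.asarray(signal, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])   # pairwise distances
    return (dist <= eps).astype(np.uint8)

# Toy vibration-like signal
t = np.linspace(0.0, 1.0, 200)
sig = np.sin(2 * np.pi * 25 * t)
rp = recurrence_plot(sig, eps=0.1)
```

The resulting matrix is symmetric with a filled main diagonal; in the full pipeline it would be treated as a grayscale image and passed to the SURF feature extractor.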
NASA Technical Reports Server (NTRS)
Sullivan, Sylvia C.; Betancourt, Ricardo Morales; Barahona, Donifan; Nenes, Athanasios
2016-01-01
Along with minimizing parameter uncertainty, understanding the cause of temporal and spatial variability of the nucleated ice crystal number, Ni, is key to improving the representation of cirrus clouds in climate models. To this end, sensitivities of Ni to input variables like aerosol number and diameter provide valuable information about nucleation regime and efficiency for a given model formulation. Here we use the adjoint of a cirrus formation parameterization (Barahona and Nenes, 2009b) to understand Ni variability for various ice-nucleating particle (INP) spectra. Inputs are generated with the Community Atmosphere Model version 5, and simulations are done with a theoretically derived spectrum, an empirical lab-based spectrum, and two field-based empirical spectra that differ in the nucleation threshold for black carbon particles and in the active site density for dust. The magnitude and sign of Ni sensitivity to insoluble aerosol number can be directly linked to nucleation regime and efficiency of various INP. The lab-based spectrum calculates much higher INP efficiencies than the field-based ones, which reveals a disparity in aerosol surface properties. Ni sensitivity to temperature tends to be low, due to the compensating effects of temperature on INP spectrum parameters; this low temperature sensitivity regime has been experimentally reported before but never deconstructed as done here.
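An adjoint model gives exact gradients; a central finite difference conveys the same notion of a sensitivity of Ni to aerosol number in a few lines. The toy spectrum `n_ice` below is entirely hypothetical (power-law activation with an arbitrary diameter factor), not the Barahona and Nenes parameterization:

```python
import numpy as np

def n_ice(n_aerosol, diameter, a=0.05, b=0.8):
    """Toy, hypothetical nucleation spectrum: Ni = a * Na**b * exp(-1/D)."""
    return a * n_aerosol**b * np.exp(-1.0 / diameter)

def sensitivity(f, x, dx=1e-4, **kw):
    """Central finite-difference dF/dx, a cheap stand-in for an adjoint gradient."""
    return (f(x + dx, **kw) - f(x - dx, **kw)) / (2.0 * dx)

s = sensitivity(n_ice, 100.0, diameter=0.5)   # dNi/dNa at Na = 100
```

For this smooth toy function the finite difference matches the analytic derivative a*b*Na**(b-1)*exp(-1/D) to high precision; the adjoint approach scales this idea to many inputs at once.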
Zanella, Fabian; Sheikh, Farah
2016-01-01
The generation of human induced pluripotent stem cell (hiPSC)-derived cardiomyocytes has been of utmost interest for the study of cardiac development, cardiac disease modeling, and evaluation of cardiotoxic effects of novel candidate drugs. Several protocols have been developed to guide human stem cells toward the cardiogenic path. Pioneering work used serum to promote cardiogenesis; however, low cardiogenic throughputs, lack of chemical definition, and batch-to-batch variability of serum lots constituted a considerable impediment to the implementation of those protocols to large-scale cell biology. Further work focused on the manipulation of pathways that mouse genetics indicated to be fundamental in cardiac development to promote cardiac differentiation in stem cells. Although extremely elegant, those serum-free protocols involved the use of human recombinant cytokines that tend to be quite costly and which can also be variable between lots. The latest generation of cardiogenic protocols aimed for a more cost-effective and reproducible definition of the conditions driving cardiac differentiation, using small molecules to manipulate cardiogenic pathways overriding the need for cytokines. This chapter details methods based on currently available cardiac differentiation protocols for the generation and characterization of robust numbers of hiPSC-derived cardiomyocytes under chemically defined conditions.
Discontinuity of the annuity curves. III. Two types of vital variability in Drosophila melanogaster.
Bychkovskaia, I B; Mylnikov, S V; Mozhaev, G A
2016-01-01
We confirm the five-phase structure of Drosophila annuity curves established earlier. The annuity curves were composed of a stable five-phase component and a variable one, the variable component being due to differences in phase durations. Both the stable and the variable components were apparent for 60 generations. A stochastic component was described as well. Viability variance, which characterizes the «reaction norm», was likewise apparent for all generations. Thus, both types of variability seem to be inherited.
Optimal Output of Distributed Generation Based On Complex Power Increment
NASA Astrophysics Data System (ADS)
Wu, D.; Bao, H.
2017-12-01
In order to meet the growing demand for electricity and improve the cleanliness of power generation, new energy generation, represented by wind power, photovoltaic power, etc., has been widely adopted. New energy generation is connected to the distribution network in the form of distributed generation and consumed by local loads. However, as the scale of distributed generation connected to the network increases, the optimization of its power output becomes more and more prominent and needs further study. Classical optimization methods often use the extended sensitivity method to obtain the relationship between different power generators, but ignoring the coupling parameters between nodes makes the results inaccurate; heuristic algorithms also have defects such as slow calculation speed and uncertain outcomes. This article proposes a method called complex power increment; the essence of this method is the analysis of the power grid under steady power flow. After analyzing the results, we can obtain the complex scaling function equation between the power supplies. The coefficients of the equation are based on the impedance parameters of the network, so the description of the relation of the variables to the coefficients is more precise. Thus, the method can accurately describe the power increment relationship and can obtain the power optimization scheme more accurately and quickly than the extended sensitivity method and heuristic methods.
Continuous operation of four-state continuous-variable quantum key distribution system
NASA Astrophysics Data System (ADS)
Matsubara, Takuto; Ono, Motoharu; Oguri, Yusuke; Ichikawa, Tsubasa; Hirano, Takuya; Kasai, Kenta; Matsumoto, Ryutaroh; Tsurumaru, Toyohiro
2016-10-01
We report on the development of a continuous-variable quantum key distribution (CV-QKD) system based on discrete quadrature amplitude modulation (QAM) and homodyne detection of coherent states of light. We use a pulsed light source whose wavelength is 1550 nm and whose repetition rate is 10 MHz. The CV-QKD system can continuously generate a secret key that is secure against the entangling cloner attack. The key generation rate is 50 kbps when the quantum channel is a 10 km optical fiber. The CV-QKD system we have developed utilizes the four-state and post-selection protocol [T. Hirano, et al., Phys. Rev. A 68, 042331 (2003)]; Alice randomly sends one of four states {|±α⟩, |±iα⟩}, and Bob randomly performs x- or p-measurement by homodyne detection. A commercially available balanced receiver is used to realize shot-noise-limited pulsed homodyne detection. GPU cards are used to accelerate the software-based post-processing. We use a non-binary LDPC code for error correction (reverse reconciliation) and Toeplitz matrix multiplication for privacy amplification.
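A toy Monte Carlo sketch of the four-state protocol's modulation, measurement, and post-selection steps. The amplitude α, the unit shot-noise variance, and the post-selection threshold are illustrative choices, not the system's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(7)
alpha = 1.0            # coherent-state amplitude (illustrative)
n_pulses = 100_000

# Alice: one of {|±α⟩, |±iα⟩}, encoded here as quadrature means
# |±α⟩  -> mean x = ±√2·α, mean p = 0;  |±iα⟩ -> mean p = ±√2·α.
choice = rng.integers(0, 4, n_pulses)
mean_x = np.where(choice == 0, np.sqrt(2) * alpha,
         np.where(choice == 1, -np.sqrt(2) * alpha, 0.0))
mean_p = np.where(choice == 2, np.sqrt(2) * alpha,
         np.where(choice == 3, -np.sqrt(2) * alpha, 0.0))

# Bob: random x or p homodyne measurement; shot noise has unit variance here.
basis = rng.integers(0, 2, n_pulses)           # 0 = x quadrature, 1 = p
outcome = np.where(basis == 0,
                   rng.normal(mean_x, 1.0),
                   rng.normal(mean_p, 1.0))

# Post-selection: keep only outcomes well away from zero.
kept = np.abs(outcome) > 1.0
```

Pulses whose measured quadrature carries no signal (basis mismatched to the encoding) survive post-selection far less often, which is what makes the retained outcomes informative about Alice's bit.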
Concurrent generation of multivariate mixed data with variables of dissimilar types.
Amatya, Anup; Demirtas, Hakan
2016-01-01
Data sets originating from a wide range of research studies are composed of multiple variables that are correlated and of dissimilar types, primarily count, binary/ordinal, and continuous attributes. The present paper builds on previous work on multivariate data generation and develops a framework for generating multivariate mixed data with a pre-specified correlation matrix. The generated data consist of components that are marginally count, binary, ordinal, and continuous, where the count and continuous variables follow the generalized Poisson and normal distributions, respectively. The use of the generalized Poisson distribution provides a flexible mechanism that allows the under- and over-dispersed count variables generally encountered in practice. A step-by-step algorithm is provided and its performance is evaluated using simulated and real-data scenarios.
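The copula-style idea behind such generators can be sketched in three steps: draw correlated normals, map them to uniforms, and push each margin through its inverse CDF. This is a simplified illustration, not the paper's algorithm; an ordinary Poisson stands in for the generalized Poisson, and the latent correlation matrix is not adjusted to hit the output correlations exactly, as the full framework does:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Latent correlation for the normals (illustrative target)
R = np.array([[1.0, 0.4, 0.3],
              [0.4, 1.0, 0.2],
              [0.3, 0.2, 1.0]])
z = rng.multivariate_normal(np.zeros(3), R, size=n)

# Normal CDF -> correlated uniform marginals (Gaussian copula)
u = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))

# Count margin via the inverse CDF of Poisson(mu = 4)
mu, kmax = 4.0, 60
pmf = np.array([math.exp(-mu) * mu**k / math.factorial(k) for k in range(kmax)])
cdf = np.cumsum(pmf)
count = np.searchsorted(cdf, u[:, 0])

binary = (u[:, 1] > 0.6).astype(int)     # Bernoulli(p = 0.4)
continuous = 10.0 + 2.0 * z[:, 2]        # Normal(10, 2) margin
```

All three margins come out correct while inheriting positive cross-correlations from the latent normals; the paper's contribution is, among other things, the mechanism for making those output correlations match a pre-specified matrix.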
Spectral Generation from the Ames Mars GCM for the Study of Martian Clouds
NASA Astrophysics Data System (ADS)
Klassen, David R.; Kahre, Melinda A.; Wolff, Michael J.; Haberle, Robert; Hollingsworth, Jeffery L.
2017-10-01
Studies of martian clouds come from two distinct groups of researchers: those modeling the martian system from first principles and those observing Mars from ground-based and orbital platforms. The model view begins with global circulation models (GCMs) or mesoscale models to track a multitude of state variables over a prescribed set of spatial and temporal resolutions. The state variables can then be processed into distinct maps of derived product variables, such as integrated optical depth of aerosol (e.g., water ice cloud, dust) or column-integrated water vapor, for comparison to observational results. The observer view begins, typically, with spectral images or imaging spectra, calibrated to some form of absolute units and then run through some form of radiative transfer model to also produce distinct maps of derived product variables. Both groups of researchers work to adjust model parameters and assumptions until some level of agreement in derived product variables is achieved. While this system appears to work well, it is in some sense only an implicit confirmation of the model assumptions that underlie the work on both sides. We have begun a project of testing the NASA Ames Mars GCM and key aerosol model assumptions more directly by taking the model output and creating synthetic TES spectra from it for comparison to actual raw-reduced TES spectra. We will present some preliminary generated GCM spectra and TES comparisons.
Factors determining waste generation in Spanish towns and cities.
Prades, Miriam; Gallardo, Antonio; Ibàñez, Maria Victoria
2015-01-01
This paper analyzes the generation and composition of municipal solid waste in Spanish towns and cities with more than 5000 inhabitants, which altogether account for 87% of the Spanish population. To do so, the total composition and generation of municipal solid waste fractions were obtained from 135 towns and cities. Homogeneity tests revealed heterogeneity in the proportions of municipal solid waste fractions from one city to another. Statistical analyses identified significant differences in the generation of glass in cities of different sizes and in the generation of all fractions depending on the hydrographic area. Finally, linear regression models and residuals analysis were applied to analyze the effect of different demographic, geographic, and socioeconomic variables on the generation of waste fractions. The conclusions show that more densely populated towns, certain hydrographic areas, and cities with over 50,000 inhabitants have higher waste generation rates, while certain socioeconomic variables (people per car) decrease that generation. Other socioeconomic variables (foreigners and unemployment) show a positive and a null influence on waste generation, respectively.
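The kind of linear regression and residuals analysis described above can be sketched on synthetic data. All variable names, coefficients, and the noise level below are illustrative, not the study's estimates; only the sample size of 135 towns echoes the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 135  # number of towns, as in the study (the data below are synthetic)

# Synthetic explanatory variables: population density, people per car
density = rng.uniform(50, 5000, n)
people_per_car = rng.uniform(1.5, 4.0, n)

# Synthetic response: waste generation rate (kg/person/day)
waste = 1.0 + 2e-4 * density - 0.1 * people_per_car + rng.normal(0, 0.05, n)

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), density, people_per_car])
beta, *_ = np.linalg.lstsq(X, waste, rcond=None)
residuals = waste - X @ beta
```

The fitted signs reproduce the pattern reported in the abstract for this synthetic setup: density raises the generation rate while people per car lowers it, and the residuals are available for the kind of residuals analysis the authors perform.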
Zhang, Wei-Zhuo; Xiong, Xue-Mei; Zhang, Xiu-Jie; Wan, Shi-Ming; Guan, Ning-Nan; Nie, Chun-Hong; Zhao, Bo-Wen; Hsiao, Chung-Der; Wang, Wei-Min; Gao, Ze-Xia
2016-01-01
Hybridization plays an important role in fish breeding. Bream fishes contribute substantially to aquaculture in China due to their economically valuable characteristics, and the present study included five bream species: Megalobrama amblycephala, Megalobrama skolkovii, Megalobrama pellegrini, Megalobrama terminalis, and Parabramis pekinensis. As maternal inheritance of the mitochondrial genome (mitogenome) involves species-specific regulation, we aimed to investigate how the inheritance of the mitogenome is affected by hybridization in these fish species. With the complete mitogenomes of 7 hybrid groups of bream species reported here for the first time, a comparative analysis of 17 mitogenomes was conducted, including representatives of these 5 bream species, 6 first-generation hybrids, and 6 second-generation hybrids. The results showed that these 17 mitogenomes shared the same gene arrangement and had similar gene size and base composition. According to the phylogenetic analyses, all mitogenomes of the hybrids were consistent with maternal inheritance. However, a certain number of variable sites were detected in all F1 hybrid groups compared to their female parents, especially in the group of M. terminalis (♀) × M. amblycephala (♂) (MT×MA), with a total of 86 variable sites between MT×MA and its female parent. Among the mitogenome genes, the protein-coding gene nd5 displayed the highest variability. The number of variable sites was found to be related to the phylogenetic relationship of the parents: the closer they are, the fewer variable sites their hybrids have. The second-generation hybrids showed less mitogenome variation than the first-generation hybrids. The non-synonymous to synonymous substitution rate ratios (dN/dS) were calculated between all the hybrids and their own female parents, and the results indicated that most protein-coding genes (PCGs) were under negative selection. PMID:27391325
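Counting variable sites between a hybrid mitogenome and its maternal sequence reduces to comparing aligned positions. A minimal sketch, using short hypothetical fragments rather than the real mitogenomes:

```python
def variable_sites(seq_a, seq_b):
    """Count positions that differ between two aligned sequences."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned"
    return sum(1 for a, b in zip(seq_a, seq_b) if a != b)

# Toy aligned fragments (hypothetical, not the real sequences)
mother = "ATGACCCTAAGTACC"
hybrid = "ATGACTCTAAGCACC"
n_var = variable_sites(mother, hybrid)   # 2 differing positions
```

Applied gene by gene across the 17 mitogenomes, this kind of tally is what identifies nd5 as the most variable protein-coding gene.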
CMOS-based Stochastically Spiking Neural Network for Optimization under Uncertainties
2017-03-01
inverse tangent characteristics at varying input voltage (VIN) [Fig. 3]; thereby it is suitable for kernel function implementation. By varying bias... cost function/constraint variables are generated based on an inverse transform of the CDF. In Fig. 5, F⁻¹(u) for a uniformly distributed random number u ∈ [0, 1]... extracts random samples of x varying with the CDF F(x). In Fig. 6, we present a successive approximation (SA) circuit to evaluate the inverse
Magnetic-field-dependent shear modulus of a magnetorheological elastomer based on natural rubber
NASA Astrophysics Data System (ADS)
Yang, In-Hyung; Yoon, Ji-Hyun; Jeong, Jae-Eun; Jeong, Un-Chang; Kim, Jin-Su; Chung, Kyung Ho; Oh, Jae-Eung
2013-01-01
A magnetorheological elastomer (MRE) is a smart material that has a reversible and variable modulus in a magnetic field. Natural rubber, which has better physical properties than silicone matrices, was used as the matrix in the fabrication of the MREs used in this study. Carbonyl iron powder (CIP), which has a rapid magnetic reaction, was selected as the magnetic material to generate the magnetic-field-dependent modulus in the MREs. The MRE specimens were cured in an anisotropic mold, in which a uniaxial magnetic field induced by permanent magnets controlled the orientation of the CIP, and the shear modulus of the MREs was evaluated under a magnetic field induced by a magnetic flux generator (MFG). Because determining the magnetic-field-dependent shear modulus of the MREs with a conventional evaluation system was difficult, an evaluation system based on single-degree-of-freedom vibration and electromagnetics was designed, incorporating an MFG, a device that generates a magnetic field via a variable induced current. An electromagnetic finite element method (FEM) analysis and design of experiments (DoE) techniques were employed to optimize the magnetic flux density generated by the MFG. The optimized system was verified over the operating range by determining the magnetic flux density generated by the MFG and using a magnetic circuit analysis to check for magnetic saturation. A variation in the shear modulus was observed with increasing CIP volume fraction and induced current. The experimental results revealed that the maximum variation in the shear modulus was 76.3% for 40 vol% CIP at an induced current of 4 A. With these results, the appropriate CIP volume fraction, induced current, and design procedure for the MFG can be proposed as guidelines for applications of MREs based on natural rubber.
Variable-speed Generators with Flux Weakening
NASA Technical Reports Server (NTRS)
Fardoun, A. A.; Fuchs, E. F.; Carlin, P. W.
1993-01-01
A cost-competitive, permanent-magnet 20 kW generator is designed such that the following criteria are satisfied: an (over)load capability of at least 30 kW over the entire speed range of 60-120 rpm, a generator weight of about 550 lbs with a maximum radial stator flux density of 0.82 T at low speed, unity power factor operation, acceptably small synchronous reactances, and operation without a gear box. To justify this final design, four different generator designs are investigated: the first two designs are studied to obtain a speed range from 20 to 200 rpm employing rotor field weakening, and the latter two are investigated to obtain a maximum speed range of 40 to 160 rpm based on field weakening via the stator excitation. The generator reactances and induced voltages are computed using finite element/difference solutions. Generator losses and efficiencies are presented for all four designs at the rated temperature of Tr = 120 °C.
NASA Technical Reports Server (NTRS)
Ramakumar, R.; Bahrami, K.
1981-01-01
This paper discusses the application of field modulated generator systems (FMGS) to dispersed solar-thermal-electric generation from a parabolic dish field with electric transport. Each solar generation unit is rated at 15 kWe and the power generated by an array of such units is electrically collected for insertion into an existing utility grid. Such an approach appears to be most suitable when the heat engine rotational speeds are high (greater than 6000 r/min) and, in particular, if they are operated in the variable speed mode and if utility-grade a.c. is required for direct insertion into the grid without an intermediate electric energy storage and reconversion system. Predictions of overall efficiencies based on conservative efficiency figures for the FMGS are in the range of 25 per cent and should be encouraging to those involved in the development of cost-effective dispersed solar thermal power systems.
Neural network based control of Doubly Fed Induction Generator in wind power generation
NASA Astrophysics Data System (ADS)
Barbade, Swati A.; Kasliwal, Prabha
2012-07-01
To complement other types of pollution-free generation, wind energy is a viable option. Previously, wind turbines were operated at constant speed. The evolution of technology in the wind systems industry led to the development of a generation of variable-speed wind turbines that present many advantages compared to fixed-speed wind turbines. In this paper the phasor model of the DFIG is used. This paper presents a study of a doubly fed induction generator driven by a wind turbine connected to the grid and controlled by an artificial neural network (ANN) controller. The behaviour of the system is shown first with PI control and then as controlled by the ANN. The effectiveness of the artificial neural network controller is compared to that of a PI controller. The SIMULINK/MATLAB simulation of the doubly fed induction generator and the corresponding results and waveforms are presented.
Users guide for the Water Resources Division bibliographic retrieval and report generation system
Tamberg, Nora
1983-01-01
The WRDBIB Retrieval and Report-generation system has been developed by applying Multitrieve (CSD 1980, Reston) software to bibliographic data files. The WRDBIB data base includes some 9,000 records containing bibliographic citations and descriptors of WRD reports released for publication during 1968-1982. The data base is resident in the Reston Multics computer and may be accessed by registered Multics users in the field. The WRDBIB Users Guide provides detailed procedures on how to run retrieval programs using WRDBIB library files and how to prepare custom bibliographic reports and author indexes. Users may search the WRDBIB data base on the following variable fields as described in the Data Dictionary: authors, organizational source, title, citation, publication year, descriptors, and the WRSIC (accession) number. The Users Guide provides ample examples of program runs illustrating various retrieval and report generation aspects. Appendices include Multics access and file manipulation procedures; a 'Glossary of Selected Terms'; and a complete 'Retrieval Session' with step-by-step outlines. (USGS)
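A modern sketch of the field-based retrieval the guide describes, over in-memory records keyed by the Data Dictionary's variable fields. The sample records and the `search` helper are illustrative, not Multitrieve:

```python
# Each record mirrors the Data Dictionary's variable fields (toy data).
records = [
    {"authors": "Smith, J.", "title": "Ground-water hydrology study",
     "year": 1975, "descriptors": ["ground water", "aquifers"]},
    {"authors": "Jones, A.", "title": "Sediment transport survey",
     "year": 1980, "descriptors": ["sediment", "rivers"]},
]

def search(recs, field, value):
    """Return records whose field matches or contains the requested value."""
    hits = []
    for r in recs:
        v = r.get(field)
        if isinstance(v, list):
            if value in v:                     # descriptor lists: exact term
                hits.append(r)
        elif v is not None and str(value).lower() in str(v).lower():
            hits.append(r)                     # scalar fields: substring match
    return hits

hits = search(records, "descriptors", "sediment")
```

Custom reports and author indexes would then be simple formatting passes over the hit lists returned by such queries.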
Eddy energy sources and mesoscale eddies in the Sea of Okhotsk
NASA Astrophysics Data System (ADS)
Stepanov, Dmitry V.; Diansky, Nikolay A.; Fomin, Vladimir V.
2018-05-01
Based on eddy-permitting ocean circulation model outputs, the mesoscale variability is studied in the Sea of Okhotsk. We confirmed that the simulated circulation reproduces the main features of the general circulation in the Sea of Okhotsk. In particular, it reproduced a complex structure of the East-Sakhalin current and the pronounced seasonal variability of this current. We established that the maximum of mean kinetic energy was associated with the East-Sakhalin Current. In order to uncover causes and mechanisms of the mesoscale variability, we studied the budget of eddy kinetic energy (EKE) in the Sea of Okhotsk. Spatial distribution of the EKE showed that intensive mesoscale variability occurs along the western boundary of the Sea of Okhotsk, where the East-Sakhalin Current extends. We revealed a pronounced seasonal variability of EKE with its maximum intensity in winter and its minimum intensity in summer. Analysis of EKE sources and rates of energy conversion revealed a leading role of time-varying (turbulent) wind stress in the generation of mesoscale variability along the western boundary of the Sea of Okhotsk in winter and spring. We established that a contribution of baroclinic instability predominates over that of barotropic instability in the generation of mesoscale variability along the western boundary of the Sea of Okhotsk. To demonstrate the mechanism of baroclinic instability, the simulated circulation was considered along the western boundary of the Sea of Okhotsk from January to April 2005. In April, the mesoscale anticyclonic eddies are observed along the western boundary of the Sea of Okhotsk. The role of the sea ice cover in the intensification of the mesoscale variability in the Sea of Okhotsk was discussed.
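The EKE diagnostic itself is a simple decomposition of velocity into a time-mean part and a time-varying (eddy) part. A sketch on synthetic velocity fields; the array shapes and amplitudes are illustrative, not the model output:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic daily velocity fields with shape (time, lat, lon), in m/s
u = 0.3 + 0.1 * rng.standard_normal((365, 20, 30))
v = 0.05 * rng.standard_normal((365, 20, 30))

# Split each component into a time mean and an eddy (time-varying) part
u_mean, v_mean = u.mean(axis=0), v.mean(axis=0)
u_eddy, v_eddy = u - u_mean, v - v_mean

mke = 0.5 * (u_mean**2 + v_mean**2)               # mean kinetic energy
eke = 0.5 * (u_eddy**2 + v_eddy**2).mean(axis=0)  # eddy kinetic energy
```

Maps like `eke` are what reveal the concentration of mesoscale variability along the western boundary, and the same mean/eddy split underlies the barotropic and baroclinic conversion terms in the EKE budget.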
The intention to continue nursing: work variables affecting three nurse generations in Australia.
Shacklock, Kate; Brunetto, Yvonne
2012-01-01
The aims of the study were to examine how seven variables impacted upon the intention of hospital nurses to continue working as nurses and to investigate whether there are generational differences in these impacts. There is a critical shortage of trained nurses working as nurses in Australia, as in many other Organisation for Economic Co-operation and Development member countries. The retention of nurses has been examined from the traditional management perspectives; however, this paper presents a different approach (Meaning of Working theory). A self-report survey of 900 nurses employed across four states of Australia was completed in 2008. The sample was hospital nurses in Australia from three generational cohorts - Baby Boomers (born in Australia between 1946 and 1964), Generation X (1965-1979) and Generation Y (1980-2000). Six variables were found to influence the combined nurses' intentions to continue working as nurses: work-family conflict, perceptions of autonomy, attachment to work, importance of working to the individual, supervisor-subordinate relationship and interpersonal relationships at work. There were differences in the variables affecting the three generations, but attachment to work was the only common variable across all generations, affecting GenYs the strongest. The shortage of nurses is conceptualized differently in this paper to assist in finding solutions. However, the results varied for the three generations, suggesting the need to tailor different retention strategies to each age group. Implications for management and policy planning are discussed. © 2011 Blackwell Publishing Ltd.
The emergence of different tail exponents in the distributions of firm size variables
NASA Astrophysics Data System (ADS)
Ishikawa, Atushi; Fujimoto, Shouji; Watanabe, Tsutomu; Mizuno, Takayuki
2013-05-01
We discuss a mechanism through which inversion symmetry (i.e., invariance of a joint probability density function under the exchange of variables) and Gibrat’s law generate power-law distributions with different tail exponents. Using a dataset of firm size variables, that is, tangible fixed assets K, the number of workers L, and sales Y, we confirm that these variables have power-law tails with different exponents, and that inversion symmetry and Gibrat’s law hold. Based on these findings, we argue that there exists a plane in the three dimensional space (logK,logL,logY), with respect to which the joint probability density function for the three variables is invariant under the exchange of variables. We provide empirical evidence suggesting that this plane fits the data well, and argue that the plane can be interpreted as the Cobb-Douglas production function, which has been extensively used in various areas of economics since it was first introduced almost a century ago.
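Tail exponents such as those found for K, L, and Y are commonly estimated with the Hill estimator. A sketch on a synthetic Pareto sample; the true exponent, sample size, and cutoff k are illustrative, not the paper's firm-size data:

```python
import numpy as np

rng = np.random.default_rng(5)

# Pareto sample with tail exponent mu = 1.5, i.e. P(X > x) ~ x**(-mu)
mu_true = 1.5
x = rng.pareto(mu_true, 50_000) + 1.0

def hill_estimator(sample, k):
    """Hill estimate of the tail exponent from the k largest observations."""
    xs = np.sort(sample)
    threshold = xs[-k - 1]                 # (k+1)-th largest value
    return k / np.sum(np.log(xs[-k:] / threshold))

mu_hat = hill_estimator(x, k=2_000)
```

Running the same estimator on each firm-size variable separately is one standard way to verify that K, L, and Y indeed carry different tail exponents.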
NASA Astrophysics Data System (ADS)
Akbardin, J.; Parikesit, D.; Riyanto, B.; TMulyono, A.
2018-05-01
Zones producing land-based fishery commodities have limited distribution capability because of the availability and condition of infrastructure. High demand for fishery commodities leads to distribution over inefficiently long distances. Developing the gravity theory with a limit on movement generation from the production zone can increase the interaction between zones through effective and efficient, shorter distribution distances. A regression analysis with multiple variables describing transportation infrastructure condition, based on service level and quantitative capacity, is used to estimate the 'mass' of movement generation that is formed. The resulting movement distribution (Tid) model has the equation Tid = 27.04 - 0.49 tid, based on a power-model barrier function with calibration value β = 0.0496. Developing the movement generation 'mass' boundary at the production zone will shorten distribution distances effectively; shorter distribution distances will increase the accessibility between zones, allowing them to interact according to the magnitude of the movement generation 'mass'.
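A sketch of how a power-model barrier (deterrence) function enters a gravity-type trip distribution. Only β = 0.0496 comes from the abstract; the zone masses, distances, and the normalization convention are illustrative assumptions:

```python
import numpy as np

beta = 0.0496                      # calibration value reported in the study

def barrier(distance_km):
    """Power-model deterrence function f(d) = d**(-beta)."""
    return distance_km ** -beta

def trip_distribution(origins, destinations, dist):
    """Gravity-model distribution T_ij ~ O_i * D_j * f(d_ij),
    scaled so the total matches the total origin 'mass'."""
    raw = np.outer(origins, destinations) * barrier(dist)
    return raw * origins.sum() / raw.sum()

O = np.array([120.0, 80.0])        # movement generation 'mass' per zone (toy)
D = np.array([60.0, 140.0])        # attraction 'mass' per zone (toy)
d = np.array([[5.0, 40.0],
              [25.0, 10.0]])       # inter-zone distances in km (toy)
T = trip_distribution(O, D, d)
```

With β this small the deterrence is gentle, so the distribution is dominated by the zone masses; a larger β would concentrate trips on the short-distance pairs.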
Remote Entanglement by Coherent Multiplication of Concurrent Quantum Signals
NASA Astrophysics Data System (ADS)
Roy, Ananda; Jiang, Liang; Stone, A. Douglas; Devoret, Michel
2015-10-01
Concurrent remote entanglement of distant, noninteracting quantum entities is a crucial function for quantum information processing. In contrast with existing protocols, which employ the addition of signals to generate entanglement between two remote qubits, the continuous-variable protocol we present is based on the multiplication of signals. This protocol can be straightforwardly implemented by a novel Josephson junction mixing circuit. Our scheme would be able to generate provable entanglement even in the presence of practical imperfections: finite quantum efficiency of detectors and undesired photon loss in current state-of-the-art devices.
Technique for ship/wake detection
Roskovensky, John K [Albuquerque, NM
2012-05-01
An automated ship detection technique includes accessing data associated with an image of a portion of Earth. The data includes reflectance values. A first portion of pixels within the image are masked with a cloud and land mask based on spectral flatness of the reflectance values associated with the pixels. A given pixel selected from the first portion of pixels is unmasked when a threshold number of localized pixels surrounding the given pixel are not masked by the cloud and land mask. A spatial variability image is generated based on spatial derivatives of the reflectance values of the pixels which remain unmasked by the cloud and land mask. The spatial variability image is thresholded to identify one or more regions within the image as possible ship detection regions.
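The masking and spatial-variability steps can be sketched with image gradients. This is a generic illustration, not the patented method's exact formulation; the threshold and the toy scene are illustrative:

```python
import numpy as np

def ship_candidates(reflectance, mask, var_thresh):
    """Spatial-variability image from reflectance gradients, thresholded
    to flag possible ship/wake pixels. `mask` is True where cloud/land."""
    gy, gx = np.gradient(reflectance)
    variability = np.hypot(gx, gy)     # gradient magnitude per pixel
    variability[mask] = 0.0            # ignore masked pixels
    return variability > var_thresh

# Toy scene: flat ocean with one bright 'ship' pixel and a masked corner
img = np.full((32, 32), 0.05)
img[16, 16] = 0.60
mask = np.zeros((32, 32), dtype=bool)
mask[:8, :8] = True
hits = ship_candidates(img, mask, var_thresh=0.1)
```

On this toy scene the four pixels bordering the bright target are flagged (their central-difference gradients straddle the jump), while the spectrally flat ocean and the masked corner produce no detections.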
Site-directed nucleases: a paradigm shift in predictable, knowledge-based plant breeding.
Podevin, Nancy; Davies, Howard V; Hartung, Frank; Nogué, Fabien; Casacuberta, Josep M
2013-06-01
Conventional plant breeding exploits existing genetic variability and introduces new variability by mutagenesis. This has proven highly successful in securing food supplies for an ever-growing human population. The use of genetically modified plants is a complementary approach but all plant breeding techniques have limitations. Here, we discuss how the recent evolution of targeted mutagenesis and DNA insertion techniques based on tailor-made site-directed nucleases (SDNs) provides opportunities to overcome such limitations. Plant breeding companies are exploiting SDNs to develop a new generation of crops with new and improved traits. Nevertheless, some technical limitations as well as significant uncertainties on the regulatory status of SDNs may challenge their use for commercial plant breeding. Copyright © 2013 Elsevier Ltd. All rights reserved.
Takahashi, Yoshiaki; Seki, Hirokazu
2009-01-01
This paper proposes a novel regenerative braking control system of electric wheelchairs for senior citizens. The electric powered wheelchair, which generates its driving force by electric motors according to the human operation, is expected to be widely used as a mobility support system for elderly people. This study focuses on braking control to realize safe and smooth stopping motion using a regenerative braking control technique based on a fuzzy algorithm. Ride quality improvement and energy recycling can be expected from the proposed control system, which combines stopping distance estimation with variable frequency control of the step-up/down chopper type of capacitor regenerative circuit. Driving experiments confirm the effectiveness of the proposed control system.
Causing Factors for Extreme Precipitation in the Western Saudi-Arabian Peninsula
NASA Astrophysics Data System (ADS)
Alharbi, M. M.; Leckebusch, G. C.
2015-12-01
On the western coast of Saudi Arabia the climate is in general semi-arid, but extreme precipitation events occur on a regular basis: e.g., on 26 November 2009, 122 people were killed and 350 reported missing in Jeddah after more than 90 mm of rain fell in just four hours. Our investigation will a) analyse major drivers of the generation of extremes and b) investigate the major modes of variability responsible for the occurrence of extremes. Firstly, we present a systematic analysis of station-based observations of the most relevant extreme events (1985-2013) for 5 stations (Gizan, Makkah, Jeddah, Yenbo and Wejh). Secondly, we investigate the responsible mechanisms on the synoptic to large scale that lead to the generation of extremes, and analyse factors in the temporal variability of extreme event occurrence. Extreme events for each station are identified in the wet season (Nov-Jan): 122 events show intensity above the respective 90th percentile. The most extreme events are systematically investigated with respect to the responsible forcing conditions, which we identify as: the influence of the Soudan Low, active Red Sea Trough situations established via interactions with mid-latitude tropospheric wave activity, low-pressure systems over the Mediterranean, the influence of the North Africa High, the Arabian Anticyclone, and the influence of the Indian monsoon trough. We investigate the role of dynamical forcing factors such as the subtropical jet (STJ) and upper-troposphere geopotential conditions, and their relation to smaller local low-pressure systems. By means of an empirical orthogonal function (EOF) analysis based on mean sea-level pressure (MSLP), we investigate the possibility of objectively quantifying the influence of existing major variability modes and their role in the generation of extreme precipitation events.
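The event-selection step (wet-season intensities above a station's 90th percentile) can be sketched as follows; the rainfall series and the nearest-rank percentile rule are illustrative stand-ins for the 1985-2013 station records:

```python
# Sketch of percentile-based extreme-event selection. The daily totals
# below are fabricated; a real analysis would use the station records.

def percentile(values, q):
    """Simple nearest-rank percentile (q in [0, 100])."""
    s = sorted(values)
    k = max(0, min(len(s) - 1, int(round(q / 100.0 * len(s))) - 1))
    return s[k]

daily_mm = [0, 0, 2, 1, 0, 5, 0, 3, 0, 90, 0, 1, 4, 0, 0, 2, 0, 0, 7, 122]
p90 = percentile(daily_mm, 90)
extremes = [x for x in daily_mm if x > p90]   # events above the 90th pct
```

With mostly dry days, the 90th percentile sits low and the two genuinely heavy days stand out as the "extreme events" of the series.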
A downscaling scheme for atmospheric variables to drive soil-vegetation-atmosphere transfer models
NASA Astrophysics Data System (ADS)
Schomburg, A.; Venema, V.; Lindau, R.; Ament, F.; Simmer, C.
2010-09-01
For driving soil-vegetation-atmosphere transfer models or hydrological models, high-resolution atmospheric forcing data are needed. For most applications the resolution of atmospheric model output is too coarse. To avoid biases due to non-linear processes, a downscaling system should predict the unresolved variability of the atmospheric forcing. For this purpose we derived a disaggregation system consisting of three steps: (1) a bi-quadratic spline interpolation of the low-resolution data, (2) a so-called `deterministic' part, based on statistical rules between high-resolution surface variables and the desired atmospheric near-surface variables, and (3) an autoregressive noise-generation step. The disaggregation system has been developed and tested on high-resolution model output (400 m horizontal grid spacing). A novel automatic search algorithm has been developed for deriving the deterministic downscaling rules of step 2. When applied to the atmospheric variables of the lowest layer of the atmospheric COSMO model, the disaggregation is able to adequately reconstruct the reference fields. Applying downscaling steps 1 and 2 decreases root mean square errors. Step 3 finally leads to a close match of the subgrid variability and temporal autocorrelation with the reference fields. The scheme can be applied to the output of atmospheric models, both for stand-alone offline simulations and in a fully coupled model system.
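Step (3), the autoregressive noise generator, can be sketched with a first-order autoregressive (AR(1)) process; the coefficient phi and the noise scale sigma below are placeholders, where the scheme described above would fit them to the 400 m reference output:

```python
# Minimal sketch of an autoregressive noise-generation step: add
# temporally autocorrelated subgrid noise to a downscaled field.
# phi and sigma are placeholder parameters, not fitted values.
import random

def ar1_noise(n, phi=0.8, sigma=1.0, seed=42):
    """AR(1) series: x[t] = phi * x[t-1] + eps, eps ~ N(0, sigma^2)."""
    rng = random.Random(seed)
    x = [rng.gauss(0, sigma)]
    for _ in range(n - 1):
        x.append(phi * x[-1] + rng.gauss(0, sigma))
    return x

def lag1_autocorr(x):
    m = sum(x) / len(x)
    num = sum((x[t] - m) * (x[t - 1] - m) for t in range(1, len(x)))
    den = sum((v - m) ** 2 for v in x)
    return num / den

noise = ar1_noise(20000, phi=0.8)
rho = lag1_autocorr(noise)   # empirical lag-1 autocorrelation, near phi
```

The point of the AR structure is exactly the property the abstract highlights: the generated noise matches a prescribed temporal autocorrelation rather than being white.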
Rughiniș, Cosima; Humă, Bogdana
2015-12-01
In this paper we argue that quantitative survey-based social research essentializes age, through specific rhetorical tools. We outline the device of 'socio-demographic variables' and we discuss its argumentative functions, looking at scientific survey-based analyses of adult scientific literacy, in the Public Understanding of Science research field. 'Socio-demographics' are virtually omnipresent in survey literature: they are, as a rule, used and discussed as bundles of independent variables, requiring little, if any, theoretical and measurement attention. 'Socio-demographics' are rhetorically effective through their common-sense richness of meaning and inferential power. We identify their main argumentation functions as 'structure building', 'pacification', and 'purification'. Socio-demographics are used to uphold causal vocabularies, supporting the transmutation of the descriptive statistical jargon of 'effects' and 'explained variance' into 'explanatory factors'. Age can also be studied statistically as a main variable of interest, through the age-period-cohort (APC) disambiguation technique. While this approach has generated interesting findings, it did not mitigate the reductionism that appears when treating age as a socio-demographic variable. By working with age as a 'socio-demographic variable', quantitative researchers convert it (inadvertently) into a quasi-biological feature, symmetrical, as regards analytical treatment, with pathogens in epidemiological research. Copyright © 2015 Elsevier Inc. All rights reserved.
Variable frequency microprocessor clock generator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Branson, C.N.
A microprocessor-based system is described comprising: a digital central microprocessor provided with a clock input and having a rate of operation determined by the frequency of a clock signal input thereto; memory means operably coupled to the central microprocessor for storing programs respectively including a plurality of instructions and addressable by the central microprocessor; a peripheral device operably connected to the central microprocessor, the peripheral device being addressable by the central microprocessor for control thereby; a system clock generator for generating a digital reference clock signal having a reference frequency rate; and frequency rate reduction circuit means connected between the clock generator and the clock input of the central microprocessor for selectively dividing the reference clock signal to generate a microprocessor clock signal as an input to the central microprocessor for clocking the central microprocessor.
Chen, Kevin T; Izquierdo-Garcia, David; Poynton, Clare B; Chonde, Daniel B; Catana, Ciprian
2017-03-01
To propose an MR-based method for generating continuous-valued head attenuation maps and to assess its accuracy and reproducibility. Demonstrating that novel MR-based photon attenuation correction methods are both accurate and reproducible is essential prior to using them routinely in research and clinical studies on integrated PET/MR scanners. Continuous-valued linear attenuation coefficient maps ("μ-maps") were generated by combining atlases that provided the prior probability of voxel positions belonging to a certain tissue class (air, soft tissue, or bone) and an MR intensity-based likelihood classifier to produce posterior probability maps of tissue classes. These probabilities were used as weights to generate the μ-maps. The accuracy of this probabilistic atlas-based continuous-valued μ-map ("PAC-map") generation method was assessed by calculating the voxel-wise absolute relative change (RC) between the MR-based and scaled CT-based attenuation-corrected PET images. To assess reproducibility, we performed pair-wise comparisons of the RC values obtained from the PET images reconstructed using the μ-maps generated from the data acquired at three time points. The proposed method produced continuous-valued μ-maps that qualitatively reflected the variable anatomy in patients with brain tumor and agreed well with the scaled CT-based μ-maps. The absolute RC comparing the resulting PET volumes was 1.76 ± 2.33 %, quantitatively demonstrating that the method is accurate. Additionally, we also showed that the method is highly reproducible, the mean RC value for the PET images reconstructed using the μ-maps obtained at the three visits being 0.65 ± 0.95 %. Accurate and highly reproducible continuous-valued head μ-maps can be generated from MR data using a probabilistic atlas-based approach.
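The weighting step described above, where tissue-class posterior probabilities serve as weights on per-class attenuation values, can be sketched as follows. The linear attenuation coefficients (in cm^-1, roughly in the range of published 511 keV values) and the voxel posteriors are illustrative numbers, not the paper's atlas:

```python
# Sketch of a probability-weighted continuous-valued attenuation
# coefficient: mu = sum over classes of p(class) * mu(class).
# Class mu values (cm^-1) and posteriors are illustrative only.

MU = {"air": 0.0, "soft": 0.096, "bone": 0.151}

def mixed_mu(posteriors):
    """posteriors: dict class -> probability (must sum to 1)."""
    assert abs(sum(posteriors.values()) - 1.0) < 1e-9
    return sum(p * MU[c] for c, p in posteriors.items())

# A voxel judged 70% soft tissue, 30% bone:
mu = mixed_mu({"air": 0.0, "soft": 0.7, "bone": 0.3})
```

Because the output varies continuously with the posteriors, partial-volume voxels get intermediate mu values instead of being forced into a single discrete tissue class.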
Beyer, Thomas; Lassen, Martin L; Boellaard, Ronald; Delso, Gaspar; Yaqub, Maqsood; Sattler, Bernhard; Quick, Harald H
2016-02-01
We assess inter- and intra-subject variability of magnetic resonance (MR)-based attenuation maps (MRμMaps) of human subjects for state-of-the-art positron emission tomography (PET)/MR imaging systems. Four healthy male subjects underwent repeated MR imaging with a Siemens Biograph mMR, a Philips Ingenuity TF and a GE SIGNA PET/MR system using product-specific MR sequences and image processing algorithms for generating MRμMaps. Total lung volumes and mean attenuation values in nine thoracic reference regions were calculated. Linear regression was used for comparing lung volumes on MRμMaps. Intra- and inter-system variability was investigated using a mixed effects model. Intra-system variability was seen for the lung volume of some subjects (p = 0.29). Mean attenuation values across subjects were significantly different (p < 0.001) due to different segmentations of the trachea. Differences in the attenuation values caused noticeable intra-individual and inter-system differences that translated into a subsequent bias of the corrected PET activity values, as verified by independent simulations. Significant differences of MRμMaps generated for the same subjects but different PET/MR systems resulted in differences in attenuation correction factors, particularly in the thorax. These differences currently limit the quantitative use of PET/MR in multi-center imaging studies.
Axial and Centrifugal Compressor Mean Line Flow Analysis Method
NASA Technical Reports Server (NTRS)
Veres, Joseph P.
2009-01-01
This paper describes a method to estimate key aerodynamic parameters of single and multistage axial and centrifugal compressors. This mean-line compressor code COMDES provides the capability of sizing single and multistage compressors quickly during the conceptual design process. Based on the compressible fluid flow equations and the Euler equation, the code can estimate rotor inlet and exit blade angles when run in the design mode. The design point rotor efficiency and stator losses are inputs to the code, and are modeled at off design. When run in the off-design analysis mode, it can be used to generate performance maps based on simple models for losses due to rotor incidence and inlet guide vane reset angle. The code can provide an improved understanding of basic aerodynamic parameters such as diffusion factor, loading levels and incidence, when matching multistage compressor blade rows at design and at part-speed operation. Rotor loading levels and relative velocity ratio are correlated to the onset of compressor surge. NASA Stage 37 and the three-stage NASA 74-A axial compressors were analyzed and the results compared to test data. The code has been used to generate the performance map for the NASA 76-B three-stage axial compressor featuring variable geometry. The compressor stages were aerodynamically matched at off-design speeds by adjusting the variable inlet guide vane and variable stator geometry angles to control the rotor diffusion factor and incidence angles.
Demonstration of variable speed permanent magnet generator at small, low-head hydro site
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown Kinloch, David
Small hydro developers face a limited set of bad choices when choosing a generator for a small low-head hydro site. Direct drive synchronous generators are expensive and technically complex to install. Simpler induction generators are higher speed, requiring a speed increaser, which results in inefficiencies and maintenance problems. In addition, both induction and synchronous generators turn at a fixed speed, causing the turbine to run off its peak efficiency curve whenever the available head is different than the designed optimum head. The solution to these problems is the variable speed Permanent Magnet Generator (PMG). At the Weisenberger Mill in Midway, KY, a variable speed Permanent Magnet Generator has been installed and demonstrated. This new PMG system replaced an existing induction generator that had an HTD belt drive speed increaser system. Data was taken from the old generator before it was removed and compared to data collected after the PMG system was installed. The new variable speed PMG system is calculated to produce over 96% more energy than the old induction generator system during an average year. This significant increase was primarily due to the PMG generator operating at the correct speed at the maximum head, and the ability of the PMG generator to reduce its speed to lower optimum speeds as the stream flow increased and the net head decreased. This demonstration showed the importance of being able to adjust the speed of fixed blade turbines. All fixed blade turbines with varying net heads could achieve higher efficiencies if the speed can be matched to the optimum speed as the head changes. In addition, this demonstration showed that there are many potential efficiencies that could be realized with variable speed technology at hydro sites where mismatched turbine and generator speeds result in lower power output, even at maximum head. Funding for this project came from the US Dept. of Energy, through Award Number DE-EE0005429.
Triñanes, Yolanda; González-Villar, Alberto; Gómez-Perretta, Claudio; Carrillo-de-la-Peña, María T
2014-11-01
The heterogeneity found in fibromyalgia (FM) patients has led to the investigation of disease subgroups, mainly based on clinical features. The aim of this study was to test the hypothesis that clinical FM subgroups are associated with different underlying pathophysiological mechanisms. Sixty-three FM patients were classified in type I or type II, according to the Fibromyalgia Impact Questionnaire (FIQ), and in mild/moderate versus severe FM, according to the severity of three cardinal symptoms considered in the American College of Rheumatology (ACR) 2010 criteria (unrefreshed sleep, cognitive problems and fatigue). To validate the subgroups obtained by these two classifications, we calculated the area under the receiver operating characteristic curves for various clinical variables and for two potential biomarkers of FM: Response to experimental pressure pain (algometry) and the amplitude/intensity slopes of the auditory evoked potentials (AEPs) obtained to stimuli of increasing intensity. The variables that best discriminated type I versus type II were those related to depression, while the indices of clinical or experimental pain (threshold or tolerance) did not significantly differ between them. The variables that best discriminated the mild/moderate versus severe subgroups were those related to the algometry. The AEPs did not allow discrimination among the generated subsets. The FIQ-based classification allows the identification of subgroups that differ in psychological distress, while the index based on the ACR 2010 criteria seems to be useful to characterize the severity of FM mainly based on hyperalgesia. The incorporation of potential biomarkers to generate or validate classification criteria is crucial to advance in the knowledge of FM and in the understanding of pathophysiological pathways.
Energy Storage on the Grid and the Short-term Variability of Wind
NASA Astrophysics Data System (ADS)
Hittinger, Eric Stephen
Wind generation presents variability on every time scale, which must be accommodated by the electric grid. Limited quantities of wind power can be successfully integrated by the current generation and demand-side response mix but, as deployment of variable resources increases, the resulting variability becomes increasingly difficult and costly to mitigate. In Chapter 2, we model a co-located power generation/energy storage block composed of wind generation, a gas turbine, and fast-ramping energy storage. A scenario analysis identifies system configurations that can generate power with 30% of energy from wind, a variability of less than 0.5% of the desired power level, and an average cost around $70/MWh. While energy storage technologies have existed for decades, fast-ramping grid-level storage is still an immature industry and is experiencing relatively rapid improvements in performance and cost across a variety of technologies. Decreased capital cost, increased power capability, and increased efficiency all would improve the value of an energy storage technology and each has cost implications that vary by application, but there has not yet been an investigation of the marginal rate of technical substitution between storage properties. The analysis in chapter 3 uses engineering-economic models of four emerging fast-ramping energy storage technologies to determine which storage properties have the greatest effect on cost-of-service. We find that capital cost of storage is consistently important, and identify applications for which power/energy limitations are important. In some systems with a large amount of wind power, the costs of wind integration have become significant and market rules have been slowly changing in order to internalize or control the variability of wind generation. 
Chapter 4 examines several potential market strategies for mitigating the effects of wind variability and estimates the effect that each strategy would have on the operation and profitability of wind farms. We find that market scenarios using existing price signals to motivate wind generators to reduce variability allow them to participate in variability reduction when market conditions are favorable, and can reduce short-term (30-minute) fluctuations while having little effect on wind farm revenue.
On the distribution of a product of N Gaussian random variables
NASA Astrophysics Data System (ADS)
Stojanac, Željka; Suess, Daniel; Kliesch, Martin
2017-08-01
The product of Gaussian random variables appears naturally in many applications in probability theory and statistics. It has been known that the distribution of a product of N such variables can be expressed in terms of a Meijer G-function. Here, we compute a similar representation for the corresponding cumulative distribution function (CDF) and provide a power-log series expansion of the CDF based on the theory of the more general Fox H-functions. Numerical computations show that for small values of the argument the CDF of products of Gaussians is well approximated by the lowest orders of this expansion. Analogous results are also shown for the absolute value as well as the square of such products of N Gaussian random variables. For the latter two settings, we also compute the moment generating functions in terms of Meijer G-functions.
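The closed forms above rest on Meijer G- and Fox H-functions; a quick numerical cross-check of the CDF is possible by Monte Carlo simulation of products of N standard normals. By symmetry, the CDF of such a product at zero is exactly 1/2, which the sampled estimate should reproduce:

```python
# Monte Carlo estimate of the CDF of a product of N standard normal
# random variables, as a sanity check on the analytic representation.
import random

def product_cdf_at(x, n_factors, n_samples=200_000, seed=1):
    """Fraction of sampled products that fall at or below x."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        p = 1.0
        for _ in range(n_factors):
            p *= rng.gauss(0.0, 1.0)
        if p <= x:
            hits += 1
    return hits / n_samples

cdf0 = product_cdf_at(0.0, n_factors=3)   # symmetry gives CDF(0) = 1/2
```

The same sampler evaluated at other arguments gives reference points against which the truncated power-log series expansion of the CDF can be compared.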
Quantum simulation of quantum field theory using continuous variables
Marshall, Kevin; Pooser, Raphael C.; Siopsis, George; ...
2015-12-14
Much progress has been made in the field of quantum computing using continuous variables over the last couple of years. This includes the generation of extremely large entangled cluster states (10,000 modes, in fact) as well as a fault-tolerant architecture. This has led to the point that continuous-variable quantum computing can indeed be thought of as a viable alternative for universal quantum computing. With that in mind, we present a new algorithm for continuous-variable quantum computers which gives an exponential speedup over the best known classical methods. Specifically, this relates to efficiently calculating the scattering amplitudes in scalar bosonic quantum field theory, a problem that is known to be hard using a classical computer. Finally, we give an experimental implementation based on cluster states that is feasible with today's technology.
Developing the formula for state subsidies for health care in Finland.
Häkkinen, Unto; Järvelin, Jutta
2004-01-01
The aim was to generate a research-based proposal for a new subsidy formula for municipal healthcare services in Finland. Small-area data on potential need variables, supply of and access to services, and age-, sex- and case-mix-standardised service utilisation per capita were used. Utilisation was regressed in order to identify need variables and the cost weights for the selected need variables were subsequently derived using various multilevel models and structural equation methods. The variables selected for the subsidy formula were as follows: age- and sex-standardised mortality (age under 65 years) and income for outpatient primary health services; age- and sex-standardised mortality (all ages) and index of overcrowded housing for elderly care and long-term inpatient care; index of disability pensions for those aged 15-55 years and migration for specialised non-psychiatric care; and index of living alone and income for psychiatric care. Decisions on the amount of state subsidies can be divided into three stages, of which the first two are mainly political and the third is based on the results of this study.
Water management in the Roman world
NASA Astrophysics Data System (ADS)
Dermody, Brian J.; van Beek, Rens L. P. H.; Meeks, Elijah; Klein Goldewijk, Kees; Bierkens, Marc F. P.; Scheidel, Walter; Wassen, Martin J.; van der Velde, Ype; Dekker, Stefan C.
2014-05-01
Climate variability can have extreme impacts on societies in regions that are water-limited for agriculture. A society's ability to manage its water resources in such environments is critical to its long-term viability. Water management can involve improving agricultural yields through in-situ irrigation or redistributing water resources through trade in food. Here, we explore how such water management strategies affected the resilience of the Roman Empire to climate variability in the water-limited region of the Mediterranean. Using the large-scale hydrological model PCR-GLOBWB and estimates of landcover based on the Historical Database of the Global Environment (HYDE) we generate potential agricultural yield maps under variable climate. HYDE maps of population density in conjunction with potential yield estimates are used to develop maps of agricultural surplus and deficit. The surplus and deficit regions are abstracted to nodes on a water redistribution network based on the Stanford Geospatial Network Model of the Roman World (ORBIS). This demand-driven, water redistribution network allows us to quantitatively explore how water management strategies such as irrigation and food trade improved the resilience of the Roman Empire to climate variability.
NASA Astrophysics Data System (ADS)
Martínez-Lucas, G.; Pérez-Díaz, J. I.; Sarasúa, J. I.; Cavazzini, G.; Pavesi, G.; Ardizzon, G.
2017-04-01
This paper presents a dynamic simulation model of a laboratory-scale pumped-storage power plant (PSPP) operating in pumping mode with variable speed. The model considers the dynamic behavior of the conduits by means of an elastic water column approach, and synthetically generates both pressure and torque pulsations that reproduce the operation of the hydraulic machine in its instability region. The pressure and torque pulsations are generated each from a different set of sinusoidal functions. These functions were calibrated from the results of a CFD model, which was in turn validated from experimental data. Simulation model results match the numerical results of the CFD model with reasonable accuracy. The pump-turbine model (the functions used to generate pressure and torque pulsations inclusive) was up-scaled by hydraulic similarity according to the design parameters of a real PSPP and included in a dynamic simulation model of the said PSPP. Preliminary conclusions on the impact of unstable operation conditions on the penstock fatigue were obtained by means of a Monte Carlo simulation-based fatigue analysis.
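The pulsation-generation idea described above, synthesizing pressure and torque fluctuations from calibrated sets of sinusoidal functions, can be sketched as a sum of sinusoids around a mean value. The amplitudes, frequencies and phases below are placeholders; the paper calibrates them against CFD results:

```python
# Sketch of a sinusoid-based pulsation generator: mean value plus a
# calibrated set of sinusoidal components. The component parameters
# here are placeholders, not the paper's CFD-calibrated values.
import math

def pulsation(t, mean, components):
    """components: list of (amplitude, frequency_hz, phase_rad)."""
    return mean + sum(a * math.sin(2 * math.pi * f * t + ph)
                      for a, f, ph in components)

comps = [(0.05, 4.0, 0.0), (0.02, 9.0, 1.0)]   # placeholder calibration
# one second of normalized torque sampled at 1 kHz
torque = [pulsation(t / 1000.0, 1.0, comps) for t in range(1000)]
```

Because the components average to zero over full cycles, the synthesized signal fluctuates around the mean operating point, which is the behavior needed to feed a Monte Carlo fatigue analysis with realistic pulsation time series.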
Generated effect modifiers (GEM's) in randomized clinical trials.
Petkova, Eva; Tarpey, Thaddeus; Su, Zhe; Ogden, R Todd
2017-01-01
In a randomized clinical trial (RCT), it is often of interest not only to estimate the effect of various treatments on the outcome, but also to determine whether any patient characteristic has a different relationship with the outcome depending on treatment. In regression models for the outcome, if there is a non-zero interaction between treatment and a predictor, that predictor is called an "effect modifier". Identification of such effect modifiers is crucial as we move towards precision medicine, that is, optimizing individual treatment assignment based on patient measurements assessed when presenting for treatment. In most settings, there will be several baseline predictor variables that could potentially modify the treatment effects. This article proposes optimal methods of constructing a composite variable (defined as a linear combination of pre-treatment patient characteristics) in order to generate an effect modifier in an RCT setting. Several criteria are considered for generating effect modifiers and their performance is studied via simulations. An example from an RCT is provided for illustration.
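The core idea, a composite score built as a linear combination of baseline covariates that modifies the treatment effect, can be illustrated with a toy simulation. This is not the paper's estimator: the weights, the data-generating model, and the median split are all invented for the example:

```python
# Toy RCT simulation: the treatment effect depends on a composite
# score z = w . x of baseline covariates, so splitting on z reveals
# effect modification. All parameters here are illustrative.
import random

random.seed(7)
w = (1.0, -0.5)                            # hypothetical composite weights

def simulate_patient():
    x = (random.gauss(0, 1), random.gauss(0, 1))
    treat = random.random() < 0.5          # 1:1 randomization
    z = w[0] * x[0] + w[1] * x[1]          # composite score z = w . x
    # outcome: treatment helps when z is high, hurts when z is low
    y = (2.0 * z if treat else 0.0) + random.gauss(0, 0.5)
    return z, treat, y

data = [simulate_patient() for _ in range(4000)]

def avg_treatment_effect(rows):
    treated = [y for _, tr, y in rows if tr]
    control = [y for _, tr, y in rows if not tr]
    return sum(treated) / len(treated) - sum(control) / len(control)

high = [r for r in data if r[0] > 0]       # z > 0 subgroup
low = [r for r in data if r[0] <= 0]
effect_high = avg_treatment_effect(high)
effect_low = avg_treatment_effect(low)
```

Neither covariate alone need show a strong interaction; it is the combination z that separates patients who benefit from those who do not, which is the motivation for constructing composite effect modifiers.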
Dreams Fulfilled and Shattered: Determinants of Segmented Assimilation in the Second Generation*
Haller, William; Portes, Alejandro; Lynch, Scott M.
2013-01-01
We summarize prior theories on the adaptation process of the contemporary immigrant second generation as a prelude to presenting additive and interactive models showing the impact of family variables, school contexts and academic outcomes on the process. For this purpose, we regress indicators of educational and occupational achievement in early adulthood on predictors measured three and six years earlier. The Children of Immigrants Longitudinal Study (CILS), used for the analysis, allows us to establish a clear temporal order among exogenous predictors and the two dependent variables. We also construct a Downward Assimilation Index (DAI), based on six indicators and regress it on the same set of predictors. Results confirm a pattern of segmented assimilation in the second generation, with a significant proportion of the sample experiencing downward assimilation. Predictors of the latter are the obverse of those of educational and occupational achievement. Significant interaction effects emerge between these predictors and early school contexts, defined by different class and racial compositions. Implications of these results for theory and policy are examined. PMID:24223437
NASA Astrophysics Data System (ADS)
Naghibolhosseini, Maryam; Long, Glenis
2011-11-01
The distortion product otoacoustic emission (DPOAE) input/output (I/O) function may provide a potential tool for evaluating cochlear compression. Hearing loss raises the level of sound that is just audible to a person, which affects cochlear compression and thus the dynamic range of hearing. Although the slope of the I/O function is highly variable when the total DPOAE is used, separating the nonlinear-generator component from the reflection component reduces this variability. We separated the two components using least squares fit (LSF) analysis of logarithmically sweeping tones, and confirmed that the separated generator component provides more consistent I/O functions than the total DPOAE. In this paper we estimated the slope of the I/O functions of the generator components at different sound levels using LSF analysis. An artificial neural network (ANN) was used to estimate psychophysical thresholds from the estimated slopes of the I/O functions. DPOAE I/O functions determined in this way may help to estimate hearing thresholds and cochlear health.
R.A. Payn; M.N. Gooseff; B.L. McGlynn; K.E. Bencala; S.M. Wondzell
2012-01-01
Relating watershed structure to streamflow generation is a primary focus of hydrology. However, comparisons of longitudinal variability in stream discharge with adjacent valley structure have been rare, resulting in poor understanding of the distribution of the hydrologic mechanisms that cause variability in streamflow generation along valleys. This study explores...
NASA Technical Reports Server (NTRS)
Hou, Arthur Y.
2011-01-01
A major challenge in understanding the space-time variability of continental water fluxes is the lack of accurate precipitation estimates over complex terrains. While satellite precipitation observations can be used to complement ground-based data to obtain improved estimates, space-based and ground-based estimates come with their own sets of uncertainties, which must be understood and characterized. Quantitative estimation of uncertainties in these products also provides a necessary foundation for merging satellite and ground-based precipitation measurements within a rigorous statistical framework. Global Precipitation Measurement (GPM) is an international satellite mission that will provide next-generation global precipitation data products for research and applications. It consists of a constellation of microwave sensors provided by NASA, JAXA, CNES, ISRO, EUMETSAT, DOD, NOAA, NPP, and JPSS. At the heart of the mission is the GPM Core Observatory provided by NASA and JAXA to be launched in 2013. The GPM Core, which will carry the first space-borne dual-frequency radar and a state-of-the-art multi-frequency radiometer, is designed to set new reference standards for precipitation measurements from space, which can then be used to unify and refine precipitation retrievals from all constellation sensors. The next-generation constellation-based satellite precipitation estimates will be characterized by intercalibrated radiometric measurements and physical-based retrievals using a common observation-derived hydrometeor database. 
For pre-launch algorithm development and post-launch product evaluation, NASA supports an extensive ground validation (GV) program in cooperation with domestic and international partners to improve (1) physics of remote-sensing algorithms through a series of focused field campaigns, (2) characterization of uncertainties in satellite and ground-based precipitation products over selected GV testbeds, and (3) modeling of atmospheric processes and land surface hydrology through simulation, downscaling, and data assimilation. An overview of the GPM mission, science status, and synergies with HyMex activities will be presented
NASA Astrophysics Data System (ADS)
Hartin, C.; Lynch, C.; Kravitz, B.; Link, R. P.; Bond-Lamberty, B. P.
2017-12-01
Typically, uncertainty quantification of internal variability relies on large ensembles of climate model runs under multiple forcing scenarios or perturbations in a parameter space. Computationally efficient, standard pattern scaling techniques only generate one realization and do not capture the complicated dynamics of the climate system (i.e., stochastic variations with a frequency-domain structure). In this study, we generate large ensembles of climate data with spatially and temporally coherent variability across a subselection of Coupled Model Intercomparison Project Phase 5 (CMIP5) models. First, for each CMIP5 model we apply a pattern emulation approach to derive the model response to external forcing. We take all the spatial and temporal variability that isn't explained by the emulator and decompose it into non-physically based structures through use of empirical orthogonal functions (EOFs). Then, we perform a Fourier decomposition of the EOF projection coefficients to capture the input fields' temporal autocorrelation so that our new emulated patterns reproduce the proper timescales of climate response and "memory" in the climate system. Through this 3-step process, we derive computationally efficient climate projections consistent with CMIP5 model trends and modes of variability, which address a number of deficiencies inherent in the ability of pattern scaling to reproduce complex climate model behavior.
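The EOF step in the process above, projecting the emulator-unexplained residual variability onto orthogonal spatial patterns, can be sketched with a leading-EOF extraction via power iteration on the spatial covariance matrix. The three-point "grid" and residual series are fabricated; in the study the inputs would be CMIP5 residual fields:

```python
# Sketch of a leading-EOF extraction: power iteration on the spatial
# covariance of residual variability. Grid size and residuals are
# fabricated; two of the three points share a common mode.
import random

random.seed(0)
common = [random.gauss(0, 1) for _ in range(500)]
resid = [[c + random.gauss(0, 0.1),
          c + random.gauss(0, 0.1),
          random.gauss(0, 0.1)] for c in common]

def leading_eof(rows, iters=200):
    """Unit-norm leading eigenvector of the spatial covariance."""
    n = len(rows[0])
    cov = [[sum(r[i] * r[j] for r in rows) / len(rows)
            for j in range(n)] for i in range(n)]
    v = [1.0] * n
    for _ in range(iters):                 # power iteration
        v = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    return v

eof1 = leading_eof(resid)
```

The recovered pattern loads roughly equally on the two correlated points and near zero on the independent one, so projecting residuals onto it (and subsequently Fourier-decomposing the projection coefficients, as described above) isolates the dominant coherent mode of variability.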
Variability of Power from Large-Scale Solar Photovoltaic Scenarios in the State of Gujarat: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parsons, B.; Hummon, M.; Cochran, J.
2014-04-01
India has ambitious goals for high utilization of variable renewable power from wind and solar, and deployment has been proceeding at a rapid pace. The western state of Gujarat currently has the largest amount of solar generation of any Indian state, with over 855 Megawatts direct current (MWDC). Combined with over 3,240 MW of wind, variable generation renewables comprise nearly 18% of the electric-generating capacity in the state. A new historic 10-kilometer (km) gridded solar radiation data set capturing hourly insolation values for 2002-2011 is available for India. We apply an established method for downscaling hourly irradiance data to one-minute irradiance data at potential PV power production locations for one year, 2006. The objective of this report is to characterize the intra-hour variability of existing and planned photovoltaic solar power generation in the state of Gujarat (a total of 1.9 gigawatts direct current (GWDC)), and of five possible expansion scenarios of solar generation that reflect a range of geographic diversity (each scenario totals 500-1,000 MW of additional solar capacity). The report statistically analyzes one year's worth of power variability data, applied to both the baseline and expansion scenarios, to evaluate diurnal and seasonal power fluctuations, different timescales of variability (e.g., from one to 15 minutes), the magnitude of variability (both total megawatts and relative to installed solar capacity), and the extent to which the variability can be anticipated in advance. The paper also examines how Gujarat Energy Transmission Corporation (GETCO) and the Gujarat State Load Dispatch Centre (SLDC) could make use of the solar variability profiles in grid operations and planning.
Generation of three-dimensional optical cusp beams with ultrathin metasurfaces.
Liu, Weiwei; Zhang, Yuchao; Gao, Jie; Yang, Xiaodong
2018-06-22
Cusp beams are one type of complex structured beam, with unique multiple self-accelerating channels and needle-like field structures that hold great potential to advance applications such as particle micromanipulation and super-resolution imaging. The traditional method of generating optical catastrophes relies on cumbersome reflective diffractive optical elements, which complicates the optical system and hinders nanophotonic integration. Here we design geometric-phase-based ultrathin plasmonic metasurfaces made of nanoslit antennas to produce three-dimensional (3D) optical cusp beams with variable numbers of self-accelerating channels over a broadband wavelength range. The entire beam propagation profiles of the cusp beams generated from the metasurfaces are mapped theoretically and experimentally. The special self-accelerating behavior and caustic concentration property of the cusp beams are also demonstrated. Our results hold great potential for promoting metasurface-enabled compact photonic devices for wide applications in light-matter interactions.
Dutra Vieira, Thainá; Pegoraro de Macedo, Marcia Raquel; Fedatto Bernardon, Fabiana; Müller, Gertrud
2017-10-01
The nematode Diplotriaena bargusinica is a bird air sac parasite, and its taxonomy is based mainly on morphological and morphometric characteristics. Increasing knowledge of genetic information variability has spurred the use of DNA markers in conjunction with morphological data for inferring phylogenetic relationships in different taxa. Considering the potential of molecular biology in taxonomy, this study presents the morphological and molecular characterization of D. bargusinica, and establishes the phylogenetic position of the nematode in Spirurina. Twenty partial sequences of the 18S region of D. bargusinica rDNA were generated. Phylogenetic trees were obtained through the Maximum Likelihood and Bayesian Inference methods where both had similar topology. The group Diplotriaenoidea is monophyletic and the topologies generated corroborate the phylogenetic studies based on traditional and previously performed molecular taxonomy. This study is the first to generate molecular data associated with the morphology of the species. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Xin; Zeng, Mingjian; Wang, Yuan; Wang, Wenlan; Wu, Haiying; Mei, Haixia
2016-10-01
Different choices of control variables in variational assimilation can bring about different influences on the analyzed atmospheric state. Based on the WRF model's three-dimensional variational assimilation system, this study compares the behavior of two momentum control variable options—streamfunction and velocity potential (ψ-χ) and horizontal wind components (U-V)—in radar wind data assimilation for a squall line case that occurred in Jiangsu Province on 24 August 2014. The wind increment from the single-observation test shows that the ψ-χ control variable scheme produces negative increments in the neighborhood around the observation point because streamfunction and velocity potential preserve integrals of velocity. In contrast, the U-V control variable scheme directly reflects the information of the observation itself. Furthermore, radial velocity data from 17 Doppler radars in eastern China are assimilated. Compared with assimilating conventional observations alone, the assimilation of radar radial velocity based on the U-V control variable scheme significantly improves the mesoscale dynamic field in the initial condition. The enhanced low-level jet stream, water vapor convergence and low-level wind shear result in better squall line forecasting. However, the ψ-χ control variable scheme generates a discontinuous wind field and unrealistic convergence/divergence in the analyzed field, which lead to a degraded precipitation forecast.
Luo, Mingzhang; Li, Weijie; Wang, Junming; Chen, Xuemin; Song, Gangbing
2018-01-01
As a common approach to nondestructive testing and evaluation, guided wave-based methods have attracted much attention because of their wide detection range and high detection efficiency. It is highly desirable to develop a portable guided wave testing system with high actuating energy and variable frequency. In this paper, a novel giant magnetostrictive actuator with high actuation power is designed and implemented, based on the giant magnetostrictive (GMS) effect. The novel GMS actuator design involves a conical energy-focusing head that can focus the amplified mechanical energy generated by the GMS actuator. This design enables the generation of stress waves with high energy, and the focusing of the generated stress waves on the test object. The guided wave generation system enables two kinds of output modes: the coded pulse signal and the sweep signal. The functionality and the advantages of the developed system are validated through laboratory testing in the quality assessment of rock bolt-reinforced structures. In addition, the developed GMS actuator and the supporting system are successfully implemented and applied in field tests. The device can also be used in other nondestructive testing and evaluation applications that require high-power stress wave generation. PMID:29510540
Luo, Mingzhang; Li, Weijie; Wang, Junming; Wang, Ning; Chen, Xuemin; Song, Gangbing
2018-03-04
As a common approach to nondestructive testing and evaluation, guided wave-based methods have attracted much attention because of their wide detection range and high detection efficiency. It is highly desirable to develop a portable guided wave testing system with high actuating energy and variable frequency. In this paper, a novel giant magnetostrictive actuator with high actuation power is designed and implemented, based on the giant magnetostrictive (GMS) effect. The novel GMS actuator design involves a conical energy-focusing head that can focus the amplified mechanical energy generated by the GMS actuator. This design enables the generation of stress waves with high energy, and the focusing of the generated stress waves on the test object. The guided wave generation system enables two kinds of output modes: the coded pulse signal and the sweep signal. The functionality and the advantages of the developed system are validated through laboratory testing in the quality assessment of rock bolt-reinforced structures. In addition, the developed GMS actuator and the supporting system are successfully implemented and applied in field tests. The device can also be used in other nondestructive testing and evaluation applications that require high-power stress wave generation.
1989-08-01
Random variables for the conditional exponential distribution are generated using the inverse transform method: (1) generate U ~ U(0,1); (2) set s = -λ ln U. Random variables from the conditional Weibull distribution are generated using the inverse transform method, solving U = exp(-[(x+s-γ)/η]^β + [(x-γ)/η]^β) for s. ... using a standard normal transformation and the inverse transform method. APPENDIX: DISTRIBUTIONS SUPPORTED BY THE MODEL. (1) Generate Y ...
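The inverse transform recipes referenced above can be written out for the unconditional exponential and Weibull cases (the conditional, i.e. left-truncated, variants described in the report add a shift term that is garbled in the source and is omitted here):

```python
import math, random

def exponential_inverse_transform(lam, u=None):
    """Sample X ~ Exponential(rate=lam) via the inverse transform method:
    generate U ~ U(0,1), then set X = -ln(1 - U) / lam."""
    if u is None:
        u = random.random()
    return -math.log(1.0 - u) / lam

def weibull_inverse_transform(shape, scale, u=None):
    """Sample X ~ Weibull(shape, scale) the same way:
    X = scale * (-ln(1 - U)) ** (1 / shape)."""
    if u is None:
        u = random.random()
    return scale * (-math.log(1.0 - u)) ** (1.0 / shape)
```

With shape = 1 the Weibull sampler reduces to the exponential one, a quick sanity check on both formulas.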
Quantum random bit generation using energy fluctuations in stimulated Raman scattering.
Bustard, Philip J; England, Duncan G; Nunn, Josh; Moffatt, Doug; Spanner, Michael; Lausten, Rune; Sussman, Benjamin J
2013-12-02
Random number sequences are a critical resource in modern information processing systems, with applications in cryptography, numerical simulation, and data sampling. We introduce a quantum random number generator based on the measurement of pulse energy quantum fluctuations in Stokes light generated by spontaneously-initiated stimulated Raman scattering. Bright Stokes pulse energy fluctuations up to five times the mean energy are measured with fast photodiodes and converted to unbiased random binary strings. Since the pulse energy is a continuous variable, multiple bits can be extracted from a single measurement. Our approach can be generalized to a wide range of Raman active materials; here we demonstrate a prototype using the optical phonon line in bulk diamond.
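The abstract notes that, because pulse energy is a continuous variable, multiple bits can be extracted per measurement. The paper's actual unbiasing procedure is not described in the abstract; the following rank-based binning is only an illustrative sketch of the idea, not the authors' method:

```python
import random

def extract_bits(samples, n_bits=2):
    """Extract n_bits per continuous measurement by binning each sample
    into 2**n_bits equal-mass bins of the empirical distribution, then
    emitting the bin index as bits. Equal-mass bins make each output bit
    close to unbiased regardless of the underlying distribution."""
    ordered = sorted(samples)
    n_bins = 2 ** n_bits
    # Quantile edges splitting the sample into equal-mass bins
    edges = [ordered[(i * len(ordered)) // n_bins] for i in range(1, n_bins)]
    bits = []
    for s in samples:
        idx = sum(s >= e for e in edges)          # bin index in [0, n_bins)
        bits.extend((idx >> b) & 1 for b in reversed(range(n_bits)))
    return bits
```

A real generator would calibrate the bin edges on held-out data and apply randomness extraction afterwards; this sketch conflates calibration and extraction for brevity.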
Ratio index variables or ANCOVA? Fisher's cats revisited.
Tu, Yu-Kang; Law, Graham R; Ellison, George T H; Gilthorpe, Mark S
2010-01-01
Over 60 years ago Ronald Fisher demonstrated a number of potential pitfalls with statistical analyses using ratio variables. Nonetheless, these pitfalls are largely overlooked in contemporary clinical and epidemiological research, which routinely uses ratio variables in statistical analyses. This article aims to demonstrate how very different findings can be generated as a result of less than perfect correlations among the data used to generate ratio variables. These imperfect correlations result from measurement error and random biological variation. While the former can often be reduced by improvements in measurement, random biological variation is difficult to estimate and eliminate in observational studies. Moreover, wherever the underlying biological relationships among epidemiological variables are unclear, and hence the choice of statistical model is also unclear, the different findings generated by different analytical strategies can lead to contradictory conclusions. Caution is therefore required when interpreting analyses of ratio variables whenever the underlying biological relationships among the variables involved are unspecified or unclear. (c) 2009 John Wiley & Sons, Ltd.
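Fisher's pitfall can be reproduced in a few lines: two independent variables become strongly correlated once both are divided by a common third variable. The numbers below are an illustrative simulation, not data from the article:

```python
import random

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

random.seed(42)
n = 5000
x = [random.gauss(10, 1) for _ in range(n)]  # three mutually
y = [random.gauss(10, 1) for _ in range(n)]  # independent
z = [random.gauss(10, 1) for _ in range(n)]  # variables

r_raw = pearson(x, y)                        # near zero, as expected
r_ratio = pearson([a / c for a, c in zip(x, z)],
                  [b / c for b, c in zip(y, z)])  # spuriously large
```

With equal coefficients of variation the spurious correlation of the two ratios is about 0.5, even though x and y share no information at all; the shared denominator z does all the work.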
Using the Quantile Mapping to improve a weather generator
NASA Astrophysics Data System (ADS)
Chen, Y.; Themessl, M.; Gobiet, A.
2012-04-01
We developed a weather generator (WG) using statistical and stochastic methods, among them quantile mapping (QM), Monte Carlo sampling, auto-regression, and empirical orthogonal functions (EOFs). One of the important steps in the WG is the use of QM, through which all variables, whatever their original distributions, are transformed into normally distributed variables. The WG can therefore work on normally distributed variables, which greatly facilitates the treatment of random numbers in the WG. Monte Carlo sampling and auto-regression are used to generate the realizations; EOFs are employed to preserve the spatial relationships and the relationships between different meteorological variables. We have established a complete model named WGQM (weather generator and quantile mapping), which can be applied flexibly to generate daily or hourly time series. For example, with 30-year daily (hourly) data and 100-year monthly (daily) data as input, 100-year daily (hourly) data can be produced reasonably well. Some evaluation experiments with WGQM have been carried out in the area of Austria and the evaluation results will be presented.
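The QM step above can be sketched with empirical normal scores: each value is replaced by the standard normal quantile of its plotting position. This is a generic illustration of mapping an arbitrary distribution onto N(0,1), not the WGQM implementation itself:

```python
import statistics

def to_normal_scores(x):
    """Empirical quantile mapping of a sample onto N(0,1).

    Each value is replaced by the standard normal quantile of its plotting
    position (rank + 0.5) / n, so ranks are preserved exactly and the
    output is approximately standard normal whatever the input law.
    """
    n = len(x)
    nd = statistics.NormalDist()
    order = sorted(range(n), key=lambda i: x[i])
    scores = [0.0] * n
    for rank, i in enumerate(order):
        scores[i] = nd.inv_cdf((rank + 0.5) / n)
    return scores
```

The inverse mapping (normal score back to the empirical quantile of the target variable) would complete the round trip that lets the generator work entirely in normal space.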
DI Pietro, Tammie L; Doran, Diane M; McArthur, Gregory
2010-01-01
Variations in nursing care have been observed, affecting patient outcomes and quality of care. Case-based reasoners that benchmark for patient indicators can reduce variation through decision support. This study evaluated and validated a case-based reasoning application to establish benchmarks for nursing-sensitive patient outcomes of pain, fatigue, and toilet use, using patient characteristic variables for generating similar cases. Three graduate nursing students participated. Each ranked 25 patient cases using demographics of age, sex, diagnosis, and comorbidities against 10 patients from a database. Participant judgments of case similarity were compared with the case-based reasoning system. Feature weights for each indicator were adjusted to make the case-based reasoning system's similarity ranking correspond more closely to participant judgment. Small differences were noted between initial weights and weights generated from participants. For example, initial weight for comorbidities was 0.35, whereas weights generated by participants for pain, fatigue, and toilet use were 0.49, 0.42, and 0.48, respectively. For the same outcomes, the initial weight for sex was 0.15, but weights generated by the participants were 0.025, 0.002, and 0.000, respectively. Refinement of the case-based reasoning tool established valid benchmarks for patient outcomes in relation to participants and assisted in point-of-care decision making.
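The case-based reasoning system above ranks stored cases by a feature-weighted similarity to the query patient. A minimal sketch of that kind of weighted similarity follows; the feature set, local similarity functions, and weights here are illustrative stand-ins, not the study's actual configuration (the study's adjusted weights, e.g. 0.49 for comorbidities on the pain outcome, were learned from participant judgments):

```python
def local_sim(a, b):
    """Local similarity in [0, 1]: exact match for categorical values,
    scaled absolute difference for numeric ones (assuming a 0-100 range,
    e.g. age). Purely illustrative."""
    if isinstance(a, str):
        return 1.0 if a == b else 0.0
    return max(0.0, 1.0 - abs(a - b) / 100.0)

def weighted_similarity(case_a, case_b, weights):
    """Global similarity: weighted average of per-feature local
    similarities, normalized by the total weight."""
    num = sum(w * local_sim(case_a[f], case_b[f]) for f, w in weights.items())
    return num / sum(weights.values())
```

Re-weighting a feature (e.g. dropping the weight on sex toward zero, as the participants' judgments implied) changes the ranking without touching the case base, which is exactly the calibration step the study performed.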
NASA Astrophysics Data System (ADS)
Sleeter, B. M.; Daniel, C.; Frid, L.; Fortin, M. J.
2016-12-01
State-and-transition simulation models (STSMs) provide a general approach for incorporating uncertainty into forecasts of landscape change. Using a Monte Carlo approach, STSMs generate spatially-explicit projections of the state of a landscape based upon probabilistic transitions defined between states. While STSMs are based on the basic principles of Markov chains, they have additional properties that make them applicable to a wide range of questions and types of landscapes. A current limitation of STSMs is that they are only able to track the fate of discrete state variables, such as land use/land cover (LULC) classes. There are some landscape modelling questions, however, for which continuous state variables - for example carbon biomass - are also required. Here we present a new approach for integrating continuous state variables into spatially-explicit STSMs. Specifically we allow any number of continuous state variables to be defined for each spatial cell in our simulations; the value of each continuous variable is then simulated forward in discrete time as a stochastic process based upon defined rates of change between variables. These rates can be defined as a function of the realized states and transitions of each cell in the STSM, thus providing a connection between the continuous variables and the dynamics of the landscape. We demonstrate this new approach by (1) developing a simple IPCC Tier 3 compliant model of ecosystem carbon biomass, where the continuous state variables are defined as terrestrial carbon biomass pools and the rates of change as carbon fluxes between pools, and (2) integrating this carbon model with an existing LULC change model for the state of Hawaii, USA.
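The coupling described above, where a continuous state variable evolves at rates tied to each cell's realized discrete state, can be sketched for a single cell. The states, transition probabilities, and carbon fluxes below are invented for illustration and are not from the Hawaii model:

```python
import random

# Discrete states with Markov transition probabilities (rows sum to 1)
TRANSITIONS = {"forest":  [("forest", 0.98), ("cleared", 0.02)],
               "cleared": [("cleared", 0.90), ("forest", 0.10)]}
# Annual carbon flux (arbitrary units) conditional on the realized state
GROWTH = {"forest": 2.0, "cleared": -5.0}

def step(state, carbon, rng):
    """One annual time step: sample the discrete transition, then update
    the continuous carbon stock at the rate tied to the realized state."""
    r, acc = rng.random(), 0.0
    for nxt, p in TRANSITIONS[state]:
        acc += p
        if r < acc:
            state = nxt
            break
    carbon = max(0.0, carbon + GROWTH[state])
    return state, carbon

def simulate(years, rng):
    """Monte Carlo trajectory for one cell starting as forest."""
    state, carbon = "forest", 100.0
    for _ in range(years):
        state, carbon = step(state, carbon, rng)
    return state, carbon
```

Running many such trajectories over a grid of cells is what turns the cell-level rule into a spatially explicit, probabilistic landscape forecast.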
Strength validation and fire endurance of glued-laminated timber beams
E. L. Schaffer; C. M. Marx; D. A. Bender; F. E. Woeste
A previous paper presented a reliability-based model to predict the strength of glued-laminated timber beams at both room temperature and during fire exposure. This Monte Carlo simulation procedure generates strength and fire endurance (time-to-failure, TTF) data for glued- laminated beams that allow assessment of mean strength and TTF as well as their variability....
ERIC Educational Resources Information Center
Eaton, Karen M.; Messer, Stephen C.; Garvey Wilson, Abigail L.; Hoge, Charles W.
2006-01-01
The objectives of this study were to generate precise estimates of suicide rates in the military while controlling for factors contributing to rate variability such as demographic differences and classification bias, and to develop a simple methodology for the determination of statistically derived thresholds for detecting significant rate…
ERIC Educational Resources Information Center
Albert, Jennifer L.
2014-01-01
A cohort of 15 participants was chosen to engage in a study to identify a relationship between personality type and the propensity to choose professions in student affairs. Participants were chosen based upon specific demographic criteria in an effort to highlight inconsistencies in traditionally accepted variables, including gender, ethnicity,…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nasir, M. N. M.; Saharuddin, N. Z.; Sulaima, M. F.
This paper presents the performance evaluation of a standalone hybrid photovoltaic (PV)-wind generator system at the Faculty of Electrical Engineering (FKE), UTeM. The hybrid PV-wind system at UTeM combines a wind turbine with a solar array, and the energy it generates charges the battery and supplies the LED street-lighting load. The purpose of this project is to evaluate the performance of the PV-wind hybrid generator. A solar radiation meter was used to measure solar radiation and an anemometer was used to measure wind speed. The effectiveness of the PV-wind system is assessed from the various data that were collected and compared. The results show that the hybrid system has greater reliability. For the solar subsystem, the correlation coefficient shows a strong relationship between the two variables of radiation and current: the output current follows the fluctuations of the solar radiation. However, the correlation coefficient shows only a moderate relationship between the two variables of wind speed and voltage. Hence, the wind turbine at FKE does not operate consistently enough to serve as an energy source for the hybrid system compared with the PV system. When the wind system is not fully operating because of its inconsistent energy source, the PV system supplies the load to balance the extra demand.
Towards a New Generation of Time-Series Visualization Tools in the ESA Heliophysics Science Archives
NASA Astrophysics Data System (ADS)
Perez, H.; Martinez, B.; Cook, J. P.; Herment, D.; Fernandez, M.; De Teodoro, P.; Arnaud, M.; Middleton, H. R.; Osuna, P.; Arviset, C.
2017-12-01
During the last decades a varied set of Heliophysics missions have allowed the scientific community to gain better knowledge of the solar atmosphere and activity. The remote-sensing images of missions such as SOHO have paved the ground for helio-based spatial data visualization software such as JHelioviewer/Helioviewer. On the other hand, the huge amount of in-situ measurements provided by other missions such as Cluster forms a wide base for plot visualization software whose reach is still far from being fully exploited. The Heliophysics science archives within the ESAC Science Data Center (ESDC) already provide a first generation of tools for time-series visualization focusing on each mission's needs: visualization of quicklook plots, cross-calibration time series, pre-generated/on-demand multi-plot stacks (Cluster), basic plot zoom in/out options (Ulysses) and easy navigation through the plots in time (Ulysses, Cluster, ISS-Solaces). However, as needs evolve, scientists involved in new missions require multi-variable plotting, interactive synchronization of heat-map stacks, and axis-variable selection, among other improvements. The new Heliophysics archives (such as Solar Orbiter) and the evolution of existing ones (Cluster) intend to address these new challenges. This paper provides an overview of the different approaches for visualizing time series followed within the ESA Heliophysics archives and their foreseen evolution.
Systematic Approach to Better Understanding Integration Costs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stark, Gregory B.
2015-09-01
This research presents a systematic approach to evaluating the costs of integrating new generation and operational procedures into an existing power system, and the methodology is independent of the type of change or nature of the generation. The work was commissioned by the U.S. Department of Energy and performed by the National Renewable Energy Laboratory to investigate three integration cost-related questions: (1) How does the addition of new generation affect a system's operational costs, (2) How do generation mix and operating parameters and procedures affect costs, and (3) How does the amount of variable generation (non-dispatchable wind and solar) impact the accuracy of natural gas orders? A detailed operational analysis was performed for seven sets of experiments: variable generation, large conventional generation, generation mix, gas prices, fast-start generation, self-scheduling, and gas supply constraints. For each experiment, four components of integration costs were examined: cycling costs, non-cycling VO&M costs, fuel costs, and reserves provisioning costs. The investigation was conducted with PLEXOS production cost modeling software utilizing an updated version of the Institute of Electrical and Electronics Engineers 118-bus test system overlaid with projected operating loads from the Western Electricity Coordinating Council for the Sacramento Municipal Utility District, Puget Sound Energy, and Public Service Colorado in the year 2020. The test system was selected in consultation with an industry-based technical review committee to be a reasonable approximation of an interconnection yet small enough to allow the research team to investigate a large number of scenarios and sensitivity combinations. The research should prove useful to market designers, regulators, utilities, and others who want to better understand how system changes can affect production costs.
Self-referenced continuous-variable quantum key distribution
Soh, Daniel B. S.; Sarovar, Mohan; Camacho, Ryan
2017-01-24
Various technologies for continuous-variable quantum key distribution without transmitting a transmitter's local oscillator are described herein. A receiver on an optical transmission channel uses an oscillator signal generated by a light source at the receiver's location to perform interferometric detection on received signals. An optical reference pulse is sent by the transmitter on the transmission channel and the receiver computes a phase offset of the transmission based on quadrature measurements of the reference pulse. The receiver can then compensate for the phase offset between the transmitter's reference and the receiver's reference when measuring quadratures of received data pulses.
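The compensation step described above, estimating the transmitter-receiver phase offset from quadrature measurements of a reference pulse and rotating the data quadratures back, can be sketched classically. This assumes the reference pulse is sent along a known quadrature (+q); the actual protocol details are in the patent, not reproduced here:

```python
import math

def phase_offset(q_ref, p_ref):
    """Estimate the phase offset between the transmitter's and receiver's
    references from the measured quadratures of a reference pulse that was
    sent along the +q axis."""
    return math.atan2(p_ref, q_ref)

def compensate(q, p, theta):
    """Rotate measured data quadratures back by the estimated offset,
    recovering the quadratures in the transmitter's frame."""
    return (q * math.cos(theta) + p * math.sin(theta),
            -q * math.sin(theta) + p * math.cos(theta))
```

In a real continuous-variable QKD receiver the quadrature measurements also carry shot noise, so the offset would be estimated from many reference pulses rather than one.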
Spectrum sensitivity, energy yield, and revenue prediction of PV and CPV modules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kinsey, Geoffrey S., E-mail: Geoffrey.kinsey@ee.doe.gov
2015-09-28
The impact of spectral irradiance variation on module performance has been determined for III-V multijunctions and compared against the four most common flat-plate module types (cadmium telluride, multicrystalline silicon, copper indium gallium selenide, and monocrystalline silicon). Hour-by-hour representative spectra were generated using atmospheric variables for Albuquerque, New Mexico, USA. Convolution with published values for external quantum efficiency gave the predicted current output. When combined with specifications of commercial PV modules, energy yield and revenue were predicted. This approach provides a means for optimizing PV module design based on various site-specific temporal variables.
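The convolution step mentioned above, spectral irradiance times external quantum efficiency integrated over wavelength, yields the predicted short-circuit current density. A minimal sketch of that integral follows; the wavelength grid, irradiance, and EQE values used in the test are hypothetical placeholders, not the paper's data:

```python
# Physical constants (SI)
Q = 1.602176634e-19   # electron charge (C)
H = 6.62607015e-34    # Planck constant (J s)
C = 2.99792458e8      # speed of light (m/s)

def short_circuit_current(wavelength_nm, irradiance, eqe):
    """Jsc (A m^-2) from spectral irradiance E (W m^-2 nm^-1) and EQE.

    The photon flux per nm is E(lambda) * lambda / (h c); each photon
    contributes one electron with probability EQE(lambda). Trapezoidal
    integration over the wavelength grid.
    """
    j = 0.0
    def integrand(k):
        lam_m = wavelength_nm[k] * 1e-9
        return Q * eqe[k] * irradiance[k] * lam_m / (H * C)
    for i in range(len(wavelength_nm) - 1):
        dlam = wavelength_nm[i + 1] - wavelength_nm[i]
        j += 0.5 * (integrand(i) + integrand(i + 1)) * dlam
    return j
```

Repeating this for each hourly spectrum, then multiplying by module area and voltage characteristics, is the path from spectra to energy yield and revenue that the abstract outlines.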
Spectrum sensitivity, energy yield, and revenue prediction of PV and CPV modules
NASA Astrophysics Data System (ADS)
Kinsey, Geoffrey S.
2015-09-01
The impact of spectral irradiance variation on module performance has been determined for III-V multijunctions and compared against the four most common flat-plate module types (cadmium telluride, multicrystalline silicon, copper indium gallium selenide, and monocrystalline silicon). Hour-by-hour representative spectra were generated using atmospheric variables for Albuquerque, New Mexico, USA. Convolution with published values for external quantum efficiency gave the predicted current output. When combined with specifications of commercial PV modules, energy yield and revenue were predicted. This approach provides a means for optimizing PV module design based on various site-specific temporal variables.
A browser-based tool for conversion between Fortran NAMELIST and XML/HTML
NASA Astrophysics Data System (ADS)
Naito, O.
A browser-based tool for conversion between Fortran NAMELIST and XML/HTML is presented. It runs on an HTML5 compliant browser and generates reusable XML files to aid interoperability. It also provides a graphical interface for editing and annotating variables in NAMELIST, hence serves as a primitive code documentation environment. Although the tool is not comprehensive, it could be viewed as a test bed for integrating legacy codes into modern systems.
Zhang, Yiming; Jin, Quan; Wang, Shuting; Ren, Ren
2011-05-01
The mobile behavior of 1481 peptides in ion mobility spectrometry (IMS), which are generated by protease digestion of the Drosophila melanogaster proteome, is modeled and predicted based on two different types of characterization methods, i.e. a sequence-based approach and a structure-based approach. In this procedure, the sequence-based approach considers both the amino acid composition of a peptide and the local environment profile of each amino acid in the peptide; the structure-based approach is performed with the CODESSA protocol, which regards a peptide as a common organic compound and generates more than 200 statistically significant variables to characterize the whole structure profile of a peptide molecule. Subsequently, nonlinear support vector machine (SVM) and Gaussian process (GP) regression as well as linear partial least squares (PLS) regression are employed to correlate the structural parameters of the characterizations with the IMS drift times of these peptides. The obtained quantitative structure-spectrum relationship (QSSR) models are evaluated rigorously and investigated systematically via both one-deep and two-deep cross-validations as well as the rigorous Monte Carlo cross-validation (MCCV). We also give a comprehensive comparison of the resulting statistics arising from the different combinations of variable types with modeling methods, and find that the sequence-based approach gives QSSR models with better fitting ability and predictive power but worse interpretability than the structure-based approach. In addition, because the sequence-based approach does not require preparing energy-minimized peptide structures before modeling, it is considerably more efficient than the structure-based approach. Copyright © 2011 Elsevier Ltd. All rights reserved.
Forecasting Medicaid Expenditures for Antipsychotic Medications.
Slade, Eric P; Simoni-Wastila, Linda
2015-07-01
The ongoing transition from use of mostly branded to mostly generic second-generation antipsychotic medications could bring about a substantial reduction in Medicaid expenditures for antipsychotic medications, a change with critical implications for formulary restrictions on second-generation antipsychotics in Medicaid. This study provided a forecast of the impact of generics on Medicaid expenditures for antipsychotic medications. Quarterly (N=816) state-level aggregate data on outpatient antipsychotic prescriptions in Medicaid between 2008 and 2011 were drawn from the Medicaid state drug utilization database. Annual numbers of prescriptions, expenditures, and cost per prescription were constructed for each antipsychotic medication. Forecasts of antipsychotic expenditures in calendar years 2016 and 2019 were developed on the basis of the estimated percentage reduction in Medicaid expenditures for risperidone, the only second-generation antipsychotic available generically throughout the study period. Two models of savings from generic risperidone use were estimated, one based on constant risperidone prices and the other based on variable risperidone prices. The sensitivity of the expenditure forecast to expected changes in Medicaid enrollment was also examined. In the main model, annual Medicaid expenditures for antipsychotics were forecasted to decrease by $1,794 million (48.8%) by 2016 and by $2,814 million (76.5%) by 2019. Adjustment for variable prices of branded medications and changes in Medicaid enrollment only moderately affected the magnitude of these reductions. Within five years, antipsychotic expenditures in Medicaid may decline to less than half their current levels. Such a spending reduction warrants a reassessment of the continued necessity of formulary restrictions for second-generation antipsychotics in Medicaid.
van Mierlo, Trevor; Li, Xinlong; Hyatt, Douglas; Ching, Andrew T
2017-02-17
Digital health social networks (DHSNs) are widespread, and the consensus is that they contribute to wellness by offering social support and knowledge sharing. The success of a DHSN is based on the number of participants and their consistent creation of externalities through the generation of new content. To promote network growth, it would be helpful to identify characteristics of superusers or actors who create value by generating positive network externalities. The aim of the study was to investigate the feasibility of developing predictive models that identify potential superusers in real time. This study examined associations between posting behavior, 4 demographic variables, and 20 indication-specific variables. Data were extracted from the custom structured query language (SQL) databases of 4 digital health behavior change interventions with DHSNs. Of these, 2 were designed to assist in the treatment of addictions (problem drinking and smoking cessation), and 2 for mental health (depressive disorder, panic disorder). To analyze posting behavior, 10 models were developed, and negative binomial regressions were conducted to examine associations between number of posts, and demographic and indication-specific variables. The DHSNs varied in number of days active (3658-5210), number of registrants (5049-52,396), number of actors (1085-8452), and number of posts (16,231-521,997). In the sample, all 10 models had low R2 values (.013-.086) with limited statistically significant demographic and indication-specific variables. Very few variables were associated with social network engagement. Although some variables were statistically significant, they did not appear to be practically significant. Based on the large number of study participants, variation in DHSN theme, and extensive time period, we did not find strong evidence that demographic characteristics or indication severity sufficiently explain the variability in number of posts per actor.
Researchers should investigate alternative models that identify superusers or other individuals who create social network externalities. ©Trevor van Mierlo, Xinlong Li, Douglas Hyatt, Andrew T Ching. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 17.02.2017.
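The negative binomial regressions above are motivated by overdispersed post counts (a handful of superusers dominate posting). A minimal sketch of that diagnostic, on invented counts rather than the study's data: under the NB2 parameterization Var(Y) = mu + alpha * mu^2, so a method-of-moments alpha well above zero signals overdispersion that a Poisson model cannot absorb.

```python
# Method-of-moments check for overdispersion in post counts, motivating a
# negative binomial model over Poisson. The counts below are illustrative
# synthetic data, not the study's dataset. Under NB2, Var(Y) = mu + alpha*mu^2,
# so alpha = (var - mu) / mu^2; alpha > 0 indicates overdispersion.

def nb_dispersion(counts):
    n = len(counts)
    mu = sum(counts) / n
    var = sum((c - mu) ** 2 for c in counts) / (n - 1)
    return (var - mu) / (mu ** 2)

# A few highly active "superusers" among many low-activity actors.
posts = [0] * 50 + [1] * 30 + [2] * 10 + [5] * 5 + [40, 80, 150, 300, 500]

alpha = nb_dispersion(posts)
print(f"mean posts = {sum(posts)/len(posts):.1f}, NB dispersion alpha = {alpha:.2f}")
```

A large alpha (here well above 1) is exactly the situation where Poisson standard errors would be badly understated, which is why the study fits negative binomial models.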
Digital relief generation from 3D models
NASA Astrophysics Data System (ADS)
Wang, Meili; Sun, Yu; Zhang, Hongming; Qian, Kun; Chang, Jian; He, Dongjian
2016-09-01
It is difficult to extend image-based relief generation to high-relief generation, as the images contain insufficient height information. To generate reliefs from three-dimensional (3D) models, it is necessary to extract the height fields from the model, but this can only generate bas-reliefs. To overcome this problem, an efficient method is proposed to generate bas-reliefs and high-reliefs directly from 3D meshes. To produce relief features that are visually appropriate, the 3D meshes are first scaled. 3D unsharp masking is used to enhance the visual features in the 3D mesh, and average smoothing and Laplacian smoothing are implemented to achieve better smoothing results. A nonlinear variable scaling scheme is then employed to generate the final bas-reliefs and high-reliefs. Using the proposed method, relief models can be generated from arbitrary viewing positions with different gestures and combinations of multiple 3D models. The generated relief models can be printed by 3D printers. The proposed method provides a means of generating both high-reliefs and bas-reliefs in an efficient and effective way under the appropriate scaling factors.
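The pipeline above (smooth, enhance details via 3D unsharp masking, then nonlinearly compress the height range) can be sketched in one dimension. A 1D height profile stands in for the mesh height field here, and the kernel width, boost factor, and log compression are assumptions for illustration, not the paper's actual parameters.

```python
import math

# Illustrative 1D sketch of the unsharp-masking-plus-nonlinear-scaling idea
# behind relief generation: smooth the height field, boost the detail layer
# (original minus smoothed), then compress the overall range into [0, depth].
# All parameters below are invented placeholders.

def smooth(h, passes=3):
    # Simple 1-2-1 averaging, a stand-in for the paper's mesh smoothing.
    for _ in range(passes):
        h = ([h[0]]
             + [(h[i - 1] + 2 * h[i] + h[i + 1]) / 4 for i in range(1, len(h) - 1)]
             + [h[-1]])
    return h

def bas_relief(heights, boost=2.0, depth=1.0):
    base = smooth(heights)
    detail = [h - b for h, b in zip(heights, base)]      # unsharp mask layer
    enhanced = [b + boost * d for b, d in zip(base, detail)]
    lo, hi = min(enhanced), max(enhanced)
    # Nonlinear (log) compression of the height range into [0, depth].
    return [depth * math.log1p(e - lo) / math.log1p(hi - lo) for e in enhanced]

heights = [0.0, 0.1, 0.0, 2.0, 2.2, 2.0, 0.1, 0.0, 5.0, 5.1, 5.0, 0.2, 0.0]
relief = bas_relief(heights)
print([round(r, 2) for r in relief])
```

The log compression flattens large height differences while the boosted detail layer keeps small surface features visible, which is the qualitative effect the paper's variable scaling scheme aims for.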
Song, Jingwei; He, Jiaying; Zhu, Menghua; Tan, Debao; Zhang, Yu; Ye, Song; Shen, Dingtao; Zou, Pengfei
2014-01-01
A simulated annealing (SA) based variable-weighted forecast model is proposed to combine and weight a local chaotic model, an artificial neural network (ANN), and a partial least square support vector machine (PLS-SVM) to build a more accurate forecast model. The hybrid model was built and its multistep-ahead prediction ability tested on daily MSW generation data from Seattle, Washington, United States. The hybrid forecast model produced more accurate and reliable results and degraded less over longer prediction horizons than the three individual models. The average one-week-ahead prediction error was reduced from 11.21% (chaotic model), 12.93% (ANN), and 12.94% (PLS-SVM) to 9.38%. The five-week average error was reduced from 13.02% (chaotic model), 15.69% (ANN), and 15.92% (PLS-SVM) to 11.27%. PMID:25301508
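The core idea, using simulated annealing to search for combination weights that minimize forecast error, can be sketched briefly. The three stand-in forecasts, the cooling schedule, and the error metric (MAPE) below are illustrative assumptions, not the study's models or data.

```python
import math
import random

# Sketch of an SA-based weighted ensemble: find nonnegative weights summing
# to 1 that combine three forecasts so the mean absolute percentage error
# (MAPE) is minimized. Data and schedule are invented for illustration.

random.seed(42)

truth = [100, 120, 90, 110, 130, 95, 105]
fc_a = [x * 1.10 for x in truth]   # stand-in for the chaotic model
fc_b = [x * 0.90 for x in truth]   # stand-in for the ANN
fc_c = [x + 8.0 for x in truth]    # stand-in for the PLS-SVM

def mape(pred, obs):
    return 100.0 * sum(abs(p - o) / o for p, o in zip(pred, obs)) / len(obs)

def combine(w):
    return [w[0] * a + w[1] * b + w[2] * c
            for a, b, c in zip(fc_a, fc_b, fc_c)]

def normalize(w):
    w = [max(x, 1e-9) for x in w]  # keep weights nonnegative
    s = sum(w)
    return [x / s for x in w]

w = normalize([1.0, 1.0, 1.0])
best_w, best_err = w, mape(combine(w), truth)
temp = 1.0
for _ in range(2000):
    cand = normalize([x + random.gauss(0.0, 0.1) for x in w])
    err, cur = mape(combine(cand), truth), mape(combine(w), truth)
    # Accept improvements always; accept worse moves with Boltzmann probability.
    if err < cur or random.random() < math.exp((cur - err) / temp):
        w = cand
        if err < best_err:
            best_w, best_err = w, err
    temp *= 0.995  # geometric cooling

best_individual = min(mape(f, truth) for f in (fc_a, fc_b, fc_c))
print(f"hybrid MAPE {best_err:.2f}% vs best individual {best_individual:.2f}%")
```

Because the component models have complementary biases, even a coarse weight search beats each model alone, which mirrors the paper's finding.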
Sullivan, Sylvia C.; Morales Betancourt, Ricardo; Barahona, Donifan; ...
2016-03-03
Along with minimizing parameter uncertainty, understanding the cause of temporal and spatial variability of the nucleated ice crystal number, Ni, is key to improving the representation of cirrus clouds in climate models. To this end, sensitivities of Ni to input variables like aerosol number and diameter provide valuable information about nucleation regime and efficiency for a given model formulation. Here we use the adjoint of a cirrus formation parameterization (Barahona and Nenes, 2009b) to understand Ni variability for various ice-nucleating particle (INP) spectra. Inputs are generated with the Community Atmosphere Model version 5, and simulations are done with a theoretically derived spectrum, an empirical lab-based spectrum, and two field-based empirical spectra that differ in the nucleation threshold for black carbon particles and in the active site density for dust. The magnitude and sign of the Ni sensitivity to insoluble aerosol number can be directly linked to the nucleation regime and efficiency of various INP. The lab-based spectrum calculates much higher INP efficiencies than the field-based ones, which reveals a disparity in aerosol surface properties. In conclusion, the Ni sensitivity to temperature tends to be low, due to the compensating effects of temperature on INP spectrum parameters; this low temperature sensitivity regime has been experimentally reported before but never deconstructed as done here.
van Strien, Maarten J; Slager, Cornelis T J; de Vries, Bauke; Grêt-Regamey, Adrienne
2016-06-01
Many studies have assessed the effect of landscape patterns on spatial ecological processes by simulating these processes in computer-generated landscapes with varying composition and configuration. To generate such landscapes, various neutral landscape models have been developed. However, the limited set of landscape-level pattern variables included in these models is often inadequate to generate landscapes that reflect real landscapes. In order to achieve more flexibility and variability in the generated landscape patterns, a more complete set of class- and patch-level pattern variables should be implemented in these models. These enhancements have been implemented in Landscape Generator (LG), software that uses optimization algorithms to generate landscapes matching user-defined target values. LG was originally developed for participatory spatial planning at small scales; we enhanced its usability and demonstrated how it can be used for larger-scale ecological studies. First, we used LG to recreate landscape patterns from a real landscape (i.e., a mountainous region in Switzerland). Second, we generated landscape series with incrementally changing pattern variables, which could be used in ecological simulation studies. We found that LG was able to recreate landscape patterns that approximate those of real landscapes. Furthermore, we successfully generated landscape series that would not have been possible with traditional neutral landscape models. LG is a promising novel approach for generating neutral landscapes and enables testing of new hypotheses regarding the influence of landscape patterns on ecological processes. LG is freely available online.
NASA Astrophysics Data System (ADS)
Yüksel, Kivilcim; Yilmaz, Anil
2018-07-01
We present the analysis of a remote sensor based on fiber Cavity Ring-Down (CRD) loop interrogated by an Optical Time Domain Reflectometer (OTDR) taking into account both practical limitations and the related signal processing. A commercial OTDR is used for both pulse generation and sensor output detection. This allows obtaining a compact and simple design for intensity-based sensor applications. This novel sensor interrogation approach is experimentally demonstrated by placing a variable attenuator inside the fiber loop that mimics a sensor head.
Statistical analysis of multivariate atmospheric variables. [cloud cover
NASA Technical Reports Server (NTRS)
Tubbs, J. D.
1979-01-01
Topics covered include: (1) estimation in discrete multivariate distributions; (2) a procedure to predict cloud cover frequencies in the bivariate case; (3) a program to compute conditional bivariate normal parameters; (4) the transformation of nonnormal multivariate to near-normal; (5) test of fit for the extreme value distribution based upon the generalized minimum chi-square; (6) test of fit for continuous distributions based upon the generalized minimum chi-square; (7) effect of correlated observations on confidence sets based upon chi-square statistics; and (8) generation of random variates from specified distributions.
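Item (8), generation of random variates from specified distributions, is classically done by inverse-transform sampling: draw U ~ Uniform(0, 1) and return F⁻¹(U). A minimal sketch for the exponential distribution, with illustrative parameters:

```python
import math
import random

# Inverse-transform sampling sketch for an exponential distribution with
# rate lam: F^{-1}(u) = -ln(1 - u) / lam. The rate and sample size here are
# illustrative, not from the report.

def exponential_variate(lam, rng):
    u = rng.random()
    return -math.log(1.0 - u) / lam

rng = random.Random(7)
sample = [exponential_variate(2.0, rng) for _ in range(20000)]
mean = sum(sample) / len(sample)
print(f"sample mean = {mean:.3f} (theory: 1/lam = 0.5)")
```

The same recipe applies to any distribution with a computable inverse CDF, which is why it appears as a standard topic in reports of this kind.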
Analyzing the responses of species assemblages to climate change across the Great Basin, USA.
NASA Astrophysics Data System (ADS)
Henareh Khalyani, A.; Falkowski, M. J.; Crookston, N.; Yousef, F.
2016-12-01
The potential impact of climate change on the future distribution of tree species is not well understood. Climate-driven changes in tree species distribution could cause significant changes in realized species niches, potentially resulting in the loss of ecotonal species as well as the formation of novel assemblages of overlapping tree species. In an effort to gain a better understanding of how the geographic distribution of tree species may respond to climate change, we model the potential future distribution of 50 different tree species across 70 million ha in the Great Basin, USA. This is achieved by leveraging a species realized niche model based on non-parametric analysis of species occurrences across climatic, topographic, and edaphic variables. Spatially explicit, high-spatial-resolution (30 m) climate variables (e.g., precipitation, and minimum, maximum, and mean temperature) and associated climate indices were generated on an annual basis between 1981 and 2010 by integrating climate station data with digital elevation data (Shuttle Radar Topographic Mission (SRTM) data) in a thin plate spline interpolation algorithm (ANUSPLIN). Bioclimate models of species niches in the contemporary period and three subsequent 30-year periods were then generated by integrating the climate variables, soil data, and CMIP5 general circulation model projections. Our results suggest that local-scale contemporary variations in species realized niches across space are influenced by edaphic and topographic variables as well as climatic variables. The local variability in soil properties and topography across space also affects species responses to climate change through time and the potential formation of species assemblages in the future. The results presented herein will aid in the development of adaptive forest management techniques aimed at mitigating negative impacts of climate change on forest composition, structure, and function.
Impact of Market Behavior, Fleet Composition, and Ancillary Services on Revenue Sufficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frew, Bethany; Gallo, Giulia; Brinkman, Gregory
Revenue insufficiency, or the missing money problem, occurs when the revenues that generators earn from the market are not sufficient to cover both fixed and variable costs to remain in the market and/or justify investments in new capacity, which may be needed for reliability. The near-zero marginal cost of variable renewable generators further exacerbates these revenue challenges. Estimating the extent of the missing money problem in current electricity markets is an important, nontrivial task that requires representing both how the power system operates and how market participants behave. This paper explores the missing money problem using a production cost model that represented a simplified version of the Electric Reliability Council of Texas (ERCOT) energy-only market for the years 2012-2014. We evaluate how various market structures -- including market behavior, ancillary services, and changing fleet compositions -- affect net revenues in this ERCOT-like system. In most production cost modeling exercises, resources are assumed to offer their marginal capabilities at marginal costs. Although this assumption is reasonable for feasibility studies and long-term planning, it does not adequately consider the market behaviors that impact revenue sufficiency. In this work, we simulate a limited set of market participant strategic bidding behaviors by means of different sets of markups; these markups are applied to the true production costs of all gas generators, which are the most prominent generators in ERCOT. Results show that markups can help generators increase their net revenues overall, although net revenues may increase or decrease depending on the technology and the year under study. Results also confirm that conventional, variable-cost-based production cost simulations do not capture prices accurately, and this particular feature calls for proxies for strategic behaviors (e.g., markups) and more accurate representations of how electricity markets work.
The analysis also shows that generators face revenue sufficiency challenges in this ERCOT-like energy-only market model; net revenues provided by the market in all base markup cases and sensitivity scenarios (except when a large fraction of the existing coal fleet is retired) are not sufficient to justify investments in new capacity for thermal and nuclear power units. Overall, the work described in this paper points to the need for improved behavioral models of electricity markets to more accurately study current and potential market design issues that could arise in systems with high penetrations of renewable generation.
Multi objective decision making in hybrid energy system design
NASA Astrophysics Data System (ADS)
Merino, Gabriel Guillermo
The design of a grid-connected photovoltaic-wind generator system supplying a farmstead in Nebraska is undertaken in this dissertation. The design process took into account competing criteria that motivate the use of different sources of energy for electric generation. The criteria considered were 'Financial', 'Environmental', and 'User/System compatibility'. A distance-based multi-objective decision making methodology was developed to rank design alternatives. The method is based upon a precedence order imposed upon the design objectives and a distance metric describing the performance of each alternative. This methodology advances previous work by combining ambiguous information about the alternatives with a decision-maker-imposed precedence order on the objectives. Design alternatives, defined by the photovoltaic array and wind generator installed capacities, were analyzed using the multi-objective decision making approach. The performance of the design alternatives was determined by simulating the system using hourly data for a farmstead electric load and hourly averages of solar irradiation, temperature, and wind speed from eight wind-solar energy monitoring sites in Nebraska. The spatial variability of the solar energy resource within the region was assessed by determining semivariogram models to krige hourly and daily solar radiation data. No significant difference was found in the predicted performance of the system when using solar radiation data kriged with the generated models versus actual data. The spatial variability of the combined wind and solar energy resources was included in the design analysis by using fuzzy numbers and arithmetic. The best alternative was dependent upon the precedence order assumed for the main criteria. Alternatives with no PV array or wind generator dominated when the 'Financial' criteria preceded the others.
In contrast, alternatives with no PV array component but a large wind generator component dominated when the 'Environmental' or 'User/System compatibility' objectives were more important than the 'Financial' objectives, and also when the three criteria were considered equally important.
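The precedence-ordered ranking described above can be sketched as a lexicographic sort: alternatives are compared on the highest-precedence criterion first, with later criteria breaking near-ties. The alternative names, distance-to-ideal scores, and tie tolerance below are invented for illustration; the dissertation's actual method additionally handles fuzzy (ambiguous) performance information.

```python
# Toy precedence-ordered ranking: sort alternatives lexicographically by
# distance-to-ideal per criterion (smaller is better), in the decision-maker's
# precedence order. Scores within TOL are treated as tied so the next
# criterion can decide. All numbers are invented placeholders.

TOL = 0.05

def precedence_rank(alternatives, order):
    def key(name):
        # Bucket each score by TOL so near-ties defer to later criteria.
        return tuple(round(alternatives[name][c] / TOL) for c in order)
    return sorted(alternatives, key=key)

alternatives = {
    "no PV, no wind": {"financial": 0.10, "environmental": 0.90, "compat": 0.50},
    "large wind":     {"financial": 0.60, "environmental": 0.20, "compat": 0.30},
    "PV + wind":      {"financial": 0.80, "environmental": 0.15, "compat": 0.40},
}

print(precedence_rank(alternatives, ["financial", "environmental", "compat"]))
print(precedence_rank(alternatives, ["environmental", "compat", "financial"]))
```

Swapping the precedence order changes which alternative dominates, which is the qualitative behavior the dissertation reports.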
NASA Technical Reports Server (NTRS)
Steinthorsson, E.; Shih, T. I-P.; Roelke, R. J.
1991-01-01
In order to generate good quality grid systems for complicated three-dimensional spatial domains, the grid-generation method used must be able to exert rather precise control over grid-point distributions. Several techniques are presented that enhance control of grid-point distribution for a class of algebraic grid-generation methods known as the two-, four-, and six-boundary methods. These techniques include variable stretching functions from bilinear interpolation, interpolating functions based on tension splines, and normalized K-factors. The techniques developed in this study were incorporated into a new version of GRID3D called GRID3D-v2. The usefulness of GRID3D-v2 was demonstrated by using it to generate a three-dimensional grid system in the coolant passage of a radial turbine blade with serpentine channels and pin fins.
Mwogi, Thomas S.; Biondich, Paul G.; Grannis, Shaun J.
2014-01-01
Motivated by the need for readily available data for testing an open-source health information exchange platform, we developed and evaluated two methods for generating synthetic messages. The methods used HL7 version 2 messages obtained from the Indiana Network for Patient Care. Data from both methods were analyzed to assess how effectively the output reflected the original ‘real-world’ data. The Markov Chain method (MCM) used an algorithm based on a transition probability matrix, while the Music Box model (MBM) randomly selected messages of a particular trigger type from the original data to generate new messages. The MBM was faster, generated shorter messages, and exhibited less variation in message length. The MCM required more computational power and generated longer messages with more length variability. Both methods exhibited adequate coverage, producing a high proportion of messages consistent with the original messages, and both yielded similar rates of valid messages. PMID:25954458
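The MCM idea, learning a transition table over message segment types and walking it to produce new sequences, can be sketched briefly. The segment names mimic HL7 v2 (MSH, PID, PV1, OBR, OBX), but the training sequences below are invented for illustration, not drawn from the Indiana Network for Patient Care.

```python
import random
from collections import defaultdict

# Sketch of a Markov-chain message generator: learn segment-to-segment
# transitions from "observed" messages, then generate new segment sequences.
# Training data here are invented; real HL7 messages would also carry field
# content, which this sketch ignores.

training = [
    ["MSH", "PID", "PV1", "OBR", "OBX"],
    ["MSH", "PID", "OBR", "OBX", "OBX"],
    ["MSH", "PID", "PV1", "OBR", "OBX", "OBX"],
]

transitions = defaultdict(list)
for msg in training:
    for cur, nxt in zip(msg, msg[1:] + ["END"]):
        transitions[cur].append(nxt)  # duplicates encode transition frequency

def generate(rng, start="MSH", max_len=20):
    seq, cur = [start], start
    while cur != "END" and len(seq) < max_len:
        cur = rng.choice(transitions[cur])
        if cur != "END":
            seq.append(cur)
    return seq

rng = random.Random(1)
print(generate(rng))
```

Every generated transition is one observed in training, so output sequences stay structurally plausible while varying in length, matching the MCM behavior described above.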
MLP based LOGSIG transfer function for solar generation monitoring
NASA Astrophysics Data System (ADS)
Hashim, Fakroul Ridzuan; Din, Muhammad Faiz Md; Ahmad, Shahril; Arif, Farah Khairunnisa; Rizman, Zairi Ismael
2018-02-01
Solar panels are a renewable energy technology that can reduce environmental pollution and has a wide range of potential applications. An accurate solar prediction model has a big impact on the management of solar power plants and the design of solar energy systems. This paper uses a Multilayer Perceptron (MLP) neural network with a LOGSIG-based transfer function. The MLP network can be used to calculate the module temperature (TM) in Malaysia. This is done by feeding the collected data for four weather variables, the ambient temperature (TA), local wind speed (VW), solar radiation flux (GT), and relative humidity (RH), as inputs into the neural network. The transfer function is applied with 14 types of training algorithms. Finally, an equation is deduced from the best training algorithm to calculate the module temperature from the weather variables in Malaysia.
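The structure described above, a one-hidden-layer MLP with the LOGSIG (logistic sigmoid) transfer function mapping the four weather inputs to a module temperature estimate, can be sketched as a forward pass. All weights below are made-up placeholders; a real model would be trained on the Malaysian data with one of the 14 training algorithms.

```python
import math

# Forward pass of a tiny MLP with the LOGSIG transfer function:
# 4 weather inputs (TA, VW, GT, RH) -> 3 hidden units -> 1 output (TM).
# Weights and input values are illustrative placeholders only.

def logsig(x):
    return 1.0 / (1.0 + math.exp(-x))

def mlp_forward(inputs, w_hidden, b_hidden, w_out, b_out):
    hidden = [logsig(sum(w * x for w, x in zip(row, inputs)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out

w_hidden = [[0.5, -0.2, 0.01, 0.1],
            [0.3, 0.4, -0.02, 0.2],
            [-0.1, 0.2, 0.03, -0.3]]
b_hidden = [0.1, -0.2, 0.05]
w_out = [20.0, 15.0, 10.0]
b_out = 5.0

ta, vw, gt, rh = 30.0, 2.0, 0.8, 0.7   # degC, m/s, kW/m^2, fraction
tm = mlp_forward([ta, vw, gt, rh], w_hidden, b_hidden, w_out, b_out)
print(f"estimated module temperature: {tm:.1f} degC")
```

LOGSIG squashes each hidden activation into (0, 1), so the output is a bounded weighted blend; the "deduced equation" the paper mentions is exactly this closed-form expression with the trained weights substituted in.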
Enhancing Multimedia Imbalanced Concept Detection Using VIMP in Random Forests.
Sadiq, Saad; Yan, Yilin; Shyu, Mei-Ling; Chen, Shu-Ching; Ishwaran, Hemant
2016-07-01
Recent developments in social media and cloud storage have led to an exponential growth in the amount of multimedia data, which increases the complexity of managing, storing, indexing, and retrieving information from such big data. Many current content-based concept detection approaches fall short of successfully bridging the semantic gap. To solve this problem, a multi-stage random forest framework is proposed to generate predictor variables based on multivariate regressions using variable importance (VIMP). By fine-tuning the forests and significantly reducing the predictor variables, the concept detection scores are evaluated when the concept of interest is rare and imbalanced, i.e., co-occurring only rarely with other high-level concepts. In classical multivariate statistics, estimating the value of one coordinate from the other coordinates standardizes the covariates, so the estimate depends upon the variance of the correlations instead of the mean; thus, conditional dependence on the data being normally distributed is eliminated. Experimental results demonstrate that the proposed framework outperforms the comparison approaches in terms of Mean Average Precision (MAP).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keser, Saniye; Duzgun, Sebnem; Department of Geodetic and Geographic Information Technologies, Middle East Technical University, 06800 Ankara
Highlights: (1) Spatial autocorrelation exists in municipal solid waste generation rates for different provinces in Turkey. (2) Traditional non-spatial regression models may not provide sufficient information for better solid waste management. (3) Unemployment rate is a global variable that significantly impacts the waste generation rates in Turkey. (4) Significances of global parameters may diminish at local scale for some provinces. (5) The GWR model can be used to create clusters of cities for solid waste management. - Abstract: In studies focusing on the factors that impact solid waste generation habits and rates, the potential spatial dependency in solid waste generation data is not considered in relating the waste generation rates to their determinants. In this study, spatial dependency is taken into account in the determination of the significant socio-economic and climatic factors that may be of importance for the municipal solid waste (MSW) generation rates in different provinces of Turkey. Simultaneous spatial autoregression (SAR) and geographically weighted regression (GWR) models are used for the spatial data analyses. As in ordinary least squares regression (OLSR), regression coefficients are global in the SAR model; in other words, the effect of a given independent variable on the dependent variable is valid for the whole country. Unlike OLSR or SAR, GWR reveals the local impact of a given factor (or independent variable) on the waste generation rates of different provinces. Results show that provinces within closer neighborhoods have similar MSW generation rates. On the other hand, this spatial autocorrelation is not very high for the explanatory variables considered in the study. OLSR and SAR models have similar regression coefficients. GWR is useful for indicating the local determinants of MSW generation rates.
The GWR model can be utilized to plan waste management activities at local scale, including waste minimization, collection, treatment, and disposal. At global scale, the MSW generation rates in Turkey are significantly related to unemployment rate and asphalt-paved roads ratio. Yet, the significance of these variables may diminish at local scale for some provinces. At local scale, different factors may be important in affecting MSW generation rates.
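The contrast drawn above between global (OLSR/SAR) and local (GWR) coefficients can be sketched in one spatial dimension: each location gets its own slope and intercept from weighted least squares with a Gaussian distance kernel. The synthetic data below are an assumption for illustration (the true local slope grows with location), which a single global fit would average away.

```python
import math

# 1D sketch of geographically weighted regression (GWR): fit a local
# weighted least squares line at a target location, down-weighting distant
# observations with a Gaussian kernel. Data and bandwidth are invented.

def gwr_fit(x, y, loc, target, bandwidth):
    w = [math.exp(-((l - target) / bandwidth) ** 2) for l in loc]
    sw = sum(w)
    xb = sum(wi * xi for wi, xi in zip(w, x)) / sw
    yb = sum(wi * yi for wi, yi in zip(w, y)) / sw
    num = sum(wi * (xi - xb) * (yi - yb) for wi, xi, yi in zip(w, x, y))
    den = sum(wi * (xi - xb) ** 2 for wi, xi in zip(w, x))
    slope = num / den
    return slope, yb - slope * xb

# Locations 0..9; the true local slope is (1 + location), i.e. it varies in space.
loc = [float(i) for i in range(10)]
x = [1.0, 2.0, 1.5, 2.5, 1.0, 2.0, 1.5, 2.5, 1.0, 2.0]
y = [(1 + l) * xi for l, xi in zip(loc, x)]

slope_near = gwr_fit(x, y, loc, target=0.0, bandwidth=1.0)[0]
slope_far = gwr_fit(x, y, loc, target=9.0, bandwidth=1.0)[0]
print(f"local slope at loc 0: {slope_near:.2f}, at loc 9: {slope_far:.2f}")
```

The local slopes differ sharply across space, which is the kind of information a single global coefficient (as in OLSR or SAR) cannot convey.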
NASA Astrophysics Data System (ADS)
Teh, R. Y.; Reid, M. D.
2014-12-01
Following previous work, we distinguish between genuine N-partite entanglement and full N-partite inseparability. Accordingly, we derive criteria to detect genuine multipartite entanglement using continuous-variable (position and momentum) measurements. Our criteria are similar to, but distinct from, those based on the van Loock-Furusawa inequalities, which detect full N-partite inseparability. We explain how the criteria can be used to detect the genuine N-partite entanglement of continuous-variable states generated from squeezed and vacuum state inputs, including the continuous-variable Greenberger-Horne-Zeilinger state, with explicit predictions for up to N = 9. This makes our work accessible to experiment. For N = 3, we also present criteria for tripartite Einstein-Podolsky-Rosen (EPR) steering. These criteria provide a means to demonstrate a genuine three-party EPR paradox, in which any single party is steerable by the remaining two parties.
Apparatus and method for controlling autotroph cultivation
Fuxman, Adrian M; Tixier, Sebastien; Stewart, Gregory E; Haran, Frank M; Backstrom, Johan U; Gerbrandt, Kelsey
2013-07-02
A method includes receiving at least one measurement of a dissolved carbon dioxide concentration of a mixture of fluid containing an autotrophic organism. The method also includes determining an adjustment to one or more manipulated variables using the at least one measurement. The method further includes generating one or more signals to modify the one or more manipulated variables based on the determined adjustment. The one or more manipulated variables could include a carbon dioxide flow rate, an air flow rate, a water temperature, and an agitation level for the mixture. At least one model relates the dissolved carbon dioxide concentration to the one or more manipulated variables, and the adjustment could be determined by using the at least one model to drive the dissolved carbon dioxide concentration to at least one target that optimizes a goal function. The goal function could be to optimize biomass growth rate, nutrient removal, and/or lipid production.
Jastram, John D.; Moyer, Douglas; Hyer, Kenneth
2009-01-01
Fluvial transport of sediment into the Chesapeake Bay estuary is a persistent water-quality issue with major implications for the overall health of the bay ecosystem. Accurately and precisely estimating the suspended-sediment concentrations (SSC) and loads that are delivered to the bay, however, remains challenging. Although manual sampling of SSC produces an accurate series of point-in-time measurements, robust extrapolation to unmeasured periods (especially high-flow periods) has proven to be difficult. Sediment concentrations typically have been estimated using regression relations between individual SSC values and associated streamflow values; however, suspended-sediment transport during storm events is extremely variable, and it is often difficult to relate a unique SSC to a given streamflow. With this limitation for estimating SSC, innovative approaches for generating detailed records of suspended-sediment transport are needed. One effective method for improved suspended-sediment determination involves the continuous monitoring of turbidity as a surrogate for SSC. Turbidity measurements are theoretically well correlated to SSC because turbidity represents a measure of water clarity that is directly influenced by suspended sediments; thus, turbidity-based estimation models typically are effective tools for generating SSC data. The U.S. Geological Survey, in cooperation with the U.S. Environmental Protection Agency Chesapeake Bay Program and Virginia Department of Environmental Quality, initiated continuous turbidity monitoring on three major tributaries of the bay - the James, Rappahannock, and North Fork Shenandoah Rivers - to evaluate the use of turbidity as a sediment surrogate in rivers that deliver sediment to the bay. Results of this surrogate approach were compared to the traditionally applied streamflow-based approach for estimating SSC. Additionally, evaluation and comparison of these two approaches were conducted for nutrient estimations.
Results demonstrate that the application of turbidity-based estimation models provides an improved method for generating a continuous record of SSC, relative to the classical approach that uses streamflow as a surrogate for SSC. Turbidity-based estimates of SSC were found to be more accurate and precise than SSC estimates from streamflow-based approaches. The turbidity-based SSC estimation models explained 92 to 98 percent of the variability in SSC, while streamflow-based models explained 74 to 88 percent of the variability in SSC. Furthermore, the mean absolute error of turbidity-based SSC estimates was 50 to 87 percent less than the corresponding values from the streamflow-based models. Statistically significant differences were detected between the distributions of residual errors and estimates from the two approaches, indicating that the turbidity-based approach yields estimates of SSC with greater precision than the streamflow-based approach. Similar improvements were identified for turbidity-based estimates of total phosphorus, which is strongly related to turbidity because total phosphorus occurs predominantly in particulate form. Total nitrogen estimation models based on turbidity and streamflow generated estimates of similar quality, with the turbidity-based models providing slight improvements in the quality of estimations. This result is attributed to the understanding that nitrogen transport is dominated by dissolved forms that relate less directly to streamflow and turbidity. Improvements in concentration estimation resulted in improved estimates of load. Turbidity-based suspended-sediment loads estimated for the James River at Cartersville, VA, monitoring station exhibited tighter confidence interval bounds and a coefficient of variation of 12 percent, compared with a coefficient of variation of 38 percent for the streamflow-based load.
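The surrogate comparison above amounts to fitting log-linear models ln(SSC) ~ ln(surrogate) for each candidate surrogate and comparing explained variance. A minimal sketch on synthetic data, constructed (as an assumption, not from the report's records) so that turbidity tracks SSC more tightly than streamflow does:

```python
import math
import random

# Compare two surrogate regressions for SSC: ln(SSC) ~ ln(turbidity) vs
# ln(SSC) ~ ln(streamflow). Synthetic data are built so turbidity has the
# tighter physical link, mimicking the report's qualitative finding.

rng = random.Random(3)

def ols_r2(x, y):
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    sxy = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y))
    sxx = sum((xi - xb) ** 2 for xi in x)
    slope = sxy / sxx
    intercept = yb - slope * xb
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - yb) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

ssc = [math.exp(rng.gauss(3.0, 1.0)) for _ in range(200)]          # mg/L
turbidity = [s ** 0.9 * math.exp(rng.gauss(0, 0.1)) for s in ssc]  # tight link
flow = [s ** 0.5 * math.exp(rng.gauss(0, 0.6)) for s in ssc]       # loose link

log_ssc = [math.log(s) for s in ssc]
r2_turb = ols_r2([math.log(t) for t in turbidity], log_ssc)
r2_flow = ols_r2([math.log(q) for q in flow], log_ssc)
print(f"R^2 turbidity-based: {r2_turb:.2f}, streamflow-based: {r2_flow:.2f}")
```

The gap between the two R² values illustrates why a surrogate that responds directly to suspended particles (turbidity) outperforms one that responds only indirectly (streamflow).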
NASA Astrophysics Data System (ADS)
Herrera, J. I.; Reddoch, T. W.
1988-02-01
Variable speed electric generating technology can enhance the general use of wind energy in electric utility applications. This enhancement results from two characteristic properties of variable speed wind turbine generators: an improvement in drive train damping characteristics, which results in reduced structural loading on the entire wind turbine system, and an improvement in overall efficiency by using a more sophisticated electrical generator. Electronic converter systems are the focus of this investigation -- in particular, the properties of a wound-rotor induction generator with the slip recovery system and direct-current link converter. Experience with solid-state converter systems in large wind turbines is extremely limited. This report presents measurements of electrical performance of the slip recovery system and is limited to the terminal characteristics of the system. Variable speed generating systems working effectively in utility applications will require a satisfactory interface between the turbine/generator pair and the utility network. The electrical testing described herein focuses largely on the interface characteristics of the generating system. A MOD-O wind turbine was connected to a very strong system; thus, the voltage distortion was low and the total harmonic distortion in the utility voltage was less than 3 percent (within the 5 percent limit required by most utilities). The largest voltage component at a frequency below 60 Hz was 40 dB down from the 60-Hz component.
NASA Astrophysics Data System (ADS)
Hayat, Tasawar; Qayyum, Sajid; Alsaedi, Ahmed; Ahmad, Bashir
2018-03-01
This article addresses the magnetohydrodynamic (MHD) stagnation point flow of a third grade fluid towards a nonlinear stretching sheet. The energy expression incorporates variable thermal conductivity. Heat and mass transfer aspects are described within the frame of double stratification effects. Boundary layer partial differential systems are deduced. The governing systems are then converted into ordinary differential systems by invoking appropriate variables. The transformed expressions are solved through a homotopic technique. The impact of the embedded variables on the velocity, thermal, and concentration fields is displayed and discussed. Numerical computations are presented for the skin friction coefficient and the local Nusselt and Sherwood numbers. It is revealed that larger values of the magnetic parameter reduce the velocity field, while the reverse is noticed for the wall thickness variable. The temperature field and local Nusselt number show opposite behaviors with respect to the heat generation/absorption parameter. Moreover, the qualitative behaviors of the concentration field and local Sherwood number are similar for the solutal stratification parameter.
How to Integrate Variable Power Source into a Power Grid
NASA Astrophysics Data System (ADS)
Asano, Hiroshi
This paper discusses how to integrate variable power sources such as wind power and photovoltaic generation into a power grid. Intermittent renewable generation is expected to penetrate further as part of a less carbon-intensive power supply system, but it causes voltage control problems in the distribution system and supply-demand imbalance problems in the power system as a whole. The paper surveys cooperative control of customers' energy storage equipment, such as water heaters with storage tanks, to reduce inverse power flow from roof-top PV systems; operation techniques using battery systems and solar radiation forecasts to stabilize the output of variable generation; smart charging of plug-in hybrid electric vehicles for load frequency control (LFC); and other methods to integrate variable power sources while improving social benefits.
NASA Astrophysics Data System (ADS)
Sippel, S.; Otto, F. E. L.; Forkel, M.; Allen, M. R.; Guillod, B. P.; Heimann, M.; Reichstein, M.; Seneviratne, S. I.; Kirsten, T.; Mahecha, M. D.
2015-12-01
Understanding, quantifying and attributing the impacts of climatic extreme events and variability is crucial for societal adaptation in a changing climate. However, climate model simulations generated for this purpose typically exhibit pronounced biases in their output that hinder any straightforward assessment of impacts. To overcome this issue, various bias correction strategies are routinely used to alleviate climate model deficiencies, most of which have been criticized for physical inconsistency and the non-preservation of the multivariate correlation structure. We assess how biases and their correction affect the quantification and attribution of simulated extremes and variability in i) climatological variables and ii) impacts on ecosystem functioning as simulated by a terrestrial biosphere model. Our study demonstrates that assessments of simulated climatic extreme events and impacts in the terrestrial biosphere are highly sensitive to bias correction schemes, with major implications for the detection and attribution of these events. We introduce a novel ensemble-based resampling scheme based on a large regional climate model ensemble generated by the distributed weather@home setup[1], which fully preserves the physical consistency and multivariate correlation structure of the model output. We use extreme value statistics to show that this procedure considerably improves the representation of climatic extremes and variability. Subsequently, biosphere-atmosphere carbon fluxes are simulated using a terrestrial ecosystem model (LPJ-GSI) to further demonstrate the sensitivity of ecosystem impacts to the methodology of bias correcting climate model output. We find that uncertainties arising from bias correction schemes are comparable in magnitude to model structural and parameter uncertainties.
The present study represents a first attempt to alleviate climate model biases in a physically consistent way and demonstrates that this yields improved simulations of climate extremes and associated impacts. [1] http://www.climateprediction.net/weatherathome/
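The univariate bias correction schemes the abstract critiques can be illustrated by empirical quantile mapping. The sketch below is a minimal, assumption-laden version (a fixed 101-point quantile grid, no tail extrapolation, synthetic data), not the ensemble-based resampling scheme the authors propose:

```python
import numpy as np

def quantile_map(model_ref, obs_ref, model_new):
    """Empirical quantile mapping: transform model values so that their
    distribution matches the observed reference distribution."""
    q = np.linspace(0.0, 1.0, 101)        # fixed quantile grid (assumption)
    model_q = np.quantile(model_ref, q)   # model quantile function
    obs_q = np.quantile(obs_ref, q)       # observed quantile function
    # find each value's quantile in the model distribution, then map it
    # to the observed value at that same quantile
    ranks = np.interp(model_new, model_q, q)
    return np.interp(ranks, q, obs_q)
```

Note that this corrects one variable at a time, which is precisely why such schemes can break physical consistency and multivariate correlations between variables.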
NASA Astrophysics Data System (ADS)
Mulyani, Sri; Andriyana, Yudhie; Sudartianto
2017-03-01
Mean regression is a statistical method that explains the relationship between a response variable and a predictor variable based on the central tendency (mean) of the response variable. Parameter estimation in mean regression (with Ordinary Least Squares, OLS) becomes problematic when applied to data that are asymmetric, fat-tailed, or contain outliers. Hence, an alternative method is needed for that kind of data, for example quantile regression. Quantile regression is robust to outliers. This model can explain the relationship between the response variable and the predictor variable not only at the central tendency of the data (the median) but also at various quantiles, yielding more complete information about that relationship. In this study, quantile regression is developed with a nonparametric approach, namely smoothing splines. A nonparametric approach is used when the model is difficult to prespecify, i.e., when the relationship between the two variables follows an unknown function. We apply the proposed method to poverty data, estimating the Percentage of Poor People as the response variable with the Human Development Index (HDI) as the predictor variable.
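The pinball (check) loss that defines quantile regression can be sketched with a toy linear fit by subgradient descent. This is an illustration of the loss only: the paper's estimator uses smoothing splines rather than a straight line, and the learning rate and iteration count below are arbitrary choices:

```python
import numpy as np

def fit_linear_quantile(x, y, tau, lr=0.05, iters=20000):
    """Fit y ~ a + b*x at quantile tau by subgradient descent on the
    pinball loss: mean(max(tau*r, (tau-1)*r)) for residuals r."""
    a, b = float(np.median(y)), 0.0
    for _ in range(iters):
        r = y - (a + b * x)
        g = np.where(r > 0, -tau, 1.0 - tau)  # d(pinball)/d(prediction)
        a -= lr * g.mean()
        b -= lr * (g * x).mean()
    return a, b
```

At the optimum, roughly a fraction tau of observations lies below the fitted line, which is what makes the fit informative about the whole conditional distribution rather than just its mean.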
1990-09-01
array. LTHPER: length of the MPPERS array. LTHQPA: length of the QPA array. LTHXRT: length of the XROOT array. MAXACN: maximum number of aircraft that can... 3: time remaining until the ready-to-fly time at time of report. Number of XROOT array entries (GENERATED): NROOT (MAXT), the total number of entries in... the XROOT array for each aircraft type. AIS station status: NSTAT (NOSTAT, I, MAXB), where I = 1 gives the total number of stations of each type on base and I = 2 the number in
NASA Astrophysics Data System (ADS)
Pohle, Ina; Niebisch, Michael; Zha, Tingting; Schümberg, Sabine; Müller, Hannes; Maurer, Thomas; Hinz, Christoph
2017-04-01
Rainfall variability within a storm is of major importance for fast hydrological processes, e.g. surface runoff, erosion and solute dissipation from surface soils. To investigate and simulate the impacts of within-storm variabilities on these processes, long time series of rainfall with high resolution are required. Yet, observed precipitation records of hourly or higher resolution are in most cases available only for a small number of stations and only for a few years. To obtain long time series of alternating rainfall events and interstorm periods while conserving the statistics of observed rainfall events, the Poisson model can be used. Multiplicative microcanonical random cascades have been widely applied to disaggregate rainfall time series from coarse to fine temporal resolution. We present a new coupling approach of the Poisson rectangular pulse model and the multiplicative microcanonical random cascade model that preserves the characteristics of rainfall events as well as inter-storm periods. In the first step, a Poisson rectangular pulse model is applied to generate discrete rainfall events (duration and mean intensity) and inter-storm periods (duration). The rainfall events are subsequently disaggregated to high-resolution time series (user-specified, e.g. 10 min resolution) by a multiplicative microcanonical random cascade model. One of the challenges of coupling these models is to parameterize the cascade model for the event durations generated by the Poisson model. In fact, the cascade model is best suited to downscale rainfall data with constant time step such as daily precipitation data. Without starting from a fixed time step duration (e.g. 
daily), the disaggregation of events requires some modifications of the multiplicative microcanonical random cascade model proposed by Olsson (1998): firstly, the parameterization of the cascade model for events of different durations requires continuous functions for the probabilities of the multiplicative weights, which we implemented through sigmoid functions; secondly, the branching of the first and last box is constrained to preserve the rainfall event durations generated by the Poisson rectangular pulse model. The event-based continuous-time-step rainfall generator has been developed and tested using 10 min and hourly rainfall data from four stations in North-Eastern Germany. The model performs well in comparison to observed rainfall in terms of event durations and mean event intensities as well as wet spell and dry spell durations. It is currently being tested using data from other stations across Germany and in different climate zones. Furthermore, the rainfall event generator is being applied in modelling approaches aimed at understanding the impact of rainfall variability on hydrological processes. Reference: Olsson, J.: Evaluation of a scaling cascade model for temporal rainfall disaggregation, Hydrology and Earth System Sciences, 2, 19-30, 1998.
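The branching step of a microcanonical cascade can be sketched as follows. The uniform weights here are a placeholder assumption: the actual model draws weights (including the 0/1 "all rain in one half" cases) from calibrated probabilities, parameterized in the paper by sigmoid functions of event duration. What the sketch does preserve is the defining microcanonical property, exact mass conservation at every branching:

```python
import numpy as np

def disaggregate_event(volume, duration, dt, rng):
    """Microcanonical random cascade: repeatedly halve the event duration,
    splitting each box's rain volume with a random weight W so that
    W + (1 - W) = 1 conserves total volume exactly at each level."""
    boxes = np.array([volume])
    while duration > dt:
        w = rng.uniform(size=boxes.size)  # placeholder weights (assumption)
        boxes = np.column_stack((boxes * w, boxes * (1 - w))).ravel()
        duration /= 2.0
    return boxes
```

For example, a 160-minute event disaggregated to a 10-minute target resolution branches four times, producing 16 boxes whose volumes sum to the original event volume.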
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopez, Anthony
Presentation at ASHRAE about the spatial and temporal variability of gridded TMYs, discussing advanced GIS and Web services that allow for direct access to data, surface-based observations for thousands of stations, climate reanalysis data, and products derived from satellite data; new developments in NREL's solar databases based on both observed data and satellite-derived gridded data, status of TMY3 weather files, and NREL's plans for the next-generation TMY weather files; and also covers what is new and different in the Climatic Design Conditions Table in the 2013 ASHRAE Handbook of Fundamentals.
ERIC Educational Resources Information Center
Ben, Camilus Bassey
2012-01-01
The main purpose of this study is to investigate leadership among secondary school Agricultural Science teachers and their job performance in Akwa Ibom State. To achieve the aim of this study, three research hypotheses were generated to direct the study. Literature was reviewed based on the variables derived from the postulated hypotheses. Survey…
The concept and use of elasticity in population viability models [Exercise 13
Carolyn Hull Sieg; Rudy M. King; Fred Van Dyke
2003-01-01
As you have seen in exercise 12, plants, such as the western prairie fringed orchid, typically have distinct life stages and complex life cycles that require the matrix analyses associated with a stage-based population model. Some statistics that can be generated from such matrix analyses can be very informative in determining which variables in the model have the...
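The elasticity statistics the exercise refers to can be computed from any stage-based projection matrix via the dominant eigenvalue and its left and right eigenvectors. The matrix below is made up for illustration (it is not the orchid's actual vital rates); the formulas are the standard ones, with sensitivities s_ij = v_i w_j / <v, w> and elasticities e_ij = (a_ij / lambda) s_ij:

```python
import numpy as np

# Hypothetical 3-stage projection matrix (seedling, vegetative, flowering);
# entries are invented survival/transition/fecundity rates for illustration.
A = np.array([[0.0, 0.0, 2.0],
              [0.3, 0.4, 0.1],
              [0.0, 0.3, 0.8]])

vals, W = np.linalg.eig(A)
i = np.argmax(vals.real)
lam = vals.real[i]                 # asymptotic population growth rate
w = np.abs(W[:, i].real)           # stable stage distribution (right eigenvector)

valsL, V = np.linalg.eig(A.T)
j = np.argmax(valsL.real)
v = np.abs(V[:, j].real)           # reproductive values (left eigenvector)

S = np.outer(v, w) / (v @ w)       # sensitivities: d(lambda)/d(a_ij)
E = A * S / lam                    # elasticities: proportional sensitivities
```

Elasticities sum to 1 across the matrix, which is what makes them informative: each entry gives the proportional contribution of that vital rate to the growth rate.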
USDA-ARS?s Scientific Manuscript database
An industry survey and an animal experiment were conducted to evaluate compositional variability and DE and ME content of animal protein by-products, and to generate equations to predict DE and ME content based on chemical analysis. For the 220 samples collected, the greatest concentration of CP was...
Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses
Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T
2014-01-01
Due to the upcoming deluge of genome data, the need to store and process large-scale genome data, provide easy access to biomedical analysis tools, and support efficient data sharing and retrieval has presented significant challenges. The variability in data volume results in variable computing and storage requirements; therefore, biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analysis workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analysis tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on the Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via the HTCondor scheduler), and support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as a performance evaluation are presented to validate the feasibility of the proposed approach. PMID:24462600
Cellular mechanisms underlying spatiotemporal features of cholinergic retinal waves
Ford, Kevin J.; Félix, Aude L.; Feller, Marla B.
2012-01-01
Prior to vision, a transient network of recurrently connected cholinergic interneurons, called starburst amacrine cells (SACs), generates spontaneous retinal waves. Despite an absence of robust inhibition, cholinergic retinal waves initiate infrequently and propagate within finite boundaries. Here we combine a variety of electrophysiological and imaging techniques and computational modeling to elucidate the mechanisms underlying these spatial and temporal properties of waves in developing mouse retina. Waves initiate via rare spontaneous depolarizations of SACs. Waves propagate through recurrent cholinergic connections between SACs and volume release of ACh as demonstrated using paired recordings and a cell-based ACh optical sensor. Perforated patch recordings and two-photon calcium imaging reveal that individual SACs have slow afterhyperpolarizations that induce SACs to have variable depolarizations during sequential waves. Using a computational model in which the properties of SACs are based on these physiological measurements, we reproduce the slow frequency, speed, and finite size of recorded waves. This study represents a detailed description of the circuit that mediates cholinergic retinal waves and indicates that variability of the interneurons that generate this network activity may be critical for the robustness of waves across different species and stages of development. PMID:22262883
Chen, C L; Kaber, D B; Dempsey, P G
2000-06-01
A new and improved method for feedforward neural network (FNN) development, for application to data classification problems such as predicting levels of low-back disorder (LBD) risk associated with industrial jobs, is presented. Background on FNN development for data classification is provided, along with discussions of previous research and neighborhood (local) solution search methods for hard combinatorial problems. An analytical study is presented that compared the prediction accuracy of a FNN based on an error back-propagation (EBP) algorithm with that of a FNN developed by considering results of a local solution search (simulated annealing) for classifying industrial jobs as posing low or high risk for LBDs. The comparison demonstrated superior performance of the FNN generated using the new method. The architecture of this FNN included fewer input (predictor) variables and hidden neurons than the FNN developed with the EBP algorithm. Independent variable selection methods and the phenomenon of 'overfitting' in FNN (and statistical model) generation for data classification are discussed. The results support the use of the new approach to FNN development for applications to musculoskeletal disorders and risk forecasting in other domains.
Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses.
Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T
2014-06-01
Due to the upcoming deluge of genome data, the need to store and process large-scale genome data, provide easy access to biomedical analysis tools, and support efficient data sharing and retrieval has presented significant challenges. The variability in data volume results in variable computing and storage requirements; therefore, biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analysis workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analysis tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on the Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via the HTCondor scheduler), and support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as a performance evaluation are presented to validate the feasibility of the proposed approach. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Peleg, Nadav; Blumensaat, Frank; Molnar, Peter; Fatichi, Simone; Burlando, Paolo
2017-03-01
The performance of urban drainage systems is typically examined using hydrological and hydrodynamic models where rainfall input is uniformly distributed, i.e., derived from a single or very few rain gauges. When models are fed with a single uniformly distributed rainfall realization, the response of the urban drainage system to the rainfall variability remains unexplored. The goal of this study was to understand how climate variability and spatial rainfall variability, jointly or individually considered, affect the response of a calibrated hydrodynamic urban drainage model. A stochastic spatially distributed rainfall generator (STREAP - Space-Time Realizations of Areal Precipitation) was used to simulate many realizations of rainfall for a 30-year period, accounting for both climate variability and spatial rainfall variability. The generated rainfall ensemble was used as input into a calibrated hydrodynamic model (EPA SWMM - the US EPA's Storm Water Management Model) to simulate surface runoff and channel flow in a small urban catchment in the city of Lucerne, Switzerland. The variability of peak flows in response to rainfall of different return periods was evaluated at three different locations in the urban drainage network and partitioned among its sources. The main contribution to the total flow variability was found to originate from the natural climate variability (on average over 74 %). In addition, the relative contribution of the spatial rainfall variability to the total flow variability was found to increase with longer return periods. This suggests that while the use of spatially distributed rainfall data can supply valuable information for sewer network design (typically based on rainfall with return periods from 5 to 15 years), there is a more pronounced relevance when conducting flood risk assessments for larger return periods. 
The results show the importance of using multiple distributed rainfall realizations in urban hydrology studies to capture the total flow variability in the response of the urban drainage systems to heavy rainfall events.
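The variance partitioning described above can be sketched with a one-way decomposition over a synthetic ensemble: rows are climate realizations, columns are spatial rainfall realizations under the same climate. All magnitudes and the 2:1 variance ratio are invented for illustration; the study itself derives peak flows from STREAP rainfall fed through an EPA SWMM model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic peak flows: hypothetical climate and spatial-rainfall components.
n_clim, n_spat = 40, 12
climate = rng.normal(0.0, 2.0, (n_clim, 1))        # climate variability
spatial = rng.normal(0.0, 1.0, (n_clim, n_spat))   # spatial rainfall variability
flow = 10.0 + climate + spatial

# One-way variance partition: between-climate vs. within-climate (spatial).
var_total = flow.var()
var_between = flow.mean(axis=1).var()  # variance across climate realizations
frac_climate = var_between / var_total
```

With the chosen standard deviations, roughly four fifths of the total flow variance is attributable to the climate component, mirroring the study's finding that natural climate variability dominates.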
Lin, Steve; Morrison, Laurie J; Brooks, Steven C
2011-04-01
The widely accepted Utstein style has standardized data collection and analysis in resuscitation and post resuscitation research. However, collection of many of these variables poses significant practical challenges. In addition, several important variables in post resuscitation research are missing. Our aim was to develop a comprehensive data dictionary and web-based data collection tool as part of the Strategies for Post Arrest Resuscitation Care (SPARC) Network project, which implemented a knowledge translation program for post cardiac arrest therapeutic hypothermia in 37 Ontario hospitals. A list of data variables was generated based on the current Utstein style, previous studies and expert opinion within our group of investigators. We developed a data dictionary by creating clear definitions and establishing abstraction instructions for each variable. The data dictionary was integrated into a web-based collection form allowing for interactive data entry. Two blinded investigators piloted the data collection tool by performing a retrospective chart review. A total of 454 variables were included, of which 400 were Utstein, 2 were adapted from existing studies and 52 were added to address missing elements. Kappa statistics for two outcome variables, survival to discharge and induction of therapeutic hypothermia, were 0.86 and 0.64, respectively. This is the first attempt in the literature to develop a data dictionary as part of a standardized, pragmatic data collection tool for post cardiac arrest research patients. In addition, our dataset defined important variables that were previously missing. This data collection tool can serve as a reference for future trials in post cardiac arrest care. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
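The kappa statistics reported for the two blinded abstractors are Cohen's kappa, chance-corrected agreement between two raters. A minimal sketch for nominal labels (the exact computation used in the study is not specified in the abstract):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance),
    where chance agreement comes from each rater's marginal label rates."""
    n = len(rater_a)
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    labels = set(rater_a) | set(rater_b)
    expected = sum((rater_a.count(l) / n) * (rater_b.count(l) / n)
                   for l in labels)
    return (observed - expected) / (1.0 - expected)
```

Perfect agreement yields kappa = 1, chance-level agreement yields 0, so values of 0.86 and 0.64 indicate strong and moderate reliability, respectively.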
NASA Astrophysics Data System (ADS)
Dubrovsky, M.; Farda, A.; Huth, R.
2012-12-01
The regional-scale simulation of weather-sensitive processes (e.g. hydrology, agriculture and forestry) for the present and/or future climate often requires high-resolution meteorological inputs in terms of time series of selected surface weather characteristics (typically temperature, precipitation, solar radiation, humidity and wind) for a set of stations or on a regular grid. As even the latest Global and Regional Climate Models (GCMs and RCMs) do not provide a realistic representation of the statistical structure of surface weather, the model outputs must be postprocessed (downscaled) to achieve the desired statistical structure of the weather data before being used as input to the follow-up simulation models. One downscaling approach, also employed here, is based on a weather generator (WG), which is calibrated using the observed weather series and then modified (in the case of simulations for the future climate) according to GCM- or RCM-based climate change scenarios. The present contribution uses the parametric daily weather generator M&Rfi with two aims: (1) validation of the new simulations of the present climate (1961-1990) made by the ALADIN-Climate/CZ (v.2) Regional Climate Model at 25 km resolution. The WG parameters will be derived from the RCM-simulated surface weather series and compared to those derived from observational data at the Czech meteorological stations. The set of WG parameters will include selected statistics of surface temperature and precipitation (characteristics of the mean, variability, interdiurnal variability and extremes). (2) Testing the potential of RCM output for calibrating the WG for ungauged locations. The methodology being examined consists of using a WG whose parameters are interpolated from the surrounding stations and then corrected based on the RCM-simulated spatial variability.
The quality of the weather series produced by the WG calibrated in this way will be assessed in terms of selected climatic characteristics focusing on extreme precipitation and temperature characteristics (including characteristics of dry/wet/hot/cold spells). Acknowledgements: The present experiment is made within the frame of projects ALARO (project P209/11/2405 sponsored by the Czech Science Foundation), WG4VALUE (project LD12029 sponsored by the Ministry of Education, Youth and Sports) and VALUE (COST ES 1102 action).
Recent Trends in Variable Generation Forecasting and Its Value to the Power System
Orwig, Kirsten D.; Ahlstrom, Mark L.; Banunarayanan, Venkat; ...
2014-12-23
We report that the rapid deployment of wind and solar energy generation systems has resulted in a need to better understand, predict, and manage variable generation. The uncertainty around wind and solar power forecasts is still viewed by the power industry as being quite high, and many barriers to forecast adoption by power system operators still remain. In response, the U.S. Department of Energy has sponsored, in partnership with the National Oceanic and Atmospheric Administration and public, private, and academic organizations, two projects to advance wind and solar power forecasts. Additionally, several utilities and grid operators have recognized the value of adopting variable generation forecasting and have taken great strides to enhance their usage of forecasting. In parallel, power system markets and operations are evolving to integrate greater amounts of variable generation. This paper will discuss the recent trends in wind and solar power forecasting technologies in the U.S., the role of forecasting in an evolving power system framework, and the benefits to intended forecast users.
Dazard, Jean-Eudes; Ishwaran, Hemant; Mehlotra, Rajeev; Weinberg, Aaron; Zimmerman, Peter
2018-01-01
Unraveling interactions among variables such as genetic, clinical, demographic and environmental factors is essential to understand the development of common and complex diseases. To increase the power to detect such variable interactions associated with clinical time-to-event outcomes, we borrowed established concepts from random survival forest (RSF) models. We introduce a novel RSF-based pairwise interaction estimator and derive a randomization method with bootstrap confidence intervals for inferring interaction significance. Using various linear and nonlinear time-to-event survival models in simulation studies, we first show the efficiency of our approach: true pairwise interaction-effects between variables are uncovered, while they may not be accompanied with their corresponding main-effects, and may not be detected by standard semi-parametric regression modeling and test statistics used in survival analysis. Moreover, using a RSF-based cross-validation scheme for generating prediction estimators, we show that informative predictors may be inferred. We applied our approach to an HIV cohort study recording key host gene polymorphisms and their association with HIV change of tropism or AIDS progression. Altogether, this shows how linear or nonlinear pairwise statistical interactions of variables may be efficiently detected with a predictive value in observational studies with time-to-event outcomes. PMID:29453930
Dazard, Jean-Eudes; Ishwaran, Hemant; Mehlotra, Rajeev; Weinberg, Aaron; Zimmerman, Peter
2018-02-17
Unraveling interactions among variables such as genetic, clinical, demographic and environmental factors is essential to understand the development of common and complex diseases. To increase the power to detect such variable interactions associated with clinical time-to-event outcomes, we borrowed established concepts from random survival forest (RSF) models. We introduce a novel RSF-based pairwise interaction estimator and derive a randomization method with bootstrap confidence intervals for inferring interaction significance. Using various linear and nonlinear time-to-event survival models in simulation studies, we first show the efficiency of our approach: true pairwise interaction-effects between variables are uncovered, while they may not be accompanied with their corresponding main-effects, and may not be detected by standard semi-parametric regression modeling and test statistics used in survival analysis. Moreover, using a RSF-based cross-validation scheme for generating prediction estimators, we show that informative predictors may be inferred. We applied our approach to an HIV cohort study recording key host gene polymorphisms and their association with HIV change of tropism or AIDS progression. Altogether, this shows how linear or nonlinear pairwise statistical interactions of variables may be efficiently detected with a predictive value in observational studies with time-to-event outcomes.
NASA Technical Reports Server (NTRS)
Herrera, J. I.; Reddoch, T. W.; Lawler, J. S.
1985-01-01
As efforts are accelerated to improve the overall capability and performance of wind electric systems, increased attention has been given to variable speed configurations, and a number of potentially viable configurations have emerged. Various attributes of variable speed systems need to be carefully tested to evaluate their performance from the utility point of view. With this purpose, the NASA experimental variable speed constant frequency (VSCF) system has been tested. In order to determine the usefulness of these systems in utility applications, tests are required to resolve issues fundamental to electric utility systems. Legitimate questions exist regarding how variable speed generators will influence the performance of electric utility systems; therefore, tests from a utility perspective have been performed on the VSCF system and an induction generator at an operating power level of 30 kW on a system rated at 200 kVA and 0.8 power factor.
Heterogeneity: The key to failure forecasting
Vasseur, Jérémie; Wadsworth, Fabian B.; Lavallée, Yan; Bell, Andrew F.; Main, Ian G.; Dingwell, Donald B.
2015-01-01
Elastic waves are generated when brittle materials are subjected to increasing strain. Their number and energy increase non-linearly, ending in a system-sized catastrophic failure event. Accelerating rates of geophysical signals (e.g., seismicity and deformation) preceding large-scale dynamic failure can serve as proxies for damage accumulation in the Failure Forecast Method (FFM). Here we test the hypothesis that the style and mechanisms of deformation, and the accuracy of the FFM, are both tightly controlled by the degree of microstructural heterogeneity of the material under stress. We generate a suite of synthetic samples with variable heterogeneity, controlled by the gas volume fraction. We experimentally demonstrate that the accuracy of failure prediction increases drastically with the degree of material heterogeneity. These results have significant implications in a broad range of material-based disciplines for which failure forecasting is of central importance. In particular, the FFM has been used with only variable success to forecast failure scenarios both in the field (volcanic eruptions and landslides) and in the laboratory (rock and magma failure). Our results show that this variability may be explained, and the reliability and accuracy of forecasts quantified and significantly improved, by accounting for material heterogeneity as a first-order control on forecasting power. PMID:26307196
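The FFM in its classic inverse-rate form can be sketched as follows: when the precursor event rate accelerates as rate ∝ 1/(t_f - t), its inverse decays linearly to zero at the failure time, so extrapolating a straight-line fit forecasts t_f. The data below are noise-free and synthetic; the paper's point is precisely that on real, noisy signals the accuracy of this extrapolation depends on material heterogeneity:

```python
import numpy as np

# Synthetic accelerating precursor rate obeying the assumed inverse-rate law.
t_fail = 100.0                         # true failure time (synthetic)
t = np.linspace(0.0, 90.0, 200)        # observation window before failure
rate = 50.0 / (t_fail - t)             # accelerating event rate

# Inverse rate is linear in time; extrapolate the line fit to zero.
slope, intercept = np.polyfit(t, 1.0 / rate, 1)
t_forecast = -intercept / slope        # forecast failure time
```

On this idealized record the forecast recovers the true failure time exactly; adding noise or a different acceleration exponent degrades it, which is what the experiments quantify.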
Heterogeneity: The key to failure forecasting.
Vasseur, Jérémie; Wadsworth, Fabian B; Lavallée, Yan; Bell, Andrew F; Main, Ian G; Dingwell, Donald B
2015-08-26
Elastic waves are generated when brittle materials are subjected to increasing strain. Their number and energy increase non-linearly, ending in a system-sized catastrophic failure event. Accelerating rates of geophysical signals (e.g., seismicity and deformation) preceding large-scale dynamic failure can serve as proxies for damage accumulation in the Failure Forecast Method (FFM). Here we test the hypothesis that the style and mechanisms of deformation, and the accuracy of the FFM, are both tightly controlled by the degree of microstructural heterogeneity of the material under stress. We generate a suite of synthetic samples with variable heterogeneity, controlled by the gas volume fraction. We experimentally demonstrate that the accuracy of failure prediction increases drastically with the degree of material heterogeneity. These results have significant implications in a broad range of material-based disciplines for which failure forecasting is of central importance. In particular, the FFM has been used with only variable success to forecast failure scenarios both in the field (volcanic eruptions and landslides) and in the laboratory (rock and magma failure). Our results show that this variability may be explained, and the reliability and accuracy of forecasts quantified and significantly improved, by accounting for material heterogeneity as a first-order control on forecasting power.
Heterogeneity: The key to failure forecasting
NASA Astrophysics Data System (ADS)
Vasseur, Jérémie; Wadsworth, Fabian B.; Lavallée, Yan; Bell, Andrew F.; Main, Ian G.; Dingwell, Donald B.
2015-08-01
Elastic waves are generated when brittle materials are subjected to increasing strain. Their number and energy increase non-linearly, ending in a system-sized catastrophic failure event. Accelerating rates of geophysical signals (e.g., seismicity and deformation) preceding large-scale dynamic failure can serve as proxies for damage accumulation in the Failure Forecast Method (FFM). Here we test the hypothesis that the style and mechanisms of deformation, and the accuracy of the FFM, are both tightly controlled by the degree of microstructural heterogeneity of the material under stress. We generate a suite of synthetic samples with variable heterogeneity, controlled by the gas volume fraction. We experimentally demonstrate that the accuracy of failure prediction increases drastically with the degree of material heterogeneity. These results have significant implications in a broad range of material-based disciplines for which failure forecasting is of central importance. In particular, the FFM has been used with only variable success to forecast failure scenarios both in the field (volcanic eruptions and landslides) and in the laboratory (rock and magma failure). Our results show that this variability may be explained, and the reliability and accuracy of forecasts quantified and significantly improved, by accounting for material heterogeneity as a first-order control on forecasting power.
Rms-flux relation and fast optical variability simulations of the nova-like system MV Lyr
NASA Astrophysics Data System (ADS)
Dobrotka, A.; Mineshige, S.; Ness, J.-U.
2015-03-01
The stochastic variability (flickering) of the nova-like system (subclass of cataclysmic variable) MV Lyr yields a complicated power density spectrum with four break frequencies. Scaringi et al. analysed high-cadence Kepler data of MV Lyr, taken almost continuously over 600 d, giving the unique opportunity to study multicomponent Power Density Spectra (PDS) over a wide frequency range. We modelled this variability with our statistical model based on disc angular momentum transport via discrete turbulent bodies with an exponential distribution of the dimension scale. Two different models were used, a full disc (developed from the white dwarf to the outer radius of ˜1010 cm) and a radially thin disc (a ring at a distance of ˜1010 cm from the white dwarf) that imitates an outer disc rim. We succeed in explaining the two lowest observed break frequencies assuming typical values for a disc radius of 0.5 and 0.9 times the primary Roche lobe and an α parameter of 0.1-0.4. The highest observed break frequency was also modelled, but with a rather small accretion disc with a radius of 0.3 times the primary Roche lobe and a high α value of 0.9 consistent with previous findings by Scaringi. Furthermore, the simulated light curves exhibit the typical linear rms-flux relation and the typical log-normal flux distribution. As the turbulent process is generating fluctuations in mass accretion that propagate through the disc, this confirms the general knowledge that the typical rms-flux relation is mainly generated by these fluctuations. In general, a higher rms is generated by a larger amount of superposed flares, which is compatible with a higher mass accretion rate expressed by a larger flux.
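The linear rms-flux relation arises whenever fast flickering scales multiplicatively with a slowly varying flux level, as in the propagating-fluctuation picture above. A toy demonstration (the sinusoidal envelope and noise amplitudes are arbitrary choices, not a model of MV Lyr): segment a synthetic light curve, then compare each segment's mean flux with its rms:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy multiplicative light curve: fast flicker riding on a slow flux envelope.
n = 50_000
level = 1.0 + 0.5 * np.sin(np.linspace(0.0, 20.0 * np.pi, n))  # slow envelope
flux = level * (1.0 + 0.1 * rng.standard_normal(n))            # flicker

segments = flux.reshape(500, 100)      # 500 segments of 100 samples each
mean_flux = segments.mean(axis=1)
rms = segments.std(axis=1)

r = np.corrcoef(mean_flux, rms)[0, 1]  # rms tracks flux: r close to 1
```

Additive (non-multiplicative) noise would give an rms independent of flux, so the tight positive correlation is the signature of fluctuations that propagate and multiply, as in the accretion disc model.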
Battery Energy Storage Systems to Mitigate the Variability of Photovoltaic Power Generation
NASA Astrophysics Data System (ADS)
Gurganus, Heath Alan
Methods of generating renewable energy such as through solar photovoltaic (PV) cells and wind turbines offer great promise in terms of a reduced carbon footprint and overall impact on the environment. However, these methods also share the attribute of being highly stochastic, meaning their output varies in ways that are difficult to forecast with sufficient accuracy. While solar power currently constitutes a small amount of generating potential in most regions, the cost of photovoltaics continues to decline and a trend has emerged to build larger PV plants than was once feasible. This has brought the matter of increased variability to the forefront of research in the industry. Energy storage has been proposed as a means of mitigating this increased variability (and thus reducing the need to utilize traditional spinning reserves) as well as offering auxiliary grid services such as peak-shifting and frequency control. This thesis addresses the feasibility of using electrochemical storage methods (i.e., batteries) to decrease the ramp rates of PV power plants. By building a simulation of a grid-connected PV array and a typical Battery Energy Storage System (BESS) in the NetLogo simulation environment, I have created a parameterized tool that can be tailored to describe almost any potential PV setup. This thesis describes the design and function of this model, and makes a case for the accuracy of its measurements by comparing its simulated output to that of well-documented real-world sites. Finally, a set of recommendations for the design and operational parameters of such a system is put forth based on the results of several experiments performed using this model.
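Ramp-rate limiting with a battery, the core idea of the thesis, can be sketched with a simple greedy dispatch. This is an illustrative simplification with assumed units (one step = one hour) and no efficiency losses, not the NetLogo model itself:

```python
import numpy as np

def smooth_ramps(pv, ramp_limit, capacity, soc0=0.5):
    """Clip the ramp rate of a PV power series using a battery.

    pv: PV output per step (kW); ramp_limit: maximum allowed change between
    steps (kW/step); capacity: battery energy capacity (kWh, one step = one
    hour for simplicity). Returns grid injection and state of charge."""
    grid = np.empty_like(pv, dtype=float)
    soc = soc0 * capacity
    grid[0] = pv[0]
    socs = [soc]
    for t in range(1, len(pv)):
        # Move the injection toward pv[t], but no faster than the ramp limit.
        target = grid[t - 1] + np.clip(pv[t] - grid[t - 1], -ramp_limit, ramp_limit)
        # The battery absorbs (pv > target) or supplies (pv < target) the
        # difference, limited by remaining head-room or stored energy.
        delta = np.clip(pv[t] - target, -soc, capacity - soc)
        soc += delta
        grid[t] = pv[t] - delta
        socs.append(soc)
    return grid, np.array(socs)

# A cloud passage: PV drops sharply, then recovers.
pv = np.array([50, 50, 10, 10, 50, 50], dtype=float)
grid, soc = smooth_ramps(pv, ramp_limit=15.0, capacity=100.0)
print(np.abs(np.diff(grid)).max())   # 15.0: ramps held to the limit
```

The guarantee holds only while the battery has energy or head-room left, which is exactly the sizing trade-off such a simulation is used to explore.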
Time-delay signature of chaos in 1550 nm VCSELs with variable-polarization FBG feedback.
Li, Yan; Wu, Zheng-Mao; Zhong, Zhu-Qiang; Yang, Xian-Jie; Mao, Song; Xia, Guang-Qiong
2014-08-11
Based on the framework of the spin-flip model (SFM), the output characteristics of a 1550 nm vertical-cavity surface-emitting laser (VCSEL) subject to variable-polarization fiber Bragg grating (FBG) feedback (VPFBGF) have been investigated. With the aid of the self-correlation function (SF) and the permutation entropy (PE) function, the time-delay signature (TDS) of chaos in the VPFBGF-VCSEL is evaluated, and the influences of the operation parameters on the TDS of chaos are analyzed. The results show that the TDS of chaos can be suppressed efficiently by selecting a suitable coupling coefficient and feedback rate for the FBG, and is weaker than that of chaos generated by traditional variable-polarization mirror feedback VCSELs (VPMF-VCSELs) or polarization-preserved FBG feedback VCSELs (PPFBGF-VCSELs).
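Permutation entropy, one of the two TDS quantifiers used above, can be computed for a generic time series as follows. This is a standard Bandt-Pompe implementation on arbitrary data, not tied to the SFM laser equations:

```python
import math
from itertools import permutations
import numpy as np

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy (Bandt-Pompe): count ordinal patterns
    of length `order` at lag `delay`. Values near 1 mean noise-like; when PE
    is scanned over `delay`, a pronounced minimum at the feedback delay is
    what reveals the time-delay signature."""
    counts = {p: 0 for p in permutations(range(order))}
    n = len(x) - (order - 1) * delay
    for i in range(n):
        window = x[i:i + order * delay:delay]
        counts[tuple(np.argsort(window))] += 1
    p = np.array([c for c in counts.values() if c > 0]) / n
    return float(-(p * np.log(p)).sum() / math.log(math.factorial(order)))

rng = np.random.default_rng(0)
print(abs(permutation_entropy(np.arange(200.0))))            # 0.0 (ordered ramp)
print(permutation_entropy(rng.normal(size=5000)))            # close to 1 (noise)
```

A weak TDS in the sense of the abstract corresponds to the PE-versus-delay curve staying close to 1 with no deep minimum at the external-cavity round-trip time.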
Design and test of a 10 kW ORC supersonic turbine generator
NASA Astrophysics Data System (ADS)
Seume, J. R.; Peters, M.; Kunte, H.
2017-03-01
Manufacturers are searching for possibilities to increase the efficiency of combustion engines by using the remaining energy of the exhaust gas. One possibility to recover some of this thermal energy is an organic Rankine cycle (ORC). For such an ORC running on ethanol, the aerothermodynamic design and testing of a supersonic, single-stage axial impulse turbine generator unit is described. The blade design as well as the regulation by variable partial admission is shown. Additionally, the mechanical design of the directly coupled turbine generator unit, including the aerodynamic sealing, and the test facility are presented. Finally, the results of CFD-based computations are compared to the experimental measurements. The comparison shows remarkably good agreement between the numerical computations and the test data.
The effects of demand uncertainty on strategic gaming in the merit-order electricity pool market
NASA Astrophysics Data System (ADS)
Frem, Bassam
In a merit-order electricity pool market, generating companies (Gencos) game with their offered incremental costs to meet the electricity demand and earn bigger market shares and higher profits. However, when the demand is treated as a random variable instead of as a known constant, these Genco gaming strategies become more complex. After a brief introduction to electricity markets and gaming, the effects of demand uncertainty on strategic gaming are studied in two parts: (1) demand modelled as a discrete random variable; (2) demand modelled as a continuous random variable. In the first part, we propose the discrete stochastic strategy (DSS) algorithm, which generates a strategic set of offers from the perspective of the Gencos' profits. The DSS offers were tested and compared to the deterministic Nash equilibrium (NE) offers based on the predicted demand. This comparison, based on the expected Genco profits, showed the DSS to be a better strategy in a probabilistic sense than the deterministic NE. In the second part, we present three gaming strategies: (1) deterministic NE; (2) No-Risk; (3) Risk-Taking. The strategies were then tested and their profit performances compared using two assessment tools: (a) expected value and standard deviation; (b) inverse cumulative distribution. We conclude that despite yielding higher profit performance under the right conjectures, Risk-Taking strategies are very sensitive to incorrect conjectures about the competitors' gaming decisions. As such, despite its lower profit performance, the No-Risk strategy was deemed preferable.
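Expected profit under a discrete random demand, the setting of the first part, can be sketched with a toy uniform-price merit-order market. All numbers are hypothetical and this is not the paper's DSS algorithm:

```python
def merit_order_dispatch(offers, demand):
    """Uniform-price merit-order dispatch: cheapest offers are accepted
    first and the clearing price is the last accepted offer's price.
    offers: list of (genco, offered price, capacity)."""
    dispatched = {g: 0.0 for g, _, _ in offers}
    remaining, price = demand, 0.0
    for g, p, cap in sorted(offers, key=lambda o: o[1]):
        if remaining <= 0:
            break
        q = min(cap, remaining)
        dispatched[g] += q
        remaining -= q
        price = p
    return dispatched, price

def expected_profit(offers, cost, scenarios):
    """Expected Genco profit when demand is a discrete random variable,
    given as (probability, demand) pairs; cost maps genco -> marginal cost."""
    profit = {g: 0.0 for g, _, _ in offers}
    for prob, demand in scenarios:
        q, price = merit_order_dispatch(offers, demand)
        for g in profit:
            profit[g] += prob * (price - cost[g]) * q[g]
    return profit

offers = [("A", 20.0, 100.0), ("B", 30.0, 100.0)]   # (genco, offer, capacity)
cost = {"A": 15.0, "B": 25.0}                       # true marginal costs
scenarios = [(0.5, 80.0), (0.5, 150.0)]             # uncertain demand (MW)
print(expected_profit(offers, cost, scenarios))     # {'A': 950.0, 'B': 125.0}
```

Comparing candidate offer sets by this expectation (rather than by profit at the single predicted demand) is the basic idea that makes a stochastic strategy outperform the deterministic NE in a probabilistic sense.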
Blanco, Eleonora Zambrano; Bajay, Miklos Maximiliano; Siqueira, Marcos Vinícius Bohrer Monteiro; Zucchi, Maria Imaculada; Pinheiro, José Baldin
2016-12-01
Ginger is a vegetable with medicinal and culinary properties widely cultivated in Southern and Southeastern Brazil. Knowledge of the genetic variability of ginger is essential to correctly direct future studies of conservation and genetic improvement, but in Brazil little is known about this species' genetic variability. In this study, we analyzed the genetic diversity and structure of 55 Brazilian accessions and 6 Colombian accessions of ginger using AFLP (Amplified Fragment Length Polymorphism) molecular markers. The molecular characterization was based on 13 primer combinations, which generated an average of 113.5 polymorphic loci. The genetic diversity estimates of Nei (Hj), the Shannon-Wiener index (I) and the effective number of alleles (ne) were greater in the Colombian accessions than in the Brazilian accessions. The analysis of molecular variance showed that most of the genetic variation occurred between the two countries, while in the Brazilian populations there is no genetic structure and probably each region harbors 100% of the genetic variation found in the samples. The Bayesian model-based clustering and the dendrogram based on the Jaccard dissimilarity coefficient were congruent with each other and showed that the Brazilian accessions are highly similar to one another, regardless of the geographic region of origin. We suggest that the exploration of interspecific variability and the introduction of new varieties of Z. officinale are viable alternatives for generating diversity in breeding programs in Brazil. The introduction of new genetic materials will certainly contribute to a broader genetic basis for this crop.
Ti, Lianping; Richardson, Lindsey; DeBeck, Kora; Nguyen, Paul; Montaner, Julio; Wood, Evan; Kerr, Thomas
2014-01-01
Background: Despite the growing prevalence of illicit stimulant drug use internationally, and the widespread involvement of people who inject drugs (IDU) within street-based drug markets, little is known about the impact of different types of street-based income generation activities on the cessation of stimulant use among IDU. Methods: Data were derived from an open prospective cohort of IDU in Vancouver, Canada. We used Kaplan-Meier methods and Cox proportional hazards regression to examine the effect of different types of street-based income generation activities (e.g., sex work, drug dealing, and scavenging) on time to cessation of stimulant use. Results: Between December 2005 and November 2012, 887 IDU who use stimulant drugs (cocaine, crack cocaine, or crystal methamphetamine) were prospectively followed up for a median duration of 47 months. In Kaplan-Meier analyses, compared to those who did not engage in street-based income generation activities, participants who reported sex work, drug dealing, scavenging, or more than one of these activities were significantly less likely to report stimulant drug use cessation (all p<0.001). When considered as time-updated variables and adjusted for potential confounders in a multivariable model, each type of street-based income generation activity remained significantly associated with a slower time to stimulant drug cessation (all p<0.005). Conclusions: Our findings highlight the urgent need for strategies to address stimulant dependence, including novel pharmacotherapies. Also important, structural interventions, such as low-threshold employment opportunities, availability of supportive housing, legal reforms regarding drug use, and evidence-based approaches that reduce harm among IDU are urgently required. PMID:24909853
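The Kaplan-Meier estimator used in the analysis above can be sketched on toy data. The follow-up times below are invented for illustration and are not the study's data:

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimator.

    times: follow-up time per subject; events: 1 if the event (here,
    stimulant-use cessation) was observed, 0 if censored. Returns the
    distinct event times and the survival probability after each."""
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    event_times = np.unique(times[events == 1])
    surv, s = [], 1.0
    for t in event_times:
        at_risk = np.sum(times >= t)                 # still under observation
        d = np.sum((times == t) & (events == 1))     # events at this time
        s *= 1.0 - d / at_risk
        surv.append(s)
    return event_times, np.array(surv)

t, s = kaplan_meier([2, 3, 3, 5, 8], [1, 1, 0, 0, 1])
print(np.round(s, 3))   # survival after times 2, 3, 8: 0.8, 0.6, 0.0
```

Censored subjects (events = 0) still count in the at-risk denominator until their follow-up ends, which is what distinguishes this estimator from a naive event fraction.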
Yasuda, Akihito; Onuki, Yoshinori; Obata, Yasuko; Takayama, Kozo
2015-01-01
The "quality by design" concept in pharmaceutical formulation development requires the establishment of a science-based rationale and design space. In this article, we integrate thin-plate spline (TPS) interpolation, Kohonen's self-organizing map (SOM) and a Bayesian network (BN) to visualize the latent structure underlying causal factors and pharmaceutical responses. As a model pharmaceutical product, theophylline tablets were prepared using a standard formulation. We measured the tensile strength and disintegration time as response variables and the compressibility, cohesion and dispersibility of the pretableting blend as latent variables. We predicted these variables quantitatively using nonlinear TPS, generated a large amount of data on pretableting blends and tablets and clustered these data into several clusters using a SOM. Our results show that we are able to predict the experimental values of the latent and response variables with a high degree of accuracy and are able to classify the tablet data into several distinct clusters. In addition, to visualize the latent structure between the causal and latent factors and the response variables, we applied a BN method to the SOM clustering results. We found that despite having inserted latent variables between the causal factors and response variables, their relation is equivalent to the results for the SOM clustering, and thus we are able to explain the underlying latent structure. Consequently, this technique provides a better understanding of the relationships between causal factors and pharmaceutical responses in theophylline tablet formulation.
Free piston variable-stroke linear-alternator generator
Haaland, Carsten M.
1998-01-01
A free-piston, variable-stroke linear-alternator AC power generator for a combustion engine. An alternator mechanism and an oscillator system generate AC current. The oscillation system includes two oscillation devices, each having a combustion cylinder and a flying turnbuckle. The flying turnbuckle moves in accordance with the oscillation device. The alternator system is a linear alternator coupled between the two oscillation devices by a slotted connecting rod.
Energy Optimization for a Weak Hybrid Power System of an Automobile Exhaust Thermoelectric Generator
NASA Astrophysics Data System (ADS)
Fang, Wei; Quan, Shuhai; Xie, Changjun; Tang, Xinfeng; Ran, Bin; Jiao, Yatian
2017-11-01
An integrated starter generator (ISG)-type hybrid electric vehicle (HEV) scheme is proposed based on the automobile exhaust thermoelectric generator (AETEG). An eddy current dynamometer is used to simulate the vehicle's dynamic cycle. A weak ISG hybrid bench test system is constructed to test the 48 V output of the power supply system based on engine exhaust heat power generation. The thermoelectric power generation system must ultimately be tested when integrated into the ISG weak hybrid power system. The test process is divided into two steps: comprehensive simulation and vehicle-based testing. The system's dynamic process is simulated for both conventional and thermoelectric power, and the dynamic running process comprises four stages: starting, acceleration, cruising and braking. The quantity of fuel available and the battery pack energy, which are used as target vehicle energy functions for comparison with conventional systems, are simplified into a single energy target function, and the battery pack's output current is used as the control variable in the thermoelectric hybrid energy optimization model. The system's optimal battery pack output current function is resolved when its dynamic operating process is considered as part of the hybrid thermoelectric power generation system. In the experiments, the system bench is tested using conventional power and hybrid thermoelectric power for the four dynamic operation stages. The optimal battery pack curve is calculated by functional analysis. In the vehicle, a power control unit is used to control the battery pack's output current and minimize energy consumption. Data analysis shows that the fuel economy of the hybrid power system under European Driving Cycle conditions is improved by 14.7% compared with conventional systems.
NASA Astrophysics Data System (ADS)
Francois, Baptiste; Creutin, Jean-Dominique
2016-04-01
Today, most of the energy produced is generated from fossil sources (e.g., coal and petroleum). As a result, the energy sector is still the main source of greenhouse gas emissions to the atmosphere. Limiting greenhouse gas emissions requires a transition from fossil to renewable energy, gradually increasing the fraction of energy coming from variable renewable energy sources (solar power, wind power and run-of-the-river hydropower, hereafter denoted as VRE). VRE penetration, i.e. the percentage of demand satisfied by variable renewables assuming no storage capacity, is hampered by their variable and uncontrollable nature. Many studies show that combining different VRE over space smoothes their variability and increases their global penetration through a better match of demand fluctuations. When the demand is not fully supplied by VRE generation, backup generation is required from stored energy (mostly from dams) or fossil sources, the latter being associated with high greenhouse gas emissions. Thus VRE penetration is a direct indicator of carbon savings and basically depends on the VRE installed capacity, its mix features, and the installed storage capacity. In this study we analyze the European transition to a low-carbon electricity system. Over a selection of representative regions we analyze carbon-saving trajectories as functions of VRE production and storage capacities for different scenarios mixing one to three VRE with non-renewables. We show substantial differences between trajectories when the mix of sources is far from the local optima and when the storage capacity evolves. We also bring new elements of reflection about the effect of transport grid features, from local independent systems to a European "copper plate". This work is part of the FP7 project COMPLEX (Knowledge based climate mitigation systems for a low carbon economy; Project FP7-ENV-2012 number: 308601; http://www.complex.ac.uk/).
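The penetration metric defined above (share of demand met by VRE with no storage) can be computed directly, and a small example with hypothetical profiles shows how mixing complementary sources raises it:

```python
import numpy as np

def vre_penetration(generation, demand):
    """Fraction of demand served directly by VRE with no storage: at each
    step only min(generation, demand) is usable, the excess is curtailed."""
    g = np.asarray(generation, dtype=float)
    d = np.asarray(demand, dtype=float)
    return np.minimum(g, d).sum() / d.sum()

# Two complementary hypothetical sources over six time steps, flat demand.
solar  = np.array([0, 4, 8, 4, 0, 0], dtype=float)
wind   = np.array([4, 0, 0, 2, 4, 6], dtype=float)
demand = np.full(6, 4.0)

print(vre_penetration(solar, demand))                 # 0.5
print(vre_penetration(0.5 * solar + 0.5 * wind, demand))  # ~0.67
```

Both portfolios generate the same total energy, but the mixed profile matches the demand shape better, so less is curtailed and penetration rises, which is the spatial/technological smoothing effect the abstract describes.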
Spits, Christine; Wallace, Luke; Reinke, Karin
2017-04-20
Visual assessment, following guides such as the Overall Fuel Hazard Assessment Guide (OFHAG), is a common approach for assessing the structure and hazard of varying bushfire fuel layers. Visual assessments can be vulnerable to imprecision due to subjectivity between assessors, while emerging techniques such as image-based point clouds can offer land managers potentially more repeatable descriptions of fuel structure. This study compared the variability of estimates of surface and near-surface fuel attributes generated by eight assessment teams using the OFHAG and Fuels3D, a smartphone method utilising image-based point clouds, within three assessment plots in an Australian lowland forest. Surface fuel hazard scores derived from underpinning attributes were also assessed. Overall, this study found considerable variability between teams on most visually assessed variables, resulting in inconsistent hazard scores. Variability was observed within point cloud estimates but was, however, on average two to eight times less than that seen in visual estimates, indicating greater consistency and repeatability of this method. It is proposed that while variability within the Fuels3D method may be overcome through improved methods and equipment, inconsistencies in the OFHAG are likely due to the inherent subjectivity between assessors, which may be more difficult to overcome. This study demonstrates the capability of the Fuels3D method to efficiently and consistently collect data on fuel hazard and structure, and, as such, this method shows potential for use in fire management practices where accurate and reliable data is essential.
Grievink, Liat Shavit; Penny, David; Hendy, Mike D; Holland, Barbara R
2009-01-01
Correction to Shavit Grievink L, Penny D, Hendy MD, Holland BR: LineageSpecificSeqgen: generating sequence data with lineage-specific variation in the proportion of variable sites. BMC Evol Biol 2008, 8(1):317.
NASA Astrophysics Data System (ADS)
Peck, Jaron Joshua
Water is used in power generation for cooling processes in thermoelectric power plants and currently withdraws more water than any other sector in the U.S. Reducing water use from power generation will help to alleviate water stress in at-risk areas, where droughts have the potential to strain water resources. The amount of water used for power varies depending on many climatic aspects as well as plant operation factors. This work presents a model that quantifies the water use for power generation for two regions representing different generation fuel portfolios, California and Utah. The analysis of the California Independent System Operator introduces the methods of water-energy modeling by creating an overall water use factor, in volume of water per unit of energy produced, based on the fuel generation mix of the area. The idea of water monitoring based on energy used by a building or region is explored based on live fuel mix data. This is for the purposes of increasing public awareness of the water associated with personal energy use and helping to promote greater energy efficiency. The Utah case study explores the effects more renewable, and less water-intensive, forms of energy will have on the overall water use from power generation for the state. Using a similar model to that of the California case study, total water savings are quantified based on power reduction scenarios involving increased use of renewable energy. The plausibility of implementing more renewable energy into Utah’s power grid is also discussed. Data resolution, as well as dispatch methods, economics, and solar variability, introduces some uncertainty into the analysis.
Computational Motion Phantoms and Statistical Models of Respiratory Motion
NASA Astrophysics Data System (ADS)
Ehrhardt, Jan; Klinder, Tobias; Lorenz, Cristian
Breathing motion is not a robust and 100 % reproducible process, and inter- and intra-fractional motion variations form an important problem in radiotherapy of the thorax and upper abdomen. A widespread consensus nowadays exists that it would be useful to use prior knowledge about respiratory organ motion and its variability to improve radiotherapy planning and treatment delivery. This chapter discusses two different approaches to model the variability of respiratory motion. In the first part, we review computational motion phantoms, i.e. computerized anatomical and physiological models. Computational phantoms are excellent tools to simulate and investigate the effects of organ motion in radiation therapy and to gain insight into methods for motion management. The second part of this chapter discusses statistical modeling techniques to describe the breathing motion and its variability in a population of 4D images. Population-based models can be generated from repeatedly acquired 4D images of the same patient (intra-patient models) and from 4D images of different patients (inter-patient models). The generation of those models is explained and possible applications of those models for motion prediction in radiotherapy are exemplified. Computational models of respiratory motion and motion variability have numerous applications in radiation therapy, e.g. to understand motion effects in simulation studies, to develop and evaluate treatment strategies or to introduce prior knowledge into the patient-specific treatment planning.
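Population-based statistical motion models of the kind described in the second part are commonly built by principal component analysis over motion samples: a mean motion plus a few variability modes. The sketch below uses synthetic displacement vectors, not a real 4D-image registration pipeline:

```python
import numpy as np

def motion_model(fields):
    """Statistical motion model via PCA: mean motion plus principal modes.

    fields: (n_samples, n_points) array of flattened displacement samples,
    e.g. one row per breathing cycle or per patient. Returns the mean field,
    orthonormal variability modes, and the variance captured by each."""
    mean = fields.mean(axis=0)
    centered = fields - mean
    # SVD of the centered data matrix gives the modes directly.
    _, svals, modes = np.linalg.svd(centered, full_matrices=False)
    var = svals**2 / (len(fields) - 1)
    return mean, modes, var

# Synthetic samples: one dominant planted mode plus small noise.
rng = np.random.default_rng(0)
true_mode = rng.normal(size=300)
samples = rng.normal(size=(20, 1)) * true_mode + rng.normal(0, 0.05, (20, 300))

mean, modes, var = motion_model(samples)
corr = abs(modes[0] @ true_mode) / np.linalg.norm(true_mode)
print(round(corr, 2))   # first mode recovers the planted mode
```

New motion states are then synthesized as mean plus a weighted sum of the leading modes, which is how such models supply prior knowledge for motion prediction in treatment planning.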
Zhang, Renduo; Wood, A Lynn; Enfield, Carl G; Jeong, Seung-Woo
2003-01-01
Stochastic analysis was performed to assess the effect of soil spatial variability and heterogeneity on the recovery of denser-than-water nonaqueous phase liquids (DNAPL) during surfactant-enhanced remediation. UTCHEM, a three-dimensional, multicomponent, multiphase, compositional model, was used to simulate water flow and chemical transport processes in heterogeneous soils. Soil spatial variability and heterogeneity were accounted for by treating the soil permeability as a spatial random variable, and a geostatistical method was used to generate random distributions of the permeability. The randomly generated permeability fields were incorporated into UTCHEM to simulate DNAPL transport in heterogeneous media, and stochastic analysis was conducted based on the simulated results. From the analysis, an exponential relationship between average DNAPL recovery and soil heterogeneity (defined as the standard deviation of the log of permeability) was established with a coefficient of determination (r²) of 0.991, which indicated that DNAPL recovery decreases exponentially with increasing soil heterogeneity. Temporal and spatial distributions of relative saturations in the water phase, DNAPL, and microemulsion in heterogeneous soils were compared with those in homogeneous soils and related to soil heterogeneity. Cleanup time and the uncertainty in determining DNAPL distributions in heterogeneous soils were also quantified. The study provides useful information for designing strategies for the characterization and remediation of nonaqueous phase liquid-contaminated soils with spatial variability and heterogeneity.
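The geostatistical step, treating log-permeability as a spatially correlated random variable, can be sketched with a simple moving-average smoother. This is a generic stand-in for the paper's generator; the kernel and parameter values are assumptions:

```python
import numpy as np

def lognormal_perm_field(shape, sigma_lnk, corr_len, seed=0):
    """Generate a spatially correlated log-normal permeability field.

    White Gaussian noise is smoothed with a moving-average kernel along each
    axis to impose spatial correlation, rescaled to the target standard
    deviation of ln(k) (the heterogeneity measure used in the abstract),
    then exponentiated."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=shape)
    kernel = np.ones(corr_len) / corr_len
    for ax in range(noise.ndim):
        noise = np.apply_along_axis(
            lambda v: np.convolve(v, kernel, mode="same"), ax, noise)
    lnk = noise / noise.std() * sigma_lnk   # rescale to target heterogeneity
    return np.exp(lnk)

field = lognormal_perm_field((64, 64), sigma_lnk=1.5, corr_len=8)
print(round(np.log(field).std(), 2))   # 1.5 by construction
```

Running the flow simulator over many such realizations at increasing `sigma_lnk`, and averaging the recovery, is the Monte Carlo pattern behind the recovery-versus-heterogeneity relationship reported above.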
Certifying Auto-Generated Flight Code
NASA Technical Reports Server (NTRS)
Denney, Ewen
2008-01-01
Model-based design and automated code generation are being used increasingly at NASA. Many NASA projects now use MathWorks Simulink and Real-Time Workshop for at least some of their modeling and code development. However, there are substantial obstacles to more widespread adoption of code generators in safety-critical domains. Since code generators are typically not qualified, there is no guarantee that their output is correct, and consequently the generated code still needs to be fully tested and certified. Moreover, the regeneration of code can require complete recertification, which offsets many of the advantages of using a generator. Indeed, manual review of autocode can be more challenging than for hand-written code. Since the direct V&V of code generators is too laborious and complicated due to their complex (and often proprietary) nature, we have developed a generator plug-in to support the certification of the auto-generated code. Specifically, the AutoCert tool supports certification by formally verifying that the generated code is free of different safety violations, by constructing an independently verifiable certificate, and by explaining its analysis in a textual form suitable for code reviews. The generated documentation also contains substantial tracing information, allowing users to trace between model, code, documentation, and V&V artifacts. This enables missions to obtain assurance about the safety and reliability of the code without excessive manual V&V effort and, as a consequence, eases the acceptance of code generators in safety-critical contexts. The generation of explicit certificates and textual reports is particularly well-suited to supporting independent V&V. The primary contribution of this approach is the combination of human-friendly documentation with formal analysis. The key technical idea is to exploit the idiomatic nature of auto-generated code in order to automatically infer logical annotations. 
The annotation inference algorithm itself is generic, and parametrized with respect to a library of coding patterns that depend on the safety policies and the code generator. The patterns characterize the notions of definitions and uses that are specific to the given safety property. For example, for initialization safety, definitions correspond to variable initializations while uses are statements which read a variable, whereas for array bounds safety, definitions are the array declarations, while uses are statements which access an array variable. The inferred annotations are thus highly dependent on the actual program and the properties being proven. The annotations, themselves, need not be trusted, but are crucial to obtain the automatic formal verification of the safety properties without requiring access to the internals of the code generator. The approach has been applied to both in-house and commercial code generators, but is independent of the particular generator used. It is currently being adapted to flight code generated using MathWorks Real-Time Workshop, an automatic code generator that translates from Simulink/Stateflow models into embedded C code.
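The definition/use pattern idea behind the annotation inference can be illustrated on a toy straight-line program. This is a drastically simplified stand-in for AutoCert, which infers logical annotations over real generated code and proves the policy formally rather than by scanning:

```python
def check_init_safety(stmts):
    """Check an initialization-safety policy on a toy straight-line program.

    stmts: list of ("def", var) or ("use", var) tuples, mimicking the
    classification of statements into definitions and uses described above.
    Returns the variables read before any definition."""
    defined, violations = set(), []
    for kind, var in stmts:
        if kind == "def":
            defined.add(var)
        elif kind == "use" and var not in defined:
            violations.append(var)
    return violations

prog = [("def", "x"), ("use", "x"), ("use", "y"), ("def", "y")]
print(check_init_safety(prog))   # ['y'] -- y is read before initialization
```

For array-bounds safety the same skeleton applies with declarations as definitions and array accesses as uses, which is exactly the sense in which the pattern library is parametrized by the safety policy.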
DOE Office of Scientific and Technical Information (OSTI.GOV)
Le Coq, Johanne; Ghosh, Partho
2012-06-19
Anticipatory ligand binding through massive protein sequence variation is rare in biological systems, having been observed only in the vertebrate adaptive immune response and in a phage diversity-generating retroelement (DGR). Earlier work has demonstrated that the prototypical DGR variable protein, major tropism determinant (Mtd), meets the demands of anticipatory ligand binding by novel means through the C-type lectin (CLec) fold. However, because of the low sequence identity among DGR variable proteins, it has remained unclear whether the CLec fold is a general solution for DGRs. We have addressed this problem by determining the structure of a second DGR variable protein, TvpA, from the pathogenic oral spirochete Treponema denticola. Despite its weak sequence identity to Mtd (∼16%), TvpA was found to also have a CLec fold, with predicted variable residues exposed in a ligand-binding site. However, this site in TvpA was markedly more variable than the one in Mtd, reflecting the unprecedented approximate 10^20 potential variability of TvpA. In addition, similarity between TvpA and Mtd with formylglycine-generating enzymes was detected. These results provide strong evidence for the conservation of the formylglycine-generating enzyme-type CLec fold among DGRs as a means of accommodating massive sequence variation.
NASA Astrophysics Data System (ADS)
Lu, Siqi; Wang, Xiaorong; Wu, Junyong
2018-01-01
The paper presents a data-driven method, based on the K-means clustering algorithm, to generate planning scenarios for the siting and sizing of distributed photovoltaic (PV) units in a network. Taking the network power losses, the installation and maintenance costs of distributed PV, the profit of distributed PV, and the voltage offset as objectives, and the locations and sizes of distributed PV as decision variables, the Pareto optimal front is obtained through a self-adaptive genetic algorithm (GA), and solutions are ranked by the technique for order preference by similarity to an ideal solution (TOPSIS). Finally, the top-ranked planning schemes are selected according to different planning emphases after detailed analysis. The proposed method is applied to a 10-kV distribution network in Gansu Province, China, and the results are discussed.
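The scenario-generation step above can be sketched with a plain K-means routine that clusters historical daily profiles into a few representative planning scenarios with probability weights. The profile length, cluster count, and convergence settings below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def generate_scenarios(profiles, k, iters=100, seed=0):
    """Cluster daily profiles into k representative planning scenarios.

    Returns (centroids, weights): each centroid is a representative
    scenario; its weight is the fraction of days assigned to it.
    """
    rng = np.random.default_rng(seed)
    X = np.asarray(profiles, dtype=float)
    # Initialize centroids from k distinct randomly chosen profiles.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each profile to its nearest centroid (Euclidean distance).
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centroids as cluster means; keep old one if empty.
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    weights = np.bincount(labels, minlength=k) / len(X)
    return centroids, weights
```

Each (centroid, weight) pair can then serve as one planning scenario in the downstream GA/TOPSIS optimization.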
Stochastic Watershed Models for Risk Based Decision Making
NASA Astrophysics Data System (ADS)
Vogel, R. M.
2017-12-01
Over half a century ago, the Harvard Water Program introduced the field of operational or synthetic hydrology providing stochastic streamflow models (SSMs), which could generate ensembles of synthetic streamflow traces useful for hydrologic risk management. The application of SSMs, based on streamflow observations alone, revolutionized water resources planning activities, yet has fallen out of favor due, in part, to their inability to account for the now nearly ubiquitous anthropogenic influences on streamflow. This commentary advances the modern equivalent of SSMs, termed 'stochastic watershed models' (SWMs), useful as input to nearly all modern risk-based water resource decision making approaches. SWMs are deterministic watershed models implemented using stochastic meteorological series, model parameters and model errors, to generate ensembles of streamflow traces that represent the variability in possible future streamflows. SWMs combine deterministic watershed models, which are ideally suited to accounting for anthropogenic influences, with recent developments in uncertainty analysis and principles of stochastic simulation.
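A minimal SWM sketch, assuming a toy linear-reservoir watershed model and a hypothetical Bernoulli-Gamma precipitation generator (the commentary does not prescribe these particular components):

```python
import numpy as np

def watershed_model(precip, k=0.2, s0=10.0):
    """Deterministic linear-reservoir watershed model: storage S is
    recharged by precipitation and drains at rate k*S each step."""
    s, flows = s0, []
    for p in precip:
        s += p
        q = k * s
        s -= q
        flows.append(q)
    return np.array(flows)

def swm_ensemble(n_traces, n_steps, seed=0):
    """Stochastic watershed model: drive the deterministic model with
    stochastic meteorological series to produce an ensemble of
    synthetic streamflow traces."""
    rng = np.random.default_rng(seed)
    # Hypothetical weather generator: wet-step occurrence ~ Bernoulli(0.3),
    # wet-step depth ~ Gamma(2, 5) -- a common stochastic weather form.
    wet = rng.random((n_traces, n_steps)) < 0.3
    depth = rng.gamma(shape=2.0, scale=5.0, size=(n_traces, n_steps))
    precip = wet * depth
    return np.array([watershed_model(p) for p in precip])
```

The resulting ensemble of traces represents variability in possible future streamflows; a fuller SWM would also perturb model parameters and model errors.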
DOE Office of Scientific and Technical Information (OSTI.GOV)
Na, Woonki; Muljadi, Eduard; Leighty, Bill
A Self-Excited Induction Generator (SEIG) for variable speed wind turbine generation (VS-WG) is normally considered a good candidate for stand-alone applications such as battery charging, hydrogen generation, water pumping, water purification, and water desalination. In this study, we examine active power and flux control strategies for a SEIG for variable speed wind turbine generation. The control analysis for the proposed system is carried out using PSCAD software. In the process, we can optimize the control design of the system, thereby enhancing and expediting the control design procedure for this application. With this study, this control design for a SEIG for VS-WG can become the industry standard for SEIG analysis and development.
Review of Variable Generation Integration Charges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Porter, K.; Fink, S.; Buckley, M.
2013-03-01
The growth of wind and solar generation in the United States, and the expectation of continued growth of these technologies, dictates that the future power system will be operated in a somewhat different manner because of increased variability and uncertainty. A small number of balancing authorities have attempted to determine an 'integration cost' to account for these changes to their current operating practices. Some balancing authorities directly charge wind and solar generators for integration charges, whereas others add integration charges to projected costs of wind and solar in integrated resource plans or in competitive solicitations for generation. This report reviews the balancing authorities that have calculated variable generation integration charges and broadly compares and contrasts the methodologies they used to determine their specific integration charges. The report also profiles each balancing authority and how they derived wind and solar integration charges.
Wu, Chung-Hsien; Chiu, Yu-Hsien; Guo, Chi-Shiang
2004-12-01
This paper proposes a novel approach to the generation of Chinese sentences from ill-formed Taiwanese Sign Language (TSL) for people with hearing impairments. First, a sign icon-based virtual keyboard is constructed to provide a visualized interface to retrieve sign icons from a sign database. A proposed language model (LM), based on a predictive sentence template (PST) tree, integrates a statistical variable n-gram LM and linguistic constraints to deal with the translation problem from ill-formed sign sequences to grammatical written sentences. The PST tree trained by a corpus collected from the deaf schools was used to model the correspondence between signed and written Chinese. In addition, a set of phrase formation rules, based on trigger pair category, was derived for sentence pattern expansion. These approaches improved the efficiency of text generation and the accuracy of word prediction and, therefore, improved the input rate. For the assessment of practical communication aids, a reading-comprehension training program with ten profoundly deaf students was undertaken in a deaf school in Tainan, Taiwan. Evaluation results show that the literacy aptitude test and subjective satisfactory level are significantly improved.
Understanding text-based persuasion and support tactics of concerned significant others.
van Stolk-Cooke, Katherine; Hayes, Marie; Baumel, Amit; Muench, Frederick
2015-01-01
The behavior of concerned significant others (CSOs) can have a measurable impact on the health and wellness of individuals attempting to meet behavioral and health goals, and research is needed to better understand the attributes of text-based CSO language when encouraging target significant others (TSOs) to achieve those goals. In an effort to inform the development of interventions for CSOs, this study examined the language content of brief text-based messages generated by CSOs to motivate TSOs to achieve a behavioral goal. CSOs generated brief text-based messages for TSOs for three scenarios: (1) to help TSOs achieve the goal, (2) in the event that the TSO is struggling to meet the goal, and (3) in the event that the TSO has given up on meeting the goal. Results indicate that there was a significant relationship between the tone and compassion of messages generated by CSOs, the CSOs' perceptions of TSO motivation, and their expectation of a grateful or annoyed reaction by the TSO to their feedback or support. Results underscore the importance of attending to patterns in language when CSOs communicate with TSOs about goal achievement or failure, and how certain variables in the CSOs' perceptions of their TSOs affect these characteristics.
Qiu, Zeyuan
2009-11-01
A science-based geographic information system (GIS) approach is presented to target critical source areas in watersheds for conservation buffer placement. Critical source areas are the intersection of hydrologically sensitive areas and pollutant source areas in watersheds. Hydrologically sensitive areas are areas that actively generate runoff in the watershed and are derived using a modified topographic index approach based on variable source area hydrology. Pollutant source areas are the areas in watersheds that are actively and intensively used for such activities as agricultural production. The method is applied to the Neshanic River watershed in Hunterdon County, New Jersey. The capacity of the topographic index in predicting the spatial pattern of runoff generation and the runoff contribution to stream flow in the watershed is evaluated. A simple cost-effectiveness assessment is conducted to compare the conservation buffer placement scenario based on this GIS method to conventional riparian buffer scenarios for placing conservation buffers in agricultural lands in the watershed. The results show that the topographic index reasonably predicts the runoff generation in the watershed. The GIS-based conservation buffer scenario appears to be more cost-effective than the conventional riparian buffer scenarios.
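The hydrologically sensitive areas above rest on the classic topographic wetness index ln(a / tan β); the sketch below uses that standard form (with a guard for flat cells) and a set-intersection step for critical source areas. The cell width, threshold, and land-use codes are hypothetical, and the paper's "modified" index may differ in detail.

```python
import numpy as np

def topographic_index(upslope_area, slope_deg, cell_width=30.0):
    """Topographic wetness index ln(a / tan(beta)), where a is the
    specific upslope contributing area (per unit contour width) and
    beta the local slope. High values mark runoff-prone,
    hydrologically sensitive areas (variable source area hydrology)."""
    a = np.asarray(upslope_area, float) / cell_width
    tan_b = np.tan(np.radians(np.asarray(slope_deg, float)))
    return np.log(a / np.maximum(tan_b, 1e-6))  # guard flat cells

def critical_source_areas(ti, ti_threshold, land_use, source_uses):
    """Critical source areas: hydrologically sensitive cells (TI above
    threshold) intersected with pollutant-source land uses such as
    intensive agriculture."""
    return (ti >= ti_threshold) & np.isin(land_use, list(source_uses))
```

The boolean mask returned by `critical_source_areas` is the candidate footprint for conservation buffer placement.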
NASA Astrophysics Data System (ADS)
Ehrentreich, F.; Dietze, U.; Meyer, U.; Abbas, S.; Schulz, H.
1995-04-01
A main task within the SpecInfo project is to develop interpretation tools that can handle many more of the complicated, more specific spectrum-structure correlations. In the first step, the empirical knowledge about the assignment of structural groups and their characteristic IR bands was collected from the literature and represented in a computer-readable, well-structured form. Vague verbal rules are handled by introducing linguistic variables. The next step was the development of automatic rule-generating procedures. We combined and extended the IDIOTS algorithm with Blaffert's set-theoretic algorithm. The procedures were successfully applied to the SpecInfo database. The realization of the preceding items is a prerequisite for improving the computerized structure elucidation procedure.
Adding Concrete Syntax to a Prolog-Based Program Synthesis System
NASA Technical Reports Server (NTRS)
Fischer, Bernd; Visser, Eelco
2003-01-01
Program generation and transformation systems manipulate large, parameterized object-language fragments. Support for user-definable concrete syntax makes this easier but is typically restricted to certain object and meta languages. We show how Prolog can be retrofitted with concrete syntax and describe how a seamless interaction of concrete syntax fragments with an existing legacy meta-programming system based on abstract syntax is achieved. We apply the approach to gradually migrate the schemas of the AUTOBAYES program synthesis system to concrete syntax. First experiences show that this can result in a considerable reduction of the code size and improved readability of the code. In particular, abstracting out fresh-variable generation and second-order term construction allows the formulation of larger continuous fragments and improves the locality in the schemas.
The CARIBU EBIS control and synchronization system
NASA Astrophysics Data System (ADS)
Dickerson, Clayton; Peters, Christopher
2015-01-01
The Californium Rare Isotope Breeder Upgrade (CARIBU) Electron Beam Ion Source (EBIS) charge breeder has been built and tested. The bases of the CARIBU EBIS electrical system are four voltage platforms on which both DC and pulsed high voltage outputs are controlled. The high voltage output pulses are created with either a combination of a function generator and a high voltage amplifier, or two high voltage DC power supplies and a high voltage solid state switch. Proper synchronization of the pulsed voltages, fundamental to optimizing the charge breeding performance, is achieved with triggering from a digital delay pulse generator. The control system is based on National Instruments realtime controllers and LabVIEW software implementing Functional Global Variables (FGV) to store and access instrument parameters. Fiber optic converters enable network communication and triggering across the platforms.
Methodology for the development of normative data for Spanish-speaking pediatric populations.
Rivera, D; Arango-Lasprilla, J C
2017-01-01
To describe the methodology used to calculate reliability and generate norms for 10 neuropsychological tests for children in Spanish-speaking countries. The study sample consisted of 4,373 healthy children from nine countries in Latin America (Chile, Cuba, Ecuador, Guatemala, Honduras, Mexico, Paraguay, Peru, and Puerto Rico) and Spain. Inclusion criteria for all countries were age between 6 and 17 years, an Intelligence Quotient of ≥80 on the Test of Non-Verbal Intelligence (TONI-2), and a score of <19 on the Children's Depression Inventory. Participants completed 10 neuropsychological tests. Reliability and norms were calculated for all tests. Test-retest analysis showed excellent or good reliability on all tests (r's>0.55; p's<0.001) except M-WCST perseverative errors, whose coefficient magnitude was fair. All scores were normed using multiple linear regressions and standard deviations of residual values. Age, age², sex, and mean level of parental education (MLPE) were included as predictors in the models by country. The non-significant variables (p>0.05) were removed and the analyses were run again. This is the largest normative study of Spanish-speaking children and adolescents in the world. For the generation of normative data, the method based on linear regression models and the standard deviation of residual values was used. This method allows determination of the specific variables that predict test scores, helps identify and control for collinearity of predictive variables, and generates continuous and more reliable norms than those of traditional methods.
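The regression-plus-residual-SD norming method can be sketched as follows: fit raw scores on demographic predictors, then express an individual score as a z-score against the demographically predicted value. The predictors mirror the abstract (age, age², sex, MLPE are the candidates), but the data and coefficients below are synthetic.

```python
import numpy as np

def fit_norms(X, scores):
    """Fit a linear regression of raw test scores on demographic
    predictors and return the coefficients (intercept first) plus the
    standard deviation of the residuals."""
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, scores, rcond=None)
    resid = scores - A @ beta
    return beta, resid.std(ddof=A.shape[1])

def normed_z(beta, sd_resid, x, score):
    """Continuous norm: z-score of an observed score relative to the
    score predicted for this child's demographics."""
    pred = beta[0] + np.dot(beta[1:], x)
    return (score - pred) / sd_resid
```

In the study's procedure, non-significant predictors would be dropped and the model refit before norms are published.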
Variable Cultural Acquisition Costs Constrain Cumulative Cultural Evolution
Mesoudi, Alex
2011-01-01
One of the hallmarks of the human species is our capacity for cumulative culture, in which beneficial knowledge and technology is accumulated over successive generations. Yet previous analyses of cumulative cultural change have failed to consider the possibility that as cultural complexity accumulates, it becomes increasingly costly for each new generation to acquire from the previous generation. In principle this may result in an upper limit on the cultural complexity that can be accumulated, at which point accumulated knowledge is so costly and time-consuming to acquire that further innovation is not possible. In this paper I first review existing empirical analyses of the history of science and technology that support the possibility that cultural acquisition costs may constrain cumulative cultural evolution. I then present macroscopic and individual-based models of cumulative cultural evolution that explore the consequences of this assumption of variable cultural acquisition costs, showing that making acquisition costs vary with cultural complexity causes the latter to reach an upper limit above which no further innovation can occur. These models further explore the consequences of different cultural transmission rules (directly biased, indirectly biased and unbiased transmission), population size, and cultural innovations that themselves reduce innovation or acquisition costs. PMID:21479170
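The macroscopic mechanism can be illustrated with a deliberately simple recurrence, assuming linear innovation and an acquisition cost that grows linearly with accumulated complexity (the paper's actual model forms may differ):

```python
def cumulative_culture(generations, innovation=1.0, cost_rate=0.02,
                       budget=1.0):
    """Macroscopic sketch of cumulative culture with variable
    acquisition costs: each generation first pays an acquisition cost
    proportional to current cultural complexity z, then spends the
    remaining effort budget on innovation. Complexity plateaus once
    acquisition consumes the entire budget."""
    z, history = 0.0, []
    for _ in range(generations):
        acquisition = min(cost_rate * z, budget)
        z += innovation * (budget - acquisition)
        history.append(z)
    return history
```

With these illustrative parameters, complexity converges monotonically to the ceiling budget/cost_rate = 50, the point at which acquiring existing culture leaves no effort for further innovation.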
Cellular Contraction and Polarization Drive Collective Cellular Motion.
Notbohm, Jacob; Banerjee, Shiladitya; Utuje, Kazage J C; Gweon, Bomi; Jang, Hwanseok; Park, Yongdoo; Shin, Jennifer; Butler, James P; Fredberg, Jeffrey J; Marchetti, M Cristina
2016-06-21
Coordinated motions of close-packed multicellular systems typically generate cooperative packs, swirls, and clusters. These cooperative motions are driven by active cellular forces, but the physical nature of these forces and how they generate collective cellular motion remain poorly understood. Here, we study forces and motions in a confined epithelial monolayer and make two experimental observations: 1) the direction of local cellular motion deviates systematically from the direction of the local traction exerted by each cell upon its substrate; and 2) oscillating waves of cellular motion arise spontaneously. Based on these observations, we propose a theory that connects forces and motions using two internal state variables, one of which generates an effective cellular polarization, and the other, through contractile forces, an effective cellular inertia. In agreement with theoretical predictions, drugs that inhibit contractility reduce both the cellular effective elastic modulus and the frequency of oscillations. Together, theory and experiment provide evidence suggesting that collective cellular motion is driven by at least two internal variables that serve to sustain waves and to polarize local cellular traction in a direction that deviates systematically from local cellular velocity. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.
McGivern, C. L.; Le, T.; Eberly, B.; ...
2016-09-06
Separate samples of charged-current pion production events representing the two semi-inclusive channels νμ-CC(π+) and ν̄μ-CC(π0) have been obtained using neutrino and antineutrino exposures of the MINERvA detector. Distributions in kinematic variables based upon μ±-track reconstructions are analyzed and compared for the two samples. The differential cross sections for muon production angle, muon momentum, and four-momentum transfer Q² are reported, and cross sections versus neutrino energy are obtained. Comparisons with predictions of current neutrino event generators are used to clarify the role of the Δ(1232) and higher-mass baryon resonances in CC pion production and to show the importance of pion final-state interactions. For the νμ-CC(π+) [ν̄μ-CC(π0)] sample, the absolute data rate is observed to lie below (above) the predictions of some of the event generators by amounts that are typically 1 to 2σ. Furthermore, the generators are able to reproduce the shapes of the differential cross sections for all kinematic variables of either data set.
Measuring spatial variability in soil characteristics
Hoskinson, Reed L.; Svoboda, John M.; Sawyer, J. Wayne; Hess, John R.; Hess, J. Richard
2002-01-01
The present invention provides systems and methods for measuring a load force associated with pulling a farm implement through soil that is used to generate a spatially variable map that represents the spatial variability of the physical characteristics of the soil. An instrumented hitch pin configured to measure a load force is provided that measures the load force generated by a farm implement when the farm implement is connected with a tractor and pulled through or across soil. Each time a load force is measured, a global positioning system identifies the location of the measurement. This data is stored and analyzed to generate a spatially variable map of the soil. This map is representative of the physical characteristics of the soil, which are inferred from the magnitude of the load force.
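The mapping step described in the patent, accumulating GPS-tagged load-force readings into a spatially variable map, can be sketched by grid-binning and averaging. The cell size and flat x/y coordinates below are illustrative simplifications of real GPS handling.

```python
import numpy as np

def spatial_load_map(x, y, load, cell=10.0):
    """Bin georeferenced draft-force measurements into square grid
    cells and average within each cell, yielding a spatially variable
    map from which physical soil characteristics can be inferred."""
    ix = (np.asarray(x) // cell).astype(int)
    iy = (np.asarray(y) // cell).astype(int)
    grid = {}
    for i, j, f in zip(ix, iy, load):
        grid.setdefault((i, j), []).append(f)
    # Cell value = mean load force of all measurements in that cell.
    return {c: float(np.mean(v)) for c, v in grid.items()}
```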
Graphical Models for Ordinal Data
Guo, Jian; Levina, Elizaveta; Michailidis, George; Zhu, Ji
2014-01-01
A graphical model for ordinal variables is considered, where it is assumed that the data are generated by discretizing the marginal distributions of a latent multivariate Gaussian distribution. The relationships between these ordinal variables are then described by the underlying Gaussian graphical model and can be inferred by estimating the corresponding concentration matrix. Direct estimation of the model is computationally expensive, but an approximate EM-like algorithm is developed to provide an accurate estimate of the parameters at a fraction of the computational cost. Numerical evidence based on simulation studies shows the strong performance of the algorithm, which is also illustrated on data sets on movie ratings and an educational survey. PMID:26120267
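The generative side of this model, discretizing the margins of a latent multivariate Gaussian into ordered categories, can be sketched directly; the covariance and cutpoints below are illustrative, not from the paper.

```python
import numpy as np

def sample_ordinal(n, sigma, cutpoints, seed=0):
    """Generate ordinal data under the latent-Gaussian model: draw n
    samples from a zero-mean multivariate Gaussian with covariance
    sigma, then discretize each margin at fixed cutpoints into the
    ordered categories 0, 1, ..., len(cutpoints)."""
    rng = np.random.default_rng(seed)
    p = len(sigma)
    z = rng.multivariate_normal(np.zeros(p), sigma, size=n)
    # np.digitize maps each latent value to its bin index elementwise.
    return np.digitize(z, cutpoints)
```

Dependence in the latent concentration matrix survives discretization as association between the ordinal variables, which is what the EM-like estimation procedure recovers.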
Strategies for Voltage Control and Transient Stability Assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hiskens, Ian A.
As wind generation grows, its influence on power system performance will become increasingly noticeable. Wind generation differs from traditional forms of generation in numerous ways, though, motivating the need to reconsider the usual approaches to power system assessment and performance enhancement. The project has investigated the impact of wind generation on transient stability and voltage control, identifying and addressing issues at three distinct levels of the power system: 1) at the device level, the physical characteristics of wind turbine generators (WTGs) are quite unlike those of synchronous machines; 2) at the wind-farm level, the provision of reactive support is achieved through coordination of numerous dissimilar devices, rather than straightforward generator control; and 3) from a systems perspective, the location of wind-farms on the sub-transmission network, coupled with the variability inherent in their power output, can cause complex voltage control issues. The project has sought to develop a thorough understanding of the dynamic behaviour of type-3 WTGs, and in particular the WECC generic model. The behaviour of such models is governed by interactions between the continuous dynamics of state variables and discrete events associated with limits. It was shown that these interactions can be quite complex, and may lead to switching deadlock that prevents continuation of the trajectory. Switching hysteresis was proposed for eliminating deadlock situations. Various type-3 WTG models include control blocks that duplicate integrators. It was shown that this leads to non-uniqueness in the conditions governing steady-state, and may result in pre- and post-disturbance equilibria not coinciding. It also gives rise to a zero eigenvalue in the linearized WTG model. In order to eliminate the anomalous behaviour revealed through this investigation, WECC has now released a new generic model for type-3 WTGs.
Wind-farms typically incorporate a variety of voltage control equipment including tap-changing transformers, switched capacitors, SVCs, STATCOMs and the WTGs themselves. The project has considered the coordinated control of this equipment, and has addressed a range of issues that arise in wind-farm operation. The first concerns the ability of WTGs to meet reactive power requirements when voltage saturation in the collector network restricts the reactive power availability of individual generators. Secondly, dynamic interactions between voltage regulating devices have been investigated. It was found that under certain realistic conditions, tap-changing transformers may exhibit instability. In order to meet cost, maintenance, fault tolerance and other requirements, it is desirable for voltage control equipment to be treated as an integrated system rather than as independent devices. The resulting high-level scheduling of wind-farm reactive support has been investigated. In addressing this control problem, several forms of future information were considered, including exact future knowledge and stochastic predictions. Deterministic and Stochastic Dynamic Programming techniques were used in the development of control algorithms. The results demonstrated that while exact future knowledge is very useful, simple prediction methods yield little benefit. The integration of inherently variable wind generation into weak grids, particularly subtransmission networks that are characterized by low X/R ratios, affects bus voltages, regulating devices and line flows. The meshed structure of these networks adds to the complexity, especially when wind generation is distributed across multiple nodes. A range of techniques have been considered for analyzing the impact of wind variability on weak grids. Sensitivity analysis, based on the power-flow Jacobian, was used to highlight sections of a system that are most severely affected by wind-power variations.
A continuation power flow was used to determine parameter changes that reduce the impact of wind-power variability. It was also used to explore interactions between multiple wind-farms. Furthermore, these tools have been used to examine the impact of wind injection on transformer tap operation in subtransmission networks. The results of a tap operation simulation study show that voltage regulation at wind injection nodes increases tap change operations. The tradeoff between local voltage regulation and tap change frequency is fundamentally important in optimizing the size of reactive compensation used for voltage regulation at wind injection nodes. Line congestion arising as a consequence of variable patterns of wind-power production has also been investigated. Two optimization problems have been formulated, based respectively on the DC and AC power flow models, for identifying vulnerable line segments. The DC optimization is computationally more efficient, whereas the AC sensitivity-based optimization provides greater accuracy.
Estimating organ doses from tube current modulated CT examinations using a generalized linear model.
Bostani, Maryam; McMillan, Kyle; Lu, Peiyun; Kim, Grace Hyun J; Cody, Dianna; Arbique, Gary; Greenberg, S Bruce; DeMarco, John J; Cagnon, Chris H; McNitt-Gray, Michael F
2017-04-01
Currently available Computed Tomography dose metrics are mostly based on fixed tube current Monte Carlo (MC) simulations and/or physical measurements such as the size specific dose estimate (SSDE). In addition to not being able to account for Tube Current Modulation (TCM), these dose metrics do not represent actual patient dose. The purpose of this study was to generate and evaluate a dose estimation model based on the Generalized Linear Model (GLM), which extends the ability to estimate organ dose from tube current modulated examinations by incorporating regional descriptors of patient size, scanner output, and other scan-specific variables as needed. The collection of a total of 332 patient CT scans at four different institutions was approved by each institution's IRB and used to generate and test organ dose estimation models. The patient population consisted of pediatric and adult patients and included thoracic and abdomen/pelvis scans. The scans were performed on three different CT scanner systems. Manual segmentation of organs, depending on the examined anatomy, was performed on each patient's image series. In addition to the collected images, detailed TCM data were collected for all patients scanned on Siemens CT scanners, while for all GE and Toshiba patients, data representing z-axis-only TCM, extracted from the DICOM header of the images, were used for TCM simulations. A validated MC dosimetry package was used to perform detailed simulation of CT examinations on all 332 patient models to estimate dose to each segmented organ (lungs, breasts, liver, spleen, and kidneys), denoted as reference organ dose values. Approximately 60% of the data were used to train a dose estimation model, while the remaining 40% was used to evaluate performance.
Two different methodologies were explored using GLM to generate a dose estimation model: (a) using the conventional exponential relationship between normalized organ dose and size with regional water equivalent diameter (WED) and regional CTDI vol as variables and (b) using the same exponential relationship with the addition of categorical variables such as scanner model and organ to provide a more complete estimate of factors that may affect organ dose. Finally, estimates from generated models were compared to those obtained from SSDE and ImPACT. The Generalized Linear Model yielded organ dose estimates that were significantly closer to the MC reference organ dose values than were organ doses estimated via SSDE or ImPACT. Moreover, the GLM estimates were better than those of SSDE or ImPACT irrespective of whether or not categorical variables were used in the model. While the improvement associated with a categorical variable was substantial in estimating breast dose, the improvement was minor for other organs. The GLM approach extends the current CT dose estimation methods by allowing the use of additional variables to more accurately estimate organ dose from TCM scans. Thus, this approach may be able to overcome the limitations of current CT dose metrics to provide more accurate estimates of patient dose, in particular, dose to organs with considerable variability across the population. © 2017 American Association of Physicists in Medicine.
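Methodology (a), the conventional exponential size relationship, becomes a linear model after taking logs and can be fit by least squares. The sketch below uses synthetic coefficients and a plain least-squares fit for illustration; the study's full GLM additionally included categorical variables such as scanner model and organ.

```python
import numpy as np

def fit_organ_dose_model(wed, ctdi_vol, organ_dose):
    """Fit the exponential size relationship
        organ_dose / CTDIvol = exp(a + b * WED),
    where WED is the regional water-equivalent diameter. Taking logs
    gives log(dose/CTDIvol) = a + b*WED, a linear least-squares fit."""
    y = np.log(np.asarray(organ_dose) / np.asarray(ctdi_vol))
    A = np.column_stack([np.ones(len(wed)), wed])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, b

def predict_organ_dose(a, b, wed, ctdi_vol):
    """Estimate organ dose for a new patient from regional WED and
    regional CTDIvol using the fitted coefficients."""
    return ctdi_vol * np.exp(a + b * wed)
```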
Comparing UK, USA and Australian values for EQ-5D as a health utility measure of oral health.
Brennan, D S; Teusner, D N
2015-09-01
Using generic measures to examine outcomes of oral disorders can add additional information relating to health utility. However, different algorithms are available to generate health states. The aim was to assess UK-, US- and Australian-based algorithms for the EuroQol (EQ-5D) in relation to their discriminative and convergent validity. Methods: Data were collected from adults in Australia aged 30-61 years by mailed survey in 2009-10, including the EQ-5D and a range of self-reported oral health variables, and self-rated oral and general health. Responses were collected from n=1,093 persons (response rate 39.1%). UK-based EQ-5D estimates were lower (0.85) than the USA and Australian estimates (0.91). EQ-5D was associated (p<0.01) with all seven oral health variables, with differences in utility scores ranging from 0.03 to 0.06 for the UK, from 0.04 to 0.07 for the USA, and from 0.05 to 0.08 for the Australian-based estimates. The effect sizes (ESs) of the associations with all seven oral health variables were similar for the UK (ES=0.26 to 0.49), USA (ES=0.31 to 0.48) and Australian-based (ES=0.31 to 0.46) estimates. EQ-5D was correlated with global dental health for the UK (rho=0.29), USA (rho=0.30) and Australian-based estimates (rho=0.30), and correlations with global general health were the same (rho=0.42) for the UK, USA and Australian-based estimates. EQ-5D exhibited equivalent discriminative validity and convergent validity in relation to oral health variables for the UK, USA and Australian-based estimates.
CerebroMatic: A Versatile Toolbox for Spline-Based MRI Template Creation
Wilke, Marko; Altaye, Mekibib; Holland, Scott K.
2017-01-01
Brain image spatial normalization and tissue segmentation rely on prior tissue probability maps. Appropriately selecting these tissue maps becomes particularly important when investigating “unusual” populations, such as young children or elderly subjects. When creating such priors, the disadvantage of applying more deformation must be weighed against the benefit of achieving a crisper image. We have previously suggested that statistically modeling demographic variables, instead of simply averaging images, is advantageous. Both aspects (more vs. less deformation and modeling vs. averaging) were explored here. We used imaging data from 1914 subjects, aged 13 months to 75 years, and employed multivariate adaptive regression splines to model the effects of age, field strength, gender, and data quality. Within the spm/cat12 framework, we compared an affine-only with a low- and a high-dimensional warping approach. As expected, more deformation on the individual level results in lower group dissimilarity. Consequently, effects of age in particular are less apparent in the resulting tissue maps when using a more extensive deformation scheme. Using statistically-described parameters, high-quality tissue probability maps could be generated for the whole age range; they are consistently closer to a gold standard than conventionally-generated priors based on 25, 50, or 100 subjects. Distinct effects of field strength, gender, and data quality were seen. We conclude that an extensive matching for generating tissue priors may model much of the variability inherent in the dataset which is then not contained in the resulting priors. Further, the statistical description of relevant parameters (using regression splines) allows for the generation of high-quality tissue probability maps while controlling for known confounds. The resulting CerebroMatic toolbox is available for download at http://irc.cchmc.org/software/cerebromatic.php. PMID:28275348
Realistic and efficient 2D crack simulation
NASA Astrophysics Data System (ADS)
Yadegar, Jacob; Liu, Xiaoqing; Singh, Abhishek
2010-04-01
Although numerical algorithms for 2D crack simulation have been studied in Modeling and Simulation (M&S) and computer graphics for decades, realism and computational efficiency are still major challenges. In this paper, we introduce a high-fidelity, scalable, adaptive and efficient runtime 2D crack/fracture simulation system by applying the mathematically elegant Peano-Cesaro triangular meshing/remeshing technique to model the generation of shards/fragments. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level-of-detail. The generated binary decomposition tree also provides an efficient neighbor retrieval mechanism used for mesh element splitting and merging with minimal memory requirements, essential for realistic 2D fragment formation. Upon load impact/contact/penetration, a number of factors including impact angle, impact energy, and material properties are all taken into account to produce the criteria for crack initiation, propagation, and termination, leading to realistic fractal-like rubble/fragment formation. The aforementioned parameters are used as variables of probabilistic models of crack/shard formation, making the proposed solution highly adaptive by allowing machine learning mechanisms to learn the optimal values for the variables/parameters based on prior benchmark data generated by off-line physics-based simulation solutions that produce accurate fractures/shards, though at a highly non-real-time pace. Crack/fracture simulation has been conducted on various load impacts with different initial locations at various impulse scales. The simulation results demonstrate that the proposed system has the capability to realistically and efficiently simulate 2D crack phenomena (such as window shattering and shard generation) with diverse potential in military and civil M&S applications such as training and mission planning.
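The binary decomposition tree behind such triangular remeshing can be sketched in a few lines. This toy refiner (function names are hypothetical, not the authors' code) bisects a triangle at the midpoint of its base edge and recurses to a requested level-of-detail:

```python
def bisect(tri):
    """Split a triangle (apex a, base b-c) at the midpoint of its base,
    producing two children -- the binary decomposition used for
    local multi-resolution refinement."""
    a, b, c = tri
    m = ((b[0] + c[0]) / 2.0, (b[1] + c[1]) / 2.0)  # midpoint of the base edge
    return [(m, a, b), (m, c, a)]  # two children, new apex at the split point

def refine(tri, depth):
    """Recursively refine to a given level-of-detail; returns leaf triangles."""
    if depth == 0:
        return [tri]
    left, right = bisect(tri)
    return refine(left, depth - 1) + refine(right, depth - 1)

root = ((0.0, 1.0), (0.0, 0.0), (1.0, 0.0))  # unit right triangle, area 0.5
leaves = refine(root, 4)                     # 2**4 = 16 leaf triangles
```

Because each split halves a triangle's area exactly, the leaves tile the root with no gaps, and the implicit binary tree gives the cheap neighbor lookup the abstract describes.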
NASA Astrophysics Data System (ADS)
Khajehei, Sepideh; Moradkhani, Hamid
2015-04-01
Producing reliable and accurate hydrologic ensemble forecasts is subject to various sources of uncertainty, including meteorological forcing, initial conditions, model structure, and model parameters. Producing reliable and skillful precipitation ensemble forecasts is one approach to reducing the total uncertainty in hydrological applications. Currently, Numerical Weather Prediction (NWP) models are developing ensemble forecasts for various temporal ranges. It is proven that raw products from NWP models are biased in mean and spread. Given this, there is a need for methods that can generate reliable ensemble forecasts for hydrological applications. One of the common techniques is to apply statistical procedures in order to generate ensemble forecasts from NWP-generated single-value forecasts. The procedure is based on the bivariate probability distribution between the observation and the single-value precipitation forecast. However, one of the assumptions of the current method is fitting Gaussian distributions to the marginal distributions of the observed and modeled climate variables. Here, we have described and evaluated a Bayesian approach based on copula functions to develop an ensemble precipitation forecast from the conditional distribution of single-value precipitation forecasts. Copula functions are the multivariate joint distributions of univariate marginal distributions, presented here as an alternative procedure for capturing the uncertainties related to meteorological forcing. Copulas are capable of modeling the joint distribution of two variables with any level of correlation and dependency. This study is conducted over a sub-basin in the Columbia River Basin in the USA, using monthly precipitation forecasts from the Climate Forecast System (CFS) at 0.5x0.5 degree spatial resolution to reproduce the observations.
Verification is conducted on a different period, and the performance of the procedure is compared with the Ensemble Pre-Processor approach currently used by the National Weather Service River Forecast Centers in the USA.
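A minimal sketch of the conditional-ensemble idea using a Gaussian copula (one common choice; the paper's specific copula family is not assumed, and all data here are synthetic): map both margins to normal scores, sample the conditional normal given a new single-value forecast, and back-transform through the observed margin:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical paired monthly records: single-value forecast and observation
fcst = rng.gamma(2.0, 10.0, 500)
obs = 0.8 * fcst + rng.gamma(2.0, 3.0, 500)

def to_normal_scores(x):
    """Empirical probability transform of a margin to standard-normal scores."""
    ranks = stats.rankdata(x) / (len(x) + 1.0)
    return stats.norm.ppf(ranks)

z_f, z_o = to_normal_scores(fcst), to_normal_scores(obs)
rho = np.corrcoef(z_f, z_o)[0, 1]  # Gaussian-copula dependence parameter

# Conditional distribution of obs given a new forecast, in normal-score space
z_new = stats.norm.ppf(stats.percentileofscore(fcst, 40.0) / 100.0)
draws = rng.normal(rho * z_new, np.sqrt(1.0 - rho**2), size=1000)

# Back-transform the draws to precipitation units via the observed margin
ensemble = np.quantile(obs, stats.norm.cdf(draws))
```

Because the margins are handled empirically, neither the forecast nor the observations need to be Gaussian, which is exactly the restriction of the standard bivariate-normal procedure that the copula approach relaxes.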
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herrera, J.I.; Reddoch, T.W.
1988-02-01
Variable speed electric generating technology can enhance the general use of wind energy in electric utility applications. This enhancement results from two characteristic properties of variable speed wind turbine generators: an improvement in drive train damping characteristics, which results in reduced structural loading on the entire wind turbine system, and an improvement in the overall efficiency by using a more sophisticated electrical generator. Electronic converter systems are the focus of this investigation -- in particular, the properties of a wound-rotor induction generator with the slip recovery system and direct-current link converter. Experience with solid-state converter systems in large wind turbines is extremely limited. This report presents measurements of electrical performances of the slip recovery system and is limited to the terminal characteristics of the system. Variable speed generating systems working effectively in utility applications will require a satisfactory interface between the turbine/generator pair and the utility network. The electrical testing described herein focuses largely on the interface characteristics of the generating system. A MOD-0 wind turbine was connected to a very strong system; thus, the voltage distortion was low and the total harmonic distortion in the utility voltage was less than 3% (within the 5% limit required by most utilities). The largest voltage component of a frequency below 60 Hz was 40 dB down from the 60-Hz component. 8 refs., 14 figs., 8 tabs.
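The harmonic-distortion figures quoted above come from a standard calculation. A sketch of total harmonic distortion (THD) from an FFT of a synthetic 60-Hz waveform (the harmonic amplitudes are hypothetical, chosen only to land under the 5% limit):

```python
import numpy as np

fs = 6000.0                       # sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)   # one second of data -> 1 Hz bin resolution
# Hypothetical utility voltage: 60 Hz fundamental plus small 3rd/5th harmonics
v = np.sin(2*np.pi*60*t) + 0.02*np.sin(2*np.pi*180*t) + 0.01*np.sin(2*np.pi*300*t)

spec = np.abs(np.fft.rfft(v)) * 2.0 / len(v)   # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(len(v), 1.0 / fs)

fund = spec[np.argmin(np.abs(freqs - 60.0))]
harmonics = [spec[np.argmin(np.abs(freqs - 60.0 * k))] for k in range(2, 11)]
thd = np.sqrt(sum(h**2 for h in harmonics)) / fund   # ~= 0.0224, i.e. about 2.2%
```

With a record length that is an integer number of cycles, every harmonic falls on an exact FFT bin, so no windowing is needed in this idealized case.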
NASA Technical Reports Server (NTRS)
Nelson, Ross; Margolis, Hank; Montesano, Paul; Sun, Guoqing; Cook, Bruce; Corp, Larry; Andersen, Hans-Erik; DeJong, Ben; Pellat, Fernando Paz; Fickel, Thaddeus;
2016-01-01
Existing national forest inventory plots, an airborne lidar scanning (ALS) system, and a space profiling lidar system (ICESat-GLAS) are used to generate circa 2005 estimates of total aboveground dry biomass (AGB) in forest strata, by state, in the continental United States (CONUS) and Mexico. The airborne lidar is used to link ground observations of AGB to space lidar measurements. Two sets of models are generated, the first relating ground estimates of AGB to airborne laser scanning (ALS) measurements and the second set relating ALS estimates of AGB (generated using the first model set) to GLAS measurements. GLAS then, is used as a sampling tool within a hybrid estimation framework to generate stratum-, state-, and national-level AGB estimates. A two-phase variance estimator is employed to quantify GLAS sampling variability and, additively, ALS-GLAS model variability in this current, three-phase (ground-ALS-space lidar) study. The model variance component characterizes the variability of the regression coefficients used to predict ALS-based estimates of biomass as a function of GLAS measurements. Three different types of predictive models are considered in CONUS to determine which produced biomass totals closest to ground-based national forest inventory estimates - (1) linear (LIN), (2) linear-no-intercept (LNI), and (3) log-linear. For CONUS at the national level, the GLAS LNI model estimate (23.95 +/- 0.45 Gt AGB), agreed most closely with the US national forest inventory ground estimate, 24.17 +/- 0.06 Gt, i.e., within 1%. The national biomass total based on linear ground-ALS and ALS-GLAS models (25.87 +/- 0.49 Gt) overestimated the national ground-based estimate by 7.5%. The comparable log-linear model result (63.29 +/-1.36 Gt) overestimated ground results by 261%. All three national biomass GLAS estimates, LIN, LNI, and log-linear, are based on 241,718 pulses collected on 230 orbits. 
The US national forest inventory (ground) estimates are based on 119,414 ground plots. At the US state level, the average absolute value of the deviation of LNI GLAS estimates from the comparable ground estimate of total biomass was 18.8% (range: Oregon, -40.8% to North Dakota, 128.6%). Log-linear models produced gross overestimates in the continental US, i.e., >2.6x, and the use of this model to predict regional biomass using GLAS data in temperate, western hemisphere forests is not appropriate. The best model form, LNI, is used to produce biomass estimates in Mexico. The average biomass density in Mexican forests is 53.10 +/- 0.88 t/ha, and the total biomass for the country, given a total forest area of 688,096 sq km, is 3.65 +/- 0.06 Gt. In Mexico, our GLAS biomass total underestimated a 2005 FAO estimate (4.152 Gt) by 12% and overestimated a 2007/8 radar study's figure (3.06 Gt) by 19%.
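A linear-no-intercept (LNI) model is simply least squares with the intercept column dropped. A sketch on synthetic data (variable names and numbers are hypothetical, not the study's), contrasting it with the ordinary linear (LIN) fit:

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical ALS-estimated biomass (t/ha) vs. a GLAS height metric (m)
glas_height = rng.uniform(5, 30, 200)
als_biomass = 6.0 * glas_height + rng.normal(0, 8.0, 200)

# LNI fit: biomass = b * height, forced through the origin
X_lni = glas_height[:, None]
b_lni, *_ = np.linalg.lstsq(X_lni, als_biomass, rcond=None)

# LIN fit with an intercept column, for comparison
X_lin = np.column_stack([np.ones_like(glas_height), glas_height])
(a_lin, b_lin), *_ = np.linalg.lstsq(X_lin, als_biomass, rcond=None)
```

Forcing the fit through the origin encodes the physical constraint that zero canopy height implies zero biomass, which is one reason an LNI model can outperform LIN when extrapolating to sparsely forested strata.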
Hahn, Ezra; Jiang, Haiyan; Ng, Angela; Bashir, Shaheena; Ahmed, Sameera; Tsang, Richard; Sun, Alexander; Gospodarowicz, Mary; Hodgson, David
2017-08-01
Mediastinal radiation therapy (RT) for Hodgkin lymphoma (HL) is associated with late cardiotoxicity, but there are limited data to indicate which dosimetric parameters are most valuable for predicting this risk. This study investigated which whole heart dosimetric measurements provide the most information regarding late cardiotoxicity, and whether coronary artery dosimetry was more predictive of this outcome than whole heart dosimetry. A random sample of 125 HL patients treated with mediastinal RT was selected, and 3-dimensional cardiac dose-volume data were generated from historical plans using validated methods. Cardiac events were determined by linking patients to population-based datasets of inpatient and same-day hospitalizations and same-day procedures. Variables collected for the whole heart and 3 coronary arteries included the following: Dmean, Dmax, Dmin, dose homogeneity, V5, V10, V20, and V30. Multivariable competing risk regression models were generated for the whole heart and coronary arteries. There were 44 cardiac events documented, of which 70% were ischemic. The best multivariable model included the following covariates: whole heart Dmean (hazard ratio [HR] 1.09, P=.0083), dose homogeneity (HR 0.94, P=.0034), male sex (HR 2.31, P=.014), and age (HR 1.03, P=.0049). When any adverse cardiac event was the outcome, models using coronary artery variables did not perform better than models using whole heart variables. However, in a subanalysis of ischemic cardiac events only, the model using coronary artery variables was superior to the whole heart model and included the following covariates: age (HR 1.05, P<.001), volume of left anterior descending artery receiving 5 Gy (HR 0.98, P=.003), and volume of left circumflex artery receiving 20 Gy (HR 1.03, P<.001). In addition to higher mean heart dose, increasing inhomogeneity in cardiac dose was associated with a greater risk of late cardiac effects. 
When all types of cardiotoxicity were evaluated, the whole heart variable model outperformed the coronary artery models. However, when events were limited to ischemic cardiotoxicity, the coronary artery-based model was superior. Copyright © 2017 Elsevier Inc. All rights reserved.
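The whole-heart variables listed (Dmean, Dmax, Dmin, dose homogeneity, V5-V30) are simple reductions over the voxel dose distribution. A sketch with a synthetic dose array; the homogeneity definition used here (max minus min) is one plausible choice, not necessarily the study's:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical voxel doses (Gy) for a heart structure from a 3-D plan
dose = rng.gamma(shape=2.0, scale=8.0, size=10_000)

def dvh_metrics(dose_gy):
    """Whole-structure dosimetric variables of the kind used in the study.
    VX is the percentage of voxels receiving at least X Gy."""
    return {
        "Dmean": dose_gy.mean(),
        "Dmax": dose_gy.max(),
        "Dmin": dose_gy.min(),
        "homogeneity": dose_gy.max() - dose_gy.min(),  # one simple definition
        **{f"V{x}": 100.0 * (dose_gy >= x).mean() for x in (5, 10, 20, 30)},
    }

m = dvh_metrics(dose)
```

By construction V5 >= V10 >= V20 >= V30, so these variables are strongly correlated, which is why the multivariable models in the study retain only a subset of them.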
NASA Astrophysics Data System (ADS)
Jayasinghe, S.; Dutta, R.; Basnayake, S. B.; Granger, S. L.; Andreadis, K. M.; Das, N.; Markert, K. N.; Cutter, P. G.; Towashiraporn, P.; Anderson, E.
2017-12-01
The Lower Mekong Region has been experiencing frequent and prolonged droughts resulting in severe damage to agricultural production, leading to food insecurity and impacts on the livelihoods of farming communities. Climate variability further complicates the situation by making drought harder to forecast. The Regional Drought and Crop Yield Information System (RDCYIS), developed by SERVIR-Mekong, helps decision makers take effective measures through monitoring, analyzing and forecasting of drought conditions and providing early warnings to farmers to make adjustments to cropping calendars. The RDCYIS is built on the regionally calibrated Regional Hydrologic Extreme Assessment System (RHEAS) framework that integrates the Variable Infiltration Capacity (VIC) and Decision Support System for Agro-technology Transfer (DSSAT) models, allowing both nowcasts and forecasts of drought. The RHEAS allows ingestion of numerous freely available earth observation and ground observation data to generate and customize drought-related indices, variables and crop yield information for better decision making. The Lower Mekong region experienced severe drought in 2016, the region's worst in 90 years. This paper presents the simulation of the 2016 drought event using the RDCYIS based on its hindcast and forecast capabilities. The regionally calibrated RDCYIS can help capture salient features of drought through a variety of drought indices, soil variables, energy balance variables and water balance variables. The RDCYIS is capable of assimilating soil moisture data from different satellite products and performing ensemble runs to further reduce the uncertainty of its outputs. The calibrated results have a correlation coefficient of around 0.73 and NSE between 0.4 and 0.5.
Based on the acceptable results of the retrospective runs, the system has the potential to generate reliable drought monitoring and forecasting information to improve decision-making at the operational, technological and institutional levels of mandated institutes in the lower Mekong countries. This in turn would help countries prepare for and respond to drought situations by taking short- and long-term risk mitigation measures such as adjusting cropping calendars, rainwater harvesting, and so on.
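The calibration scores quoted (correlation around 0.73, NSE between 0.4 and 0.5) are standard metrics. A minimal sketch of both on made-up monthly values:

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model
    is no better than predicting the mean of the observations."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical observed and simulated monthly soil-moisture values
obs = np.array([10., 30., 25., 60., 45., 20.])
sim = np.array([12., 28., 30., 55., 40., 22.])

r = np.corrcoef(sim, obs)[0, 1]   # correlation coefficient
score = nse(sim, obs)
```

Note that correlation only measures co-variation, while NSE also penalizes bias and amplitude errors, which is why the two numbers reported for the RDCYIS can differ substantially.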
Variable speed wind turbine generator with zero-sequence filter
Muljadi, Eduard
1998-01-01
A variable speed wind turbine generator system to convert mechanical power into electrical power or energy and to recover the electrical power or energy in the form of three phase alternating current and return the power or energy to a utility or other load with single phase sinusoidal waveform at sixty (60) hertz and unity power factor includes an excitation controller for generating three phase commanded current, a generator, and a zero sequence filter. Each commanded current signal includes two components: a positive sequence variable frequency current signal to provide the balanced three phase excitation currents required in the stator windings of the generator to generate the rotating magnetic field needed to recover an optimum level of real power from the generator; and a zero frequency sixty (60) hertz current signal to allow the real power generated by the generator to be supplied to the utility. The positive sequence current signals are balanced three phase signals and are prevented from entering the utility by the zero sequence filter. The zero sequence current signals have zero phase displacement from each other and are prevented from entering the generator by the star connected stator windings. The zero sequence filter allows the zero sequence current signals to pass through to deliver power to the utility.
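The separation the patent relies on (balanced positive-sequence currents cancel in the zero-sequence path, while in-phase zero-sequence currents pass through) follows directly from the symmetrical-components definition. A numerical sketch:

```python
import numpy as np

t = np.linspace(0, 1 / 60, 256, endpoint=False)   # one 60 Hz cycle
w = 2 * np.pi * 60

# Balanced positive-sequence set: three currents displaced by 120 degrees
pos = [np.sin(w * t), np.sin(w * t - 2 * np.pi / 3), np.sin(w * t + 2 * np.pi / 3)]
# Zero-sequence set: three currents with zero phase displacement
zero = [np.sin(w * t)] * 3

def zero_sequence(ia, ib, ic):
    """Instantaneous zero-sequence component, (ia + ib + ic) / 3."""
    return (ia + ib + ic) / 3.0

pos_component = zero_sequence(*pos)    # vanishes for a balanced set
zero_component = zero_sequence(*zero)  # equals the common signal
```

This is why the star-connected stator windings block the zero-sequence currents (their sum has nowhere to flow without a neutral path), while the zero-sequence filter passes them on to the utility.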
Variable Speed Wind Turbine Generator with Zero-sequence Filter
Muljadi, Eduard
1998-08-25
A variable speed wind turbine generator system to convert mechanical power into electrical power or energy and to recover the electrical power or energy in the form of three phase alternating current and return the power or energy to a utility or other load with single phase sinusoidal waveform at sixty (60) hertz and unity power factor includes an excitation controller for generating three phase commanded current, a generator, and a zero sequence filter. Each commanded current signal includes two components: a positive sequence variable frequency current signal to provide the balanced three phase excitation currents required in the stator windings of the generator to generate the rotating magnetic field needed to recover an optimum level of real power from the generator; and a zero frequency sixty (60) hertz current signal to allow the real power generated by the generator to be supplied to the utility. The positive sequence current signals are balanced three phase signals and are prevented from entering the utility by the zero sequence filter. The zero sequence current signals have zero phase displacement from each other and are prevented from entering the generator by the star connected stator windings. The zero sequence filter allows the zero sequence current signals to pass through to deliver power to the utility.
Variable speed wind turbine generator with zero-sequence filter
Muljadi, E.
1998-08-25
A variable speed wind turbine generator system to convert mechanical power into electrical power or energy and to recover the electrical power or energy in the form of three phase alternating current and return the power or energy to a utility or other load with single phase sinusoidal waveform at sixty (60) hertz and unity power factor includes an excitation controller for generating three phase commanded current, a generator, and a zero sequence filter. Each commanded current signal includes two components: a positive sequence variable frequency current signal to provide the balanced three phase excitation currents required in the stator windings of the generator to generate the rotating magnetic field needed to recover an optimum level of real power from the generator; and a zero frequency sixty (60) hertz current signal to allow the real power generated by the generator to be supplied to the utility. The positive sequence current signals are balanced three phase signals and are prevented from entering the utility by the zero sequence filter. The zero sequence current signals have zero phase displacement from each other and are prevented from entering the generator by the star connected stator windings. The zero sequence filter allows the zero sequence current signals to pass through to deliver power to the utility. 14 figs.
Free piston variable-stroke linear-alternator generator
Haaland, C.M.
1998-12-15
A free-piston variable stroke linear-alternator AC power generator for a combustion engine is described. An alternator mechanism and oscillator system generates AC current. The oscillation system includes two oscillation devices each having a combustion cylinder and a flying turnbuckle. The flying turnbuckle moves in accordance with the oscillation device. The alternator system is a linear alternator coupled between the two oscillation devices by a slotted connecting rod. 8 figs.
AL-Huqail, Asma A.; Abdelhaliem, Ekram
2015-01-01
The current study analyzed proteins and nuclear DNA of maize seedlings exposed and not exposed to extremely low-frequency electromagnetic fields (ELF) for different exposure periods, using sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE), isozymes, random amplified polymorphic DNA (RAPD), and the comet assay, respectively. SDS-PAGE analysis revealed a total of 46 polypeptide bands with different molecular weights ranging from 186.20 to 36.00 kDa, generating a distinctive polymorphism value of 84.62%. Leucine-aminopeptidase, peroxidase, and catalase isozymes showed the highest polymorphism values (100%) based on zymogram number, relative front (Rf), and optical intensity, while esterase isozyme generated a polymorphism value of 83.33%. Amino acids were analyzed using high-performance liquid chromatography, which revealed the presence of 17 amino acids of variable contents ranging from 22.65% to 28.09%. RAPD revealed that the 78 amplified DNA products had a high polymorphism value (95.08%) based on band numbers, variable sizes ranging from 120 to 992 base pairs, and band intensity. The comet assay recorded the highest extent of nuclear DNA damage, as percentage of tailed DNA (2.38%) and tail moment units (5.36), at ELF exposure of maize nuclei for 5 days. The current study concluded that the longer ELF exposure periods exerted genotoxic stress on macromolecules of maize cells and that the biomarkers used should be augmented for reliable estimates of genotoxicity after exposure of economic plants to ELF stressors. PMID:26180815
Landfors, Mattias; Philip, Philge; Rydén, Patrik; Stenberg, Per
2011-01-01
Genome-wide analysis of gene expression or protein binding patterns using different array or sequencing based technologies is now routinely performed to compare different populations, such as treatment and reference groups. It is often necessary to normalize the data obtained to remove technical variation introduced in the course of conducting experimental work, but standard normalization techniques are not capable of eliminating technical bias in cases where the distribution of the truly altered variables is skewed, i.e. when a large fraction of the variables are either positively or negatively affected by the treatment. However, several experiments are likely to generate such skewed distributions, including ChIP-chip experiments for the study of chromatin, gene expression experiments for the study of apoptosis, and SNP-studies of copy number variation in normal and tumour tissues. A preliminary study using spike-in array data established that the capacity of an experiment to identify altered variables and generate unbiased estimates of the fold change decreases as the fraction of altered variables and the skewness increases. We propose the following work-flow for analyzing high-dimensional experiments with regions of altered variables: (1) Pre-process raw data using one of the standard normalization techniques. (2) Investigate if the distribution of the altered variables is skewed. (3) If the distribution is not believed to be skewed, no additional normalization is needed. Otherwise, re-normalize the data using a novel HMM-assisted normalization procedure. (4) Perform downstream analysis. Here, ChIP-chip data and simulated data were used to evaluate the performance of the work-flow. It was found that skewed distributions can be detected by using the novel DSE-test (Detection of Skewed Experiments). 
Furthermore, applying the HMM-assisted normalization to experiments where the distribution of the truly altered variables is skewed results in considerably higher sensitivity and lower bias than can be attained using standard and invariant normalization methods. PMID:22132175
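Step 2 of the proposed work-flow asks whether the distribution of altered variables is skewed. A generic sketch of that check on synthetic log-ratios, using scipy's moment-based skewness test as a stand-in (this is not the paper's DSE-test, and the thresholds here are arbitrary illustrative choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical log-ratios: 40% of variables positively affected by treatment,
# producing exactly the kind of skewed distribution the work-flow targets
unchanged = rng.normal(0.0, 0.25, 6000)
altered = rng.normal(1.0, 0.25, 4000)
log_ratios = np.concatenate([unchanged, altered])

g1 = stats.skew(log_ratios)          # sample skewness
stat, p = stats.skewtest(log_ratios) # test of zero skewness
skewed = p < 0.01 and abs(g1) > 0.2  # illustrative decision rule
```

When such a check fires, standard normalization (which assumes most variables are unchanged and the rest are symmetric about zero) would shift the whole distribution and bias the fold-change estimates, motivating the re-normalization step.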
Gemmell, Philip; Burrage, Kevin; Rodriguez, Blanca; Quinn, T Alexander
2014-01-01
Variability is observed at all levels of cardiac electrophysiology. Yet, the underlying causes and importance of this variability are generally unknown, and difficult to investigate with current experimental techniques. The aim of the present study was to generate populations of computational ventricular action potential models that reproduce experimentally observed intercellular variability of repolarisation (represented by action potential duration) and to identify its potential causes. A systematic exploration of the effects of simultaneously varying the magnitude of six transmembrane current conductances (transient outward, rapid and slow delayed rectifier K(+), inward rectifying K(+), L-type Ca(2+), and Na(+)/K(+) pump currents) in two rabbit-specific ventricular action potential models (Shannon et al. and Mahajan et al.) at multiple cycle lengths (400, 600, 1,000 ms) was performed. This was accomplished with distributed computing software specialised for multi-dimensional parameter sweeps and grid execution. An initial population of 15,625 parameter sets was generated for both models at each cycle length. Action potential durations of these populations were compared to experimentally derived ranges for rabbit ventricular myocytes. 1,352 parameter sets for the Shannon model and 779 parameter sets for the Mahajan model yielded action potential duration within the experimental range, demonstrating that a wide array of ionic conductance values can be used to simulate a physiological rabbit ventricular action potential. Furthermore, by using clutter-based dimension reordering, a technique that allows visualisation of multi-dimensional spaces in two dimensions, the interaction of current conductances and their relative importance to the ventricular action potential at different cycle lengths were revealed. 
Overall, this work represents an important step towards a better understanding of the role that variability in current conductances may play in experimentally observed intercellular variability of rabbit ventricular action potential repolarisation.
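The initial population of 15,625 parameter sets is exactly a full factorial sweep of five levels over six conductances (5 ** 6 = 15,625). A sketch of generating such a sweep (the scaling levels shown are hypothetical, not the study's exact grid):

```python
import itertools

# Six conductance scaling factors, five levels each (e.g. 0-200% of baseline)
currents = ["g_to", "g_Kr", "g_Ks", "g_K1", "g_CaL", "g_NaK"]
levels = [0.0, 0.5, 1.0, 1.5, 2.0]

parameter_sets = [dict(zip(currents, combo))
                  for combo in itertools.product(levels, repeat=len(currents))]
# len(parameter_sets) == 5 ** 6 == 15,625, matching the initial population
```

Each dictionary would then be handed to one action-potential simulation, which is what makes the sweep embarrassingly parallel and well suited to the distributed grid-execution software the study used.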
Gemmell, Philip; Burrage, Kevin; Rodriguez, Blanca; Quinn, T. Alexander
2014-01-01
Variability is observed at all levels of cardiac electrophysiology. Yet, the underlying causes and importance of this variability are generally unknown, and difficult to investigate with current experimental techniques. The aim of the present study was to generate populations of computational ventricular action potential models that reproduce experimentally observed intercellular variability of repolarisation (represented by action potential duration) and to identify its potential causes. A systematic exploration of the effects of simultaneously varying the magnitude of six transmembrane current conductances (transient outward, rapid and slow delayed rectifier K+, inward rectifying K+, L-type Ca2+, and Na+/K+ pump currents) in two rabbit-specific ventricular action potential models (Shannon et al. and Mahajan et al.) at multiple cycle lengths (400, 600, 1,000 ms) was performed. This was accomplished with distributed computing software specialised for multi-dimensional parameter sweeps and grid execution. An initial population of 15,625 parameter sets was generated for both models at each cycle length. Action potential durations of these populations were compared to experimentally derived ranges for rabbit ventricular myocytes. 1,352 parameter sets for the Shannon model and 779 parameter sets for the Mahajan model yielded action potential duration within the experimental range, demonstrating that a wide array of ionic conductance values can be used to simulate a physiological rabbit ventricular action potential. Furthermore, by using clutter-based dimension reordering, a technique that allows visualisation of multi-dimensional spaces in two dimensions, the interaction of current conductances and their relative importance to the ventricular action potential at different cycle lengths were revealed. 
Overall, this work represents an important step towards a better understanding of the role that variability in current conductances may play in experimentally observed intercellular variability of rabbit ventricular action potential repolarisation. PMID:24587229
Variable speed generator application on the MOD-5A 7.3 MW wind turbine generator
NASA Technical Reports Server (NTRS)
Barton, Robert S.
1995-01-01
This paper describes the application of a Scherbiustat type variable speed subsystem in the MOD-5A Wind Turbine Generator. As designed by General Electric Company, Advanced Energy Programs Department, under contract DEN3-153 with NASA Lewis Research Center and DOE, the MOD-5A utilizes the subsystem for both starting assistance in a motoring mode and generation in a controlled airgap torque mode. Reactive power control is also provided. The Scherbiustat type arrangement of a wound rotor machine with a cycloconverter in the rotor circuit was selected after an evaluation of variable speed technologies that followed a system evaluation of drivetrain cost and risk. The paper describes the evaluation factors considered, the results of the evaluations and summarizes operating strategy and performance simulations.
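The appeal of the Scherbius arrangement is that rotor-circuit power, which scales with slip, is recovered through the converter rather than dissipated. A back-of-the-envelope sketch of that power split (idealized, with losses ignored):

```python
def slip_recovery_power(p_airgap_kw, slip):
    """Idealized power split in a wound-rotor machine: the rotor circuit
    carries slip * P_airgap (recovered via the converter), and the shaft
    delivers (1 - slip) * P_airgap as mechanical power."""
    p_rotor = slip * p_airgap_kw
    p_mech = (1.0 - slip) * p_airgap_kw
    return p_rotor, p_mech

# At 10% slip, 1000 kW of airgap power splits into roughly 100 kW of
# recovered rotor power and 900 kW of mechanical power
p_rotor, p_mech = slip_recovery_power(1000.0, 0.10)
```

Because the converter only has to handle the slip fraction of the machine rating, it can be much smaller than a full-power converter, which is part of the cost argument behind the drivetrain evaluation described above.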
A multifactor approach to forecasting Romanian gross domestic product (GDP) in the short run.
Armeanu, Daniel; Andrei, Jean Vasile; Lache, Leonard; Panait, Mirela
2017-01-01
The purpose of this paper is to investigate the application of a generalized dynamic factor model (GDFM) based on dynamic principal components analysis to forecasting short-term economic growth in Romania. We have used a generalized principal components approach to estimate a dynamic model based on a dataset comprising 86 economic and non-economic variables that are linked to economic output. The model exploits the dynamic correlations between these variables and uses three common components that account for roughly 72% of the information contained in the original space. We show that it is possible to generate reliable forecasts of quarterly real gross domestic product (GDP) using just the common components while also assessing the contribution of the individual variables to the dynamics of real GDP. In order to assess the relative performance of the GDFM to standard models based on principal components analysis, we have also estimated two Stock-Watson (SW) models that were used to perform the same out-of-sample forecasts as the GDFM. The results indicate significantly better performance of the GDFM compared with the competing SW models, which empirically confirms our expectations that the GDFM produces more accurate forecasts when dealing with large datasets.
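The "three common components accounting for roughly 72% of the information" is the kind of quantity a principal-components decomposition reports. A numpy sketch on a synthetic panel, using standard static PCA via the SVD as a simpler stand-in for the GDFM's dynamic principal components (panel dimensions and factor structure are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical panel: 60 quarters x 86 indicators driven by 3 latent factors
factors = rng.normal(size=(60, 3))
loadings = rng.normal(size=(3, 86))
panel = factors @ loadings + 0.3 * rng.normal(size=(60, 86))

X = panel - panel.mean(axis=0)                 # centre each indicator
U, s, Vt = np.linalg.svd(X, full_matrices=False)
components = U[:, :3] * s[:3]                  # the three common components
explained = (s[:3] ** 2).sum() / (s ** 2).sum()  # share of variance captured
```

The common components would then feed a small forecasting regression for GDP, replacing the 86 raw indicators; the GDFM refines this by exploiting correlations across time (leads and lags), not just the contemporaneous ones captured here.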
Potential distribution of the viral haemorrhagic septicaemia virus in the Great Lakes region
Escobar, Luis E.; Kurath, Gael; Escobar-Dodero, Joaquim; Craft, Meggan E.; Phelps, Nicholas B.D.
2017-01-01
Viral haemorrhagic septicaemia virus (VHSV) genotype IVb has been responsible for large-scale fish mortality events in the Great Lakes of North America. Anticipating the areas of potential VHSV occurrence is key to designing epidemiological surveillance and disease prevention strategies in the Great Lakes basin. We explored the environmental features that could shape the distribution of VHSV, based on remote sensing and climate data via ecological niche modelling. Variables included temperature measured during the day and night, precipitation, vegetation, bathymetry, solar radiation and topographic wetness. VHSV occurrences were obtained from available reports of virus confirmation in laboratory facilities. We fit a Maxent model using VHSV-IVb reports and environmental variables under different parameterizations to identify the best model to determine potential VHSV occurrence based on environmental suitability. VHSV reports were generated from both passive and active surveillance. VHSV occurrences were most abundant near shore sites. We were, however, able to capture the environmental signature of VHSV based on the environmental variables employed in our model, allowing us to identify patterns of VHSV potential occurrence. Our findings suggest that VHSV is not at an ecological equilibrium and more areas could be affected, including areas not in close geographic proximity to past VHSV reports.
A multifactor approach to forecasting Romanian gross domestic product (GDP) in the short run
Armeanu, Daniel; Lache, Leonard; Panait, Mirela
2017-01-01
The purpose of this paper is to investigate the application of a generalized dynamic factor model (GDFM) based on dynamic principal components analysis to forecasting short-term economic growth in Romania. We have used a generalized principal components approach to estimate a dynamic model based on a dataset comprising 86 economic and non-economic variables that are linked to economic output. The model exploits the dynamic correlations between these variables and uses three common components that account for roughly 72% of the information contained in the original space. We show that it is possible to generate reliable forecasts of quarterly real gross domestic product (GDP) using just the common components while also assessing the contribution of the individual variables to the dynamics of real GDP. In order to assess the relative performance of the GDFM to standard models based on principal components analysis, we have also estimated two Stock-Watson (SW) models that were used to perform the same out-of-sample forecasts as the GDFM. The results indicate significantly better performance of the GDFM compared with the competing SW models, which empirically confirms our expectations that the GDFM produces more accurate forecasts when dealing with large datasets. PMID:28742100
Simulation of California's Major Reservoirs Outflow Using Data Mining Technique
NASA Astrophysics Data System (ADS)
Yang, T.; Gao, X.; Sorooshian, S.
2014-12-01
A reservoir's outflow is controlled by its operators and therefore differs from the natural upstream inflow; for downstream water users, the outflow matters more than the inflow. To simulate this complicated reservoir operation and extract outflow decision-making patterns for California's 12 major reservoirs, we built a data-driven ("artificially intelligent") reservoir decision-making tool using a classification and regression tree approach, a well-developed statistical and graphical modeling methodology in the field of data mining. A shuffled cross-validation approach is also employed to extract the outflow decision-making patterns and rules based on the selected decision variables (inflow amount, precipitation, timing, water-year type, etc.). To show the accuracy of the model, a verification study compares the model-generated outflow decisions ("artificially intelligent" decisions) with those made by reservoir operators (human decisions). The simulation results show that the machine-generated outflow decisions are very similar to the real reservoir operators' decisions, a conclusion based on statistical evaluations using the Nash-Sutcliffe test. The proposed model is able to detect the most influential variables and their weights when the reservoir operators make an outflow decision. While the proposed approach was first applied and tested on California's 12 major reservoirs, the method is universally adaptable to other reservoir systems.
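A minimal CART-style regression tree conveys the idea of extracting release rules from decision variables. The tree below is a from-scratch sketch on synthetic inflow/precipitation data (variable names, units, and thresholds are invented for illustration, not taken from the study):

```python
import numpy as np

def fit_tree(X, y, depth=0, max_depth=3, min_leaf=5):
    """Greedy CART-style regression tree: pick the (feature, threshold)
    split that most reduces squared error; leaves predict the mean."""
    node = {"value": float(np.mean(y))}
    if depth >= max_depth or len(y) < 2 * min_leaf:
        return node
    best, base = None, ((y - y.mean()) ** 2).sum()
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:
            left = X[:, j] <= t
            if left.sum() < min_leaf or (~left).sum() < min_leaf:
                continue
            sse = ((y[left] - y[left].mean()) ** 2).sum() + \
                  ((y[~left] - y[~left].mean()) ** 2).sum()
            if best is None or sse < best[0]:
                best = (sse, j, t, left)
    if best is None or best[0] >= base:
        return node
    _, j, t, left = best
    node.update(feature=j, threshold=float(t),
                left=fit_tree(X[left], y[left], depth + 1, max_depth, min_leaf),
                right=fit_tree(X[~left], y[~left], depth + 1, max_depth, min_leaf))
    return node

def predict(node, x):
    while "feature" in node:
        node = node["left"] if x[node["feature"]] <= node["threshold"] else node["right"]
    return node["value"]

# Toy stand-in for operator decisions: release more when inflow is high
rng = np.random.default_rng(1)
X = rng.uniform(0, 100, size=(200, 2))      # [inflow, precipitation] (hypothetical units)
y = np.where(X[:, 0] > 50, 80.0, 20.0) + rng.normal(0, 2, 200)
tree = fit_tree(X, y)
```

The fitted tree recovers the inflow threshold as the dominant split, which is the sense in which such trees "extract" operator decision rules.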
Yasuda, Akihito; Onuki, Yoshinori; Obata, Yasuko; Yamamoto, Rie; Takayama, Kozo
2013-01-01
The "quality by design" concept in pharmaceutical formulation development requires the establishment of a science-based rationale and a design space. We integrated thin-plate spline (TPS) interpolation and Kohonen's self-organizing map (SOM) to visualize the latent structure underlying causal factors and pharmaceutical responses. As a model pharmaceutical product, theophylline tablets were prepared based on a standard formulation. The tensile strength, disintegration time, and stability of these variables were measured as response variables. These responses were predicted quantitatively based on nonlinear TPS. A large amount of data on these tablets was generated and classified into several clusters using an SOM. The experimental values of the responses were predicted with high accuracy, and the data generated for the tablets were classified into several distinct clusters. The SOM feature map allowed us to analyze the global and local correlations between causal factors and tablet characteristics. The results of this study suggest that increasing the proportion of microcrystalline cellulose (MCC) improved the tensile strength and the stability of tensile strength of these theophylline tablets. In addition, the proportion of MCC has an optimum value for disintegration time and stability of disintegration. Increasing the proportion of magnesium stearate extended disintegration time. Increasing the compression force improved tensile strength, but degraded the stability of disintegration. This technique provides a better understanding of the relationships between causal factors and pharmaceutical responses in theophylline tablet formulations.
Oscillating flow loss test results in Stirling engine heat exchangers
NASA Technical Reports Server (NTRS)
Koester, G.; Howell, S.; Wood, G.; Miller, E.; Gedeon, D.
1990-01-01
The results are presented for a test program designed to generate a database of oscillating flow loss information applicable to Stirling engine heat exchangers. The tests were performed on heater/cooler tubes of various lengths and entrance/exit configurations, on stacked and sintered screen regenerators of various wire diameters, and on Brunswick and Metex random fiber regenerators. They covered a range of oscillating flow parameters consistent with Stirling engine heat exchanger experience. The tests were performed on the Sunpower oscillating flow loss rig, which is based on a variable-stroke, variable-frequency linear drive motor. In general, the results are presented by comparing the measured oscillating flow losses to calculated flow losses based on the cycle integration of steady flow friction factors and entrance/exit loss coefficients.
NASA Astrophysics Data System (ADS)
Khan, Arina; Khan, Haris Hasan; Umar, Rashid
2017-12-01
In this study, groundwater quality of an alluvial aquifer in the western Ganges basin is assessed using a GIS-based groundwater quality index (GQI) concept that uses groundwater quality data from field survey and laboratory analysis. Groundwater samples were collected from 42 wells during the pre-monsoon and post-monsoon periods of 2012 and analysed for pH, EC, TDS, anions (Cl, SO4, NO3), and cations (Ca, Mg, Na). To generate the index, several parameters were selected based on WHO recommendations. The spatially variable grids of each parameter were normalized against the WHO standards and finally integrated into a GQI grid. The mean GQI values for both seasons suggest good groundwater quality. However, spatial variations exist and are represented by the GQI maps of both seasons. This spatial variability was compared with the existing land use, mapped using the high-resolution satellite imagery available in Google Earth. The GQI grids were compared to the land-use map using an innovative GIS-based method. Results indicate that the spatial variability of groundwater quality in the region is not fully controlled by the land-use pattern. This probably reflects the diffuse nature of land-use classes, especially settlements and plantations.
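The normalize-then-integrate step can be sketched as follows. The WHO limit values and the scoring function here are illustrative stand-ins; the paper's actual rating and integration scheme may differ:

```python
# Hypothetical WHO desirable limits (mg/L); illustrative values only
WHO_LIMIT = {"TDS": 500.0, "Cl": 250.0, "SO4": 250.0, "NO3": 50.0}

def gqi(sample):
    """Toy groundwater quality index: each concentration is scored from
    100 (ideal, zero concentration) down to 0 (at or above twice the WHO
    limit), then the parameter scores are averaged into one index."""
    subs = [100.0 * max(0.0, 1.0 - sample[p] / (2.0 * lim))
            for p, lim in WHO_LIMIT.items()]
    return sum(subs) / len(subs)

clean = {"TDS": 150.0, "Cl": 40.0, "SO4": 30.0, "NO3": 5.0}
at_limit = {p: lim for p, lim in WHO_LIMIT.items()}
```

Applied cell-by-cell to the interpolated parameter grids, the same scoring produces the spatial GQI surface the abstract describes.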
Lin, Frank Po-Yen; Pokorny, Adrian; Teng, Christina; Epstein, Richard J
2017-07-31
Vast amounts of clinically relevant text-based variables lie undiscovered and unexploited in electronic medical records (EMR). To exploit this untapped resource, and thus facilitate the discovery of informative covariates from unstructured clinical narratives, we have built a novel computational pipeline termed Text-based Exploratory Pattern Analyser for Prognosticator and Associator discovery (TEPAPA). This pipeline combines semantic-free natural language processing (NLP), regular expression induction, and statistical association testing to identify conserved text patterns associated with outcome variables of clinical interest. When we applied TEPAPA to a cohort of head and neck squamous cell carcinoma patients, plausible concepts known to be correlated with human papilloma virus (HPV) status were identified from the EMR text, including site of primary disease, tumour stage, pathologic characteristics, and treatment modalities. Similarly, correlates of other variables (including gender, nodal status, recurrent disease, smoking and alcohol status) were also reliably recovered. Using highly-associated patterns as covariates, a patient's HPV status was classifiable using a bootstrap analysis with a mean area under the ROC curve of 0.861, suggesting its predictive utility in supporting EMR-based phenotyping tasks. These data support using this integrative approach to efficiently identify disease-associated factors from unstructured EMR narratives, and thus to efficiently generate testable hypotheses.
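The core association-testing step, matching a candidate text pattern against an outcome label and scoring it with a one-sided Fisher exact test, can be sketched as follows. The notes, labels, and pattern below are invented toy data, and TEPAPA's actual pattern-induction machinery is far richer than a fixed substring match:

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact test for a 2x2 table [[a, b], [c, d]]:
    P(count >= a) under the hypergeometric null."""
    n, row1, col1 = a + b + c + d, a + b, a + c
    denom = comb(n, col1)
    p = 0.0
    for k in range(a, min(row1, col1) + 1):
        p += comb(row1, k) * comb(n - row1, col1 - k) / denom
    return p

# Toy EMR-style notes and outcome labels (hypothetical text)
notes = ["base of tongue lesion", "larynx tumour", "base of tongue mass",
         "oral cavity lesion", "base of tongue primary", "larynx primary"]
hpv_pos = [1, 0, 1, 0, 1, 0]

pattern = "base of tongue"
a = sum(p and (pattern in t) for t, p in zip(notes, hpv_pos))          # pos, match
b = sum(p and (pattern not in t) for t, p in zip(notes, hpv_pos))      # pos, no match
c = sum((not p) and (pattern in t) for t, p in zip(notes, hpv_pos))    # neg, match
d = sum((not p) and (pattern not in t) for t, p in zip(notes, hpv_pos))
p_value = fisher_one_sided(a, b, c, d)
```

Patterns whose p-values survive correction become the covariates fed into the downstream classifier.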
S. A. Covert; P. R. Robichaud; W. J. Elliot; T. E. Link
2005-01-01
This study evaluates runoff predictions generated by GeoWEPP (Geo-spatial interface to the Water Erosion Prediction Project) and a modified version of WEPP v98.4 for forest soils. Three small (2 to 9 ha) watersheds in the mountains of the interior Northwest were monitored for several years following timber harvest and prescribed fires. Observed climate variables,...
NASA Astrophysics Data System (ADS)
Dowell, Laurie; Gary, Jack; Illingworth, Bill; Sargent, Tom
1987-05-01
Gathering information, necessary forms, and financial calculations needed to generate a "capital investment proposal" is an extremely complex and difficult process. The intent of the capital investment proposal is to ensure management that the proposed investment has been thoroughly investigated and will have a positive impact on corporate goals. Meeting this requirement typically takes four or five experts a total of 12 hours to generate a "Capital Package." A Capital Expert System was therefore developed using "Personal Consultant." The completed system is hybrid and as such does not depend solely on rules but incorporates several different software packages that communicate through variables and functions passed from one to another. This paper describes the use of expert system techniques, methodology in building the knowledge base, contexts, LISP functions, data base, and special challenges that had to be overcome to create this system. The Capital Expert System is the successful result of a unique integration of artificial intelligence with business accounting, financial forms generation, and investment proposal expertise.
Smart Energy Cryo-refrigerator Technology for the next generation Very Large Array
NASA Astrophysics Data System (ADS)
Spagna, Stefano
2018-01-01
We describe a “smart energy” cryocooler technology architecture for the next generation Very Large Array that makes use of multiple variable-frequency cold heads driven from a single variable-speed, air-cooled compressor. Preliminary experiments indicate that the compressor's variable flow control, advanced diagnostics, and the cryo-refrigerator's low vibration provide a uniquely energy-efficient capability for the very large number of antennas that will be employed in this array.
Biologic variability and correlation of platelet function testing in healthy dogs.
Blois, Shauna L; Lang, Sean T; Wood, R Darren; Monteith, Gabrielle
2015-12-01
Platelet function tests are influenced by biologic variability, including inter-individual (CVG) and intra-individual (CVI) variability, as well as analytic (CVA) variability. Variability in canine platelet function testing is unknown, but if excessive, would make it difficult to interpret serial results. Additionally, the correlation between platelet function tests is poor in people, but not well described in dogs. The aims were to: (1) identify the effect of variation in preanalytic factors (venipuncture, elapsed time until analysis) on platelet function tests; (2) calculate analytic and biologic variability of adenosine diphosphate (ADP)- and arachidonic acid (AA)-induced thromboelastograph platelet mapping (TEG-PM); ADP-, AA-, and collagen-induced whole blood platelet aggregometry (WBA); and collagen/ADP and collagen/epinephrine platelet function analysis (PFA-CADP, PFA-CEPI); and (3) determine the correlation between these variables. In this prospective observational trial, platelet function was measured once every 7 days, for 4 consecutive weeks, in 9 healthy dogs. In addition, CBC, TEG-PM, WBA, and PFA were performed. Overall coefficients of variability ranged from 13.3% to 87.8% for the platelet function tests. Biologic variability was highest for AA-induced maximum amplitude generated during TEG-PM (MAAA; CVG = 95.3%, CVI = 60.8%). Use of population-based reference intervals (RI) was determined appropriate only for PFA-CADP (index of individuality = 10.7). There was poor correlation between most platelet function tests. Use of population-based RI appears inappropriate for most platelet function tests, and the tests correlate poorly with one another. Future studies on biologic variability and correlation of platelet function tests should be performed in dogs with platelet dysfunction and those treated with antiplatelet therapy. © 2015 American Society for Veterinary Clinical Pathology.
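The variability components can be estimated from repeated measures roughly as below. The dog data are invented, and the CVI/CVG form of the index of individuality used here is one common convention (definitions vary, and the study's exact formula is not stated in this abstract):

```python
import numpy as np

def cv(x):
    """Coefficient of variation, percent."""
    x = np.asarray(x, float)
    return 100.0 * x.std(ddof=1) / x.mean()

def biologic_variability(measurements):
    """measurements: dict of subject -> repeated results over time.
    CVI: mean within-subject CV; CVG: CV of the subject means.
    Index of individuality here is CVI / CVG (an illustrative convention;
    a low value argues against population-based reference intervals)."""
    within = [cv(vals) for vals in measurements.values()]
    means = [np.mean(vals) for vals in measurements.values()]
    cvi = float(np.mean(within))
    cvg = cv(means)
    return cvi, cvg, cvi / cvg

# Hypothetical weekly results for three dogs over four weeks
dogs = {"A": [50, 52, 49, 51], "B": [70, 68, 72, 71], "C": [40, 41, 39, 42]}
cvi, cvg, ii = biologic_variability(dogs)
```

In this toy data the within-dog scatter is small relative to the between-dog spread, so the index is well below 1, the situation in which individual (not population) reference intervals are preferred.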
Use of genome editing tools in human stem cell-based disease modeling and precision medicine.
Wei, Yu-da; Li, Shuang; Liu, Gai-gai; Zhang, Yong-xian; Ding, Qiu-rong
2015-10-01
Precision medicine emerges as a new approach that takes into account individual variability. The successful conduct of precision medicine requires the use of precise disease models. Human pluripotent stem cells (hPSCs), as well as adult stem cells, can be differentiated into a variety of human somatic cell types that can be used for research and drug screening. The development of genome editing technology over the past few years, especially the CRISPR/Cas system, has made it feasible to precisely and efficiently edit the genetic background. Therefore, disease modeling by using a combination of human stem cells and genome editing technology has offered a new platform to generate "personalized" disease models, which allow the study of the contribution of individual genetic variabilities to disease progression and the development of precise treatments. In this review, recent advances in the use of genome editing in human stem cells and the generation of stem cell models for rare diseases and cancers are discussed.
Miller, John J; Eackles, Michael S.; Stauffer, Jay R; King, Timothy L.
2015-01-01
We characterized variation within the mitochondrial genomes of the invasive silver carp (Hypophthalmichthys molitrix) and bighead carp (H. nobilis) from the Mississippi River drainage by mapping our Next-Generation sequences to their publicly available genomes. Variant detection resulted in 338 single-nucleotide polymorphisms for H. molitrix and 39 for H. nobilis. The much greater genetic variation in H. molitrix mitochondria relative to H. nobilis may be indicative of a greater North American female effective population size of the former. When variation was quantified by gene, many tRNA loci appear to have little or no variability based on our results whereas protein-coding regions were more frequently polymorphic. These results provide biologists with additional regions of DNA to be used as markers to study the invasion dynamics of these species.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ye, Xujun, E-mail: yexujun@cc.hirosaki-u.ac.jp; Faculty of Agriculture and Life Sciences, Hirosaki University, Aomori 036-8561; Sakai, Kenshi, E-mail: ken@cc.tuat.ac.jp
Alternate bearing or masting is a yield variability phenomenon in perennial crops. The complex dynamics in this phenomenon have stimulated much ecological research. Motivated by data from an eight-year experiment with forty-eight individual trees, we explored the mechanism inherent to these dynamics in Satsuma mandarin (Citrus unshiu Marc.). By integrating high-resolution imaging technology, we found that the canopy structure and reproduction output of individual citrus crops are mutually dependent on each other. Furthermore, it was revealed that the mature leaves in early season contribute their energy to the fruiting of the current growing season, whereas the younger leaves show a delayed contribution to the next growing season. We thus hypothesized that the annual yield variability might be caused by the limited and time-delayed resource allocation in individual plants. A novel lattice model based on this hypothesis demonstrates that this pattern of resource allocation will generate oscillations and chaos in citrus yield.
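The hypothesized mechanism, limited and time-delayed resource allocation producing yield oscillations, is in the spirit of the classic resource budget model of masting. The sketch below is that standard single-tree model with invented parameters, not the authors' spatial lattice model:

```python
def resource_budget(n_years, ps=1.0, lt=5.0, rc=3.0, s0=5.4):
    """Isagi-type resource budget model: a tree accumulates photosynthate
    ps each year; when stores exceed the threshold lt it flowers, and
    fruiting costs rc times the flowering cost, depleting the stores.
    Depending on rc, the yield series oscillates or becomes chaotic."""
    stores, yields = s0, []
    for _ in range(n_years):
        stores += ps
        if stores > lt:
            cf = stores - lt           # flowering cost = surplus above threshold
            yields.append(rc * cf)     # fruit yield proportional to that cost
            stores = lt - rc * cf      # stores drop well below the threshold
        else:
            yields.append(0.0)         # an "off" year: accumulate only
    return yields

y = resource_budget(40)
```

Even this minimal version produces irregular runs of zero-yield years punctuated by fruiting years of varying size, i.e., alternate bearing.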
Amarillo, Yimy; Mato, Germán; Nadal, Marcela S
2015-01-01
Thalamocortical neurons are involved in the generation and maintenance of brain rhythms associated with global functional states. The repetitive burst firing of TC neurons at delta frequencies (1-4 Hz) has been linked to the oscillations recorded during deep sleep and during episodes of absence seizures. To get insight into the biophysical properties that are the basis for intrinsic delta oscillations in these neurons, we performed a bifurcation analysis of a minimal conductance-based thalamocortical neuron model including only the IT channel and the sodium and potassium leak channels. This analysis unveils the dynamics of repetitive burst firing of TC neurons, and describes how the interplay between the amplifying variable mT and the recovering variable hT of the calcium channel IT is sufficient to generate low threshold oscillations in the delta band. We also explored the role of the hyperpolarization activated cationic current Ih in this reduced model and determine that, albeit not required, Ih amplifies and stabilizes the oscillation.
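A heavily simplified Euler integration of such a minimal membrane model is sketched below. The kinetic curves, conductances, and the constant hT time constant are illustrative textbook-style values, not the paper's fitted parameters, and whether this particular parameterization actually oscillates in the delta band depends on them:

```python
import numpy as np

# Minimal TC-neuron sketch: sodium and potassium leaks plus the
# low-threshold calcium current IT with instantaneous activation mT
# (the amplifying variable) and slow inactivation hT (the recovering one).
E_CA, E_NA, E_K = 120.0, 50.0, -90.0      # reversal potentials [mV]
G_T, G_NAL, G_KL = 2.0, 0.004, 0.04       # conductances [mS/cm^2] (illustrative)
C, TAU_H = 1.0, 50.0                      # capacitance, hT time constant [ms]

def m_inf(v):
    """Steady-state IT activation (amplifying variable mT)."""
    return 1.0 / (1.0 + np.exp(-(v + 57.0) / 6.2))

def h_inf(v):
    """Steady-state IT inactivation (recovering variable hT)."""
    return 1.0 / (1.0 + np.exp((v + 81.0) / 4.0))

def simulate(t_ms=2000.0, dt=0.05, v0=-70.0):
    v, h = v0, h_inf(v0)
    trace = []
    for _ in range(int(t_ms / dt)):
        i_t = G_T * m_inf(v) ** 2 * h * (v - E_CA)        # T-type Ca current
        i_leak = G_NAL * (v - E_NA) + G_KL * (v - E_K)    # leak currents
        v += dt * (-(i_t + i_leak)) / C
        h += dt * (h_inf(v) - h) / TAU_H                  # slow recovery
        trace.append(v)
    return np.array(trace)

trace = simulate()
```

The bifurcation analysis in the paper amounts to tracking how the fixed points and limit cycles of exactly this kind of two-variable (v, hT) system change as parameters such as the leak conductances are varied.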
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lagerlöf, Jakob H., E-mail: Jakob@radfys.gu.se; Kindblom, Jon; Bernhardt, Peter
2014-09-15
Purpose: To construct a Monte Carlo (MC)-based simulation model for analyzing the dependence of tumor oxygen distribution on different variables related to tumor vasculature [blood velocity, vessel-to-vessel proximity (vessel proximity), and inflowing oxygen partial pressure (pO{sub 2})]. Methods: A voxel-based tissue model containing parallel capillaries with square cross-sections (sides of 10 μm) was constructed. Green's function was used for diffusion calculations and Michaelis-Menten's kinetics to manage oxygen consumption. The model was tuned to approximately reproduce the oxygenational status of a renal carcinoma; the depth oxygenation curves (DOC) were fitted with an analytical expression to facilitate rapid MC simulations of tumor oxygen distribution. DOCs were simulated with three variables at three settings each (blood velocity, vessel proximity, and inflowing pO{sub 2}), which resulted in 27 combinations of conditions. To create a model that simulated variable oxygen distributions, the oxygen tension at a specific point was randomly sampled with trilinear interpolation in the dataset from the first simulation. Six correlations between blood velocity, vessel proximity, and inflowing pO{sub 2} were hypothesized. Variable models with correlated parameters were compared to each other and to a nonvariable, DOC-based model to evaluate the differences in simulated oxygen distributions and tumor radiosensitivities for different tumor sizes. Results: For tumors with radii ranging from 5 to 30 mm, the nonvariable DOC model tended to generate normal or log-normal oxygen distributions, with a cut-off at zero. The pO{sub 2} distributions simulated with the six-variable DOC models were quite different from the distributions generated with the nonvariable DOC model; in the former case the variable models simulated oxygen distributions that were more similar to in vivo results found in the literature. 
For larger tumors, the oxygen distributions became truncated in the lower end, due to anoxia, but smaller tumors showed undisturbed oxygen distributions. The six different models with correlated parameters generated three classes of oxygen distributions. The first was a hypothetical, negative covariance between vessel proximity and pO{sub 2} (VPO-C scenario); the second was a hypothetical positive covariance between vessel proximity and pO{sub 2} (VPO+C scenario); and the third was the hypothesis of no correlation between vessel proximity and pO{sub 2} (UP scenario). The VPO-C scenario produced a distinctly different oxygen distribution than the two other scenarios. The shape of the VPO-C scenario was similar to that of the nonvariable DOC model, and the larger the tumor, the greater the similarity between the two models. For all simulations, the mean oxygen tension decreased and the hypoxic fraction increased with tumor size. The absorbed dose required for definitive tumor control was highest for the VPO+C scenario, followed by the UP and VPO-C scenarios. Conclusions: A novel MC algorithm was presented which simulated oxygen distributions and radiation response for various biological parameter values. The analysis showed that the VPO-C scenario generated a clearly different oxygen distribution from the VPO+C scenario; the former exhibited a lower hypoxic fraction and higher radiosensitivity. In future studies, this modeling approach might be valuable for qualitative analyses of factors that affect oxygen distribution as well as analyses of specific experimental and clinical situations.
Lagerlöf, Jakob H; Kindblom, Jon; Bernhardt, Peter
2014-09-01
To construct a Monte Carlo (MC)-based simulation model for analyzing the dependence of tumor oxygen distribution on different variables related to tumor vasculature [blood velocity, vessel-to-vessel proximity (vessel proximity), and inflowing oxygen partial pressure (pO2)]. A voxel-based tissue model containing parallel capillaries with square cross-sections (sides of 10 μm) was constructed. Green's function was used for diffusion calculations and Michaelis-Menten's kinetics to manage oxygen consumption. The model was tuned to approximately reproduce the oxygenational status of a renal carcinoma; the depth oxygenation curves (DOC) were fitted with an analytical expression to facilitate rapid MC simulations of tumor oxygen distribution. DOCs were simulated with three variables at three settings each (blood velocity, vessel proximity, and inflowing pO2), which resulted in 27 combinations of conditions. To create a model that simulated variable oxygen distributions, the oxygen tension at a specific point was randomly sampled with trilinear interpolation in the dataset from the first simulation. Six correlations between blood velocity, vessel proximity, and inflowing pO2 were hypothesized. Variable models with correlated parameters were compared to each other and to a nonvariable, DOC-based model to evaluate the differences in simulated oxygen distributions and tumor radiosensitivities for different tumor sizes. For tumors with radii ranging from 5 to 30 mm, the nonvariable DOC model tended to generate normal or log-normal oxygen distributions, with a cut-off at zero. The pO2 distributions simulated with the six-variable DOC models were quite different from the distributions generated with the nonvariable DOC model; in the former case the variable models simulated oxygen distributions that were more similar to in vivo results found in the literature. 
For larger tumors, the oxygen distributions became truncated in the lower end, due to anoxia, but smaller tumors showed undisturbed oxygen distributions. The six different models with correlated parameters generated three classes of oxygen distributions. The first was a hypothetical, negative covariance between vessel proximity and pO2 (VPO-C scenario); the second was a hypothetical positive covariance between vessel proximity and pO2 (VPO+C scenario); and the third was the hypothesis of no correlation between vessel proximity and pO2 (UP scenario). The VPO-C scenario produced a distinctly different oxygen distribution than the two other scenarios. The shape of the VPO-C scenario was similar to that of the nonvariable DOC model, and the larger the tumor, the greater the similarity between the two models. For all simulations, the mean oxygen tension decreased and the hypoxic fraction increased with tumor size. The absorbed dose required for definitive tumor control was highest for the VPO+C scenario, followed by the UP and VPO-C scenarios. A novel MC algorithm was presented which simulated oxygen distributions and radiation response for various biological parameter values. The analysis showed that the VPO-C scenario generated a clearly different oxygen distribution from the VPO+C scenario; the former exhibited a lower hypoxic fraction and higher radiosensitivity. In future studies, this modeling approach might be valuable for qualitative analyses of factors that affect oxygen distribution as well as analyses of specific experimental and clinical situations.
The risk of misclassifying subjects within principal component based asset index
2014-01-01
The asset index is often used as a measure of socioeconomic status in empirical research as an explanatory variable or to control confounding. Principal component analysis (PCA) is frequently used to create the asset index. We conducted a simulation study to explore how accurately the principal component based asset index reflects the study subjects’ actual poverty level, when the actual poverty level is generated by a simple factor analytic model. In the simulation study using the PC-based asset index, only 1% to 4% of subjects preserved their real position in a quintile scale of assets; between 44% and 82% of subjects were misclassified into the wrong asset quintile. If the PC-based asset index explained less than 30% of the total variance in the component variables, then we consistently observed more than 50% misclassification across quintiles of the index. The frequency of misclassification suggests that the PC-based asset index may not provide a valid measure of poverty level and should be used cautiously as a measure of socioeconomic status. PMID:24987446
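The simulation design can be reproduced in miniature: generate a latent poverty factor, derive binary asset indicators from it, build a first-principal-component index, and compare quintile assignments. All sizes, loadings, and noise levels below are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 2000, 10

# Latent "true poverty" factor weakly drives 10 binary asset indicators
f = rng.normal(size=n)
loadings = rng.uniform(0.2, 0.6, size=p)
X = (loadings * f[:, None] + rng.normal(size=(n, p)) > 0).astype(float)

# Asset index = first principal component of the centered indicator matrix
Z = X - X.mean(0)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
index = Z @ Vt[0]
if np.corrcoef(index, f)[0, 1] < 0:       # a PC's sign is arbitrary
    index = -index
explained = S[0] ** 2 / (S ** 2).sum()    # variance explained by the index

def quintile(v):
    """0..4 quintile label for each value."""
    return np.searchsorted(np.quantile(v, [0.2, 0.4, 0.6, 0.8]), v)

# Fraction of subjects placed in the wrong quintile of true poverty
misclassified = np.mean(quintile(index) != quintile(f))
```

With weak loadings the first component captures only a modest share of the variance, and a substantial fraction of subjects land in the wrong quintile, which is the paper's central point.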
NASA Astrophysics Data System (ADS)
Opanchuk, B.; Arnaud, L.; Reid, M. D.
2014-06-01
We demonstrate the principle of one-sided device-independent continuous-variable (CV) quantum information. In situations of no trust, we show by enactment how the use of standard CV entanglement criteria can mislead Charlie into thinking that Alice and Bob share entanglement, when the data are actually generated classically using a local-hidden-variable theory based on the Wigner function. We distinguish between criteria that demonstrate CV entanglement, and criteria that demonstrate the CV Einstein-Podolsky-Rosen (EPR) steering paradox. We show that the latter, but not the former, are necessarily one-sided device-independent entanglement witnesses, and can be used by Charlie to signify genuine EPR entanglement, if he trusts only Alice. A monogamy result for the EPR steering paradox confirms the security of the shared amplitude values in that case.
NASA Astrophysics Data System (ADS)
Ottaviani, Carlo; Spedalieri, Gaetana; Braunstein, Samuel L.; Pirandola, Stefano
2015-02-01
We consider the continuous-variable protocol of Pirandola et al. [arXiv:1312.4104] where the secret key is established by the measurement of an untrusted relay. In this network protocol, two authorized parties are connected to an untrusted relay by insecure quantum links. Secret correlations are generated by a continuous-variable Bell detection performed on incoming coherent states. In the present work we provide a detailed study of the symmetric configuration, where the relay is midway between the parties. We analyze symmetric eavesdropping strategies against the quantum links explicitly showing that, at fixed transmissivity and thermal noise, two-mode coherent attacks are optimal, manifestly outperforming one-mode collective attacks based on independent entangling cloners. Such an advantage is shown both in terms of security threshold and secret-key rate.
NASA Astrophysics Data System (ADS)
Morimoto, Shigeo; Nakamura, Tomohiko; Takeda, Yoji
This paper proposes sensorless output-power-maximization control of a wind generation system. A permanent magnet synchronous generator (PMSG) is used as a variable speed generator in the proposed system. The generator torque is suitably controlled according to the generator speed, and thus the power from the wind turbine settles at the maximum power point under the proposed MPPT control method, which requires no information about wind velocity. Moreover, the maximum available generated power is obtained by optimum current vector control. The current vector of the PMSG is optimally controlled according to the generator speed and the required torque in order to minimize the losses of the PMSG, considering the voltage and current constraints. The proposed wind power generation system can thus be realized without mechanical sensors such as a wind velocity detector or a position sensor. Several experimental results show the effectiveness of the proposed control method.
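A speed-torque MPPT law that needs no wind measurement is commonly implemented as optimal-torque control, T_gen = k_opt·ω². The sketch below uses an invented quadratic power-coefficient curve and arbitrary turbine constants (none taken from the paper) to show convergence to the optimal tip-speed ratio:

```python
import numpy as np

# Illustrative turbine/generator constants (not from the paper)
rho, R = 1.2, 1.5                          # air density, rotor radius
A = np.pi * R ** 2                         # swept area
cp_max, lam_opt = 0.45, 7.0                # peak power coefficient, optimal tip-speed ratio
J = 2.0                                    # rotor inertia [kg m^2]

def cp(lam):
    """Toy power-coefficient curve peaking at lam_opt."""
    return max(0.0, cp_max - 0.01 * (lam - lam_opt) ** 2)

# Optimal-torque MPPT gain: with T_gen = k_opt * omega^2, the only stable
# equilibrium of the rotor dynamics sits at the peak of the cp curve.
k_opt = 0.5 * rho * A * R ** 3 * cp_max / lam_opt ** 3

v, omega, dt = 8.0, 20.0, 0.01             # wind speed, initial rotor speed, step
for _ in range(50000):
    lam = omega * R / v                    # tip-speed ratio
    t_aero = 0.5 * rho * A * v ** 3 * cp(lam) / omega   # aerodynamic torque
    omega += dt * (t_aero - k_opt * omega ** 2) / J     # rotor dynamics
lam_final = omega * R / v
```

The controller only ever measures generator speed, yet the rotor settles at the tip-speed ratio that maximizes captured power, which is the essence of wind-sensorless MPPT.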
Gene Expression Signatures Based on Variability can Robustly Predict Tumor Progression and Prognosis
Dinalankara, Wikum; Bravo, Héctor Corrada
2015-01-01
Gene expression signatures are commonly used to create cancer prognosis and diagnosis methods, yet only a small number of them are successfully deployed in the clinic since many fail to replicate performance on subsequent validation. A primary reason for this lack of reproducibility is the fact that these signatures attempt to model the highly variable and unstable genomic behavior of cancer. Our group recently introduced gene expression anti-profiles as a robust methodology to derive gene expression signatures based on the observation that while gene expression measurements are highly heterogeneous across tumors of a specific cancer type relative to the normal tissue, their degree of deviation from normal tissue expression in specific genes involved in tissue differentiation is a stable tumor mark that is reproducible across experiments and cancer types. Here we show that constructing gene expression signatures based on variability and the anti-profile approach yields classifiers capable of successfully distinguishing benign growths from cancerous growths based on deviation from normal expression. We then show that this same approach generates stable and reproducible signatures that predict probability of relapse and survival based on tumor gene expression. These results suggest that using the anti-profile framework for the discovery of genomic signatures is an avenue leading to the development of reproducible signatures suitable for adoption in clinical settings. PMID:26078586
Fang, Lin; Schinke, Steven P
2011-01-01
Underage drinking among Asian American adolescent girls is not well understood. Based on family interaction theory, the study examined the interrelationships among acculturation variables, family relationships, girls' depressed mood, peer alcohol use, and girls' alcohol use in a sample of 130 Asian American mother-daughter dyads. The mediating role of family relationships, girls' depressed mood, and peer alcohol use on girls' drinking was also assessed. The study advances knowledge related to alcohol use among early Asian American adolescent girls, highlights the effect of immigrant generation status and family relationships, and has implications for culturally specific underage drinking prevention programs.
Computing an operating parameter of a unified power flow controller
Wilson, David G.; Robinett, III, Rush D.
2017-12-26
A Unified Power Flow Controller described herein comprises a sensor that outputs at least one sensed condition, a processor that receives the at least one sensed condition, a memory that comprises control logic that is executable by the processor; and power electronics that comprise power storage, wherein the processor causes the power electronics to selectively cause the power storage to act as one of a power generator or a load based at least in part upon the at least one sensed condition output by the sensor and the control logic, and wherein at least one operating parameter of the power electronics is designed to facilitate maximal transmittal of electrical power generated at a variable power generation system to a grid system while meeting power constraints set forth by the electrical power grid.
Computing an operating parameter of a unified power flow controller
Wilson, David G; Robinett, III, Rush D
2015-01-06
A Unified Power Flow Controller described herein comprises a sensor that outputs at least one sensed condition, a processor that receives the at least one sensed condition, a memory that comprises control logic that is executable by the processor; and power electronics that comprise power storage, wherein the processor causes the power electronics to selectively cause the power storage to act as one of a power generator or a load based at least in part upon the at least one sensed condition output by the sensor and the control logic, and wherein at least one operating parameter of the power electronics is designed to facilitate maximal transmittal of electrical power generated at a variable power generation system to a grid system while meeting power constraints set forth by the electrical power grid.
Automated consensus contour building for prostate MRI.
Khalvati, Farzad
2014-01-01
Inter-observer variability is the lack of agreement among clinicians in contouring a given organ or tumour in a medical image. The variability in medical image contouring is a source of uncertainty in radiation treatment planning. Consensus contour of a given case, which was proposed to reduce the variability, is generated by combining the manually generated contours of several clinicians. However, having access to several clinicians (e.g., radiation oncologists) to generate a consensus contour for one patient is costly. This paper presents an algorithm that automatically generates a consensus contour for a given case using the atlases of different clinicians. The algorithm was applied to prostate MR images of 15 patients manually contoured by 5 clinicians. The automatic consensus contours were compared to manual consensus contours where a median Dice similarity coefficient (DSC) of 88% was achieved.
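The consensus-and-comparison idea can be sketched in a few lines: combine several clinicians' binary contour masks by pixel-wise majority vote and score agreement with the Dice similarity coefficient (DSC). This is a minimal illustration only; the paper's algorithm builds the consensus from per-clinician atlases, not a plain vote.

```python
# Minimal sketch: majority-vote consensus of binary contour masks and the
# Dice similarity coefficient (DSC) used to compare two segmentations.
# Masks are flat lists of 0/1 pixels; all names here are illustrative.

def consensus_mask(masks):
    """Pixel-wise majority vote across several binary masks."""
    n = len(masks)
    return [1 if sum(px) * 2 >= n + 1 else 0 for px in zip(*masks)]

def dice(a, b):
    """DSC = 2|A intersect B| / (|A| + |B|)."""
    inter = sum(x & y for x, y in zip(a, b))
    return 2.0 * inter / (sum(a) + sum(b))

# Three clinicians' masks over a 6-pixel strip:
m1 = [1, 1, 1, 0, 0, 0]
m2 = [1, 1, 0, 0, 0, 0]
m3 = [1, 1, 1, 1, 0, 0]
cons = consensus_mask([m1, m2, m3])   # majority vote over the three masks
score = dice(cons, m3)                # agreement of consensus with one mask
```

In the paper the automatic consensus is compared against the manual consensus in exactly this way, with a median DSC of 88% across patients.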
Long-range prediction of Indian summer monsoon rainfall using data mining and statistical approaches
NASA Astrophysics Data System (ADS)
H, Vathsala; Koolagudi, Shashidhar G.
2017-10-01
This paper presents a hybrid model to better predict Indian summer monsoon rainfall. The algorithm considers suitable techniques for processing dense datasets. The proposed three-step algorithm comprises closed-itemset-generation-based association rule mining for feature selection, cluster membership for dimensionality reduction, and a simple logistic function for prediction. An application classifying rainfall into flood, excess, normal, deficit, and drought categories, based on 36 predictors consisting of land and ocean variables, is presented. Results show good accuracy over the considered study period of 37 years (1969-2005).
Design of the primary and secondary Pre-TRMM and TRMM ground truth sites
NASA Technical Reports Server (NTRS)
Garstang, Michael; Austin, Geoffrey; Cosgrove, Claire
1991-01-01
Results generated over six months are covered in five manuscripts: (1) estimates of rain volume over the Peninsula of Florida during the summer season based upon the Manually Digitized Radar data; (2) the diurnal characteristics of rainfall over Florida and over the near shore waters; (3) convective rainfall as measured over the east coast of central Florida; (4) the spatial and temporal variability of rainfall over Florida; and (5) comparisons between the land based radar and an optical raingage onboard an anchored buoy 50 km offshore.
Probabilistic atlas and geometric variability estimation to drive tissue segmentation.
Xu, Hao; Thirion, Bertrand; Allassonnière, Stéphanie
2014-09-10
Computerized anatomical atlases play an important role in medical image analysis. While an atlas usually refers to a standard or mean image (also called a template) that presumably represents a given population well, a template alone is not enough to characterize the observed population in detail. A template image should be learned jointly with the geometric variability of the shapes represented in the observations. These two quantities will in the sequel form the atlas of the corresponding population. The geometric variability is modeled as deformations of the template image so that it fits the observations. In this paper, we provide a detailed analysis of a new generative statistical model based on dense deformable templates that represents several tissue types observed in medical images. Our atlas contains both an estimation of probability maps of each tissue (called a class) and the deformation metric. We use a stochastic algorithm for the estimation of the probabilistic atlas given a dataset. This atlas is then used in an atlas-based segmentation method to segment new images. Experiments are shown on brain T1 MRI datasets. Copyright © 2014 John Wiley & Sons, Ltd.
Rapid isolation of IgNAR variable single-domain antibody fragments from a shark synthetic library.
Shao, Cui-Ying; Secombes, Chris J; Porter, Andrew J
2007-01-01
The immunoglobulin isotype IgNAR (Novel Antigen Receptor) was discovered in the serum of the nurse shark (Ginglymostoma cirratum) and wobbegong shark (Orectolobus maculatus) as a homodimer of two protein chains, each composed of a single variable (V) domain and five constant domains. The IgNAR variable domain contains an intact antigen-binding site and functions as an independent domain able to bind antigen with both high specificity and affinity. Here we describe the successful construction of a synthetic phage-displayed library based upon the scaffold of a single anti-lysozyme clone, HEL-5A7, which was previously selected from an immune IgNAR variable domain library. The complementarity-determining region 3 (CDR3) loop of this clone was varied in both length and composition, and the derived library was panned against two model proteins, lysozyme and leptin. A single anti-lysozyme clone (Ly-X20) and anti-leptin clone (Lep-12E1) were selected for further study. Both clones were shown to be functionally expressed in Escherichia coli, extremely thermostable, and able to bind their corresponding antigens specifically. The results demonstrate that a synthetic IgNAR variable domain library based on a single framework scaffold can be used as a route to generate antigen binders quickly, easily, and without the need for immunization.
Unsupervised classification of variable stars
NASA Astrophysics Data System (ADS)
Valenzuela, Lucas; Pichara, Karim
2018-03-01
During the past 10 years, a considerable amount of effort has been made to develop algorithms for automatic classification of variable stars. That has been achieved primarily by applying machine learning methods to photometric data sets where objects are represented as light curves. Classifiers require training sets to learn the underlying patterns that allow the separation among classes. Unfortunately, building training sets is an expensive process that demands a lot of human effort. Every time data come from new surveys, the only available training instances are the ones that have a cross-match with previously labelled objects, consequently generating insufficient training sets compared with the large amounts of unlabelled sources. In this work, we present an algorithm that performs unsupervised classification of variable stars, relying only on the similarity among light curves. We tackle the unsupervised classification problem by proposing an untraditional approach. Instead of trying to match classes of stars with clusters found by a clustering algorithm, we propose a query-based method where astronomers can find groups of variable stars ranked by similarity. We also develop a fast similarity function specific to light curves, based on a novel data structure that allows scaling the search over the entire data set of unlabelled objects. Experiments show that our unsupervised model achieves high accuracy in the classification of different types of variable stars and that the proposed algorithm scales up to massive amounts of light curves.
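The ranked-query idea can be illustrated in miniature: given a query light curve, return the catalogue entries ranked by a similarity measure. The paper's actual similarity function and indexing data structure are far more elaborate; the Euclidean distance and the toy catalogue below are assumptions for illustration only.

```python
# Minimal sketch of a query-based search over light curves: rank
# unlabelled curves by similarity to a query star. Names and data are
# invented; the published method uses a specialised similarity function
# and an index that scales to massive data sets.

def distance(lc_a, lc_b):
    """Euclidean distance between two aligned magnitude series."""
    return sum((a - b) ** 2 for a, b in zip(lc_a, lc_b)) ** 0.5

def query(target, catalogue, k=2):
    """Return the names of the k most similar light curves."""
    ranked = sorted(catalogue, key=lambda item: distance(target, item[1]))
    return [name for name, _ in ranked[:k]]

catalogue = [
    ("rr_lyrae_1", [1.0, 0.2, 1.0, 0.2]),
    ("cepheid_1",  [1.0, 0.8, 0.4, 0.8]),
    ("rr_lyrae_2", [0.9, 0.3, 0.9, 0.3]),
]
nearest = query([1.0, 0.25, 1.0, 0.25], catalogue)
```

An astronomer issuing such a query gets back a similarity-ranked group of stars rather than a hard cluster assignment, which is the untraditional aspect the abstract describes.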
Wang, Su-hua; Baillargeon, Renée
2009-01-01
As they observe or produce events, infants identify variables that help them predict outcomes in each category of events. How do infants identify a new variable? An explanation-based learning (EBL) account suggests three essential steps: (1) observing contrastive outcomes relevant to the variable; (2) discovering the conditions associated with these outcomes; and (3) generating an explanation for the condition-outcome regularity discovered. In Experiments 1–3, 9-month-old infants watched events designed to “teach” them the variable height in covering events. After watching these events, designed in accord with the EBL account, the infants detected a height violation in a covering event, three months earlier than they ordinarily would have. In Experiments 4–6, the “teaching” events were modified to remove one of the EBL steps, and the infants no longer detected the height violation. The present findings thus support the EBL account and help specify the processes by which infants acquire their physical knowledge. PMID:18177635
NASA Astrophysics Data System (ADS)
Zeyringer, Marianne; Price, James; Fais, Birgit; Li, Pei-Hao; Sharp, Ed
2018-05-01
The design of cost-effective power systems with high shares of variable renewable energy (VRE) technologies requires a modelling approach that simultaneously represents the whole energy system combined with the spatiotemporal and inter-annual variability of VRE. Here, we soft-link a long-term energy system model, which explores new energy system configurations from years to decades, with a high spatial and temporal resolution power system model that captures VRE variability from hours to years. Applying this methodology to Great Britain for 2050, we find that VRE-focused power system design is highly sensitive to the inter-annual variability of weather and that planning based on a single year can lead to operational inadequacy and failure to meet long-term decarbonization objectives. However, some insights do emerge that are relatively stable to weather-year. Reinforcement of the transmission system consistently leads to a decrease in system costs while electricity storage and flexible generation, needed to integrate VRE into the system, are generally deployed close to demand centres.
Wilderjans, Tom F; Ceulemans, Eva; Van Mechelen, Iven; Depril, Dirk
2011-03-01
In many areas of psychology, one is interested in disclosing the underlying structural mechanisms that generated an object by variable data set. Often, based on theoretical or empirical arguments, it may be expected that these underlying mechanisms imply that the objects are grouped into clusters that are allowed to overlap (i.e., an object may belong to more than one cluster). In such cases, analyzing the data with Mirkin's additive profile clustering model may be appropriate. In this model: (1) each object may belong to no, one or several clusters, (2) there is a specific variable profile associated with each cluster, and (3) the scores of the objects on the variables can be reconstructed by adding the cluster-specific variable profiles of the clusters the object in question belongs to. Until now, however, no software program has been publicly available to perform an additive profile clustering analysis. For this purpose, in this article, the ADPROCLUS program, steered by a graphical user interface, is presented. We further illustrate its use by means of the analysis of a patient by symptom data matrix.
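The reconstruction rule of the additive profile clustering model is compact enough to sketch directly: with a binary membership matrix A (objects × clusters, overlap allowed) and a profile matrix P (clusters × variables), an object's scores are the sum of the profiles of the clusters it belongs to. The data below are invented for illustration; ADPROCLUS itself estimates A and P from data.

```python
# Minimal sketch of Mirkin's additive profile clustering reconstruction:
# X_hat = A * P, with A binary and rows of P the cluster-specific
# variable profiles. Overlapping membership means profiles add up.

def reconstruct(A, P):
    """X_hat[i][j] = sum over clusters k of A[i][k] * P[k][j]."""
    return [[sum(a_ik * P[k][j] for k, a_ik in enumerate(row))
             for j in range(len(P[0]))]
            for row in A]

# 3 objects (e.g. patients), 2 overlapping clusters, 2 variables (symptoms):
A = [[1, 0],   # object 0: cluster 1 only
     [1, 1],   # object 1: both clusters (overlap)
     [0, 0]]   # object 2: no cluster
P = [[2.0, 0.0],   # profile of cluster 1
     [1.0, 3.0]]   # profile of cluster 2
X_hat = reconstruct(A, P)
```

Object 1 illustrates the model's defining feature: belonging to both clusters, its reconstructed scores are the element-wise sum of the two profiles.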
NASA Astrophysics Data System (ADS)
Kai, Takaaki; Tanaka, Yuji; Kaneda, Hirotoshi; Kobayashi, Daichi; Tanaka, Akio
Recently, doubly fed induction generators (DFIGs) and synchronous generators have mostly been applied for wind power generation, with variable-speed control and power-factor control executed for high efficiency of wind energy capture and high quality of power system voltage. In variable-speed control, the wind speed or the generator speed is used for maximum power point tracking. However, the behaviour of wind generation power fluctuations due to wind speed variation has not yet been investigated for these controls. The authors discuss power smoothing by these controls for a DFIG interconnected to a 6.6 kV distribution line. The performance is verified using the power system simulation software PSCAD/EMTDC with actual wind speed data, and is examined using an approximate equation for wind generation power fluctuation under wind speed variation.
NASA Astrophysics Data System (ADS)
Perez, Marc J. R.
With extraordinary recent growth of the solar photovoltaic industry, it is paramount to address the biggest barrier to its high-penetration across global electrical grids: the inherent variability of the solar resource. This resource variability arises from largely unpredictable meteorological phenomena and from the predictable rotation of the earth around the sun and about its own axis. To achieve very high photovoltaic penetration, the imbalance between the variable supply of sunlight and demand must be alleviated. The research detailed herein consists of the development of a computational model which seeks to optimize the combination of 3 supply-side solutions to solar variability that minimizes the aggregate cost of electricity generated therefrom: storage (where excess solar generation is stored when it exceeds demand for utilization when it does not meet demand), interconnection (where solar generation is spread across a large geographic area and electrically interconnected to smooth overall regional output) and smart curtailment (where solar capacity is oversized and excess generation is curtailed at key times to minimize the need for storage). This model leverages a database created in the context of this doctoral work of satellite-derived photovoltaic output spanning 10 years at a daily interval for 64,000 unique geographic points across the globe. Underpinning the model's design and results, the database was used to further the understanding of solar resource variability at timescales greater than 1 day. It is shown that, as at shorter timescales, cloud/weather-induced solar variability decreases with geographic extent and that the geographic extent at which variability is mitigated increases with timescale and is modulated by the prevailing speed of clouds/weather systems.
Unpredictable solar variability up to the timescale of 30 days is shown to be mitigated across a geographic extent of only 1500km if that geographic extent is oriented in a north/south bearing. Using technical and economic data reflecting today's real costs for solar generation technology, storage and electric transmission in combination with this model, we determined the minimum cost combination of these solutions to transform the variable output from solar plants into 3 distinct output profiles: A constant output equivalent to a baseload power plant, a well-defined seasonally-variable output with no weather-induced variability and a variable output but one that is 100% predictable on a multi-day ahead basis. In order to do this, over 14,000 model runs were performed by varying the desired output profile, the amount of energy curtailment, the penetration of solar energy and the geographic region across the continental United States. Despite the cost of supplementary electric transmission, geographic interconnection has the potential to reduce the levelized cost of electricity when meeting any of the studied output profiles by over 65% compared to when only storage is used. Energy curtailment, despite the cost of underutilizing solar energy capacity, has the potential to reduce the total cost of electricity when meeting any of the studied output profiles by over 75% compared to when only storage is used. The three variability mitigation strategies are thankfully not mutually exclusive. When combined at their ideal levels, each of the regions studied saw a reduction in cost of electricity of over 80% compared to when only energy storage is used to meet a specified output profile. 
When including current costs for solar generation, transmission and energy storage, an optimum configuration can conservatively provide guaranteed baseload power generation with solar across the entire continental United States (equivalent to a nuclear power plant with no down time) for less than $0.19 per kilowatt-hour. If solar is preferentially clustered in the southwest instead of evenly spread throughout the United States, and we adopt future expected costs for solar generation of $1 per watt, optimal model results show that meeting a 100% predictable output target with solar will cost no more than $0.08 per kilowatt-hour.
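The trade-off the model optimizes can be sketched in miniature: sweep over oversizing (curtailment) factors and storage sizes, keep only combinations that always meet a flat demand target, and pick the cheapest. The profile, cost coefficients and grid of candidate values below are invented for illustration; the dissertation's model optimizes over real satellite-derived output, transmission and far richer cost data.

```python
# Toy sketch of the storage-vs-curtailment trade-off: find the cheapest
# (oversize, storage) pair that lets a solar profile meet a constant
# demand in every period. All numbers are illustrative per-unit values.

def feasible(solar, demand, oversize, storage_cap):
    """Simulate one day: charge with surplus, discharge to cover deficit."""
    soc = storage_cap                         # state of charge, start full
    for s in solar:
        soc += s * oversize - demand          # charge (+) or discharge (-)
        if soc < 0:
            return False                      # demand unmet this period
        soc = min(soc, storage_cap)           # spill (curtail) the excess
    return True

def cheapest(solar, demand, pv_cost=1.0, store_cost=0.5):
    """Grid-search the cheapest feasible (cost, oversize, storage) triple."""
    best = None
    for oversize in (1.0, 1.5, 2.0):
        for cap in (0.0, 1.0, 2.0, 4.0):
            if feasible(solar, demand, oversize, cap):
                cost = pv_cost * oversize + store_cost * cap
                if best is None or cost < best[0]:
                    best = (cost, oversize, cap)
    return best

solar = [0.0, 1.0, 2.0, 1.0, 0.0, 0.0]   # toy daily generation profile
best = cheapest(solar, demand=0.5)
```

Even this toy version shows the abstract's central finding in kind: adding a little storage (or oversizing) unlocks feasibility, and the optimum balances the two rather than relying on storage alone.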
Infant breathing rate counter based on variable resistor for pneumonia
NASA Astrophysics Data System (ADS)
Sakti, Novi Angga; Hardiyanto, Ardy Dwi; La Febry Andira R., C.; Camelya, Kesa; Widiyanti, Prihartini
2016-03-01
Pneumonia is one of the leading causes of death in newborn babies in Indonesia. According to the WHO (2002), the breathing rate is a very important index for identifying symptoms of pneumonia. In Community Health Centers, nurses count breaths with a stopwatch for exactly one minute. Miscounting occurs because of the prolonged concentration required and the need to focus on two objects at once. Such counting errors can cause a baby who should be admitted to the hospital to be attended only at home. Therefore, an accurate breathing rate counter at the Community Health Center level is necessary. In this work, the resistance change of a variable resistor is used to count the breathing rate. A resistance change in a voltage divider produces a voltage change; if the variable resistance moves periodically, the voltage changes periodically too. The voltage change is counted by software in the microcontroller. Each millimetre of shift in the variable resistor produces an average voltage change of 0.96 V. The software counts the number of waves generated by the shifting resistor.
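The counting step the firmware performs can be sketched simply: the divider voltage rises and falls once per breath, so counting rising crossings of a threshold counts breaths. The sampled signal and the threshold below are invented for illustration; the 0.96 figure per millimetre of shift is the only value taken from the abstract.

```python
# Minimal sketch of breath counting from a periodic divider voltage:
# one breath is registered per rising crossing of a threshold.

def count_breaths(samples, threshold):
    """Count rising edges through `threshold` (one per breathing cycle)."""
    breaths = 0
    for prev, cur in zip(samples, samples[1:]):
        if prev < threshold <= cur:
            breaths += 1
    return breaths

# Simulated divider voltage for 3 breathing cycles (~1 mm chest shift,
# i.e. roughly a 0.96 swing per the abstract):
signal = [0.0, 0.5, 0.96, 0.5, 0.0, 0.5, 0.96, 0.5,
          0.0, 0.5, 0.96, 0.5, 0.0]
rate = count_breaths(signal, threshold=0.8)
```

Over a fixed one-minute window this count is exactly the breathing rate the nurse would otherwise tally by hand.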
Early warning of changing drinking water quality by trend analysis.
Tomperi, Jani; Juuso, Esko; Leiviskä, Kauko
2016-06-01
Monitoring and control of water treatment plants play an essential role in ensuring high quality drinking water and avoiding health-related problems or economic losses. The most common quality variables, which can be used also for assessing the efficiency of the water treatment process, are turbidity and residual levels of coagulation and disinfection chemicals. In the present study, the trend indices are developed from scaled measurements to detect warning signs of changes in the quality variables of drinking water and some operating condition variables that strongly affect water quality. The scaling is based on monotonically increasing nonlinear functions, which are generated with generalized norms and moments. Triangular episodes are classified with the trend index and its derivative. Deviation indices are used to assess the severity of situations. The study shows the potential of the described trend analysis as a predictive monitoring tool, as it provides an advantage over the traditional manual inspection of variables by detecting changes in water quality and giving early warnings.
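One way to picture a trend index is as the difference between a short-window and a long-window mean of the scaled measurements, with a deviation limit triggering an early warning. This is a deliberate simplification: the published scheme builds its scaling from generalized norms and moments and classifies triangular episodes from the index and its derivative, none of which is reproduced here.

```python
# Simplified sketch of a trend index on scaled measurements: compare a
# short-window mean with a long-window mean; a large positive value
# signals a rising trend in the (scaled) quality variable.

def trend_index(scaled, short=3, long=6):
    """Short-window mean minus long-window mean of the latest samples."""
    s = sum(scaled[-short:]) / short
    l = sum(scaled[-long:]) / long
    return s - l

# Turbidity scaled to roughly [-2, 2]; a recent upward drift:
turbidity = [-1.0, -1.0, -0.8, -0.2, 0.4, 1.0]
idx = trend_index(turbidity)
warning = idx > 0.3        # alert limit is a tuning choice, assumed here
```

The point of such an index is exactly what the abstract claims for the full method: a drift shows up in the index before a human inspecting raw values would flag it.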
NASA Astrophysics Data System (ADS)
Hanan, Lu; Qiushi, Li; Shaobin, Li
2016-12-01
This paper presents an integrated optimization design method in which uniform design, response surface methodology and a genetic algorithm are used in combination. In detail, uniform design is used to select the experimental sampling points in the experimental domain, and the system performance at these points is evaluated by means of computational fluid dynamics to construct a database. After that, response surface methodology is employed to generate a surrogate mathematical model relating the optimization objective to the design variables. Subsequently, a genetic algorithm is applied to the surrogate model to acquire the optimal solution subject to the constraints. The method has been applied to the optimization design of an axisymmetric diverging duct with three design variables, one qualitative and two quantitative. The method performs well in improving the duct's aerodynamic performance and can also be applied to wider fields of mechanical design; it is a useful tool for engineering designers, reducing design time and computational cost.
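The surrogate step is the heart of the workflow and can be sketched in one dimension: evaluate the expensive objective at a few design points, fit a cheap response surface, then optimize the surrogate instead of the objective. The stand-in objective and the exact-interpolation fit below are assumptions for illustration; the paper uses uniform design for sampling, CFD for evaluation, and a genetic algorithm for the search.

```python
# Minimal surrogate-based optimization sketch: fit a quadratic response
# surface through three sampled points of an "expensive" objective and
# take the surrogate's analytic minimum instead of searching directly.

def expensive_cfd(x):
    """Stand-in for a costly CFD evaluation of one design variable."""
    return (x - 1.5) ** 2 + 2.0

def fit_quadratic(pts):
    """Exact quadratic a*x^2 + b*x + c through three (x, y) samples."""
    (x0, y0), (x1, y1), (x2, y2) = pts
    a = ((y2 - y0) / (x2 - x0) - (y1 - y0) / (x1 - x0)) / (x2 - x1)
    b = (y1 - y0) / (x1 - x0) - a * (x0 + x1)
    c = y0 - a * x0 ** 2 - b * x0
    return a, b, c

samples = [(x, expensive_cfd(x)) for x in (0.0, 1.0, 2.0)]  # design points
a, b, c = fit_quadratic(samples)
x_opt = -b / (2 * a)    # analytic minimum of the quadratic surrogate
```

In the full method a genetic algorithm replaces the analytic minimum so that mixed qualitative/quantitative variables and constraints can be handled, but the economy is the same: the optimizer only ever queries the cheap surrogate.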
Spits, Christine; Wallace, Luke; Reinke, Karin
2017-01-01
Visual assessment, following guides such as the Overall Fuel Hazard Assessment Guide (OFHAG), is a common approach for assessing the structure and hazard of varying bushfire fuel layers. Visual assessments can be vulnerable to imprecision due to subjectivity between assessors, while emerging techniques such as image-based point clouds can offer land managers potentially more repeatable descriptions of fuel structure. This study compared the variability of estimates of surface and near-surface fuel attributes generated by eight assessment teams using the OFHAG and Fuels3D, a smartphone method utilising image-based point clouds, within three assessment plots in an Australian lowland forest. Surface fuel hazard scores derived from underpinning attributes were also assessed. Overall, this study found considerable variability between teams on most visually assessed variables, resulting in inconsistent hazard scores. Variability was observed within point cloud estimates but was, however, on average two to eight times less than that seen in visual estimates, indicating greater consistency and repeatability of this method. It is proposed that while variability within the Fuels3D method may be overcome through improved methods and equipment, inconsistencies in the OFHAG are likely due to the inherent subjectivity between assessors, which may be more difficult to overcome. This study demonstrates the capability of the Fuels3D method to efficiently and consistently collect data on fuel hazard and structure, and, as such, this method shows potential for use in fire management practices where accurate and reliable data is essential. PMID:28425957
Kushawaha, Akhilesh Kumar; Rabindran, Ramalingam; Dasgupta, Indranil
2018-03-01
Cassava mosaic disease (CMD) is a widespread disease of cassava in south Asia and the African continent. In India, CMD is known to be caused by two single-stranded DNA viruses (geminiviruses), Indian cassava mosaic virus (ICMV) and Sri Lankan cassava mosaic virus (SLCMV). Previously, the diversity of ICMV and SLCMV in India has been studied using PCR, a sequence-dependent method. To study the variability of the above viruses in more depth and to detect any novel geminiviruses associated with CMD, sequence-independent amplification using rolling circle amplification (RCA)-based methods was used. CMD-affected cassava plants were sampled across eighty locations in nine districts of the southern Indian state of Tamil Nadu. Twelve complete coat protein gene sequences of the resident geminiviruses, each comprising 256 amino acid residues, were generated from the above samples, which indicated changes at only six positions. RCA followed by RFLP of the 80 samples indicated that most samples (47) contained only SLCMV, followed by 8 that were infected jointly with ICMV and SLCMV. In 11 samples, the pattern did not match the expected patterns of either of the two viruses, and hence these were variants. Sequence analysis of an average of 700 nucleotides from 31 RCA-generated fragments of the variants indicated identities of 97-99% with the sequence of a previously reported infectious clone of SLCMV. The evidence suggests low levels of genetic variability in the begomoviruses infecting cassava, mainly in the form of scattered single nucleotide changes.
Combining climatic and soil properties better predicts covers of Brazilian biomes.
Arruda, Daniel M; Fernandes-Filho, Elpídio I; Solar, Ricardo R C; Schaefer, Carlos E G R
2017-04-01
Several techniques have been used to model the area covered by biomes or species. However, most models allow little freedom of choice of response variables and are conditioned to the use of climate predictors. This major restriction of the models has generated distributions of low accuracy or inconsistent with the actual cover. Our objective was to characterize the environmental space of the most representative biomes of Brazil and predict their cover, using climate and soil-related predictors. As sample units, we used 500 cells of 100 km² for ten biomes, derived from the official vegetation map of Brazil (IBGE 2004). With a total of 38 (climatic and soil-related) predictors, an a priori model was run with the random forest classifier. Each biome was calibrated with 75% of the samples. The final model was based on four climate and six soil-related predictors, the most important variables for the a priori model, without collinearity. The model reached a kappa value of 0.82, generating a highly consistent prediction with the actual cover of the country. We showed here that the richness of biomes should not be underestimated, and that in spite of the complex relationship, highly accurate modeling based on climatic and soil-related predictors is possible. These predictors are complementary, for covering different parts of the multidimensional niche. Thus, a single biome can cover a wide range of climatic space, versus a narrow range of soil types, so that its prediction is best adjusted by soil-related variables, or vice versa.
Combining climatic and soil properties better predicts covers of Brazilian biomes
NASA Astrophysics Data System (ADS)
Arruda, Daniel M.; Fernandes-Filho, Elpídio I.; Solar, Ricardo R. C.; Schaefer, Carlos E. G. R.
2017-04-01
Several techniques have been used to model the area covered by biomes or species. However, most models allow little freedom of choice of response variables and are conditioned to the use of climate predictors. This major restriction of the models has generated distributions of low accuracy or inconsistent with the actual cover. Our objective was to characterize the environmental space of the most representative biomes of Brazil and predict their cover, using climate and soil-related predictors. As sample units, we used 500 cells of 100 km² for ten biomes, derived from the official vegetation map of Brazil (IBGE 2004). With a total of 38 (climatic and soil-related) predictors, an a priori model was run with the random forest classifier. Each biome was calibrated with 75% of the samples. The final model was based on four climate and six soil-related predictors, the most important variables for the a priori model, without collinearity. The model reached a kappa value of 0.82, generating a highly consistent prediction with the actual cover of the country. We showed here that the richness of biomes should not be underestimated, and that in spite of the complex relationship, highly accurate modeling based on climatic and soil-related predictors is possible. These predictors are complementary, for covering different parts of the multidimensional niche. Thus, a single biome can cover a wide range of climatic space, versus a narrow range of soil types, so that its prediction is best adjusted by soil-related variables, or vice versa.
The CARIBU EBIS control and synchronization system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dickerson, Clayton, E-mail: cdickerson@anl.gov; Peters, Christopher, E-mail: cdickerson@anl.gov
2015-01-09
The Californium Rare Isotope Breeder Upgrade (CARIBU) Electron Beam Ion Source (EBIS) charge breeder has been built and tested. The bases of the CARIBU EBIS electrical system are four voltage platforms on which both DC and pulsed high voltage outputs are controlled. The high voltage output pulses are created with either a combination of a function generator and a high voltage amplifier, or two high voltage DC power supplies and a high voltage solid-state switch. Proper synchronization of the pulsed voltages, fundamental to optimizing the charge breeding performance, is achieved with triggering from a digital delay pulse generator. The control system is based on National Instruments real-time controllers and LabVIEW software implementing Functional Global Variables (FGVs) to store and access instrument parameters. Fiber optic converters enable network communication and triggering across the platforms.
Extended Importance Sampling for Reliability Analysis under Evidence Theory
NASA Astrophysics Data System (ADS)
Yuan, X. K.; Chen, B.; Zhang, B. Q.
2018-05-01
In early engineering practice, the lack of data and information makes uncertainty difficult to deal with. However, evidence theory has been proposed to handle uncertainty with limited information, as an alternative to traditional probability theory. In this contribution, a simulation-based approach called ‘extended importance sampling’ is proposed based on evidence theory to handle problems with epistemic uncertainty. The proposed approach stems from traditional importance sampling for reliability analysis under probability theory, and is developed to handle problems with epistemic uncertainty. It first introduces a nominal instrumental probability density function (PDF) for every epistemic uncertainty variable, so that an ‘equivalent’ reliability problem under probability theory is obtained. Then samples of these variables are generated by importance sampling. Based on these samples, the plausibility and belief (upper and lower bounds of the probability) can be estimated, more efficiently than with direct Monte Carlo simulation. Numerical and engineering examples are given to illustrate the efficiency and feasibility of the proposed approach.
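The probability-theory core that the method extends can be sketched directly: to estimate a small failure probability P(g(X) < 0), draw samples from an instrumental density centred near the failure region and reweight each failing sample by the ratio of true to instrumental densities. The limit state, densities and sample size below are assumptions for illustration; the paper's extra step, propagating interval-valued (epistemic) inputs to plausibility/belief bounds, is not reproduced.

```python
import math
import random

# Minimal importance sampling sketch for a rare failure probability.
# True input: X ~ N(0, 1); failure when g(x) = 3 - x < 0, i.e. x > 3.
# Instrumental density: N(3, 1), centred on the failure boundary.

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def importance_sample(g, n=20000, mu_is=3.0):
    """Estimate P(g(X) < 0) for X ~ N(0,1) via an instrumental N(mu_is, 1)."""
    rng = random.Random(42)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(mu_is, 1.0)
        if g(x) < 0:                                    # failure indicator
            total += normal_pdf(x, 0.0, 1.0) / normal_pdf(x, mu_is, 1.0)
    return total / n

p_fail = importance_sample(lambda x: 3.0 - x)   # true value is about 0.00135
```

Direct Monte Carlo would need millions of N(0, 1) draws to see enough failures; by sampling where failures happen and reweighting, a few thousand draws suffice, which is the efficiency gain the abstract claims.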
Spatial drought reconstructions for central High Asia based on tree rings
NASA Astrophysics Data System (ADS)
Fang, Keyan; Davi, Nicole; Gou, Xiaohua; Chen, Fahu; Cook, Edward; Li, Jinbao; D'Arrigo, Rosanne
2010-11-01
Spatial reconstructions of drought for central High Asia based on a tree-ring network are presented. Drought patterns for central High Asia are classified into western and eastern modes of variability. Tree-ring based reconstructions of the Palmer drought severity index (PDSI) are presented for both the western central High Asia drought mode (1587-2005), and for the eastern central High Asia mode (1660-2005). Both reconstructions, generated using a principal component regression method, show an increased variability in recent decades. The wettest epoch for both reconstructions occurred from the 1940s to the 1950s. The most extreme reconstructed drought for western central High Asia was from the 1640s to the 1650s, coinciding with the collapse of the Chinese Ming Dynasty. The eastern central High Asia reconstruction has shown a distinct tendency towards drier conditions since the 1980s. Our spatial reconstructions agree well with previous reconstructions that fall within each mode, while there is no significant correlation between the two spatial reconstructions.
NASA Astrophysics Data System (ADS)
Voitsekhovskii, A. V.; Nesmelov, S. N.; Dzyadukh, S. M.; Varavin, V. S.; Dvoretskii, S. A.; Mikhailov, N. N.; Yakushev, M. V.; Sidorov, G. Yu.
2017-12-01
Metal-insulator-semiconductor (MIS) structures based on n(p)-Hg1-xCdxTe (x = 0.22-0.40) with near-surface variable-gap layers were grown by the molecular-beam epitaxy (MBE) technique on Si (0 1 3) substrates. Electrical properties of the MIS structures were investigated experimentally at various temperatures (9-77 K) and directions of voltage sweep. The "narrow swing" technique was used to determine the spectra of fast surface states while excluding hysteresis effects. It is established that the density of fast surface states at the MCT/Al2O3 interface at its minimum does not exceed 3 × 10¹⁰ eV⁻¹ cm⁻². For MIS structures based on n-MCT/Si(0 1 3), the differential resistance of the space-charge region in the strong inversion mode in the temperature range 50-90 K is limited by Shockley-Read-Hall generation in the space-charge region.
The Importance of Distance to Resources in the Spatial Modelling of Bat Foraging Habitat
Rainho, Ana; Palmeirim, Jorge M.
2011-01-01
Many bats are threatened by habitat loss, but opportunities to manage their habitats are now increasing. Success of management depends greatly on the capacity to determine where and how interventions should take place, so models predicting how animals use landscapes are important to plan them. Bats are quite distinctive in the way they use space for foraging because (i) most are colonial central-place foragers and (ii) exploit scattered and distant resources, although this increases flying costs. To evaluate how important distances to resources are in modelling foraging bat habitat suitability, we radio-tracked two cave-dwelling species of conservation concern (Rhinolophus mehelyi and Miniopterus schreibersii) in a Mediterranean landscape. Habitat and distance variables were evaluated using logistic regression modelling. Distance variables greatly increased the performance of models, and distance to roost and to drinking water could alone explain 86 and 73% of the use of space by M. schreibersii and R. mehelyi, respectively. Land-cover and soil productivity also provided a significant contribution to the final models. Habitat suitability maps generated by models with and without distance variables differed substantially, confirming the shortcomings of maps generated without distance variables. Indeed, areas shown as highly suitable in maps generated without distance variables proved poorly suitable when distance variables were also considered. We concluded that distances to resources are determinant in the way bats forage across the landscape, and that using distance variables substantially improves the accuracy of suitability maps generated with spatially explicit models. Consequently, modelling with these variables is important to guide habitat management in bats and similarly mobile animals, particularly if they are central-place foragers or depend on spatially scarce resources. PMID:21547076
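How distance variables enter such a model is easy to sketch: a logistic regression maps distance to roost and distance to drinking water onto a probability of use, so suitability decays with both distances. The coefficients below are invented for illustration only, not the paper's fitted values.

```python
import math

# Illustrative logistic habitat-suitability model with distance
# predictors. Coefficients are hypothetical: negative signs encode the
# paper's finding that use declines with distance to roost and water.

def suitability(dist_roost_km, dist_water_km,
                b0=2.0, b_roost=-0.4, b_water=-0.6):
    """Probability of foraging use via the logistic link."""
    z = b0 + b_roost * dist_roost_km + b_water * dist_water_km
    return 1.0 / (1.0 + math.exp(-z))

near = suitability(1.0, 1.0)     # close to roost and drinking water
far = suitability(10.0, 5.0)     # distant from both resources
```

Mapping such a function over a landscape grid is exactly how the suitability maps in the study are produced; omitting the distance terms (setting their coefficients to zero) yields the flatter, misleading maps the authors warn against.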
Van Gordon, William; Shonin, Edo; Dunn, Thomas J; Garcia-Campayo, Javier; Griffiths, Mark D
2017-02-01
The purpose of this study was to conduct the first randomized controlled trial (RCT) to evaluate the effectiveness of a second-generation mindfulness-based intervention (SG-MBI) for treating fibromyalgia syndrome (FMS). Compared to first-generation mindfulness-based interventions, SG-MBIs are more acknowledging of the spiritual aspect of mindfulness. An RCT employing intent-to-treat analysis. Adults with FMS received an 8-week SG-MBI known as meditation awareness training (MAT; n = 74) or an active control intervention known as cognitive behaviour theory for groups (n = 74). Assessments were performed at pre-, post-, and 6-month follow-up phases. Meditation awareness training participants demonstrated significant and sustained improvements over control group participants in FMS symptomatology, pain perception, sleep quality, psychological distress, non-attachment (to self, symptoms, and environment), and civic engagement. A mediation analysis found that (1) civic engagement partially mediated treatment effects for all outcome variables, (2) non-attachment partially mediated treatment effects for psychological distress and sleep quality, and (3) non-attachment almost fully mediated treatment effects for FMS symptomatology and pain perception. Average daily time spent in meditation was found to be a significant predictor of changes in all outcome variables. Meditation awareness training may be a suitable treatment for adults with FMS and appears to ameliorate FMS symptomatology and pain perception by reducing attachment to self. Statement of contribution What is already known on this subject? Designing interventions to treat fibromyalgia syndrome (FMS) continues to be a challenge. There is growing interest in the applications of mindfulness-based interventions for treating FMS. Second-generation mindfulness-based interventions (SG-MBIs) are a key new direction in mindfulness research. What does this study add?
Meditation awareness training - an SG-MBI - resulted in significant reductions in FMS symptomatology. SG-MBIs recognize the spiritual aspect of mindfulness and may have a role in the treatment of FMS. © 2016 The British Psychological Society.
Heuristic Programming Project: October 1979-September 1982
1985-03-27
The research on ROGET focused on acquiring the initial conceptual structure needed to design the knowledge base in the first place. ROGET's special...the user in constructing and specifying the details of the components. A component is a collection of functions and variables that support conceptual ...other framework, called the Backchain framework, is for building programs that use backward-chained production rules as the primary mechanism of generating
The Impact of Army and Family Factors on Individual Readiness
1993-08-01
families. While, in general, individual characteristics were more important in the determination of soldier readiness than family characteristics...affected individual readiness, in general, family-related variables had higher impact on soldier intention to remain in the Army after their current...installations. This survey was designed to provide information related to Army policy/program questions based on prior and current research and to generate new
Probabilistic Methods for Image Generation and Encoding.
1993-10-15
video and graphics lab at Georgia Tech, linking together Silicon Graphics workstations, a laser video recorder, a Betacam video recorder, scanner...computer laboratory at Georgia Tech, based on two Silicon Graphics Personal Iris workstations, a SONY laser video recorder, a SONY Betacam SP video...laser disk in component RGB form, with variable speed playback. From the laser recorder the images can be dubbed to the Betacam or the VHS recorder in
2017-07-01
forecasts and observations on a common grid, which enables the application of a number of different spatial verification methods that reveal various...forecasts of continuous meteorological variables using categorical and object-based methods. White Sands Missile Range (NM): Army Research Laboratory (US... Research version of the Weather Research and Forecasting Model adapted for generating short-range nowcasts and gridded observations produced by the
Operations Optimization of Hybrid Energy Systems under Variable Markets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jun; Garcia, Humberto E.
Hybrid energy systems (HES) have been proposed to be an important element to enable increasing penetration of clean energy. This paper investigates the operations flexibility of HES, and develops a methodology for operations optimization to maximize its economic value based on predicted renewable generation and market information. The proposed operations optimizer allows systematic control of energy conversion for maximal economic value, and is illustrated by numerical results.
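As a toy illustration of the trade-off such an operations optimizer resolves, the sketch below dispatches a hybrid plant's firm capacity between grid sales and an energy-consuming co-product, given forecast renewable output and hourly prices. All prices, capacities, and the co-product value are invented; this is not the paper's formulation.

```python
import numpy as np

hours = 24
rng = np.random.default_rng(1)
price = 30 + 20 * np.sin(np.linspace(0, 2 * np.pi, hours))  # $/MWh, hypothetical
renew = np.clip(rng.normal(40, 15, hours), 0, None)          # MW renewable forecast
firm_cap = 100.0          # MW of dispatchable capacity
co_value = 35.0           # $/MWh-equivalent of diverting energy to a co-product

# Per-hour rule (the LP optimum when hours are independent): sell firm output
# to the grid when the price beats the co-product value, otherwise divert it;
# renewable output always goes to the grid.
to_grid = np.where(price > co_value, firm_cap, 0.0)
to_coprod = firm_cap - to_grid
revenue = (price * (renew + to_grid) + co_value * to_coprod).sum()
print(f"optimized daily revenue: ${revenue:,.0f}")
```

With coupling constraints (ramp rates, storage), this per-hour rule would become a genuine linear or mixed-integer program, which is closer to what the paper's optimizer addresses.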
ERIC Educational Resources Information Center
Moore, John S., III.
2013-01-01
The purpose of this study was to investigate how self-regulated learning and ethnic/racial variables predict minority first-generation college student persistence and related constructs. Participants were drawn nationally from the U.S. Department of Education funded TRiO Student Support Services Programs. Additional participants from the Talent…
Future Visions of the Brahmaputra - Establishing Hydrologic Baseline and Water Resources Context
NASA Astrophysics Data System (ADS)
Ray, P. A.; Yang, Y. E.; Wi, S.; Brown, C. M.
2013-12-01
The Brahmaputra River Basin (China-India-Bhutan-Bangladesh) is on the verge of a transition from a largely free flowing and highly variable river to a basin of rapid investment and infrastructure development. This work demonstrates a knowledge platform for the basin that compiles available data, and develops hydrologic and water resources system models of the basin. A Variable Infiltration Capacity (VIC) model of the Brahmaputra basin supplies hydrologic information of major tributaries to a water resources system model, which routes runoff generated via the VIC model through water infrastructure, and accounts for water withdrawals for agriculture, hydropower generation, municipal demand, return flows, and other human activities. The system model also simulates agricultural production and the economic value of water in its various uses, including municipal, agricultural, and hydropower. Furthermore, the modeling framework incorporates plausible climate change scenarios based on the latest projections of changes to contributing glaciers (upstream), as well as changes to monsoon behavior (downstream). Water resources projects proposed in the Brahmaputra basin are evaluated based on their distribution of benefits and costs in the absence of well-defined water entitlements, and relative to a complex regional water-energy-food nexus. Results of this project will provide a basis for water-sharing negotiations among the four countries and inform trans-national water-energy policy making.
NASA Technical Reports Server (NTRS)
Carlson, C. R.
1981-01-01
The user documentation of the SYSGEN model and its links with other simulations is described. SYSGEN is a production-costing and reliability model of electric utility systems. Hydroelectric, storage, and time-dependent generating units are modeled in addition to conventional generating plants. Input variables, modeling options, output variables, and report formats are explained. SYSGEN can also be run interactively by using a program called FEPS (Front End Program for SYSGEN). A format for SYSGEN input variables which is designed for use with FEPS is presented.
The Holocene floods and their affinity to climatic variability in the western Himalaya, India
NASA Astrophysics Data System (ADS)
Sharma, Shubhra; Shukla, A. D.; Bartarya, S. K.; Marh, B. S.; Juyal, Navin
2017-08-01
The present study in the middle Satluj valley explores the sedimentary records of past floods with the objective of understanding the climatic processes responsible for their genesis. Based on lithostratigraphy, sedimentology, and grain size variability, 25 flood events are identified. The geochemical data indicate that the flood sediments were mostly generated and transported from the higher Himalayan crystalline and the trans-Himalaya. Our study suggests that the floods were generated by Landslide Lake Outburst Floods (LLOFs) during extreme precipitation events. However, the existing database does not allow us to negate the contribution from Glacial Lake Outburst Floods (GLOFs). Field stratigraphy supported by optical chronology indicates four major flood phases that are dated to 13.4-10.4, 8.3-3.6, 2.2-1.4, and < 1.4 ka (kilo-annum). These phases correspond to cooler and less wet conditions and broadly correlate with the phases of negative Arctic Oscillation (-AO) and negative North Atlantic Oscillation (-NAO). This implies a coupling between the moisture-laden monsoon circulation and southward-penetrating mid-latitude westerly troughs during extreme precipitation events and the consequent LLOFs. Additionally, a broad synchronicity in Holocene floods between the western Himalaya and across the mid-latitudinal region (30°N-40°N) suggests a synoptic scale Arctic and Atlantic climate variability.
The continuum of hydroclimate variability in western North America during the last millennium
Ault, Toby R.; Cole, Julia E.; Overpeck, Jonathan T.; Pederson, Gregory T.; St. George, Scott; Otto-Bliesner, Bette; Woodhouse, Connie A.; Deser, Clara
2013-01-01
The distribution of climatic variance across the frequency spectrum has substantial importance for anticipating how climate will evolve in the future. Here we estimate power spectra and power laws (β) from instrumental, proxy, and climate model data to characterize the hydroclimate continuum in western North America (WNA). We test the significance of our estimates of spectral densities and β against the null hypothesis that they reflect solely the effects of local (non-climate) sources of autocorrelation at the monthly timescale. Although tree-ring based hydroclimate reconstructions are generally consistent with this null hypothesis, values of β calculated from long moisture-sensitive chronologies (as opposed to reconstructions), and other types of hydroclimate proxies, exceed null expectations. We therefore argue that there is more low-frequency variability in hydroclimate than monthly autocorrelation alone can generate. Coupled model results archived as part of the Climate Model Intercomparison Project 5 (CMIP5) are consistent with the null hypothesis and appear unable to generate variance in hydroclimate commensurate with paleoclimate records. Consequently, at decadal to multidecadal timescales there is more variability in instrumental and proxy data than in the models, suggesting that the risk of prolonged droughts under climate change may be underestimated by CMIP5 simulations of the future.
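The core estimation step, fitting a power law S(f) ∝ f^(−β) to a spectrum by least squares in log-log space, can be sketched on a synthetic red-noise series. The AR(1) process below stands in for the paper's monthly-autocorrelation null model; it is not one of the actual proxy or instrumental records.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4096
x = np.zeros(n)
for t in range(1, n):                # AR(1) "red noise" null series
    x[t] = 0.9 * x[t - 1] + rng.normal()

freqs = np.fft.rfftfreq(n, d=1.0)[1:]                    # drop the f = 0 bin
power = np.abs(np.fft.rfft(x - x.mean())[1:]) ** 2 / n   # raw periodogram
slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
beta = -slope                                            # S(f) ~ f**(-beta)
print(f"estimated beta = {beta:.2f}")
```

An AR(1) spectrum is flat at low frequencies and steepens toward β ≈ 2 at high frequencies, so a single fitted β summarizes (and somewhat blurs) that curvature, which is why comparing proxy-derived β against this null is informative.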
Determinants of choice for pigeons and humans on concurrent-chains schedules of reinforcement.
Belke, T W; Pierce, W D; Powell, R A
1989-09-01
Concurrent-chains schedules of reinforcement were arranged for humans and pigeons. Responses of humans were reinforced with tokens exchangeable for money, and key pecks of 4 birds were reinforced with food. Variable-interval 30-s and 40-s schedules operated in the terminal links of the chains. Condition 1 exposed subjects to variable-interval 90-s and variable-interval 30-s initial links, respectively. Conditions 2 and 3 arranged equal initial-link schedules of 40 s or 120 s. Experimental conditions tested the descriptive adequacy of five equations: reinforcement density, delay reduction, modified delay reduction, matching, and maximization. Results based on choice proportions and switch rates during the initial links showed that pigeons behaved in accord with delay-reduction models, whereas humans maximized overall rate of reinforcement. As discussed by Logue and associates in self-control research, different types of reinforcement may affect sensitivity to delay differentially. Pigeons' responses were reinforced with food, a reinforcer that is consumable upon presentation. Humans' responses were reinforced with money, a reinforcer exchanged for consumable reinforcers after it was earned. Reinforcers that are immediately consumed may generate high sensitivity to delay and behavior described as delay reduction. Reinforcers with longer times to consumption may generate low sensitivity to delay and behavior that maximizes overall payoff.
Summers, Richard L; Pipke, Matt; Wegerich, Stephan; Conkright, Gary; Isom, Kristen C
2014-01-01
Background. Monitoring cardiovascular hemodynamics in the modern clinical setting is a major challenge. Increasing amounts of physiologic data must be analyzed and interpreted in the context of the individual patient's pathology and inherent biologic variability. Certain data-driven analytical methods are currently being explored for smart monitoring of data streams from patients as a first-tier automated detection system for clinical deterioration. As a prelude to human clinical trials, an empirical multivariate machine learning method called Similarity-Based Modeling (SBM) was tested in an in silico experiment using data generated with the aid of a detailed computer simulator of human physiology (Quantitative Circulatory Physiology, or QCP), which contains complex control systems with realistic integrated feedback loops. Methods. SBM is a kernel-based, multivariate machine learning method that uses monitored clinical information to generate an empirical model of a patient's physiologic state. This platform allows for the use of predictive analytic techniques to identify early changes in a patient's condition that are indicative of a state of deterioration or instability. The integrity of the technique was tested through an in silico experiment using QCP in which the output of computer simulations of a slowly evolving cardiac tamponade resulted in a progressive state of cardiovascular decompensation. Simulator outputs for the variables under consideration were generated at a 2-min data rate (0.083 Hz), with the tamponade introduced at a point 420 minutes into the simulation sequence. The functionality of the SBM predictive analytics methodology to identify clinical deterioration was compared to the thresholds used by conventional monitoring methods. Results. The SBM modeling method was found to closely track the normal physiologic variation as simulated by QCP.
With the slow development of the tamponade, the SBM model was seen to diverge from the simulated biosignals in the early stages of physiologic deterioration, while the variables were still within normal ranges. Thus, the SBM system was found to identify pathophysiologic conditions in a timeframe in which they would not have been detected in a usual clinical monitoring scenario. Conclusion. In this study the functionality of a multivariate machine learning predictive methodology that incorporates commonly monitored clinical information was tested using a computer model of human physiology. SBM and predictive analytics were able to detect a state of decompensation while the monitored variables were still within normal clinical ranges. This finding suggests that SBM could provide early identification of clinical deterioration using predictive analytic techniques. Keywords: predictive analytics, hemodynamics, monitoring.
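A minimal sketch of the similarity-based idea, assuming a Gaussian similarity kernel: the current multivariate state is reconstructed as a similarity-weighted blend of reference ("memory") vectors from normal operation, and drift is flagged when the reconstruction residual grows even though each variable individually stays in its normal range. The vendor's actual SBM formulation is proprietary; all data, kernel choices, and values below are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def sbm_estimate(D, x, h=0.5):
    """Similarity-weighted reconstruction of state x from reference rows of D."""
    w = np.exp(-np.linalg.norm(D - x, axis=1) ** 2 / (2 * h ** 2))
    return w @ D / w.sum()

# Reference "memory" of normal operation: two correlated standardized vitals
D = rng.normal(size=(200, 2))
D[:, 1] = 0.8 * D[:, 0] + 0.2 * rng.normal(size=200)

normal = np.array([0.5, 0.4])    # consistent with the learned correlation
drifted = np.array([0.5, -0.6])  # each value in range, correlation broken

res_normal = np.linalg.norm(normal - sbm_estimate(D, normal))
res_drift = np.linalg.norm(drifted - sbm_estimate(D, drifted))
print(f"residual (normal)  = {res_normal:.2f}")
print(f"residual (drifted) = {res_drift:.2f}")
```

The drifted point would pass univariate threshold alarms, but its residual against the similarity-based estimate is larger, which is the mechanism by which such a model can flag deterioration before variables leave their normal ranges.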
Wu, Chao-Jung
2017-01-01
Producing indices composed of multiple input variables has been embedded in some data processing and analytical methods. We aim to test the feasibility of creating data-driven indices by aggregating input variables according to principal component analysis (PCA) loadings. To validate the significance of both the theory-based and data-driven indices, we propose principles to review innovative indices. We generated weighted indices with the variables obtained in the first years of the two-year panels in the Medical Expenditure Panel Survey initiated between 1996 and 2011. Variables were weighted according to PCA loadings and summed. The statistical significance and residual deviance of each index to predict mortality in the second years was extracted from the results of discrete-time survival analyses. There were 237,832 individuals who survived the first years of the panels, representing 4.5 billion civilians in the United States, of whom 0.62% (95% CI = 0.58% to 0.66%) died in the second years of the panels. Of all 134,689 weighted indices, there were 40,803 significantly predicting mortality in the second years with or without the adjustment of age, sex, and race. The indices significant in both models could at most lead to 10,200 years of academic tenure for individual researchers publishing four indices per year, or 618.2 years of publishing for journals with an annual volume of 66 articles. In conclusion, if aggregating information based on PCA loadings, there can be a large number of significant innovative indices composed of input variables of various predictive powers. To justify the large quantities of innovative indices, we propose a reporting and review framework for novel indices based on the objectives to create indices, variable weighting, related outcomes, and database characteristics. The indices selected by this framework could lead to a new genre of publications focusing on meaningful aggregation of information. PMID:28886057
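The index-building idea above, weighting standardized input variables by their first-principal-component loadings and summing them into one composite score, can be sketched on synthetic data. Nothing here reproduces the MEPS analysis; the factor structure and variable count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
latent = rng.normal(size=n)          # unobserved common factor
# Five synthetic input variables loading on the factor to varying degrees
X = np.column_stack([c * latent + rng.normal(size=n)
                     for c in (0.9, 0.8, 0.7, 0.3, 0.1)])

Z = (X - X.mean(0)) / X.std(0)                 # standardize inputs
eigval, eigvec = np.linalg.eigh(np.cov(Z.T))   # PCA via covariance eigendecomposition
loadings = eigvec[:, -1]                       # first PC (largest eigenvalue)
loadings *= np.sign(loadings.sum())            # fix the arbitrary sign
index = Z @ loadings                           # PCA-weighted composite index
print("PC1 loadings:", np.round(loadings, 2))
```

Because any rotation of the loadings produces another "index," the combinatorial explosion of significant indices the paper warns about follows directly from this construction.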
VizieR Online Data Catalog: Catalogue of variable stars in open clusters (Zejda+, 2012)
NASA Astrophysics Data System (ADS)
Zejda, M.; Paunzen, E.; Baumann, B.; Mikulasek, Z.; Liska, J.
2012-08-01
The catalogue of variable stars in open clusters was prepared by cross-matching the Variable Stars Index (http://www.aavso.org/vsx), version Apr 29, 2012 (available online, Cat. B/vsx), against the version 3.1 catalogue of open clusters DAML02 (Dias et al. 2002A&A...389..871D, Cat. B/ocl) available on the website http://www.astro.iag.usp.br/~wilton. The open clusters were divided into two categories according to their size, where the limiting diameter was 60 arcmin. A list of all suspected variables and variable stars located within the fields of open clusters, up to two times the given cluster radius, was generated (Table 1). 8938 and 9127 variable stars are given in 461 "smaller" and 74 "larger" clusters, respectively. All identified variable stars were matched against the PPMXL catalog of positions and proper motions within the ICRS (Roeser et al., 2010AJ....139.2440R, Cat. I/317), and proper motion data were included in our catalogue. Unfortunately, a homogeneous data set of mean cluster proper motions has not been available until now. We therefore used the following sources (sorted alphabetically) to compile a new catalogue: Baumgardt et al. (2000, Cat. J/A+AS/146/251), based on the Hipparcos catalogue; Beshenov & Loktin (2004A&AT...23..103B), based on the Tycho-2 catalogue; Dias et al. (2001, Cat. J/A+A/376/441; 2002A&A...389..871D, Cat. B/ocl), based on the Tycho-2 catalogue; Dias et al. (2006, Cat. J/A+A/446/949), based on the UCAC2 catalog (Zacharias et al., 2004AJ....127.3043Z, Cat. I/289); Frinchaboy & Majewski (2008, Cat. J/AJ/136/118), based on the Tycho-2 catalogue; Kharchenko et al. (2005, Cat. J/A+A/438/1163), based on the ASCC2.5 catalogue (Kharchenko, 2001KFNT...17..409K, Cat. I/280); Krone-Martins et al. (2010, Cat. J/A+A/516/A3), based on the Bordeaux PM2000 proper motion catalogue (Ducourant et al., 2006A&A...448.1235D, Cat. I/300); Robichon et al. (1999, Cat. J/A+A/345/471), based on the Hipparcos catalogue; and van Leeuwen (2009A&A...497..209V), based on the new Hipparcos catalogue. In total, a catalogue of proper motions for 879 open clusters (Table 2), of which 436 have more than one available measurement, was compiled. (3 data files).
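The positional cross-match criterion described above, keeping a variable star if it lies within twice a cluster's radius, reduces to an angular-separation test. The coordinates below are invented; a real match would use the VSX and DAML02 catalogues.

```python
import numpy as np

def ang_sep_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees via the spherical law of cosines."""
    r1, d1, r2, d2 = map(np.radians, (ra1, dec1, ra2, dec2))
    cos_s = np.sin(d1) * np.sin(d2) + np.cos(d1) * np.cos(d2) * np.cos(r1 - r2)
    return np.degrees(np.arccos(np.clip(cos_s, -1.0, 1.0)))

# Hypothetical cluster: 30-arcmin diameter, so radius 0.25 deg
cluster = dict(ra=120.0, dec=-30.0, radius_deg=0.25)
stars = np.array([(120.1, -30.1),   # close: inside 2x radius
                  (120.0, -29.6),   # 0.4 deg away: still inside 2x radius
                  (125.0, -30.0)])  # several degrees away: rejected
sep = ang_sep_deg(stars[:, 0], stars[:, 1], cluster["ra"], cluster["dec"])
matched = sep <= 2 * cluster["radius_deg"]
print(matched)
```

The `cos(dec)` factor in the separation formula matters here: a naive Euclidean distance in (RA, Dec) would overcount separations away from the celestial equator.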