Garcia-Vicente, Ana María; Pérez-Beteta, Julián; Pérez-García, Víctor Manuel; Molina, David; Jiménez-Londoño, German Andrés; Soriano-Castrejón, Angel; Martínez-González, Alicia
2017-08-01
The aim of the study was to investigate the influence of dual time point 2-deoxy-2-[18F]fluoro-D-glucose ([18F]FDG) positron emission tomography/x-ray computed tomography (PET/CT) on the standardized uptake value (SUV) and volume-based metabolic variables of breast lesions, and their relation with biological characteristics and molecular phenotypes. Retrospective analysis included 67 patients with locally advanced breast cancer (LABC). All patients underwent dual time point [18F]FDG PET/CT, 1 h (PET-1) and 3 h (PET-2) after [18F]FDG administration. Tumors were segmented following a three-dimensional methodology. Semiquantitative metabolic variables (SUVmax, SUVmean, and SUVpeak) and volume-based variables (metabolic tumor volume, MTV, and total lesion glycolysis, TLG) were obtained. Biologic prognostic parameters, such as hormone receptor status, p53, HER2 expression, proliferation rate (Ki-67), and grade, were obtained. Molecular phenotypes and risk classification [low: luminal A; intermediate: luminal B HER2(-) or luminal B HER2(+); high: HER2 pure or triple negative] were established. Relations between clinical and biological variables and the metabolic parameters were studied. The relevance of each metabolic variable in the prediction of phenotype risk was assessed using a multivariate analysis. SUV-based variables and TLG obtained in PET-1 and PET-2 showed high and significant correlations between them. MTV and the SUV variables (SUVmax, SUVmean, and SUVpeak) were only marginally correlated. Significant differences were found between mean SUV variables and TLG obtained in PET-1 and PET-2. High and significant associations were found between metabolic variables obtained in PET-1 and their counterparts in PET-2. On that basis, only relations of PET-1 variables with biological tumor characteristics were explored.
SUV variables showed associations with hormone receptor status (p < 0.001 and p = 0.001 for estrogen and progesterone receptor, respectively) and risk classification according to phenotype (SUVmax, p = 0.003; SUVmean, p = 0.004; SUVpeak, p = 0.003). As for volume-based variables, only TLG showed association with hormone receptor status (estrogen, p < 0.001; progesterone, p = 0.031), risk classification (p = 0.007), and grade (p = 0.036). Hormone receptor negative tumors, high-grade tumors, and high-risk phenotypes showed higher TLG values. No association was found between the metabolic variables and Ki-67, HER2, or p53 expression. Statistical differences were found between mean SUV-based variables and TLG obtained in the dual time point PET/CT. Most PET-derived parameters showed high association with molecular factors of breast cancer. However, dual time point PET/CT did not offer added value over a single PET acquisition with respect to the relations with biological variables, since the PET-1 SUV- and volume-based variables were predictors of those obtained in PET-2.
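The two statistical findings above, strong PET-1/PET-2 correlation alongside a significant mean shift, can be illustrated with a minimal sketch on synthetic data (the sample distributions and the 1.15 delayed-uptake factor are invented for illustration, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 67
suv1 = rng.gamma(shape=4.0, scale=2.0, size=n)      # PET-1 SUVmax (synthetic)
suv2 = suv1 * 1.15 + rng.normal(0.0, 0.5, size=n)   # PET-2: uptake rises at 3 h

# Strong correlation between time points
r = np.corrcoef(suv1, suv2)[0, 1]

# Paired t-statistic: significant mean difference despite the correlation
d = suv2 - suv1
t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
```

With data like these, r is close to 1 while t is large, mirroring how PET-1 values can predict PET-2 values even though the two acquisitions differ systematically.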
[A meta-analysis of the variables related to depression in Korean patients with a stroke].
Park, Eun Young; Shin, In Soo; Kim, Jung Hee
2012-08-01
The purpose of this study was to use meta-analysis to evaluate the variables related to depression in patients who have had a stroke. The materials of this study were based on 16 variables obtained from 26 recent studies spanning 10 years, selected from doctoral dissertations, master's theses, and published articles. Related variables were categorized into sixteen variables and six variable groups: general characteristics of the patients, disease characteristics, psychological state, physical function, basic needs, and social variables. The classification into defensive and risk variable groups was based on whether a variable had a negative or positive effect on depression. Quality of life (ES=-.79) and acceptance of disability (ES=-.64) were the defensive variables most highly correlated with depression. Among the risk variables, anxiety (ES=.66) and stress (ES=.53) showed high correlational effect sizes. These findings showed that defensive and risk variables were related to depression among stroke patients. Psychological interventions and improvement in physical functions should be effective in decreasing depression among stroke patients.
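As a hedged sketch of how correlational effect sizes like those above are pooled in a meta-analysis (the r values and sample sizes below are hypothetical, not taken from the 26 studies):

```python
import math

def pooled_correlation(rs, ns):
    """Fixed-effect pooled r: Fisher z-transform each r, weight by n - 3,
    average, then back-transform with tanh."""
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in rs]
    ws = [n - 3 for n in ns]
    zbar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(zbar)

# e.g. three hypothetical studies correlating anxiety with post-stroke depression
r_pooled = pooled_correlation([0.70, 0.60, 0.66], [40, 55, 80])
```

The Fisher transform is used because correlation coefficients are bounded and skewed; averaging on the z scale before back-transforming gives a less biased pooled estimate.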
Sharpening method of satellite thermal image based on the geographical statistical model
NASA Astrophysics Data System (ADS)
Qi, Pengcheng; Hu, Shixiong; Zhang, Haijun; Guo, Guangmeng
2016-04-01
To improve the effectiveness of thermal sharpening in mountainous regions while paying closer attention to the laws of land surface energy balance, a thermal sharpening method based on a geographical statistical model (GSM) is proposed. Explanatory variables were selected from the processes of land surface energy budget and thermal infrared electromagnetic radiation transmission; high spatial resolution (57 m) raster layers were then generated for these variables through spatial simulation or by using other raster data as proxies. Based on this, the locally adapted statistical relationship between brightness temperature (BT) and the explanatory variables, i.e., the GSM, was built at 1026-m resolution using the method of multivariate adaptive regression splines. Finally, the GSM was applied to the high-resolution (57-m) explanatory variables; thus, the high-resolution (57-m) BT image was obtained. This method produced a sharpening result with low error and good visual effect. The method can avoid the blind choice of explanatory variables and remove the dependence on synchronous imagery at visible and near-infrared bands. The influences of the explanatory variable combination, the sampling method, and the residual error correction on sharpening results were analyzed in detail, and their influence mechanisms are reported herein.
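The coarse-to-fine workflow described above (fit a statistical model between BT and explanatory layers at ~1026 m, apply it at 57 m, then correct residuals) can be sketched as follows. Ordinary least squares stands in for the MARS model, all layers are synthetic, and the 18× aggregation factor mimics the 1026 m / 57 m ratio:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic explanatory layers at fine resolution (e.g. 57 m): two proxies
fine = rng.normal(size=(2, 90, 90))
bt_fine_true = 300 - 3.0 * fine[0] + 1.5 * fine[1]   # hidden fine-scale truth

def aggregate(a, f):
    """Block-average a 2-D layer by factor f (coarse scale, 1026 m = 18 x 57 m)."""
    h, w = a.shape
    return a.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

f = 18
coarse = np.stack([aggregate(layer, f) for layer in fine])
bt_coarse = aggregate(bt_fine_true, f)               # what the sensor observes

# Fit BT ~ explanatory variables at the coarse scale (stand-in for MARS)
X = np.column_stack([c.ravel() for c in coarse] + [np.ones(bt_coarse.size)])
beta, *_ = np.linalg.lstsq(X, bt_coarse.ravel(), rcond=None)

# Apply the coarse-scale model to the fine-resolution layers
Xf = np.column_stack([l.ravel() for l in fine] + [np.ones(fine[0].size)])
bt_sharp = (Xf @ beta).reshape(90, 90)

# Residual correction: redistribute the coarse-scale model residual uniformly
resid = bt_coarse - aggregate(bt_sharp, f)
bt_sharp += np.kron(resid, np.ones((f, f)))
```

Because the synthetic truth is exactly linear in the proxies, the sharpened image reproduces it; with real data, the residual-correction step guarantees the sharpened BT re-aggregates to the observed coarse BT.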
Integrating models that depend on variable data
NASA Astrophysics Data System (ADS)
Banks, A. T.; Hill, M. C.
2016-12-01
Models of human-Earth systems are often developed with the goal of predicting the behavior of one or more dependent variables from multiple independent variables, processes, and parameters. Often dependent variable values range over many orders of magnitude, which complicates evaluation of the fit of the dependent variable values to observations. Many metrics and optimization methods have been proposed to address dependent variable variability, with little consensus being achieved. In this work, we evaluate two such methods: log transformation (based on the dependent variable being log-normally distributed with a constant variance) and error-based weighting (based on a multi-normal distribution with variances that tend to increase as the dependent variable value increases). Error-based weighting has the advantage of encouraging model users to carefully consider data errors, such as measurement and epistemic errors, while log-transformations can be a black box for typical users. Placing the log-transformation into the statistical perspective of error-based weighting has not formerly been considered, to the best of our knowledge. To make the evaluation as clear and reproducible as possible, we use multiple linear regression (MLR). Simulations are conducted with MATLAB. The example represents stream transport of nitrogen with up to eight independent variables. The single dependent variable in our example has values that range over 4 orders of magnitude. Results are applicable to any problem for which individual or multiple data types produce a large range of dependent variable values. For this problem, the log transformation produced good model fit, while some formulations of error-based weighting worked poorly. Results support previous suggestions that error-based weighting derived from a constant coefficient of variation overemphasizes low values and degrades model fit to high values.
Applying larger weights to the high values is inconsistent with the log-transformation. Greater consistency is obtained by imposing smaller (by up to a factor of 1/35) weights on the smaller dependent-variable values. From an error-based perspective, the small weights are consistent with large standard deviations. This work considers the consequences of these two common ways of addressing variable data.
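A minimal sketch of the two approaches compared above, unweighted fitting versus error-based weighting under a constant coefficient of variation (the regression problem, coefficients, and noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
X = np.column_stack([rng.uniform(0, 1, n), rng.uniform(0, 1, n), np.ones(n)])
beta_true = np.array([2.0, 2000.0, 1.0])
y_clean = X @ beta_true                      # spans several orders of magnitude
y = y_clean * (1 + rng.normal(0, 0.2, n))    # constant coefficient of variation

# Unweighted OLS: implicitly assumes constant absolute error, so the
# largest y values dominate the fit
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Error-based weighting: w_i = 1 / sd_i^2 with sd_i proportional to y_i,
# the weighting that a constant coefficient of variation implies
w = 1.0 / y_clean**2
beta_wls = np.linalg.lstsq(X * np.sqrt(w)[:, None], y * np.sqrt(w),
                           rcond=None)[0]
```

In the small-relative-error limit, this 1/y² weighting is equivalent to regressing log y, which is why the text treats the log-transformation as a special case of the error-based perspective; the weights on the largest y values here are smaller than those on the smallest by factors far larger than 1/35.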
2014-04-01
surrogate model generation is difficult for high-dimensional problems, due to the curse of dimensionality. Variable screening methods have been...a variable screening model was developed for the quasi-molecular treatment of ion-atom collision [16]. In engineering, a confidence interval of...for high-level radioactive waste [18]. Moreover, the design sensitivity method can be extended to the variable screening method because vital
Wahlheim, Christopher N; Finn, Bridgid; Jacoby, Larry L
2012-07-01
In four experiments, we examined the effects of repetitions and variability on the learning of bird families and metacognitive awareness of such effects. Of particular interest was the accuracy of, and bases for, predictions regarding classification of novel bird species, referred to as category learning judgments (CLJs). Participants studied birds in high repetitions and high variability conditions. These conditions differed in the number of presentations of each bird (repetitions) and the number of unique species from each family (variability). After study, participants made CLJs for each family and were then tested. Results from a classification test revealed repetition benefits for studied species and variability benefits for novel species. In contrast with performance, CLJs did not reflect the benefits of variability. Results showed that CLJs were susceptible to accessibility-based metacognitive illusions produced by additional repetitions of studied items.
Variable-intercept panel model for deformation zoning of a super-high arch dam.
Shi, Zhongwen; Gu, Chongshi; Qin, Dong
2016-01-01
This study determines dam deformation similarity indexes based on an analysis of deformation zoning features and panel data clustering theory, with comprehensive consideration of the actual deformation law of super-high arch dams and the spatial-temporal features of dam deformation. Measurement methods for these indexes are studied. Based on the established deformation similarity criteria, the principle used to determine the number of dam deformation zones is constructed through the entropy weight method. This study proposes a deformation zoning method for super-high arch dams and its implementation steps, analyzes the effect of special influencing factors of different dam zones on the deformation, introduces dummy variables that represent the special effect of dam deformation, and establishes a variable-intercept panel model for deformation zoning of super-high arch dams. Based on different patterns of the special effect in the variable-intercept panel model, two panel analysis models were established to monitor fixed and random effects of dam deformation. The Hausman test for model selection and a method for assessing model effectiveness are discussed. Finally, the effectiveness of the established models is verified through a case study.
A fast chaos-based image encryption scheme with a dynamic state variables selection mechanism
NASA Astrophysics Data System (ADS)
Chen, Jun-xin; Zhu, Zhi-liang; Fu, Chong; Yu, Hai; Zhang, Li-bo
2015-03-01
In recent years, a variety of chaos-based image cryptosystems have been investigated to meet the increasing demand for real-time secure image transmission. Most of them are based on a permutation-diffusion architecture, in which permutation and diffusion are two independent procedures with fixed control parameters. This property results in two flaws. (1) At least two chaotic state variables are required for encrypting one plain pixel, in the permutation and diffusion stages respectively. Chaotic state variables produced with high computational complexity are not sufficiently used. (2) The key stream depends solely on the secret key, and hence the cryptosystem is vulnerable to known/chosen-plaintext attacks. In this paper, a fast chaos-based image encryption scheme with a dynamic state variables selection mechanism is proposed to enhance the security and promote the efficiency of chaos-based image cryptosystems. Experimental simulations and extensive cryptanalysis have been carried out, and the results prove the superior security and high efficiency of the scheme.
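The permutation-diffusion vocabulary above can be made concrete with a toy logistic-map cipher (illustrative only: this is not the proposed scheme and is not secure, and the parameters x0 = 0.3579, r = 3.99 are arbitrary). A keystream comes from iterating the chaotic map, and ciphertext feedback supplies the diffusion step:

```python
def logistic_keystream(x0, r, n, skip=100):
    """Byte keystream from the logistic map x <- r*x*(1-x); the first
    `skip` iterations are discarded to decorrelate from the seed."""
    x, out = x0, []
    for _ in range(skip):
        x = r * x * (1 - x)
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return out

def encrypt(data, key=(0.3579, 3.99)):
    ks = logistic_keystream(key[0], key[1], len(data))
    c, prev = bytearray(), 0
    for p, k in zip(data, ks):
        byte = p ^ k ^ prev          # diffusion via ciphertext feedback
        c.append(byte)
        prev = byte
    return bytes(c)

def decrypt(data, key=(0.3579, 3.99)):
    ks = logistic_keystream(key[0], key[1], len(data))
    p, prev = bytearray(), 0
    for cb, k in zip(data, ks):
        p.append(cb ^ k ^ prev)      # invert XOR chain with prior ciphertext
        prev = cb
    return bytes(p)
```

Because each ciphertext byte feeds the next, a one-byte plaintext change propagates forward, the diffusion property the abstract refers to; in the flawed designs it criticizes, the keystream above would be the same for every image.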
Variance-based interaction index measuring heteroscedasticity
NASA Astrophysics Data System (ADS)
Ito, Keiichi; Couckuyt, Ivo; Poles, Silvia; Dhaene, Tom
2016-06-01
This work is motivated by the need to deal with models with high-dimensional input spaces of real variables. One way to tackle high-dimensional problems is to identify interaction or non-interaction among input parameters. We propose a new variance-based sensitivity interaction index that can detect and quantify interactions among the input variables of mathematical functions and computer simulations. The computation is very similar to that of Sobol' first-order sensitivity indices. The proposed interaction index can quantify the relative importance of input variables in interaction. Furthermore, detection of non-interaction for screening can be done with as few as 4n + 2 function evaluations, where n is the number of input variables. Using the interaction indices based on heteroscedasticity, the original function may be decomposed into a set of lower-dimensional functions which may then be analyzed separately.
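For context, the Sobol' first-order indices the text refers to can be estimated with a pick-freeze scheme; in the toy function below, the gap between the sum of the S_i and 1 is the interaction contribution that an interaction index aims to attribute (the estimator and test function are illustrative, not the authors' proposed index):

```python
import numpy as np

def first_order_sobol(f, n_vars, n_samp=20000, seed=0):
    """Saltelli-style pick-freeze estimate of Sobol' first-order indices
    for a function of independent uniform [0, 1] inputs."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n_samp, n_vars))
    B = rng.uniform(size=(n_samp, n_vars))
    fA, fB = f(A), f(B)
    var = np.concatenate([fA, fB]).var()
    S = np.empty(n_vars)
    for i in range(n_vars):
        ABi = A.copy()
        ABi[:, i] = B[:, i]              # resample only coordinate i
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# x0 acts additively; x1 and x2 contribute partly through a pure interaction,
# so S1 + S2 understate their joint share and sum(S) falls below 1
f = lambda X: X[:, 0] + X[:, 1] * X[:, 2]
S = first_order_sobol(f, 3)
```

For this function the analytic values are S0 ≈ 0.63 and S1 = S2 ≈ 0.16, leaving ≈ 5% of the variance to the x1-x2 interaction.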
NASA Astrophysics Data System (ADS)
Taha, Zahari; Muazu Musa, Rabiu; Majeed, Anwar P. P. Abdul; Razali Abdullah, Mohamad; Amirul Abdullah, Muhammad; Hasnun Arif Hassan, Mohd; Khalil, Zubair
2018-04-01
The present study employs a machine learning algorithm, namely the support vector machine (SVM), to classify high and low potential archers from a collection of bio-physiological variables. 50 youth archers with an average age and standard deviation of 17.0 ± .056, gathered from various archery programmes, completed a one end shooting score test. The bio-physiological variables, namely resting heart rate, resting respiratory rate, resting diastolic blood pressure, resting systolic blood pressure, as well as calorie intake, were measured prior to their shooting tests. k-means cluster analysis was applied to cluster the archers based on their scores on the variables assessed. SVM models, i.e. with linear, quadratic, and cubic kernel functions, were trained on the aforementioned variables. The k-means analysis clustered the archers into high potential archers (HPA) and low potential archers (LPA), respectively. It was demonstrated that the linear SVM exhibited good accuracy, with a classification accuracy of 94%, in comparison to the other tested models. The findings of this investigation can be valuable to coaches and sports managers in recognising high potential athletes from the selected bio-physiological variables examined.
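A hedged sketch of the pipeline described above, cluster shooting scores with k-means and then train a linear SVM on the physiological variables, using synthetic data (the variable means/spreads and the score model are invented, and a Pegasos-style sub-gradient solver stands in for whatever SVM implementation the authors used):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50
# Columns: resting HR, respiratory rate, diastolic BP, systolic BP, calories
X = rng.normal([70, 16, 75, 118, 2200], [8, 2, 7, 9, 300], size=(n, 5))
# Hypothetical shooting score driven by HR and respiratory rate plus noise
score = 100 - 1.0 * (X[:, 0] - 70) - 2.0 * (X[:, 1] - 16) + rng.normal(0, 1, n)

# 1-D k-means (k = 2) on the shooting scores -> HPA / LPA labels
c = np.array([score.min(), score.max()])
for _ in range(20):
    lab = np.abs(score[:, None] - c).argmin(axis=1)
    c = np.array([score[lab == k].mean() for k in (0, 1)])
y = np.where(lab == c.argmax(), 1, -1)       # +1 = high potential archer (HPA)

# Linear SVM on standardized variables, trained by the Pegasos method
Z = np.column_stack([(X - X.mean(0)) / X.std(0), np.ones(n)])
w, lam = np.zeros(Z.shape[1]), 0.01
for t in range(1, 4001):
    i = rng.integers(n)
    eta = 1.0 / (lam * t)
    margin = y[i] * (Z[i] @ w)
    w *= 1 - eta * lam                       # weight decay (regularization)
    if margin < 1:                           # hinge-loss margin violated
        w += eta * y[i] * Z[i]
acc = float(np.mean(np.sign(Z @ w) == y))
```

On data this cleanly structured, the linear model separates the two clusters well, consistent with the linear kernel outperforming the quadratic and cubic ones in the study.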
Crawford, John T; Loken, Luke C; Casson, Nora J; Smith, Colin; Stone, Amanda G; Winslow, Luke A
2015-01-06
Advanced sensor technology is widely used in aquatic monitoring and research. Most applications focus on temporal variability, whereas spatial variability has been challenging to document. We assess the capability of water chemistry sensors embedded in a high-speed water intake system to document spatial variability. This new sensor platform continuously samples surface water at a range of speeds (0 to >45 km h(-1)) resulting in high-density, mesoscale spatial data. These novel observations reveal previously unknown variability in physical, chemical, and biological factors in streams, rivers, and lakes. By combining multiple sensors into one platform, we were able to detect terrestrial-aquatic hydrologic connections in a small dystrophic lake, to infer the role of main-channel vs backwater nutrient processing in a large river and to detect sharp chemical changes across aquatic ecosystem boundaries in a stream/lake complex. Spatial sensor data were verified in our examples by comparing with standard lab-based measurements of selected variables. Spatial fDOM data showed strong correlation with wet chemistry measurements of DOC, and optical NO3 concentrations were highly correlated with lab-based measurements. High-frequency spatial data similar to our examples could be used to further understand aquatic biogeochemical fluxes, ecological patterns, and ecosystem processes, and will both inform and benefit from fixed-site data.
Cai, Hong; Long, Christopher M; DeRose, Christopher T; Boynton, Nicholas; Urayama, Junji; Camacho, Ryan; Pomerene, Andrew; Starbuck, Andrew L; Trotter, Douglas C; Davids, Paul S; Lentine, Anthony L
2017-05-29
We demonstrate a silicon photonic transceiver circuit for high-speed discrete variable quantum key distribution that employs a common structure for transmit and receive functions. The device is intended for use in polarization-based quantum cryptographic protocols, such as BB84. Our characterization indicates that the circuit can generate the four BB84 states (TE/TM/45°/135° linear polarizations) with >30 dB polarization extinction ratios and gigabit per second modulation speed, and is capable of decoding any polarization bases differing by 90° with high extinction ratios.
Bielak, Allison A M; Hultsch, David F; Strauss, Esther; MacDonald, Stuart W S; Hunter, Michael A
2010-09-01
In this study, the authors addressed the longitudinal nature of intraindividual variability over 3 years. A sample of 304 community-dwelling older adults, initially between the ages of 64 and 92 years, completed 4 waves of annual testing on a battery of accuracy- and latency-based tests covering a wide range of cognitive complexity. Increases in response-time inconsistency on moderately and highly complex tasks were associated with increasing age, but there were significant individual differences in change across the entire sample. The time-varying covariation between cognition and inconsistency was significant across the 1-year intervals and remained stable across both time and age. On occasions when intraindividual variability was high, participants' cognitive performance was correspondingly low. The strength of the coupling relationship was greater for more fluid cognitive domains such as memory, reasoning, and processing speed than for more crystallized domains such as verbal ability. Variability based on moderately and highly complex tasks provided the strongest prediction. These results suggest that intraindividual variability is highly sensitive to even subtle changes in cognitive ability. (c) 2010 APA, all rights reserved.
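The coupling reported above, higher intraindividual variability co-occurring with lower cognitive performance, can be sketched on synthetic response-time data (the sample sizes echo the study, but all distributions and the coupling strength are invented):

```python
import numpy as np

rng = np.random.default_rng(5)
n_persons, n_trials = 304, 40
base = rng.normal(600, 60, n_persons)            # each person's mean RT, ms
isd_true = rng.uniform(20, 120, n_persons)       # each person's RT inconsistency
rts = base[:, None] + rng.normal(0, 1, (n_persons, n_trials)) * isd_true[:, None]

# Intraindividual SD: trial-to-trial spread within each person
isd = rts.std(axis=1, ddof=1)

# Synthetic cognition score that worsens as inconsistency rises (coupling)
cognition = 100 - 0.2 * isd_true + rng.normal(0, 3, n_persons)
r = np.corrcoef(isd, cognition)[0, 1]
```

The negative correlation between the estimated intraindividual SD and the cognition score is the cross-sectional analogue of the time-varying covariation described in the abstract; in practice, group-level trends (e.g. practice effects) are regressed out of the trial data before computing the SD.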
Marateb, Hamid Reza; Mansourian, Marjan; Adibi, Peyman; Farina, Dario
2014-01-01
Background: selecting the correct statistical test and data mining method depends highly on the measurement scale of the data, the type of variables, and the purpose of the analysis. Different measurement scales are studied in detail, and statistical comparison, modeling, and data mining methods are examined using several medical examples. We present two ordinal-variable clustering examples, ordinal variables being more challenging to analyze, using the Wisconsin Breast Cancer Data (WBCD). Ordinal-to-interval scale conversion example: a breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed by two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold-standard groups of malignant and benign cases that had been identified by clinical tests. Results: the sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively. Their specificity was comparable. Conclusion: using a clustering algorithm appropriate to the measurement scale of the variables in the study grants high performance. Moreover, descriptive and inferential statistics, as well as the modeling approach, must be selected based on the scale of the variables. PMID:24672565
NASA Astrophysics Data System (ADS)
Babushkina, Elena A.; Belokopytova, Liliana V.; Shah, Santosh K.; Zhirnova, Dina F.
2018-05-01
Interrelations of the yield variability of the main crops (wheat, barley, and oats) with the hydrothermal regime and the growth of conifer trees (Pinus sylvestris and Larix sibirica) in forest-steppes were investigated in Khakassia, South Siberia. An attempt has been made to understand the role and mechanisms of climatic impact on plant productivity. It was found that, amongst the variables describing moisture supply, the wetness index had the maximum impact. The strength of the climatic response and the correlations with tree growth differ for rain-fed and irrigated crop yields. Separated high-frequency variability components of yield and tree-ring width have more pronounced relationships with each other and with climatic variables than their chronologies per se. Corresponding low-frequency variability components are strongly correlated, with maxima observed after a 1- to 5-year time shift of the tree-ring width. The results of the analysis allowed us to develop an original approach for reconstructing crop yield dynamics on the basis of the high-frequency variability component of pine growth and the low-frequency component of larch growth.
NASA Astrophysics Data System (ADS)
Dudley, R. W.; Hodgkins, G. A.; Nielsen, M. G.; Qi, S. L.
2018-07-01
A number of previous studies have examined relations between groundwater levels and hydrologic and meteorological variables over parts of the glacial aquifer system, but systematic analyses across the entire U.S. glacial aquifer system are lacking. We tested correlations between monthly groundwater levels measured at 1043 wells in the U.S. glacial aquifer system considered to be minimally influenced by human disturbance and selected hydrologic and meteorological variables with the goal of extending historical groundwater records where there were strong correlations. Groundwater levels in the East region correlated most strongly with short-term (1 and 3 month) averages of hydrologic and meteorological variables, while those in the Central and West Central regions yielded stronger correlations with hydrologic and meteorological variables averaged over longer time intervals (6-12 months). Variables strongly correlated with high and low annual groundwater levels were identified as candidate records for use in statistical linear models as a means to fill in and extend historical high and low groundwater levels respectively. Overall, 37.4% of study wells meeting data criteria had successful models for high and (or) low groundwater levels; these wells shared characteristics of relatively higher local precipitation, higher local land-surface slope, lower amounts of clay within the surficial sediments, and higher base-flow index. Streamflow and base flow served as explanatory variables in about two thirds of both high- and low-groundwater-level models in all three regions, and generally yielded more and better models compared to precipitation and Palmer Drought Severity Index. The use of variables such as streamflow with substantially longer and more complete records than those of groundwater wells provide a means for placing contemporary groundwater levels in a longer historical context and can support site-specific analyses such as groundwater modeling.
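A minimal sketch of the record-extension idea described above: regress groundwater level on a long, overlapping streamflow-based predictor, then hindcast the years without groundwater measurements (the record lengths, the log-linear relation, and the noise level are all invented):

```python
import numpy as np

rng = np.random.default_rng(6)

# Long streamflow record (say 60 years of annual low flows, log-normal-ish)
flow = np.exp(rng.normal(2.0, 0.4, 60))

# Groundwater levels tied to log-flow; only the last 20 years were measured
gw_full = 12.0 + 1.8 * np.log(flow) + rng.normal(0, 0.2, 60)
gw_obs = gw_full[40:]

# Fit level ~ log(flow) on the overlap, then hindcast the missing 40 years
A = np.column_stack([np.log(flow[40:]), np.ones(20)])
coef, *_ = np.linalg.lstsq(A, gw_obs, rcond=None)
hindcast = coef[0] * np.log(flow[:40]) + coef[1]
```

This is the "statistical linear model" strategy in the abstract: the streamflow record, being longer and more complete, places the short groundwater record in historical context, and model quality hinges on how strongly the two series correlate at the chosen averaging interval.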
City scale pollen concentration variability
NASA Astrophysics Data System (ADS)
van der Molen, Michiel; van Vliet, Arnold; Krol, Maarten
2016-04-01
Pollen are emitted into the atmosphere both in the countryside and in cities. Yet the majority of the population is exposed to pollen in cities. Allergic reactions may be induced by short-term exposure to pollen. This raises the questions of how variable pollen concentrations in cities are, temporally and spatially, and how much of the pollen in cities is actually produced in the urban region itself. We built a high-resolution (1 × 1 km) pollen dispersion model based on WRF-Chem to study a city's pollen budget and the spatial and temporal variability in concentration. It shows that the concentrations are highly variable as a result of the source distribution, wind direction, and boundary layer mixing, as well as the release rate as a function of temperature, turbulence intensity, and humidity. Hay fever forecasts based on such high-resolution emission and physical dispersion modelling surpass traditional warning methods based on temperature sums. The model gives new insights into concentration variability, personal and community-level exposure, and prevention. The model will be developed into a new forecast tool to help allergic people minimize their exposure and reduce nuisance, cost of medication, and sick leave. This is an innovative approach in hay fever warning systems.
NASA Astrophysics Data System (ADS)
Song, Yunquan; Lin, Lu; Jian, Ling
2016-07-01
The single-index varying-coefficient model is an important mathematical modeling method for nonlinear phenomena in science and engineering. In this paper, we develop a variable selection method for high-dimensional single-index varying-coefficient models using a shrinkage idea. The proposed procedure can simultaneously select significant nonparametric components and parametric components. Under defined regularity conditions, with appropriate selection of tuning parameters, the consistency of the variable selection procedure and the oracle property of the estimators are established. Moreover, due to the robustness of the check loss function to outliers in finite samples, our proposed variable selection method is more robust than those based on the least squares criterion. Finally, the method is illustrated with numerical simulations.
Variable Selection through Correlation Sifting
NASA Astrophysics Data System (ADS)
Huang, Jim C.; Jojic, Nebojsa
Many applications of computational biology require a variable selection procedure to sift through a large number of input variables and select some smaller number that influence a target variable of interest. For example, in virology, only some small number of viral protein fragments influence the nature of the immune response during viral infection. Due to the large number of variables to be considered, a brute-force search for the subset of variables is in general intractable. To approximate this, methods based on ℓ1-regularized linear regression have been proposed and have been found to be particularly successful. It is well understood however that such methods fail to choose the correct subset of variables if these are highly correlated with other "decoy" variables. We present a method for sifting through sets of highly correlated variables which leads to higher accuracy in selecting the correct variables. The main innovation is a filtering step that reduces correlations among variables to be selected, making the ℓ1-regularization effective for datasets on which many methods for variable selection fail. The filtering step changes both the values of the predictor variables and output values by projections onto components obtained through a computationally-inexpensive principal components analysis. In this paper we demonstrate the usefulness of our method on synthetic datasets and on novel applications in virology. These include HIV viral load analysis based on patients' HIV sequences and immune types, as well as the analysis of seasonal variation in influenza death rates based on the regions of the influenza genome that undergo diversifying selection in the previous season.
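A hedged sketch of the problem and the sifting idea: a minimal ISTA solver for the ℓ1-penalized regression, applied before and after projecting out a principal component shared by correlated "decoy" variables (the single-component projection below is a simplification of the paper's filtering step, and the data are synthetic):

```python
import numpy as np

def ista_lasso(X, y, lam, n_iter=500):
    """Minimal ISTA solver for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n        # Lipschitz constant of the gradient
    b = np.zeros(p)
    for _ in range(n_iter):
        g = X.T @ (X @ b - y) / n            # gradient of the smooth part
        b = b - g / L
        b = np.sign(b) * np.maximum(np.abs(b) - lam / L, 0.0)  # soft threshold
    return b

rng = np.random.default_rng(7)
n, p = 200, 10
z = rng.normal(size=n)                       # latent factor shared by decoys
X = rng.normal(size=(n, p))
X[:, :4] += 3.0 * z[:, None]                 # variables 0-3 highly correlated
y = 2.0 * X[:, 0] + rng.normal(0, 0.5, n)    # only variable 0 is causal

# Plain l1 regression: the correlated decoys can absorb weight from variable 0
b_raw = ista_lasso(X, y, lam=0.1)

# "Sifting": project the shared principal component out of both X and y,
# reducing the correlation that confuses the l1 penalty
U, s, Vt = np.linalg.svd(X, full_matrices=False)
P = np.eye(n) - np.outer(U[:, 0], U[:, 0])   # remove the top component
b_filt = ista_lasso(P @ X, P @ y, lam=0.1)
```

After filtering, the decorrelated columns let the ℓ1 penalty concentrate weight on the truly causal variable; the paper's method generalizes this to choosing which components to project out and to the much larger variable sets found in virology applications.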
Ross, Michael G; Jessie, Marquis; Amaya, Kevin; Matushewski, Brad; Durosier, L Daniel; Frasch, Martin G; Richardson, Bryan S
2013-04-01
Recent guidelines classify variable decelerations without detail as to degree of depth. We hypothesized that variable deceleration severity is highly correlated with fetal base deficit accumulation. Seven near-term fetal sheep underwent a series of graded umbilical cord occlusions resulting in mild (30 bpm decrease), moderate (60 bpm decrease), or severe (decrease of 90 bpm to baseline <70 bpm) variable decelerations at 2.5 minute intervals. Mild, moderate, and severe variable decelerations increased fetal base deficit (0.21 ± 0.03, 0.27 ± 0.03, and 0.54 ± 0.09 mEq/L per minute) in direct proportion to severity. During recovery, fetal base deficit cleared at 0.12 mEq/L per minute. In this model, ovine fetuses can tolerate repetitive mild and moderate variable decelerations with minimal change in base deficit and lactate. In contrast, repetitive severe variable decelerations may result in significant base deficit increases, dependent on frequency. Modified guideline differentiation of mild/moderate vs severe variable decelerations may aid in the interpretation of fetal heart rate tracings and optimization of clinical management paradigms. Copyright © 2013 Mosby, Inc. All rights reserved.
Comparison of High-Frequency Solar Irradiance: Ground Measured vs. Satellite-Derived
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lave, Matthew; Weekley, Andrew
2016-11-21
High-frequency solar variability is important to grid integration studies, but ground measurements are scarce. The high resolution irradiance algorithm (HRIA) can produce 4-second resolution global horizontal irradiance (GHI) samples at locations across North America. However, the HRIA has not been extensively validated. In this work, we evaluate the HRIA against a database of 10 high-frequency ground-based measurements of irradiance. The evaluation focuses on variability-based metrics. This results in a greater understanding of the errors in the HRIA as well as suggestions for its improvement.
Predictive modeling and reducing cyclic variability in autoignition engines
Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob
2016-08-30
Methods and systems are provided for controlling a vehicle engine to reduce cycle-to-cycle combustion variation. A predictive model is applied to predict cycle-to-cycle combustion behavior of an engine based on observed engine performance variables. Conditions are identified, based on the predicted cycle-to-cycle combustion behavior, that indicate high cycle-to-cycle combustion variation. Corrective measures are then applied to prevent the predicted high cycle-to-cycle combustion variation.
Mining the oral mycobiome: Methods, components, and meaning
Diaz, Patricia I.; Hong, Bo-Young; Dupuy, Amanda K.; Strausbaugh, Linda D.
2017-01-01
ABSTRACT Research on oral fungi has centered on Candida. However, recent internal transcribed spacer (ITS)-based studies revealed a vast number of fungal taxa as potential oral residents. We review DNA-based studies of the oral mycobiome and contrast them with cultivation-based surveys, showing that most genera encountered by cultivation have also been detected molecularly. Some taxa such as Malassezia, however, appear in high prevalence and abundance in molecular studies but have not been cultivated. Important technical and bioinformatic challenges to ITS-based oral mycobiome studies are discussed. These include optimization of sample lysis, variability in length of ITS amplicons, high intra-species ITS sequence variability, high inter-species variability in ITS copy number and challenges in nomenclature and maintenance of curated reference databases. Molecular surveys are powerful first steps to characterize the oral mycobiome but further research is needed to unravel which fungi detected by DNA are true oral residents and what role they play in oral homeostasis. PMID:27791473
Gabriel, Florence C.; Szücs, Dénes
2014-01-01
Recent studies have indicated that people have a strong tendency to compare fractions based on constituent numerators or denominators. This is called componential processing. This study explored whether componential processing was preferred in tasks involving high stimuli variability and high contextual interference, when fractions could be compared based either on the holistic values of fractions or on their denominators. Here, stimuli variability referred to the fact that fractions were not monotonous but diversiform. Contextual interference referred to the fact that the processing of fractions was subject to interference from other stimuli. To these ends, three tasks were used. In Task 1, participants compared a standard fraction 1/5 to unit fractions. This task was used as a low stimuli variability and low contextual interference task. In Task 2, stimuli variability was increased by mixing unit and non-unit fractions. In Task 3, high contextual interference was created by incorporating decimals into fractions. The RT results showed that the processing patterns of fractions were very similar for adults and children. In Tasks 1 and 3, only componential processing was utilized. In contrast, both holistic processing and componential processing were utilized in Task 2. These results suggest that, if individuals are presented with the opportunity to perform componential processing, both adults and children will tend to do so, even if they are faced with high variability of fractions or high contextual interference. PMID:25249995
Zhang, Li; Fang, Qiaochu; Gabriel, Florence C; Szücs, Dénes
2014-01-01
Recent studies have indicated that people have a strong tendency to compare fractions based on constituent numerators or denominators. This is called componential processing. This study explored whether componential processing was preferred in tasks involving high stimuli variability and high contextual interference, when fractions could be compared based either on the holistic values of fractions or on their denominators. Here, stimuli variability referred to the fact that fractions were not monotonous but diversiform. Contextual interference referred to the fact that the processing of fractions was subject to interference from other stimuli. To these ends, three tasks were used. In Task 1, participants compared a standard fraction 1/5 to unit fractions. This task was used as a low stimuli variability and low contextual interference task. In Task 2, stimuli variability was increased by mixing unit and non-unit fractions. In Task 3, high contextual interference was created by incorporating decimals into fractions. The RT results showed that the processing patterns of fractions were very similar for adults and children. In Tasks 1 and 3, only componential processing was utilized. In contrast, both holistic processing and componential processing were utilized in Task 2. These results suggest that, if individuals are presented with the opportunity to perform componential processing, both adults and children will tend to do so, even if they are faced with high variability of fractions or high contextual interference.
NASA Astrophysics Data System (ADS)
Crawford, Ben; Grimmond, Sue; Kent, Christoph; Gabey, Andrew; Ward, Helen; Sun, Ting; Morrison, William
2017-04-01
Remotely sensed data from satellites have potential to enable high-resolution, automated calculation of urban surface energy balance terms and inform decisions about urban adaptations to environmental change. However, aerodynamic resistance methods to estimate sensible heat flux (QH) in cities using satellite-derived observations of surface temperature are difficult in part due to spatial and temporal variability of the thermal aerodynamic resistance term (rah). In this work, we extend an empirical function to estimate rah using observational data from several cities with a broad range of surface vegetation land cover properties. We then use this function to calculate spatially and temporally variable rah in London based on high-resolution (100 m) land cover datasets and in situ meteorological observations. In order to calculate high-resolution QH based on satellite-observed land surface temperatures, we also develop and employ novel methods to i) apply source area-weighted averaging of surface and meteorological variables across the study spatial domain, ii) calculate spatially variable, high-resolution meteorological variables (wind speed, friction velocity, and Obukhov length), iii) incorporate spatially interpolated urban air temperatures from a distributed sensor network, and iv) apply a modified Monte Carlo approach to assess uncertainties with our results, methods, and input variables. Modeled QH using the aerodynamic resistance method is then compared to in situ observations in central London from a unique network of scintillometers and eddy-covariance measurements.
Identification of young stellar variables with KELT for K2 - II. The Upper Scorpius association
NASA Astrophysics Data System (ADS)
Ansdell, Megan; Oelkers, Ryan J.; Rodriguez, Joseph E.; Gaidos, Eric; Somers, Garrett; Mamajek, Eric; Cargile, Phillip A.; Stassun, Keivan G.; Pepper, Joshua; Stevens, Daniel J.; Beatty, Thomas G.; Siverd, Robert J.; Lund, Michael B.; Kuhn, Rudolf B.; James, David; Gaudi, B. Scott
2018-01-01
High-precision photometry from space-based missions such as K2 and Transiting Exoplanet Survey Satellite enables detailed studies of young star variability. However, because space-based observing campaigns are often short (e.g. 80 d for K2), complementary long-baseline photometric surveys are critical for obtaining a complete understanding of young star variability, which can change on time-scales of minutes to years. We therefore present and analyse light curves of members of the Upper Scorpius association made over 5.5 yr by the ground-based Kilodegree Extremely Little Telescope (KELT), which complement the high-precision observations of this region taken by K2 during its Campaigns 2 and 15. We show that KELT data accurately identify the periodic signals found with high-precision K2 photometry, demonstrating the power of ground-based surveys in deriving stellar rotation periods of young stars. We also use KELT data to identify sources exhibiting variability that is likely related to circumstellar material and/or stellar activity cycles; these signatures are often unseen in the short-term K2 data, illustrating the importance of long-term monitoring surveys for studying the full range of young star variability. We provide the KELT light curves as electronic tables in an ongoing effort to establish legacy time series data sets for young stellar clusters.
ERIC Educational Resources Information Center
Marsh, Herbert W.
Variables that influence growth and change in educational outcomes in the last 2 years of high school were studied using data from the High School and Beyond (HSB) study. The HSB study provided a database of thousands of variables for about 30 students from each of 1,000 randomly selected high schools in the United States in their sophomore and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gallo, Giulia
Integrating increasingly high levels of variable generation in U.S. electricity markets requires addressing not only power system and grid modeling challenges but also an understanding of how market participants react and adapt to them. Key elements of current and future wholesale power markets can be modeled using an agent-based approach, which may prove to be a useful paradigm for researchers studying and planning for power systems of the future.
A review of covariate selection for non-experimental comparative effectiveness research.
Sauer, Brian C; Brookhart, M Alan; Roy, Jason; VanderWeele, Tyler
2013-11-01
This paper addresses strategies for selecting variables for adjustment in non-experimental comparative effectiveness research and uses causal graphs to illustrate the causal network that relates treatment to outcome. Variables in the causal network take on multiple structural forms. Adjustment for a common cause pathway between treatment and outcome can remove confounding, whereas adjustment for other structural types may increase bias. For this reason, variable selection would ideally be based on an understanding of the causal network; however, the true causal network is rarely known. Therefore, we describe more practical variable selection approaches based on background knowledge when the causal structure is only partially known. These approaches include adjustment for all observed pretreatment variables thought to have some connection to the outcome, all known risk factors for the outcome, and all direct causes of the treatment or the outcome. Empirical approaches, such as forward and backward selection and automatic high-dimensional proxy adjustment, are also discussed. As there is a continuum between knowing and not knowing the causal, structural relations of variables, we recommend addressing variable selection in a practical way that involves a combination of background knowledge and empirical selection and that uses high-dimensional approaches. This empirical approach can be used to select from a set of a priori variables based on the researcher's knowledge to be included in the final analysis or to identify additional variables for consideration. This more limited use of empirically derived variables may reduce confounding while simultaneously reducing the risk of including variables that may increase bias. Copyright © 2013 John Wiley & Sons, Ltd.
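The structural point above, that adjusting for a common cause removes confounding while adjusting for other structural types (for instance a collider) can increase bias, can be checked with a small simulation. The linear data-generating model below is an illustrative assumption, not one from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
U = rng.normal(size=n)                     # common cause of treatment and outcome
T = U + rng.normal(size=n)                 # treatment
Y = 2.0 * T + U + rng.normal(size=n)       # true causal effect of T on Y is 2
C = T + Y + rng.normal(size=n)             # collider: caused by both T and Y

def ols_effect(covariates):
    """OLS coefficient on T when regressing Y on T plus the given covariates."""
    X = np.column_stack([T, *covariates, np.ones(n)])
    return np.linalg.lstsq(X, Y, rcond=None)[0][0]

naive = ols_effect([])          # confounded: biased upward (about 2.5 here)
adjusted = ols_effect([U])      # adjusting for the common cause recovers about 2.0
collider = ols_effect([U, C])   # additionally adjusting for the collider re-biases it
```

Conditioning on C opens a non-causal path between T and Y, so the collider-adjusted estimate is badly biased even though the confounder is also controlled.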
A Review of Covariate Selection for Nonexperimental Comparative Effectiveness Research
Sauer, Brian C.; Brookhart, Alan; Roy, Jason; Vanderweele, Tyler
2014-01-01
This paper addresses strategies for selecting variables for adjustment in non-experimental comparative effectiveness research (CER), and uses causal graphs to illustrate the causal network that relates treatment to outcome. Variables in the causal network take on multiple structural forms. Adjustment for a common cause pathway between treatment and outcome can remove confounding, while adjustment for other structural types may increase bias. For this reason, variable selection would ideally be based on an understanding of the causal network; however, the true causal network is rarely known. Therefore, we describe more practical variable selection approaches based on background knowledge when the causal structure is only partially known. These approaches include adjustment for all observed pretreatment variables thought to have some connection to the outcome, all known risk factors for the outcome, and all direct causes of the treatment or the outcome. Empirical approaches, such as forward and backward selection and automatic high-dimensional proxy adjustment, are also discussed. As there is a continuum between knowing and not knowing the causal, structural relations of variables, we recommend addressing variable selection in a practical way that involves a combination of background knowledge and empirical selection and that uses high-dimensional approaches. This empirical approach can be used to select from a set of a priori variables based on the researcher’s knowledge to be included in the final analysis or to identify additional variables for consideration. This more limited use of empirically derived variables may reduce confounding while simultaneously reducing the risk of including variables that may increase bias. PMID:24006330
NASA Technical Reports Server (NTRS)
Alexandrov, N. M.; Nielsen, E. J.; Lewis, R. M.; Anderson, W. K.
2000-01-01
First-order approximation and model management is a methodology for the systematic use of variable-fidelity models or approximations in optimization. The intent of model management is to attain convergence to high-fidelity solutions with minimal expense in high-fidelity computations. The savings in terms of computationally intensive evaluations depend on the ability of the available lower-fidelity model, or a suite of models, to predict the improvement trends for the high-fidelity problem. Variable-fidelity models can be represented by data-fitting approximations, variable-resolution models, variable-convergence models, or variable-physical-fidelity models. The present work considers the use of variable-fidelity physics models. We demonstrate the performance of model management on an aerodynamic optimization of a multi-element airfoil designed to operate in the transonic regime. Reynolds-averaged Navier-Stokes equations represent the high-fidelity model, while the Euler equations represent the low-fidelity model. The unstructured mesh-based analysis code FUN2D evaluates functions and sensitivity derivatives for both models. Model management for the present demonstration problem yields fivefold savings in terms of high-fidelity evaluations compared to optimization done with high-fidelity computations alone.
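A one-dimensional sketch of the first-order correction at the heart of model management: the low-fidelity model is corrected so that it matches the high-fidelity value and slope at the current iterate, which is the consistency condition the convergence theory relies on. The two model functions below are illustrative stand-ins, not the Navier-Stokes/Euler pair used in the paper.

```python
import numpy as np

def f_hi(x):            # "expensive" high-fidelity model (illustrative)
    return np.sin(3 * x) + x**2

def f_lo(x):            # "cheap" low-fidelity model missing some physics
    return x**2

def first_order_correction(x0, h=1e-5):
    """Additive correction making f_lo match f_hi's value and slope at x0."""
    d_hi = (f_hi(x0 + h) - f_hi(x0 - h)) / (2 * h)   # central-difference gradients
    d_lo = (f_lo(x0 + h) - f_lo(x0 - h)) / (2 * h)
    c0 = f_hi(x0) - f_lo(x0)
    c1 = d_hi - d_lo
    return lambda x: f_lo(x) + c0 + c1 * (x - x0)

x0 = 0.5
surrogate = first_order_correction(x0)   # cheap model, trustworthy near x0
```

The corrected surrogate can then be optimized within a trust region around x0, with the expensive model evaluated only to re-center the correction.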
Variability of African Farming Systems from Phenological Analysis of NDVI Time Series
NASA Technical Reports Server (NTRS)
Vrieling, Anton; deBeurs, K. M.; Brown, Molly E.
2011-01-01
Food security exists when people have access to sufficient, safe and nutritious food at all times to meet their dietary needs. The natural resource base is one of the many factors affecting food security. Its variability and decline create problems for local food production. In this study we characterize vegetation phenology for sub-Saharan Africa and assess variability and trends of phenological indicators based on NDVI time series from 1982 to 2006. We focus on cumulated NDVI over the season (cumNDVI), which is a proxy for net primary productivity. Results are aggregated at the level of major farming systems, while also determining spatial variability within farming systems. High temporal variability of cumNDVI occurs in semiarid and subhumid regions. The results show a large area of positive cumNDVI trends between Senegal and South Sudan. These correspond to positive CRU rainfall trends and relate to recovery after the droughts of the 1980s. We find significant negative cumNDVI trends near the south coast of West Africa (Guinea coast) and in Tanzania. For each farming system, causes of change and variability are discussed based on available literature (Appendix A). Although food security comprises more than the local natural resource base, our results can provide an input for food security analysis by identifying zones of high variability or downward trends. Farming systems are found to be a useful level of analysis. Diversity and trends found within farming system boundaries underline that farming systems are dynamic.
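Per-pixel trend and variability of cumulated NDVI can be computed as below. The synthetic 25-year series stands in for actual AVHRR-type data, and the recovery slope and noise level are assumed values for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
years = np.arange(1982, 2007)
# synthetic cumNDVI series for one pixel: assumed recovery trend + interannual noise
cum_ndvi = 5.0 + 0.04 * (years - years[0]) + rng.normal(0, 0.2, len(years))

slope = np.polyfit(years, cum_ndvi, 1)[0]       # trend (cumNDVI units per year)
cv = cum_ndvi.std() / cum_ndvi.mean()           # temporal variability of the pixel
```

Maps of `slope` and `cv` over all pixels, aggregated by farming system, are the kind of products the study uses to flag zones of high variability or downward trends.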
NASA Astrophysics Data System (ADS)
Cristiano, Elena; ten Veldhuis, Marie-claire; van de Giesen, Nick
2017-07-01
In urban areas, hydrological processes are characterized by high variability in space and time, making them sensitive to small-scale temporal and spatial rainfall variability. In the last decades new instruments, techniques, and methods have been developed to capture rainfall and hydrological processes at high resolution. Weather radars have been introduced to estimate high spatial and temporal rainfall variability. At the same time, new models have been proposed to reproduce hydrological response, based on small-scale representation of urban catchment spatial variability. Despite these efforts, interactions between rainfall variability, catchment heterogeneity, and hydrological response remain poorly understood. This paper presents a review of our current understanding of hydrological processes in urban environments as reported in the literature, focusing on their spatial and temporal variability aspects. We review recent findings on the effects of rainfall variability on hydrological response and identify gaps where knowledge needs to be further developed to improve our understanding of and capability to predict urban hydrological response.
Framework for making better predictions by directly estimating variables' predictivity.
Lo, Adeline; Chernoff, Herman; Zheng, Tian; Lo, Shaw-Hwa
2016-12-13
We propose approaching prediction from a framework grounded in the theoretical correct prediction rate of a variable set as a parameter of interest. This framework allows us to define a measure of predictivity that enables assessing variable sets for, preferably high, predictivity. We first define the prediction rate for a variable set and consider, and ultimately reject, the naive estimator, a statistic based on the observed sample data, due to its inflated bias for moderate sample size and its sensitivity to noisy useless variables. We demonstrate that the I-score of the PR method of VS yields a relatively unbiased estimate of a parameter that is not sensitive to noisy variables and is a lower bound to the parameter of interest. Thus, the PR method using the I-score provides an effective approach to selecting highly predictive variables. We offer simulations and an application of the I-score on real data to demonstrate the statistic's predictive performance on sample data. We conjecture that using the partition retention and I-score can aid in finding variable sets with promising prediction rates; however, further research in the avenue of sample-based measures of predictivity is much desired.
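A minimal sketch of the influence (I-) score on a discrete partition, in one published form (sum of squared, size-weighted group-mean deviations, normalized by n times the outcome variance); the synthetic data are illustrative, and the exact normalization varies across papers, so treat this as a sketch rather than the authors' definitive statistic.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
x_informative = rng.integers(0, 2, n)    # binary predictor that drives y
x_noise = rng.integers(0, 2, n)          # useless predictor
y = (x_informative + rng.normal(0, 0.5, n) > 0.5).astype(float)

def i_score(x, y):
    """I-score over the partition induced by discrete variable x (one common form)."""
    ybar, var = y.mean(), y.var()
    total = 0.0
    for level in np.unique(x):
        group = y[x == level]
        total += len(group) ** 2 * (group.mean() - ybar) ** 2
    return total / (len(y) * var)
```

The informative variable scores far higher than the noise variable, reflecting the insensitivity to noisy useless variables emphasized above.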
Biodegradability study of high-erucic-acid-rapeseed-oil-based lubricant additives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, E.; Crawford, R.L.; Shanahan, A.
1995-12-31
A variety of high-erucic-acid-rapeseed (HEAR)-oil-based lubricants, lubricant additives, and greases were examined for biodegradability at the University of Idaho Center for Hazardous Waste Remediation Research. Two standard biodegradability tests were employed, a currently accepted US Environmental Protection Agency (EPA) protocol and the Sturm Test. As is normal for tests that employ variable inocula such as sewage as a source of microorganisms, these procedures yielded variable results from one repetition to another. However, a general trend of rapid and complete biodegradability of the HEAR-oil-based materials was observed.
Exploration of an oculometer-based model of pilot workload
NASA Technical Reports Server (NTRS)
Krebs, M. J.; Wingert, J. W.; Cunningham, T.
1977-01-01
Potential relationships between eye behavior and pilot workload are discussed. A Honeywell Mark IIA oculometer was used to obtain the eye data in a fixed-base transport aircraft simulation facility. The data were analyzed to determine those parameters of eye behavior which were related to changes in the level of task difficulty of the simulated manual approach and landing on instruments. A number of trends and relationships between eye variables and pilot ratings were found. A preliminary equation was written based on the results of a stepwise linear regression. High variability in time spent on various instruments was related to differences in scanning strategy among pilots. A more detailed analysis of individual runs by individual pilots was performed to investigate the source of this variability more closely. Results indicated a high degree of intra-pilot variability in instrument scanning. No consistent workload-related trends were found. Pupil diameter, which had demonstrated a strong relationship to task difficulty, was extensively re-examined.
Variable stator radial turbine
NASA Technical Reports Server (NTRS)
Rogo, C.; Hajek, T.; Chen, A. G.
1984-01-01
A radial turbine stage with a variable area nozzle was investigated. A high work capacity turbine design with a known high performance base was modified to accept a fixed vane stagger angle moveable sidewall nozzle. The nozzle area was varied by moving the forward and rearward sidewalls. Diffusing and accelerating rotor inlet ramps were evaluated in combinations with hub and shroud rotor exit rings. Performance of contoured sidewalls and the location of the sidewall split line with respect to the rotor inlet was compared to the baseline. Performance and rotor exit survey data are presented for 31 different geometries. Detail survey data at the nozzle exit are given in contour plot format for five configurations. A data base is provided for a variable geometry concept that is a viable alternative to the more common pivoted vane variable geometry radial turbine.
Big Data Toolsets to Pharmacometrics: Application of Machine Learning for Time-to-Event Analysis.
Gong, Xiajing; Hu, Meng; Zhao, Liang
2018-05-01
Additional value can be potentially created by applying big data tools to address pharmacometric problems. The performances of machine learning (ML) methods and the Cox regression model were evaluated based on simulated time-to-event data synthesized under various preset scenarios, i.e., with linear vs. nonlinear and dependent vs. independent predictors in the proportional hazard function, or with high-dimensional data featured by a large number of predictor variables. Our results showed that ML-based methods outperformed the Cox model in prediction performance as assessed by concordance index and in identifying the preset influential variables for high-dimensional data. The prediction performances of ML-based methods are also less sensitive to data size and censoring rates than the Cox regression model. In conclusion, ML-based methods provide a powerful tool for time-to-event analysis, with a built-in capacity for high-dimensional data and better performance when the predictor variables assume nonlinear relationships in the hazard function. © 2018 The Authors. Clinical and Translational Science published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
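The concordance index used above to compare the ML methods and the Cox model can be computed directly. The following is the standard Harrell formulation sketched from scratch, not code from the paper.

```python
import itertools

def concordance_index(times, events, risks):
    """Harrell's c-index: fraction of comparable pairs in which the
    higher-risk subject fails earlier (events: 1 = observed, 0 = censored).
    Tied event times are skipped for simplicity."""
    concordant = comparable = 0.0
    data = list(zip(times, events, risks))
    for (t_i, e_i, r_i), (t_j, e_j, r_j) in itertools.combinations(data, 2):
        if t_i == t_j:
            continue
        if t_i < t_j and not e_i:      # earlier subject censored: not comparable
            continue
        if t_j < t_i and not e_j:
            continue
        comparable += 1
        early_risk, late_risk = (r_i, r_j) if t_i < t_j else (r_j, r_i)
        if early_risk > late_risk:
            concordant += 1
        elif early_risk == late_risk:  # ties in predicted risk count half
            concordant += 0.5
    return concordant / comparable
```

A perfect risk ranking gives 1.0, a random ranking about 0.5, which is why the index serves as a discrimination metric for both Cox and ML survival models.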
ERIC Educational Resources Information Center
Guzeller, Cem Oktay; Akin, Ayca
2014-01-01
The purpose of this study is to determine the power of ICT variables, including the Internet/entertainment use (IEU), program/software use (PRGUSE), confidence in internet tasks (INTCONF) and confidence in ICT high level tasks (HIGHCONF), in predicting mathematics achievement based on PISA 2006 data. This study indicates that the ICT variables…
ERIC Educational Resources Information Center
Yenice, Nilgun
2011-01-01
This study was conducted to examine pre-service science teachers' critical thinking dispositions and problem solving skills based on gender, grade level and graduated high school variables. Also relationship between pre-service science teachers' critical thinking dispositions and problem solving skills was examined based on gender, grade level and…
A downscaling scheme for atmospheric variables to drive soil-vegetation-atmosphere transfer models
NASA Astrophysics Data System (ADS)
Schomburg, A.; Venema, V.; Lindau, R.; Ament, F.; Simmer, C.
2010-09-01
For driving soil-vegetation-atmosphere transfer models or hydrological models, high-resolution atmospheric forcing data are needed. For most applications the resolution of atmospheric model output is too coarse. To avoid biases due to non-linear processes, a downscaling system should predict the unresolved variability of the atmospheric forcing. For this purpose we derived a disaggregation system consisting of three steps: (1) a bi-quadratic spline interpolation of the low-resolution data, (2) a so-called 'deterministic' part, based on statistical rules between high-resolution surface variables and the desired atmospheric near-surface variables, and (3) an autoregressive noise-generation step. The disaggregation system has been developed and tested on high-resolution model output (400 m horizontal grid spacing). A novel automatic search algorithm has been developed for deriving the deterministic downscaling rules of step 2. When applied to the atmospheric variables of the lowest layer of the atmospheric COSMO model, the disaggregation is able to adequately reconstruct the reference fields. Applying downscaling steps 1 and 2 decreases root-mean-square errors. Step 3 finally leads to a close match of the subgrid variability and temporal autocorrelation with the reference fields. The scheme can be applied to the output of atmospheric models, both for stand-alone offline simulations and for a fully coupled model system.
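The three steps above can be sketched on a one-dimensional toy field. Linear interpolation stands in for the bi-quadratic spline, and the lapse-rate rule and AR(1) parameters are illustrative assumptions rather than rules derived by the paper's search algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
coarse = np.array([280.0, 284.0, 281.0])      # coarse near-surface temperature (K)
factor = 4
n_fine = len(coarse) * factor
elevation = rng.normal(0.0, 100.0, n_fine)    # high-res surface predictor (m)

# Step 1: interpolate to the fine grid (linear here; the paper uses bi-quadratic splines)
x_coarse = np.arange(len(coarse)) * factor + factor / 2
x_fine = np.arange(n_fine)
fine = np.interp(x_fine, x_coarse, coarse)

# Step 2: 'deterministic' correction via a statistical rule against a surface variable
lapse_rate = -0.0065                           # assumed K per m
fine = fine + lapse_rate * (elevation - elevation.mean())

# Step 3: autoregressive noise restores the unresolved small-scale variability
phi, sigma = 0.8, 0.1
noise = np.zeros(n_fine)
for i in range(1, n_fine):
    noise[i] = phi * noise[i - 1] + rng.normal(0.0, sigma)
fine = fine + noise
```

Steps 1 and 2 pull the fine field toward the reference; step 3 adds variability with the right autocorrelation rather than reducing pointwise error, matching the roles described above.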
Framework for making better predictions by directly estimating variables’ predictivity
Chernoff, Herman; Lo, Shaw-Hwa
2016-01-01
We propose approaching prediction from a framework grounded in the theoretical correct prediction rate of a variable set as a parameter of interest. This framework allows us to define a measure of predictivity that enables assessing variable sets for, preferably high, predictivity. We first define the prediction rate for a variable set and consider, and ultimately reject, the naive estimator, a statistic based on the observed sample data, due to its inflated bias for moderate sample size and its sensitivity to noisy useless variables. We demonstrate that the I-score of the PR method of VS yields a relatively unbiased estimate of a parameter that is not sensitive to noisy variables and is a lower bound to the parameter of interest. Thus, the PR method using the I-score provides an effective approach to selecting highly predictive variables. We offer simulations and an application of the I-score on real data to demonstrate the statistic’s predictive performance on sample data. We conjecture that using the partition retention and I-score can aid in finding variable sets with promising prediction rates; however, further research in the avenue of sample-based measures of predictivity is much desired. PMID:27911830
Taha, Zahari; Musa, Rabiu Muazu; P P Abdul Majeed, Anwar; Alim, Muhammad Muaz; Abdullah, Mohamad Razali
2018-02-01
Support Vector Machine (SVM) has been shown to be an effective learning algorithm for classification and prediction. However, SVM has rarely been applied in specific sports to quantify and discriminate between low- and high-performance athletes. The present study classified and predicted high- and low-potential archers from a set of fitness and motor ability variables trained on different SVM kernel algorithms. 50 youth archers (mean age 17.0 ± 0.6 years) drawn from various archery programmes completed a six-arrow shooting score test. Standard fitness and ability measurements, namely hand grip, vertical jump, standing broad jump, static balance, upper muscle strength and core muscle strength, were also recorded. Hierarchical agglomerative cluster analysis (HACA) was used to cluster the archers based on the performance variables tested. SVM models with linear, quadratic, cubic, fine-RBF, medium-RBF, and coarse-RBF kernel functions were trained on the measured performance variables. The HACA clustered the archers into high-potential archers (HPA) and low-potential archers (LPA), respectively. The linear, quadratic, cubic, and medium-RBF kernel models demonstrated excellent classification accuracy of 97.5% (a 2.5% error rate) in predicting the HPA and the LPA. The findings of this investigation can be valuable to coaches and sports managers in recognising high-potential athletes from a combination of the few selected fitness and motor ability variables examined, which would consequently save cost, time and effort during talent identification programmes. Copyright © 2017 Elsevier B.V. All rights reserved.
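The cluster-then-classify pipeline can be sketched with synthetic data. The two well-separated groups and the six stand-in measurements are illustrative assumptions, and scikit-learn's kernels only loosely correspond to the fine/medium/coarse RBF presets named above.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.svm import SVC

rng = np.random.default_rng(4)
# synthetic stand-ins for the six fitness/motor-ability measurements
high_potential = rng.normal(1.0, 0.3, size=(25, 6))
low_potential = rng.normal(-1.0, 0.3, size=(25, 6))
X = np.vstack([high_potential, low_potential])

# Step 1: hierarchical agglomerative clustering assigns the HPA/LPA groups
labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)

# Step 2: train SVMs with different kernels to predict the cluster labels
accuracy = {k: SVC(kernel=k).fit(X, labels).score(X, labels)
            for k in ("linear", "poly", "rbf")}
```

In practice the accuracy should be cross-validated rather than measured on the training set, as done here only to keep the sketch short.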
Schnitzer, Mireille E.; Lok, Judith J.; Gruber, Susan
2015-01-01
This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low- and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios. PMID:26226129
Valcke, Mathieu; Haddad, Sami
2015-01-01
The objective of this study was to compare the magnitude of interindividual variability in internal dose for inhalation exposure to single versus multiple chemicals. Physiologically based pharmacokinetic models for adults (AD), neonates (NEO), toddlers (TODD), and pregnant women (PW) were used to simulate inhalation exposure to "low" (RfC-like) or "high" (AEGL-like) air concentrations of benzene (Bz) or dichloromethane (DCM), along with various levels of toluene alone or toluene with ethylbenzene and xylene. Monte Carlo simulations were performed and distributions of relevant internal dose metrics of either Bz or DCM were computed. Area under the blood concentration of parent compound versus time curve (AUC)-based variability in AD, TODD, and PW rose for Bz when concomitant "low" exposure to mixtures of increasing complexities occurred (coefficient of variation (CV) = 16-24%, vs. 12-15% for Bz alone), but remained unchanged considering DCM. Conversely, AUC-based CV in NEO fell (15 to 5% for Bz; 12 to 6% for DCM). Comparable trends were observed considering production of metabolites (AMET), except for NEO's CYP2E1-mediated metabolites of Bz, where an increased CV was observed (20 to 71%). For "high" exposure scenarios, Cmax-based variability of Bz and DCM remained unchanged in AD and PW, but decreased in NEO (CV= 11-16% to 2-6%) and TODD (CV= 12-13% to 7-9%). Conversely, AMET-based variability for both substrates rose in every subpopulation. This study analyzed for the first time the impact of multiple exposures on interindividual variability in toxicokinetics. Evidence indicates that this impact depends upon chemical concentrations and biochemical properties, as well as the subpopulation and internal dose metrics considered.
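The coefficient-of-variation metric used throughout the abstract above can be illustrated with a toy Monte Carlo run. The model below is a deliberately minimal stand-in, a one-compartment AUC = dose/clearance relation with lognormal between-subject clearance, not the paper's physiologically based pharmacokinetic models, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy internal-dose metric: AUC = dose / clearance, with lognormal
# between-subject variability in clearance (all values illustrative).
dose = 1.0
clearance = rng.lognormal(mean=np.log(0.5), sigma=0.15, size=10_000)
auc = dose / clearance

cv = float(auc.std() / auc.mean())           # coefficient of variation, as in the paper
```

A sigma of 0.15 on the log scale yields a CV near 15%, in the same range as the single-chemical CVs reported above; chemical interactions would enter such a model through concentration-dependent clearance terms.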
High-efficiency Gaussian key reconciliation in continuous variable quantum key distribution
NASA Astrophysics Data System (ADS)
Bai, ZengLiang; Wang, XuYang; Yang, ShenShen; Li, YongMin
2016-01-01
Efficient reconciliation is a crucial step in continuous variable quantum key distribution. The progressive-edge-growth (PEG) algorithm is an efficient method to construct relatively short block length low-density parity-check (LDPC) codes. The quasi-cyclic construction method can extend short block length codes and further eliminate the shortest cycle. In this paper, by combining the PEG algorithm and quasi-cyclic construction method, we design long block length irregular LDPC codes with high error-correcting capacity. Based on these LDPC codes, we achieve high-efficiency Gaussian key reconciliation with slice reconciliation based on multilevel coding/multistage decoding with an efficiency of 93.7%.
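For context on efficiency figures like the one quoted above: reconciliation efficiency is conventionally defined as β = R/C, the ratio of the rate actually extracted to the Shannon capacity of the Gaussian channel. The rate and SNR in this sketch are illustrative values chosen to land near the quoted range, not parameters taken from the paper.

```python
import math

def efficiency(rate_bits_per_symbol, snr):
    """beta = R / C, with C = 0.5 * log2(1 + SNR) the Gaussian channel capacity."""
    capacity = 0.5 * math.log2(1 + snr)
    return rate_bits_per_symbol / capacity

# Illustrative: an LDPC scheme extracting 0.468 bit/symbol at SNR = 1
beta = efficiency(0.468, 1.0)
```

An efficiency approaching 1 means the code operates close to the Shannon limit, which is what long, carefully constructed irregular LDPC codes make possible.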
Spatial drought reconstructions for central High Asia based on tree rings
NASA Astrophysics Data System (ADS)
Fang, Keyan; Davi, Nicole; Gou, Xiaohua; Chen, Fahu; Cook, Edward; Li, Jinbao; D'Arrigo, Rosanne
2010-11-01
Spatial reconstructions of drought for central High Asia based on a tree-ring network are presented. Drought patterns for central High Asia are classified into western and eastern modes of variability. Tree-ring based reconstructions of the Palmer drought severity index (PDSI) are presented for both the western central High Asia drought mode (1587-2005), and for the eastern central High Asia mode (1660-2005). Both reconstructions, generated using a principal component regression method, show an increased variability in recent decades. The wettest epoch for both reconstructions occurred from the 1940s to the 1950s. The most extreme reconstructed drought for western central High Asia was from the 1640s to the 1650s, coinciding with the collapse of the Chinese Ming Dynasty. The eastern central High Asia reconstruction has shown a distinct tendency towards drier conditions since the 1980s. Our spatial reconstructions agree well with previous reconstructions that fall within each mode, while there is no significant correlation between the two spatial reconstructions.
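The principal component regression method named above can be sketched compactly: project the predictor network onto its leading PCs, then regress the target on the PC scores. The synthetic "PDSI" series and eight-chronology network below are invented stand-ins, not the study's tree-ring data.

```python
import numpy as np

rng = np.random.default_rng(2)
years = 100

# Shared drought signal recorded noisily by both the target and the proxies.
signal = rng.normal(size=years)
pdsi = signal + 0.3 * rng.normal(size=years)               # "instrumental" target
chronologies = signal[:, None] + 0.5 * rng.normal(size=(years, 8))

# Principal component regression: leading PC scores -> ordinary least squares.
Xc = chronologies - chronologies.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                                     # keep the first 2 PCs

G = np.column_stack([np.ones(years), scores])
coef, *_ = np.linalg.lstsq(G, pdsi, rcond=None)
reconstruction = G @ coef

r = float(np.corrcoef(reconstruction, pdsi)[0, 1])
```

In practice the regression is calibrated on the instrumental overlap period and then extended back over the full proxy record; here the shared-signal setup simply shows why the leading PCs carry the recoverable drought variance.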
Effects of Group and Situational Factors on Pre-Adolescent Children's Attitudes to School Bullying
ERIC Educational Resources Information Center
Nesdale, Drew; Scarlett, Michael
2004-01-01
This study examined the effect on pre-adolescent children's attitudes to bullying of one group-based variable (group status) and two situational variables (rule legitimacy and rule consistency). Pre-adolescent boys (n = 229) read a story about a group of boys who had high or low (handball) status. The legitimacy (high versus low) of the rules…
ERIC Educational Resources Information Center
Gambro, John S.; Switzky, Harvey N.
The objectives of this study are to assess the current environmental knowledge base in a national probability sample of American high school students, and examine the distribution of environmental knowledge across several variables which have been found to be related to environmental knowledge in previous research (e.g. education and gender).…
ERIC Educational Resources Information Center
O'Keeffe, Breda V.; Bundock, Kaitlin; Kladis, Kristin L.; Yan, Rui; Nelson, Kat
2017-01-01
Previous research on curriculum-based measurement of oral reading fluency (CBM ORF) found high levels of variability around the estimates of students' fluency; however, little research has studied the issue of variability specifically with well-designed passage sets and a sample of students who scored below benchmark for the purpose of progress…
Urbinello, Damiano; Huss, Anke; Beekhuizen, Johan; Vermeulen, Roel; Röösli, Martin
2014-01-15
Radiofrequency electromagnetic fields (RF-EMF) are highly variable and differ considerably within as well as between areas. Exposure assessment studies characterizing spatial and temporal variation are limited so far. Our objective was to evaluate sources of data variability and the repeatability of daily measurements using portable exposure meters (PEMs). Data were collected on 12 days between November 2010 and January 2011 with PEMs in four different types of urban areas in the cities of Basel (BSL) and Amsterdam (AMS). Exposure from mobile phone base stations ranged from 0.30 to 0.53 V/m in downtown and business areas and from 0.09 to 0.41 V/m in residential areas. Analysis of variance (ANOVA) demonstrated that measurements from various days were highly reproducible (measurement duration of approximately 30 min), with only 0.6% of the variance of all measurements from mobile phone base station radiation being explained by the measurement day and only 0.2% by the measurement time (morning, noon, afternoon), whereas type of area (30%) and city (50%) explained most of the data variability. We conclude that mobile monitoring of exposure from mobile phone base station radiation with PEMs is useful due to the high repeatability of mobile phone base station exposure levels, despite the high spatial variation. © 2013.
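The variance-decomposition idea behind the ANOVA percentages above can be shown with a one-way eta-squared computation, the share of total variance explained by a grouping factor. The V/m readings below are invented illustrative values for two "cities", not measurements from the study.

```python
import numpy as np

# Illustrative V/m readings for two hypothetical city groups ("BSL", "AMS").
basel = np.array([0.31, 0.35, 0.30, 0.33])
amsterdam = np.array([0.48, 0.52, 0.50, 0.51])
allv = np.concatenate([basel, amsterdam])

# One-way decomposition: between-group sum of squares over total sum of squares.
ss_total = np.sum((allv - allv.mean()) ** 2)
ss_between = sum(len(g) * (g.mean() - allv.mean()) ** 2
                 for g in (basel, amsterdam))
eta_sq = float(ss_between / ss_total)
```

With well-separated group means almost all variance is "explained by city", mirroring how the study attributes 50% of variance to city and 30% to area type while day and time of day explain under 1%.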
Chuang, Yung-Chung Matt; Shiu, Yi-Shiang
2016-01-01
Tea is an important but vulnerable economic crop in East Asia, highly impacted by climate change. This study attempts to interpret tea land use/land cover (LULC) using very high resolution WorldView-2 imagery of central Taiwan with both pixel and object-based approaches. A total of 80 variables derived from each WorldView-2 band with pan-sharpening, standardization, principal components and gray level co-occurrence matrix (GLCM) texture indices transformation, were set as the input variables. For pixel-based image analysis (PBIA), 34 variables were selected, including seven principal components, 21 GLCM texture indices and six original WorldView-2 bands. Results showed that support vector machine (SVM) had the highest tea crop classification accuracy (OA = 84.70% and KIA = 0.690), followed by random forest (RF), maximum likelihood algorithm (ML), and logistic regression analysis (LR). However, the ML classifier achieved the highest classification accuracy (OA = 96.04% and KIA = 0.887) in object-based image analysis (OBIA) using only six variables. The contribution of this study is to create a new framework for accurately identifying tea crops in a subtropical region with real-time high-resolution WorldView-2 imagery without field survey, which could further aid agriculture land management and a sustainable agricultural product supply. PMID:27128915
Streamlined approaches that use in vitro experimental data to predict chemical toxicokinetics (TK) are increasingly being used to perform risk-based prioritization based upon dosimetric adjustment of high-throughput screening (HTS) data across thousands of chemicals. However, ass...
VARIABILITY AND CHARACTER ASSOCIATION IN ROSE COLOURED LEADWORT (PLUMBAGO ROSEA Linn.)
Kurian, Alice; Anitha, C.A.; Nybe, E.V.
2001-01-01
Forty-five Plumbago rosea accessions collected from different parts of Kerala state were evaluated for variability in morphological and yield-related characters and plumbagin content. Highly significant variation was evident for all the characters studied except leaf size, indicating wide variability in the accessions. Accessions PR 25 and PR 31 appear to be promising with respect to root yield and high plumbagin content. Character association revealed significant and positive correlation of all the characters except leaf size with yield. Hence, selection of high-yielding types could easily be done based on visual characters expressing more vegetative growth but with reduced leaf size. PMID:22557037
Rivers and Floodplains as Key Components of Global Terrestrial Water Storage Variability
NASA Astrophysics Data System (ADS)
Getirana, Augusto; Kumar, Sujay; Girotto, Manuela; Rodell, Matthew
2017-10-01
This study quantifies the contribution of rivers and floodplains to terrestrial water storage (TWS) variability. We use state-of-the-art models to simulate land surface processes and river dynamics and to separate TWS into its main components. Based on a proposed impact index, we show that surface water storage (SWS) contributes 8% of TWS variability globally, but that contribution differs widely among climate zones. Changes in SWS are a principal component of TWS variability in the tropics, where major rivers flow over arid regions and at high latitudes. SWS accounts for 22-27% of TWS variability in both the Amazon and Nile Basins. Changes in SWS are negligible in the Western U.S., Northern Africa, Middle East, and central Asia. Based on comparisons with Gravity Recovery and Climate Experiment-based TWS, we conclude that accounting for SWS improves simulated TWS in most of South America, Africa, and Southern Asia, confirming that SWS is a key component of TWS variability.
Prediction of Incident Diabetes in the Jackson Heart Study Using High-Dimensional Machine Learning
Casanova, Ramon; Saldana, Santiago; Simpson, Sean L.; Lacy, Mary E.; Subauste, Angela R.; Blackshear, Chad; Wagenknecht, Lynne; Bertoni, Alain G.
2016-01-01
Statistical models to predict incident diabetes are often based on limited variables. Here we pursued two main goals: 1) investigate the relative performance of a machine learning method such as Random Forests (RF) for detecting incident diabetes in a high-dimensional setting defined by a large set of observational data, and 2) uncover potential predictors of diabetes. The Jackson Heart Study collected data at baseline and in two follow-up visits from 5,301 African Americans. We excluded those with baseline diabetes and no follow-up, leaving 3,633 individuals for analyses. Over a mean 8-year follow-up, 584 participants developed diabetes. The full RF model evaluated 93 variables including demographic, anthropometric, blood biomarker, medical history, and echocardiogram data. We also used RF metrics of variable importance to rank variables according to their contribution to diabetes prediction. We implemented other models based on logistic regression and RF where features were preselected. The RF full model performance was similar (AUC = 0.82) to those more parsimonious models. The top-ranked variables according to RF included hemoglobin A1C, fasting plasma glucose, waist circumference, adiponectin, c-reactive protein, triglycerides, leptin, left ventricular mass, high-density lipoprotein cholesterol, and aldosterone. This work shows the potential of RF for incident diabetes prediction while dealing with high-dimensional data. PMID:27727289
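The variable-ranking idea above can be illustrated with permutation importance: score a fitted model, shuffle one feature at a time, and rank features by the resulting drop in AUC. To keep the sketch self-contained, a gradient-descent logistic model stands in for the paper's Random Forest, and the cohort is synthetic (only features 0 and 1 actually drive the outcome); the ranking mechanics are the same.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 5000, 5

# Synthetic cohort: only features 0 and 1 drive the binary outcome.
X = rng.normal(size=(n, p))
logit = 1.5 * X[:, 0] + 0.8 * X[:, 1] - 1.0
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Plain gradient-descent logistic fit (stand-in for the Random Forest).
w = np.zeros(p)
b = 0.0
for _ in range(300):
    z = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (z - y) / n
    b -= 0.1 * np.mean(z - y)

def auc(scores, labels):
    """Rank-based AUC (Mann-Whitney statistic)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n1 = labels.sum()
    return (ranks[labels == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * (len(labels) - n1))

base = auc(X @ w + b, y)
# Permutation importance: AUC drop when one column is shuffled.
importance = []
for j in range(p):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(base - auc(Xp @ w + b, y))

ranking = np.argsort(importance)[::-1]
```

The truly predictive features rise to the top of the ranking while noise features show near-zero importance, which is how variables like hemoglobin A1C and fasting glucose surface in the study's RF rankings.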
NASA Astrophysics Data System (ADS)
Taha, Zahari; Muazu Musa, Rabiu; Majeed, Anwar P. P. Abdul; Razali Abdullah, Mohamad; Muaz Alim, Muhammad; Nasir, Ahmad Fakhri Ab
2018-04-01
The present study aims at classifying and predicting high- and low-potential archers from a collection of psychological coping skills variables trained on different k-Nearest Neighbour (k-NN) kernels. Fifty youth archers with a mean age and standard deviation of 17.0 ± 0.56 years, gathered from various archery programmes, completed a one-end shooting score test. A psychological coping skills inventory, which evaluates the archers' level of related coping skills, was filled out by the archers prior to their shooting tests. k-means cluster analysis was applied to cluster the archers based on their scores on the variables assessed. k-NN models, i.e. fine, medium, coarse, cosine, cubic and weighted kernel functions, were trained on the psychological variables. The k-means analysis clustered the archers into high psychologically prepared archers (HPPA) and low psychologically prepared archers (LPPA), respectively. It was demonstrated that the cosine k-NN model exhibited good accuracy and precision throughout the exercise, with an accuracy of 94% and a considerably lower error rate for the prediction of the HPPA and the LPPA as compared to the rest of the models. The findings of this investigation can be valuable to coaches and sports managers in recognising high-potential athletes from the selected psychological coping skills variables examined, which would consequently save time and energy during talent identification and development programmes.
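The k-NN classification step can be sketched in plain numpy, including the distinction between uniform and distance-weighted voting (the "weighted" kernel above). The two-group training data and four-feature layout are invented stand-ins for the archers' coping-skill scores.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-in for coping-skill scores of two latent groups ("HPPA", "LPPA").
train_hi = rng.normal(1.0, 0.5, size=(25, 4))
train_lo = rng.normal(-1.0, 0.5, size=(25, 4))
X = np.vstack([train_hi, train_lo])
y = np.array([1] * 25 + [0] * 25)

def knn_predict(X, y, query, k=5, weighted=False):
    """Majority vote among the k nearest training points."""
    d = np.linalg.norm(X - query, axis=1)
    idx = np.argsort(d)[:k]
    if weighted:                          # distance-weighted votes: 1/d
        wts = 1.0 / (d[idx] + 1e-9)
        return int(np.sum(wts * y[idx]) / np.sum(wts) > 0.5)
    return int(y[idx].mean() > 0.5)       # uniform votes

test_hi = rng.normal(1.0, 0.5, size=(10, 4))
test_lo = rng.normal(-1.0, 0.5, size=(10, 4))
queries = np.vstack([test_hi, test_lo])
truth = np.array([1] * 10 + [0] * 10)
preds = np.array([knn_predict(X, y, q, weighted=True) for q in queries])
acc = float((preds == truth).mean())
```

Swapping the distance metric or the vote weighting gives the different "kernels" compared in the study; on well-separated toy data all variants classify nearly perfectly, so differences only emerge near the class boundary.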
Variability in Global Top-of-Atmosphere Shortwave Radiation Between 2000 and 2005
NASA Technical Reports Server (NTRS)
Loeb, Norman G.; Wielicki, Bruce A.; Rose, Fred G.; Doelling, David R.
2007-01-01
Measurements from various instruments and analysis techniques are used to directly compare changes in Earth-atmosphere shortwave (SW) top-of-atmosphere (TOA) radiation between 2000 and 2005. Included in the comparison are estimates of TOA reflectance variability from published ground-based Earthshine observations and from new satellite-based CERES, MODIS and ISCCP results. The ground-based Earthshine data show an order-of-magnitude more variability in annual mean SW TOA flux than either CERES or ISCCP, while ISCCP and CERES SW TOA flux variability is consistent to 40%. Most of the variability in CERES TOA flux is shown to be dominated by variations in global cloud fraction, as observed using coincident CERES and MODIS data. Idealized Earthshine simulations of TOA SW radiation variability for a lunar-based observer show far less variability than the ground-based Earthshine observations, but are still a factor of 4-5 times more variable than global CERES SW TOA flux results. Furthermore, while CERES global albedos exhibit a well-defined seasonal cycle each year, the seasonal cycle in the lunar Earthshine reflectance simulations is highly variable and out-of-phase from one year to the next. Radiative transfer model (RTM) approaches that use imager cloud and aerosol retrievals reproduce most of the change in SW TOA radiation observed in broadband CERES data. However, assumptions used to represent the spectral properties of the atmosphere, clouds, aerosols and surface in the RTM calculations can introduce significant uncertainties in annual mean changes in regional and global SW TOA flux.
The Effect of Visual Variability on the Learning of Academic Concepts.
Bourgoyne, Ashley; Alt, Mary
2017-06-10
The purpose of this study was to identify effects of variability of visual input on development of conceptual representations of academic concepts for college-age students with normal language (NL) and those with language-learning disabilities (LLD). Students with NL (n = 11) and LLD (n = 11) participated in a computer-based training for introductory biology course concepts. Participants were trained on half the concepts under a low-variability condition and half under a high-variability condition. Participants completed a posttest in which they were asked to identify and rate the accuracy of novel and trained visual representations of the concepts. We performed separate repeated measures analyses of variance to examine the accuracy of identification and ratings. Participants were equally accurate on trained and novel items in the high-variability condition, but were less accurate on novel items only in the low-variability condition. The LLD group showed the same pattern as the NL group; they were just less accurate. Results indicated that high-variability visual input may facilitate the acquisition of academic concepts in college students with NL and LLD. High-variability visual input may be especially beneficial for generalization to novel representations of concepts. Implicit learning methods may be harnessed by college courses to provide students with basic conceptual knowledge when they are entering courses or beginning new units.
Ding, Xuan; He, Minxia; Kulkarni, Rajesh; Patel, Nita; Zhang, Xiaoyu
2013-08-01
Identifying the source of inter- and/or intrasubject variability in pharmacokinetics (PK) provides fundamental information in understanding the pharmacokinetics-pharmacodynamics relationship of a drug and project its efficacy and safety in clinical populations. This identification process can be challenging given that a large number of potential causes could lead to PK variability. Here we present an integrated approach of physiologically based absorption modeling to investigate the root cause of unexpectedly high PK variability of a Phase I clinical trial drug. LY2196044 exhibited high intersubject variability in the absorption phase of plasma concentration-time profiles in humans. This could not be explained by in vitro measurements of drug properties and excellent bioavailability with low variability observed in preclinical species. GastroPlus™ modeling suggested that the compound's optimal solubility and permeability characteristics would enable rapid and complete absorption in preclinical species and in humans. However, simulations of human plasma concentration-time profiles indicated that despite sufficient solubility and rapid dissolution of LY2196044 in humans, permeability and/or transit in the gastrointestinal (GI) tract may have been negatively affected. It was concluded that clinical PK variability was potentially due to the drug's antagonism on opioid receptors that affected its transit and absorption in the GI tract. Copyright © 2013 Wiley Periodicals, Inc.
Nathan, Brian J; Golston, Levi M; O'Brien, Anthony S; Ross, Kevin; Harrison, William A; Tao, Lei; Lary, David J; Johnson, Derek R; Covington, April N; Clark, Nigel N; Zondlo, Mark A
2015-07-07
A model aircraft equipped with a custom laser-based, open-path methane sensor was deployed around a natural gas compressor station to quantify the methane leak rate and its variability at a compressor station in the Barnett Shale. The open-path, laser-based sensor provides fast (10 Hz) and precise (0.1 ppmv) measurements of methane in a compact package while the remote control aircraft provides nimble and safe operation around a local source. Emission rates were measured from 22 flights over a one-week period. Mean emission rates of 14 ± 8 g CH4 s(-1) (7.4 ± 4.2 g CH4 s(-1) median) from the station were observed or approximately 0.02% of the station throughput. Significant variability in emission rates (0.3-73 g CH4 s(-1) range) was observed on time scales of hours to days, and plumes showed high spatial variability in the horizontal and vertical dimensions. Given the high spatiotemporal variability of emissions, individual measurements taken over short durations and from ground-based platforms should be used with caution when examining compressor station emissions. More generally, our results demonstrate the unique advantages and challenges of platforms like small unmanned aerial vehicles for quantifying local emission sources to the atmosphere.
Springvloet, Linda; Lechner, Lilian; Candel, Math J J M; de Vries, Hein; Oenema, Anke
2016-03-01
This study explored whether the determinants that were targeted in two versions of a Web-based computer-tailored nutrition education intervention mediated the effects on fruit, high-energy snack, and saturated fat intake among adults who did not comply with dietary guidelines. An RCT was conducted with a basic (tailored intervention targeting individual cognitions and self-regulation), plus (additionally targeting environmental-level factors), and control group (generic nutrition information). Participants were recruited from the general Dutch adult population and randomly assigned to one of the study groups. Online self-reported questionnaires assessed dietary intake and potential mediating variables (behavior-specific cognitions, action- and coping planning, environmental-level factors) at baseline and one (T1) and four (T2) months post-intervention (i.e. four and seven months after baseline). The joint-significance test was used to establish mediating variables at different time points (T1-mediating variables - T2-intake; T1-mediating variables - T1-intake; T2-mediating variables - T2-intake). Educational differences were examined by testing interaction terms. The effect of the plus version on fruit intake was mediated (T2-T2) by intention and fruit availability at home and, for high-educated participants, also by attitude. Among low/moderate-educated participants, high-energy snack availability at home mediated (T1-T1) the effect of the basic version on high-energy snack intake. Subjective norm mediated (T1-T1) the effect of the basic version on fat intake among high-educated participants. Only some of the targeted determinants mediated the effects of both intervention versions on fruit, high-energy snack, and saturated fat intake.
A possible reason for not finding a more pronounced pattern of mediating variables is that the educational content was tailored to individual characteristics and that participants only received feedback for relevant and not for all assessed mediating variables. Netherlands Trial Registry NTR3396. Copyright © 2015. Published by Elsevier Ltd.
High-efficiency cell concepts on low-cost silicon sheets
NASA Technical Reports Server (NTRS)
Bell, R. O.; Ravi, K. V.
1985-01-01
The limitations of sheet-growth material in terms of defect structure and minority carrier lifetime are discussed, and the effects of various defects on performance are estimated. Given these limitations, designs for a sheet-growth cell that makes the best of the material characteristics are proposed. Achievement of optimum synergy between base material quality and device processing variables is also proposed, since a strong coupling exists among material quality, crystal-growth variables, and device-processing variables. Two objectives are outlined: (1) optimization of this coupling for maximum performance at minimal cost; and (2) decoupling of materials from processing by improvement in base material quality to make it less sensitive to processing variables.
NASA Astrophysics Data System (ADS)
Benhalouche, Fatima Zohra; Karoui, Moussa Sofiane; Deville, Yannick; Ouamri, Abdelaziz
2017-04-01
This paper proposes three multisharpening approaches to enhance the spatial resolution of urban hyperspectral remote sensing images. These approaches, related to linear-quadratic spectral unmixing techniques, use a linear-quadratic nonnegative matrix factorization (NMF) multiplicative algorithm. These methods begin by unmixing the observable high-spectral/low-spatial resolution hyperspectral and high-spatial/low-spectral resolution multispectral images. The obtained high-spectral/high-spatial resolution features are then recombined, according to the linear-quadratic mixing model, to obtain an unobservable multisharpened high-spectral/high-spatial resolution hyperspectral image. In the first designed approach, hyperspectral and multispectral variables are independently optimized, once they have been coherently initialized. These variables are alternately updated in the second designed approach. In the third approach, the considered hyperspectral and multispectral variables are jointly updated. Experiments, using synthetic and real data, are conducted to assess the efficiency, in spatial and spectral domains, of the designed approaches and of linear NMF-based approaches from the literature. Experimental results show that the designed methods globally yield very satisfactory spectral and spatial fidelities for the multisharpened hyperspectral data. They also prove that these methods significantly outperform the used literature approaches.
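The NMF machinery underlying the approaches above can be illustrated with the classic multiplicative updates for linear NMF. This is a deliberately reduced sketch on a random nonnegative matrix: the paper's method extends these updates to a linear-quadratic mixing model and to coupled hyperspectral/multispectral factors, neither of which is reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)

# Nonnegative "data" matrix and a rank-4 factorization V ~ W @ H.
V = rng.random((30, 20)) + 0.1
r = 4
W = rng.random((30, r))
H = rng.random((r, 20))

err0 = float(np.sum((V - W @ H) ** 2))
# Lee-Seung multiplicative updates: nonnegativity is preserved automatically
# because each factor is multiplied by a ratio of nonnegative matrices.
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
err1 = float(np.sum((V - W @ H) ** 2))
```

The Frobenius reconstruction error is non-increasing under these updates, which is the property the multisharpening algorithms build on when alternating or jointly updating the hyperspectral and multispectral variables.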
Breast density estimation from high spectral and spatial resolution MRI
Li, Hui; Weiss, William A.; Medved, Milica; Abe, Hiroyuki; Newstead, Gillian M.; Karczmar, Gregory S.; Giger, Maryellen L.
2016-01-01
A three-dimensional breast density estimation method is presented for high spectral and spatial resolution (HiSS) MR imaging. Twenty-two patients were recruited (under an Institutional Review Board-approved, Health Insurance Portability and Accountability Act-compliant protocol) for high-risk breast cancer screening. Each patient received standard-of-care clinical digital x-ray mammograms and MR scans, as well as HiSS scans. The algorithm for breast density estimation includes breast mask generation, breast skin removal, and breast percentage density calculation. The inter- and intra-user variabilities of the HiSS-based density estimation were determined using correlation analysis and limits of agreement. Correlation analysis was also performed between the HiSS-based density estimation and radiologists’ breast imaging-reporting and data system (BI-RADS) density ratings. A correlation coefficient of 0.91 (p<0.0001) was obtained between left and right breast density estimations. An interclass correlation coefficient of 0.99 (p<0.0001) indicated high reliability for the inter-user variability of the HiSS-based breast density estimations. A moderate correlation coefficient of 0.55 (p=0.0076) was observed between HiSS-based breast density estimations and radiologists’ BI-RADS. In summary, an objective density estimation method using HiSS spectral data from breast MRI was developed. The high reproducibility with low inter- and intra-user variabilities shown in this preliminary study suggests that such a HiSS-based density metric may be potentially beneficial in programs requiring breast density, such as breast cancer risk assessment and monitoring effects of therapy. PMID:28042590
ERIC Educational Resources Information Center
Egilmez, Hatice Onuray; Engur, Doruk
2017-01-01
In this study, the self-efficacy and motivation of Zeki Muren Fine Arts High School piano students were examined based on different variables as well as the reasons for their failure. The data on their self-efficacy were obtained through self-efficacy scale of piano performance and the data on their motivation were obtained through motivation…
Sloot, Rosa; Borgdorff, Martien W.; de Beer, Jessica L.; van Ingen, Jakko; Supply, Philip
2013-01-01
The population structure of 3,776 Mycobacterium tuberculosis isolates was determined using variable-number tandem-repeat (VNTR) typing. The degree of clonality was so high that a more relaxed definition of clustering could not be applied. Among recent immigrants with non-Euro-American isolates, transmission is overestimated if based on identical VNTR patterns. PMID:23658260
Variables that Correlate with Faculty Use of Research-Based Instructional Strategies
NASA Astrophysics Data System (ADS)
Henderson, Charles; Dancy, Melissa H.; Niewiadomska-Bugaj, Magdalena
2010-10-01
During the Fall of 2008 a web survey, designed to collect information about pedagogical knowledge and practices, was completed by a representative sample of 722 physics faculty across the United States (a 50.3% response rate). This paper examines how 20 predictor variables correlate with faculty knowledge about and use of research-based instructional strategies (RBIS). Profiles were developed for each of four faculty levels of knowledge about and use of RBIS. Logistic regression analysis was used to identify a subset of the variables that could predict group membership. Five significant predictor variables were identified. High levels of knowledge and use of RBIS were associated with the following characteristics: attendee of the physics and astronomy new faculty workshop, attendee of at least one talk or workshop related to teaching in the last two years, satisfaction with meeting instructional goals, regular reader of one or more journals related to teaching, and being female. High research productivity and large class sizes were not found to be barriers to use of at least some RBIS.
Zhang, Miaomiao; Wells, William M; Golland, Polina
2017-10-01
We present an efficient probabilistic model of anatomical variability in a linear space of initial velocities of diffeomorphic transformations and demonstrate its benefits in clinical studies of brain anatomy. To overcome the computational challenges of high-dimensional deformation-based descriptors, we develop a latent variable model for principal geodesic analysis (PGA) based on a low-dimensional shape descriptor that effectively captures the intrinsic variability in a population. We define a novel shape prior that explicitly represents principal modes as a multivariate complex Gaussian distribution on the initial velocities in a bandlimited space. We demonstrate the performance of our model on a set of 3D brain MRI scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Our model yields a more compact representation of group variation at substantially lower computational cost than state-of-the-art methods such as tangent space PCA (TPCA) and probabilistic principal geodesic analysis (PPGA), which operate in the high-dimensional image space. Copyright © 2017 Elsevier B.V. All rights reserved.
Online prediction of organoleptic data for snack food using color images
NASA Astrophysics Data System (ADS)
Yu, Honglu; MacGregor, John F.
2004-11-01
In this paper, a study of the prediction of organoleptic properties of snack food in real time using RGB color images is presented. The so-called organoleptic properties, which are properties based on texture, taste and sight, are generally measured either by human sensory response or by mechanical devices. Neither of these two methods can be used for on-line feedback control in high-speed production. In this situation, a vision-based soft sensor is very attractive. By taking images of the products, the samples remain untouched and the product properties can be predicted in real time from image data. Four types of organoleptic properties are considered in this study: blister level, toast points, taste and peak break force. Wavelet transforms are applied to the color images, and the averaged absolute value of each filtered image is used as a texture feature variable. In order to handle the high correlation among the feature variables, Partial Least Squares (PLS) is used to regress the extracted feature variables against the four response variables.
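The texture-feature step (a wavelet transform followed by the averaged absolute value of each filtered image) can be sketched with a one-level Haar split; the subsequent PLS regression against the four responses is omitted, and the wavelet choice and toy image are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def haar_texture_features(img):
    """One-level 2-D Haar split of an even-sized grayscale image; returns the mean
    absolute value of each subband (LL, LH, HL, HH) as texture features."""
    a = img.astype(float)
    # pairwise averages/differences along columns, then along rows
    lo_c = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi_c = (a[:, 0::2] - a[:, 1::2]) / 2.0
    ll = (lo_c[0::2] + lo_c[1::2]) / 2.0
    lh = (lo_c[0::2] - lo_c[1::2]) / 2.0
    hl = (hi_c[0::2] + hi_c[1::2]) / 2.0
    hh = (hi_c[0::2] - hi_c[1::2]) / 2.0
    return [float(np.mean(np.abs(s))) for s in (ll, lh, hl, hh)]

# a constant image has no texture: all detail subbands vanish, only LL survives
feats = haar_texture_features(np.full((8, 8), 3.0))  # [3.0, 0.0, 0.0, 0.0]
```

Each color channel would contribute its own set of subband features; stacking them per image yields the correlated feature vectors that PLS is then used to handle.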
Fault Diagnosis for Rolling Bearings under Variable Conditions Based on Visual Cognition
Cheng, Yujie; Zhou, Bo; Lu, Chen; Yang, Chao
2017-01-01
Fault diagnosis for rolling bearings has attracted increasing attention in recent years. However, few studies have focused on fault diagnosis for rolling bearings under variable conditions. This paper introduces a fault diagnosis method for rolling bearings under variable conditions based on visual cognition. The proposed method includes the following steps. First, the vibration signal data are transformed into a recurrence plot (RP), which is a two-dimensional image. Second, inspired by the visual invariance characteristic of the human visual system (HVS), we utilize speeded-up robust features (SURF) to extract fault features from the two-dimensional RP and generate a 64-dimensional feature vector, which is invariant to image translation, rotation, scaling variation, etc. Third, based on the manifold perception characteristic of the HVS, isometric mapping, a manifold learning method that can reflect the intrinsic manifold embedded in the high-dimensional space, is employed to obtain a low-dimensional feature vector. Finally, a classical classification method, the support vector machine, is utilized to realize fault diagnosis. Verification data were collected from the Case Western Reserve University Bearing Data Center, and the experimental results indicate that the proposed fault diagnosis method based on visual cognition is highly effective for rolling bearings under variable conditions, thus providing a promising approach from the cognitive computing field. PMID:28772943
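The first step, turning a vibration signal into a recurrence plot, has a compact definition: a binary matrix marking which pairs of samples fall within a distance threshold. A minimal sketch (the signal and the threshold eps are invented; real RPs are usually built on delay-embedded state vectors rather than raw samples):

```python
import numpy as np

def recurrence_plot(signal, eps):
    """Binary recurrence plot: R[i, j] = 1 when samples i and j are closer than eps."""
    x = np.asarray(signal, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])  # pairwise distances
    return (dist < eps).astype(np.uint8)

# recurring values produce off-diagonal structure
rp = recurrence_plot([0.0, 1.0, 0.05, 1.0], eps=0.1)
```

The resulting two-dimensional image is what SURF descriptors are then extracted from.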
Variability in large-scale wind power generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kiviluoma, Juha; Holttinen, Hannele; Weir, David
2015-10-25
The paper demonstrates the characteristics of wind power variability and net load variability in multiple power systems based on real data from multiple years. Demonstrated characteristics include the probability distribution for different ramp durations, seasonal and diurnal variability, and low net load events. The comparison shows regions with low variability (Sweden, Spain and Germany), medium variability (Portugal, Ireland, Finland and Denmark) and regions with higher variability (Quebec, Bonneville Power Administration and Electric Reliability Council of Texas in North America; Gansu, Jilin and Liaoning in China; and Norway and offshore wind power in Denmark). For regions with low variability, the maximum 1 h wind ramps are below 10% of nominal capacity, and for regions with high variability, they may be close to 30%. Wind power variability is mainly explained by the extent of geographical spread, but a higher capacity factor also causes higher variability. It was also shown that wind power ramps are autocorrelated and dependent on the operating output level. When wind power was concentrated in a smaller area, there were outliers with high changes in wind output, which were not present in large areas with well-dispersed wind power.
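The 1 h ramp statistic quoted above (below 10% of nominal capacity in low-variability regions versus close to 30% in high-variability ones) can be computed directly from an hourly generation series. A sketch with an invented toy series:

```python
def max_hourly_ramp(generation, capacity):
    """Maximum absolute 1 h ramp as a share of nominal capacity:
    the quantity compared across regions in the paper."""
    ramps = [abs(b - a) / capacity for a, b in zip(generation, generation[1:])]
    return max(ramps)

# toy hourly output (MW) of a hypothetical 100 MW fleet
series = [10, 18, 15, 40, 38, 30]
max_ramp = max_hourly_ramp(series, capacity=100)  # 0.25, i.e. 25% of capacity
```

Longer ramp durations are handled the same way by differencing over a wider lag before normalizing.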
Fuzzy parametric uncertainty analysis of linear dynamical systems: A surrogate modeling approach
NASA Astrophysics Data System (ADS)
Chowdhury, R.; Adhikari, S.
2012-10-01
Uncertainty propagation in engineering systems poses significant computational challenges. This paper explores the possibility of using a correlated function expansion based metamodelling approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of High-Dimensional Model Representation (HDMR) is proposed for fuzzy finite element analysis of dynamical systems. The HDMR expansion is a set of quantitative model assessment and analysis tools for capturing high-dimensional input-output system behavior based on a hierarchy of functions of increasing dimensionality. The input variables may be either finite-dimensional (i.e., a vector of parameters chosen from the Euclidean space R^M) or infinite-dimensional, as in the function space C^M[0,1]. The computational effort to determine the expansion functions using the alpha-cut method scales polynomially, rather than exponentially, with the number of variables. This rests on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is integrated with commercial finite element software. Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations.
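The alpha-cut method mentioned above discretizes a fuzzy parameter into nested crisp intervals. A minimal sketch of that one step: the triangular fuzzy number and the square-root response below are invented stand-ins for the paper's fuzzy stiffness parameters and modal response, and the vertex-sampling bound is valid only for a monotone response, which is an assumption.

```python
import math

def alpha_cut(a, m, b, alpha):
    """Alpha-cut interval of a triangular fuzzy number (a, m, b):
    the crisp set of values with membership >= alpha."""
    return (a + (m - a) * alpha, b - (b - m) * alpha)

def propagate(f, cut):
    """Interval bounds of a monotone response f over one alpha-cut
    (vertex sampling; HDMR would replace f by a cheap surrogate)."""
    lo, hi = cut
    vals = (f(lo), f(hi))
    return (min(vals), max(vals))

# fuzzy stiffness k ~ triangular(900, 1000, 1100); frequency-like response sqrt(k)
cut = alpha_cut(900.0, 1000.0, 1100.0, alpha=0.5)   # (950.0, 1050.0)
bounds = propagate(math.sqrt, cut)
```

Sweeping alpha from 0 to 1 and collecting the output intervals reconstructs the fuzzy membership of the response.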
NASA Astrophysics Data System (ADS)
Taha, Zahari; Muazu Musa, Rabiu; Majeed, A. P. P. Abdul; Razali Abdullah, Mohamad; Aizzat Zakaria, Muhammad; Muaz Alim, Muhammad; Arif Mat Jizat, Jessnor; Fauzi Ibrahim, Mohamad
2018-03-01
The Support Vector Machine (SVM) has been revealed to be a powerful learning algorithm for classification and prediction. However, the use of SVMs for prediction and classification in sport is in its infancy. The present study classified and predicted high and low potential archers from a collection of psychological coping skills variables trained on different SVMs. Fifty youth archers with a mean age and standard deviation of 17.0 ± .056, gathered from various archery programmes, completed a one-end shooting score test. A psychological coping skills inventory, which evaluates the archers' level of related coping skills, was filled out by the archers prior to their shooting tests. k-means cluster analysis was applied to cluster the archers based on their scores on the variables assessed. SVM models, i.e. linear and fine radial basis function (RBF) kernel functions, were trained on the psychological variables. The k-means analysis clustered the archers into high psychologically prepared archers (HPPA) and low psychologically prepared archers (LPPA), respectively. It was demonstrated that the linear SVM exhibited good accuracy and precision throughout the exercise, with an accuracy of 92% and a considerably lower error rate for the prediction of the HPPA and the LPPA as compared to the fine RBF SVM. The findings of this investigation can be valuable to coaches and sports managers in recognising high potential athletes from the selected psychological coping skills variables examined, which would consequently save time and energy during talent identification and development programmes.
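The clustering step that splits archers into HPPA and LPPA groups can be sketched with a minimal 2-cluster k-means. The scores are invented, and the deterministic initialization at the data extremes is a simplification chosen to keep the sketch reproducible, not the study's procedure.

```python
import numpy as np

def kmeans2(X, n_iter=20):
    """Minimal 2-cluster k-means: alternate nearest-center assignment
    and center recomputation."""
    centers = np.array([X.min(axis=0), X.max(axis=0)])  # deterministic init
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in (0, 1):
            if (labels == k).any():
                centers[k] = X[labels == k].mean(axis=0)
    return labels, centers

# toy coping-skill scores: a low-scoring and a high-scoring group
X = np.array([[1.0, 1.2], [0.9, 1.0], [1.1, 0.8],
              [5.0, 5.2], [4.8, 5.1], [5.2, 4.9]])
labels, centers = kmeans2(X)
```

The resulting labels would then serve as the class targets on which the linear and RBF SVMs are trained.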
NASA Astrophysics Data System (ADS)
Lu, Lin; Chang, Yunlong; Li, Yingmin; Lu, Ming
2013-05-01
An orthogonal experiment was conducted, and a multivariate nonlinear regression equation was used to model the influence of an external transverse magnetic field and the Ar flow rate on welding quality in high-speed argon tungsten-arc welding (TIG for short) of condenser pipe. The magnetic induction and the Ar flow rate were used as the optimization variables, the tensile strength of the weld was set as the objective function on the basis of genetic algorithm theory, and an optimal design was then carried out. In accordance with the requirements of physical production, the optimization variables were constrained. The genetic algorithm in MATLAB was used for the computation. A comparison between the optimization results and the experimental parameters was made. The results showed that optimal technological parameters can be chosen by means of a genetic algorithm even when there are many optimization variables in high-speed welding, and the optimal welding parameters coincided with the experimental results.
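The optimization step can be illustrated with a minimal real-coded genetic algorithm. Everything below is an invented stand-in: the objective is a hypothetical one-variable surrogate for the fitted tensile-strength regression, and the population size, mutation scale, and bounds are illustrative, not the paper's settings.

```python
import random

def genetic_maximize(f, bounds, pop=40, gens=60, seed=42):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, with the variable clipped to its constraint bounds."""
    random.seed(seed)
    lo, hi = bounds
    clip = lambda x: max(lo, min(hi, x))
    P = [random.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        children = []
        for _ in range(pop):
            a = max(random.sample(P, 3), key=f)   # tournament parent 1
            b = max(random.sample(P, 3), key=f)   # tournament parent 2
            child = 0.5 * (a + b)                 # blend crossover
            child += random.gauss(0.0, 0.05 * (hi - lo))  # mutation
            children.append(clip(child))
        P = children
    return max(P, key=f)

# hypothetical "tensile strength" peaking at magnetic induction x = 3.0
best = genetic_maximize(lambda x: -(x - 3.0) ** 2, bounds=(0.0, 10.0))
```

In the paper's setting the objective would be the multivariate regression equation and the chromosome would carry both the magnetic induction and the Ar flow rate.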
Competency-Based, Time-Variable Education in the Health Professions: Crossroads.
Lucey, Catherine R; Thibault, George E; Ten Cate, Olle
2018-03-01
Health care systems around the world are transforming to align with the needs of 21st-century patients and populations. Transformation must also occur in the educational systems that prepare the health professionals who deliver care, advance discovery, and educate the next generation of physicians in these evolving systems. Competency-based, time-variable education, a comprehensive educational strategy guided by the roles and responsibilities that health professionals must assume to meet the needs of contemporary patients and communities, has the potential to catalyze optimization of educational and health care delivery systems. By designing educational and assessment programs that require learners to meet specific competencies before transitioning between the stages of formal education and into practice, this framework assures the public that every physician is capable of providing high-quality care. By engaging learners as partners in assessment, competency-based, time-variable education prepares graduates for careers as lifelong learners. While the medical education community has embraced the notion of competencies as a guiding framework for educational institutions, the structure and conduct of formal educational programs remain more aligned with a time-based, competency-variable paradigm. The authors outline the rationale behind this recommended shift to a competency-based, time-variable education system. They then introduce the other articles included in this supplement to Academic Medicine, which summarize the history of, theories behind, examples demonstrating, and challenges associated with competency-based, time-variable education in the health professions.
Evaluation of variable selection methods for random forests and omics data sets.
Degenhardt, Frauke; Seifert, Stephan; Szymczak, Silke
2017-10-16
Machine learning methods and in particular random forests are promising approaches for prediction based on high dimensional omics data sets. They provide variable importance measures to rank predictors according to their predictive power. If building a prediction model is the main goal of a study, often a minimal set of variables with good prediction performance is selected. However, if the objective is the identification of involved variables to find active networks and pathways, approaches that aim to select all relevant variables should be preferred. We evaluated several variable selection procedures based on simulated data as well as publicly available experimental methylation and gene expression data. Our comparison included the Boruta algorithm, the Vita method, recurrent relative variable importance, a permutation approach and its parametric variant (Altmann) as well as recursive feature elimination (RFE). In our simulation studies, Boruta was the most powerful approach, followed closely by the Vita method. Both approaches demonstrated similar stability in variable selection, while Vita was the most robust approach under a pure null model without any predictor variables related to the outcome. In the analysis of the different experimental data sets, Vita demonstrated slightly better stability in variable selection and was less computationally intensive than Boruta. In conclusion, we recommend the Boruta and Vita approaches for the analysis of high-dimensional data sets. Vita is considerably faster than Boruta and thus more suitable for large data sets, but only Boruta can also be applied in low-dimensional settings. © The Author 2017. Published by Oxford University Press.
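Several of the compared methods (Boruta, the permutation approach and its Altmann variant) build on the same primitive: permute one predictor and measure the drop in model performance. A minimal sketch of that primitive, using a hypothetical one-rule model as a stand-in for a fitted random forest and invented toy data:

```python
import random

def permutation_importance(model, X, y, col, n_repeats=30, seed=0):
    """Importance of feature `col`: mean drop in accuracy when that column
    is randomly shuffled across samples."""
    rng = random.Random(seed)
    def accuracy(rows):
        return sum(model(r) == t for r, t in zip(rows, y)) / len(y)
    base = accuracy(X)
    drops = []
    for _ in range(n_repeats):
        shuffled = [row[:] for row in X]
        perm = [row[col] for row in shuffled]
        rng.shuffle(perm)                      # break the column-label link
        for row, v in zip(shuffled, perm):
            row[col] = v
        drops.append(base - accuracy(shuffled))
    return sum(drops) / n_repeats

# toy data: feature 0 determines the label, feature 1 is pure noise
X = [[0, 1], [0, 0], [1, 1], [1, 0]] * 10
y = [r[0] for r in X]
model = lambda row: row[0]                     # stand-in for a trained forest
imp0 = permutation_importance(model, X, y, col=0)  # large drop
imp1 = permutation_importance(model, X, y, col=1)  # no drop
```

Boruta extends this idea by comparing real features against permuted "shadow" copies to decide which are relevant.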
Tang, Rongnian; Chen, Xupeng; Li, Chuang
2018-05-01
Near-infrared spectroscopy is an efficient, low-cost technology that has potential as an accurate method for detecting the nitrogen content of natural rubber leaves. The successive projections algorithm (SPA) is a widely used variable selection method for multivariate calibration, which uses projection operations to select a variable subset with minimum multi-collinearity. However, due to fluctuations in the correlation between variables, high collinearity may still exist among non-adjacent variables of the subset obtained by basic SPA. Based on an analysis of the correlation matrix of the spectral data, this paper proposes a correlation-based SPA (CB-SPA) that applies the successive projections algorithm within regions of consistent correlation. The results show that CB-SPA can select variable subsets with more valuable variables and less multi-collinearity. Meanwhile, models established on the CB-SPA subset outperform those on basic SPA subsets in predicting nitrogen content in terms of both cross-validation and external prediction. Moreover, CB-SPA is more efficient: the time cost of its selection procedure is one-twelfth that of basic SPA.
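The projection operation at the heart of basic SPA can be sketched as follows: starting from one wavelength, repeatedly project the remaining columns onto the orthogonal complement of the last selected one and pick the column with the largest residual norm (the least collinear candidate). The tiny spectral matrix is invented, and CB-SPA's restriction to correlation-consistent regions is not shown.

```python
import numpy as np

def spa(X, start, n_select):
    """Basic successive projections algorithm on a (samples x variables) matrix."""
    Xp = np.asarray(X, dtype=float).copy()
    selected = [start]
    for _ in range(n_select - 1):
        v = Xp[:, selected[-1]]
        # project every column onto the orthogonal complement of v
        Xp = Xp - np.outer(v, v @ Xp) / (v @ v)
        norms = np.linalg.norm(Xp, axis=0)
        norms[selected] = -1.0               # never re-select a variable
        selected.append(int(norms.argmax()))
    return selected

# three "wavelengths": column 1 is nearly a copy of column 0, column 2 is orthogonal
X = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.1, 0.0],
              [0.0, 0.0, 1.0]])
picked = spa(X, start=0, n_select=2)  # avoids the collinear column 1
```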
National Longitudinal Study of the High School Class of 1972: Critical Data Base. 22U-884.
ERIC Educational Resources Information Center
Talbert, Robin
The National Longitudinal Study of the High School Class of 1972 (NLS) critical data base contains 151 items (plus background information) from the base year and followup questionnaires; about thirty-seven percent of all items. This set of critical items consists of: (1) basic demographic variables; (2) items necessary for defining activity states…
Identification of weather variables sensitive to dysentery in disease-affected county of China.
Liu, Jianing; Wu, Xiaoxu; Li, Chenlu; Xu, Bing; Hu, Luojia; Chen, Jin; Dai, Shuang
2017-01-01
Climate change mainly refers to long-term change in weather variables, and it has a significant impact on the sustainability and spread of infectious diseases. Among the three leading infectious diseases in China, dysentery is exclusively sensitive to climate change. Previous research on weather variables and dysentery has mainly focused on determining the correlation between dysentery incidence and weather variables. However, the contribution of each variable to dysentery incidence has rarely been clarified. Therefore, we chose a county typical of dysentery epidemics as the study area. Based on data of dysentery incidence, weather variables (monthly mean temperature, precipitation, wind speed, relative humidity, absolute humidity, maximum temperature, and minimum temperature) and lagged analysis, we used principal component analysis (PCA) and classification and regression trees (CART) to examine the relationships between the incidence of dysentery and weather variables. Principal component analysis showed that temperature, precipitation, and humidity played a key role in determining transmission of dysentery. We further selected weather variables including minimum temperature, precipitation, and relative humidity based on the results of the PCA, and used CART to clarify the contributions of these three weather variables to dysentery incidence. We found that when minimum temperature was at a high level, a high incidence of dysentery occurred if relative humidity or precipitation was also at a high level. We compared our results with other studies on dysentery incidence and meteorological factors in areas both in China and abroad, and good agreement was achieved. Yet, some differences remain for three reasons: not all key weather variables were identified, climate conditions differ because of local factors, and human factors also affect dysentery incidence.
This study hopes to shed light on potential early warnings for dysentery transmission as climate change occurs, and provide a theoretical basis for the control and prevention of dysentery. Copyright © 2016 Elsevier B.V. All rights reserved.
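At its core, each CART step in such an analysis finds the split on one weather variable that best separates incidence levels. A one-split sketch with invented toy numbers (real CART grows a full tree and prunes it):

```python
def best_split(x, y):
    """One CART regression step: the threshold on a single predictor that
    minimizes the summed squared error around the two resulting means."""
    def sse(vals):
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)
    best_score, best_t = float("inf"), None
    for t in sorted(set(x)):
        left = [yi for xi, yi in zip(x, y) if xi <= t]
        right = [yi for xi, yi in zip(x, y) if xi > t]
        score = sse(left) + sse(right)
        if score < best_score:
            best_score, best_t = score, t
    return best_t

# toy data: incidence jumps once minimum temperature exceeds 15
temp = [5, 8, 12, 15, 18, 21, 25]
incidence = [1.0, 1.1, 0.9, 1.0, 4.0, 4.2, 4.1]
split = best_split(temp, incidence)  # 15
```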
NASA Astrophysics Data System (ADS)
Hassanzadeh, S.; Hosseinibalam, F.; Omidvari, M.
2008-04-01
Data for seven meteorological variables (relative humidity, wet temperature, dry temperature, maximum temperature, minimum temperature, ground temperature and sun radiation time) and ozone values have been used for statistical analysis. Meteorological variables and ozone values were analyzed using both multiple linear regression and principal component methods. Data for the period 1999-2004 are analyzed jointly using both methods. For all periods, the temperature-dependent variables were highly correlated, but all were negatively correlated with relative humidity. Multiple regression analysis was used to fit the ozone values using the meteorological variables as predictors. A variable selection method based on high loadings of varimax-rotated principal components was used to obtain subsets of the predictor variables to be included in the linear regression model. In 1999, 2001 and 2002 one of the meteorological variables was weakly influenced predominantly by the ozone concentrations. However, the model did not predict that the meteorological variables for the year 2000 were not influenced predominantly by the ozone concentrations, which points to variation in sun radiation. This could be due to other factors that were not explicitly considered in this study.
Zhang, Zheshen; Voss, Paul L
2009-07-06
We propose a continuous variable based quantum key distribution protocol that makes use of discretely signaled coherent light and reverse error reconciliation. We present a rigorous security proof against collective attacks with realistic lossy, noisy quantum channels, imperfect detector efficiency, and detector electronic noise. This protocol is promising for convenient, high-speed operation at link distances up to 50 km with the use of post-selection.
NASA Astrophysics Data System (ADS)
Heslop, E.; Ruiz, S.; Allen, J.; Tintoré, J.
2012-04-01
One of the clear challenges facing oceanography today is to define variability in ocean processes at a seasonal and sub-seasonal scale, in order to clearly identify the signature of both natural large-scale climatic oscillations and the long-term trends brought about by the human-induced change in atmospheric composition. Without visibility of this variance, which helps to determine the margins of significance for long-term trends and decipher cause and effect, the inferences drawn from sparse data points can be misleading. The cyclonic basin scale circulation pattern in the Western Mediterranean has long been known; the role/contribution that processes in the Balearic Basin play in modifying this is less well defined. The Balearic Channels (channels between the Balearic Islands) are constriction points on this basin scale circulation that appear to exert a controlling influence on the north/south exchange of water masses. Understanding the variability in current flows through these channels is important, not just for the transport of heat and salt, but also for ocean biology that responds to physical variability at the scale of that variability. Earlier studies at a seasonal scale identified: an interannual summer/winter variation of 1 Sv in the strength of the main circulation pattern, and a high cruise-to-cruise variability in the pattern and strength of the flows through the channels brought about by mesoscale activity. Initial results using new high-resolution data from glider based monitoring missions across the Ibiza Channel (the main exchange channel in the Balearic Basin), combined with ship and contemporaneous satellite data, indicate surprisingly high and rapid changes in the flows of surface and intermediate waters imposed on the broad seasonal cycle. To date, the data suggest that there are three potential 'modes' of water volume transport, generated from the interplay between basin and mesoscale circulation. 
We will review the concept of transport modes as seen through the earlier seasonal ship-based studies and demonstrate that the scales of variability captured by the glider monitoring provide a unique view of variability in this circulation system, which is as high on a weekly timescale as the previously identified seasonal cycle.
Validating a visual version of the metronome response task.
Laflamme, Patrick; Seli, Paul; Smilek, Daniel
2018-02-12
The metronome response task (MRT)-a sustained-attention task that requires participants to produce a response in synchrony with an audible metronome-was recently developed to index response variability in the context of studies on mind wandering. In the present studies, we report on the development and validation of a visual version of the MRT (the visual metronome response task; vMRT), which uses the rhythmic presentation of visual, rather than auditory, stimuli. Participants completed the vMRT (Studies 1 and 2) and the original (auditory-based) MRT (Study 2) while also responding to intermittent thought probes asking them to report the depth of their mind wandering. The results showed that (1) individual differences in response variability during the vMRT are highly reliable; (2) prior to thought probes, response variability increases with increasing depth of mind wandering; (3) response variability is highly consistent between the vMRT and the original MRT; and (4) both response variability and depth of mind wandering increase with increasing time on task. Our results indicate that the original MRT findings are consistent across the visual and auditory modalities, and that the response variability measured in both tasks indexes a non-modality-specific tendency toward behavioral variability. The vMRT will be useful in the place of the MRT in experimental contexts in which researchers' designs require a visual-based primary task.
Variability of Attention Processes in ADHD: Observations from the Classroom
ERIC Educational Resources Information Center
Rapport, Mark D.; Kofler, Michael J.; Alderson, R. Matt; Timko, Thomas M., Jr.; DuPaul, George J.
2009-01-01
Objective: Classroom- and laboratory-based efforts to study the attentional problems of children with ADHD are incongruent in elucidating attentional deficits; however, none have explored within- or between-minute variability in the classroom attentional processing in children with ADHD. Method: High and low attention groups of ADHD children…
NASA Astrophysics Data System (ADS)
Aerts, C.; Símon-Díaz, S.; Bloemen, S.; Debosscher, J.; Pápics, P. I.; Bryson, S.; Still, M.; Moravveji, E.; Williamson, M. H.; Grundahl, F.; Fredslund Andersen, M.; Antoci, V.; Pallé, P. L.; Christensen-Dalsgaard, J.; Rogers, T. M.
2017-06-01
Stellar evolution models are most uncertain for evolved massive stars. Asteroseismology based on high-precision uninterrupted space photometry has become a new way to test the outcome of stellar evolution theory and was recently applied to a multitude of stars, but not yet to massive evolved supergiants. Our aim is to detect, analyse and interpret the photospheric and wind variability of the O9.5 Iab star HD 188209 from Kepler space photometry and long-term high-resolution spectroscopy. We used Kepler scattered-light photometry obtained by the nominal mission during 1460 d to deduce the photometric variability of this O-type supergiant. In addition, we assembled and analysed high-resolution, high signal-to-noise spectroscopy taken with four spectrographs during some 1800 d to interpret the temporal spectroscopic variability of the star. The variability of this blue supergiant derived from the scattered-light space photometry is in full agreement with that found in the ground-based spectroscopy. We find significant low-frequency variability that is consistently detected in all spectral lines of HD 188209. The photospheric variability propagates into the wind, where it has similar frequencies but slightly higher amplitudes. The morphology of the frequency spectra derived from the long-term photometry and spectroscopy points towards a spectrum of travelling waves with frequency values in the range expected for an evolved O-type star. Convectively driven internal gravity waves excited in the stellar interior offer the most plausible explanation of the detected variability. 
Based on photometric observations made with the NASA Kepler satellite and on spectroscopic observations made with four telescopes: the Nordic Optical Telescope operated by NOTSA and the Mercator Telescope operated by the Flemish Community, both at the Observatorio del Roque de los Muchachos (La Palma, Spain) of the Instituto de Astrofísica de Canarias, the T13 2.0 m Automatic Spectroscopic Telescope (AST) operated by Tennessee State University at the Fairborn Observatory, and the Hertzsprung SONG telescope operated on the Spanish Observatorio del Teide on the island of Tenerife by the Aarhus and Copenhagen Universities and by the Instituto de Astrofísica de Canarias, Spain.
Investigating Electromagnetic Induction through a Microcomputer-Based Laboratory.
ERIC Educational Resources Information Center
Trumper, Ricardo; Gelbman, Moshe
2000-01-01
Describes a microcomputer-based laboratory experiment designed for high school students that very accurately analyzes Faraday's law of electromagnetic induction, addressing each variable separately while the others are kept constant. (Author/CCM)
Natural wind variability triggered drop in German redispatch volume and costs from 2015 to 2016.
Wohland, Jan; Reyers, Mark; Märker, Carolin; Witthaut, Dirk
2018-01-01
Avoiding dangerous climate change necessitates the decarbonization of electricity systems within the next few decades. In Germany, this decarbonization is based on an increased exploitation of variable renewable electricity sources such as wind and solar power. While system security has remained constantly high, the integration of renewables causes additional costs. In 2015, the costs of grid management reached an all-time high of about €1 billion. Despite the addition of renewable capacity, these costs dropped substantially in 2016. We therefore investigate the effect of natural climate variability on grid management costs in this study. We show that the decline was triggered by natural wind variability, focusing on redispatch as a main cost driver. In particular, we find that 2016 was a weak year in terms of wind generation averages and the occurrence of westerly circulation weather types. Moreover, we show that a simple model based on the wind generation time series is skillful in detecting redispatch events on timescales of weeks and beyond. As a consequence, alterations in annual redispatch costs on the order of hundreds of millions of euros need to be understood and communicated as a normal feature of the current system due to natural wind variability.
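A wind-generation-based detector of the kind the paper finds skillful can be caricatured as a threshold rule on the generation time series: strong-wind periods drive grid congestion and hence redispatch. The capacity factors and the threshold below are invented for illustration and are not the paper's model.

```python
def high_wind_weeks(weekly_cf, threshold=0.35):
    """Flag weeks whose mean capacity factor exceeds a threshold, as a crude
    proxy for weeks prone to congestion-driven redispatch."""
    return [i for i, cf in enumerate(weekly_cf) if cf > threshold]

# hypothetical weekly capacity factors for one winter
cf = [0.30, 0.52, 0.28, 0.48, 0.25]
flagged = high_wind_weeks(cf)  # weeks 1 and 3
```

Comparing the number of flagged weeks across years would then reflect the kind of year-to-year wind variability the paper identifies between 2015 and 2016.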
USDA-ARS?s Scientific Manuscript database
The high spatial resolution of QuickBird satellite images makes it possible to show spatial variability at fine details. However, the effect of topography-induced illumination variations become more evident, even in moderately sloped areas. Based on a high resolution (1 m) digital elevation model ge...
Light valve based on nonimaging optics with potential application in cold climate greenhouses
NASA Astrophysics Data System (ADS)
Valerio, Angel A.; Mossman, Michele A.; Whitehead, Lorne A.
2014-09-01
We have evaluated a new concept for a variable light valve and thermal insulation system based on nonimaging optics. The system incorporates compound parabolic concentrators and can readily be switched between an open, highly light-transmissive state and a closed, highly thermally insulating state. This variable light valve makes the transition between high thermal insulation and efficient light transmittance practical and may be useful in plant growth environments to provide both adequate sunlight illumination and thermal insulation as needed. We have measured light transmittance values exceeding 80% for the light valve design and achieved thermal insulation values substantially exceeding those of traditional energy-efficient windows. The light valve system presented in this paper represents a potential solution for greenhouse food production in locations where greenhouses are not economically feasible due to high heating costs.
NASA Astrophysics Data System (ADS)
Astuti, H. N.; Saputro, D. R. S.; Susanti, Y.
2017-06-01
The MGWR model combines a linear regression model with a geographically weighted regression (GWR) model; it can therefore produce both global parameter estimates and local parameter estimates that vary with the observation location. The linkage between observation locations is expressed through a specific weighting scheme, here the adaptive bi-square kernel. In this research, we applied the MGWR model with adaptive bi-square weighting to the case of DHF in Surakarta, based on 10 factors (variables) presumed to influence the number of people with DHF. The observation units are 51 urban villages, and the variables are the number of inhabitants, number of houses, house index, number of public places, number of healthy homes, number of Posyandu, area width, population density level, family welfare, and region elevation. From this research we obtained 51 MGWR models, which fall into 4 groups. The house index was significant as a global variable, area width as a local variable, and the remaining significant variables varied across groups. Global variables are variables that significantly affect all locations, whereas local variables significantly affect only specific locations.
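The adaptive bi-square kernel mentioned above has a simple closed form: each regression point gets a bandwidth equal to the distance to its k-th nearest observation, and observations beyond that bandwidth get zero weight. A minimal sketch (the distances and k are illustrative, not from the Surakarta data):

```python
def adaptive_bisquare_weights(dists, k):
    """Adaptive bi-square kernel as used in GWR/MGWR:
    the bandwidth b is the distance to the k-th nearest
    observation, so dense areas get narrow kernels."""
    b = sorted(dists)[k - 1]
    return [(1.0 - (d / b) ** 2) ** 2 if d < b else 0.0 for d in dists]

# Distances from one regression point to six observation locations
d = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
w = adaptive_bisquare_weights(d, k=4)   # bandwidth b = 3.0
```

These weights then enter each local weighted least-squares fit; only the local terms of an MGWR model use them, while global terms are estimated from all observations equally.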
NASA Astrophysics Data System (ADS)
Wilhelmsen, Hallgeir; Ladstädter, Florian; Scherllin-Pirscher, Barbara; Steiner, Andrea K.
2018-03-01
We provide atmospheric temperature variability indices for the tropical troposphere and stratosphere based on global navigation satellite system (GNSS) radio occultation (RO) temperature measurements. By exploiting the high vertical resolution and the uniform distribution of the GNSS RO temperature soundings we introduce two approaches, both based on an empirical orthogonal function (EOF) analysis. The first method utilizes the whole vertical and horizontal RO temperature field from 30° S to 30° N and from 2 to 35 km altitude. The resulting indices, the leading principal components, resemble the well-known patterns of the Quasi-Biennial Oscillation (QBO) and the El Niño-Southern Oscillation (ENSO) in the tropics. They provide some information on the vertical structure; however, they are not vertically resolved. The second method applies the EOF analysis on each altitude level separately and the resulting indices contain information on the horizontal variability at each densely available altitude level. They capture more variability than the indices from the first method and present a mixture of all variability modes contributing at the respective altitude level, including the QBO and ENSO. Compared to commonly used variability indices from QBO winds or ENSO sea surface temperature, these new indices cover the vertical details of the atmospheric variability. Using them as proxies for temperature variability is also of advantage because there is no further need to account for response time lags. Atmospheric variability indices as novel products from RO are expected to be of great benefit for studies on atmospheric dynamics and variability, for climate trend analysis, as well as for climate model evaluation.
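The EOF-based index construction described above amounts to a principal component analysis of the temperature anomaly field. A minimal sketch on synthetic data (the 28-month QBO-like cycle, field dimensions, and noise level are invented for illustration), using the SVD of the mean-removed data matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic anomaly field: time x space, dominated by one fixed
# spatial pattern oscillating with a QBO-like ~28-month cycle.
t = np.arange(240)                            # monthly samples
pattern = np.sin(np.linspace(0, np.pi, 50))   # fixed spatial structure
field = np.outer(np.sin(2 * np.pi * t / 28), pattern)
field += 0.1 * rng.standard_normal(field.shape)

# EOF analysis = SVD of the (time-mean removed) data matrix
anom = field - field.mean(axis=0)
u, s, vt = np.linalg.svd(anom, full_matrices=False)

eofs = vt                       # spatial patterns (EOFs)
pcs = u * s                     # principal components = variability indices
explained = s**2 / np.sum(s**2)
```

The leading principal component recovers the imposed oscillation and would serve as the variability index; applying the same decomposition level by level corresponds to the second approach in the abstract.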
NASA Astrophysics Data System (ADS)
Han, Yishi; Luo, Zhixiao; Wang, Jianhua; Min, Zhixuan; Qin, Xinyu; Sun, Yunlong
2014-09-01
In general, context-based adaptive variable length coding (CAVLC) decoding in the H.264/AVC standard requires frequent access to unstructured variable length coding tables (VLCTs), consuming significant memory bandwidth. Heavy memory access causes high power consumption and time delays, which are serious problems for portable multimedia devices. We propose a method for high-efficiency CAVLC decoding that uses a program in place of all the VLCTs. The decoded codeword can be obtained without any table look-up or memory access. The experimental results show that the proposed algorithm achieves 100% memory access saving and 40% decoding time saving without degrading video quality. Additionally, the proposed algorithm outperforms conventional CAVLC decoding approaches such as table look-up by sequential search, table look-up by binary search, Moon's method, and Kim's method.
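As a loose illustration of the table-free idea (not the authors' CAVLC program), H.264's Exp-Golomb codes can likewise be decoded purely arithmetically from the bit pattern, with no VLCT in memory:

```python
def decode_exp_golomb(bits):
    """Decode one unsigned Exp-Golomb codeword (used for many H.264
    syntax elements) arithmetically, with no lookup table.
    `bits` is a string of '0'/'1'; returns (value, bits_consumed)."""
    leading_zeros = 0
    while bits[leading_zeros] == '0':
        leading_zeros += 1
    # codeword = 0^n 1 suffix(n bits); value = 2^n - 1 + suffix
    suffix_start = leading_zeros + 1
    suffix_end = suffix_start + leading_zeros
    suffix = int(bits[suffix_start:suffix_end] or '0', 2)
    return (1 << leading_zeros) - 1 + suffix, suffix_end

value, used = decode_exp_golomb('00100')   # -> (3, 5)
```

The codeword structure is regular enough that a few arithmetic operations replace the look-up entirely; the paper applies the same principle to the irregular CAVLC tables.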
Zhang, Miaomiao; Wells, William M; Golland, Polina
2016-10-01
Using image-based descriptors to investigate clinical hypotheses and therapeutic implications is challenging due to the notorious "curse of dimensionality" coupled with a small sample size. In this paper, we present a low-dimensional analysis of anatomical shape variability in the space of diffeomorphisms and demonstrate its benefits for clinical studies. To combat the high dimensionality of the deformation descriptors, we develop a probabilistic model of principal geodesic analysis in a bandlimited low-dimensional space that still captures the underlying variability of image data. We demonstrate the performance of our model on a set of 3D brain MRI scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Our model yields a more compact representation of group variation at substantially lower computational cost than models based on the high-dimensional state-of-the-art approaches such as tangent space PCA (TPCA) and probabilistic principal geodesic analysis (PPGA).
Tian, Xin; Xin, Mingyuan; Luo, Jian; Liu, Mingyao; Jiang, Zhenran
2017-02-01
The selection of relevant genes for breast cancer metastasis is critical for the treatment and prognosis of cancer patients. Although much effort has been devoted to gene selection procedures using different statistical analysis methods or computational techniques, the interpretation of the variables in the resulting survival models has so far been limited. This article proposes a new Random Forest (RF)-based algorithm, called PPIRF, to identify important variables highly related to breast cancer metastasis. It combines the importance scores of two variable selection algorithms: the mean decrease Gini (MDG) criterion of Random Forest and the GeneRank algorithm with protein-protein interaction (PPI) information. The improved prediction accuracy illustrates the reliability and high interpretability of the gene list selected by the PPIRF approach.
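The GeneRank component referenced above is a PageRank-style diffusion of expression scores over the PPI network. A simplified, pure-Python sketch (the toy network, damping factor, and fixed iteration count are illustrative; published GeneRank implementations typically solve the equivalent linear system directly):

```python
def generank(adj, expr, d=0.5, iters=200):
    """Simplified GeneRank: a PageRank variant that diffuses
    expression-based scores over a PPI network.
    adj: symmetric 0/1 adjacency matrix (list of lists);
    expr: per-gene differential expression magnitudes."""
    n = len(expr)
    total = sum(expr)
    ex = [e / total for e in expr]              # normalised expression
    deg = [max(sum(row), 1) for row in adj]
    r = ex[:]
    for _ in range(iters):
        r = [(1 - d) * ex[i]
             + d * sum(adj[i][j] * r[j] / deg[j] for j in range(n))
             for i in range(n)]
    return r

# Toy 4-gene PPI network: gene 0 is a hub connected to all others.
adj = [[0, 1, 1, 1],
       [1, 0, 0, 0],
       [1, 0, 0, 0],
       [1, 0, 0, 0]]
expr = [1.0, 1.0, 1.0, 4.0]   # gene 3 is most differentially expressed
scores = generank(adj, expr)
```

Both strong expression (gene 3) and network centrality (the hub, gene 0) raise a gene's score, which is the behavior PPIRF exploits when blending GeneRank with the MDG importance.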
Fine-scale habitat modeling of a top marine predator: do prey data improve predictive capacity?
Torres, Leigh G; Read, Andrew J; Halpin, Patrick
2008-10-01
Predators and prey assort themselves relative to each other, the availability of resources and refuges, and the temporal and spatial scale of their interaction. Predictive models of predator distributions often rely on these relationships by incorporating data on environmental variability and prey availability to determine predator habitat selection patterns. This approach to predictive modeling holds true in marine systems where observations of predators are logistically difficult, emphasizing the need for accurate models. In this paper, we ask whether including prey distribution data in fine-scale predictive models of bottlenose dolphin (Tursiops truncatus) habitat selection in Florida Bay, Florida, U.S.A., improves predictive capacity. Environmental characteristics are often used as predictor variables in habitat models of top marine predators with the assumption that they act as proxies of prey distribution. We examine the validity of this assumption by comparing the response of dolphin distribution and fish catch rates to the same environmental variables. Next, the predictive capacities of four models, with and without prey distribution data, are tested to determine whether dolphin habitat selection can be predicted without recourse to describing the distribution of their prey. The final analysis determines the accuracy of predictive maps of dolphin distribution produced by modeling areas of high fish catch based on significant environmental characteristics. We use spatial analysis and independent data sets to train and test the models. Our results indicate that, due to high habitat heterogeneity and the spatial variability of prey patches, fine-scale models of dolphin habitat selection in coastal habitats will be more successful if environmental variables are used as predictor variables of predator distributions rather than relying on prey data as explanatory variables. 
However, predictive modeling of prey distribution as the response variable based on environmental variability did produce high predictive performance of dolphin habitat selection, particularly foraging habitat.
Kusumaningrum, Dewi; Lee, Hoonsoo; Lohumi, Santosh; Mo, Changyeun; Kim, Moon S; Cho, Byoung-Kwan
2018-03-01
The viability of seeds is important for determining their quality. A high-quality seed is one with a high capacity for germination, which is necessary to ensure high productivity. Hence, developing technology for the detection of seed viability is a high priority in agriculture. Fourier transform near-infrared (FT-NIR) spectroscopy is one of the most popular vibrational spectroscopy techniques. This study uses FT-NIR spectroscopy to determine the viability of soybean seeds. Viable seeds and artificially aged seeds (as non-viable soybeans) were used in this research. The FT-NIR spectra of soybean seeds were collected and analysed using a partial least-squares discriminant analysis (PLS-DA) to classify viable and non-viable soybean seeds. Moreover, the variable importance in projection (VIP) method for variable selection was combined with the PLS-DA. The most effective wavelengths were selected by the VIP method, which retained 146 optimal variables from the full set of 1557. The results demonstrated that FT-NIR spectral analysis with the PLS-DA method, using either all variables or the selected variables, showed good performance, with prediction accuracy for soybean viability close to 100%. Hence, FT-NIR techniques with chemometric analysis have the potential for rapidly measuring soybean seed viability. © 2017 Society of Chemical Industry.
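The VIP criterion used for wavelength selection has a standard closed form: variable j's score aggregates its squared, normalised PLS weights across latent components, weighted by the y-variance each component explains. By construction the squared scores average to 1, so VIP > 1 is the usual selection cut-off. A sketch with invented weights (not the study's 1557-variable model):

```python
import math

def vip_scores(weights, ssy):
    """Variable Importance in Projection from PLS weights.
    weights: one weight vector w_a per latent component;
    ssy: y-variance explained by each component (SSY_a)."""
    p = len(weights[0])                     # number of variables
    norms = [math.sqrt(sum(wj ** 2 for wj in w)) for w in weights]
    total = sum(ssy)
    vip = []
    for j in range(p):
        s = sum(ssy[a] * (weights[a][j] / norms[a]) ** 2
                for a in range(len(weights)))
        vip.append(math.sqrt(p * s / total))
    return vip

# Toy example: 4 wavelengths, 2 latent components
w = [[0.8, 0.5, 0.3, 0.1],
     [0.1, 0.2, 0.9, 0.4]]
ssy = [5.0, 2.0]
v = vip_scores(w, ssy)
```

Wavelengths weighted heavily on the dominant component score above 1 and would be retained; the rest would be dropped, mirroring the 146-of-1557 selection in the study.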
Why significant variables aren't automatically good predictors.
Lo, Adeline; Chernoff, Herman; Zheng, Tian; Lo, Shaw-Hwa
2015-11-10
Thus far, genome-wide association studies (GWAS) have been disappointing: investigators have been unable to use the identified, statistically significant variants in complex diseases to make predictions useful for personalized medicine. Why are significant variables not leading to good prediction of outcomes? We point out that this problem is prevalent in simple as well as complex data, in the sciences as well as the social sciences. We offer a brief explanation and some statistical insights into why higher significance does not automatically imply stronger predictivity, and illustrate this through simulations and a real breast cancer example. We also demonstrate that highly predictive variables do not necessarily appear highly significant, thus evading researchers who rely on significance-based methods. What makes variables good for prediction versus significance depends on different properties of the underlying distributions. If prediction is the goal, we must lay aside significance as the only selection standard. We suggest that progress in prediction requires a new research agenda: searching for a novel criterion that retrieves highly predictive rather than highly significant variables. We offer an alternative approach that was not designed for significance, the partition retention method, which was very effective at prediction on a long-studied breast cancer data set, reducing the classification error rate from 30% to 8%.
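The gap between significance and predictivity is easy to reproduce in a toy simulation (the effect size and sample size below are invented): with a large sample, a variable whose group means differ by only a tenth of a standard deviation is overwhelmingly "significant", yet even the optimal single-variable classifier barely beats coin flipping.

```python
import random
from statistics import NormalDist, mean, stdev

random.seed(1)

# Two groups of n = 10,000 with a tiny mean shift in one variable
n, shift = 10_000, 0.1
controls = [random.gauss(0.0, 1.0) for _ in range(n)]
cases = [random.gauss(shift, 1.0) for _ in range(n)]

# Two-sample z statistic: tiny effects become "significant" as n grows
se = (stdev(controls) ** 2 / n + stdev(cases) ** 2 / n) ** 0.5
z = (mean(cases) - mean(controls)) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

# The best possible classifier (threshold at shift/2) barely beats chance
threshold = shift / 2
accuracy = (sum(x <= threshold for x in controls)
            + sum(x > threshold for x in cases)) / (2 * n)
```

Here the z statistic lands around 10 while accuracy stays near 52%, mirroring the paper's point that significance tracks how detectable an effect is, not how well it separates the classes.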
Variability, trends, and drivers of regional fluctuations in Australian fire activity
NASA Astrophysics Data System (ADS)
Earl, Nick; Simmonds, Ian
2017-07-01
Throughout the world, fire regimes are determined by climate, vegetation, and anthropogenic factors, and they have great spatial and temporal variability. The availability of high-quality satellite data has revolutionized fire monitoring, allowing for a more consistent and comprehensive evaluation of temporal and spatial patterns. Here we utilize a satellite-based "active fire" (AF) product to statistically analyze 2001-2015 variability and trends in Australian fire activity and link these to precipitation and large-scale atmospheric structures (namely, the El Niño-Southern Oscillation (ENSO) and the Indian Ocean Dipole (IOD)) known to have potential for predicting fire activity in different regions. It is found that Australian fire activity is decreasing (during summer (December-February)) or stable, with high temporal and spatial variability. Eastern New South Wales (NSW) has the strongest decreasing trend (significant at the 1% level), especially during the winter (JJA) season. Other significantly decreasing areas are Victoria/NSW, Tasmania, and South-east Queensland. These decreasing fire regions are relatively highly populated, so we suggest that the declining trends are due to improved fire management reducing the size and duration of bush fires. Almost half of all Australian AFs occur during spring (September-November). We show that there is considerable potential throughout Australia for skillful forecasts of future-season fire activity based on current and previous precipitation activity, ENSO phase, and, to a lesser degree, the IOD phase. This potential is highly variable depending on location; e.g., the IOD phase is far more indicative of fire activity in southwest Western Australia than in Queensland.
NASA Astrophysics Data System (ADS)
Zhao, Zhen-tao; Huang, Wei; Li, Shi-Bin; Zhang, Tian-Tian; Yan, Li
2018-06-01
In the current study, a variable Mach number waverider design approach has been proposed based on the osculating cone theory. The design Mach number of the osculating cone constant Mach number waverider with the same volumetric efficiency as the osculating cone variable Mach number waverider has been determined with a program written to calculate the volumetric efficiencies of waveriders. The CFD approach has been utilized to verify the effectiveness of the proposed approach. At the same time, a comparative analysis of the aerodynamic performance shows the advantage of the osculating cone variable Mach number waverider. The obtained results show that the osculating cone variable Mach number waverider achieves a higher lift-to-drag ratio throughout the flight profile than the osculating cone constant Mach number waverider, and it has superior low-speed aerodynamic performance while maintaining nearly the same high-speed aerodynamic performance.
Puffed-up but shaky selves: State self-esteem level and variability in narcissists.
Geukes, Katharina; Nestler, Steffen; Hutteman, Roos; Dufner, Michael; Küfner, Albrecht C P; Egloff, Boris; Denissen, Jaap J A; Back, Mitja D
2017-05-01
Different theoretical conceptualizations characterize grandiose narcissists by high, yet fragile self-esteem. Empirical evidence, however, has been inconsistent, particularly regarding the relationship between narcissism and self-esteem fragility (i.e., self-esteem variability). Here, we aim at unraveling this inconsistency by disentangling the effects of two theoretically distinct facets of narcissism (i.e., admiration and rivalry) on the two aspects of state self-esteem (i.e., level and variability). We report on data from a laboratory-based and two field-based studies (total N = 596) in realistic social contexts, capturing momentary, daily, and weekly fluctuations of state self-esteem. To estimate unbiased effects of narcissism on the level and variability of self-esteem within one model, we applied mixed-effects location scale models. Results of the three studies and their meta-analytical integration indicated that narcissism is positively linked to self-esteem level and variability. When distinguishing between admiration and rivalry, however, an important dissociation was identified: Admiration was related to high (and rather stable) levels of state self-esteem, whereas rivalry was related to (rather low and) fragile self-esteem. Analyses on underlying processes suggest that effects of rivalry on self-esteem variability are based on stronger decreases in self-esteem from one assessment to the next, particularly after a perceived lack of social inclusion. The revealed differentiated effects of admiration and rivalry explain why the analysis of narcissism as a unitary concept has led to the inconsistent past findings and provide deeper insights into the intrapersonal dynamics of grandiose narcissism governing state self-esteem. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Moving forward socio-economically focused models of deforestation.
Dezécache, Camille; Salles, Jean-Michel; Vieilledent, Ghislain; Hérault, Bruno
2017-09-01
Whilst high-resolution spatial variables contribute to a good fit of spatially explicit deforestation models, socio-economic processes are often beyond the scope of these models. Such a low level of interest in the socio-economic dimension of deforestation limits the relevance of these models for decision-making and may be the cause of their failure to accurately predict observed deforestation trends in the medium term. This study proposes a flexible methodology for taking into account multiple drivers of deforestation in tropical forested areas, where the intensity of deforestation is explicitly predicted from socio-economic variables. By coupling a model of deforestation location based on spatial environmental variables with several sub-models of deforestation intensity based on socio-economic variables, we were able to create a map of predicted deforestation over the period 2001-2014 in French Guiana. This map was compared to a reference map for accuracy assessment, not only at the pixel scale but also over cells ranging from 1 to approximately 600 sq. km. Highly significant relationships were explicitly established between deforestation intensity and several socio-economic variables: population growth, the amount of agricultural subsidies, and gold and wood production. Such a precise characterization of socio-economic processes makes it possible to avoid overestimation biases in high-deforestation areas, suggesting a better integration of socio-economic processes into the models. Whilst considering deforestation as a purely geographical process yields conservative models unable to effectively assess changes in the socio-economic and political contexts influencing deforestation trends, an explicit characterization of the socio-economic dimension of deforestation is critical for the creation of deforestation scenarios in REDD+ projects. © 2017 John Wiley & Sons Ltd.
Sankaran, Ramanan; Angel, Jordan; Brown, W. Michael
2015-04-08
The growth in size of networked high performance computers, along with novel accelerator-based node architectures, has further emphasized the importance of communication efficiency in high performance computing. The world's largest high performance computers are usually operated as shared user facilities due to the costs of acquisition and operation. Applications are scheduled for execution in a shared environment and are placed on nodes that are not necessarily contiguous on the interconnect. Furthermore, the placement of tasks on the nodes allocated by the scheduler is sub-optimal, leading to performance loss and variability. Here, we investigate the impact of task placement on the performance of two massively parallel application codes on the Titan supercomputer: a turbulent combustion flow solver (S3D) and a molecular dynamics code (LAMMPS). Benchmark studies show a significant deviation from ideal weak scaling and variability in performance. The inter-task communication distance was determined to be one of the significant contributors to the performance degradation and variability. A genetic algorithm-based parallel optimization technique was used to optimize the task ordering. This technique provides an improved placement of the tasks on the nodes, taking into account the application's communication topology and the system interconnect topology. As a result, application benchmarks after task reordering through the genetic algorithm show a significant improvement in performance and reduction in variability, enabling the applications to achieve better time to solution and scalability on Titan during production.
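The task-reordering idea can be sketched as a search over task-to-node permutations that minimises traffic-weighted interconnect distance. The toy below uses a deliberately simplified evolutionary loop (swap mutation plus truncation selection) rather than the authors' full genetic algorithm, and an invented 6-task workload on a 1-D chain interconnect:

```python
import random

def placement_cost(perm, comm, dist):
    """Total cost: traffic between tasks i,j weighted by the
    interconnect distance between their assigned nodes."""
    n = len(perm)
    return sum(comm[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def evolve_placement(comm, dist, pop=30, gens=200, seed=0):
    """Toy evolutionary search over task-to-node permutations
    (swap mutation + truncation selection)."""
    rng = random.Random(seed)
    n = len(comm)
    population = [rng.sample(range(n), n) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda p: placement_cost(p, comm, dist))
        survivors = population[: pop // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            a, b = rng.sample(range(n), 2)   # swap two assignments
            child[a], child[b] = child[b], child[a]
            children.append(child)
        population = survivors + children
    return min(population, key=lambda p: placement_cost(p, comm, dist))

# 6 tasks on a chain of 6 nodes; tasks (0,1), (2,3), (4,5) talk heavily
n = 6
comm = [[0] * n for _ in range(n)]
for i, j in [(0, 1), (2, 3), (4, 5)]:
    comm[i][j] = comm[j][i] = 10
dist = [[abs(i - j) for j in range(n)] for i in range(n)]
best = evolve_placement(comm, dist)
```

The search converges on placements where each heavily communicating pair sits on adjacent nodes, which is exactly the effect the reordering achieves at scale on Titan's interconnect.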
Private Loans: Facts and Trends
ERIC Educational Resources Information Center
Institute for College Access & Success, 2014
2014-01-01
Private loans are one of the riskiest ways to finance a college education. Like credit cards, they typically have variable interest rates. Both variable and fixed rates are higher for those who can least afford them--as high as 13% in June 2014. Private loans are not eligible for the important deferment, income-based repayment, or loan forgiveness…
Genetic structure of American chestnut populations based on neutral DNA markers
Thomas L. Kubisiak; James H. Roberds
2006-01-01
Microsatellite and RAPD markers suggest that American chestnut exists as a highly variable species, even at the margins of its natural range, with a large proportion of its genetic variability (~95%) occurring within populations. A statistically significant proportion also exists among populations. Although genetic differentiation among populations has taken place, no...
Forest Stand Canopy Structure Attribute Estimation from High Resolution Digital Airborne Imagery
Demetrios Gatziolis
2006-01-01
A study of forest stand canopy variable assessment using digital, airborne, multispectral imagery is presented. Variable estimation involves stem density, canopy closure, and mean crown diameter, and it is based on quantification of spatial autocorrelation among pixel digital numbers (DN) using variogram analysis and an alternative, non-parametric approach known as...
Pharmacokinetic Variability of Drugs Used for Prophylactic Treatment of Migraine.
Tfelt-Hansen, Peer; Ågesen, Frederik Nybye; Pavbro, Agniezka; Tfelt-Hansen, Jacob
2017-05-01
In this review, we evaluate the variability in the pharmacokinetics of 11 drugs with established prophylactic effects in migraine to facilitate 'personalized medicine' with these drugs. PubMed was searched for 'single-dose' and 'steady-state' pharmacokinetic studies of these 11 drugs. The maximum plasma concentration was reported in 248 single-dose and 115 steady-state pharmacokinetic studies, and the area under the plasma concentration-time curve was reported in 299 single-dose and 112 steady-state pharmacokinetic studies. For each study, the coefficient of variation was calculated for the maximum plasma concentration and the area under the plasma concentration-time curve, and drug variability was divided into two categories: high variability (coefficient of variation >40%) or low to moderate variability (coefficient of variation <40%). Based on the area under the plasma concentration-time curve in steady-state studies, the following drugs have high pharmacokinetic variability: propranolol in 92% (33/36), metoprolol in 85% (33/39), and amitriptyline in 60% (3/5) of studies. The following drugs have low or moderate variability: atenolol in 100% (2/2), valproate in 100% (15/15), topiramate in 88% (7/8), and naproxen and candesartan in 100% (2/2) of studies. For drugs with low or moderate pharmacokinetic variability, treatment can start without initial titration of doses, whereas titration is used to possibly enhance the tolerability of topiramate and amitriptyline. The very high pharmacokinetic variability of metoprolol and propranolol can result in very high plasma concentrations in a small minority of patients, and these drugs should therefore be titrated up from a low initial dose, depending mainly on the occurrence of adverse events.
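The review's dichotomy is simply a threshold on the coefficient of variation of AUC (or Cmax) across subjects. A minimal sketch with hypothetical AUC values (not data from the reviewed studies):

```python
from statistics import mean, stdev

def pharmacokinetic_variability(auc_values):
    """Coefficient of variation (%) of AUC across subjects, and the
    review's dichotomy: >40% is high, otherwise low to moderate."""
    cv = 100.0 * stdev(auc_values) / mean(auc_values)
    return cv, ("high" if cv > 40.0 else "low or moderate")

# Hypothetical AUC values (ng·h/mL) from two steady-state studies
cv1, cat1 = pharmacokinetic_variability([120, 95, 310, 60, 480, 150])  # wide, propranolol-like spread
cv2, cat2 = pharmacokinetic_variability([100, 110, 95, 105, 90, 100])  # tight, valproate-like spread
```

A drug whose studies mostly fall in the "high" category would be flagged for dose titration under the review's recommendation.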
Zhang, Zhen; Ma, Cheng; Zhu, Rong
2017-08-23
Artificial Neural Networks (ANNs), including Deep Neural Networks (DNNs), have become the state-of-the-art methods in machine learning and have achieved amazing success in speech recognition, visual object recognition, and many other domains. There are several hardware platforms for developing accelerated implementations of ANN models. Since Field Programmable Gate Array (FPGA) architectures are flexible and can provide high performance per watt of power consumption, they have attracted a number of applications from scientists. In this paper, we propose an FPGA-based, granularity-variable neuromorphic processor (FBGVNP). The traits of FBGVNP can be summarized as granularity variability, scalability, integrated computing, and addressing ability: first, the number of neurons is variable rather than constant in one core; second, the multi-core network scale can be extended in various forms; third, the neuron addressing and computing processes are executed simultaneously. These make the processor more flexible and better suited for different applications. Moreover, a neural network-based controller is mapped to FBGVNP and applied in a multi-input, multi-output (MIMO), real-time temperature-sensing and control system. Experiments validate the effectiveness of the neuromorphic processor. The FBGVNP provides a new scheme for building ANNs that is flexible, highly energy-efficient, and applicable in many areas.
Patch-based iterative conditional geostatistical simulation using graph cuts
NASA Astrophysics Data System (ADS)
Li, Xue; Mariethoz, Gregoire; Lu, DeTang; Linde, Niklas
2016-08-01
Training image-based geostatistical methods are increasingly popular in groundwater hydrology, even if existing algorithms present limitations that often make real-world applications difficult. These limitations include a computational cost that can be prohibitive for high-resolution 3-D applications, the presence of visual artifacts in the model realizations, and a low variability between model realizations due to the limited pool of patterns available in a finite-size training image. In this paper, we address these issues by proposing an iterative patch-based algorithm which adapts a graph cuts methodology that is widely used in computer graphics. Our adapted graph cuts method optimally cuts patches of pixel values borrowed from the training image and assembles them successively, each time accounting for the information of previously stitched patches. The initial simulation result might display artifacts, which are identified as regions of high cost. These artifacts are reduced by iteratively placing new patches in high-cost regions. In contrast to most patch-based algorithms, the proposed scheme can also efficiently address point conditioning. An advantage of the method is that the cut process results in the creation of new patterns that are not present in the training image, thereby increasing pattern variability. To quantify this effect, a new measure of variability, the merging index, is developed; it quantifies the pattern variability in the realizations with respect to the training image. A series of sensitivity analyses demonstrates the stability of the proposed graph cuts approach, which produces satisfying simulations for a wide range of parameter values. Applications to 2-D and 3-D cases are compared to state-of-the-art multiple-point methods. The results show that the proposed approach obtains significant speedups and increases variability between realizations.
Connectivity functions applied to 2-D models and transport simulations in 3-D models are used to demonstrate that pattern continuity is preserved.
Variability of Massive Young Stellar Objects in Cygnus-X
NASA Astrophysics Data System (ADS)
Thomas, Nancy H.; Hora, J. L.; Smith, H. A.
2013-01-01
Young stellar objects (YSOs) are stars in the process of formation. Several recent investigations have shown a high rate of photometric variability in YSOs at near- and mid-infrared wavelengths. Theoretical models for the formation of massive stars (1-10 solar masses) remain highly idealized, and little is known about the mechanisms that produce the variability. An ongoing Spitzer Space Telescope program is studying massive star formation in the Cygnus-X region. In conjunction with the Spitzer observations, we have conducted a ground-based near-infrared observing program of the Cygnus-X DR21 field using PAIRITEL, the automated infrared telescope at Whipple Observatory. Using the Stetson index for variability, we identified variable objects, including a number of variable YSOs, in our time-series PAIRITEL data of DR21. We searched for periodicity among our variable objects using the Lomb-Scargle algorithm and identified periodic variable objects with an average period of 8.07 days. Characterization of these variable and periodic objects will help constrain current models of star formation. This work is supported in part by the NSF REU and DOD ASSURE programs under NSF grant no. 0754568 and by the Smithsonian Institution.
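The Lomb-Scargle periodogram handles the uneven sampling typical of ground-based monitoring. A pure-Python sketch of the classic Scargle (1982) form, run on a synthetic, irregularly sampled light curve with an 8-day period (all numbers invented; real searches would use a library implementation such as astropy's):

```python
import math
import random

def lomb_scargle(t, y, freqs):
    """Classic Lomb-Scargle periodogram for unevenly sampled data
    (Scargle 1982); y is assumed mean-subtracted."""
    power = []
    for f in freqs:
        w = 2.0 * math.pi * f
        tau = math.atan2(sum(math.sin(2 * w * ti) for ti in t),
                         sum(math.cos(2 * w * ti) for ti in t)) / (2 * w)
        c = [math.cos(w * (ti - tau)) for ti in t]
        s = [math.sin(w * (ti - tau)) for ti in t]
        yc = sum(yi * ci for yi, ci in zip(y, c))
        ys = sum(yi * si for yi, si in zip(y, s))
        power.append(0.5 * (yc ** 2 / sum(ci ** 2 for ci in c)
                            + ys ** 2 / sum(si ** 2 for si in s)))
    return power

# Irregularly sampled light curve with an 8-day period (cf. the 8.07-day mean)
rng = random.Random(42)
t = sorted(rng.uniform(0, 60) for _ in range(120))
y = [math.sin(2 * math.pi * ti / 8.0) + 0.2 * rng.gauss(0, 1) for ti in t]
ybar = sum(y) / len(y)
y = [yi - ybar for yi in y]                      # subtract the mean

freqs = [0.01 + 0.002 * k for k in range(150)]   # cycles per day
power = lomb_scargle(t, y, freqs)
best_period = 1.0 / freqs[power.index(max(power))]
```

The periodogram peaks near the injected 8-day period despite the gaps in the time series, which is why this estimator is standard for ground-based variability surveys.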
Blankers, Matthijs; Frijns, Tom; Belackova, Vendula; Rossi, Carla; Svensson, Bengt; Trautmann, Franz; van Laar, Margriet
2014-01-01
Cannabis is Europe's most commonly used illicit drug. Some users do not develop dependence or other problems, whereas others do. Many factors are associated with the occurrence of cannabis-related disorders. This makes it difficult to identify key risk factors and markers to profile at-risk cannabis users using traditional hypothesis-driven approaches. Therefore, the use of a data-mining technique called binary recursive partitioning is demonstrated in this study by creating a classification tree to profile at-risk users. 59 variables on cannabis use and drug market experiences were extracted from an internet-based survey dataset collected in four European countries (Czech Republic, Italy, Netherlands and Sweden), n = 2617. These 59 potential predictors of problematic cannabis use were used to partition individual respondents into subgroups with low and high risk of having a cannabis use disorder, based on their responses on the Cannabis Abuse Screening Test. Both a generic model for the four countries combined and four country-specific models were constructed. Of the 59 variables included in the first analysis step, only three variables were required to construct a generic partitioning model to classify high risk cannabis users with 65-73% accuracy. Based on the generic model for the four countries combined, the highest risk for cannabis use disorder is seen in participants reporting a cannabis use on more than 200 days in the last 12 months. In comparison to the generic model, the country-specific models led to modest, non-significant improvements in classification accuracy, with an exception for Italy (p = 0.01). Using recursive partitioning, it is feasible to construct classification trees based on only a few variables with acceptable performance to classify cannabis users into groups with low or high risk of meeting criteria for cannabis use disorder. The number of cannabis use days in the last 12 months is the most relevant variable. 
The identified variables may be considered for use in future screeners for cannabis use disorders.
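The core of binary recursive partitioning is a Gini-based split search. A minimal sketch of that step is below; the data, the risk probabilities and the 200-day boundary are synthetic stand-ins for illustration, not the survey data or the published tree.

```python
import numpy as np

def gini(y):
    """Gini impurity of a binary (0/1) label vector."""
    if len(y) == 0:
        return 0.0
    p = y.mean()
    return 2.0 * p * (1.0 - p)

def best_split(x, y):
    """Find the threshold on one predictor that minimizes the
    weighted Gini impurity of the two child nodes."""
    best_t, best_score = None, np.inf
    for t in np.unique(x)[:-1]:
        left, right = y[x <= t], y[x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score

# Synthetic illustration: risk of a cannabis use disorder rises sharply
# for respondents reporting more than ~200 use days in the last year.
rng = np.random.default_rng(0)
days = rng.integers(0, 366, size=2000)
p_risk = np.where(days > 200, 0.7, 0.15)   # assumed risk profile
risk = (rng.random(2000) < p_risk).astype(float)

t, _ = best_split(days.astype(float), risk)
print(f"learned split: use days > {t:.0f}")
```

Applied recursively to each child node, this single-split search grows the full classification tree.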
Atmospheric icing of structures: Observations and simulations
NASA Astrophysics Data System (ADS)
Ágústsson, H.; Elíasson, Á. J.; Thorsteins, E.; Rögnvaldsson, Ó.; Ólafsson, H.
2012-04-01
This study compares observed icing in a test span in complex orography at Hallormsstaðaháls (575 m) in East Iceland with parameterized icing based on an icing model and dynamically downscaled weather at high horizontal resolution. Four icing events have been selected from an extensive dataset of observed atmospheric icing in Iceland: a total of 86 test spans have been erected since 1972 at 56 locations in complex terrain, with more than 1000 icing events documented. The events used here have peak observed ice loads between 4 and 36 kg/m. Most of the ice accretion is in-cloud icing, but it may partly be mixed with freezing drizzle and wet-snow icing. The calculation of atmospheric icing is made in two steps. First, the atmospheric data are created by dynamically downscaling the ECMWF analysis to high resolution using the non-hydrostatic mesoscale Advanced Research WRF model. Horizontal resolutions of 9, 3, 1 and 0.33 km are necessary for the atmospheric model to correctly reproduce local weather in the complex terrain of Iceland. Second, the Makkonen model is used to calculate the ice accretion rate on the conductors based on the simulated temperature, wind, cloud and precipitation variables from the atmospheric data. In general, the atmospheric model correctly simulates the atmospheric variables, and icing calculations based on them correctly identify the observed icing events but underestimate the load due to too slow ice accretion. This is most obvious when the temperature is slightly below 0°C and the observed icing is most intense. The model results improve significantly when additional weather observations from an upstream station are used to nudge the atmospheric model. However, the large variability in the simulated atmospheric variables results in high temporal and spatial variability in the calculated ice accretion. 
Furthermore, the icing model is highly sensitive to the assumed droplet size, and some of the icing may be due to freezing drizzle or wet snow rather than in-cloud icing of super-cooled droplets. In addition, the icing model (Makkonen) may not be accurate for the highest icing loads observed.
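The Makkonen-type accretion step reduces to the standard rate equation dM/dt = α1·α2·α3·w·v·A. The sketch below integrates it for a cylindrical conductor; the efficiencies, liquid water content, wind speed and ice density are illustrative assumptions, not calibrated values from this study.

```python
import math

def accretion_rate(w, v, d, alpha1=0.5, alpha2=1.0, alpha3=1.0):
    """Makkonen-type ice accretion rate on a conductor (kg per m per s).

    w  -- liquid water content of the air (kg/m^3)
    v  -- wind speed normal to the conductor (m/s)
    d  -- conductor-plus-ice diameter (m); cross-section A = d per unit length
    alpha1..alpha3 -- collision, sticking and accretion efficiencies (0..1),
                      fixed illustrative values here, not calibrated ones.
    """
    return alpha1 * alpha2 * alpha3 * w * v * d

# Integrate the load over a 24 h in-cloud icing event (assumed conditions).
rho_ice = 900.0              # kg/m^3, accreted ice density (assumed)
load, diameter = 0.0, 0.03   # kg/m, m
dt = 60.0                    # s
for _ in range(24 * 60):
    load += accretion_rate(w=3e-4, v=15.0, d=diameter) * dt
    # the ice sleeve grows; update the collision cross-section diameter
    diameter = math.sqrt(0.03**2 + 4.0 * load / (math.pi * rho_ice))

print(f"accreted load after 24 h: {load:.1f} kg/m")
```

Note the positive feedback: the growing ice sleeve enlarges the collision cross-section, so the accretion rate increases over the event.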
Individual Variability in Aerobic Fitness Adaptations to 70-d of Bed Rest and Exercise Training
NASA Technical Reports Server (NTRS)
Downs, Meghan; Buxton, Roxanne; Goetchius, Elizabeth; DeWitt, John; Ploutz-Snyder, Lori
2016-01-01
Change in maximal aerobic capacity (VO2pk) in response to exercise training and disuse is highly variable among individuals. Factors that could contribute to the observed variability (lean mass, daily activity, diet, sleep, stress) are not routinely controlled in studies. The NASA bed rest (BR) studies use a highly controlled, hospital-based model as an analog of spaceflight. In this study, diet, hydration, physical activity and light/dark cycles were precisely controlled, providing the opportunity to investigate individual variability. PURPOSE. Evaluate the contribution of exercise intensity and lean mass to change in VO2pk during 70-d of BR or BR + exercise. METHODS. Subjects completed 70-d of BR alone (CON, N=9) or BR + exercise (EX, N=17). The exercise prescription included 6 d/wk of aerobic exercise at 70-100% of max and 3 d/wk of lower body resistance exercise. Subjects were monitored 24 hr/d. VO2pk and lean mass (iDXA) were measured pre and post BR. ANOVA was used to evaluate changes in VO2pk pre to post BR. Subjects were retrospectively divided into high and low responders based on change in VO2pk (CON > 20% loss, n=5; EX >10% loss, n=4, or 5% gain, n=4) to further understand individual variability. RESULTS. VO2pk decreased from pre to post BR in CON (P<0.05) and was maintained in EX; however, significant individual variability was observed (CON: -22%, range: -39% to -0.5%; EX: -1.8%, range: -16% to 12.6%). The overlap in ranges between groups included 3 CON who experienced smaller reductions in VO2pk (<16%) than the worst-responding EX subjects. Individual variability was maintained when VO2pk was normalized to lean mass (range, CON: -33.7% to -5.7%; EX: -15.8% to 11%), and the overlap included 5 CON with smaller reductions in VO2pk than the worst-responding EX subjects. High responders to disuse also lost the most lean mass; however, this relationship was not maintained in EX (i.e. 
the largest gains/losses in lean mass were observed in both high and low responders). Change in VO2pk was not related to exercise intensity. CONCLUSION. Change in VO2pk in response to disuse and exercise was highly variable among individuals, even in this tightly controlled study. Loss of lean mass accounts for a significant degree of variability in CON; however, training-induced gains in VO2pk appear unrelated to lean mass or exercise intensity.
He, Dong; Chen, Yongfa; Zhao, Kangning; Cornelissen, J H C; Chu, Chengjin
2018-02-03
How functional traits vary with environmental conditions is of fundamental importance in trait-based community ecology. However, how intraspecific variability in functional traits is connected to species distribution is not well understood. This study investigated inter- and intraspecific variation of a key functional trait, specific leaf area (leaf area per unit dry mass; SLA), in relation to soil factors, and tested whether trait variation is more closely associated with specific environmental regimes for low-variability species than for high-variability species. In a subtropical evergreen forest plot (50 ha, southern China), 106 700 leaves from 5335 individuals of 207 woody species were intensively collected, with 30 individuals sampled for most species to ensure a sample size representative of intraspecific variability. Soil conditions for each plant were estimated by kriging from more than 1700 observational soil locations across the plot. Intra- and interspecific variation in SLA were separately related to environmental factors. Based on the species-specific variation of SLA, species were categorized into three groups: low-, intermediate- and high-intraspecific variability. Intraspecific habitat ranges and the strength of SLA-habitat relationships were compared among these three groups. Interspecific variation in SLA far exceeds the intraspecific variation (77% vs. 8%). Total soil nitrogen (TN, positively) and total organic carbon (TOC, negatively) are the most important explanatory factors for SLA variation at both the intra- and interspecific levels. SLA, both within and between species, decreases with decreasing soil nitrogen availability. As predicted, species with low intraspecific variability in SLA have narrower habitat ranges with respect to soil TOC and TN and show a stronger SLA-habitat association than high-variability species. 
For woody plants low SLA is a phenotypic and probably adaptive response to nitrogen stress, which drives the predominance of species with ever-decreasing SLA towards less fertile habitats. Intraspecific variability in SLA is positively connected to species' niche breadth, suggesting that low-variability species may play a more deterministic role in structuring plant assemblages than high-variability species. This study highlights the importance of quantifying intraspecific trait variation to improve our understanding of species distributions across a vegetated landscape. © The Author(s) 2018. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Bae, Kyung-Hoon; Lee, Jungjoon; Kim, Eun-Soo
2008-06-01
In this paper, a variable disparity estimation (VDE)-based intermediate view reconstruction (IVR) with dynamic flow allocation (DFA) over an Ethernet passive optical network (EPON)-based access network is proposed. In the proposed system, the stereoscopic images are estimated by a variable block-matching algorithm (VBMA) and transmitted to the receiver through DFA over EPON. This scheme improves a priority-based access network by converting it to a flow-based access network with a new access mechanism and scheduling algorithm; 16-view images are then synthesized by the IVR using VDE. Experimental results indicate that the proposed system improves the peak signal-to-noise ratio (PSNR) by up to 4.86 dB and reduces the processing time to 3.52 s. Additionally, the network service provider can guarantee upper limits on transmission delays per flow. The modeling and simulation results, including mathematical analyses, are also provided.
High school science enrollment of black students
NASA Astrophysics Data System (ADS)
Goggins, Ellen O.; Lindbeck, Joy S.
How can the high school science enrollment of black students be increased? School and home counseling and classroom procedures could benefit from variables identified as predictors of science enrollment. The problem in this study was to identify a set of variables that characterize science course enrollment by black secondary students. The population consisted of a subsample of 3963 black high school seniors from The High School and Beyond 1980 Base-Year Survey. Using multiple linear regression, backward regression, and correlation analyses, US Census region and earning grades of mostly As and Bs in English were found to be significant predictors of the number of science courses scheduled by black seniors.
NASA Astrophysics Data System (ADS)
Zeyringer, Marianne; Price, James; Fais, Birgit; Li, Pei-Hao; Sharp, Ed
2018-05-01
The design of cost-effective power systems with high shares of variable renewable energy (VRE) technologies requires a modelling approach that simultaneously represents the whole energy system combined with the spatiotemporal and inter-annual variability of VRE. Here, we soft-link a long-term energy system model, which explores new energy system configurations from years to decades, with a high spatial and temporal resolution power system model that captures VRE variability from hours to years. Applying this methodology to Great Britain for 2050, we find that VRE-focused power system design is highly sensitive to the inter-annual variability of weather and that planning based on a single year can lead to operational inadequacy and failure to meet long-term decarbonization objectives. However, some insights do emerge that are relatively stable to weather-year. Reinforcement of the transmission system consistently leads to a decrease in system costs while electricity storage and flexible generation, needed to integrate VRE into the system, are generally deployed close to demand centres.
Aggressive behavior in children: the role of temperament and family socialization.
González-Peña, Paloma; Egido, Begoña Delgado; Carrasco, Miguel Á; Tello, Francisco Pablo Holgado
2013-01-01
This study's objective is to analyze temperament and parenting variables as they relate to proactive and reactive aggression in children. Specifically, profiles based on these variables were analyzed in children with high levels of proactive versus reactive aggression. The sample was made up of two groups: 482 children (52.3% boys) aged 1 to 3 years, and 422 children (42.42% boys) aged 3 to 6 years. Statistical analyses of the two age groups included Pearson's correlations to explore the relationships among variables, cluster analysis to create groups with different levels of aggression, and finally discriminant analysis to determine which variables discriminate between groups. The results show that high levels of frustration/negative affect in the 1-3 year-old group and low effortful control in the 3-6 year-old group are the most relevant variables in differentiating between aggressive and non-aggressive subjects. Nevertheless, differential profiles of subjects with high levels of proactive versus reactive aggression were not observed. The implications of these different types of aggression in terms of development and prevention are discussed.
New variable stars discovered in the fields of three Galactic open clusters using the VVV survey
NASA Astrophysics Data System (ADS)
Palma, T.; Minniti, D.; Dékány, I.; Clariá, J. J.; Alonso-García, J.; Gramajo, L. V.; Ramírez Alegría, S.; Bonatto, C.
2016-11-01
This project is a massive near-infrared (NIR) search for variable stars in highly reddened and obscured open cluster (OC) fields projected on regions of the Galactic bulge and disk. The search is performed using photometric NIR data in the J, H and Ks bands obtained from the Vista Variables in the Vía Láctea (VVV) Survey. In each cluster field we performed a variability search using Stetson's variability statistics to select variable candidates. Those candidates were then subjected to a frequency analysis using the Generalized Lomb-Scargle and the Phase Dispersion Minimization algorithms. The number of independent observations ranges between 63 and 73. The newly discovered variables, 157 in total in three different known OCs, are classified based on their light curve shapes, periods, amplitudes and their location in the corresponding color-magnitude (J - Ks, Ks) and color-color (H - Ks, J - H) diagrams. We found five possible Cepheids which, based on the period-luminosity relation, are very likely type II Cepheids located behind the bulge. Among the newly discovered variables there are eclipsing binaries and δ Scuti stars, as well as background RR Lyrae stars. Using the new version of the Wilson & Devinney code as well as the "Physics Of Eclipsing Binaries" (PHOEBE) code, we analyzed some of the best eclipsing binaries we discovered. Our results show that the studied systems range from detached to double-contact binaries, with low eccentricities and high inclinations of approximately 80°. Their surface temperatures range between 3500 K and 8000 K.
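The Phase Dispersion Minimization step can be sketched in a few lines: fold the light curve on a grid of trial periods and pick the period that minimizes the within-phase-bin variance relative to the overall variance. The light curve below is synthetic, with an assumed RR Lyrae-like period and roughly 70 epochs, as in the VVV fields described above.

```python
import numpy as np

def pdm_theta(t, mag, period, nbins=10):
    """PDM statistic: mean within-phase-bin variance over the total
    variance. Minima over a period grid mark candidate periods."""
    phase = (t / period) % 1.0
    overall = mag.var(ddof=1)
    s, n = 0.0, 0
    for b in range(nbins):
        sel = mag[(phase >= b / nbins) & (phase < (b + 1) / nbins)]
        if len(sel) > 1:
            s += (len(sel) - 1) * sel.var(ddof=1)
            n += len(sel) - 1
    return (s / n) / overall

# Synthetic sparse light curve: ~70 epochs over a 400-day baseline
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 400, 70))
true_p = 0.61   # days, an assumed RR Lyrae-like period
mag = 15.0 + 0.4 * np.sin(2 * np.pi * t / true_p) + rng.normal(0, 0.02, 70)

periods = np.arange(0.2, 1.5, 0.0005)
theta = np.array([pdm_theta(t, mag, p) for p in periods])
best = periods[np.argmin(theta)]
print(f"recovered period: {best:.4f} d")
```

Unlike Lomb-Scargle, PDM makes no sinusoidal assumption, which is why the two are often used together on non-sinusoidal light curves such as eclipsing binaries.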
Beck, J D; Weintraub, J A; Disney, J A; Graves, R C; Stamm, J W; Kaste, L M; Bohannan, H M
1992-12-01
The purpose of this analysis is to compare three different statistical models for predicting which children are likely to be at risk of developing dental caries over a 3-yr period. Data are based on 4117 children who participated in the University of North Carolina Caries Risk Assessment Study, a longitudinal study conducted in the Aiken, South Carolina, and Portland, Maine areas. The three models differed with respect to either the types of variables included or the definition of the disease outcome. The two "Prediction" models included both risk factor variables thought to cause dental caries and indicator variables that are associated with dental caries but are not thought to be causal for the disease. The "Etiologic" model included only etiologic factors as variables. A dichotomous outcome measure (none versus any 3-yr increment) was used in the "Any Risk Etiologic Model" and the "Any Risk Prediction Model". Another outcome, based on a gradient measure of disease, was used in the "High Risk Prediction Model". The variables that are significant in these models vary across grades and sites, but are more consistent in the Etiologic model than in the Prediction models. However, among the three sets of models, the Any Risk Prediction Models have the highest sensitivity and positive predictive values, whereas the High Risk Prediction Models have the highest specificity and negative predictive values. Considerations in determining model preference are discussed.
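The model-comparison metrics named here (sensitivity, specificity, and the positive and negative predictive values) all follow directly from a 2×2 confusion matrix. The counts below are illustrative, not data from the North Carolina study.

```python
# Screening-model metrics from a 2x2 confusion matrix.
# tp/fp/fn/tn counts are invented for illustration only.
tp, fp, fn, tn = 120, 180, 60, 3757

sensitivity = tp / (tp + fn)   # share of true at-risk children flagged
specificity = tn / (tn + fp)   # share of low-risk children not flagged
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value

print(f"sens={sensitivity:.2f} spec={specificity:.2f} "
      f"ppv={ppv:.2f} npv={npv:.2f}")
```

The trade-off in the abstract is visible here: a model can raise sensitivity and PPV (Any Risk) or specificity and NPV (High Risk), but rarely all four at once.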
Enhancing sparsity of Hermite polynomial expansions by iterative rotations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiu; Lei, Huan; Baker, Nathan A.
2016-02-01
Compressive sensing has become a powerful addition to uncertainty quantification in recent years. This paper identifies new bases for random variables through linear mappings such that the quantity of interest has a sparser representation in the basis functions associated with the new random variables. This sparsity increases both the efficiency and accuracy of compressive sensing-based uncertainty quantification. Specifically, we consider rotation-based linear mappings, determined iteratively, for Hermite polynomial expansions. We demonstrate the effectiveness of the new method with applications in solving stochastic partial differential equations and high-dimensional (O(100)) problems.
Antioch, Kathryn M; Walsh, Michael K
2004-06-01
Hospitals throughout the world using funding based on diagnosis-related groups (DRG) have incurred substantial budgetary deficits, despite high efficiency. We identify the limitations of DRG funding that lacks risk (severity) adjustment for State-wide referral services. Methods to risk-adjust DRGs are instructive. The average price in casemix funding in the Australian State of Victoria is policy based, not benchmarked. Average cost weights are too low for high-complexity DRGs relating to State-wide referral services such as heart and lung transplantation and trauma. Risk-adjusted specified grants (RASG) are required for five high-complexity respiratory, cardiology and stroke DRGs incurring annual deficits of $3.6 million due to high casemix complexity and government under-funding despite high efficiency. Five stepwise linear regressions, one for each DRG, excluded non-significant variables and assessed heteroskedasticity and multicollinearity. Cost per patient was the dependent variable. Significant independent variables were age, length-of-stay outliers, number of disease types, diagnoses, procedures and emergency status. Diagnosis and procedure severity markers were identified. The methodology and the work of the State-wide Risk Adjustment Working Group can facilitate risk adjustment of DRGs State-wide and for Treasury negotiations for expenditure growth. The Alfred Hospital previously negotiated RASG of $14 million over 5 years for three trauma and chronic DRGs. Some chronic diseases require risk-adjusted capitation funding models for Australian Health Maintenance Organizations as an alternative to casemix funding. The use of Diagnostic Cost Groups can facilitate State and Federal government reform via new population-based, risk-adjusted funding models that measure health need.
Galea, Joseph M.; Ruge, Diane; Buijink, Arthur; Bestmann, Sven; Rothwell, John C.
2013-01-01
Action selection describes the high-level process which selects between competing movements. In animals, behavioural variability is critical for the motor exploration required to select the action which optimizes reward and minimizes cost/punishment, and is guided by dopamine (DA). The aim of this study was to test in humans whether low-level movement parameters are affected by punishment and reward in ways similar to high-level action selection. Moreover, we addressed the proposed dependence of behavioural and neurophysiological variability on DA, and whether this may underpin the exploration of kinematic parameters. Participants performed an out-and-back index finger movement and were instructed that monetary reward and punishment were based on its maximal acceleration (MA). In fact, the feedback was not contingent on the participant’s behaviour but pre-determined. Blocks highly-biased towards punishment were associated with increased MA variability relative to blocks with either reward or without feedback. This increase in behavioural variability was positively correlated with neurophysiological variability, as measured by changes in cortico-spinal excitability with transcranial magnetic stimulation over the primary motor cortex. Following the administration of a DA-antagonist, the variability associated with punishment diminished and the correlation between behavioural and neurophysiological variability no longer existed. Similar changes in variability were not observed when participants executed a pre-determined MA, nor did DA influence resting neurophysiological variability. Thus, under conditions of punishment, DA-dependent processes influence the selection of low-level movement parameters. We propose that the enhanced behavioural variability reflects the exploration of kinematic parameters for less punishing, or conversely more rewarding, outcomes. PMID:23447607
NASA Astrophysics Data System (ADS)
Ten Veldhuis, M. C.; Smith, J. A.; Zhou, Z.
2017-12-01
Impacts of rainfall variability on runoff response are highly scale-dependent. Sensitivity analyses based on hydrological model simulations have shown that impacts are likely to depend on combinations of storm type, basin versus storm scale, and temporal versus spatial rainfall variability. So far, few of these conclusions have been confirmed on observational grounds, since high-quality datasets of spatially variable rainfall and runoff over prolonged periods are rare. Here we investigate relationships between rainfall variability and runoff response based on 30 years of radar-rainfall datasets and flow measurements for 16 hydrological basins ranging from 7 to 111 km2. Basins vary not only in scale, but also in their degree of urbanisation. We investigated temporal and spatial variability characteristics of rainfall fields across a range of spatial and temporal scales to identify the main drivers of variability in runoff response. We identified three ranges of basin size with different temporal versus spatial rainfall variability characteristics. Total rainfall volume proved to be the dominant agent determining runoff response at all basin scales, independent of the degree of urbanisation. Peak rainfall intensity and storm core volume are of secondary importance. This applies to all runoff parameters, including runoff volume, runoff peak, volume-to-peak and lag time. Position and movement of the storm with respect to the basin have a negligible influence on runoff response, with the exception of lag times in some of the larger basins. This highlights the importance of accuracy in rainfall estimation: getting the position right but the volume wrong will inevitably lead to large errors in runoff prediction. Our study helps to identify conditions where rainfall variability matters for correct estimation of the rainfall volume as well as the associated runoff response.
NASA Technical Reports Server (NTRS)
Ray, Richard D.; Byrne, Deidre A.
2010-01-01
Seafloor pressure records, collected at 11 stations aligned along a single ground track of the Topex/Poseidon and Jason satellites, are analyzed for their tidal content. With very low background noise levels and approximately 27 months of high-quality records, tidal constituents can be estimated with unusually high precision. This includes many high-frequency lines up through the seventh-diurnal band. The station deployment provides a unique opportunity to compare with tides estimated from satellite altimetry, point by point along the satellite track, in a region of moderately high mesoscale variability. That variability can significantly corrupt altimeter-based tide estimates, even with 17 years of data. A method to improve the along-track altimeter estimates by correcting the data for nontidal variability is found to yield much better agreement with the bottom-pressure data. The technique should prove useful in certain demanding applications, such as altimetric studies of internal tides.
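Tidal constituent estimation of the kind described above is, at its core, a least-squares fit of cosine/sine pairs at known tidal frequencies. A minimal sketch on a synthetic hourly pressure record of roughly 27 months, with assumed M2 and S2 amplitudes and phases:

```python
import numpy as np

# Standard M2 and S2 frequencies in cycles per day; the record is synthetic.
freqs = {"M2": 1.9322736, "S2": 2.0}
t = np.arange(0, 27 * 30, 1.0 / 24.0)            # hourly samples, ~27 months
truth = {"M2": (0.50, 1.0), "S2": (0.20, 0.3)}   # amplitude (m), phase (rad)

h = np.zeros_like(t)
for k, (a, g) in truth.items():
    h += a * np.cos(2 * np.pi * freqs[k] * t - g)
h += np.random.default_rng(2).normal(0, 0.05, t.size)  # background noise

# Design matrix: one cos/sin column pair per constituent,
# since a*cos(wt - g) = (a cos g) cos wt + (a sin g) sin wt.
cols = []
for f in freqs.values():
    cols += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
A = np.column_stack(cols)
x, *_ = np.linalg.lstsq(A, h, rcond=None)

amps = {k: np.hypot(x[2 * i], x[2 * i + 1]) for i, k in enumerate(freqs)}
print(amps)
```

With low background noise and a long record, the fitted amplitudes converge tightly on the true values, which is why the bottom-pressure constituents in the study can be estimated with unusually high precision.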
ERIC Educational Resources Information Center
Howard, Donna E.; Wang, Min Qi; Yah, Fang
2008-01-01
The present study, based upon the national 2005 Youth Risk Behavior Survey of U.S. high school students, provides the most current and representative data on physical dating violence among adolescent males (N = 6,528). The dependent variable was physical dating violence. The independent variables included four dimensions: violence, suicide,…
BN-FLEMOps pluvial - A probabilistic multi-variable loss estimation model for pluvial floods
NASA Astrophysics Data System (ADS)
Roezer, V.; Kreibich, H.; Schroeter, K.; Doss-Gollin, J.; Lall, U.; Merz, B.
2017-12-01
Pluvial flood events, such as in Copenhagen (Denmark) in 2011, Beijing (China) in 2012 or Houston (USA) in 2016, have caused severe losses to urban dwellings in recent years. These floods are caused by storm events with rainfall rates well above the design levels of urban drainage systems, which lead to inundation of streets and buildings. A projected increase in the frequency and intensity of heavy rainfall events in many areas, together with ongoing urbanization, may increase pluvial flood losses in the future. Efficient risk assessment and adaptation to pluvial floods require a quantification of the flood risk. Few loss models have been developed specifically for pluvial floods. These models usually use simple water-level- or rainfall-loss functions and come with very high uncertainties. To account for these uncertainties and improve the loss estimation, we present a probabilistic multi-variable loss estimation model for pluvial floods based on empirical data. The model was developed in a two-step process using a machine learning approach and a comprehensive database comprising 783 records of direct building and contents damage of private households. The data were gathered through surveys after four different pluvial flood events in Germany between 2005 and 2014. In a first step, linear and non-linear machine learning algorithms, such as tree-based and penalized regression models, were used to identify the most important loss-influencing factors among a set of 55 candidate variables. These variables comprise hydrological and hydraulic aspects, early warning, precaution, building characteristics and the socio-economic status of the household. In a second step, the most important loss-influencing variables were used to derive a probabilistic multi-variable pluvial flood loss estimation model based on Bayesian networks. Two different networks were tested: a score-based network learned from the data and a network based on expert knowledge. 
Loss predictions are made through Bayesian inference using Markov chain Monte Carlo (MCMC) sampling. With the ability to cope with incomplete information, to use expert knowledge, and to inherently provide quantitative uncertainty information, loss models based on Bayesian networks are shown to be superior to deterministic approaches for pluvial flood risk assessment.
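A toy version of the Bayesian-network idea: discrete inference by enumeration, where unobserved parent nodes are marginalized out — this is how such a model copes with incomplete survey records. All node states and probabilities below are invented for illustration; they are not values from the BN-FLEMOps model.

```python
# Two parents (water depth, precaution) and one child (loss class).
P_depth = {"low": 0.6, "high": 0.4}
P_prec = {"yes": 0.5, "no": 0.5}
# Conditional table P(loss | depth, precaution), invented values
P_loss = {
    ("low", "yes"):  {"minor": 0.85, "major": 0.15},
    ("low", "no"):   {"minor": 0.65, "major": 0.35},
    ("high", "yes"): {"minor": 0.45, "major": 0.55},
    ("high", "no"):  {"minor": 0.20, "major": 0.80},
}

def loss_posterior(depth=None, prec=None):
    """P(loss) by enumeration; parents without evidence are
    marginalized out using their priors."""
    post = {"minor": 0.0, "major": 0.0}
    for d, pd in P_depth.items():
        if depth is not None and d != depth:
            continue
        for p, pp in P_prec.items():
            if prec is not None and p != prec:
                continue
            w = (1.0 if depth else pd) * (1.0 if prec else pp)
            for k in post:
                post[k] += w * P_loss[(d, p)][k]
    z = sum(post.values())
    return {k: v / z for k, v in post.items()}

print(loss_posterior(depth="high"))             # precaution unknown
print(loss_posterior(depth="high", prec="no"))  # full evidence
```

In the full model, exact enumeration is replaced by MCMC sampling, but the handling of missing evidence is conceptually the same.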
Natural wind variability triggered drop in German redispatch volume and costs from 2015 to 2016
Reyers, Mark; Märker, Carolin; Witthaut, Dirk
2018-01-01
Avoiding dangerous climate change necessitates the decarbonization of electricity systems within the next few decades. In Germany, this decarbonization is based on an increased exploitation of variable renewable electricity sources such as wind and solar power. While system security has remained constantly high, the integration of renewables causes additional costs. In 2015, the costs of grid management reached an all-time high of about €1 billion. Despite the addition of renewable capacity, these costs dropped substantially in 2016. We therefore investigate the effect of natural climate variability on grid management costs, focusing on redispatch as a main cost driver. We show that the decline was triggered by natural wind variability. In particular, we find that 2016 was a weak year in terms of wind generation averages and the occurrence of westerly circulation weather types. Moreover, we show that a simple model based on the wind generation time series is skillful in detecting redispatch events on timescales of weeks and beyond. As a consequence, alterations in annual redispatch costs on the order of hundreds of millions of euros need to be understood and communicated as a normal feature of the current system due to natural wind variability. PMID:29329349
Assessment of WENO-extended two-fluid modelling in compressible multiphase flows
NASA Astrophysics Data System (ADS)
Kitamura, Keiichi; Nonomura, Taku
2017-03-01
The two-fluid modelling based on an advection-upstream-splitting-method (AUSM)-family numerical flux function, AUSM+-up, following the work of Chang and Liou [Journal of Computational Physics 2007; 225: 840-873], has been successfully extended to fifth order by weighted essentially non-oscillatory (WENO) schemes. Its performance is then surveyed in several numerical tests. The results showed the desired performance in one-dimensional benchmark test problems: without relying upon an anti-diffusion device, the higher-order two-fluid method captures the phase interface within fewer grid points than the conventional second-order method, as well as a rarefaction wave and a very weak shock. At a high pressure ratio (e.g. 1,000), the interpolated variables appeared to affect the performance: the conservative-variable-based characteristic-wise WENO interpolation showed less sharp but more robust representations of the shocks and expansions than the primitive-variable-based counterpart did. In the two-dimensional shock/droplet test case, however, only the primitive-variable-based WENO with a large void fraction realised a stable computation.
Hanley, James A; Hutcheon, Jennifer A
2010-05-01
It is widely believed that young children are able to adjust their energy intake across successive meals to compensate for higher or lower intakes at a given meal. This conclusion is based on past observations that although children's intake at individual meals is highly variable, total daily intakes are relatively constant. We investigated how much of this reduction in variability could be explained by the statistical phenomenon of the variability of individual components (each meal) always being relatively larger than the variability of their sum (total daily intake), independent of any physiological compensatory mechanism. We calculated, theoretically and by simulation, how variable a child's daily intake would be if there was no correlation between intakes at individual meals. We simulated groups of children with meal/snack intakes and variability in meal/snack intakes based on previously published values. Most importantly, we assumed that there was no correlation between intakes on successive meals. In both approaches, the coefficient of variation of the daily intakes was roughly 15%, considerably less than the 34% for individual meals. Thus, most of the reduction in variability found in past studies was explained without positing strong 'compensation'. Although children's daily energy intakes are indeed considerably less variable than their individual components, this phenomenon was observed even when intakes at each meal were simulated to be totally independent. We conclude that the commonly held belief that young children have a strong physiological compensatory mechanism to adjust intake at one meal based on intake at prior meals is likely to be based on flawed statistical reasoning.
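The statistical point above can be reproduced in a few lines: if intakes at k meals are independent with a common coefficient of variation cv, the daily total has cv/√k, with no compensation at all. The meal count and mean intake below are assumptions; the 34% meal-level CV mirrors the value cited in the abstract.

```python
import numpy as np

# k independent eating occasions per day, each with CV = 34%.
rng = np.random.default_rng(3)
k, cv_meal, mean_meal = 5, 0.34, 300.0   # 5 occasions, 300 kcal each (assumed)
meals = rng.normal(mean_meal, cv_meal * mean_meal, size=(100_000, k))
daily = meals.sum(axis=1)

cv_daily = daily.std() / daily.mean()
print(f"meal CV: {cv_meal:.0%}, simulated daily CV: {cv_daily:.1%}")
```

With five occasions, 0.34/√5 ≈ 15%, which matches the "relatively constant" daily intakes reported in past studies despite zero correlation between meals — exactly the authors' point.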
Metabolic power and energetic costs of professional Australian Football match-play.
Coutts, Aaron J; Kempton, Thomas; Sullivan, Courtney; Bilsborough, Johann; Cordy, Justin; Rampinini, Ermanno
2015-03-01
To compare the metabolic power demands between positional groups and examine temporal changes in these parameters during Australian Football match-play. Longitudinal observational study. Global positioning system data were collected from 39 Australian Football players from the same club during 19 Australian Football League competition games over two seasons. A total of 342 complete match samples were obtained for analysis. Players were categorised into one of six positional groups: tall backs, mobile backs, midfielders, tall forwards, mobile forwards and rucks. Instantaneous raw velocity data obtained from the global positioning system units were exported to a customised spreadsheet which provided estimations of both speed-based variables (e.g. total and high-speed running distance) and derived metabolic power and energy expenditure variables (e.g. average metabolic power, high-power distance, total energy expenditure). There were significant differences between positional groups for both speed-based and metabolic power indices, with midfielders covering more total and high-speed distance, as well as greater average and overall energy expenditure, compared to other positions (all p<0.001). There were reductions in total, high-speed, and high-power distance, as well as average metabolic power, throughout the match (all p<0.001). Positional differences exist for both metabolic power and traditional running-based variables. Generally, midfielders, followed by mobile forwards and mobile backs, had greater activity profiles compared to other position groups. We observed that the reductions in most metabolic power variables during the course of the match are comparable to those in traditional running-based metrics. This study demonstrates that metabolic power data may contribute to our understanding of the physical demands of Australian Football. Copyright © 2014 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
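Derived metabolic power of this kind is commonly computed from speed and acceleration via the equivalent-slope approach of di Prampero and Osgnach. The polynomial coefficients below are the commonly cited ones and the speed trace is invented, so treat this as a sketch of the general technique rather than the authors' exact pipeline.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def metabolic_power(v, a):
    """Estimated metabolic power (W/kg) from speed v (m/s) and forward
    acceleration a (m/s^2) via the equivalent-slope approach. The
    polynomial coefficients are assumptions of this sketch."""
    es = a / G                        # equivalent slope
    em = np.sqrt(es**2 + 1.0)         # equivalent mass factor
    cost = (155.4 * es**5 - 30.4 * es**4 - 43.3 * es**3
            + 46.3 * es**2 + 19.5 * es + 3.6) * em   # energy cost, J/kg/m
    return cost * v                   # power = cost * speed

# 10 Hz speed trace (invented): derive acceleration, then average power
v = np.array([2.0, 2.5, 3.1, 3.8, 4.4, 4.9, 5.2, 5.3, 5.3, 5.2])
a = np.gradient(v, 0.1)
p = metabolic_power(v, a)
print(f"average metabolic power: {p.mean():.1f} W/kg")
```

Note that at zero acceleration the energy cost reduces to the level-running constant of 3.6 J/kg/m, so constant-speed running yields power proportional to speed; accelerations inflate the estimate, which is what distinguishes metabolic power from plain speed-based metrics.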
Current, K. Wayne; Yuk, Kelvin; McConaghy, Charles; Gascoyne, Peter R. C.; Schwartz, Jon A.; Vykoukal, Jody V.; Andrews, Craig
2010-01-01
A high-voltage (HV) integrated circuit has been demonstrated to transport droplets on programmable paths across its coated surface. This chip is the engine for a dielectrophoresis (DEP)-based micro-fluidic lab-on-a-chip system. This chip creates DEP forces that move and help inject droplets. Electrode excitation voltage and frequency are variable. With the electrodes driven with a 100V peak-to-peak periodic waveform, the maximum high-voltage electrode waveform frequency is about 200Hz. Data communication rate is variable up to 250kHz. This demonstration chip has a 32×32 array of nominally 100V electrode drivers. It is fabricated in a 130V SOI CMOS fabrication technology, dissipates a maximum of 1.87W, and is about 10.4 mm × 8.2 mm. PMID:23989241
ERIC Educational Resources Information Center
Fletcher, Edward C., Jr.
2012-01-01
The purpose of this study was to predict occupational choices based on demographic variables and high school curriculum tracks. Based on an analysis of the 1997 National Longitudinal Survey of Youth (NLSY) data set that examined high school graduates' occupational choices in 2006, findings indicated that CTE graduates were 2.7 times more likely to…
H.E. Anderson; J. Breidenbach
2007-01-01
Airborne laser scanning (LIDAR) can be a valuable tool in double-sampling forest survey designs. LIDAR-derived forest structure metrics are often highly correlated with important forest inventory variables, such as mean stand biomass, and LIDAR-based synthetic regression estimators have the potential to be highly efficient compared to single-stage estimators, which...
Performance, physiological, and oculometer evaluation of VTOL landing displays
NASA Technical Reports Server (NTRS)
North, R. A.; Stackhouse, S. P.; Graffunder, K.
1979-01-01
A methodological approach to measuring workload was investigated for evaluation of new concepts in VTOL aircraft displays. Physiological, visual response, and conventional flight performance measures were recorded for landing approaches performed in the NASA Visual Motion Simulator (VMS). Three displays (two computer graphic and a conventional flight director), three crosswind amplitudes, and two motion base conditions (fixed vs. moving base) were tested in a factorial design. Multivariate discriminant functions were formed from flight performance and/or visual response variables. The flight performance variable discriminant showed maximum differentiation between crosswind conditions. The visual response measure discriminant maximized differences between fixed vs. motion base conditions and experimental displays. Physiological variables were used to attempt to predict the discriminant function values for each subject/condition trial. The weights of the physiological variables in these equations showed agreement with previous studies. High muscle tension, light but irregular breathing patterns, and higher heart rate with low amplitude all produced higher scores on this scale and thus represent higher workload levels.
A multiple-alignment based primer design algorithm for genetically highly variable DNA targets
2013-01-01
Background Primer design for highly variable DNA sequences is difficult, and experimental success requires attention to many interacting constraints. The advent of next-generation sequencing methods allows the investigation of rare variants otherwise hidden deep in large populations, but requires attention to population diversity and primer localization in relatively conserved regions, in addition to recognized constraints typically considered in primer design. Results Design constraints include degenerate sites to maximize population coverage, matching of melting temperatures, optimizing de novo sequence length, finding optimal bio-barcodes to allow efficient downstream analyses, and minimizing risk of dimerization. To facilitate primer design addressing these and other constraints, we created a novel computer program (PrimerDesign) that automates this complex procedure. We show its powers and limitations and give examples of successful designs for the analysis of HIV-1 populations. Conclusions PrimerDesign is useful for researchers who want to design DNA primers and probes for analyzing highly variable DNA populations. It can be used to design primers for PCR, RT-PCR, Sanger sequencing, next-generation sequencing, and other experimental protocols targeting highly variable DNA samples. PMID:23965160
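Two of the constraints named above (degenerate sites for population coverage, melting-temperature matching) can be illustrated with textbook rules. This is a sketch of the ideas only, not PrimerDesign's actual implementation, and the helper names are ours:

```python
# Hypothetical helpers illustrating two primer-design constraints; not taken
# from the PrimerDesign program itself.
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T", "R": "AG", "Y": "CT",
         "S": "GC", "W": "AT", "K": "GT", "M": "AC", "N": "ACGT"}

def wallace_tm(primer):
    """Rough melting temperature via the Wallace rule: 2(A+T) + 4(G+C)."""
    at = sum(primer.count(b) for b in "AT")
    gc = sum(primer.count(b) for b in "GC")
    return 2 * at + 4 * gc

def population_coverage(degenerate_primer, variants):
    """Fraction of observed sequence variants matched by a degenerate primer."""
    def matches(variant):
        return all(v in IUPAC[d] for d, v in zip(degenerate_primer, variant))
    return sum(matches(v) for v in variants) / len(variants)

variants = ["ACGT", "ACGA", "ATGT"]
print(wallace_tm("ACGTACGT"))                 # -> 24
print(population_coverage("ACRT", variants))  # R matches A or G
```

In a real design loop these checks would run over candidate primers drawn from the conserved regions of a multiple alignment, together with the dimerization and barcode constraints the abstract mentions.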
Coswig, Victor S; Gentil, Paulo; Bueno, João C A; Follmer, Bruno; Marques, Vitor A; Del Vecchio, Fabrício B
2018-01-01
Among combat sports, Judo and Brazilian Jiu-Jitsu (BJJ) present elevated physical fitness demands from the high-intensity intermittent efforts. However, information regarding how metabolic and neuromuscular physical fitness is associated with technical-tactical performance in Judo and BJJ fights is not available. This study aimed to relate indicators of physical fitness with combat performance variables in Judo and BJJ. The sample consisted of Judo (n = 16) and BJJ (n = 24) male athletes. At the first meeting, the physical tests were applied and, in the second, simulated fights were performed for later notational analysis. The main findings indicate: (i) high reproducibility of the proposed instrument and protocol used for notational analysis in a mobile device; (ii) differences in the technical-tactical and time-motion patterns between modalities; (iii) performance-related variables are different in Judo and BJJ; and (iv) regression models based on metabolic fitness variables may account for up to 53% of the variances in technical-tactical and/or time-motion variables in Judo and up to 31% in BJJ, whereas neuromuscular fitness models can reach values up to 44 and 73% of prediction in Judo and BJJ, respectively. When all components are combined, they can explain up to 90% of high intensity actions in Judo. In conclusion, performance prediction models in simulated combat indicate that anaerobic, aerobic and neuromuscular fitness variables contribute to explain time-motion variables associated with high intensity and technical-tactical variables in Judo and BJJ fights.
Control of variable speed variable pitch wind turbine based on a disturbance observer
NASA Astrophysics Data System (ADS)
Ren, Haijun; Lei, Xin
2017-11-01
In this paper, a novel sliding mode controller based on a disturbance observer (DOB) to optimize the efficiency of a variable speed variable pitch (VSVP) wind turbine is developed and analyzed. Due to the high nonlinearity of the VSVP system, the model is linearized to obtain the state space model of the system. Then, a conventional sliding mode controller is designed and a DOB is added to estimate wind speed. The proposed control strategy can successfully deal with the random nature of wind speed, the nonlinearity of the VSVP system, the uncertainty of parameters and external disturbance. By adding the observer to the sliding mode controller, it can greatly reduce the chattering produced by the sliding mode switching gain. The simulation results show that the proposed control system is effective and robust.
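The control structure described in the abstract can be sketched in one dimension; the plant, gains, and disturbance below are our toy choices, not the paper's VSVP model:

```python
import numpy as np

# Toy 1-D sketch of sliding mode control with a disturbance observer (DOB):
# the observer estimates the unknown input d, so the switching gain can stay
# small, which reduces chattering. All gains and the plant are illustrative.
dt, T = 0.001, 5.0
x, x_ref = 0.0, 1.0            # state and constant reference
d_hat, l_gain = 0.0, 50.0      # DOB estimate and observer gain
k_switch = 0.5                 # small switching gain, thanks to the DOB

for step in range(int(T / dt)):
    t = step * dt
    d = 0.8 * np.sin(np.pi * t)                 # unknown "wind" disturbance
    s = x - x_ref                               # sliding surface
    u = x - d_hat - 2.0 * s - k_switch * np.sign(s)   # equiv. + switching - d_hat
    x_dot = -x + u + d                          # simple stable plant
    d_hat += l_gain * (x_dot - (-x + u + d_hat)) * dt  # observer sees d - d_hat
    x += x_dot * dt

print(abs(x - x_ref) < 0.05)   # small tracking error despite the disturbance
```

With the observer cancelling most of d, the switching term only has to dominate the residual estimation error rather than the full disturbance amplitude, which is the chattering-reduction argument the abstract makes.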
The antigenic evolution of influenza: drift or thrift?
Wikramaratna, Paul S.; Sandeman, Michi; Recker, Mario; Gupta, Sunetra
2013-01-01
It is commonly assumed that antibody responses against the influenza virus are polarized in the following manner: strong antibody responses are directed at highly variable antigenic epitopes, which consequently undergo ‘antigenic drift’, while weak antibody responses develop against conserved epitopes. As the highly variable epitopes are in a constant state of flux, current antibody-based vaccine strategies are focused on the conserved epitopes in the expectation that they will provide some level of clinical protection after appropriate boosting. Here, we use a theoretical model to suggest the existence of epitopes of low variability, which elicit a high degree of both clinical and transmission-blocking immunity. We show that several epidemiological features of influenza and its serological and molecular profiles are consistent with this model of ‘antigenic thrift’, and that identifying the protective epitopes of low variability predicted by this model could offer a more viable alternative to regularly update the influenza vaccine than exploiting responses to weakly immunogenic conserved regions. PMID:23382423
DOT National Transportation Integrated Search
2015-01-01
Millions of tons of graded aggregate base (GAB) materials are used in construction of highway base layers in Maryland due to their satisfactory mechanical properties. The fines content of a GAB material is highly variable and is often related to ...
Choosing and Leaving Science in Highly Selective Institutions.
ERIC Educational Resources Information Center
Strenta, A. Christopher; And Others
1994-01-01
A study investigated causes of initial interest in and attrition from natural sciences and engineering among 5,320 students entering 4 highly selective institutions in 1988, with attention to probable causes of disproportionate attrition of women. Reasons for high attrition were based on cognitive variables or the perceived "chilly"…
He, Yan-Lin; Xu, Yuan; Geng, Zhi-Qiang; Zhu, Qun-Xiong
2016-03-01
In this paper, a hybrid robust model based on an improved functional link neural network integrated with partial least squares (IFLNN-PLS) is proposed. Firstly, an improved functional link neural network with small norm of expanded weights and high input-output correlation (SNEWHIOC-FLNN) was proposed for enhancing the generalization performance of FLNN. Unlike the traditional FLNN, the expanded variables of the original inputs are not directly used as the inputs in the proposed SNEWHIOC-FLNN model. The original inputs are attached to some small norm of expanded weights. As a result, the correlation coefficient between some of the expanded variables and the outputs is enhanced. The larger the correlation coefficient is, the more relevant the expanded variables tend to be. In the end, the expanded variables with larger correlation coefficients are selected as the inputs to improve the performance of the traditional FLNN. In order to test the proposed SNEWHIOC-FLNN model, three UCI (University of California, Irvine) regression datasets named Housing, Concrete Compressive Strength (CCS), and Yacht Hydrodynamics (YHD) were selected. Then a hybrid model based on the improved FLNN integrated with partial least squares (IFLNN-PLS) was built. In the IFLNN-PLS model, the connection weights are calculated using the partial least squares method rather than the error back propagation algorithm. Lastly, IFLNN-PLS was developed as an intelligent measurement model for accurately predicting the key variables in the Purified Terephthalic Acid (PTA) process and the High Density Polyethylene (HDPE) process. Simulation results illustrated that the IFLNN-PLS could significantly improve prediction performance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
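The selection step described above (expand the inputs with functional links, then keep the expanded variables most correlated with the output) can be sketched as follows; the data and expansion terms are illustrative, not the paper's:

```python
import numpy as np

# Sketch of correlation-based selection of functional-link expansions.
# Synthetic data: the output depends on X0**2 and sin(X1).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X[:, 0] ** 2 + np.sin(X[:, 1]) + 0.1 * rng.normal(size=200)

def functional_link_expand(X):
    """Classic FLNN-style expansion: raw inputs plus squared and sine terms."""
    return np.hstack([X, X ** 2, np.sin(X)])

Z = functional_link_expand(X)          # columns 0-2 raw, 3-5 squares, 6-8 sines
corr = np.array([abs(np.corrcoef(Z[:, j], y)[0, 1]) for j in range(Z.shape[1])])
top = np.argsort(corr)[::-1][:4]       # keep the 4 most output-correlated terms
print(sorted(top.tolist()))            # should favour X0**2 (col 3), sin(X1) (col 7)
```

The highly correlated expansions (here X0**2 and sin(X1)) are the ones a model in this spirit would retain as inputs; the weakly correlated ones are discarded.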
A study of the factors affecting advancement and graduation for engineering students
NASA Astrophysics Data System (ADS)
Fletcher, John Thomas
The purpose of this study was, first, to determine whether a set of predictor variables could be identified from pre-enrollment and post-enrollment data that would differentiate students who advance to a major in engineering from non-advancers and, further, to determine if the predictor variables would differentiate students who graduate from the College of Engineering from non-graduates and graduates of other colleges at Auburn University. A second purpose was to determine if the predictor variables would correctly identify male and female students with the same degree of accuracy. The third purpose was to determine if there were significant relationships between the predictor variables studied and grades earned in a set of 15 courses that have enrollments over 100 students and are part of the pre-engineering curriculum. The population for this study was the 868 students who entered the pre-engineering program at Auburn University as freshmen during the Summer and Fall Quarters of 1991. The variables selected to differentiate the different groups were ACT scores, high school grade indices, and first quarter college grade point average. Two sets of classification matrices were developed using analysis and holdout samples that were divided based on sex. With respect to the question about advancement to the professional engineering program, structure coefficients derived from discriminant analysis procedures performed on all the cases combined indicated that first quarter college grade point average, high school math index, ACT math score, and high school science grade index were important predictor variables in classifying students who advanced to the professional engineering program and those who did not. Further, important structure coefficients with respect to graduation with a degree from the College of Engineering were first quarter college grade point average, high school math index, ACT math score, and high school science grade index. 
The results of this study indicated that significant differences existed in the model's ability to predict advancement and graduation for male and female students. This difference was not unexpected based on the male-dominated population. However, the models identified predicted at a high rate for both male and female students. Finally, many significant relationships were found to exist between the predictor variables and the 15 pre-engineering courses that were selected. The strength of the relationships ranged from a high of .82, p < .001 (Chemistry 103 grade with total high school grade index) to a low of .07, p > .05 (Chemistry 102 with ACT science score).
Socio-economic factors and suicide rates in European Union countries.
Ferretti, Fabio; Coluccia, Anna
2009-04-01
Are socio-economic factors valid determinants of suicide? The modern sociological theory of suicide is based on Durkheim's studies. In addition to these fundamental social determinants, modern theorists have put more attention on economic factors. The purpose of the research is to determine the relationship between suicide rates and socio-economic factors, such as demography, economic development, education, healthcare systems, living conditions and labour market. All data were collected from a Eurostat publication and they concern 25 European Union countries. In order to test this relationship, a discriminant analysis was performed using an ordinal dependent variable and a set of independent variables concerning socio-economic factors. A dataset of 37 independent variables was used. We estimated a model with five variables: annual growth rates for industry, people working in S&T (% of total employment), at-risk-of-poverty rate, all accidents (standardized rates), and healthcare expenditures (% of GDP). Highly significant values of Wilks' lambda indicate good discriminating power of the model. The accuracy is also very high: all cases are correctly classified by the model. Countries with high suicide rate levels are marked by high levels of at-risk-of-poverty rates, high annual growth rates for industry and low healthcare expenditures.
NASA Astrophysics Data System (ADS)
Natali, Marco; Passeri, Daniele; Reggente, Melania; Tamburri, Emanuela; Terranova, Maria Letizia; Rossi, Marco
2016-06-01
Characterization of mechanical properties at the nanometer scale at variable temperature is one of the main challenges in the development of polymer-based nanocomposites for application in high temperature environments. Contact resonance atomic force microscopy (CR-AFM) is a powerful technique to characterize viscoelastic properties of materials at the nanoscale. In this work, we demonstrate the capability of CR-AFM to characterize viscoelastic properties (i.e., storage and loss moduli, as well as loss tangent) of polymer-based nanocomposites at variable temperature. CR-AFM is first illustrated on two polymeric reference samples, i.e., low-density polyethylene (LDPE) and polycarbonate (PC). Then, temperature-dependent viscoelastic properties (in terms of loss tangent) of a nanocomposite sample consisting of an epoxy resin reinforced with single-wall carbon nanotubes (SWCNTs) are investigated.
Malaria control under unstable dynamics: reactive vs. climate-based strategies.
Baeza, Andres; Bouma, Menno J; Dhiman, Ramesh; Pascual, Mercedes
2014-01-01
In areas of the world where malaria prevails under unstable conditions, attacking the adult vector population through insecticide-based Indoor Residual Spraying (IRS) is the most common method for controlling epidemics. Defined in policy guidance, the use of Annual Parasitic Incidence (API) is an important tool for assessing the effectiveness of control and for planning new interventions. To investigate the consequences that a policy based on API in previous seasons might have on the population dynamics of the disease and on control itself in regions of low and seasonal transmission, we formulate a mathematical malaria model that couples epidemiologic and vector dynamics with IRS intervention. This model is parameterized for a low transmission and semi-arid region in northwest India, where epidemics are driven by high rainfall variability. We show that this type of feedback mechanism in control strategies can generate transient cycles in malaria even in the absence of environmental variability, and that this tendency to cycle can in turn limit the effectiveness of control in the presence of such variability. Specifically, for realistic rainfall conditions and over a range of control intensities, the effectiveness of such 'reactive' intervention is compared to that of an alternative strategy based on rainfall and therefore vector variability. Results show that the efficacy of intervention is strongly influenced by rainfall variability and the type of policy implemented. In particular, under an API 'reactive' policy, high vector populations can coincide more frequently with low control coverage, and in so doing generate large unexpected epidemics and decrease the likelihood of elimination. These results highlight the importance of incorporating information on climate variability, rather than previous incidence, in planning IRS interventions in regions of unstable malaria. 
These findings are discussed in the more general context of elimination and other low transmission regions such as highlands. Copyright © 2013. Published by Elsevier B.V.
Optical And Near-infrared Variability Among Distant Galactic Nuclei Of The CANDELS EGS Field
NASA Astrophysics Data System (ADS)
Grogin, Norman A.; Dahlen, T.; Donley, J.; Koekemoer, A. M.; Salvato, M.; CANDELS Collaboration
2014-01-01
The CANDELS HST Multi-cycle Treasury Program completed its observations of the EGS field in May 2013. The coverage comprises WFC3/IR exposures in J-band and H-band across a contiguous 200 square arcminutes, and coordinated parallel ACS/WFC exposures in V-band and I-band across a contiguous 270 square arcminutes that largely overlaps the WFC3/IR coverage. These observations were split between two epochs with 52-day spacing for the primary purpose of high-redshift supernovae (SNe) detection and follow-up. However, this combination of sensitivity, high resolution, and time spacing is also well-suited to detect optical and near-infrared variability ("ONIV") among moderate- to high-redshift galaxy nuclei (H<25AB mag; I<26AB mag). These data are sensitive to rest-frame variability time-scales of up to several weeks, and in combination with the original EGS ACS imaging from 2004, to time-scales of up to several years in the V- and I-bands. The overwhelming majority of these variable galaxy nuclei will be AGN; the small fraction arising from SNe have already been meticulously culled by the CANDELS high-redshift SNe search effort. These ONIV galaxy nuclei potentially represent a significant addition to the census of distant lower-luminosity AGN subject to multi-wavelength scrutiny with CANDELS. We present the preliminary results of our EGS variability analysis, including a comparison of the HST ONIVs with the known AGN candidates in the field from deep Spitzer and Chandra imaging, and from extensive ground-based optical spectroscopy as well as HST IR-grism spectroscopy. We also assess the redshift distribution of the ONIVs from both spectroscopy and from robust SED-fitting incorporating ancillary deep ground-based imaging along with the CANDELS VIJH photometry. We compare these results with our prior variability analysis of the similarly-observed CANDELS UDS field from 2011 and CANDELS COSMOS field from 2012.
Heart rate variability based on risk stratification for type 2 diabetes mellitus.
Silva-E-Oliveira, Julia; Amélio, Pâmela Marina; Abranches, Isabela Lopes Laguardia; Damasceno, Dênis Derly; Furtado, Fabianne
2017-01-01
To evaluate heart rate variability among adults with different risk levels for type 2 diabetes mellitus. The risk for type 2 diabetes mellitus was assessed in 130 participants (89 females) based on the questionnaire Finnish Diabetes Risk Score and was classified as low risk (n=26), slightly elevated risk (n=41), moderate risk (n=27) and high risk (n=32). To measure heart rate variability, a heart-rate monitor Polar S810i® was employed to obtain RR series for each individual, at rest, for 5 minutes, followed by analysis of linear and nonlinear indexes. The groups at higher risk of type 2 diabetes mellitus had significantly lower linear and nonlinear heart rate variability indexes. The individuals at high risk for type 2 diabetes mellitus have lower heart rate variability.
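The abstract does not list which linear indexes were computed; two standard time-domain choices, shown here as a sketch on synthetic data, are SDNN and RMSSD:

```python
import numpy as np

# Two standard linear (time-domain) heart rate variability indexes computed
# from an RR-interval series. The RR values below are synthetic examples.
def sdnn(rr_ms):
    """Standard deviation of all RR intervals (ms)."""
    return float(np.std(rr_ms, ddof=1))

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences (ms)."""
    diff = np.diff(rr_ms)
    return float(np.sqrt(np.mean(diff ** 2)))

rr = [812, 790, 805, 821, 797, 810]   # toy excerpt of a 5-minute recording (ms)
print(round(sdnn(rr), 1), round(rmssd(rr), 1))
```

Lower values of such indexes in the higher-risk groups are what the study reports as reduced heart rate variability.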
2009-01-01
Background Shigella flexneri is one of the causative agents of shigellosis, a major cause of childhood mortality in developing countries. Multilocus variable-number tandem repeat (VNTR) analysis (MLVA) is a prominent subtyping method to resolve closely related bacterial isolates for investigation of disease outbreaks and provide information for establishing phylogenetic patterns among isolates. The present study aimed to develop an MLVA method for S. flexneri and the VNTR loci identified were tested on 242 S. flexneri isolates to evaluate their variability in various serotypes. The isolates were also analyzed by pulsed-field gel electrophoresis (PFGE) to compare the discriminatory power and to evaluate the usefulness of MLVA as a tool for phylogenetic analysis of S. flexneri. Results Thirty-six VNTR loci were identified by exploring the repeat sequence loci in genomic sequences of Shigella species and by testing the loci on nine isolates of different subserotypes. The VNTR loci in different serotype groups differed greatly in their variability. The discriminatory power of an MLVA assay based on four most variable VNTR loci was higher, though not significantly, than PFGE for the total isolates, a panel of 2a isolates, which were relatively diverse, and a panel of 4a/Y isolates, which were closely-related. Phylogenetic groupings based on PFGE patterns and MLVA profiles were considerably concordant. The genetic relationships among the isolates were correlated with serotypes. The phylogenetic trees constructed using PFGE patterns and MLVA profiles presented two distinct clusters for the isolates of serotype 3 and one distinct cluster for each of the serotype groups, 1a/1b/NT, 2a/2b/X/NT, 4a/Y, and 6. Isolates that had different serotypes but had closer genetic relatedness than those with the same serotype were observed between serotype Y and subserotype 4a, serotype X and subserotype 2b, subserotype 1a and 1b, and subserotype 3a and 3b. 
Conclusions The 36 VNTR loci identified exhibited considerably different degrees of variability among S. flexneri serotype groups. VNTR locus could be highly variable in a serotype but invariable in others. MLVA assay based on four highly variable loci could display a comparable resolving power to PFGE in discriminating isolates. MLVA is also a prominent molecular tool for phylogenetic analysis of S. flexneri; the resulting data are beneficial to establish clear clonal patterns among different serotype groups and to discern clonal groups among isolates within the same serotype. As highly variable VNTR loci could be serotype-specific, a common MLVA protocol that consists of only a small set of loci, for example four to eight loci, and that provides high resolving power to all S. flexneri serotypes may not be obtainable. PMID:20042119
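A common way to quantify how variable each VNTR locus is (our illustration; the abstract does not specify the index) is Simpson's diversity index over the observed repeat-number alleles:

```python
from collections import Counter

# Simpson's diversity index for a VNTR locus: the probability that two
# randomly chosen isolates carry different repeat numbers at that locus.
def simpsons_diversity(alleles):
    """1 - sum(p_i^2) over allele frequencies p_i."""
    n = len(alleles)
    counts = Counter(alleles)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

variable_locus = [3, 5, 7, 5, 9, 3, 11, 7]   # many distinct repeat numbers
invariant_locus = [4, 4, 4, 4, 4, 4, 4, 4]   # invariable in this serotype
print(simpsons_diversity(variable_locus))    # high diversity
print(simpsons_diversity(invariant_locus))   # -> 0.0
```

A locus scoring high in one serotype group and zero in another is exactly the serotype-specific behaviour the conclusions warn about when choosing a common MLVA locus set.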
High performance GPU processing for inversion using uniform grid searches
NASA Astrophysics Data System (ADS)
Venetis, Ioannis E.; Saltogianni, Vasso; Stiros, Stathis; Gallopoulos, Efstratios
2017-04-01
Many geophysical problems are described by redundant, highly non-linear systems of ordinary equations with constant terms deriving from measurements and hence representing stochastic variables. Solution (inversion) of such problems is based on numerical optimization methods relying on Monte Carlo sampling, or on exhaustive searches in cases of two or even three "free" unknown variables. Recently the TOPological INVersion (TOPINV) algorithm, a grid search-based technique in the R^n space, has been proposed. TOPINV is not based on the minimization of a certain cost function and involves only forward computations, hence avoiding computational errors. The basic concept is to transform observation equations into inequalities on the basis of an optimization parameter k and of their standard errors, and through repeated "scans" of n-dimensional search grids for decreasing values of k to identify the optimal clusters of gridpoints which satisfy the observation inequalities and by definition contain the "true" solution. Stochastic optimal solutions and their variance-covariance matrices are then computed as first and second statistical moments. Such exhaustive uniform searches produce an excessive computational load and are extremely time consuming on common CPU-based computers. An alternative is to use a computing platform based on a GPU, which nowadays is affordable to the research community and provides much higher computing performance. Implementing TOPINV in the CUDA programming language allows the investigation of the attained speedup in execution time on such a high-performance platform. Based on synthetic data we compared the execution time required for two typical geophysical problems, modeling magma sources and seismic faults, described with up to 18 unknown variables, on both CPU/FORTRAN and GPU/CUDA platforms.
The same problems for several different sizes of search grids (up to 10^12 gridpoints) and numbers of unknown variables were solved on both platforms, and execution time as a function of the grid dimension for each problem was recorded. Results indicate an average speedup in calculations by a factor of 100 on the GPU platform; for example, problems with 10^12 gridpoints require less than two hours instead of several days on conventional desktop computers. Such a speedup encourages the application of TOPINV on high-performance platforms, such as a GPU, in cases where nearly real-time decisions are necessary, for example finite fault modeling to identify possible tsunami sources.
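A minimal one-dimensional sketch of the TOPINV scan described above (toy forward models and grid, not the paper's 18-variable problems):

```python
import numpy as np

# TOPINV idea in 1-D: turn observation equations f_i(m) = d_i (std sigma_i)
# into inequalities |f_i(m) - d_i| <= k * sigma_i, scan a uniform grid for
# decreasing k, and keep the surviving cluster of gridpoints. Forward models,
# data, and grid below are toy choices.
def topinv_1d(forward, data, sigma, grid, ks):
    best = list(grid)
    for k in ks:                                  # decreasing k tightens the cluster
        ok = [m for m in grid
              if all(abs(f(m) - d) <= k * s for f, d, s in zip(forward, data, sigma))]
        if not ok:                                # stop before the cluster empties
            break
        best = ok
    return np.mean(best), np.std(best)            # first and second moments

# Redundant, nonlinear observations of the "true" model m = 2.0
forward = [lambda m: m ** 2, lambda m: np.exp(0.5 * m)]
data = [4.0, np.exp(1.0)]
sigma = [0.2, 0.1]
grid = np.linspace(0, 4, 4001)
mean, std = topinv_1d(forward, data, sigma, grid, ks=[4, 2, 1, 0.5])
print(mean, std)
```

Because every gridpoint is tested independently against the inequalities, this scan is embarrassingly parallel, which is why mapping it onto GPU threads with CUDA yields the speedups the abstract reports.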
NASA Astrophysics Data System (ADS)
Poppick, A. N.; McKinnon, K. A.; Dunn-Sigouin, E.; Deser, C.
2017-12-01
Initial condition climate model ensembles suggest that regional temperature trends can be highly variable on decadal timescales due to characteristics of internal climate variability. Accounting for trend uncertainty due to internal variability is therefore necessary to contextualize recent observed temperature changes. However, while the variability of trends in a climate model ensemble can be evaluated directly (as the spread across ensemble members), internal variability simulated by a climate model may be inconsistent with observations. Observation-based methods for assessing the role of internal variability on trend uncertainty are therefore required. Here, we use a statistical resampling approach to assess trend uncertainty due to internal variability in historical 50-year (1966-2015) winter near-surface air temperature trends over North America. We compare this estimate of trend uncertainty to simulated trend variability in the NCAR CESM1 Large Ensemble (LENS), finding that uncertainty in wintertime temperature trends over North America due to internal variability is largely overestimated by CESM1, on average by 32%. Our observation-based resampling approach is combined with the forced signal from LENS to produce an 'Observational Large Ensemble' (OLENS). The members of OLENS indicate a range of spatially coherent fields of temperature trends resulting from different sequences of internal variability consistent with observations. The smaller trend variability in OLENS suggests that uncertainty in the historical climate change signal in observations due to internal variability is less than suggested by LENS.
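A toy version of the residual-resampling idea (not the authors' exact procedure): detrend the series, resample residuals in blocks to retain some autocorrelation, re-add the trend, and take the spread of refit trends as the internal-variability uncertainty:

```python
import numpy as np

# Block bootstrap of detrended residuals to estimate how much a fitted linear
# trend could vary due to internal variability alone. Series is synthetic.
rng = np.random.default_rng(1)

def fit_trend(y):
    t = np.arange(len(y))
    return np.polyfit(t, y, 1)[0]

def trend_spread(y, block=5, n_boot=500):
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    n_blocks = len(y) // block
    blocks = resid[: n_blocks * block].reshape(n_blocks, block)
    slopes = []
    for _ in range(n_boot):
        shuffled = blocks[rng.integers(0, n_blocks, n_blocks)].ravel()
        slopes.append(fit_trend(slope * np.arange(len(shuffled)) + shuffled))
    return np.std(slopes)

y = 0.02 * np.arange(50) + rng.normal(0, 0.3, 50)   # synthetic 50-yr series
print(trend_spread(y))
```

Comparing a spread estimated this way from observations with the spread across ensemble members is, in miniature, the model-versus-observations comparison the abstract describes.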
Flexible Control of Safety Margins for Action Based on Environmental Variability.
Hadjiosif, Alkis M; Smith, Maurice A
2015-06-17
To reduce the risk of slip, grip force (GF) control includes a safety margin above the force level ordinarily sufficient for the expected load force (LF) dynamics. The current view is that this safety margin is based on the expected LF dynamics, amounting to a static safety factor like that often used in engineering design. More efficient control could be achieved, however, if the motor system reduces the safety margin when LF variability is low and increases it when this variability is high. Here we show that this is indeed the case by demonstrating that the human motor system sizes the GF safety margin in proportion to an internal estimate of LF variability to maintain a fixed statistical confidence against slip. In contrast to current models of GF control that neglect the variability of LF dynamics, we demonstrate that GF is threefold more sensitive to the SD than the expected value of LF dynamics, in line with the maintenance of a 3-sigma confidence level. We then show that a computational model of GF control that includes a variability-driven safety margin predicts highly asymmetric GF adaptation between increases versus decreases in load. We find clear experimental evidence for this asymmetry and show that it explains previously reported differences in how rapidly GFs and manipulatory forces adapt. This model further predicts bizarre nonmonotonic shapes for GF learning curves, which are faithfully borne out in our experimental data. Our findings establish a new role for environmental variability in the control of action. Copyright © 2015 the authors 0270-6474/15/359106-16$15.00/0.
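The variability-driven safety margin described above reduces to one line; the 3-sigma level is taken from the abstract, while the load-force numbers are toy values:

```python
import numpy as np

# Grip force as expected load force plus ~3 SDs of its variability, so the
# statistical confidence against slip stays fixed as the environment changes.
def grip_force(load_history, n_sigma=3.0):
    """Safety margin proportional to an internal estimate of LF variability."""
    return np.mean(load_history) + n_sigma * np.std(load_history, ddof=1)

stable_env = [2.0, 2.05, 1.95, 2.02, 1.98]   # low LF variability (N)
variable_env = [2.0, 2.6, 1.4, 2.3, 1.7]     # same mean load, high variability
print(grip_force(stable_env))    # small margin above the 2 N expected load
print(grip_force(variable_env))  # much larger margin for the same expected load
```

The asymmetry the study reports follows naturally: an unexpected load increase raises both the mean and the variability estimate, pushing grip force up quickly, whereas a decrease lowers the mean but briefly inflates estimated variability, slowing the downward adaptation.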
Lavender, Jason M; Wonderlich, Stephen A; Crosby, Ross D; Engel, Scott G; Mitchell, James E; Crow, Scott J; Peterson, Carol B; Le Grange, Daniel
2013-08-01
This study sought to empirically derive and validate clinically relevant personality-based subtypes of anorexia nervosa (AN). Women (N = 116) with full or subthreshold AN completed baseline measures of personality, clinical variables, and eating disorder (ED) symptoms, followed by two weeks of ecological momentary assessment (EMA). A latent profile analysis was conducted to identify personality subtypes, which were compared on baseline clinical variables and EMA variables. The best-fitting model supported three subtypes: underregulated, overregulated, and low psychopathology. The underregulated subtype (characterized by high Stimulus Seeking, Self-Harm, and Oppositionality) displayed greater baseline ED symptoms, as well as lower positive affect and greater negative affect, self-discrepancy, and binge eating in the natural environment. The overregulated subtype (characterized by high Compulsivity and low Stimulus Seeking) was more likely to have a lifetime obsessive-compulsive disorder diagnosis and exhibited greater perfectionism; levels of negative affect, positive affect, and self-discrepancy in this group were intermediate between the other subtypes. The low psychopathology subtype (characterized by normative personality) displayed the lowest levels of baseline ED symptoms, co-occurring disorders, and ED behaviors measured via EMA. Findings support the validity of these personality-based subtypes, suggesting the potential utility of addressing within-diagnosis heterogeneity in the treatment of AN. Copyright © 2013 Elsevier Ltd. All rights reserved.
Multivariate localization methods for ensemble Kalman filtering
NASA Astrophysics Data System (ADS)
Roh, S.; Jun, M.; Szunyogh, I.; Genton, M. G.
2015-05-01
In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (entry-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables has seldom been considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested in simulated-observation assimilation experiments with the bivariate Lorenz 95 model.
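The Schur-product localization described here can be sketched in a few lines. The Gaussian taper below is a stand-in for the compactly supported correlation functions (e.g. Gaspari-Cohn) used in practice, and the 1-D grid is an illustrative assumption:

```python
import numpy as np

def localize_covariance(ensemble, grid, length_scale):
    """Schur (entry-wise) product of the ensemble sample covariance
    with a distance-dependent correlation matrix.

    ensemble: (n_members, n_state) array of state vectors
    grid:     (n_state,) positions of the state variables
    """
    anomalies = ensemble - ensemble.mean(axis=0)
    sample_cov = anomalies.T @ anomalies / (ensemble.shape[0] - 1)
    dist = np.abs(grid[:, None] - grid[None, :])
    corr = np.exp(-0.5 * (dist / length_scale) ** 2)  # illustrative taper
    return sample_cov * corr  # entry-wise (Schur) product
```

Distant entries shrink toward zero, suppressing the spurious long-range covariances that small ensembles produce, while the diagonal (variances) is left untouched.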
Loisel, Julie; MacDonald, Glen M.; Thomson, Marcus J.
2017-01-01
The American Southwest has experienced a series of severe droughts interspersed with strong wet episodes over the past decades, prompting questions about future climate patterns and potential intensification of weather disruptions under warming conditions. Here we show that interannual hydroclimatic variability in this region has displayed a significant level of non-stationarity over the past millennium. Our tree ring-based analysis of past drought indicates that the Little Ice Age (LIA) experienced high interannual hydroclimatic variability, similar to projections for the 21st century. This is contrary to the Medieval Climate Anomaly (MCA), which had reduced variability and therefore may be misleading as an analog for 21st century warming, notwithstanding its warm (and arid) conditions. Given past non-stationarity, and the particularly erratic LIA, a ‘warm LIA’ climate scenario for the coming century that combines high precipitation variability (similar to LIA conditions) with warm and dry conditions (similar to MCA conditions) represents a plausible situation that is supported by recent climate simulations. Our comparison of tree ring-based drought analysis and records from the tropical Pacific Ocean suggests that changing variability in the El Niño Southern Oscillation (ENSO) explains much of the contrast in variance between MCA and LIA conditions across the American Southwest. Greater ENSO variability in the 21st century could be induced by a decrease in the meridional sea surface temperature gradient caused by increased greenhouse gas concentrations, as shown by several recent climate modeling experiments. Overall, these results coupled with the paleo-record suggest that using the erratic LIA conditions as benchmarks for past hydroclimatic variability can be useful for developing future water-resource management and drought and flood hazard mitigation strategies in the Southwest. PMID:29036207
Risk factors correlated with plantar pressure in Chinese patients with type 2 diabetes.
Qiu, Xuan; Tian, De-Hu; Han, Chang-Ling; Chen, Wei; Wang, Zhan-Jian; Mu, Zhen-Yun; Li, Xu; Liu, Kuan-Zhi
2013-12-01
Plantar pressure is a key factor for predicting ulceration in the foot of a diabetes patient. We recruited a group of 100 Chinese patients with type 2 diabetes and an age-, sex-, weight-, and height-matched group of 100 Chinese subjects without diabetes. We obtained plantar pressure data using a Footscan(®) gait system (RSscan International, Olen, Belgium) when the subjects with and without diabetes walked barefoot across a sensor platform. We recorded the maximum force, maximum pressure, impulse, pressure-time integral, and loading rate from 10 regions of the foot. We collected data on 11 history-based variables, 10 anthropometric variables, and three metabolic variables regarding the clinical characteristics of the diabetes patients. Weight was identified as a determining factor for high plantar pressure. Height, the Neuropathy Symptom Score (NSS), and the ankle-brachial index (ABI) were each positively correlated with plantar pressure measurements. Sex, history of ulcer and callus, the intima-media membrane of the lower limb blood vessels, and fasting blood glucose (FBG) could also explain a portion of the variability in the plantar pressure measurements. However, the correlations were low or weak. High plantar pressure in diabetes patients could be predicted, in part, based on weight, height, NSS, ABI, sex, history of ulcer and callus, the intima-media membrane of the lower limb blood vessels, and FBG. Therefore, interventions should be taken specifically before high plantar pressure emerges.
Estimation of Monthly Near Surface Air Temperature Using Geographically Weighted Regression in China
NASA Astrophysics Data System (ADS)
Wang, M. M.; He, G. J.; Zhang, Z. M.; Zhang, Z. J.; Liu, X. G.
2018-04-01
Near surface air temperature (NSAT) is a primary descriptor of terrestrial environment conditions. The availability of NSAT with high spatial resolution is deemed necessary for several applications such as hydrology, meteorology and ecology. In this study, a regression-based NSAT mapping method is proposed. This method combines remote sensing variables with geographical variables and uses geographically weighted regression to estimate NSAT. Altitude was selected as the geographical variable, and the remote sensing variables include land surface temperature (LST) and the Normalized Difference Vegetation Index (NDVI). The performance of the proposed method was assessed by predicting monthly minimum, mean, and maximum NSAT from point station measurements in China, a domain with a large area, complex topography, and highly variable station density, and the NSAT maps were validated against meteorological observations. Validation results with meteorological data show the proposed method achieved an accuracy of 1.58 °C. It is concluded that the proposed method for mapping NSAT is operational and achieves good precision.
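Geographically weighted regression as used here amounts to fitting a separate weighted least-squares model at each prediction site, with weights decaying with distance from that site. A minimal sketch (the Gaussian kernel and the bandwidth are common but assumed choices, not necessarily the study's):

```python
import numpy as np

def gwr_predict(X, y, coords, target_xy, target_x, bandwidth):
    """Fit a weighted least-squares model at one target location.

    X:         (n, p) predictors at the stations (e.g. LST, NDVI, altitude)
    y:         (n,) observed NSAT at the stations
    coords:    (n, 2) station coordinates
    target_xy: (2,) coordinates of the prediction site
    target_x:  (p,) predictor values at the prediction site
    """
    d = np.linalg.norm(coords - target_xy, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)        # Gaussian distance kernel
    Xa = np.column_stack([np.ones(len(y)), X])     # add intercept column
    W = np.diag(w)
    beta = np.linalg.solve(Xa.T @ W @ Xa, Xa.T @ W @ y)
    return np.concatenate([[1.0], target_x]) @ beta
```

Because the coefficients are re-estimated at every location, the LST-NSAT and NDVI-NSAT relationships are allowed to vary across a large, topographically complex domain.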
Religiousness as a Predictor of Alcohol Use in High School Students.
ERIC Educational Resources Information Center
Park, Hae-Seong; Bauer, Scott; Oescher, Jeffrey
2001-01-01
Examines the relationship between religiousness and alcohol use of adolescents based on a sample of high school seniors. Results provide support for examining religiousness variables as predictors of alcohol use patterns of adolescents. (Contains 16 references and 4 tables.) (GCP)
ERIC Educational Resources Information Center
Gelman, Andrew; Imbens, Guido
2014-01-01
It is common in regression discontinuity analysis to control for high order (third, fourth, or higher) polynomials of the forcing variable. We argue that estimators for causal effects based on such methods can be misleading, and we recommend researchers do not use them, and instead use estimators based on local linear or quadratic polynomials or…
NASA Technical Reports Server (NTRS)
Davis, Anthony B.; Frankenberg, Christian
2012-01-01
Success in three aspects of the OCO-2 mission is threatened by unaccounted spatial variability effects, all involving atmospheric scattering: 1. Low/moderately opaque clouds can escape the prescreening by mimicking a brighter surface. 2. Prescreening does not account for the long-range radiative impact (adjacency effect) of nearby clouds. Need for extended cloud masking? 3. Oblique looks in target mode are highly exposed to surface adjacency and aerosol variability effects. We'll be covering all three bases!
A method for monitoring the variability in nuclear absorption characteristics of aviation fuels
NASA Technical Reports Server (NTRS)
Sprinkle, Danny R.; Shen, Chih-Ping
1988-01-01
A technique for monitoring variability in the nuclear absorption characteristics of aviation fuels has been developed. It is based on a highly collimated low energy gamma radiation source and a sodium iodide counter. The source and the counter assembly are separated by a geometrically well-defined test fuel cell. A computer program for determining the mass attenuation coefficient of the test fuel sample, based on the data acquired for a preset counting period, has been developed and tested on several types of aviation fuel.
Chahine, Teresa; Schultz, Bradley D.; Zartarian, Valerie G.; Xue, Jianping; Subramanian, SV; Levy, Jonathan I.
2011-01-01
Community-based cumulative risk assessment requires characterization of exposures to multiple chemical and non-chemical stressors, with consideration of how the non-chemical stressors may influence risks from chemical stressors. Residential radon provides an interesting case example, given its large attributable risk, effect modification due to smoking, and significant variability in radon concentrations and smoking patterns. In spite of this fact, no study to date has estimated geographic and sociodemographic patterns of both radon and smoking in a manner that would allow for inclusion of radon in community-based cumulative risk assessment. In this study, we apply multi-level regression models to explain variability in radon based on housing characteristics and geological variables, and construct a regression model predicting housing characteristics using U.S. Census data. Multi-level regression models of smoking based on predictors common to the housing model allow us to link the exposures. We estimate county-average lifetime lung cancer risks from radon ranging from 0.15 to 1.8 in 100, with high-risk clusters in areas and for subpopulations with high predicted radon and smoking rates. Our findings demonstrate the viability of screening-level assessment to characterize patterns of lung cancer risk from radon, with an approach that can be generalized to multiple chemical and non-chemical stressors. PMID:22016710
The metabolic power and energetic demands of elite Gaelic football match play.
Malone, Shane; Solan, Barry; Collins, Kieran; Doran, Dominic
2017-05-01
Metabolic power has not yet been investigated within elite Gaelic football. The aim of the current investigation was to compare the metabolic power demands between positional groups and examine the temporal profile of elite Gaelic football match play. Global positioning system (GPS) data were collected from 50 elite Gaelic football players from 4 inter-county teams during 35 elite competitive matches over a three season period. A total of 351 complete match samples were obtained for final analysis. Players were categorized into positional groups: full-back, half-back, midfield, half-forward and full-forward. Instantaneous raw velocity data were obtained from the GPS and exported to a customized spreadsheet which provided estimates of speed-based and derived metabolic power and energy expenditure variables (total distance, high speed distance, average metabolic power, high power distance and total energy expenditure). Match mean distance was 9222±1588 m, reflective of an average metabolic power of 9.5-12.5 W·kg-1, with an average energy expenditure of 58-70 kJ·kg-1 depending on position. There were significant differences between positional groups for both speed-based and metabolic power indices. Midfielders covered more total and high-speed distance, as well as greater average and overall energy expenditure compared to other positions (P<0.001). A reduction in total, high-speed, and high-power distance, as well as average metabolic power throughout the match (P<0.001) was observed. Positional differences exist for both metabolic power and traditional running-based variables. The middle three positions (midfield, half-back and half-forward) possess greater activity profiles when compared to other positional groups. The reductions in metabolic power and traditional running-based variables are comparable across match play. The current study demonstrates that metabolic power may contribute to our understanding of Gaelic football match-play.
NASA Astrophysics Data System (ADS)
Mangla, Rohit; Kumar, Shashi; Nandy, Subrata
2016-05-01
SAR and LiDAR remote sensing have already shown the potential of active sensors for forest parameter retrieval. A SAR sensor in fully polarimetric mode has the advantage of retrieving the scattering properties of the different components of forest structure, and LiDAR has the capability to measure structural information with very high accuracy. This study focused on retrieval of forest aboveground biomass (AGB) using Terrestrial Laser Scanner (TLS) based point clouds and the scattering properties of forest vegetation obtained from decomposition modelling of RISAT-1 fully polarimetric SAR data. TLS data were acquired for 14 plots of the Timli forest range, Uttarakhand, India. The forest area is dominated by Sal trees, and random sampling with a plot size of 0.1 ha (31.62 m * 31.62 m) was adopted for TLS and field data collection. RISAT-1 data were processed to retrieve SAR-based variables, and TLS point-cloud-based 3D imaging was performed to retrieve LiDAR-based variables. Surface scattering, double-bounce scattering, volume scattering, helix and wire scattering were the SAR-based variables retrieved from polarimetric decomposition. Tree heights and stem diameters were the LiDAR-based variables, retrieved from single-tree vertical height and least-squares circle fit methods, respectively. All the variables obtained for the forest plots were used as input to a machine-learning-based Random Forest Regression Model, which was developed in this study for forest AGB estimation. The modelled output for forest AGB showed reliable accuracy (RMSE = 27.68 t/ha) and a good coefficient of determination (0.63) was obtained through linear regression between modelled AGB and field-estimated AGB. The sensitivity analysis showed that the model was most sensitive to the major contributing variables (stem diameter and volume scattering), and these variables were measured from two different remote sensing techniques. This study strongly recommends the integration of SAR and LiDAR data for forest AGB estimation.
Rosa, Juliana da; Weber, Gabriela Gomes; Cardoso, Rafaela; Górski, Felipe; Da-Silva, Paulo Roberto
2017-01-01
Better knowledge of medicinal plant species and their conservation is an urgent need worldwide. Decision making for conservation strategies can be based on the knowledge of the variability and population genetic structure of the species and on the events that may influence these genetic parameters. Achyrocline flaccida (Weinm.) DC. is a native plant from the grassy fields of South America with high value in folk medicine. In spite of its importance, no genetic and conservation studies are available for the species. In this work, microsatellite and ISSR (inter-simple sequence repeat) markers were used to estimate the genetic variability and structure of seven populations of A. flaccida from southern Brazil. The microsatellite markers were inefficient in A. flaccida owing to a high number of null alleles. After the evaluation of 42 ISSR primers on one population, 10 were selected for further analysis of the seven A. flaccida populations. The ISSR results showed that the high number of exclusively absent loci might contribute to inter-population differentiation. Genetic variability of the species was high (Nei's diversity of 0.23 and Shannon diversity of 0.37). AMOVA indicated higher genetic variability within (64.7%) than among (33.96%) populations, and the variability was unevenly distributed (FST 0.33). Gene flow among populations ranged from 1.68 to 5.2 migrants per generation, with an average of 1.39. The results of PCoA and Bayesian analyses corroborated this and indicated that the populations are structured. The observed genetic variability and population structure of A. flaccida are discussed in the context of the vegetation formation history in southern Brazil, as well as possible anthropogenic effects. Additionally, we discuss the implications of the results for the conservation of the species.
High-efficiency reconciliation for continuous variable quantum key distribution
NASA Astrophysics Data System (ADS)
Bai, Zengliang; Yang, Shenshen; Li, Yongmin
2017-04-01
Quantum key distribution (QKD) is the most mature application of quantum information technology. Information reconciliation is a crucial step in QKD and significantly affects the final secret key rates shared between two legitimate parties. We analyze and compare various construction methods of low-density parity-check (LDPC) codes and design high-performance irregular LDPC codes with a block length of 10^6. Starting from these good codes and exploiting the slice reconciliation technique based on multilevel coding and multistage decoding, we realize high-efficiency Gaussian key reconciliation with efficiency higher than 95% for signal-to-noise ratios above 1. Our demonstrated method can be readily applied in continuous variable QKD.
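The efficiency figure quoted (above 95% at signal-to-noise ratios above 1) is conventionally measured against the Gaussian-channel capacity. A minimal sketch of that bookkeeping, using the standard beta = R/C definition (the code rate below is illustrative, not from the paper):

```python
import math

def awgn_capacity(snr):
    """Shannon capacity of the AWGN channel, in bits per channel use:
    C = (1/2) * log2(1 + SNR)."""
    return 0.5 * math.log2(1.0 + snr)

def reconciliation_efficiency(code_rate, snr):
    """beta = R / C(SNR): the fraction of the available mutual
    information actually extracted by a code of rate R."""
    return code_rate / awgn_capacity(snr)
```

At an SNR of 1 the capacity is 0.5 bit per channel use, so a code of rate 0.475 corresponds to 95% reconciliation efficiency.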
Gaussian-modulated coherent-state measurement-device-independent quantum key distribution
NASA Astrophysics Data System (ADS)
Ma, Xiang-Chun; Sun, Shi-Hai; Jiang, Mu-Sheng; Gui, Ming; Liang, Lin-Mei
2014-04-01
Measurement-device-independent quantum key distribution (MDI-QKD), leaving the detection procedure to the third partner and thus being immune to all detector side-channel attacks, is very promising for the construction of high-security quantum information networks. We propose a scheme to implement MDI-QKD, but with continuous variables instead of discrete ones, i.e., with the source of Gaussian-modulated coherent states, based on the principle of continuous-variable entanglement swapping. This protocol not only can be implemented with current telecom components but also has high key rates compared to its discrete counterpart; thus it will be highly compatible with quantum networks.
Wang, Feng; Kaplan, Jess L.; Gold, Benjamin D.; Bhasin, Manoj K.; Ward, Naomi L.; Kellermayer, Richard; Kirschner, Barbara S.; Heyman, Melvin B.; Dowd, Scot E.; Cox, Stephen B.; Dogan, Haluk; Steven, Blaire; Ferry, George D.; Cohen, Stanley A.; Baldassano, Robert N.; Moran, Christopher J.; Garnett, Elizabeth A.; Drake, Lauren; Otu, Hasan H.; Mirny, Leonid A.; Libermann, Towia A.; Winter, Harland S.; Korolev, Kirill
2016-01-01
The relationship between the host and its microbiota is challenging to understand because both microbial communities and their environment are highly variable. We developed a set of techniques to address this challenge based on population dynamics and information theory. These methods identified additional bacterial taxa associated with pediatric Crohn's disease and could detect significant changes in microbial communities with fewer samples than previous statistical approaches. We also substantially improved the accuracy of the diagnosis based on the microbiota from stool samples and found that the ecological niche of a microbe predicts its role in Crohn’s disease. Bacteria typically residing in the lumen of healthy patients decrease in disease while bacteria typically residing on the mucosa of healthy patients increase in disease. Our results also show that the associations with Crohn’s disease are evolutionarily conserved and provide a mutual-information-based method to visualize dysbiosis. PMID:26804920
Poli Neto, Paulo; Faoro, Nilza Teresinha; Prado Júnior, José Carlos do; Pisco, Luís Augusto Coelho
2016-05-01
How professionals are compensated may affect how they perform their tasks. Fixed compensation may take the form of wages, payment for productivity or capitation. In addition to fixed compensation, there are numerous mechanisms for variable compensation. This article describes the experience of Curitiba and Rio de Janeiro in Brazil, and Lisbon in Portugal, with different models of performance-based compensation. In all three examples, management felt the need to offer monetary rewards to achieve certain goals. The indicators analyzed covered structure, processes and outcomes, and assessed professionals individually and as part of healthcare teams. In Lisbon, variable compensation can be as high as 40% of the base wage, while in Curitiba and Rio de Janeiro it is limited to 10%. Despite the growing use of this management tool in Brazil and around the world, further studies are required to analyze the effectiveness of variable compensation.
Gene Expression Signatures Based on Variability can Robustly Predict Tumor Progression and Prognosis
Dinalankara, Wikum; Bravo, Héctor Corrada
2015-01-01
Gene expression signatures are commonly used to create cancer prognosis and diagnosis methods, yet only a small number of them are successfully deployed in the clinic since many fail to replicate performance on subsequent validation. A primary reason for this lack of reproducibility is the fact that these signatures attempt to model the highly variable and unstable genomic behavior of cancer. Our group recently introduced gene expression anti-profiles as a robust methodology to derive gene expression signatures based on the observation that while gene expression measurements are highly heterogeneous across tumors of a specific cancer type relative to the normal tissue, their degree of deviation from normal tissue expression in specific genes involved in tissue differentiation is a stable tumor mark that is reproducible across experiments and cancer types. Here we show that constructing gene expression signatures based on variability and the anti-profile approach yields classifiers capable of successfully distinguishing benign growths from cancerous growths based on deviation from normal expression. We then show that this same approach generates stable and reproducible signatures that predict probability of relapse and survival based on tumor gene expression. These results suggest that using the anti-profile framework for the discovery of genomic signatures is an avenue leading to the development of reproducible signatures suitable for adoption in clinical settings. PMID:26078586
Attitudes of High School Teachers to Educational Research Using Classification-Tree Method
ERIC Educational Resources Information Center
Akcoltekin, Alpturk; Engin, Ali Osman; Sevgin, Hikmet
2017-01-01
Purpose: The main objective is to investigate high school teachers' attitudes relating to educational research with respect to demographic variables. Research Methods: The study is based on the relational screening model. Data was obtained through an adapted scale to determine high school teachers' attitudes toward educational research. The study…
Microstructure-Sensitive Modeling of High Cycle Fatigue (Preprint)
2009-03-01
Subject terms: microplasticity, microstructure-sensitive modeling, high cycle fatigue, fatigue variability. Air Force Research Laboratory, Wright-Patterson Air Force Base, Ohio 45433. Cyclic microplasticity plays a key role in modeling fatigue resistance; unlike effective properties such as elastic stiffness, fatigue is
Citation Rate of Highly-Cited Papers in 100 Kinesiology-Related Journals
ERIC Educational Resources Information Center
Knudson, Duane
2015-01-01
This study extended previous research on several citation-based bibliometric variables for highly cited articles in a large (N = 100) number of journals related to Kinesiology. Total citations and citation rate of the 30 most highly cited articles in each journal were identified by searches of "Google Scholar (GS)". Other major…
ERIC Educational Resources Information Center
Yu, Rongrong; Singh, Kusum
2018-01-01
The authors examined the relationships among teacher classroom practices, student motivation, and mathematics achievement in high school. The data for this study were drawn from the base-year data of the High School Longitudinal Study of 2009. Structural equation modeling was used to estimate the relationships among variables. The results…
Cortes, Arthur Rodriguez Gonzalez; Eimar, Hazem; Barbosa, Jorge de Sá; Costa, Claudio; Arita, Emiko Saito; Tamimi, Faleh
2015-05-01
Subjective radiographic classifications of alveolar bone have been proposed and correlated with implant insertion torque (IT). The present diagnostic study aims to identify quantitative bone features influencing IT and to use these findings to develop an objective radiographic classification for predicting IT. Demographics, panoramic radiographs (taken at the beginning of dental treatment), and cone-beam computed tomographic scans (taken for implant surgical planning) of 25 patients receiving 31 implants were analyzed. Bone samples retrieved from implant sites were assessed with dual x-ray absorptiometry, microcomputed tomography, and histology. Odds ratio, sensitivity, and specificity of all variables to predict high peak IT were assessed. A ridge cortical thickness >0.75 mm and a normal appearance of the inferior mandibular cortex were the most sensitive variables for predicting high peak IT (87.5% and 75%, respectively). A classification based on the combination of both variables presented high sensitivity (90.9%) and specificity (100%) for predicting IT. Within the limitations of this study, the results suggest that it is possible to predict IT accurately based on radiographic findings of the patient. This could be useful in the treatment plan of immediate loading cases.
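The reported sensitivity (90.9%) and specificity (100%) of the combined radiographic classification follow from the usual 2x2 contingency definitions. A minimal helper, with counts chosen purely for illustration (e.g. 10 of 11 high-torque implants correctly flagged, all 20 low-torque implants correctly excluded; these are not the study's tabulated counts):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Standard 2x2 contingency-table diagnostics.

    Sensitivity = TP / (TP + FN): fraction of true positives detected.
    Specificity = TN / (TN + FP): fraction of true negatives excluded.
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity
```

With 10 true positives, 1 false negative, 20 true negatives and 0 false positives, this returns a sensitivity of about 0.909 and a specificity of 1.0, matching the magnitudes quoted in the abstract.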
Estimating uncertainty in respondent-driven sampling using a tree bootstrap method.
Baraff, Aaron J; McCormick, Tyler H; Raftery, Adrian E
2016-12-20
Respondent-driven sampling (RDS) is a network-based form of chain-referral sampling used to estimate attributes of populations that are difficult to access using standard survey tools. Although it has grown quickly in popularity since its introduction, the statistical properties of RDS estimates remain elusive. In particular, the sampling variability of these estimates has been shown to be much higher than previously acknowledged, and even methods designed to account for RDS result in misleadingly narrow confidence intervals. In this paper, we introduce a tree bootstrap method for estimating uncertainty in RDS estimates based on resampling recruitment trees. We use simulations from known social networks to show that the tree bootstrap method not only outperforms existing methods but also captures the high variability of RDS, even in extreme cases with high design effects. We also apply the method to data from injecting drug users in Ukraine. Unlike other methods, the tree bootstrap depends only on the structure of the sampled recruitment trees, not on the attributes being measured on the respondents, so correlations between attributes can be estimated as well as variability. Our results suggest that it is possible to accurately assess the high level of uncertainty inherent in RDS.
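The tree bootstrap can be sketched as two levels of resampling with replacement: first the seed respondents, then, recursively, each node's recruits. The nested-tuple tree below is a hypothetical data structure for illustration, not the authors' implementation:

```python
import random

def bootstrap_tree(node, rng):
    """Resample one recruitment tree: keep the node's value, draw
    its children with replacement, and recurse into each draw."""
    value, children = node
    if not children:
        return (value, [])
    drawn = [rng.choice(children) for _ in children]
    return (value, [bootstrap_tree(child, rng) for child in drawn])

def tree_bootstrap(seeds, n_reps, seed=0):
    """Produce n_reps bootstrap replicates: resample the seeds with
    replacement, then resample within each recruitment tree."""
    rng = random.Random(seed)
    replicates = []
    for _ in range(n_reps):
        picked = [rng.choice(seeds) for _ in seeds]
        replicates.append([bootstrap_tree(t, rng) for t in picked])
    return replicates
```

Because only the tree structure is resampled, any attribute measured on the respondents can be re-estimated on each replicate, which is what allows both variability and between-attribute correlations to be assessed.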
A Spectropolarimetric Test of the Structure of the Intrinsic Absorbers in the Quasar HS 1603+3820
NASA Astrophysics Data System (ADS)
Misawa, Toru; Kawabata, Koji S.; Eracleous, Michael; Charlton, Jane C.; Kashikawa, Nobunari
2010-08-01
We report the results of a spectropolarimetric observation of the C IV "mini-broad" absorption line (mini-BAL) in the quasar HS 1603+3820 (z_em = 2.542). The observations were carried out with the FOCAS instrument on the Subaru Telescope and yielded an extremely high polarization sensitivity of δp ~ 0.1%, at a resolving power of R ~ 1500. HS 1603+3820 has been the target of a high-resolution spectroscopic monitoring campaign for more than four years, aimed at studying its highly variable C IV mini-BAL profile. Using the monitoring observations in an earlier paper, we were able to narrow down the causes of the variability to the following two scenarios: (1) scattering material of variable optical depth redirecting photons around the absorber and (2) a variable, highly ionized screen between the continuum source and the absorber which modulates the UV continuum incident on the absorber. The observations presented here provide a crucial test of the scattering scenario and lead us to disfavor it because (1) the polarization level is very small (p ~ 0.6%) throughout the spectrum and (2) the polarization level does not increase across the mini-BAL trough. Thus, the variable screen scenario emerges as our favored explanation of the C IV mini-BAL variability. Our conclusion is bolstered by recent X-ray observations of nearby mini-BAL quasars, which show a rapidly variable soft X-ray continuum that appears to be the result of transmission through an ionized absorber of variable ionization parameter and optical depth. Based on data collected at Subaru Telescope, which is operated by the National Astronomical Observatory of Japan.
Gentil, Paulo; Bueno, João C.A.; Follmer, Bruno; Marques, Vitor A.; Del Vecchio, Fabrício B.
2018-01-01
Background Among combat sports, Judo and Brazilian Jiu-Jitsu (BJJ) present elevated physical fitness demands from the high-intensity intermittent efforts. However, information regarding how metabolic and neuromuscular physical fitness is associated with technical-tactical performance in Judo and BJJ fights is not available. This study aimed to relate indicators of physical fitness with combat performance variables in Judo and BJJ. Methods The sample consisted of Judo (n = 16) and BJJ (n = 24) male athletes. At the first meeting, the physical tests were applied and, in the second, simulated fights were performed for later notational analysis. Results The main findings indicate: (i) high reproducibility of the proposed instrument and protocol used for notational analysis in a mobile device; (ii) differences in the technical-tactical and time-motion patterns between modalities; (iii) performance-related variables are different in Judo and BJJ; and (iv) regression models based on metabolic fitness variables may account for up to 53% of the variances in technical-tactical and/or time-motion variables in Judo and up to 31% in BJJ, whereas neuromuscular fitness models can reach values up to 44 and 73% of prediction in Judo and BJJ, respectively. When all components are combined, they can explain up to 90% of high intensity actions in Judo. Discussion In conclusion, performance prediction models in simulated combat indicate that anaerobic, aerobic and neuromuscular fitness variables contribute to explain time-motion variables associated with high intensity and technical-tactical variables in Judo and BJJ fights. PMID:29844991
Vrijens, France; De Gendt, Cindy; Verleye, Leen; Robays, Jo; Schillemans, Viki; Camberlin, Cécile; Stordeur, Sabine; Dubois, Cécile; Van Eycken, Elisabeth; Wauters, Isabelle; Van Meerbeeck, Jan P
2018-05-01
To evaluate the quality of care for all patients diagnosed with lung cancer in Belgium based on a set of evidence-based quality indicators and to study the variability of care between hospitals. A retrospective study based on linked data from the cancer registry, insurance claims and vital status for all patients diagnosed with lung cancer between 2010 and 2011. Evidence-based quality indicators were identified from a systematic literature search. A specific algorithm to attribute patients to a centre was developed, and funnel plots were used to assess variability of care between centres. None. The proportion of patients who received appropriate care as defined by the indicator. Secondary outcome included the variability of care between centres. Twenty indicators were measured for a total of 12 839 patients. Good results were achieved for 60-day post-surgical mortality (3.9%), histopathological confirmation of diagnosis (93%) and for the use of PET-CT before treatment with curative intent (94%). Areas to be improved include the reporting of staging information to the Belgian Cancer Registry (80%), the use of brain imaging for clinical stage III patients eligible for curative treatment (79%), and the time between diagnosis and start of first active treatment (median 20 days). High variability between centres was observed for several indicators. Twenty-three indicators were found relevant but could not be measured. This study highlights the feasibility of developing a multidisciplinary set of quality indicators using population-based data. The main advantage of this approach is that no additional registration is required, although the non-measurability of many relevant indicators is a limitation. The approach also makes it easy to identify areas of large variability in care.
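The funnel-plot assessment mentioned above can be sketched as follows: a centre's indicator proportion is flagged when it falls outside binomial control limits drawn around the overall proportion. The 3-sigma limits and the example figures below are illustrative assumptions, not values taken from the Belgian study.

```python
import math

def funnel_limits(p_overall, n, z=3.0):
    # binomial control limits around the overall proportion for a
    # centre treating n patients, clipped to the [0, 1] range
    half = z * math.sqrt(p_overall * (1.0 - p_overall) / n)
    return max(0.0, p_overall - half), min(1.0, p_overall + half)

def is_outlier(successes, n, p_overall, z=3.0):
    lo, hi = funnel_limits(p_overall, n, z)
    rate = successes / n
    return rate < lo or rate > hi

# a hypothetical centre confirming diagnosis histopathologically in only
# 70 of 100 patients falls below the lower limit when the overall rate is 93%
flagged = is_outlier(70, 100, 0.93)
```

Plotting each centre's rate against its volume with these limits gives the funnel shape: small centres are judged against wider limits, so extreme rates at low volume are not over-interpreted.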
A multi-fidelity framework for physics based rotor blade simulation and optimization
NASA Astrophysics Data System (ADS)
Collins, Kyle Brian
New helicopter rotor designs are desired that offer increased efficiency, reduced vibration, and reduced noise. Rotor designers in industry need methods that allow them to use the most accurate simulation tools available to search for these optimal designs. Computer based rotor analysis and optimization have been advanced by the development of industry standard codes known as "comprehensive" rotorcraft analysis tools. These tools typically use table look-up aerodynamics, simplified inflow models and perform aeroelastic analysis using Computational Structural Dynamics (CSD). Due to the simplified aerodynamics, most design studies are performed varying structure-related design variables such as sectional mass and stiffness. The optimization of shape related variables in forward flight using these tools is complicated and results are viewed with skepticism because rotor blade loads are not accurately predicted. The most accurate methods of rotor simulation utilize Computational Fluid Dynamics (CFD) but have historically been considered too computationally intensive to be used in computer based optimization, where numerous simulations are required. An approach is needed where high fidelity CFD rotor analysis can be utilized in a shape variable optimization problem with multiple objectives. Any approach should be capable of working in forward flight in addition to hover. An alternative is proposed and founded on the idea that efficient hybrid CFD methods of rotor analysis are ready to be used in preliminary design. In addition, the proposed approach recognizes the usefulness of lower fidelity physics based analysis and surrogate modeling. Together, they are used with high fidelity analysis in an intelligent process of surrogate model building of parameters in the high fidelity domain. Closing the loop between high and low fidelity analysis is a key aspect of the proposed approach. 
This is done by using information from higher fidelity analysis to improve predictions made with lower fidelity models. This thesis documents the development of automated low and high fidelity physics based rotor simulation frameworks. The low fidelity framework uses a comprehensive code with simplified aerodynamics. The high fidelity model uses a parallel processor capable CFD/CSD methodology. Both low and high fidelity frameworks include an aeroacoustic simulation for prediction of noise. A synergistic process is developed that uses both the low and high fidelity frameworks together to build approximate models of important high fidelity metrics as functions of certain design variables. To test the process, a 4-bladed hingeless rotor model is used as a baseline. The design variables investigated include tip geometry and spanwise twist distribution. Approximation models are built for metrics related to rotor efficiency and vibration using the results from 60+ high fidelity (CFD/CSD) experiments and 400+ low fidelity experiments. Optimization using the approximation models found the Pareto Frontier anchor points, or the design having maximum rotor efficiency and the design having minimum vibration. Various Pareto generation methods are used to find designs on the frontier between these two anchor designs. When tested in the high fidelity framework, the Pareto anchor designs are shown to be very good designs when compared with other designs from the high fidelity database. This provides evidence that the process proposed has merit. Ultimately, this process can be utilized by industry rotor designers with their existing tools to bring high fidelity analysis into the preliminary design stage of rotors. In conclusion, the methods developed and documented in this thesis have made several novel contributions. First, an automated high fidelity CFD based forward flight simulation framework has been built for use in preliminary design optimization. 
The framework was built around an integrated, parallel processor capable CFD/CSD/AA process. Second, a novel method of building approximate models of high fidelity parameters has been developed. The method uses a combination of low and high fidelity results and combines Design of Experiments, statistical effects analysis, and aspects of approximation model management. And third, the determination of rotor blade shape variables through optimization using CFD based analysis in forward flight has been performed. This was done using the high fidelity CFD/CSD/AA framework and method mentioned above. While the low and high fidelity predictions methods used in the work still have inaccuracies that can affect the absolute levels of the results, a framework has been successfully developed and demonstrated that allows for an efficient process to improve rotor blade designs in terms of a selected choice of objective function(s). Using engineering judgment, this methodology could be applied today to investigate opportunities to improve existing designs. With improvements in the low and high fidelity prediction components that will certainly occur, this framework could become a powerful tool for future rotorcraft design work. (Abstract shortened by UMI.)
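The core multi-fidelity idea described in this abstract, using many cheap low-fidelity runs plus a few expensive high-fidelity runs to build an approximation of the high-fidelity response, can be sketched with an additive-correction surrogate. The functions below are toy stand-ins, not the thesis's CFD/CSD or comprehensive codes, and the linear correction is one of the simplest possible model-management choices.

```python
def high_fidelity(x):      # stand-in for an expensive CFD/CSD run
    return (x - 0.3) ** 2 + 0.1 * x

def low_fidelity(x):       # stand-in for a cheap comprehensive-code run
    return (x - 0.3) ** 2

def fit_linear_correction(xs, ys):
    # least-squares line y = a + b*x through the discrepancy samples
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return lambda x: a + b * x

# a handful of expensive samples train the low-to-high correction
train_x = [0.0, 0.5, 1.0]
delta = fit_linear_correction(
    train_x, [high_fidelity(x) - low_fidelity(x) for x in train_x])

def surrogate(x):
    # corrected low-fidelity model used in place of high-fidelity runs
    return low_fidelity(x) + delta(x)

err = abs(surrogate(0.25) - high_fidelity(0.25))
```

In an optimization loop the `surrogate` would be evaluated thousands of times, with occasional new high-fidelity runs used to refit `delta`, which is the "closing the loop" aspect the abstract emphasizes.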
NASA Astrophysics Data System (ADS)
Strazisar, T. M.; Koch, M.; Madden, C. J.
2016-02-01
Seagrasses and submerged aquatic vegetation (SAV) continue to decline globally from human-induced disturbance and habitat loss in estuarine and coastal ecosystems. The SAV Ruppia maritima historically created critical habitat at the Everglades-Florida Bay ecotone, but hydrological modifications and lower freshwater flows have resulted in significant declines in recent decades. We used a population-based approach to examine factors controlling Ruppia presence and abundance at the ecotone to expand the scientific base for management and restoration of SAV species in highly variable environments and examine factors required for Ruppia restoration in the Everglades. Life history transitions from seed through sexual reproduction were established under a range of field conditions critical to seagrass and SAV persistence, including salinity, temperature, light, sediment nutrients (P) and competitor SAV. We found multiple constraints to Ruppia life history development, including an ephemeral seed bank, low rates of successful germination and seedling survival and clonal reproduction limited by variable salinity, nutrients, light and competition with the macroalga Chara hornemannii. Because of low survival rates and limited clonal reproduction, Ruppia at the Everglades ecotone currently depends on high rates of viable seed production. However, development of large reproductive meadows requires high vegetative shoot densities. Thus, Everglades restoration should establish lower salinities to create higher seedling and adult survival and clonal reproduction to support successful sexual reproduction that can build up the seed bank for years when adult survival is limited. These population-based data from field experiments and surveys are being incorporated into a seagrass model to enable forecasting of population sustainability and evaluate Everglades restoration targets, which include restoring Ruppia to the southern Everglades-Florida Bay ecotone.
Event-Based control of depth of hypnosis in anesthesia.
Merigo, Luca; Beschi, Manuel; Padula, Fabrizio; Latronico, Nicola; Paltenghi, Massimiliano; Visioli, Antonio
2017-08-01
In this paper, we propose the use of an event-based control strategy for the closed-loop control of the depth of hypnosis in anesthesia by using propofol administration and the bispectral index as a controlled variable. A new event generator with high noise-filtering properties is employed in addition to a PIDPlus controller. The tuning of the parameters is performed off-line by using genetic algorithms on a given data set of patients. The effectiveness and robustness of the method are verified in simulation by implementing a Monte Carlo method to address the intra-patient and inter-patient variability. A comparison with a standard PID control structure shows that the event-based control system achieves a reduction of the total variation of the manipulated variable of 93% in the induction phase and of 95% in the maintenance phase. The use of event-based automatic control in anesthesia yields a fast induction phase with bounded overshoot and an acceptable disturbance rejection. A comparison with a standard PID control structure shows that the technique effectively mimics the behavior of the anesthesiologist by providing a significant reduction of the total variation of the manipulated variable. Copyright © 2017 Elsevier B.V. All rights reserved.
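The event-based idea can be sketched with a send-on-delta event generator in front of a PI controller: the measurement is forwarded, and the control law recomputed, only when the signal has moved enough. The first-order "patient" model, the gains and the 0.05 threshold below are invented for illustration; the paper's PIDPlus controller and PK/PD patient models are considerably richer.

```python
def simulate(threshold, steps=400, dt=0.1):
    setpoint, kp, ki = 1.0, 2.0, 1.0   # illustrative PI tuning
    y, u, integral = 0.0, 0.0, 0.0
    last_sent = None
    updates = 0
    for _ in range(steps):
        # event generator: transmit only when the measurement has moved
        # more than `threshold` since the last transmitted value
        if last_sent is None or abs(y - last_sent) > threshold:
            last_sent = y
            err = setpoint - last_sent
            integral += err * dt
            u = kp * err + ki * integral   # PI control action
            updates += 1
        y += dt * (-y + u)                 # first-order plant response
    return y, updates

y_cont, n_cont = simulate(threshold=0.0)   # time-driven baseline
y_evt, n_evt = simulate(threshold=0.05)    # event-based variant
```

With the threshold active, far fewer control updates are issued at the price of a small band around the setpoint, the same trade-off the paper quantifies through the total variation of the manipulated variable.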
ERIC Educational Resources Information Center
Ayundawati, Dyah; Setyosari, Punaji; Susilo, Herawati; Sihkabuden
2016-01-01
This study aims to determine the influence of problem-based learning strategies and achievement motivation on learning achievement. The method used in this research is quantitative. Two instruments were used in this study: one to measure the moderator variable (achievement motivation) and one to measure the dependent variable (the…
ERIC Educational Resources Information Center
von Davier, Matthias
2016-01-01
This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…
Van Holsbeke, C; Ameye, L; Testa, A C; Mascilini, F; Lindqvist, P; Fischerova, D; Frühauf, F; Fransis, S; de Jonge, E; Timmerman, D; Epstein, E
2014-05-01
To develop and validate strategies, using new ultrasound-based mathematical models, for the prediction of high-risk endometrial cancer and compare them with strategies using previously developed models or the use of preoperative grading only. Women with endometrial cancer were prospectively examined using two-dimensional (2D) and three-dimensional (3D) gray-scale and color Doppler ultrasound imaging. More than 25 ultrasound, demographic and histological variables were analyzed. Two logistic regression models were developed: one 'objective' model using mainly objective variables; and one 'subjective' model including subjective variables (i.e. subjective impression of myometrial and cervical invasion, preoperative grade and demographic variables). The following strategies were validated: a one-step strategy using only preoperative grading and two-step strategies using preoperative grading as the first step and one of the new models, subjective assessment or previously developed models as a second step. One-hundred and twenty-five patients were included in the development set and 211 were included in the validation set. The 'objective' model retained preoperative grade and minimal tumor-free myometrium as variables. The 'subjective' model retained preoperative grade and subjective assessment of myometrial invasion. On external validation, the performance of the new models was similar to that on the development set. Sensitivity for the two-step strategy with the 'objective' model was 78% (95% CI, 69-84%) at a cut-off of 0.50, 82% (95% CI, 74-88%) for the strategy with the 'subjective' model and 83% (95% CI, 75-88%) for that with subjective assessment. Specificity was 68% (95% CI, 58-77%), 72% (95% CI, 62-80%) and 71% (95% CI, 61-79%) respectively. The two-step strategies detected up to twice as many high-risk cases as preoperative grading only. The new models had a significantly higher sensitivity than did previously developed models, at the same specificity. 
Two-step strategies with 'new' ultrasound-based models predict high-risk endometrial cancers with good accuracy and do this better than do previously developed models. Copyright © 2013 ISUOG. Published by John Wiley & Sons Ltd.
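The two-step logic described above can be sketched as follows: step 1 labels high-grade tumours as high risk outright, and step 2 applies a logistic model to the remaining cases. The variables mirror the paper's 'objective' model (preoperative grade and minimal tumour-free myometrium), but the coefficients and cut-off below are invented for illustration, not the published model.

```python
import math

def logistic_risk(grade, tumor_free_myometrium_mm):
    # hypothetical coefficients, NOT the fitted model from the study
    z = 1.5 * (grade - 1) - 0.25 * tumor_free_myometrium_mm + 0.5
    return 1.0 / (1.0 + math.exp(-z))

def two_step_high_risk(grade, tumor_free_myometrium_mm, cutoff=0.5):
    if grade == 3:            # step 1: preoperative grading alone
        return True
    # step 2: logistic model applied only to grade 1-2 cases
    return logistic_risk(grade, tumor_free_myometrium_mm) >= cutoff
```

The advantage of the two-step structure is visible in the sketch: cases that grading alone would miss (e.g. a grade-2 tumour with deep invasion) can still be flagged by the second-step model.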
Variable Selection for Regression Models of Percentile Flows
NASA Astrophysics Data System (ADS)
Fouad, G.
2017-12-01
Percentile flows describe the flow magnitude equaled or exceeded for a given percent of time, and are widely used in water resource management. However, these statistics are normally unavailable since most basins are ungauged. Percentile flows of ungauged basins are often predicted using regression models based on readily observable basin characteristics, such as mean elevation. The number of these independent variables is too large to evaluate all possible models. A subset of models is typically evaluated using automatic procedures, like stepwise regression. This ignores a large variety of methods from the field of feature (variable) selection and physical understanding of percentile flows. A study of 918 basins in the United States was conducted to compare an automatic regression procedure to the following variable selection methods: (1) principal component analysis, (2) correlation analysis, (3) random forests, (4) genetic programming, (5) Bayesian networks, and (6) physical understanding. The automatic regression procedure only performed better than principal component analysis. Poor performance of the regression procedure was due to a commonly used filter for multicollinearity, which rejected the strongest models because they had cross-correlated independent variables. Multicollinearity did not decrease model performance in validation because of a representative set of calibration basins. Variable selection methods based strictly on predictive power (numbers 2-5 from above) performed similarly, likely indicating a limit to the predictive power of the variables. Similar performance was also reached using variables selected based on physical understanding, a finding that substantiates recent calls to emphasize physical understanding in modeling for predictions in ungauged basins. The strongest variables highlighted the importance of geology and land cover, whereas widely used topographic variables were the weakest predictors. 
Variables suffered from a high degree of multicollinearity, possibly illustrating the co-evolution of climatic and physiographic conditions. Given the ineffectiveness of many variables used here, future work should develop new variables that target specific processes associated with percentile flows.
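The multicollinearity filter discussed above can be sketched as follows: a candidate model is rejected whenever any pair of its predictors is cross-correlated beyond a threshold. The basin data and the 0.7 cut-off are illustrative assumptions; the point of the sketch is that two strong, physically related predictors can fail the filter together.

```python
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def passes_collinearity_filter(predictors, threshold=0.7):
    # reject the candidate model if any predictor pair is too correlated
    names = list(predictors)
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(pearson(predictors[names[i]],
                           predictors[names[j]])) > threshold:
                return False
    return True

# made-up basin characteristics: elevation and snow fraction co-vary
basins = {
    "elevation": [100, 200, 300, 400, 500],
    "snow_frac": [0.1, 0.25, 0.3, 0.45, 0.5],
    "forest_pct": [30, 10, 40, 20, 35],
}
ok = passes_collinearity_filter({"elevation": basins["elevation"],
                                 "snow_frac": basins["snow_frac"]})
```

Here the elevation/snow-fraction pair is rejected even though, as in the study, the combination might be the strongest predictor of percentile flows; this is exactly the behavior that penalized the automatic procedure.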
NASA Astrophysics Data System (ADS)
Amalita, N.; Fitria, D.; Distian, V.
2018-04-01
National examination is an assessment of learning outcomes that aims to assess the achievement of graduate competence nationally. The results of the national examination are used to map educational issues in order to inform national education policy. They are also used as a reference for admitting new students who continue their education at a higher level. The National Examination results in West Sumatra in 2016 decreased from the previous year, at both the elementary school (SD) and junior high school (SMP) levels. This paper aims to determine the characteristics of the National Examination results in each regency/city in West Sumatra for the elementary and junior levels by using bi-plot analysis. The bi-plot analysis shows that the National Examination results of the regencies/cities in West Sumatra Province are quite diverse. At the junior high school level, 9 regencies/cities have similar characteristics. English shows the greatest diversity among all subjects. At the junior high school level the variables are positively correlated; in particular, mathematics correlates positively with English. Based on the National Examination marks at the elementary school level in West Sumatra, 8 regencies/cities have similar characteristics. The variables at the elementary level are also positively correlated; in particular, Science (IPA) correlates positively with Language.
Variable Emittance Electrochromics Using Ionic Electrolytes and Low Solar Absorptance Coatings
NASA Technical Reports Server (NTRS)
Chandrasekhar, Prasanna
2011-01-01
One of the last remaining technical hurdles with variable emittance devices or skins based on conducting polymer electrochromics is the high solar absorptance of their top surfaces. This high solar absorptance causes overheating of the skin when facing the Sun in space. Existing technologies such as mechanical louvers or loop heat pipes are virtually inapplicable to micro (< 20 kg) and nano (< 5 kg) spacecraft. Novel coatings lower the solar absorptance to values of α(s) between 0.30 and 0.46. Coupled with the emittance properties of the variable emittance skins, this lowers the surface temperature of the skins facing the Sun to between 30 and 60 C, which is much lower than previous results of 100 C, and is well within acceptable satellite operations ranges. The performance of this technology is better than that of current new technologies such as microelectromechanical systems (MEMS), electrostatics, and electrophoretics, especially in applications involving micro and nano spacecraft. The coatings are deposited inside a high vacuum, layering multiple coatings onto the top surfaces of variable emittance skins. They are completely transparent in the entire relevant infrared region (about 2 to 45 microns), but highly reflective in the visible-NIR (near infrared) region of relevance to solar absorptance.
BASiCS: Bayesian Analysis of Single-Cell Sequencing Data
Vallejos, Catalina A.; Marioni, John C.; Richardson, Sylvia
2015-01-01
Single-cell mRNA sequencing can uncover novel cell-to-cell heterogeneity in gene expression levels in seemingly homogeneous populations of cells. However, these experiments are prone to high levels of unexplained technical noise, creating new challenges for identifying genes that show genuine heterogeneous expression within the population of cells under study. BASiCS (Bayesian Analysis of Single-Cell Sequencing data) is an integrated Bayesian hierarchical model where: (i) cell-specific normalisation constants are estimated as part of the model parameters, (ii) technical variability is quantified based on spike-in genes that are artificially introduced to each analysed cell’s lysate and (iii) the total variability of the expression counts is decomposed into technical and biological components. BASiCS also provides an intuitive detection criterion for highly (or lowly) variable genes within the population of cells under study. This is formalised by means of tail posterior probabilities associated to high (or low) biological cell-to-cell variance contributions, quantities that can be easily interpreted by users. We demonstrate our method using gene expression measurements from mouse Embryonic Stem Cells. Cross-validation and meaningful enrichment of gene ontology categories within genes classified as highly (or lowly) variable supports the efficacy of our approach. PMID:26107944
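A heavily simplified, moment-based sketch of the decomposition idea behind BASiCS follows: spike-in genes, whose input is constant across cells, set a technical-noise floor, and a gene's biological variability is the excess of its squared coefficient of variation over that floor. BASiCS itself is a full Bayesian hierarchical model with cell-specific normalisation and posterior tail probabilities; the counts below are made up and this sketch conveys only the intuition.

```python
def squared_cv(counts):
    # squared coefficient of variation: variance / mean^2
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return var / (mean * mean)

def biological_cv2(gene_counts, spike_counts):
    # spike-ins have constant true abundance, so their CV^2 estimates
    # the technical component; the remainder is attributed to biology
    technical = squared_cv(spike_counts)
    return max(0.0, squared_cv(gene_counts) - technical)

spike = [100, 95, 105, 98, 102]       # constant input: technical noise only
flat_gene = [50, 48, 52, 49, 51]      # variability at the noise floor
variable_gene = [10, 90, 15, 85, 50]  # genuine cell-to-cell heterogeneity
```

A gene whose variability sits at the spike-in floor gets a biological component of zero, while a genuinely heterogeneous gene retains a large residual, which is what a highly-variable-gene call formalises.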
Mortality determinants and prediction of outcome in high risk newborns.
Dalvi, R; Dalvi, B V; Birewar, N; Chari, G; Fernandez, A R
1990-06-01
The aim of this study was to determine independent patient-related predictors of mortality in high risk newborns admitted at our centre. The study population comprised 100 consecutive newborns each from the premature unit (PU) and sick baby care unit (SBCU), respectively. Thirteen high risk factors (variables) for each of the two units were entered into a multivariate regression analysis. Variables with independent predictive value for poor outcome (i.e., death) in the PU were weight less than 1 kg, hyaline membrane disease, neurologic problems, and intravenous therapy. High risk factors in the SBCU included blood gas abnormality, bleeding phenomena, recurrent convulsions, apnea, and congenital anomalies. Identification of these factors guided us in defining priority areas for improvement in our system of neonatal care. Also, based on these variables a simple predictive score for outcome was constructed. The prediction equation and the score were cross-validated by applying them to a 'test-set' of 100 newborns each for the PU and SBCU. Results showed a comparable sensitivity, specificity and error rate.
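The kind of simple additive score the study describes can be sketched as follows for the premature unit (PU). The factor names follow the abstract, but the point values and the cut-off are invented for illustration; they are not the study's validated score.

```python
# hypothetical point weights for the PU risk factors named in the abstract
PU_POINTS = {
    "weight_below_1kg": 3,
    "hyaline_membrane_disease": 2,
    "neurologic_problems": 2,
    "intravenous_therapy": 1,
}

def pu_risk_score(findings):
    # sum the points of every risk factor present in this newborn
    return sum(pts for name, pts in PU_POINTS.items() if findings.get(name))

def predicted_outcome(findings, cutoff=4):
    # a score at or above the cut-off predicts poor outcome
    return "high risk" if pu_risk_score(findings) >= cutoff else "lower risk"
```

Such scores trade a little accuracy for bedside usability: each factor contributes a fixed number of points, so the prediction can be computed without the underlying regression equation.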
A diagnostic model for chronic hypersensitivity pneumonitis
Johannson, Kerri A; Elicker, Brett M; Vittinghoff, Eric; Assayag, Deborah; de Boer, Kaïssa; Golden, Jeffrey A; Jones, Kirk D; King, Talmadge E; Koth, Laura L; Lee, Joyce S; Ley, Brett; Wolters, Paul J; Collard, Harold R
2017-01-01
The objective of this study was to develop a diagnostic model that allows for a highly specific diagnosis of chronic hypersensitivity pneumonitis using clinical and radiological variables alone. Chronic hypersensitivity pneumonitis and other interstitial lung disease cases were retrospectively identified from a longitudinal database. High-resolution CT scans were blindly scored for radiographic features (eg, ground-glass opacity, mosaic perfusion) as well as the radiologist’s diagnostic impression. Candidate models were developed then evaluated using clinical and radiographic variables and assessed by the cross-validated C-statistic. Forty-four chronic hypersensitivity pneumonitis and eighty other interstitial lung disease cases were identified. Two models were selected based on their statistical performance, clinical applicability and face validity. Key model variables included age, down feather and/or bird exposure, radiographic presence of ground-glass opacity and mosaic perfusion and moderate or high confidence in the radiographic impression of chronic hypersensitivity pneumonitis. Models were internally validated with good performance, and cut-off values were established that resulted in high specificity for a diagnosis of chronic hypersensitivity pneumonitis. PMID:27245779
NASA Astrophysics Data System (ADS)
Sangadji, Iriansyah; Arvio, Yozika; Indrianto
2018-03-01
To analyze, with reasonable accuracy, the pattern of changes in values that vary dynamically over a given period, a tool based on specific technical working principles or a specific analytical method is required. This affects the level of validity of the output produced by such a system. Subtractive clustering is based on the density (potential) of data points in a space (variable). The basic concept of subtractive clustering is to determine the regions in a variable that have high potential relative to the surrounding points. The result presented in this paper is a segmentation of behavior patterns based on quantity value movement. It shows the number of clusters formed and which clusters have many members.
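The subtractive clustering procedure described above can be sketched in one dimension: each point's potential is the summed Gaussian influence of its neighbours, the highest-potential point becomes a cluster centre, and that centre's influence is then subtracted before the next centre is chosen. The radii, stopping ratio and sample data below are illustrative choices, not the paper's settings.

```python
import math

def subtractive_clustering(points, ra=1.0, rb=1.5, stop_ratio=0.15):
    alpha = 4.0 / ra ** 2   # influence radius for potential build-up
    beta = 4.0 / rb ** 2    # (larger) radius for potential subtraction
    # initial potential: summed Gaussian influence of all neighbours
    pot = [sum(math.exp(-alpha * (p - q) ** 2) for q in points)
           for p in points]
    centers = []
    first_peak = max(pot)
    while True:
        k = max(range(len(points)), key=lambda i: pot[i])
        if pot[k] < stop_ratio * first_peak:
            break                       # remaining density is too weak
        centers.append(points[k])
        peak = pot[k]
        # revise potentials: subtract the new centre's influence so the
        # next centre is found in a different dense region
        pot = [pot[i] - peak * math.exp(-beta * (points[i] - points[k]) ** 2)
               for i in range(len(points))]
    return centers

data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]   # two obvious dense regions
centers = subtractive_clustering(data)
```

Because the number of centres emerges from the potential landscape rather than being fixed in advance, the method directly yields the "number of clusters formed" that the abstract reports.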
Ballabio, Davide; Consonni, Viviana; Mauri, Andrea; Todeschini, Roberto
2010-01-11
In multivariate regression and classification problems, variable selection is an important procedure used to select an optimal subset of variables with the aim of producing more parsimonious and possibly more predictive models. Variable selection is often necessary when dealing with methodologies that produce thousands of variables, such as Quantitative Structure-Activity Relationships (QSARs) and highly dimensional analytical procedures. In this paper a novel method for variable selection for classification purposes is introduced. This method exploits the recently proposed Canonical Measure of Correlation between two sets of variables (CMC index). The CMC index is in this case calculated for two specific sets of variables, the former being comprised of the independent variables and the latter of the unfolded class matrix. The CMC values, calculated by considering one variable at a time, can be sorted, yielding a ranking of the variables on the basis of their class discrimination capabilities. Alternatively, the CMC index can be calculated for all the possible combinations of variables and the variable subset with the maximal CMC can be selected, but this procedure is computationally more demanding and the classification performance of the selected subset is not always the best one. The effectiveness of the CMC index in selecting variables with discriminative ability was compared with that of other well-known strategies for variable selection, such as the Wilks' Lambda, the VIP index based on Partial Least Squares-Discriminant Analysis, and the selection provided by classification trees. A variable Forward Selection based on the CMC index was finally used in conjunction with Linear Discriminant Analysis. This approach was tested on several chemical data sets. The results obtained were encouraging.
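The one-variable-at-a-time ranking described above can be sketched as follows. As a simple stand-in for the CMC index (whose exact definition is given in the paper), each variable is scored here by its squared Pearson correlation with one column of the unfolded class matrix; the two-class data are made up.

```python
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def rank_by_discrimination(variables, labels):
    # one column of the unfolded class matrix: 1 for class "a", else 0
    indicator = [1.0 if lab == "a" else 0.0 for lab in labels]
    scores = {name: pearson(vals, indicator) ** 2
              for name, vals in variables.items()}
    return sorted(scores, key=scores.get, reverse=True)

labels = ["a", "a", "b", "b"]
variables = {
    "width": [1.0, 1.1, 3.0, 3.2],   # separates the two classes
    "noise": [2.0, 3.0, 2.1, 3.1],   # unrelated to class membership
}
ranking = rank_by_discrimination(variables, labels)
```

Scoring one variable at a time keeps the procedure linear in the number of variables, which is why the paper contrasts it with the combinatorial all-subsets alternative.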
The household-based socio-economic deprivation index in Setiu Wetlands, Malaysia
NASA Astrophysics Data System (ADS)
Zakaria, Syerrina; May, Chin Sin; Rahman, Nuzlinda Abdul
2017-08-01
Deprivation indices are usually used in public health studies, but they can also be used to measure the level of deprivation in an area or a village. These indices are also referred to as indices of inequality or disadvantage. Although many such indices have been built before, it is considered less appropriate to apply existing indices to other countries or areas with different socio-economic conditions and different geographical characteristics. The objective of this study is to construct an index based on socio-economic factors in the Setiu Wetlands (Jajaran Merang, Jajaran Setiu and Jajaran Kuala Besut) in Terengganu, Malaysia, defined as a weighted household-based socioeconomic deprivation index. This study employed variables on income level, education level and employment rate obtained from a questionnaire administered in 64 villages to 1024 respondents. Factor analysis is used to reduce the observed variables to a smaller number of components or factors. Using factor analysis, one factor is extracted from the 3 latent variables; this factor is known as the socioeconomic deprivation index. Based on the result, areas with index values ranging from low to high were identified.
Understanding DOC Mobilization Dynamics Through High Frequency Measurements in a Headwater Catchment
NASA Astrophysics Data System (ADS)
Werner, B.; Musolff, A.; Lechtenfeld, O.; de Rooij, G. H.; Fleckenstein, J. H.
2017-12-01
Increasing dissolved organic carbon (DOC) exports from headwater catchments impact the quality of downstream waters and pose challenges to water supply. The importance of riparian zones for DOC export from catchments in humid, temperate climates has generally been acknowledged, but the hydrological controls and biogeochemical factors that govern mobilization of DOC from riparian zones remain elusive. By analyzing high-frequency time series of UV-VIS-based water quality measurements, we therefore aim at a better understanding of the temporal dynamics of DOC mobilization and export. In a first step a one year high frequency (15 minutes) data set from a headwater catchment in the Harz Mountains (Germany) was systematically analyzed for event-based patterns in DOC concentrations. Here, a simplistic linear model was generated to explain DOC concentration level and variability in the stream. Furthermore, spectral (e.g. slopes and SUVA254) and molecular (FT-ICR-MS) characterization of DOC was used to fingerprint in-stream DOC during events. Continuous DOC concentrations were best predicted (R², NSE = 0.53) by instantaneous discharge (Q) and antecedent wetness conditions of the last 30 days (AWC30 = Precip.30/PET30) as well as mean air temperature (Tmean30) and mean discharge (Qmean30) of the preceding 30 days. Analyses of 36 events revealed seasonal trends for the slope, intercept and R² of linear log(DOC)-log(Q) regressions that can be best explained by the mean air temperature of the preceding 15 days. Continuously available optical DOC quality parameters SUVA254 and spectral slope (275 nm - 295 nm) systematically changed with shifts in discharge and in DOC concentration. This is underlined by selected FT-ICR-MS measurements indicating higher DOC aromaticity and oxygen content at high flow conditions. 
The change of DOC quality parameters during events indicates a shift in the activated source zones: DOC with a different quality was mobilized during high-flow conditions, when higher groundwater levels connected formerly disconnected DOC source zones to the stream. We conclude that the high concentration variability of DOC can be explained by only a few controlling variables. These variables can be linked to event-based DOC source activation and to more seasonal controls of DOC production.
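The event-level log(DOC)-log(Q) regression described above amounts to fitting a power-law rating curve DOC = a · Q^b by least squares in log space. The discharge/concentration pairs below are made up for illustration; in the study each storm event would yield its own slope b and intercept.

```python
import math

def fit_loglog(q, doc):
    # least-squares fit of log(doc) = log(a) + b*log(q)
    lx = [math.log(v) for v in q]
    ly = [math.log(v) for v in doc]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / \
        sum((x - mx) ** 2 for x in lx)
    log_a = my - b * mx
    return math.exp(log_a), b   # DOC = a * Q**b

q = [0.5, 1.0, 2.0, 4.0, 8.0]      # discharge, illustrative units
doc = [3.0, 4.2, 6.0, 8.4, 12.0]   # DOC (mg/L): roughly doubles as Q quadruples
a, b = fit_loglog(q, doc)
```

A slope b near 0.5, as in this toy data, would indicate concentrations rising with discharge but less than proportionally; the seasonal drift of such slopes is what the abstract links to antecedent air temperature.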
NASA Astrophysics Data System (ADS)
Roedig, Edna; Cuntz, Matthias; Huth, Andreas
2015-04-01
The effects of climatic inter-annual fluctuations and human activities on the global carbon cycle are uncertain and currently a major issue in global vegetation models. Individual-based forest gap models, on the other hand, model vegetation structure and dynamics on a small spatial (<100 ha) and large temporal scale (>1000 years). They are well-established tools to reproduce successions of highly diverse forest ecosystems and to investigate disturbances such as logging or fire events. However, the parameterizations of the relationships between short-term climate variability and forest model processes (e.g. daily variable temperature and gross primary production, GPP) are often uncertain in these models and cannot be constrained from forest inventories. We addressed this uncertainty and linked high-resolution eddy-covariance (EC) data with an individual-based forest gap model. The forest model FORMIND was applied to three diverse tropical forest sites in the Amazonian rainforest. Species diversity was categorized into three plant functional types. The parameterizations for the steady state of biomass and forest structure were calibrated and validated with different forest inventories. The parameterizations of relationships between short-term climate variability and forest model processes were evaluated with EC data on a daily time step. The validations of the steady state showed that the forest model could reproduce biomass and forest structures from forest inventories. The daily estimations of carbon fluxes showed that the forest model reproduces GPP as observed by the EC method. Daily fluctuations of GPP were clearly reflected as a response to daily climate variability. Ecosystem respiration remains a challenge on a daily time step due to a simplified soil respiration approach. In the long term, however, the dynamic forest model is expected to estimate carbon budgets for highly diverse tropical forests where EC measurements are rare.
Johnson, Brent A
2009-10-01
We consider estimation and variable selection in the partial linear model for censored data. The partial linear model for censored data is a direct extension of the accelerated failure time model, the latter of which is a very important alternative to the proportional hazards model. We extend rank-based lasso-type estimators to a model that may contain nonlinear effects. Variable selection in such a partial linear model has direct application to high-dimensional survival analyses that attempt to adjust for clinical predictors. In the microarray setting, previous methods can adjust for other clinical predictors by assuming that clinical and gene expression data enter the model linearly in the same fashion. Here, we select important variables after adjusting for prognostic clinical variables, but the clinical effects are assumed to be nonlinear. Our estimator is based on stratification and can be extended naturally to account for multiple nonlinear effects. We illustrate the utility of our method through simulation studies and application to the Wisconsin prognostic breast cancer data set.
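The selection-by-shrinkage idea behind lasso-type estimators can be illustrated in the textbook special case of an orthonormal design, where the lasso solution is simply a soft-thresholding of the least-squares coefficients. This is a generic illustration of the shrinkage mechanism, not the authors' rank-based estimator; the coefficients below are invented.

```python
def soft_threshold(beta, lam):
    """Lasso solution for one coefficient under an orthonormal design:
    shrink the least-squares estimate toward zero, setting small ones to 0."""
    if beta > lam:
        return beta - lam
    if beta < -lam:
        return beta + lam
    return 0.0

# Invented least-squares coefficients; lam controls how many survive.
ols = [2.5, -0.3, 0.9, -1.8, 0.1]
lam = 1.0
lasso = [soft_threshold(b, lam) for b in ols]
selected = [i for i, b in enumerate(lasso) if b != 0.0]
```

Variables whose shrunken coefficient is exactly zero are dropped, which is how lasso-type methods perform selection and estimation in one step.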
Wen, Dan; Herrmann, Anne-Kristin; Borchardt, Lars; Simon, Frank; Liu, Wei; Kaskel, Stefan; Eychmüller, Alexander
2014-02-19
We report the controllable synthesis of Pd aerogels with high surface area and porosity by destabilizing colloidal solutions of Pd nanoparticles with variable concentrations of calcium ions. Enzyme electrodes based on Pd aerogels co-immobilized with glucose oxidase show high activity toward glucose oxidation and are promising materials for applications in bioelectronics.
Examining Leisure Boredom in High School Students in Turkey
ERIC Educational Resources Information Center
Akgul, Merve Beyza
2015-01-01
High school students who do not have leisure skills are more likely to be bored during leisure time. The aim of the study is to examine leisure boredom of high school students based on some variables (gender and income), and to investigate the relationship between leisure boredom, the presence/absence of anti-social behavior and the frequency at…
Stress-based animal models of depression: Do we actually know what we are doing?
Yin, Xin; Guven, Nuri; Dietis, Nikolas
2016-12-01
Depression is one of the leading causes of disability and a significant health concern worldwide. Much of our current understanding of the pathogenesis of depression and the pharmacology of antidepressant drugs is based on pre-clinical models. Three of the most popular stress-based rodent models are the forced swimming test, the chronic mild stress paradigm and the learned helplessness model. Despite their recognizable advantages and limitations, they are associated with immense variability due to the high number of design parameters that define them. Only a few studies have reported how minor modifications of these parameters affect the model phenotype. Thus, the existing variability in how these models are used has been a strong barrier to drug development as well as to the benchmarking and evaluation of these pre-clinical models of depression. It has also been the source of confusing variability in experimental outcomes between research groups using the same models. In this review, we summarize the known variability in the experimental protocols, identify the main and relevant parameters for each model and describe the variable values using characteristic examples. Our view of depression and our efforts to discover novel and effective antidepressants are largely based on our detailed knowledge of these testing paradigms, and require a sound understanding of the importance of individual parameters to optimize and improve these pre-clinical models. Copyright © 2016 Elsevier B.V. All rights reserved.
Ferrier, Rachel; Hezard, Bernard; Lintz, Adrienne; Stahl, Valérie
2013-01-01
An individual-based modeling (IBM) approach was developed to describe the behavior of a few Listeria monocytogenes cells contaminating the surface of smear soft cheese. The IBM approach consisted of assessing the stochastic individual behaviors of cells on cheese surfaces while knowing the characteristics of their surrounding microenvironments. We used a microelectrode for pH measurements and micro-osmolality to assess the water activity of cheese microsamples. These measurements revealed a high variability of microscale pH compared to that of macroscale pH. A model describing the increase in pH from approximately 5.0 to more than 7.0 during ripening was developed. The spatial variability of the cheese surface, characterized by a pH that increases with radius and is higher on crests than in hollows of the cheese rind, was also modeled. The microscale water activity ranged from approximately 0.96 to 0.98 and was stable during ripening. The spatial variability on cheese surfaces was low compared to between-cheese variability. Models describing the microscale variability of cheese characteristics were combined with the IBM approach to simulate the stochastic growth of L. monocytogenes on cheese, and these simulations were compared to bacterial counts obtained from irradiated cheeses artificially contaminated at different ripening stages. The variability of L. monocytogenes counts simulated with the IBM/microenvironmental approach was consistent with the observed one. Contrasting situations corresponding to no growth or highly contaminated foods could be deduced from these models. Moreover, the IBM approach was more effective than the traditional population/macroenvironmental approach at describing the actual variability in bacterial behavior. PMID:23872572
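The individual-based idea in the cheese study can be sketched minimally: each cell sees its own microenvironmental pH and therefore grows at its own rate, so between-cell variability emerges even within a single simulation run. The pH-to-growth-rate mapping below is a hypothetical toy, not the published model.

```python
import random

def simulate_cells(n_cells, hours, mu_max=0.3, seed=42):
    """Draw a microscale pH for each cell and grow it at a cell-specific
    rate; returns the per-cell descendant counts (a stochastic IBM toy)."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_cells):
        ph = rng.uniform(5.0, 7.0)        # microscale pH within measured range
        mu = mu_max * (ph - 5.0) / 2.0    # toy assumption: growth improves toward pH 7
        finals.append(2 ** (mu * hours))  # doublings accumulated over 'hours'
    return finals

finals = simulate_cells(n_cells=10, hours=24)
total = sum(finals)
```

Running many such simulations with microenvironment models for pH and water activity gives a distribution of counts, which is what the IBM/microenvironmental approach compares against observed plate counts.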
NASA Astrophysics Data System (ADS)
Goodman, J. W.
This book is based on the thesis that some training in the area of statistical optics should be included as a standard part of any advanced optics curriculum. Random variables are discussed, taking into account definitions of probability and random variables, distribution functions and density functions, an extension to two or more random variables, statistical averages, transformations of random variables, sums of real random variables, Gaussian random variables, complex-valued random variables, and random phasor sums. Other subjects examined are related to random processes, some first-order properties of light waves, the coherence of optical waves, some problems involving high-order coherence, effects of partial coherence on imaging systems, imaging in the presence of randomly inhomogeneous media, and fundamental limits in photoelectric detection of light. Attention is given to deterministic versus statistical phenomena and models, the Fourier transform, and the fourth-order moment of the spectrum of a detected speckle image.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paliya, Vaidehi S.; Ajello, M.; Kaur, A.
We report the first results obtained from our campaign to characterize the intra-night optical variability (INOV) properties of Fermi-detected blazars, using observations from the recently commissioned 1.3 m J. C. Bhattacharya telescope (JCBT). During the first run, we were able to observe 17 blazars in the Bessel R filter for ∼137 hr. Using C- and scaled F-statistics, we quantify the extent of INOV and derive the duty cycle (DC), which is the fraction of time during which a source exhibits substantial flux variability. We find a high DC of 40% for BL Lac objects, whereas the flat spectrum radio quasars are relatively less variable (DC ∼ 15%). However, when estimated for blazar sub-classes, a high DC of ∼59% is found in low synchrotron peaked (LSP) blazars, whereas intermediate and high synchrotron peaked objects have low DCs of ∼11% and 13%, respectively. We find evidence of an association between high-amplitude INOV and the γ-ray flaring state. We also notice high polarization during the elevated INOV states (for the sources with polarimetric data available), thus supporting a jet-based origin of the observed variability. We plan to enlarge the sample and utilize the time available on small telescopes, such as the 1.3 m JCBT, to strengthen or verify the results obtained in this work and those existing in the literature.
Articular cartilage degeneration classification by means of high-frequency ultrasound.
Männicke, N; Schöne, M; Oelze, M; Raum, K
2014-10-01
To date, only single ultrasound parameters have been considered in statistical analyses to characterize osteoarthritic changes in articular cartilage, and the potential benefit of using parameter combinations for characterization remains unclear. Therefore, the aim of this work was to perform feature selection and classification of a Mankin subset score (i.e., cartilage surface and cell sub-scores) using ultrasound-based parameter pairs and to investigate both classification accuracy and the sensitivity towards different degeneration stages. 40 punch biopsies of human cartilage were previously scanned ex vivo with a 40-MHz transducer. Ultrasound-based surface parameters, as well as backscatter and envelope statistics parameters, were available. Logistic regression was performed with each unique US parameter pair as predictor and different degeneration stages as response variables. The best ultrasound-based parameter pair for each Mankin subset score value was assessed by highest classification accuracy and utilized in receiver operating characteristic (ROC) analysis. Classifications discriminating between early degenerations yielded area under the ROC curve (AUC) values of 0.94-0.99 (mean ± SD: 0.97 ± 0.03). In contrast, classifications among higher Mankin subset scores resulted in lower AUC values: 0.75-0.91 (mean ± SD: 0.84 ± 0.08). Variable sensitivities of the different ultrasound features were observed with respect to different degeneration stages. Our results strongly suggest that combinations of high-frequency ultrasound-based parameters have the potential to characterize different, particularly very early, degeneration stages of hyaline cartilage. The variable sensitivities towards different degeneration stages suggest that concurrent estimation of multiple ultrasound-based parameters is diagnostically valuable. In vivo application of the present findings is conceivable in both minimally invasive arthroscopic ultrasound and high-frequency transcutaneous ultrasound.
Copyright © 2014 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
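The ROC analysis used in the cartilage study reduces to a simple pairwise computation: the AUC is the probability that a randomly chosen degenerated (positive) sample receives a higher classifier score than a healthy (negative) one. A minimal sketch with invented scores:

```python
def roc_auc(pos_scores, neg_scores):
    """Area under the ROC curve by pairwise comparison: the fraction of
    (positive, negative) pairs ranked correctly, counting ties as 0.5."""
    wins = 0.0
    for p in pos_scores:
        for q in neg_scores:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Invented classifier outputs for degenerated vs. healthy samples.
auc_perfect = roc_auc([0.9, 0.8], [0.2, 0.1])  # complete separation
auc_mixed = roc_auc([0.8, 0.3], [0.5, 0.1])    # one inversion among 4 pairs
```

An AUC near 1.0, as reported for the early-degeneration classifications, means almost every degenerated sample outscores every healthy one.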
Bateman, Brooke L; Pidgeon, Anna M; Radeloff, Volker C; Flather, Curtis H; VanDerWal, Jeremy; Akçakaya, H Resit; Thogmartin, Wayne E; Albright, Thomas P; Vavrus, Stephen J; Heglund, Patricia J
2016-12-01
Climate conditions, such as temperature or precipitation, averaged over several decades strongly affect species distributions, as evidenced by experimental results and a plethora of models demonstrating statistical relations between species occurrences and long-term climate averages. However, long-term averages can conceal climate changes that have occurred in recent decades and may not capture actual species occurrence well because the distributions of species, especially at the edges of their range, are typically dynamic and may respond strongly to short-term climate variability. Our goal here was to test whether bird occurrence can be predicted by covariates based on short-term climate variability or on long-term climate averages. We parameterized species distribution models (SDMs) based on either short-term variability or long-term average climate covariates for 320 bird species in the conterminous USA and tested whether any life-history trait-based guilds were particularly sensitive to short-term conditions. Models including short-term climate variability performed well based on their cross-validated area under the curve (AUC) score (0.85), as did models based on long-term climate averages (0.84). Similarly, both models performed well compared to independent presence/absence data from the North American Breeding Bird Survey (independent AUC of 0.89 and 0.90, respectively). However, models based on short-term variability covariates more accurately classified true absences for most species (73% of true absences classified within the lowest quarter of environmental suitability vs. 68%). In addition, they have the advantage that they can reveal the dynamic relationship between species and their environment because they capture the spatial fluctuations of species potential breeding distributions. 
With this information, we can identify which species and guilds are sensitive to climate variability, identify sites of high conservation value where climate variability is low, and assess how species' potential distributions may have already shifted due to recent climate change. However, long-term climate averages require less data and processing time and may be more readily available for some areas of interest. Where data on short-term climate variability are not available, long-term climate information is a sufficient predictor of species distributions in many cases. Nevertheless, short-term climate variability data may provide information not captured with long-term climate data for use in SDMs. © 2016 by the Ecological Society of America.
Li, Jin; Tran, Maggie; Siwabessy, Justy
2016-01-01
Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia's marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy, and can be inferred from underwater video footage at limited locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF) based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods, namely variable importance (VI), averaged variable importance (AVI), knowledge-informed AVI (KIAVI), Boruta and regularized RF (RRF), were tested based on predictive accuracy. Effects of highly correlated, important and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach of pre-selecting predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of the important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to 'small p and large n' problems in environmental sciences. 
Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and caution should be taken when applying filter FS methods in selecting predictive models. PMID:26890307
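The variable-importance idea underlying the VI/AVI-style feature selection above can be illustrated with permutation importance: score a fitted model, scramble one predictor's column, and measure the accuracy drop. The stump model and data below are toys, and a fixed cyclic shift stands in for random shuffling so the example is deterministic; RF's impurity-based importance differs in detail.

```python
def stump_predict(rows, feature):
    """Toy model: predict class 1 when the chosen feature exceeds 0.5."""
    return [1 if r[feature] > 0.5 else 0 for r in rows]

def accuracy(pred, y):
    return sum(p == t for p, t in zip(pred, y)) / len(y)

def permutation_importance(rows, y, feature, model_feature=0):
    """Accuracy drop when one feature column is permuted (cyclic shift)."""
    base = accuracy(stump_predict(rows, model_feature), y)
    n = len(rows)
    shifted_col = [rows[(i + 1) % n][feature] for i in range(n)]
    permuted = [list(r) for r in rows]
    for i, v in enumerate(shifted_col):
        permuted[i][feature] = v
    return base - accuracy(stump_predict(permuted, model_feature), y)

# Feature 0 alternates and fully determines y; feature 1 is constant noise.
rows = [[i % 2, 0.3] for i in range(20)]
y = [r[0] for r in rows]
imp0 = permutation_importance(rows, y, feature=0)
imp1 = permutation_importance(rows, y, feature=1)
```

Permuting the informative predictor destroys accuracy; permuting the uninformative one changes nothing, which is the signal FS methods use to rank predictors.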
Kinematic modeling of a double octahedral Variable Geometry Truss (VGT) as an extensible gimbal
NASA Technical Reports Server (NTRS)
Williams, Robert L., II
1994-01-01
This paper presents the complete forward and inverse kinematics solutions for control of the three-degree-of-freedom (DOF) double octahedral variable geometry truss (VGT) module as an extensible gimbal. A VGT is a truss structure partially comprised of linearly actuated members. VGTs can be used as joints in a large, lightweight, high load-bearing manipulator for earth- and space-based remote operations, as well as industrial applications. The results have been used to control the NASA VGT hardware as an extensible gimbal, demonstrating the capability of this device to serve as a joint in a VGT-based manipulator. This work is an integral part of a VGT-based manipulator design, simulation, and control tool.
Shivange, Amol V; Hoeffken, Hans Wolfgang; Haefner, Stefan; Schwaneberg, Ulrich
2016-12-01
Protein consensus-based surface engineering (ProCoS) is a simple and efficient method for directed protein evolution combining computational analysis and molecular biology tools to engineer protein surfaces. ProCoS is based on the hypothesis that conserved residues originated from a common ancestor and that these residues are crucial for the function of a protein, whereas highly variable regions (situated on the surface of a protein) can be targeted for surface engineering to maximize performance. ProCoS comprises four main steps: (i) identification of conserved and highly variable regions; (ii) protein sequence design by substituting residues in the highly variable regions, and gene synthesis; (iii) in vitro DNA recombination of synthetic genes; and (iv) screening for active variants. ProCoS is a simple method for surface mutagenesis in which multiple sequence alignment is used for selection of surface residues based on a structural model. To demonstrate the technique's utility for directed evolution, the surface of a phytase enzyme from Yersinia mollaretii (Ymphytase) was subjected to ProCoS. Screening just 1050 clones from ProCoS engineering-guided mutant libraries yielded an enzyme with 34 amino acid substitutions. The surface-engineered Ymphytase exhibited 3.8-fold higher pH stability (at pH 2.8 for 3 h) and retained 40% of the enzyme's specific activity (400 U/mg) compared with the wild-type Ymphytase. The pH stability might be attributed to a significantly increased (20 percentage points; from 9% to 29%) number of negatively charged amino acids on the surface of the engineered phytase.
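Step (i), identifying conserved versus highly variable positions, can be sketched from a multiple sequence alignment by scoring each column's majority-residue frequency. The toy alignment and the 0.9 cutoff below are illustrative choices, not the ProCoS defaults.

```python
def column_conservation(alignment):
    """Per-column frequency of the most common residue in an alignment of
    equal-length sequences (gaps treated like any other character)."""
    n_seqs = len(alignment)
    n_cols = len(alignment[0])
    scores = []
    for col in range(n_cols):
        residues = [seq[col] for seq in alignment]
        top = max(residues.count(r) for r in set(residues))
        scores.append(top / n_seqs)
    return scores

# Toy alignment: columns 0 and 2 are invariant, column 1 is variable.
alignment = ["MKT", "MRT", "MQT", "MKT"]
scores = column_conservation(alignment)
conserved = [i for i, s in enumerate(scores) if s >= 0.9]
variable = [i for i, s in enumerate(scores) if s < 0.9]
```

Positions in `conserved` would be left untouched, while positions in `variable` (when surface-exposed on a structural model) become candidates for substitution and library design.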
Yielding physically-interpretable emulators - A Sparse PCA approach
NASA Astrophysics Data System (ADS)
Galelli, S.; Alsahaf, A.; Giuliani, M.; Castelletti, A.
2015-12-01
Projection-based techniques, such as Proper Orthogonal Decomposition (POD), are a common approach to replacing high-fidelity process-based models with lower-order dynamic emulators. With POD, the dimensionality reduction is achieved by using observations, or 'snapshots', generated with the high-fidelity model to project the entire set of input and state variables of this model onto a smaller set of basis functions that account for most of the variability in the data. While the reduction efficiency and variance control of POD techniques are usually very high, the resulting emulators are structurally very complex and can hardly be given a physically meaningful interpretation, as each basis is a projection of the entire set of inputs and states. In this work, we propose a novel approach based on Sparse Principal Component Analysis (SPCA) that combines the assets of POD methods with the potential for ex-post interpretation of the emulator structure. SPCA reduces the number of non-zero coefficients in the basis functions by identifying a sparse matrix of coefficients. While the resulting set of basis functions may retain less of the variance of the snapshots, the presence of only a few non-zero coefficients assists in the interpretation of the underlying physical processes. The SPCA approach is tested on the reduction of a 1D hydro-ecological model (DYRESM-CAEDYM) used to describe the main ecological and hydrodynamic processes in Tono Dam, Japan. An experimental comparison against a standard POD approach shows that SPCA achieves the same accuracy in emulating a given output variable, for the same level of dimensionality reduction, while yielding better insights into the main process dynamics.
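The contrast between dense POD loadings and sparse ones can be sketched with a truncated power iteration: alternate the usual power step on the covariance matrix with soft-thresholding, so small loadings are driven exactly to zero. The covariance matrix and threshold below are invented, and this is one simple route to a sparse leading component, not the specific SPCA algorithm used in the study.

```python
import math

def sparse_leading_component(cov, lam=0.2, iters=50):
    """Leading principal component with sparsity: power iteration on the
    covariance matrix, soft-thresholding the loadings at each step."""
    p = len(cov)
    v = [1.0 / math.sqrt(p)] * p
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(p)) for i in range(p)]
        w = [math.copysign(max(abs(x) - lam, 0.0), x) for x in w]
        norm = math.sqrt(sum(x * x for x in w))
        if norm == 0.0:
            break
        v = [x / norm for x in w]
    return v

# Invented covariance: variable 0 dominates the variance.
cov = [[4.00, 0.10, 0.05],
       [0.10, 1.00, 0.02],
       [0.05, 0.02, 0.50]]
loadings = sparse_leading_component(cov)
```

A plain power iteration would return small but non-zero loadings on all three variables; the thresholded version zeroes the minor ones, which is exactly what makes the basis interpretable (here, the component reads as "variable 0 only").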
Leitch, Michael; Macefield, Vaughan G
2017-08-01
Ballistic contractions are induced by brief, high-frequency (60-100 Hz) trains of action potentials in motor axons. During ramp voluntary contractions, human motoneurons exhibit significant discharge variability of ∼20%, which has been shown to be advantageous to the neuromuscular system. We hypothesized that ballistic contractions incorporating discharge variability would generate greater isometric forces than regular trains with zero variability. High-impedance tungsten microelectrodes were inserted into the human fibular nerve, and single motor axons were stimulated with both irregular and constant-frequency stimuli at mean frequencies ranging from 57.8 to 68.9 Hz. Irregular trains generated significantly greater isometric peak forces than regular trains at identical mean frequencies. The high forces generated by ballistic contractions are based not solely on high frequencies, but rather on a combination of high firing rates and discharge irregularity. It appears that irregular ballistic trains take advantage of the "catchlike property" of muscle, allowing augmentation of force. Muscle Nerve 56: 292-297, 2017. © 2016 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Schmidt, T.; Kalisch, J.; Lorenz, E.; Heinemann, D.
2015-10-01
Clouds are the dominant source of variability in surface solar radiation and of uncertainty in its prediction. At the same time, the increasing share of solar energy in the world-wide electric power supply increases the need for accurate solar radiation forecasts. In this work, we present results of a shortest-term global horizontal irradiance (GHI) forecast experiment based on hemispheric sky images. A two-month dataset with images from one sky imager and high-resolution GHI measurements from 99 pyranometers distributed over 10 km by 12 km is used for validation. We developed a multi-step model and processed GHI forecasts up to 25 min ahead with an update interval of 15 s. A cloud-type classification is used to separate the time series into different cloud scenarios. Overall, the sky-imager-based forecasts do not outperform the reference persistence forecasts. Nevertheless, we find that analysis and forecast performance depend strongly on the predominant cloud conditions. Convective-type clouds in particular lead to high temporal and spatial GHI variability. For cumulus cloud conditions, the analysis error is found to be lower than that introduced by a single pyranometer used representatively for the whole area at distances from the camera larger than 1-2 km. Moreover, forecast skill is much higher under these conditions than in overcast or clear-sky situations, which cause low GHI variability that is easier to predict by persistence. In order to generalize the cloud-induced forecast error, we identify a variability threshold indicating conditions with positive forecast skill.
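Forecast skill as used above is conventionally measured against persistence: skill = 1 - RMSE_forecast / RMSE_persistence, so positive values mean the imager-based forecast beats simply assuming GHI stays at its last observed value. A minimal sketch with an invented GHI series:

```python
import math

def rmse(pred, obs):
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def forecast_skill(forecast, observed):
    """Skill score vs. persistence: persistence reuses the previous
    observation; positive skill means the forecast beats persistence."""
    persistence = observed[:-1]   # value at t-1 used as forecast for t
    target = observed[1:]         # what was actually observed at t
    return 1.0 - rmse(forecast, target) / rmse(persistence, target)

# Invented GHI series (W/m^2) and a forecast for the last three steps;
# the forecast halves the persistence error at every step.
observed = [400.0, 500.0, 600.0, 700.0]
forecast = [450.0, 550.0, 650.0]
skill = forecast_skill(forecast, observed)
```

During low-variability clear-sky or overcast periods persistence is already accurate, which is why skill there is hard to achieve, matching the paper's cloud-scenario findings.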
Aristi, Ibon; Díez, Jose Ramon; Larrañaga, Aitor; Navarro-Ortega, Alícia; Barceló, Damià; Elosegi, Arturo
2012-12-01
Mediterranean rivers in the Iberian Peninsula are increasingly affected by human activities, which threaten their ecological status. A clear picture of how these multiple stressors affect river ecosystem functioning is still lacking. We addressed this question by measuring a key ecosystem process, namely the breakdown of organic matter, at 66 sites distributed across Mediterranean Spain. We performed breakdown experiments by measuring the mass lost by wood sticks over 54 to 106 days. Additionally, we gathered data on the physico-chemical, biological and geomorphological characteristics of the study sites. Study sites spanned a broad range of environmental characteristics, and breakdown rates varied fiftyfold across sites. No clear geographic patterns were found between or within basins. 90th-quantile regressions performed to link breakdown rates with environmental characteristics included the following 7 variables in the model, in decreasing order of importance: altitude, phosphorus content of the water, catchment area, toxicity, invertebrate-based biotic index, riparian buffer width, and diatom-based quality index. Breakdown rate was systematically low in high-altitude rivers with few human impacts, but showed high variability in areas affected by human activity. This increase in variability is the result of multiple stressors acting simultaneously, as some of them can promote whereas others slow down the breakdown of organic matter. Therefore, stick breakdown gives information on the intensity of a key ecosystem process that would otherwise be very difficult to predict from environmental variables. Copyright © 2012 Elsevier B.V. All rights reserved.
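The 90th-quantile regressions above fit a line to the upper edge of the data cloud by minimizing the pinball (quantile) loss rather than squared error. In this sketch a coarse grid search stands in for the proper linear-programming fit, and the data points are invented.

```python
def pinball_loss(points, intercept, slope, tau=0.9):
    """Quantile-regression loss: points above the line are penalized with
    weight tau, points below with weight (1 - tau)."""
    loss = 0.0
    for x, y in points:
        e = y - (intercept + slope * x)
        loss += tau * e if e >= 0 else (tau - 1.0) * e
    return loss

def fit_quantile_line(points, tau=0.9):
    """Coarse grid search for the tau-quantile line (a stand-in for the
    usual linear-programming solution)."""
    grid = [(b0 / 2.0, b1 / 10.0) for b0 in range(0, 9) for b1 in range(0, 11)]
    return min(grid, key=lambda p: pinball_loss(points, p[0], p[1], tau))

# Invented data: a scattered cloud whose upper edge rises with x, the
# situation where quantile regression is more informative than the mean fit.
points = [(1, 0.6), (1, 1.4), (2, 0.9), (2, 1.9), (3, 1.1),
          (3, 2.4), (4, 1.5), (4, 2.9), (5, 1.8), (5, 3.4)]
b0, b1 = fit_quantile_line(points)
```

With tau = 0.9 the fitted line tracks the maximum breakdown rate attainable at a given stressor level, which is why the study uses it to rank limiting variables.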
The complete genome sequences of 65 Campylobacter jejuni and C. coli strains
USDA-ARS?s Scientific Manuscript database
Campylobacter jejuni (Cj) and C. coli (Cc) are genetically highly diverse based on various molecular methods including MLST, microarray-based comparisons and the whole genome sequences of a few strains. Cj and Cc diversity is also exhibited by variable capsular polysaccharides (CPS) that are the maj...
Grimm, Fabian A; Russell, William K; Luo, Yu-Syuan; Iwata, Yasuhiro; Chiu, Weihsueh A; Roy, Tim; Boogaard, Peter J; Ketelslegers, Hans B; Rusyn, Ivan
2017-06-20
Substances of Unknown or Variable composition, Complex reaction products, and Biological materials (UVCBs), including many refined petroleum products, present a major challenge in regulatory submissions under the EU Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) and US High Production Volume regulatory regimes. The inherent complexity of these substances, as well as variability in their composition, precludes detailed chemical characterization of each individual substance and their grouping for human and environmental health evaluation through read-across. In this study, we applied ion mobility mass spectrometry in conjunction with cheminformatics-based data integration and visualization to derive substance-specific signatures based on the distribution and abundance of various heteroatom classes. We used petroleum substances from four petroleum substance manufacturing streams and evaluated their chemical composition similarity based on high-dimensional substance-specific quantitative parameters, including m/z distribution, drift time, carbon number range, and the associated double bond equivalents and hydrogen-to-carbon ratios. Data integration and visualization revealed group-specific similarities for petroleum substances. Observed differences within a product group were indicative of batch- or manufacturer-dependent variation. We demonstrate how high-resolution analytical chemistry approaches can be used effectively to support categorization of UVCBs based on their heteroatom composition and how such data can be used in regulatory decision-making.
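Two of the per-formula descriptors named above follow from simple formulas: for a CcHhNnOo formula the double bond equivalent is DBE = C - H/2 + N/2 + 1 (oxygen does not contribute), and the hydrogen-to-carbon ratio is simply H/C. A quick sketch:

```python
def double_bond_equivalents(c, h, n=0):
    """DBE for a CHNO formula: rings plus pi bonds; oxygen drops out."""
    return c - h / 2.0 + n / 2.0 + 1.0

def h_to_c_ratio(c, h):
    return h / c

# Benzene, C6H6: three double bonds plus one ring give DBE = 4.
dbe_benzene = double_bond_equivalents(6, 6)
hc_benzene = h_to_c_ratio(6, 6)
# Pyridine, C5H5N: also DBE = 4.
dbe_pyridine = double_bond_equivalents(5, 5, n=1)
```

High DBE at low H/C flags aromatic, unsaturated constituents, so plotting DBE against carbon number per heteroatom class yields the kind of substance-specific signature the study visualizes.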
Aldous, Jeffrey W F; Akubat, Ibrahim; Chrismas, Bryna C R; Watkins, Samuel L; Mauger, Alexis R; Midgley, Adrian W; Abt, Grant; Taylor, Lee
2014-07-01
This study investigated the reliability and validity of a novel nonmotorised treadmill (NMT)-based soccer simulation using a novel activity category called a "variable run" to quantify fatigue during high-speed running. Twelve male university soccer players completed 3 familiarization sessions and 1 peak speed assessment before completing the intermittent soccer performance test (iSPT) twice. The 2 iSPTs were separated by 6-10 days. The total distance, sprint distance, and high-speed running distance (HSD) were 8,968 ± 430 m, 980 ± 75 m and 2,122 ± 140 m, respectively. No significant difference (p > 0.05) was found between repeated trials of the iSPT for any physiological or performance variable. Reliability measures between iSPT1 and iSPT2 showed good agreement (coefficient of variation: <4.6%; intraclass correlation coefficient: >0.80). Furthermore, in the variable run phase HSD significantly decreased (p ≤ 0.05) in the last 15 minutes (85 ± 7 m) compared with the first 15 minutes (89 ± 6 m), quantifying decrements in high-speed exercise in line with the previous literature. This study validates the iSPT as an NMT-based soccer simulation against previous match-play data and shows it to be a reliable tool for assessing and monitoring physiological and performance variables in soccer players. The iSPT could be used in a number of ways, including player rehabilitation, understanding the efficacy of nutritional interventions, and the quantification of environmentally mediated decrements in soccer-specific performance.
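The between-trial coefficient of variation reported above can be computed per player from the two trials as 100 * SD / mean and then averaged; this is one common formulation among several used for test-retest CV, and the distances below are invented.

```python
import statistics

def trial_cv_percent(trial1, trial2):
    """Within-subject CV (%): for each subject's pair of trials, take
    100 * sample SD / mean of the two values, then average over subjects."""
    cvs = []
    for a, b in zip(trial1, trial2):
        mean = (a + b) / 2.0
        sd = statistics.stdev([a, b])
        cvs.append(100.0 * sd / mean)
    return sum(cvs) / len(cvs)

# Invented total distances (m) for three players in iSPT1 vs. iSPT2.
ispt1 = [8900.0, 9100.0, 9000.0]
ispt2 = [9000.0, 9050.0, 8800.0]
cv = trial_cv_percent(ispt1, ispt2)
```

A CV below the study's <4.6% threshold indicates the repeated simulations agree closely relative to the size of the measurement itself.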
NASA Astrophysics Data System (ADS)
Boissard, C.; Chervier, F.; Dutot, A. L.
2007-08-01
Using a statistical approach based on artificial neural networks, an emission algorithm (ISO_LF) accounting for high (instantaneous) to low (seasonal) frequency variations was developed for isoprene. ISO_LF was optimised using an isoprene emission database (ISO-DB) specifically designed for this work. ISO-DB consists of 1321 emission rates collected from the literature, together with 34 environmental variables, measured or assessed using NCDC (National Climatic Data Center) or NCEP (National Centers for Environmental Prediction) meteorological databases. ISO-DB covers a large variety of emitters (25 species) and environmental conditions (10° S to 60° N). When only instantaneous environmental regressors (air temperature and photosynthetically active radiation, PAR) were used, a maximum of 60% of the overall isoprene variability was captured and the highest emissions were underestimated. Considering a total of 9 high (instantaneous) to low (up to 3 weeks) frequency regressors, ISO_LF accounts for up to 91% of the isoprene emission variability, whatever the emission range, species or climate. Diurnal and seasonal variations are correctly reproduced for Ulex europaeus with a maximum factor of discrepancy of 4. ISO_LF was found to be mainly sensitive to air temperature cumulated over 3 weeks (T21) and to instantaneous light (L0) and air temperature (T0) variations. T21, T0 and L0 alone account for 76% of the overall variability. The use of ISO_LF for non-stored monoterpene emissions was shown to give poor results.
NASA Astrophysics Data System (ADS)
Wang, Kaicun; Ma, Qian; Li, Zhijun; Wang, Jiankai
2015-07-01
Existing studies have shown that observed surface incident solar radiation (Rs) over China may have important inhomogeneity issues. This study provides metadata and reference data to homogenize observed Rs, from which the decadal variability of Rs over China can be accurately derived. From 1958 to 1990, diffuse solar radiation (Rsdif) and direct solar radiation (Rsdir) were measured separately, and Rs was calculated as their sum. The pyranometers used to measure Rsdif had a strong sensitivity drift problem, which introduced a spurious decreasing trend into the observed Rsdif and Rs data, whereas the observed Rsdir did not suffer from this sensitivity drift problem. From 1990 to 1993, instruments and measurement methods were replaced and measuring stations were restructured in China, which introduced an abrupt increase in the observed Rs. Intercomparisons between observation-based and model-based Rs performed in this research show that sunshine duration (SunDu)-derived Rs is of high quality and can be used as reference data to homogenize observed Rs data. The homogenized and adjusted data of observed Rs combines the advantages of observed Rs in quantifying hourly to monthly variability and SunDu-derived Rs in depicting decadal variability and trend. Rs averaged over 105 stations in China decreased at -2.9 W m-2 per decade from 1961 to 1990 and remained stable afterward. This decadal variability is confirmed by the observed Rsdir and diurnal temperature ranges, and can be reproduced by high-quality Earth System Models. However, neither satellite retrievals nor reanalyses can accurately reproduce such decadal variability over China.
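The decadal trend quoted above (-2.9 W m-2 per decade from 1961 to 1990) is the kind of quantity a simple least-squares fit to annual means yields. A minimal sketch, using synthetic data rather than the study's station records (the series and function name are illustrative only):

```python
import numpy as np

def decadal_trend(years, rs):
    """Least-squares linear trend of an annual-mean Rs series, in W m^-2 per decade."""
    slope, _ = np.polyfit(years, rs, 1)  # slope in W m^-2 per year
    return 10.0 * slope                  # scale to per-decade

# Synthetic illustration: a series declining by 0.29 W m^-2 per year
years = np.arange(1961, 1991)
rs = 190.0 - 0.29 * (years - 1961)
print(round(decadal_trend(years, rs), 1))  # -2.9
```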
NASA Astrophysics Data System (ADS)
Yu, Wei; Chen, Xinjun; Yi, Qian
2016-06-01
The neon flying squid, Ommastrephes bartramii, is an economically important cephalopod in the Northwest Pacific Ocean. Its short lifespan makes its distribution and abundance highly susceptible to the direct impact of environmental conditions. Based on the generalized linear model (GLM) and generalized additive model (GAM), commercial fishery data from the Chinese squid-jigging fleets from 1995 to 2011 were used to examine the interannual and seasonal variability in the abundance of O. bartramii, and to evaluate the influences of variables on the abundance (catch per unit effort, CPUE). The results from the GLM suggested that year, month, latitude, sea surface temperature (SST), mixed layer depth (MLD), and the interaction term (SST×MLD) were significant factors. The optimal model based on the GAM included all six significant variables and could explain 42.43% of the variance in nominal CPUE. The importance of the six variables, ranked in decreasing order of magnitude, was: year, month, latitude, SST, MLD and SST×MLD. The squid was mainly distributed in the waters between 40°N and 44°N in the Northwest Pacific Ocean. The optimal ranges of SST and MLD were from 14 to 20°C and from 10 to 30 m, respectively. The squid abundance fluctuated greatly from 1995 to 2011. The CPUE was low during 1995-2002 and high during 2003-2008. Furthermore, the squid abundance was typically high in August. The interannual and seasonal variabilities in the squid abundance were associated with variations in marine environmental conditions and the life history characteristics of the squid.
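The study fitted GLM and GAM models; as a much-simplified sketch of the structure of such a model, the snippet below fits an ordinary least-squares regression with an SST×MLD interaction term to synthetic data. All variable values and coefficients are hypothetical, and the categorical year/month terms of the actual model are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
sst = rng.uniform(12, 22, n)   # sea surface temperature (deg C)
mld = rng.uniform(5, 40, n)    # mixed layer depth (m)
lat = rng.uniform(38, 46, n)   # latitude (deg N)
# Hypothetical CPUE with an SST x MLD interaction plus noise
cpue = 1.0 + 0.3 * sst + 0.1 * mld + 0.2 * lat - 0.01 * sst * mld + rng.normal(0, 0.5, n)

# Design matrix: intercept, SST, MLD, latitude, SST x MLD interaction
X = np.column_stack([np.ones(n), sst, mld, lat, sst * mld])
beta, *_ = np.linalg.lstsq(X, cpue, rcond=None)
print(beta.round(2))
```

The sign and magnitude of the fitted interaction coefficient indicate how the effect of SST on CPUE changes with mixed layer depth.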
NASA Astrophysics Data System (ADS)
Dhungel, S.; Barber, M. E.
2016-12-01
The objectives of this paper are to use an automated satellite-based remote sensing evapotranspiration (ET) model to assist in parameterization of a cropping system model (CropSyst) and to examine the variability of consumptive water use of various crops across the watershed. The remote sensing model is a modified version of the Mapping Evapotranspiration at high Resolution with Internalized Calibration (METRIC™) energy balance model. We present the application of an automated python-based implementation of METRIC to estimate ET as consumptive water use for agricultural areas in three watersheds in Eastern Washington - Walla Walla, Lower Yakima and Okanogan. We used these ET maps with USDA crop data to identify the variability of crop growth and water use for the major crops in these three watersheds. Some crops, such as grapes and alfalfa, showed high variability in water use in the watershed while others, such as corn, had comparatively less variability. The results helped us to estimate the range and variability of various crop parameters that are used in CropSyst. The paper also presents a systematic approach to estimate parameters of CropSyst for a crop in a watershed using METRIC results. Our initial application of this approach was used to estimate irrigation application rate for CropSyst for a selected farm in Walla Walla and was validated by comparing crop growth (as Leaf Area Index - LAI) and consumptive water use (ET) from METRIC and CropSyst. This coupling of METRIC with CropSyst will allow for more robust parameters in CropSyst and will enable accurate predictions of changes in irrigation practices and crop rotation, which are a challenge in many cropping system models.
NASA Astrophysics Data System (ADS)
Ahmad, Rifandi Raditya; Fuad, Muhammad
2018-02-01
Mangrove areas function as a coastal green belt: mangroves protect the shore from sea waves, provide good habitat for coastal biota, and supply nutrients. Mangrove habitat can be degraded by several oceanographic factors, and mangrove habitats have specific characteristics with respect to salinity, tides, and muddy substrates. Given the importance of mangrove areas, this study assessed the vulnerability of mangrove habitat on the east coast of Semarang city. The purpose of this research was to obtain an index and classification of mangrove habitat vulnerability at the study site based on tides, salinity, substrate type, and shoreline change. Observation stations were selected purposively, and habitat vulnerability was scored using the CVI (Coastal Vulnerability Index) method, with scores divided into three classes: low, medium, and high. The results show that one research zone falls into the medium vulnerability category, with abrasion that sweeps away the mangrove substrate as the most influential variable. The Trimulyo mangrove habitat is highly vulnerable with respect to tidal frequency, of low vulnerability with respect to salinity, and of low to medium vulnerability with respect to substrate type. The CVI values for zones 1, 2, and 3 were 1.54, 3.79, and 1.09, respectively, indicating that vulnerability of mangrove habitat at the study site falls into the low and medium categories.
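The CVI is commonly computed as the square root of the product of the ranked variable scores divided by the number of variables. A minimal sketch with hypothetical ranks (the abstract does not give the per-zone scores, so the inputs below are illustrative only):

```python
from math import sqrt

def cvi(ranks):
    """Coastal Vulnerability Index: square root of the product of the
    variable ranks divided by the number of variables."""
    prod = 1
    for r in ranks:
        prod *= r
    return sqrt(prod / len(ranks))

# Hypothetical ranks (1 = low, 2 = medium, 3 = high) for
# tides, salinity, substrate type, and shoreline change
print(round(cvi([3, 1, 2, 2]), 2))  # sqrt(12/4) ≈ 1.73
```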
Using Kepler Light Curves for Astronomy Education and Public Outreach
NASA Astrophysics Data System (ADS)
Cash, Jennifer; Rivers, S.; Eleby, J.; Gould, A.; Komatsu, T.
2014-01-01
We will present our efforts related to Education and Public Outreach activities using Kepler light curves. We are currently developing interactive web-based activities to introduce the public to the general topic of stellar variability, and intrinsic variable stars in particular, using the high-quality light curves of over a dozen Kepler targets. Along with the public website, we are exploring ways to develop teacher guides for using Kepler light curves in middle and high school classrooms. These efforts are supported through a NASA EPSCoR grant, "South Carolina Joint Venture Program", via a subaward to SC State University.
NASA Technical Reports Server (NTRS)
Holben, Brent; Slutsker, Ilya; Giles, David; Eck, Thomas; Smirnov, Alexander; Sinyuk, Aliaksandr; Schafer, Joel; Sorokin, Mikhail; Rodriguez, Jon; Kraft, Jason;
2016-01-01
Aerosols are highly variable in space, time and properties. Global assessment from satellite platforms and model predictions rely on validation from AERONET, a highly accurate ground-based network. Ver. 3 represents a significant improvement in accuracy and quality.
Levesque, Danielle L; Lobban, Kerileigh D; Lovegrove, Barry G
2014-12-01
Tenrecs (Order Afrosoricida) exhibit some of the lowest body temperatures (Tb) of any eutherian mammal. They also have a high level of variability in both active and resting Tb and, at least in cool temperatures in captivity, frequently employ both short- and long-term torpor. The use of heterothermy by captive animals is, however, generally reduced during gestation and lactation. We present long-term Tb recordings collected from free-ranging S. setosus over the course of two reproductive seasons. In general, reproductive females had slightly higher (~32 °C) and less variable Tb, whereas non-reproductive females and males showed both a higher propensity for torpor as well as lower (~30.5 °C) and more variable rest-phase Tb. Torpor expression defined using traditional means (using a threshold or cut-off Tb) was much lower than predicted based on the high degree of heterothermy in captive tenrecs. However, torpor defined in this manner is likely to be underestimated in habitats where ambient temperature is close to Tb. Our results caution against inferring metabolic states from Tb alone and lend support to the recent call to define torpor in free-ranging animals based on mechanistic and not descriptive variables. In addition, the lower variability in Tb observed during gestation and lactation confirms that homeothermy is essential for reproduction in this species and probably for basoendothermic mammals in general. The relatively low cost of maintaining homeothermy in a sub-tropical environment might help shed light on how homeothermy could have evolved incrementally from an ancestral heterothermic condition.
Wekre, Lena Lande; Frøslie, Kathrine Frey; Haugen, Lena; Falch, Jan A
2010-01-01
To describe demographical variables, and to study functional ability to perform activities of daily life in adults with osteogenesis imperfecta (OI). Population-based study. Ninety-seven patients aged 25 years and older, 41 men and 56 women, were included. For the demographical variables, comparison was made to a matched control-group (475 persons) from the Norwegian general population. Structured interviews concerning social conditions, employment and educational issues and clinical examination were performed. The Sunnaas Activities of Daily Living (ADL) Index was used to assess the ability to perform ADL. The prevalence of clinical manifestations according to Sillence was in accordance with other studies. Demographical variables showed that most adults with OI are married and have children. They had a higher educational level than the control group, but the employment rate was significantly lower. However, the rate of employed men was similar in both groups. Adult persons with OI achieved a high score when tested for ADL. Adults with OI are well educated compared with the general population, and most of them are employed. High scores when tested for ADL indicate that most of them are able to live their lives independently, even though there are some differences according to the severity of the disorder.
Glosser, D.; Kutchko, B.; Benge, G.; ...
2016-03-21
Foamed cement is a critical component for wellbore stability. The mechanical performance of a foamed cement depends on its microstructure, which in turn depends on the preparation method and attendant operational variables. Determination of cement stability for field use is based on laboratory testing protocols governed by API Recommended Practice 10B-4 (API RP 10B-4, 2015). However, laboratory and field operational variables contrast considerably in terms of scale, as well as slurry mixing and foaming processes. In this paper, laboratory and field operational processes are characterized within a physics-based framework. It is shown that the "atomization energy" imparted by the high-pressure injection of nitrogen gas into the field-mixed foamed cement slurry is, by a significant margin, the highest energy process, and has a major impact on the void system in the cement slurry. There is no analog for this high energy exchange in current laboratory cement preparation and testing protocols. Quantifying the energy exchanges across the laboratory and field processes provides a basis for understanding the relative impacts of these variables on cement structure, and can ultimately lead to the development of practices to improve cement testing and performance.
An Ensemble Successive Project Algorithm for Liquor Detection Using Near Infrared Sensor.
Qu, Fangfang; Ren, Dong; Wang, Jihua; Zhang, Zhong; Lu, Na; Meng, Lei
2016-01-11
Spectral analysis technique based on near infrared (NIR) sensor is a powerful tool for complex information processing and high precision recognition, and it has been widely applied to quality analysis and online inspection of agricultural products. This paper proposes a new method to address the instability of small sample sizes in the successive projections algorithm (SPA) as well as the lack of association between selected variables and the analyte. The proposed method is an evaluated bootstrap ensemble SPA method (EBSPA) based on a variable evaluation index (EI) for variable selection, and is applied to the quantitative prediction of alcohol concentrations in liquor using NIR sensor. In the experiment, the proposed EBSPA with three kinds of modeling methods are established to test their performance. In addition, the proposed EBSPA combined with partial least square is compared with other state-of-the-art variable selection methods. The results show that the proposed method can solve the defects of SPA and it has the best generalization performance and stability. Furthermore, the physical meaning of the selected variables from the near infrared sensor data is clear, which can effectively reduce the variables and improve their prediction accuracy.
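The successive projections algorithm at the core of the proposed EBSPA builds a chain of minimally collinear spectral variables by repeatedly projecting the data matrix onto the orthogonal complement of the last selected column. A minimal sketch of that chain-building step on synthetic data (the bootstrap ensemble and evaluation-index layers of EBSPA are not shown):

```python
import numpy as np

def spa(X, start, k):
    """Successive Projections Algorithm, core chain-building step.
    X: (n_samples, n_variables) spectral matrix; start: index of the
    initial column; k: number of variables to select."""
    Xp = X.astype(float).copy()
    selected = [start]
    for _ in range(k - 1):
        x = Xp[:, selected[-1]].copy()
        # Project every column onto the orthogonal complement of the last pick
        Xp = Xp - np.outer(x, (x @ Xp) / (x @ x))
        norms = np.linalg.norm(Xp, axis=0)
        norms[selected] = -1.0  # never re-select a chosen column
        selected.append(int(np.argmax(norms)))
    return selected

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 50))  # 30 hypothetical spectra, 50 wavelengths
print(spa(X, start=0, k=5))
```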
Vulnerability in Determining the Cost of Information System Project to Avoid Loses
NASA Astrophysics Data System (ADS)
Haryono, Kholid; Ikhsani, Zulfa Amalia
2018-03-01
Context: This study discusses the prioritization of cost variables in software development projects. Objectives: To identify costing models and the variables involved, and to show how practitioners assess and decide the priority of each variable. To strengthen the information, the risk of ignoring each variable was also confirmed. Method: Two approaches were used. First, a systematic literature review to find the models and variables used to decide the cost of software development. Second, confirmation and judgments from software developers about the level of importance and risk of each variable. Result: About 54 variables appearing across the 10 models discussed were obtained. The variables were categorized into 15 groups based on similarity of meaning, each group becoming a single variable. Confirmation with practitioners on importance and risk showed two variables that are considered very important and high risk if ignored: duration and effort. Conclusion: The relationship between variable ratings from the literature study and practitioner confirmation can help software businesses weigh project cost variables.
Fast γ-Ray Variability in Blazars beyond Redshift 3
NASA Astrophysics Data System (ADS)
Li, Shang; Xia, Zi-Qing; Liang, Yun-Feng; Liao, Neng-Hui; Fan, Yi-Zhong
2018-02-01
High-redshift blazars are one of the most powerful sources in the universe and γ-ray variability carries crucial information about their relativistic jets. In this work we present results of the first systematical temporal analysis of Fermi-LAT data of all known seven γ-ray blazars beyond redshift 3. Significant long-term γ-ray variability is found from five sources in monthly γ-ray light curves, in which three of them are reported for the first time. Furthermore, intraday γ-ray variations are detected from NVSS J053954‑283956 and NVSS J080518+614423. The doubling variability timescale of the former source is limited as short as ≲1 hr (at the source frame). Together with variability amplitude over one order of magnitude, NVSS J053954‑283956 is the most distant γ-ray flaring blazar so far. Meanwhile, intraday optical variability of NVSS J163547+362930 is found based on an archival PTF/iPTF light curve. Benefiting from the multi-wavelength activity of these sources, constraints on their Doppler factors, as well as the locations of the γ-ray radiation region and indications for the SDSS high redshift jetted active galactic nuclei deficit are discussed.
Apelfröjd, Senad; Eriksson, Sandra
2014-01-01
Results from experiments on a tap transformer based grid connection system for a variable speed vertical axis wind turbine are presented. The tap transformer based system topology consists of a passive diode rectifier, DC-link, IGBT inverter, LCL-filter, and tap transformer. Full range variable speed operation is enabled by using the different step-up ratios of a tap transformer. Simulations using MATLAB/Simulink have been performed in order to study the behavior of the system. A full experimental set up of the system has been used in the laboratory study, where a clone of the on-site generator was driven by an induction motor and the system was connected to a resistive load to better evaluate the performance. Furthermore, the system is run and evaluated for realistic wind speeds and variable speed operation. For a more complete picture of the system performance, a case study using real site Weibull parameters is done, comparing different tap selection options. The results show high system efficiency at nominal power and an increase in overall power output for full tap operation in comparison with the base case, a standard transformer. In addition, the loss distribution at different wind speeds is shown, which highlights the dominant losses at low and high wind speeds. Finally, means for further increasing the overall system efficiency are proposed.
Stability measures in arid ecosystems
NASA Astrophysics Data System (ADS)
Nosshi, M. I.; Brunsell, N. A.; Koerner, S.
2015-12-01
Stability, the capacity of ecosystems to persist in the face of change, has proven its relevance as a fundamental component of ecological theory. Here, we explore meaningful and quantifiable metrics to define stability, with a focus on highly variable arid and semi-arid savanna ecosystems. Recognizing the importance of a characteristic timescale to any definition of stability, our metrics are focused on scales from annual to multi-annual, capturing different aspects of stability. Our three measures of stability, in increasing order of temporal scale, are: (1) ecosystem resistance, quantified as the degree to which the system maintains its mean state in response to a perturbation (drought), based on inter-annual variability in the Normalized Difference Vegetation Index (NDVI); (2) an optimization approach, relevant to arid systems with pulse dynamics, that models vegetation structure and function based on a trade-off between the ability to respond to resource availability and to avoid stress; (3) community resilience, measured as species turnover rate (β diversity). Understanding the nature of stability in structurally diverse, highly variable arid ecosystems yields theoretical insight with practical implications.
Multivariate localization methods for ensemble Kalman filtering
NASA Astrophysics Data System (ADS)
Roh, S.; Jun, M.; Szunyogh, I.; Genton, M. G.
2015-12-01
In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (element-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables that exist at the same locations has been seldom considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested in simulated-observation assimilation experiments with the bivariate Lorenz 95 model.
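The Schur-product localization described above can be sketched for a single state variable using the Gaspari-Cohn fifth-order compactly supported function, a standard choice for the distance-dependent correlation (the grid, ensemble size, and localization radius below are illustrative, not those of the paper):

```python
import numpy as np

def gaspari_cohn(d, c):
    """Gaspari-Cohn fifth-order compactly supported correlation function;
    d: distances, c: localization half-width (support ends at 2c)."""
    z = np.abs(d) / c
    f = np.zeros_like(z)
    m1 = z <= 1.0
    m2 = (z > 1.0) & (z <= 2.0)
    z1, z2 = z[m1], z[m2]
    f[m1] = -0.25*z1**5 + 0.5*z1**4 + 0.625*z1**3 - (5.0/3.0)*z1**2 + 1.0
    f[m2] = (z2**5)/12.0 - 0.5*z2**4 + 0.625*z2**3 + (5.0/3.0)*z2**2 - 5.0*z2 + 4.0 - 2.0/(3.0*z2)
    return f

# Schur (element-wise) localization of an ensemble sample covariance
n = 40
grid = np.arange(n, dtype=float)
dist = np.abs(grid[:, None] - grid[None, :])
rho = gaspari_cohn(dist, c=5.0)      # localization matrix
rng = np.random.default_rng(0)
ens = rng.normal(size=(20, n))       # 20 ensemble members, n grid points
P = np.cov(ens, rowvar=False)        # sample background covariance
P_loc = rho * P                      # localized covariance (Schur product)
```

Entries of P further than 2c apart are zeroed exactly, suppressing spurious long-range sample correlations.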
Variability Analysis based on POSS1/POSS2 Photometry
NASA Astrophysics Data System (ADS)
Mickaelian, Areg M.; Sarkissian, Alain; Sinamyan, Parandzem K.
2012-04-01
We introduce accurate magnitudes obtained by combining data from catalogues based on accurate measurements of POSS1- and POSS2-epoch plates. The photometric accuracy of the various catalogues was established, and statistical weights for each of them were calculated. To achieve the best possible magnitudes, we used weighted averaging of data from the APM, MAPS, USNO-A2.0, and USNO-B1.0 (for the POSS1 epoch) and USNO-B1.0 and GSC 2.3.2 (for the POSS2 epoch) catalogues. The r.m.s. accuracy of the magnitudes achieved is 0.184 in B and 0.173 mag in R for POSS1, and 0.138 in B and 0.128 in R for POSS2. Adopting these new magnitudes, we examined the First Byurakan Survey (FBS) of blue stellar objects for variability, and uncovered 336 probable and possible variables among 1103 objects with POSS2-POSS1 >= 3σ of the errors, including 161 highly probable variables. We have developed methods to control and exclude accidental errors for any survey. We compared and combined our results with those given in the Northern Sky Variability Survey (NSVS) database, and obtained firm candidates for variability. With such an approach it will be possible to conduct variability investigations for large numbers of objects.
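The weighted averaging and the 3σ variability criterion can be sketched with inverse-variance weights, a standard choice when per-catalogue accuracies are known (the magnitudes and uncertainties below are hypothetical, not the survey's actual weights):

```python
import numpy as np

def weighted_mag(mags, sigmas):
    """Inverse-variance weighted mean magnitude and its formal error."""
    w = 1.0 / np.asarray(sigmas, float) ** 2
    mean = np.sum(w * np.asarray(mags, float)) / np.sum(w)
    err = 1.0 / np.sqrt(np.sum(w))
    return mean, err

# Hypothetical B magnitudes of one object from three POSS1-epoch catalogues
b1, e1 = weighted_mag([17.10, 17.30, 17.18], [0.25, 0.30, 0.20])
# ... and from two POSS2-epoch catalogues
b2, e2 = weighted_mag([16.20, 16.35], [0.20, 0.25])

# Flag as a probable variable if |POSS2 - POSS1| >= 3 sigma of the combined error
sigma = np.hypot(e1, e2)
is_variable = abs(b2 - b1) >= 3.0 * sigma
print(bool(is_variable))
```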
NASA Astrophysics Data System (ADS)
Shauly, Eitan; Parag, Allon; Khmaisy, Hafez; Krispil, Uri; Adan, Ofer; Levi, Shimon; Latinski, Sergey; Schwarzband, Ishai; Rotstein, Israel
2011-04-01
A fully automated system for process variability analysis of high-density standard cells was developed. The system consists of layout analysis with device mapping: device type, location, configuration and more. The mapping step was created by a simple DRC run-set. This database was then used as an input for choosing locations for SEM images and for specific layout parameter extraction, used by SPICE simulation. This method was used to analyze large arrays of standard cell blocks, manufactured using Tower TS013LV (Low Voltage for high-speed applications) platforms. Variability of physical parameters such as Lgate and line-width roughness, as well as of electrical parameters such as drive current (Ion) and off current (Ioff), was calculated and statistically analyzed in order to understand the variability root cause. Comparisons between transistors having the same W/L but with different layout configurations and different layout environments (around the transistor) were made in terms of performance as well as process variability. We successfully defined "robust" and "less-robust" transistor configurations, and updated guidelines for Design-for-Manufacturing (DfM).
Skaret, E; Weinstein, P; Milgrom, P; Kaakko, T; Getz, T
2004-01-01
In this case-control study of rural adolescents we identified factors to discriminate those who have high levels of tooth decay and receive treatment from those with similar levels who receive no treatment. The sample was drawn from all 12-20-year-olds (n = 439) in a rural high school in Washington State, U.S. The criterion for being included was 5 or more decayed, missing or filled teeth. The questionnaire included structure, history, cognition and expectation variables based on a model by Grembowski, Andersen and Chen. No structural variable was related to the dependent variable. Two of 10 history variables were related: perceived poor own dental health and perceived poor mother's dental health. Four of eight cognition variables were also predictive: negative beliefs about the dentist, not planning to go to a dentist even if having severe problems, not being in any club or playing on a sports team and not having a best friend. No relationship was found for the expectation variable 'usual source of care'. These data are consistent with the hypothesis that untreated tooth decay is associated with avoidance of care and point to the importance of history and cognition variables in planning efforts to improve oral health of rural adolescents.
Time-Variable Transit Time Distributions in the Hyporheic Zone of a Headwater Mountain Stream
NASA Astrophysics Data System (ADS)
Ward, Adam S.; Schmadel, Noah M.; Wondzell, Steven M.
2018-03-01
Exchange of water between streams and their hyporheic zones is known to be dynamic in response to hydrologic forcing, variable in space, and to exist in a framework with nested flow cells. The expected result of heterogeneous geomorphic setting, hydrologic forcing, and between-feature interaction is hyporheic transit times that are highly variable in both space and time. Transit time distributions (TTDs) are important as they reflect the potential for hyporheic processes to impact biogeochemical transformations and ecosystems. In this study we simulate time-variable transit time distributions based on dynamic vertical exchange in a headwater mountain stream with observed, heterogeneous step-pool morphology. Our simulations include hyporheic exchange over a 600 m river corridor reach driven by continuously observed, time-variable hydrologic conditions for more than 1 year. We found that spatial variability at an instance in time is typically larger than temporal variation for the reach. Furthermore, we found reach-scale TTDs were marginally variable under all but the most extreme hydrologic conditions, indicating that TTDs are highly transferable in time. Finally, we found that aggregation of annual variation in space and time into a "master TTD" reasonably represents most of the hydrologic dynamics simulated, suggesting that this aggregation approach may provide a relevant basis for scaling from features or short reaches to entire networks.
Variability in monthly serum bicarbonate measures in hemodialysis patients: a cohort study.
Patel, Ravi; Paredes, William; Hall, Charles B; Nader, Mark A; Sapkota, Deepak; Folkert, Vaughn W; Abramowitz, Matthew K
2015-12-21
Some nephrologists have advocated an individualized approach to the prescription of bicarbonate hemodialysis. However, the utility of monthly serum bicarbonate levels for guiding and evaluating such treatment decisions has not been evaluated. We sought to define the variability of these measurements and to determine factors that are associated with month-to-month variability in pre-dialysis serum bicarbonate. We examined the monthly variability in serum bicarbonate measurements among 181 hemodialysis patients admitted to a free-standing dialysis unit in the Bronx, NY from 1/1/2008-6/30/2012. All patients were treated with a uniform bicarbonate dialysis prescription (bicarbonate 35 mEq/L, acetate 8 mEq/L). Pre-dialysis serum bicarbonate values were obtained from monthly laboratory reports. Month-to-month variability was defined using a rolling measurement for each time point. Only 34 % of high serum bicarbonate values (>26 mEq/L) remained high in the subsequent month, whereas 60 % converted to normal (22-26 mEq/L). Of all low values (<22 mEq/L), 41 % were normal the following month, while 58 % remained low. Using the mean 3-month bicarbonate, only 29 % of high values remained high in the next 3-month period. In multivariable-adjusted longitudinal models, both low and high serum bicarbonate values were associated with greater variability than were normal values (β = 0.12 (95 % CI 0.09-0.15) and 0.24 (0.18 to 0.29) respectively). Variability decreased with time, and was significantly associated with age, phosphate binder use, serum creatinine, potassium, and normalized protein catabolic rate. Monthly pre-dialysis serum bicarbonate levels are highly variable. Even if a clinician takes no action, approximately 50 % of bicarbonate values outside a normal range of 22-26 mEq/L will return to normal in the subsequent month. 
The decision to change the bicarbonate dialysis prescription should not be based on a single bicarbonate value, and even a 3-month mean may be insufficient.
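The month-to-month transition fractions reported above (e.g. only 34% of high values remaining high) can be computed from a patient's monthly series with a simple state-transition tally. A minimal sketch on hypothetical values, using the abstract's 22-26 mEq/L normal range:

```python
def classify(v):
    """Classify a pre-dialysis serum bicarbonate value (mEq/L)."""
    if v < 22:
        return "low"
    if v > 26:
        return "high"
    return "normal"

def transition_fractions(series, from_state):
    """Fraction of months in `from_state` whose next month falls in each state."""
    counts = {"low": 0, "normal": 0, "high": 0}
    total = 0
    for this_month, next_month in zip(series, series[1:]):
        if classify(this_month) == from_state:
            counts[classify(next_month)] += 1
            total += 1
    return {k: c / total for k, c in counts.items()} if total else counts

# Hypothetical monthly bicarbonate values (mEq/L) for one patient
bicarb = [27, 24, 21, 23, 27, 27, 25, 20, 21, 24]
print(transition_fractions(bicarb, "high"))
```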
A New Integrated Weighted Model in SNOW-V10: Verification of Categorical Variables
NASA Astrophysics Data System (ADS)
Huang, Laura X.; Isaac, George A.; Sheng, Grant
2014-01-01
This paper presents the verification results for nowcasts of seven categorical variables from an integrated weighted model (INTW) and the underlying numerical weather prediction (NWP) models. Nowcasting, or short range forecasting (0-6 h), over complex terrain with sufficient accuracy is highly desirable but a very challenging task. A weighting, evaluation, bias correction and integration system (WEBIS) for generating nowcasts by integrating NWP forecasts and high frequency observations was used during the Vancouver 2010 Olympic and Paralympic Winter Games as part of the Science of Nowcasting Olympic Weather for Vancouver 2010 (SNOW-V10) project. Forecast data from the Canadian high-resolution deterministic NWP system with three nested grids (at 15-, 2.5- and 1-km horizontal grid-spacing) were selected as background gridded data for generating the integrated nowcasts. Seven forecast variables of temperature, relative humidity, wind speed, wind gust, visibility, ceiling and precipitation rate are treated as categorical variables for verifying the integrated weighted forecasts. By analyzing the verification of forecasts from INTW and the NWP models across 15 sites, the integrated weighted model was found to produce more accurate forecasts for the seven selected forecast variables, regardless of location. This is based on the multi-categorical Heidke skill scores for the test period 12 February to 21 March 2010.
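The multi-categorical Heidke skill score used in this verification can be sketched as follows: observed proportion correct compared against the proportion expected by chance from the contingency-table marginals. The table below is hypothetical, and the function is a generic HSS implementation rather than the SNOW-V10 code.

```python
# Minimal sketch of the multi-category Heidke skill score (HSS) for a
# square forecast/observation contingency table. Table values are hypothetical.

def heidke_skill_score(table):
    """HSS for a square contingency table: table[i][j] = count of
    (forecast category i, observed category j)."""
    n = sum(sum(row) for row in table)
    k = len(table)
    correct = sum(table[i][i] for i in range(k))
    row_tot = [sum(table[i]) for i in range(k)]
    col_tot = [sum(table[i][j] for i in range(k)) for j in range(k)]
    pc = correct / n                                              # proportion correct
    pe = sum(row_tot[i] * col_tot[i] for i in range(k)) / n ** 2  # chance agreement
    return (pc - pe) / (1.0 - pe)

table = [[30, 5, 2],
         [4, 25, 6],
         [1, 7, 20]]  # hypothetical 3-category verification counts
print(round(heidke_skill_score(table), 3))
```

HSS is 1 for a perfect forecast and 0 for a forecast no better than chance, which makes it a natural basis for comparing INTW against the raw NWP models.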
NASA Astrophysics Data System (ADS)
Porter, Christopher H.
The purpose of this study was to examine the variables which influence a high school student to enroll in an engineering discipline versus a physical science discipline. Data were collected utilizing the High School Activities, Characteristics, and Influences Survey, which was administered to students who were freshmen in an engineering or physical science major at an institution in the Southeastern United States. A total of 413 students participated in the survey. Collected data were analyzed using descriptive statistics, two-sample Wilcoxon tests, and binomial logistic regression techniques. A total of 29 variables were deemed significant between the general engineering and physical science students. The 29 significant variables were further analyzed to see which have an independent impact on a student's decision to enroll in an undergraduate engineering program, as opposed to an undergraduate physical science program. Four statistically significant variables were found to have an impact on a student's decision to enroll in an engineering undergraduate program versus a physical science program: father's influence, participation in Project Lead the Way, and the subjects of mathematics and physics. Recommendations for theory, policy, and practice were discussed based on the results of the study. This study presented suggestions for developing ways to attract, educate, and move future engineers into the workforce.
Optoacoustic Monitoring of Physiologic Variables
Esenaliev, Rinat O.
2017-01-01
The optoacoustic (photoacoustic) technique is a novel diagnostic platform that can be used for noninvasive measurements of physiologic variables, functional imaging, and hemodynamic monitoring. This technique is based on generation and time-resolved detection of optoacoustic (thermoelastic) waves generated in tissue by short optical pulses. This provides probing of tissues and individual blood vessels with high optical contrast and ultrasound spatial resolution. Because the optoacoustic waves carry information on tissue optical and thermophysical properties, detection and analysis of the optoacoustic waves allow for measurements of physiologic variables with high accuracy and specificity. We proposed to use the optoacoustic technique for monitoring of a number of important physiologic variables including temperature, thermal coagulation, freezing, concentration of molecular dyes, nanoparticles, oxygenation, and hemoglobin concentration. In this review we present the origin of contrast and high spatial resolution in these measurements performed with optoacoustic systems developed and built by our group. We summarize data obtained in vitro, in experimental animals, and in humans on monitoring of these physiologic variables. Our data indicate that the optoacoustic technology may be used for monitoring of cerebral blood oxygenation in patients with traumatic brain injury and in neonatal patients, central venous oxygenation monitoring, total hemoglobin concentration monitoring, hematoma detection and characterization, monitoring of temperature, and coagulation and freezing boundaries during thermotherapy. PMID:29311964
NASA Astrophysics Data System (ADS)
Wilmking, Martin; Buras, Allan; Heinrich, Ingo; Scharnweber, Tobias; Simard, Sonia; Smiljanic, Marko; van der Maaten, Ernst; van der Maaten-Theunissen, Marieke
2014-05-01
Trees are sessile, long-living organisms and as such constantly need to adapt to changing environmental conditions. Accordingly, they often show high phenotypic plasticity (the ability to change phenotypic traits, such as allocation of resources) in response to environmental change. This high phenotypic plasticity is generally considered as one of the main ingredients for a sessile organism to survive and reach high ages. Precisely because of the ability of trees to reach old age and their inability to simply run away when conditions get worse, growth information recorded in tree rings has long been used as a major environmental proxy, covering time scales from decades to millennia. Past environmental conditions (e.g. climate) are recorded in, for example, annual tree-ring width, early- and latewood width, wood density, isotopic concentrations, cell anatomy or wood chemistry. One prerequisite for a reconstruction is that the relationship between the environmental variable influencing tree growth and the tree-growth variable itself is stable through time. This, however, may conflict with the ecological theory of high plasticity and the trees' ability to adapt to change. To untangle possible mechanisms leading to stable or unstable relationships between tree growth and environmental variables, it is helpful to have exact site information and several proxy variables of each tree-ring series available. Although we gain insight into the environmental history of a sampling site when sampling today, this is extremely difficult when using archeological wood. In this latter case, we face the additional challenge of unknown origin, provenance and (or) site conditions, making it even more important to use multiple proxy time-series from the same sample. Here, we review typical examples where the relationship between tree growth and environmental variables appears 1) stable and 2) unstable through time, and relate these two cases to ecological theory.
Based on ecological theory, we then give recommendations to improve the reliability of environmental reconstructions using tree rings.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shafiq ul Hassan, M; Zhang, G; Moros, E
2016-06-15
Purpose: To present a simple approach for investigating interscanner variability of radiomics features in computed tomography (CT) using a standard ACR phantom. Methods: The standard ACR phantom was scanned on CT scanners from three different manufacturers. Scanning parameters of 120 kVp and 200 mA were used, with a slice thickness of 3.0 mm on two scanners and 3.27 mm on the third. Three spherical regions of interest (ROI) from the water, medium-density, and high-density inserts were contoured. Ninety-four radiomics features were extracted using an in-house program. These features include shape (11), intensity (22), GLCM (26), GLZSM (11), RLM (11), NGTDM (5), and 8 fractal dimension features. To evaluate the interscanner variability across the three scanners, a coefficient of variation (COV) was calculated for each feature group. Each group was further classified according to COV by calculating the percentage of features in each of the following categories: COV less than 2%, between 2 and 10%, and greater than 10%. Results: For all feature groups, a similar trend was observed across the three different inserts. Shape features were the most robust for all scanners, as expected: 70% of the shape features had COV <2%. For the intensity feature group, the percentage of features with COV <2% varied from 9 to 32% across the three scanners. All features in the four groups GLCM, GLZSM, RLM and NGTDM were found to have interscanner variability ≥2%. The fractal dimension dependence for the medium- and high-density inserts was similar, while it was different for the water insert. Conclusion: We conclude that even for similar scanning conditions, interscanner variability across different scanners was significant. The texture features based on GLCM, GLZSM, RLM and NGTDM are highly scanner dependent. Since the inserts of the ACR phantom are not heterogeneous in HU values, this suggests that matrix-based second-order features are highly affected by variation in noise. Research partly funded by NIH/NCI R01CA190105-01.
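The COV screen described above can be sketched in a few lines: compute each feature's coefficient of variation across scanners, then report the share of features falling in the <2%, 2-10%, and >10% bands. The feature values below are hypothetical, not the study's measurements.

```python
# Sketch of an interscanner coefficient-of-variation (COV) screen for
# radiomics features, using the COV bands from the abstract.
# Per-scanner feature values are hypothetical.

import statistics

def cov_percent(values):
    """Coefficient of variation across scanners, in percent."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

def band_fractions(features):
    """features: dict of name -> per-scanner values. Returns the fraction
    of features with COV <2%, 2-10%, and >10%."""
    bands = {"<2%": 0, "2-10%": 0, ">10%": 0}
    for values in features.values():
        c = cov_percent(values)
        if c < 2.0:
            bands["<2%"] += 1
        elif c <= 10.0:
            bands["2-10%"] += 1
        else:
            bands[">10%"] += 1
    n = len(features)
    return {k: v / n for k, v in bands.items()}

features = {  # hypothetical values from three scanners
    "sphericity":    [0.91, 0.90, 0.91],
    "glcm_contrast": [12.0, 14.5, 10.2],
    "mean_hu":       [41.0, 41.5, 40.8],
}
print(band_fractions(features))
```

A feature landing in the >10% band, as the texture features did in this study, is a candidate to exclude (or harmonize) before pooling data across scanners.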
NASA Astrophysics Data System (ADS)
Ouellette, G., Jr.; DeLong, K. L.
2016-02-01
High-resolution proxy records of sea surface temperature (SST) are increasingly being produced using trace element and isotope variability within the skeletal materials of marine organisms such as corals, mollusks, sclerosponges, and coralline algae. Translating the geochemical variations within these organisms into records of SST requires calibration with SST observations using linear regression methods, preferably with in situ SST records that span several years. However, locations with such records are sparse; therefore, calibration is often accomplished using gridded SST data products such as the Hadley Centre's HadSST (5°) and interpolated HadISST (1°) data sets, NOAA's extended reconstructed SST data set (ERSST; 2°), optimum interpolation SST (OISST; 1°), and Kaplan SST data sets (5°). From these data products, the SST used for proxy calibration is obtained for a single grid cell that includes the proxy's study site. The gridded data sets are based on the International Comprehensive Ocean-Atmosphere Data Set (ICOADS), and each uses different methods of interpolation to produce the globally and temporally complete data products, except for HadSST, which is not interpolated but quality controlled. This study compares SST for a single site from these gridded data products with a high-resolution satellite-based SST data set from NOAA (Pathfinder; 4 km), with in situ SST data, and with coral Sr/Ca variability for our study site in Haiti, to assess differences between these SST records with a focus on seasonal variability. Our results indicate substantial differences in the seasonal variability captured for the same site among these data sets, on the order of 1-3°C. This analysis suggests that of the data products, high-resolution satellite SST best captured seasonal variability at the study site. Unfortunately, satellite SST records are limited to the past few decades. If satellite SST records are to be used to calibrate proxy records, collecting modern, living samples is desirable.
Studying the photometric and spectroscopic variability of the magnetic hot supergiant ζ Orionis Aa
NASA Astrophysics Data System (ADS)
Buysschaert, B.; Neiner, C.; Richardson, N. D.; Ramiaramanantsoa, T.; David-Uraz, A.; Pablo, H.; Oksala, M. E.; Moffat, A. F. J.; Mennickent, R. E.; Legeza, S.; Aerts, C.; Kuschnig, R.; Whittaker, G. N.; Popowicz, A.; Handler, G.; Wade, G. A.; Weiss, W. W.
2017-06-01
Massive stars play a significant role in the chemical and dynamical evolution of galaxies. However, much of their variability, particularly during their evolved supergiant stage, is poorly understood. To understand the variability of evolved massive stars in more detail, we present a study of the O9.2Ib supergiant ζ Ori Aa, the only currently confirmed supergiant to host a magnetic field. We have obtained two-color space-based BRIght Target Explorer photometry (BRITE) for ζ Ori Aa during two observing campaigns, as well as simultaneous ground-based, high-resolution optical CHIRON spectroscopy. We perform a detailed frequency analysis to detect and characterize the star's periodic variability. We detect two significant, independent frequencies, their higher harmonics, and combination frequencies: the stellar rotation period Prot = 6.82 ± 0.18 d, most likely related to the presence of the stable magnetic poles, and a variation with a period of 10.0 ± 0.3 d attributed to the circumstellar environment, also detected in the Hα and several He I lines, yet absent in the purely photospheric lines. We confirm the variability at Prot/4, likely caused by surface inhomogeneities that are possible photospheric drivers of the discrete absorption components. No stellar pulsations were detected in the data. The level of circumstellar activity clearly differs between the two BRITE observing campaigns. We demonstrate that ζ Ori Aa is a highly variable star with both periodic and non-periodic variations, as well as episodic events. The rotation period we determined agrees well with the spectropolarimetric value from the literature. The changing activity level observed with BRITE could explain why the rotational modulation of the magnetic measurements was not clearly detected at all epochs.
Based on data collected by the BRITE Constellation satellite mission, designed, built, launched, operated and supported by the Austrian Research Promotion Agency (FFG), the University of Vienna, the Technical University of Graz, the Canadian Space Agency (CSA), the University of Toronto Institute for Aerospace Studies (UTIAS), the Foundation for Polish Science & Technology (FNiTP MNiSW), and National Science Centre (NCN).Based on CHIRON spectra collected under CNTAC proposal CN2015A-122.
Garcia-Vicente, Ana María; Molina, David; Pérez-Beteta, Julián; Amo-Salas, Mariano; Martínez-González, Alicia; Bueno, Gloria; Tello-Galán, María Jesús; Soriano-Castrejón, Ángel
2017-12-01
To study the influence of dual time point 18F-FDG PET/CT on textural features and SUV-based variables and the relations among them. Fifty-six patients with locally advanced breast cancer (LABC) were prospectively included. All of them underwent a standard 18F-FDG PET/CT (PET-1) and a delayed acquisition (PET-2). After segmentation, SUV variables (SUVmax, SUVmean, and SUVpeak), metabolic tumor volume (MTV), and total lesion glycolysis (TLG) were obtained. Eighteen three-dimensional (3D) textural measures were computed, including run-length matrices (RLM) features, co-occurrence matrices (CM) features, and energies. Differences between all PET-derived variables obtained in PET-1 and PET-2 were studied. Significant differences were found between the SUV-based parameters and MTV obtained in the dual time point PET/CT, with higher values of SUV-based variables and lower MTV in the PET-2 with respect to the PET-1. Regarding the textural parameters obtained in the dual time point acquisition, significant differences were found for the short run emphasis, low gray-level run emphasis, short run high gray-level emphasis, run percentage, long run emphasis, gray-level non-uniformity, homogeneity, and dissimilarity. Textural variables showed relations with MTV and TLG. Significant differences of textural features were found in dual time point 18F-FDG PET/CT. Thus, a dynamic behavior of metabolic characteristics should be expected, with higher heterogeneity in delayed PET acquisition compared with the standard PET. A greater heterogeneity was found in bigger tumors.
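Two of the co-occurrence-matrix texture features named above, homogeneity and dissimilarity, can be sketched for a single 2D offset as below. The study computed 3D textural measures with unspecified offsets and normalization, so this is a generic illustration, not the authors' pipeline; the image is hypothetical.

```python
# Sketch of GLCM homogeneity and dissimilarity for a small hypothetical
# gray-level image, using a single offset of one pixel to the right.

def glcm(image, levels):
    """Normalized gray-level co-occurrence matrix for offset (0, 1)."""
    m = [[0.0] * levels for _ in range(levels)]
    total = 0
    for row in image:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1.0
            total += 1
    return [[v / total for v in row] for row in m]

def homogeneity(p):
    """High when co-occurring gray levels are similar."""
    return sum(p[i][j] / (1.0 + abs(i - j))
               for i in range(len(p)) for j in range(len(p)))

def dissimilarity(p):
    """Average absolute gray-level difference of co-occurring pixels."""
    return sum(abs(i - j) * p[i][j]
               for i in range(len(p)) for j in range(len(p)))

image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 2, 2, 2],
         [2, 2, 3, 3]]  # hypothetical 4-level image
p = glcm(image, 4)
print(round(homogeneity(p), 3), round(dissimilarity(p), 3))
```

Lower homogeneity and higher dissimilarity in the delayed acquisition would correspond to the increased heterogeneity the abstract reports.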
QUEST1 Variability Survey. II. Variability Determination Criteria and 200k Light Curve Catalog
NASA Astrophysics Data System (ADS)
Rengstorf, A. W.; Mufson, S. L.; Andrews, P.; Honeycutt, R. K.; Vivas, A. K.; Abad, C.; Adams, B.; Bailyn, C.; Baltay, C.; Bongiovanni, A.; Briceño, C.; Bruzual, G.; Coppi, P.; Della Prugna, F.; Emmet, W.; Ferrín, I.; Fuenmayor, F.; Gebhard, M.; Hernández, J.; Magris, G.; Musser, J.; Naranjo, O.; Oemler, A.; Rosenzweig, P.; Sabbey, C. N.; Sánchez, Ge.; Sánchez, Gu.; Schaefer, B.; Schenner, H.; Sinnott, J.; Snyder, J. A.; Sofia, S.; Stock, J.; van Altena, W.
2004-12-01
The QUEST (QUasar Equatorial Survey Team) Phase 1 camera has collected multibandpass photometry on a large strip of high Galactic latitude sky over a period of 26 months. This robust data set has been reduced and nightly catalogs compared to determine the photometric variability of the ensemble objects. Subsequent spectroscopic observations have confirmed a subset of the photometric variables as quasars, as previously reported. This paper reports on the details of the data reduction and analysis pipeline and presents multiple bandpass light curves for 198,213 QUEST1 objects, along with global variability information and matched Sloan photometry. Based on observations obtained at the Llano del Hato National Astronomical Observatory, operated by the Centro de Investigaciones de Astronomía for the Ministerio de Ciencia y Tecnologia of Venezuela.
NASA Astrophysics Data System (ADS)
Soundharajan, Bankaru-Swamy; Adeloye, Adebayo J.; Remesan, Renji
2016-07-01
This study employed a Monte-Carlo simulation approach to characterise the uncertainties in climate-change-induced variations in storage requirements and performance [reliability (time- and volume-based), resilience, vulnerability, and sustainability] of surface water reservoirs. Using a calibrated rainfall-runoff (R-R) model, the baseline runoff scenario was first simulated. The R-R inputs (rainfall and temperature) were then perturbed using plausible delta-changes to produce simulated climate change runoff scenarios. Stochastic models of the runoff were developed and used to generate ensembles of both the current and climate-change-perturbed future runoff scenarios. The resulting runoff ensembles were used to force simulation models of the behaviour of the reservoir to produce 'populations' of required reservoir storage capacity to meet demands, and the performance. Comparing these parameters between the current and the perturbed provided the population of climate change effects which was then analysed to determine the variability in the impacts. The methodology was applied to the Pong reservoir on the Beas River in northern India. The reservoir serves irrigation and hydropower needs and the hydrology of the catchment is highly influenced by Himalayan seasonal snow and glaciers, and Monsoon rainfall, both of which are predicted to change due to climate change. The results show that required reservoir capacity is highly variable with a coefficient of variation (CV) as high as 0.3 as the future climate becomes drier. Of the performance indices, the vulnerability recorded the highest variability (CV up to 0.5) while the volume-based reliability was the least variable.
Such variabilities or uncertainties will, no doubt, complicate the development of climate change adaptation measures; however, knowledge of their sheer magnitudes as obtained in this study will help in the formulation of appropriate policy and technical interventions for sustaining and possibly enhancing water security for irrigation and other uses served by Pong reservoir.
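The reservoir performance indices named above (reliability, resilience, vulnerability) can be sketched from a simulated supply series against demand. The definitions below follow the widely used Hashimoto-type formulations; the study's exact definitions may differ, and the series is hypothetical.

```python
# Sketch of time-based reservoir performance indices computed from a
# simulated supply/demand series (Hashimoto-style definitions assumed).

def performance(supply, demand):
    """Return (time-based reliability, resilience, vulnerability)."""
    fail = [s < d for s, d in zip(supply, demand)]
    n = len(fail)
    n_fail = sum(fail)
    reliability = 1.0 - n_fail / n                      # share of periods demand met
    recoveries = sum(1 for a, b in zip(fail, fail[1:]) if a and not b)
    resilience = recoveries / n_fail if n_fail else 1.0  # chance of recovering
    deficits = [d - s for s, d, f in zip(supply, demand, fail) if f]
    vulnerability = max(deficits) if deficits else 0.0   # worst single-period deficit
    return reliability, resilience, vulnerability

supply = [100, 80, 60, 100, 90, 100]   # hypothetical releases
demand = [90] * 6
rel, res, vul = performance(supply, demand)
print(rel, res, vul)
```

Running this over a Monte-Carlo ensemble of perturbed runoff scenarios, as the study does, yields the 'population' of indices whose spread (e.g. CV) quantifies the climate-change uncertainty.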
Model-based Clustering of High-Dimensional Data in Astrophysics
NASA Astrophysics Data System (ADS)
Bouveyron, C.
2016-05-01
The nature of data in Astrophysics has changed, as in other scientific fields, in the past decades due to the increase of measurement capabilities. As a consequence, data are nowadays frequently of high dimensionality and available in bulk or as streams. Model-based techniques for clustering are popular tools which are renowned for their probabilistic foundations and their flexibility. However, classical model-based techniques show disappointing behavior in high-dimensional spaces, which is mainly due to their dramatic over-parameterization. The recent developments in model-based classification overcome these drawbacks and allow high-dimensional data to be classified efficiently, even in the "small n / large p" situation. This work presents a comprehensive review of these recent approaches, including regularization-based techniques, parsimonious modeling, subspace classification methods and classification methods based on variable selection. The use of these model-based methods is also illustrated on real-world classification problems in Astrophysics using R packages.
Regression Analysis of Optical Coherence Tomography Disc Variables for Glaucoma Diagnosis.
Richter, Grace M; Zhang, Xinbo; Tan, Ou; Francis, Brian A; Chopra, Vikas; Greenfield, David S; Varma, Rohit; Schuman, Joel S; Huang, David
2016-08-01
To report diagnostic accuracy of optical coherence tomography (OCT) disc variables using both time-domain (TD) and Fourier-domain (FD) OCT, and to improve the use of OCT disc variable measurements for glaucoma diagnosis through regression analyses that adjust for optic disc size and axial length-based magnification error. Observational, cross-sectional. In total, 180 normal eyes of 112 participants and 180 eyes of 138 participants with perimetric glaucoma from the Advanced Imaging for Glaucoma Study. Diagnostic variables evaluated from TD-OCT and FD-OCT were: disc area, rim area, rim volume, optic nerve head volume, vertical cup-to-disc ratio (CDR), and horizontal CDR. These were compared with overall retinal nerve fiber layer thickness and ganglion cell complex. Regression analyses were performed that corrected for optic disc size and axial length. Areas under the receiver operating characteristic curve (AUROC) were used to assess diagnostic accuracy before and after the adjustments. An index based on multiple logistic regression that combined optic disc variables with axial length was also explored with the aim of improving diagnostic accuracy of disc variables. Comparison of diagnostic accuracy of disc variables, as measured by AUROC. The unadjusted disc variables with the highest diagnostic accuracies were: rim volume for TD-OCT (AUROC=0.864) and vertical CDR (AUROC=0.874) for FD-OCT. Magnification correction significantly worsened diagnostic accuracy for rim variables, and while optic disc size adjustments partially restored diagnostic accuracy, the adjusted AUROCs were still lower. Axial length adjustments to disc variables in the form of multiple logistic regression indices led to a slight but insignificant improvement in diagnostic accuracy. Our various regression approaches were not able to significantly improve disc-based OCT glaucoma diagnosis.
However, disc rim area and vertical CDR had very high diagnostic accuracy, and these disc variables can serve to complement additional OCT measurements for diagnosis of glaucoma.
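The AUROC values compared throughout this study can be computed without fitting a curve at all, via the Mann-Whitney relation: AUROC equals the probability that a randomly chosen diseased eye shows a more disease-like value than a randomly chosen normal eye. The sketch below illustrates this with hypothetical vertical cup-to-disc ratios, not the study's data.

```python
# Minimal rank-based AUROC sketch (Mann-Whitney relation), for a variable
# where higher values indicate disease. Example values are hypothetical.

def auroc(cases, controls):
    """Probability that a case value exceeds a control value (ties count 0.5)."""
    wins = 0.0
    for x in cases:
        for y in controls:
            if x > y:
                wins += 1.0
            elif x == y:
                wins += 0.5
    return wins / (len(cases) * len(controls))

glaucoma = [0.8, 0.75, 0.9, 0.6]   # hypothetical vertical CDR, glaucoma eyes
normal   = [0.4, 0.5, 0.6, 0.45]   # hypothetical vertical CDR, normal eyes
print(auroc(glaucoma, normal))
```

An AUROC of 0.5 means no discrimination and 1.0 perfect discrimination, which is the scale on which the adjusted and unadjusted disc variables were compared.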
NASA Astrophysics Data System (ADS)
Aristoff, Jeffrey M.; Horwood, Joshua T.; Poore, Aubrey B.
2014-01-01
We present a new variable-step Gauss-Legendre implicit-Runge-Kutta-based approach for orbit and uncertainty propagation, VGL-IRK, which includes adaptive step-size error control and which collectively, rather than individually, propagates nearby sigma points or states. The performance of VGL-IRK is compared to a professional (variable-step) implementation of Dormand-Prince 8(7) (DP8) and to a fixed-step, optimally-tuned, implementation of modified Chebyshev-Picard iteration (MCPI). Both nearly-circular and highly-elliptic orbits are considered using high-fidelity gravity models and realistic integration tolerances. VGL-IRK is shown to be up to eleven times faster than DP8 and up to 45 times faster than MCPI (for the same accuracy), in a serial computing environment. Parallelization of VGL-IRK and MCPI is also discussed.
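The Gauss-Legendre implicit Runge-Kutta family underlying VGL-IRK can be illustrated with its simplest member: the 2-stage, order-4 method. The sketch below solves the stage equations by plain fixed-point iteration with a fixed step on the harmonic oscillator; the paper's adaptive step-size control, sigma-point propagation, and high-fidelity gravity models are all omitted, so this is only a generic illustration of the integrator class.

```python
# Sketch: one fixed-step 2-stage Gauss-Legendre implicit Runge-Kutta (order 4)
# integrator, with fixed-point iteration for the implicit stage equations.
# Demonstrated on the harmonic oscillator x'' = -x.

import math

S3 = math.sqrt(3.0)
A = [[0.25, 0.25 - S3 / 6.0],     # Butcher tableau for 2-stage Gauss-Legendre
     [0.25 + S3 / 6.0, 0.25]]
B = [0.5, 0.5]
C = [0.5 - S3 / 6.0, 0.5 + S3 / 6.0]

def gl2_step(f, t, y, h, iters=50):
    """One Gauss-Legendre IRK step for y' = f(t, y); y is a list of floats."""
    k = [f(t, y), f(t, y)]                      # initial stage guesses
    for _ in range(iters):                      # fixed-point iteration on stages
        k = [f(t + C[i] * h,
               [y[j] + h * sum(A[i][m] * k[m][j] for m in range(2))
                for j in range(len(y))])
             for i in range(2)]
    return [y[j] + h * sum(B[i] * k[i][j] for i in range(2))
            for j in range(len(y))]

def oscillator(t, y):                            # y = [x, v]
    return [y[1], -y[0]]

y = [1.0, 0.0]
t, h = 0.0, 0.1
for _ in range(int(round(2 * math.pi / h))):     # integrate roughly one period
    y = gl2_step(oscillator, t, y, h)
    t += h
print(y)  # close to the initial state [1.0, 0.0] after ~one period
```

Gauss-Legendre methods are symplectic and conserve quadratic invariants (here the oscillator energy) to iteration tolerance, one reason they are attractive for long orbit propagation.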
Measuring cardiac waste: the premier cardiac waste measures.
Lowe, Timothy J; Partovian, Chohreh; Kroch, Eugene; Martin, John; Bankowitz, Richard
2014-01-01
The authors developed 8 measures of waste associated with cardiac procedures to assist hospitals in comparing their performance with peer facilities. Measure selection was based on review of the research literature, clinical guidelines, and consultation with key stakeholders. Development and validation used the data from 261 hospitals in a split-sample design. Measures were risk adjusted using Premier's CareScience methodologies or mean peer value based on Medicare Severity Diagnosis-Related Group assignment. High variability was found in resource utilization across facilities. Validation of the measures using item-to-total correlations (range = 0.27-0.78), Cronbach α (0.88), and Spearman rank correlation (0.92) showed high reliability and discriminatory power. Because of the level of variability observed among hospitals, this study suggests that there is opportunity for facilities to design successful waste reduction programs targeting cardiac-device procedures.
Santacana, Martí; Arias, Bárbara; Mitjans, Marina; Bonillo, Albert; Montoro, María; Rosado, Sílvia; Guillamat, Roser; Vallès, Vicenç; Pérez, Víctor; Forero, Carlos G; Fullana, Miquel A
2016-01-01
Anxiety disorders are highly prevalent and result in low quality of life and a high social and economic cost. The efficacy of cognitive-behavioural therapy (CBT) for anxiety disorders is well established, but a substantial proportion of patients do not respond to this treatment. Understanding which genetic and environmental factors are responsible for this differential response to treatment is a key step towards "personalized medicine". Based on previous research, our objective was to test whether the BDNF Val66Met polymorphism and/or childhood maltreatment are associated with response trajectories during exposure-based CBT for panic disorder (PD). We used Growth Mixture Modeling to identify latent classes of change (response trajectories) in patients with PD (N = 97) who underwent group manualized exposure-based CBT. We conducted logistic regression to investigate the effect on these trajectories of the BDNF Val66Met polymorphism and two different types of childhood maltreatment, abuse and neglect. We identified two response trajectories ("high response" and "low response"), and found that they were not significantly associated with either the genetic (BDNF Val66Met polymorphism) or childhood trauma-related variables of interest, nor with an interaction between these variables. We found no evidence to support an effect of the BDNF gene or childhood trauma-related variables on CBT outcome in PD. Future studies in this field may benefit from looking at other genotypes or using different (e.g. whole-genome) approaches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cowan, J.H.; Rose, K.A.; Rutherford, E.S.
1993-05-01
An individual-based model of the population dynamics of young-of-the-year striped bass Morone saxatilis in the Potomac River, Maryland, was used to test the hypothesis that historically high recruitment variability can be explained by changes in environmental and biological factors that result in relatively small changes in growth and mortality rates of striped bass larvae. The four factors examined were (1) size distribution of female parents, (2) zooplankton prey density during the development of striped bass larvae, (3) density of competing larval white perch M. americana, and (4) temperature during larval development. Simulation results suggest that variations in female size and in prey for larvae alone could cause 10-fold variability in recruitment. But no single factor alone caused changes in vital rates of age-0 fish that could account for the 145-fold variability in the Potomac River index of juvenile recruitment. However, combined positive or negative effects of two or more factors resulted in more than a 150-fold simulated recruitment variability, suggesting that combinations of factors can account for the high observed annual variability in striped bass recruitment success. Higher cumulative mortality of feeding larvae and younger life stages than of juveniles was common to all simulations, supporting the contention that striped bass year-class strength is determined prior to metamorphosis. 76 refs., 7 figs., 4 tabs.
Adde, Lars; Helbostad, Jorunn L; Jensenius, Alexander R; Taraldsen, Gunnar; Grunewaldt, Kristine H; Støen, Ragnhild
2010-08-01
The aim of this study was to investigate the predictive value of a computer-based video analysis of the development of cerebral palsy (CP) in young infants. A prospective study of general movements used recordings from 30 high-risk infants (13 males, 17 females; mean gestational age 31wks, SD 6wks; range 23-42wks) between 10 and 15 weeks post term when fidgety movements should be present. Recordings were analysed using computer vision software. Movement variables, derived from differences between subsequent video frames, were used for quantitative analyses. CP status was reported at 5 years. Thirteen infants developed CP (eight hemiparetic, four quadriparetic, one dyskinetic; seven ambulatory, three non-ambulatory, and three unknown function), of whom one had fidgety movements. Variability of the centroid of motion had a sensitivity of 85% and a specificity of 71% in identifying CP. By combining this with variables reflecting the amount of motion, specificity increased to 88%. Nine out of 10 children with CP, and for whom information about functional level was available, were correctly predicted with regard to ambulatory and non-ambulatory function. Prediction of CP can be provided by computer-based video analysis in young infants. The method may serve as an objective and feasible tool for early prediction of CP in high-risk infants.
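The frame-difference features described above, the amount of motion and the centroid of motion whose variability predicted CP, can be sketched as follows. The frames are tiny hypothetical grayscale arrays and the threshold is an assumed parameter; the study's software and thresholds are not reproduced here.

```python
# Sketch: per-frame quantity of motion and centroid of motion from
# differences between consecutive video frames. Frames and the pixel
# threshold are hypothetical.

def motion_features(frames, threshold=10):
    """For each consecutive frame pair, return (quantity_of_motion, centroid),
    where motion pixels are those whose absolute difference exceeds threshold."""
    out = []
    for f0, f1 in zip(frames, frames[1:]):
        pix = [(r, c) for r in range(len(f0)) for c in range(len(f0[0]))
               if abs(f1[r][c] - f0[r][c]) > threshold]
        if pix:
            cy = sum(p[0] for p in pix) / len(pix)
            cx = sum(p[1] for p in pix) / len(pix)
            out.append((len(pix), (cy, cx)))
        else:
            out.append((0, None))          # no motion between these frames
    return out

frames = [  # hypothetical 3x3 grayscale frames
    [[0, 0, 0], [0, 0, 0], [0, 0, 0]],
    [[50, 0, 0], [0, 0, 0], [0, 0, 0]],
    [[50, 0, 0], [0, 0, 50], [0, 0, 0]],
]
print(motion_features(frames))
```

The study's predictor is then the variability of the centroid over the recording, which distinguishes the smooth, variable fidgety movements of typical infants from the more monotonous movement of infants who later developed CP.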
High-temperature thermal destruction of poultry derived wastes for energy recovery in Australia.
Florin, N H; Maddocks, A R; Wood, S; Harris, A T
2009-04-01
The high-temperature thermal destruction of poultry derived wastes (e.g., manure and bedding) for energy recovery is viable in Australia when considering resource availability and equivalent commercial-scale experience in the UK. In this work, we identified and examined the opportunities and risks associated with common thermal destruction techniques, including: volume of waste, costs, technological risks and environmental impacts. Typical poultry waste streams were characterised based on compositional analysis, thermodynamic equilibrium modelling and non-isothermal thermogravimetric analysis coupled with mass spectrometry (TG-MS). Poultry waste is highly variable but otherwise comparable with other biomass fuels. The major technical and operating challenges are associated with this variability in terms of: moisture content, presence of inorganic species and type of litter. This variability is subject to a range of parameters including: type and age of bird, and geographical and seasonal inconsistencies. There are environmental and health considerations associated with combustion and gasification due to the formation of NOx, SOx, H2S and HCl gases. Mitigation of these emissions is achievable through correct plant design and operation, albeit with a significant economic penalty. Based on our analysis and literature data, we present cost estimates for generic poultry-waste-fired power plants with throughputs of 2 and 8 tonnes/h.
ENSO related variability in the Southern Hemisphere, 1948-2000
NASA Astrophysics Data System (ADS)
Ribera, Pedro; Mann, Michael E.
2003-01-01
The spatiotemporal evolution of Southern Hemisphere climate variability is diagnosed based on the NCEP reanalysis (1948-2000) dataset. Using the MTM-SVD analysis method, significant narrowband variability is isolated from the multivariate dataset. It is found that the ENSO signal exhibits statistically significant behavior at quasiquadrennial (3-6 yr) timescales for the full time period. A significant quasibiennial (2-3 yr) signal emerges only for the latter half of the period. Analyses of the spatial evolution of the two reconstructed signals shed additional light on linkages between low- and high-latitude Southern Hemisphere climate anomalies.
Path Finding on High-Dimensional Free Energy Landscapes
NASA Astrophysics Data System (ADS)
Díaz Leines, Grisell; Ensing, Bernd
2012-07-01
We present a method for determining the average transition path and the free energy along this path in the space of selected collective variables. The formalism is based upon a history-dependent bias along a flexible path variable within the metadynamics framework but with a trivial scaling of the cost with the number of collective variables. Controlling the sampling of the orthogonal modes recovers the average path and the minimum free energy path as the limiting cases. The method is applied to resolve the path and the free energy of a conformational transition in alanine dipeptide.
Comparison of correlated correlations.
Cohen, A
1989-12-01
We consider a problem where kappa highly correlated variables are available, each being a candidate for predicting a dependent variable. Only one of the kappa variables can be chosen as a predictor and the question is whether there are significant differences in the quality of the predictors. We review several tests derived previously and propose a method based on the bootstrap. The motivating medical problem was to predict 24 hour proteinuria by protein-creatinine ratio measured at either 08:00, 12:00 or 16:00. The tests which we discuss are illustrated by this example and compared using a small Monte Carlo study.
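The bootstrap idea described above can be sketched as follows: resample whole subjects with replacement (preserving the correlation structure between the candidate predictors) and form a confidence interval for the difference of the two correlations with the outcome. The toy data, sample size, and resampling count are illustrative assumptions, not the study's, and only two candidate predictors are compared for brevity.

```python
import numpy as np

def bootstrap_corr_diff(x1, x2, y, n_boot=2000, seed=0):
    """Bootstrap CI for r(x1, y) - r(x2, y), where x1 and x2 are two
    candidate predictors measured on the same subjects (hence correlated)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)          # resample whole subjects
        r1 = np.corrcoef(x1[idx], y[idx])[0, 1]
        r2 = np.corrcoef(x2[idx], y[idx])[0, 1]
        diffs[b] = r1 - r2
    lo, hi = np.percentile(diffs, [2.5, 97.5])
    return lo, hi   # 95% CI; 0 outside the CI suggests a real difference

# Toy data standing in for protein-creatinine ratios at two times of day.
rng = np.random.default_rng(1)
y = rng.normal(size=80)                      # 24 h proteinuria (toy)
x1 = y + rng.normal(scale=0.5, size=80)      # stronger candidate predictor
x2 = y + rng.normal(scale=1.5, size=80)      # weaker candidate predictor
print(bootstrap_corr_diff(x1, x2, y))
```

Resampling subjects (rows) rather than the two correlations independently is what makes the procedure valid for *correlated* correlations.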
NASA Astrophysics Data System (ADS)
Unger, André J. A.
2010-02-01
This work is the second installment in a two-part series, and focuses on object-oriented programming methods to implement an augmented-state variable approach to aggregate the PCS index and introduce the Bermudan-style call feature into the proposed CAT bond model. The PCS index is aggregated quarterly using a discrete Asian running-sum formulation. The resulting aggregate PCS index augmented-state variable is used to specify the payoff (principal) on the CAT bond based on reinsurance layers. The purpose of the Bermudan-style call option is to allow the reinsurer to minimize their interest rate risk exposure on making fixed coupon payments under prevailing interest rates. A sensitivity analysis is performed to determine the impact of uncertainty in the frequency and magnitude of hurricanes on the price of the CAT bond. Results indicate that while the CAT bond is highly sensitive to the natural variability in the frequency of landfalling hurricanes between El Niño and non-El Niño years, it remains relatively insensitive to uncertainty in the magnitude of damages. In addition, results indicate that the maximum price of the CAT bond is insensitive to whether it is engineered to cover low-frequency high-magnitude events in a 'high' reinsurance layer relative to high-frequency low-magnitude events in a 'low' reinsurance layer. Also, while it is possible for the reinsurer to minimize their interest rate risk exposure on the fixed coupon payments, the impact of this risk on the price of the CAT bond appears small relative to the natural variability in the CAT bond price, and consequently catastrophic risk, due to uncertainty in the frequency and magnitude of landfalling hurricanes.
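The discrete Asian running sum and a layer-based payoff can be sketched as below; the index values, attachment and exhaustion points, and the linear layer payoff are illustrative assumptions for a generic reinsurance layer, not the paper's calibration.

```python
import numpy as np

def asian_running_sum(quarterly_index):
    """Discrete Asian-style running sum of a loss index: the augmented
    state variable at quarter t is the cumulative total of the index up
    to and including t (a path-dependent quantity, unlike a payoff on
    the terminal index value alone)."""
    return np.cumsum(quarterly_index)

def layer_payoff(aggregate_loss, attachment, exhaustion, principal=1.0):
    """Fraction of CAT-bond principal lost for a reinsurance layer
    [attachment, exhaustion]: nothing below the attachment point,
    everything above the exhaustion point, linear in between."""
    frac = (aggregate_loss - attachment) / (exhaustion - attachment)
    return principal * np.clip(frac, 0.0, 1.0)

# Toy quarterly PCS-style index values (assumed numbers).
q = np.array([0.0, 5.0, 2.0, 8.0])
agg = asian_running_sum(q)           # running totals: 0, 5, 7, 15
print(layer_payoff(agg[-1], attachment=10.0, exhaustion=20.0))
```

With an aggregate loss of 15 against a 10-to-20 layer, half the layer is consumed, so half the principal is at risk.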
Quantitative variability of renewable energy resources in Norway
NASA Astrophysics Data System (ADS)
Christakos, Konstantinos; Varlas, George; Cheliotis, Ioannis; Aalstad, Kristoffer; Papadopoulos, Anastasios; Katsafados, Petros; Steeneveld, Gert-Jan
2017-04-01
Based on European Union (EU) targets for 2030, the share of renewable energy (RE) consumption should be increased to 27%. RE resources such as hydropower, wind, wave and solar power depend strongly on the chaotic behavior of weather conditions and climate. Because of this dependency, predicting the spatiotemporal variability of RE resources is a more crucial factor than for other energy resources (e.g., carbon-based energy). The fluctuation of RE resources can affect the development of RE technologies, the energy grid, supply and prices. This study investigates the variability of the potential RE resources in Norway. More specifically, hydropower, wind, wave, and solar power are quantitatively analyzed and correlated across various spatial and temporal scales. In order to analyze the diversities and their interrelationships, reanalysis and observational data of wind, precipitation, wave, and solar radiation are used for a quantitative assessment. The results indicate a high variability of marine RE resources in the North Sea and the Norwegian Sea.
Liu, Shiyong; Triantis, Konstantinos P; Zhao, Li; Wang, Youfa
2018-01-01
In practical research, it has been found that most people make health-related decisions based not on numerical data but on perceptions. Examples include the perceptions, and their corresponding linguistic values, of health risks such as smoking, syringe sharing, eating energy-dense food, and drinking sugar-sweetened beverages. To understand the mechanisms that affect the implementation of health-related interventions, we employ fuzzy variables to quantify linguistic variables in healthcare modeling, using an integrated system dynamics and agent-based model. In a nonlinear, causally driven simulation environment governed by feedback loops, we mathematically demonstrate how interventions at an aggregate level affect the dynamics of linguistic variables that are captured by fuzzy agents, and how interactions among fuzzy agents simultaneously affect the formation of the different clusters (groups) that are targeted by specific interventions. In this paper, we provide an innovative framework to capture multi-stage fuzzy uncertainties manifested among interacting heterogeneous agents (individuals) and intervention decisions that affect homogeneous agents (groups of individuals) in a hybrid model that combines an agent-based simulation model (ABM) and a system dynamics model (SDM). Having built the platform to incorporate high-dimensional data in a hybrid ABM/SDM model, this paper demonstrates how one can obtain the state variable behaviors in the SDM and the corresponding values of linguistic variables in the ABM.
This research not only enriches the application of fuzzy set theory by capturing the dynamics of variables associated with interacting fuzzy agents that lead to aggregate behaviors, but also informs implementation research by enabling the incorporation of linguistic variables at both individual and institutional levels, making unstructured linguistic data meaningful and quantifiable in a simulation environment. It can help practitioners and decision makers gain a better understanding of the dynamics and complexities of precision intervention in healthcare, and can aid the optimal allocation of resources to targeted group(s) and the achievement of maximum utility. As this technology matures, one can design policy flight simulators with which policy and intervention designers can test a variety of assumptions when evaluating alternative interventions.
Lee, Jong-Ho; Kim, Kyu-Hyeong; Hong, Jin-Woo; Lee, Won-Chul; Koo, Sungtae
2011-06-01
This study aimed to compare the effects of high-frequency electroacupuncture (EA) and low-frequency EA on the autonomic nervous system by using a heart rate variability measuring device in normal individuals. Fourteen participants were recruited and each participated in the high-frequency and low-frequency sessions (crossover design). The order of sessions was randomized and the interval between the two sessions was over 2 weeks. Participants received needle insertion with 120-Hz stimulation during the high-frequency session (high-frequency EA group), and with 2-Hz stimulation during the low-frequency session (low-frequency EA group). Acupuncture needles were inserted perpendicularly at the LI 4 and LI 11 acupoints, followed by delivery of electric pulses to these points for 15 minutes. Heart rate variability was measured 5 minutes before and after EA stimulation by a heart rate variability measuring system. We found a significant increase in the standard deviation of the normal-to-normal interval in the high-frequency EA group, with no change in the low-frequency EA group. Both the high-frequency and low-frequency EA groups showed no significant differences in other parameters, including high-frequency power, low-frequency power, and the ratio of low-frequency power to high-frequency power. Based on these findings, we concluded that high-frequency EA stimulation is more effective than low-frequency EA stimulation in increasing autonomic nervous activity, and that there is no difference between the two EA frequencies in enhancing sympathovagal balance. Copyright © 2011 Korean Pharmacopuncture Institute. All rights reserved.
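The time-domain index reported above, the standard deviation of normal-to-normal (NN) intervals (SDNN), is straightforward to compute from an NN-interval series; a minimal sketch with illustrative interval values follows (the frequency-domain indices such as LF/HF would additionally require spectral estimation).

```python
import numpy as np

def sdnn(nn_ms):
    """Standard deviation of normal-to-normal (NN) intervals in ms,
    the time-domain HRV index reported to increase after 120-Hz EA."""
    return float(np.std(nn_ms, ddof=1))

def rmssd(nn_ms):
    """Root mean square of successive NN-interval differences (ms),
    a common companion time-domain index (not reported in the study)."""
    return float(np.sqrt(np.mean(np.diff(nn_ms) ** 2)))

# Toy NN-interval series (ms); the values are illustrative only.
nn = np.array([812, 790, 845, 830, 798, 860, 825, 805], dtype=float)
print(f"SDNN  = {sdnn(nn):.1f} ms")
print(f"RMSSD = {rmssd(nn):.1f} ms")
```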
NASA Astrophysics Data System (ADS)
Saghafian, Amirreza; Pitsch, Heinz
2012-11-01
A compressible flamelet/progress variable approach (CFPV) has been devised for high-speed flows. Temperature is computed from the transported total energy and tabulated species mass fractions and the source term of the progress variable is rescaled with pressure and temperature. The combustion is thus modeled by three additional scalar equations and a chemistry table that is computed in a pre-processing step. Three-dimensional direct numerical simulation (DNS) databases of reacting supersonic turbulent mixing layer with detailed chemistry are analyzed to assess the underlying assumptions of CFPV. Large eddy simulations (LES) of the same configuration using the CFPV method have been performed and compared with the DNS results. The LES computations are based on the presumed subgrid PDFs of mixture fraction and progress variable, beta function and delta function respectively, which are assessed using DNS databases. The flamelet equation budget is also computed to verify the validity of CFPV method for high-speed flows.
Central venous access and handwashing: variability in policies and practices.
Galway, Robyn; Harrod, Mary Ellen; Crisp, Jackie; Donnellan, Robyn; Hardy, Jan; Harvey, Alice; Maurice, Lucy; Petty, Sheila; Senner, Anne
2003-12-01
This study examined variability in handwashing policy between hospitals, variability in handwashing practices in nurses and how practice differed from policy in tertiary paediatric hospitals in Australia and New Zealand. Eight of the possible nine major paediatric hospitals provided a copy of their handwashing and/or central venous access device (CVAD) policies, and 67 nurses completed a survey on their handwashing practices associated with CVAD management. A high degree of variability was found in relation to all the questions posed in the study. There was little consistency between policies and little agreement between policies and clinical practice, with many nurses washing for longer than required by policy. Rigour of handwashing also varied according to the procedure undertaken and the type of CVAD with activities undertaken farther from the insertion site of the device more likely to be performed using a clean rather than an aseptic handwashing technique. As both patients and nursing staff move within and between hospitals, a uniform and evidence-based approach to handwashing is highly desirable.
Embroidered Electromyography: A Systematic Design Guide.
Shafti, Ali; Ribas Manero, Roger B; Borg, Amanda M; Althoefer, Kaspar; Howard, Matthew J
2017-09-01
Muscle activity monitoring or electromyography (EMG) is a useful tool. However, EMG is typically invasive, expensive and difficult to use for untrained users. A possible solution is textile-based surface EMG (sEMG) integrated into clothing as a wearable device. This is, however, challenging due to 1) uncertainties in the electrical properties of conductive threads used for electrodes, 2) imprecise fabrication technologies (e.g., embroidery, sewing), and 3) lack of standardization in design variable selection. This paper, for the first time, provides a design guide for such sensors by performing a thorough examination of the effect of design variables on sEMG signal quality. Results show that imprecisions in digital embroidery lead to a trade-off between low electrode impedance and high manufacturing consistency. An optimum set of variables for this trade-off is identified and tested with sEMG during a variable force isometric grip exercise with n = 12 participants, compared with conventional gel-based electrodes. Results show that thread-based electrodes provide a similar level of sensitivity to force variation as gel-based electrodes with about 90% correlation to expected linear behavior. As proof of concept, jogging leggings with integrated embroidered sEMG are made and successfully tested for detection of muscle fatigue while running on different surfaces.
NASA Astrophysics Data System (ADS)
Brown, C.; Carriquiry, M.; Souza Filho, F. A.
2006-12-01
Hydroclimatological variability presents acute challenges to urban water supply providers. The impact is often most severe in developing nations where hydrologic and climate variability can be very high, water demand is unmet and increasing, and the financial resources to mitigate the social effects of that variability are limited. Furthermore, existing urban water systems face a reduced solution space, constrained by competing and conflicting interests, such as irrigation demand, recreation and hydropower production, and new (relative to system design) demands to satisfy environmental flow requirements. These constraints magnify the impacts of hydroclimatic variability and increase the vulnerability of urban areas to climate change. The high economic and social costs of structural responses to hydrologic variability, such as groundwater utilization and the construction or expansion of dams, create a need for innovative alternatives. Advances in hydrologic and climate forecasting, and the increasing sophistication and acceptance of incentive-based mechanisms for achieving economically efficient water allocation offer potential for improving the resilience of existing water systems to the challenge of variable supply. This presentation will explore the performance of a system of climate informed economic instruments designed to facilitate the reduction of hydroclimatologic variability-induced impacts on water-sensitive stakeholders. The system is comprised of bulk water option contracts between urban water suppliers and agricultural users and insurance indexed on reservoir inflows designed to cover the financial needs of the water supplier in situations where the option is likely to be exercised. Contract and insurance parameters are linked to forecasts and the evolution of seasonal precipitation and streamflow and designed for financial and political viability. A simulation of system performance is presented based on ongoing work in Metro Manila, Philippines. 
The system is further evaluated as an alternative strategy to infrastructure expansion for climate change adaptation in the water resources sector.
Wang, Feng; Kaplan, Jess L; Gold, Benjamin D; Bhasin, Manoj K; Ward, Naomi L; Kellermayer, Richard; Kirschner, Barbara S; Heyman, Melvin B; Dowd, Scot E; Cox, Stephen B; Dogan, Haluk; Steven, Blaire; Ferry, George D; Cohen, Stanley A; Baldassano, Robert N; Moran, Christopher J; Garnett, Elizabeth A; Drake, Lauren; Otu, Hasan H; Mirny, Leonid A; Libermann, Towia A; Winter, Harland S; Korolev, Kirill S
2016-02-02
The relationship between the host and its microbiota is challenging to understand because both microbial communities and their environments are highly variable. We have developed a set of techniques based on population dynamics and information theory to address this challenge. These methods identify additional bacterial taxa associated with pediatric Crohn disease and can detect significant changes in microbial communities with fewer samples than previous statistical approaches required. We have also substantially improved the accuracy of the diagnosis based on the microbiota from stool samples, and we found that the ecological niche of a microbe predicts its role in Crohn disease. Bacteria typically residing in the lumen of healthy individuals decrease in disease, whereas bacteria typically residing on the mucosa of healthy individuals increase in disease. Our results also show that the associations with Crohn disease are evolutionarily conserved and provide a mutual information-based method to depict dysbiosis. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Towards an integrated set of surface meteorological observations for climate science and applications
NASA Astrophysics Data System (ADS)
Dunn, Robert; Thorne, Peter
2017-04-01
We cannot predict what is not observed, and we cannot analyse what is not archived. To meet current scientific and societal demands, as well as future requirements for climate services, it is vital that the management and curation of land-based meteorological data holdings be improved. A comprehensive global set of data holdings, of known provenance, integrated across both climate variable and timescale, is required to meet the wide range of user needs. Presently, the land-based holdings are highly fractured into global, regional and national holdings for different variables and timescales, from a variety of sources, and in a mixture of formats. We present a high-level overview, based on broad community input, of the steps required to bring about this integration and progress towards such a database. Any long-term, international programme creating such an integrated database will transform our collective ability to provide societally relevant research, analysis and predictions across the globe.
Automated combinatorial method for fast and robust prediction of lattice thermal conductivity
NASA Astrophysics Data System (ADS)
Plata, Jose J.; Nath, Pinku; Usanmaz, Demet; Toher, Cormac; Fornari, Marco; Buongiorno Nardelli, Marco; Curtarolo, Stefano
The lack of computationally inexpensive and accurate ab-initio based methodologies to predict lattice thermal conductivity, κl, without computing the anharmonic force constants or performing time-consuming ab-initio molecular dynamics, is one of the obstacles preventing the accelerated discovery of new high or low thermal conductivity materials. The Slack equation is the best alternative to other more expensive methodologies but is highly dependent on two variables: the acoustic Debye temperature, θa, and the Grüneisen parameter, γ. Furthermore, different definitions can be used for these two quantities depending on the model or approximation. Here, we present a combinatorial approach based on the quasi-harmonic approximation to elucidate which definitions of both variables produce the best predictions of κl. A set of 42 compounds was used to test accuracy and robustness of all possible combinations. This approach is ideal for obtaining more accurate values than fast screening models based on the Debye model, while being significantly less expensive than methodologies that solve the Boltzmann transport equation.
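As a rough sketch of the Slack approach the abstract builds on, the parameterization below follows the Morelli-Slack form from the wider literature, not this paper; the prefactor A(γ) and the definition θ_a = θ_D·n^(-1/3) are assumptions, and the paper's point is precisely that several definitions of θ_a and γ are possible and yield different predictions.

```python
def slack_kappa(M_avg_amu, theta_a_K, delta_angstrom, gamma, T_K):
    """Slack estimate of lattice thermal conductivity in W/(m K):

        kappa_L = A(gamma) * M * theta_a**3 * delta / (gamma**2 * T)

    with M the average atomic mass (amu), theta_a the acoustic Debye
    temperature (K), delta the cube root of the volume per atom (in
    Angstrom), and A(gamma) ~ 2.43e-6 / (1 - 0.514/gamma + 0.228/gamma**2).
    The constants follow the Morelli-Slack parameterization (an assumption
    here); other choices of theta_a and gamma change the prediction, which
    is the variability the combinatorial approach explores.
    """
    A = 2.43e-6 / (1.0 - 0.514 / gamma + 0.228 / gamma**2)
    return A * M_avg_amu * theta_a_K**3 * delta_angstrom / (gamma**2 * T_K)

# Rough silicon-like inputs (assumed): theta_a = 640 K * 2**(-1/3) ~ 508 K,
# delta ~ 2.71 Angstrom; the result should land at order 1e2 W/(m K).
print(slack_kappa(28.09, 508.0, 2.71, 1.0, 300.0))
```

Note the 1/T dependence: within this model, doubling the temperature halves the predicted conductivity.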
Variable high gradient permanent magnet quadrupole (QUAPEVA)
NASA Astrophysics Data System (ADS)
Marteau, F.; Ghaith, A.; N'Gotta, P.; Benabderrahmane, C.; Valléau, M.; Kitegi, C.; Loulergue, A.; Vétéran, J.; Sebdaoui, M.; André, T.; Le Bec, G.; Chavanne, J.; Vallerand, C.; Oumbarek, D.; Cosson, O.; Forest, F.; Jivkov, P.; Lancelot, J. L.; Couprie, M. E.
2017-12-01
Different applications such as laser plasma acceleration, colliders, and diffraction-limited light sources require high gradient quadrupoles, with strengths that can reach up to 200 T/m for a typical 10 mm bore diameter. We present here a permanent magnet based quadrupole (the so-called QUAPEVA), composed of a Halbach ring surrounded by four permanent magnet cylinders. Its design, including the magnetic simulation modeling that enabled us to reach 201 T/m with a gradient variability of 45%, is reported together with the associated mechanical issues. Magnetic measurements of seven systems of different lengths are presented and confirm the theoretical expectations. The variation of the magnetic center while changing the gradient strength is ±10 μm. A triplet of QUAPEVA magnets is used to efficiently focus a beam with large energy spread and high divergence generated by a laser plasma acceleration source for a free electron laser demonstration, and has enabled us to perform beam-based alignment and to control the dispersion of the beam.
NASA Astrophysics Data System (ADS)
Hofer, Marlis; Nemec, Johanna
2016-04-01
This study presents first steps towards verifying the hypothesis that uncertainty in global and regional glacier mass simulations can be reduced considerably by reducing the uncertainty in the high-resolution atmospheric input data. To this aim, we systematically explore the potential of different predictor strategies for improving the performance of regression-based downscaling approaches. The investigated local-scale target variables are precipitation, air temperature, wind speed, relative humidity and global radiation, all at a daily time scale. Observations of these target variables are assessed from three sites in geo-environmentally and climatologically very distinct settings, all within highly complex topography and in the close proximity to mountain glaciers: (1) the Vernagtbach station in the Northern European Alps (VERNAGT), (2) the Artesonraju measuring site in the tropical South American Andes (ARTESON), and (3) the Brewster measuring site in the Southern Alps of New Zealand (BREWSTER). As the large-scale predictors, ERA interim reanalysis data are used. In the applied downscaling model training and evaluation procedures, particular emphasis is put on appropriately accounting for the pitfalls of limited and/or patchy observation records that are usually the only (if at all) available data from the glacierized mountain sites. Generalized linear models and beta regression are investigated as alternatives to ordinary least squares regression for the non-Gaussian target variables. 
By analyzing results for the three different sites, five predictands, and different times of the year, we look for systematic improvements in the downscaling models' skill specifically obtained by (i) using predictor data at the optimum scale rather than the minimum scale of the reanalysis data, (ii) identifying the optimum predictor allocation in the vertical, and (iii) considering multiple (variable, level and/or grid point) predictor options combined with state-of-the-art empirical feature selection tools. First results show that, for air temperature in particular, downscaling models based on direct predictor selection show skill comparable to models based on multiple predictors. For all other target variables, however, multiple-predictor approaches can considerably outperform models based on single predictors. Including multiple variable types emerges as the most promising predictor option (in particular for wind speed at all sites), even if the same predictor set is used across the different cases.
Ahlström, Anders; Raupach, Michael R; Schurgers, Guy; Smith, Benjamin; Arneth, Almut; Jung, Martin; Reichstein, Markus; Canadell, Josep G; Friedlingstein, Pierre; Jain, Atul K; Kato, Etsushi; Poulter, Benjamin; Sitch, Stephen; Stocker, Benjamin D; Viovy, Nicolas; Wang, Ying Ping; Wiltshire, Andy; Zaehle, Sönke; Zeng, Ning
2015-05-22
The growth rate of atmospheric carbon dioxide (CO2) concentrations since industrialization is characterized by large interannual variability, mostly resulting from variability in CO2 uptake by terrestrial ecosystems (typically termed carbon sink). However, the contributions of regional ecosystems to that variability are not well known. Using an ensemble of ecosystem and land-surface models and an empirical observation-based product of global gross primary production, we show that the mean sink, trend, and interannual variability in CO2 uptake by terrestrial ecosystems are dominated by distinct biogeographic regions. Whereas the mean sink is dominated by highly productive lands (mainly tropical forests), the trend and interannual variability of the sink are dominated by semi-arid ecosystems whose carbon balance is strongly associated with circulation-driven variations in both precipitation and temperature. Copyright © 2015, American Association for the Advancement of Science.
Operation ranges and dynamic capabilities of variable-speed pumped-storage hydropower
NASA Astrophysics Data System (ADS)
Mercier, Thomas; Olivier, Mathieu; Dejaeger, Emmanuel
2017-04-01
The development of renewable and intermittent power generation creates incentives for the development of both energy storage solutions and more flexible power generation assets. Pumped-storage hydropower (PSH) is the most established and mature energy storage technology, but recent developments in power electronics have created renewed interest by providing PSH units with a variable-speed feature, thereby increasing their flexibility. This paper reviews technical considerations related to variable-speed PSH in connection with the provision of primary frequency control, also referred to as frequency containment reserves (FCRs). Based on the detailed characteristics of a scale model pump-turbine, the variable-speed operation ranges in pump and turbine modes are precisely assessed and the implications for the provision of FCRs are highlighted. Modelling and control for power system studies are discussed for both fixed- and variable-speed machines, and simulation results are provided to illustrate the high dynamic capabilities of variable-speed PSH.
Behavioral Dynamics in Swimming: The Appropriate Use of Inertial Measurement Units.
Guignard, Brice; Rouard, Annie; Chollet, Didier; Seifert, Ludovic
2017-01-01
Motor control in swimming can be analyzed using low- and high-order parameters of behavior. Low-order parameters generally refer to the superficial aspects of movement (i.e., position, velocity, acceleration), whereas high-order parameters capture the dynamics of movement coordination. To assess human aquatic behavior, both types have usually been investigated with multi-camera systems, as they offer high three-dimensional spatial accuracy. Research in ecological dynamics has shown that movement system variability can be viewed as a functional property of skilled performers, helping them adapt their movements to the surrounding constraints. Yet to determine the variability of swimming behavior, a large number of stroke cycles (i.e., inter-cyclic variability) has to be analyzed, which is impossible with camera-based systems as they simply record behaviors over restricted volumes of water. Inertial measurement units (IMUs) were designed to explore the parameters and variability of coordination dynamics. These light, transportable and easy-to-use devices offer new perspectives for swimming research because they can record low- to high-order behavioral parameters over long periods. We first review how the low-order behavioral parameters (i.e., speed, stroke length, stroke rate) of human aquatic locomotion and their variability can be assessed using IMUs. We then review the way high-order parameters are assessed and the adaptive role of movement and coordination variability in swimming. We give special focus to the circumstances in which determining the variability between stroke cycles provides insight into how behavior oscillates between stable and flexible states to functionally respond to environmental and task constraints. The last section of the review is dedicated to practical recommendations for coaches on using IMUs to monitor swimming performance. We therefore highlight the need for rigor in dealing with these sensors appropriately in water. 
We explain the fundamental and mandatory steps to follow for accurate results with IMUs, from data acquisition (e.g., waterproofing procedures) to interpretation (e.g., drift correction).
NASA Astrophysics Data System (ADS)
Liu, Shengqiang; Zhao, Juan; Huang, Jiang; Yu, Junsheng
2016-12-01
Organic light-emitting devices (OLEDs) with three different exciton adjusting interlayers (EALs), inserted between two complementary blue and yellow emitting layers, are fabricated to demonstrate the relationship between the EAL and device performance. The results show that variations in the type and thickness of the EAL provide different degrees of control over exciton adjustment and distribution. However, we also find that the reverse Dexter transfer of triplet excitons from the light-emitting layer to the EAL is an energy loss path, which detrimentally affects electroluminescent (EL) spectral performance and device efficiency in the different EAL-based devices. Based on exciton distribution and integration, an estimation of exciton reverse transfer is developed through a triplet energy level barrier to simulate the exciton behavior. The estimation results also demonstrate the relationship between the EAL and device efficiency via a parameter for the exciton reverse transfer probability. This estimation of exciton reverse transfer discloses the crucial role of the EALs in interlayer-based OLEDs for achieving variable EL spectra and high efficiency.
Validation of Ocean Color Remote Sensing Reflectance Using Autonomous Floats
NASA Technical Reports Server (NTRS)
Gerbi, Gregory P.; Boss, Emanuel; Werdell, P. Jeremy; Proctor, Christopher W.; Haentjens, Nils; Lewis, Marlon R.; Brown, Keith; Sorrentino, Diego; Zaneveld, J. Ronald V.; Barnard, Andrew H.;
2016-01-01
The use of autonomous profiling floats for observational estimates of radiometric quantities in the ocean is explored, and the use of this platform for validation of satellite-based estimates of remote sensing reflectance in the ocean is examined. This effort includes comparing quantities estimated from float and satellite data at nominal wavelengths of 412, 443, 488, and 555 nm, and examining sources and magnitudes of uncertainty in the float estimates. This study had 65 occurrences of coincident high-quality observations from floats and MODIS Aqua and 15 occurrences of coincident high-quality observations from floats and the Visible Infrared Imaging Radiometer Suite (VIIRS). The float estimates of remote sensing reflectance are similar to the satellite estimates, with disagreement of a few percent in most wavelengths. The variability of the float-satellite comparisons is similar to the variability of in situ-satellite comparisons using a validation dataset from the Marine Optical Buoy (MOBY). This, combined with the agreement of float-based and satellite-based quantities, suggests that floats are likely a good platform for validation of satellite-based estimates of remote sensing reflectance.
A discriminant function model for admission at undergraduate university level
NASA Astrophysics Data System (ADS)
Ali, Hamdi F.; Charbaji, Abdulrazzak; Hajj, Nada Kassim
1992-09-01
The study is aimed at predicting objective criteria based on a statistically tested model for admitting undergraduate students to Beirut University College. The University is faced with a dual problem of having to select only a fraction of an increasing number of applicants, and of trying to minimize the number of students placed on academic probation (currently 36 percent of new admissions). Out of 659 new students, a sample of 272 students (45 percent) were selected; these were all the students on the Dean's list and on academic probation. With academic performance as the dependent variable, the model included ten independent variables and their interactions. These variables included the type of high school, the language of instruction in high school, recommendations, sex, academic average in high school, score on the English Entrance Examination, the major in high school, and whether the major was originally applied for by the student. Discriminant analysis was used to evaluate the relative weight of the independent variables, and from the analysis three equations were developed, one for each academic division in the College. The predictive power of these equations was tested by using them to classify students not in the selected sample into successful and unsuccessful ones. Applicability of the model to other institutions of higher learning is discussed.
Optical Variability and Classification of High Redshift (3.5 < z < 5.5) Quasars on SDSS Stripe 82
NASA Astrophysics Data System (ADS)
AlSayyad, Yusra; McGreer, Ian D.; Fan, Xiaohui; Connolly, Andrew J.; Ivezic, Zeljko; Becker, Andrew C.
2015-01-01
Recent studies have shown promise in combining optical colors with variability to efficiently select and estimate the redshifts of low- to mid-redshift quasars in upcoming ground-based time-domain surveys. We extend these studies to fainter and less abundant high-redshift quasars using light curves from 235 sq. deg. and 10 years of Stripe 82 imaging reprocessed with the prototype LSST data management stack. Sources are detected on the i-band co-adds (5σ: i ~ 24) but measured on the single-epoch (ugriz) images, generating complete and unbiased lightcurves for sources fainter than the single-epoch detection threshold. Using these forced photometry lightcurves, we explore optical variability characteristics of high redshift quasars and validate classification methods with particular attention to the low signal limit. In this low SNR limit, we quantify the degradation of the uncertainties and biases on variability parameters using simulated light curves. Completeness/efficiency and redshift accuracy are verified with new spectroscopic observations on the MMT and APO 3.5m. These preliminary results are part of a survey to measure the z~4 luminosity function for quasars (i < 23) on Stripe 82 and to validate purely photometric classification techniques for high redshift quasars in LSST.
1981-12-01
addressing the "at-sea equivalence issue." * Request that the radar simulator-based training schools stress the importance of multiple navigational...potentially high cost simulator/training program variables, namely: * Target maneuverability: independent versus canned * Color visual scene: color versus... high wind conditions (40 knots). It appears that this may be due to insufficient understanding of: (1) responsiveness of the vessel to various rudder
GWASinlps: Nonlocal prior based iterative SNP selection tool for genome-wide association studies.
Sanyal, Nilotpal; Lo, Min-Tzu; Kauppi, Karolina; Djurovic, Srdjan; Andreassen, Ole A; Johnson, Valen E; Chen, Chi-Hua
2018-06-19
Multiple marker analysis of genome-wide association study (GWAS) data has gained ample attention in recent years. However, because of the ultra-high dimensionality of GWAS data, such analysis is challenging. Frequently used penalized regression methods often lead to a large number of false positives, whereas Bayesian methods are computationally very expensive. Motivated to ameliorate these issues simultaneously, we consider the novel approach of using nonlocal priors in an iterative variable selection framework. We develop a variable selection method, named GWASinlps (iterative nonlocal prior based selection for GWAS), that combines, in an iterative variable selection framework, the computational efficiency of the screen-and-select approach based on association learning with the parsimonious uncertainty quantification provided by the use of nonlocal priors. The hallmark of our method is the introduction of a 'structured screen-and-select' strategy that applies hierarchical screening, based not only on response-predictor associations but also on response-response associations, and concatenates variable selection within that hierarchy. Extensive simulation studies with SNPs having realistic linkage disequilibrium structures demonstrate the advantages of our computationally efficient method compared to several frequentist and Bayesian variable selection methods, in terms of true positive rate, false discovery rate, mean squared error, and effect size estimation error. Further, we provide an empirical power analysis useful for study design. Finally, a real GWAS data application was considered with human height as the phenotype. An R package implementing the GWASinlps method is available at https://cran.r-project.org/web/packages/GWASinlps/index.html. Supplementary data are available at Bioinformatics online.
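The screen-and-select loop can be illustrated with a simplified numpy sketch. This is a toy stand-in, not the authors' method: it ranks SNPs by marginal association with the current residual and keeps the single best one per round, whereas the actual GWASinlps selection step uses a nonlocal-prior model search over the screened set; the simulated data and all names are hypothetical.

```python
import numpy as np

def iterative_screen_select(X, y, n_iter=3):
    """Toy screen-and-select loop: each round picks the predictor most
    associated with the current residual, then regresses the selected
    set out of y.  (GWASinlps replaces the 'select' step with a
    nonlocal-prior model search over the screened SNPs.)"""
    resid = y - y.mean()
    selected = []
    for _ in range(n_iter):
        corr = np.abs(X.T @ resid)                     # screening scores
        order = [j for j in np.argsort(corr)[::-1] if j not in selected]
        selected.append(order[0])                      # best screened SNP
        Xs = X[:, selected]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        resid = y - Xs @ beta                          # update residual
    return selected

# Hypothetical genotypes: 200 subjects, 1000 SNPs, 3 causal ones.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 1000))
y = 2 * X[:, 10] + 2 * X[:, 200] + 2 * X[:, 500] + rng.normal(size=200)
print(sorted(iterative_screen_select(X, y)))           # the causal SNPs
```

Regressing out each round's selection is what makes the procedure iterative: predictors masked by an already-found signal become visible in later rounds.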
Protein construct storage: Bayesian variable selection and prediction with mixtures.
Clyde, M A; Parmigiani, G
1998-07-01
Determining optimal conditions for protein storage while maintaining a high level of protein activity is an important question in pharmaceutical research. A designed experiment based on a space-filling design was conducted to understand the effects of factors affecting protein storage and to establish optimal storage conditions. Different model-selection strategies to identify important factors may lead to very different answers about optimal conditions. Uncertainty about which factors are important, or model uncertainty, can be a critical issue in decision-making. We use Bayesian variable selection methods for linear models to identify important variables in the protein storage data, while accounting for model uncertainty. We also use the Bayesian framework to build predictions based on a large family of models, rather than an individual model, and to evaluate the probability that certain candidate storage conditions are optimal.
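Prediction averaged over a family of models, rather than a single selected model, can be sketched as follows. This is a minimal illustration using BIC-based approximate posterior weights (a common shortcut; the paper works with explicit mixture priors), and the simulated "storage factors" are hypothetical.

```python
import numpy as np
from itertools import combinations

def bma_predict(X, y, X_new):
    """Average predictions over every subset of predictors, weighting
    models by exp(-BIC/2) (an approximate posterior model probability)."""
    n, p = X.shape
    bics, preds = [], []
    for k in range(p + 1):
        for cols in combinations(range(p), k):
            Z = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            rss = np.sum((y - Z @ beta) ** 2)
            bics.append(n * np.log(rss / n) + Z.shape[1] * np.log(n))
            Zn = np.column_stack([np.ones(len(X_new))] + [X_new[:, j] for j in cols])
            preds.append(Zn @ beta)
    w = np.exp(-0.5 * (np.array(bics) - np.min(bics)))
    w /= w.sum()                 # posterior model weights
    return w @ np.array(preds)   # model-averaged prediction

# Hypothetical storage experiment: three factors, only the first matters.
rng = np.random.default_rng(2)
X = rng.normal(size=(40, 3))
y = 1.0 + 2.0 * X[:, 0] + rng.normal(0.0, 0.3, 40)
pred = bma_predict(X, y, np.array([[1.0, 0.0, 0.0]]))
print(pred)   # close to 1 + 2*1 = 3
```

The point of the averaging is exactly the one the abstract makes: no single model is trusted, so model uncertainty is carried into the prediction.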
Temperature, routine activities, and domestic violence: a reanalysis.
Rotton, J; Cohn, E G
2001-04-01
It was hypothesized that base rate differences in the number of complaints made during daylight and nighttime hours were responsible for a previously reported, nonlinear relationship between temperature and domestic violence. This hypothesis was tested by subjecting calls for service in 1987 and 1988 in Minneapolis to moderator-variable regression analyses with controls for time of day, day of the week, season, and their interactions, as well as linear trend, major holidays, public school closings, the first day of the month, and other weather variables. Temporal variables explained 75% of the variance in calls for service. As hypothesized, the base rate artifact was responsible for an apparent downturn in violence at high temperatures: fewer complaints were received during afternoon hours because they happen to be the warmest time of the day. The results were interpreted in terms of routine activity theory.
ERIC Educational Resources Information Center
Owusu, Andrew; Hart, Peter; Oliver, Brittney; Kang, Minsoo
2011-01-01
Background: School-based bullying, a global challenge, negatively impacts the health and development of both victims and perpetrators. This study examined the relationship between bullying victimization and selected psychological variables among senior high school (SHS) students in Ghana, West Africa. Methods: This study utilized data from the…
A Model for Investigating Predictive Validity at Highly Selective Institutions.
ERIC Educational Resources Information Center
Gross, Alan L.; And Others
A statistical model for investigating predictive validity at highly selective institutions is described. When the selection ratio is small, one must typically deal with a data set containing relatively large amounts of missing data on both criterion and predictor variables. Standard statistical approaches are based on the strong assumption that…
Determination of the High School Students' Attitudes towards Their Teachers
ERIC Educational Resources Information Center
Gelisli, Yücel; Baidrahmanov, Dossym Kh.; Beisenbaeva, Lyazzat; Sultanbek, Malik
2017-01-01
In the current study, the aim is to determine the high school students' attitudes towards their teachers depending on some variables and the relationship between their attitudes and achievements. Thus, the study was designed according to relational survey model. The population of the study, which was specified based on the purposive sampling…
Test Anxiety and High-Stakes Test Performance between School Settings: Implications for Educators
ERIC Educational Resources Information Center
von der Embse, Nathaniel; Hasson, Ramzi
2012-01-01
With the enactment of standards-based accountability in education, high-stakes tests have become the dominant method for measuring school effectiveness and student achievement. Schools and educators are under increasing pressure to meet achievement standards. However, there are variables which may interfere with the authentic measurement of…
The Social Competence of Highly Gifted Math and Science Adolescents
ERIC Educational Resources Information Center
Lee, Seon-Young; Olszewski-Kubilius, Paula; Thomson, Dana
2012-01-01
Involving 740 highly gifted math and science students from two different countries, Korea and the United States, this study examined how these gifted adolescents perceived their interpersonal ability and peer relationships and whether there were differences between these two groups by demographic variables. Based on the survey data, results showed…
ERIC Educational Resources Information Center
Pummill, Bret L.; Edson, Jerry C.; Loftin, Michelle M.; Robinson, Matthew A.
2011-01-01
This report describes a problem based learning project focusing on superintendents' knowledge of the characteristics of high quality teachers. Current research findings offer evidence teacher quality is an important school variable related to student achievement. School district leaders are faced with the problem of identifying the characteristics…
Reliability Coupled Sensitivity Based Design Approach for Gravity Retaining Walls
NASA Astrophysics Data System (ADS)
Guha Ray, A.; Baidya, D. K.
2012-09-01
Sensitivity analysis involving different random variables and different potential failure modes of a gravity retaining wall focuses on the fact that high sensitivity of a particular variable for a particular mode of failure does not necessarily imply a remarkable contribution to the overall failure probability. The present paper aims at identifying a probabilistic risk factor (Rf) for each random variable based on the combined effects of the failure probability (Pf) of each mode of failure of a gravity retaining wall and the sensitivity of each of the random variables to these failure modes. Pf is calculated by Monte Carlo simulation, and sensitivity analysis of each random variable is carried out by F-test analysis. The structure, redesigned by modifying the original random variables with the risk factors, is safe against all the variations of the random variables. It is observed that Rf for the friction angle of the backfill soil (φ1) increases and that for the cohesion of the foundation soil (c2) decreases with an increase in the variation of φ1, while Rf for the unit weights (γ1 and γ2) of both soils and for the friction angle of the foundation soil (φ2) remains almost constant under variation of the soil properties. The results compared well with some existing deterministic and probabilistic methods and were found to be cost-effective. It is seen that if the variation of φ1 remains within 5%, a significant reduction in cross-sectional area can be achieved, but if the variation is more than 7-8%, the structure needs to be modified. Finally, design guidelines for different wall dimensions, based on the present approach, are proposed.
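The failure-probability step can be sketched with a Monte Carlo simulation of a single failure mode. The wall geometry, soil statistics and sliding-only check below are assumptions for illustration, not the paper's values, and the F-test sensitivity analysis and risk factors are omitted.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Assumed wall/soil statistics (illustrative, not the paper's values):
phi1 = rng.normal(34.0, 2.0, n)     # backfill friction angle (deg)
gamma1 = rng.normal(18.0, 0.9, n)   # backfill unit weight (kN/m^3)
W = 150.0                           # wall weight per metre run (kN/m)
H = 5.0                             # retained height (m)
mu = 0.5                            # base friction coefficient

# Rankine active thrust and factor of safety against sliding.
Ka = np.tan(np.radians(45.0 - phi1 / 2.0)) ** 2
Pa = 0.5 * Ka * gamma1 * H ** 2
FS = mu * W / Pa

Pf = np.mean(FS < 1.0)              # Monte Carlo failure probability
print(Pf)                           # a few percent for these inputs
```

The full approach repeats such a calculation for every failure mode (sliding, overturning, bearing) and combines the mode-wise Pf values with the sensitivities into one risk factor per variable.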
NASA Astrophysics Data System (ADS)
Pérez-Ruzafa, Angel; Quispe, Jhoni I.; Umgiesser, Georg; Ghezzo, Michol; De Pascalis, Francesca; Marcos, Concepción
2014-05-01
Fish assemblages in coastal lagoons are constituted by species with different guilds and life histories, including estuarine residents but also a high percentage of marine stragglers and marine migrants. Previous studies showed that different ichthyoplankton assemblages can be identified inside a lagoon, depending on hydrological conditions, but at the same time a high spatial and temporal variability has been observed. The models proposed to explain the configuration of lagoon assemblages, based on probabilities of colonization from the open sea, involve an important stochastic component and introduce some randomness that could lead to that high spatial and temporal variability at short- and long-term scales. In this work we analyze the relationship between ichthyoplankton assemblages in the Mar Menor lagoon and the adjacent open sea in the framework of the hydrodynamics of the lagoon and the connectivity between sampling stations using hydrodynamic models. The results show a complex interaction between the different factors that leads to a highly variable system with high accumulated richness and diversity of species, and a large proportion of occasional visitors and stragglers, suggesting that competitive lottery mechanisms can play an important role in the maintenance of the communities of coastal lagoons, where environmental variability occurs in a system with strong differences in colonization rates and connectivity, not only with the open sea but also between locations within the lagoon.
NASA Astrophysics Data System (ADS)
Moreno de Castro, Maria; Schartau, Markus; Wirtz, Kai
2017-04-01
Mesocosm experiments on phytoplankton dynamics under high CO2 concentrations mimic the response of marine primary producers to future ocean acidification. However, potential acidification effects can be hindered by the high standard deviation typically found in the replicates of the same CO2 treatment level. In experiments with multiple unresolved factors and a sub-optimal number of replicates, post-processing statistical inference tools might fail to detect an effect that is present. We propose that in such cases, data-based model analyses might be suitable tools to unearth potential responses to the treatment and identify the uncertainties that could produce the observed variability. As test cases, we used data from two independent mesocosm experiments. Both experiments showed high standard deviations and, according to statistical inference tools, biomass appeared insensitive to changing CO2 conditions. Conversely, our simulations showed earlier and more intense phytoplankton blooms in modeled replicates at high CO2 concentrations and suggested that uncertainties in average cell size, phytoplankton biomass losses, and initial nutrient concentration potentially outweigh acidification effects by triggering strong variability during the bloom phase. We also estimated the thresholds below which uncertainties do not escalate to high variability. This information might help in designing future mesocosm experiments and interpreting controversial results on the effect of acidification or other pressures on ecosystem functions.
NASA Astrophysics Data System (ADS)
Yi, Jin; Li, Xinyu; Xiao, Mi; Xu, Junnan; Zhang, Lin
2017-01-01
Engineering design often involves different types of simulation, which results in expensive computational costs. Variable-fidelity approximation-based design optimization approaches can realize effective simulation and efficient optimization of the design space using approximation models with different levels of fidelity, and they have been widely used in different fields. As the foundation of variable-fidelity approximation models, the selection of their sample points, called nested designs, is essential. In this article a novel nested maximin Latin hypercube design is constructed based on successive local enumeration and a modified novel global harmony search algorithm. In the proposed nested designs, successive local enumeration is employed to select sample points for the low-fidelity model, whereas the modified novel global harmony search algorithm is employed to select sample points for the high-fidelity model. A comparative study with multiple criteria and an engineering application are employed to verify the efficiency of the proposed nested designs approach.
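The idea of a nested design, where the expensive high-fidelity sample points form a well-spread subset of the cheap low-fidelity ones, can be sketched as follows. The plain Latin hypercube and greedy maximin subset below are simple stand-ins for the successive local enumeration and harmony-search optimization used in the article.

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """Random Latin hypercube in [0,1)^d: one point per stratum per axis."""
    samples = np.empty((n, d))
    for j in range(d):
        samples[:, j] = (rng.permutation(n) + rng.random(n)) / n
    return samples

def maximin_subset(points, m, rng):
    """Greedily pick m points that maximize the minimum distance to the
    already-chosen set (a crude surrogate for maximin optimization)."""
    idx = [int(rng.integers(len(points)))]
    while len(idx) < m:
        dist = np.min(np.linalg.norm(points[:, None] - points[idx][None],
                                     axis=2), axis=1)
        dist[idx] = -1.0                     # never re-pick a chosen point
        idx.append(int(np.argmax(dist)))
    return np.array(idx)

rng = np.random.default_rng(7)
lo = latin_hypercube(40, 2, rng)             # low-fidelity design
hi_idx = maximin_subset(lo, 10, rng)         # nested high-fidelity subset
hi = lo[hi_idx]
```

Because `hi` is a subset of `lo`, every high-fidelity run can be paired with a low-fidelity run at the same input, which is what makes fitting the fidelity correction cheap.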
Displacement Based Multilevel Structural Optimization
NASA Technical Reports Server (NTRS)
Sobieszezanski-Sobieski, J.; Striz, A. G.
1996-01-01
In the complex environment of true multidisciplinary design optimization (MDO), efficiency is one of the most desirable attributes of any approach. In the present research, a new and highly efficient methodology for the MDO subset of structural optimization is proposed and detailed, i.e., for the weight minimization of a given structure under size, strength, and displacement constraints. Specifically, finite element based multilevel optimization of structures is performed. In the system level optimization, the design variables are the coefficients of assumed polynomially based global displacement functions, and the load unbalance resulting from the solution of the global stiffness equations is minimized. In the subsystems level optimizations, the weight of each element is minimized under the action of stress constraints, with the cross sectional dimensions as design variables. The approach is expected to prove very efficient since the design task is broken down into a large number of small and efficient subtasks, each with a small number of variables, which are amenable to parallel computing.
Randler, Christoph
2017-01-01
Chronotype, or morningness-eveningness (M/E), is an individual trait with a biological basis. In this study, I analysed the relationship between M/E and nationwide available data, such as economic variables, school achievement, intelligence and conscientiousness, which is a personality trait. These variables were chosen because, first, they are linked at the individual level with circadian preference, and, second, these associations have been found in meta-analyses, which lends the findings high plausibility. In addition, economic status has also been proposed to be related to M/E. More highly developed countries showed lower morningness, based both on the ranking of countries and on the HDI value. Similarly, GNI was related to morningness, while higher intelligence and performance in PISA were related to eveningness. Conscientiousness was related to morningness, although the results narrowly missed the significance level. When using IQ as a control variable in partial correlations, the relationship between GNI and morningness disappeared, as did the correlation between eveningness and PISA results.
Ciffroy, Philippe; Charlatchka, Rayna; Ferreira, Daniel; Marang, Laura
2013-07-01
The biotic ligand model (BLM) theoretically enables the derivation of environmental quality standards that are based on true bioavailable fractions of metals. Several physicochemical variables (especially pH, major cations, dissolved organic carbon, and dissolved metal concentrations) must, however, be assigned to run the BLM, but they are highly variable in time and space in natural systems. This article describes probabilistic approaches for integrating such variability during the derivation of risk indexes. To describe each variable using a probability density function (PDF), several methods were combined to 1) treat censored data (i.e., data below the limit of detection), 2) incorporate the uncertainty of the solid-to-liquid partitioning of metals, and 3) detect outliers. From a probabilistic perspective, 2 alternative approaches that are based on log-normal and Γ distributions were tested to estimate the probability of the predicted environmental concentration (PEC) exceeding the predicted non-effect concentration (PNEC), i.e., p(PEC/PNEC>1). The probabilistic approach was tested on 4 real-case studies based on Cu-related data collected from stations on the Loire and Moselle rivers. The approach described in this article is based on BLM tools that are freely available for end-users (i.e., the Bio-Met software) and on accessible statistical data treatments. This approach could be used by stakeholders who are involved in risk assessments of metals for improving site-specific studies. Copyright © 2013 SETAC.
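Once PDFs are assigned to PEC and PNEC, the exceedance probability p(PEC/PNEC > 1) is straightforward to estimate by simulation. A minimal sketch, assuming hypothetical lognormal parameters (in the paper these distributions are fitted to Cu monitoring data and Bio-Met BLM outputs):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical lognormal summaries (µg/L) of exposure and no-effect level;
# in practice these PDFs are fitted to monitoring data and BLM outputs.
pec = rng.lognormal(mean=np.log(1.2), sigma=0.6, size=n)
pnec = rng.lognormal(mean=np.log(4.0), sigma=0.4, size=n)

risk = np.mean(pec / pnec > 1.0)    # p(PEC/PNEC > 1)
print(risk)                         # ~0.05 for these parameters
```

For two independent lognormals this probability also has the closed form Φ((μ_PEC − μ_PNEC)/√(σ_PEC² + σ_PNEC²)), about 0.05 here, which is a useful cross-check on the simulation.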
Relationship of suicide rates with climate and economic variables in Europe during 2000-2012.
Fountoulakis, Konstantinos N; Chatzikosta, Isaia; Pastiadis, Konstantinos; Zanis, Prodromos; Kawohl, Wolfram; Kerkhof, Ad J F M; Navickas, Alvydas; Höschl, Cyril; Lecic-Tosevski, Dusica; Sorel, Eliot; Rancans, Elmars; Palova, Eva; Juckel, Georg; Isacsson, Goran; Jagodic, Helena Korosec; Botezat-Antonescu, Ileana; Rybakowski, Janusz; Azorin, Jean Michel; Cookson, John; Waddington, John; Pregelj, Peter; Demyttenaere, Koen; Hranov, Luchezar G; Stevovic, Lidija Injac; Pezawas, Lucas; Adida, Marc; Figuera, Maria Luisa; Jakovljević, Miro; Vichi, Monica; Perugi, Giulio; Andreassen, Ole A; Vukovic, Olivera; Mavrogiorgou, Paraskevi; Varnik, Peeter; Dome, Peter; Winkler, Petr; Salokangas, Raimo K R; From, Tiina; Danileviciute, Vita; Gonda, Xenia; Rihmer, Zoltan; Forsman, Jonas; Grady, Anne; Hyphantis, Thomas; Dieset, Ingrid; Soendergaard, Susan; Pompili, Maurizio; Bech, Per
2016-01-01
It is well known that suicide rates vary considerably among European countries and the reasons for this are unknown, although several theories have been proposed. The effect of economic variables has been extensively studied, but not that of climate. Data from 29 European countries covering the years 2000-2012 and concerning male and female standardized suicide rates (according to the WHO), economic variables (according to the World Bank) and climate variables were gathered. The statistical analysis included cluster and principal component analysis and categorical regression. The derived models explained 62.4 % of the variability of male suicide rates. Economic variables alone explained 26.9 % and climate variables 37.6 %. For females, the respective figures were 41.7, 11.5 and 28.1 %. Male suicides correlated with high unemployment rate in the frame of high growth rate, high inflation and low GDP per capita, while female suicides correlated negatively with inflation. Both male and female suicides correlated with low temperature. The current study reports that the climatic effect (cold climate) is stronger than the economic one, but both are present. It seems that in Europe suicidality follows the climate/temperature cline, which interestingly runs not from south to north but from south to north-east. This raises concerns that climate change could lead to an increase in suicide rates. The current study is essentially the first successful attempt to explain the differences across countries in Europe; however, it is an observational analysis based on aggregate data and thus lacks control for confounders.
A global perspective on Glacial- to Interglacial variability change
NASA Astrophysics Data System (ADS)
Rehfeld, Kira; Münch, Thomas; Ho, Sze Ling; Laepple, Thomas
2017-04-01
Changes in climate variability are more important for society than changes in the mean state alone. While we will be facing a large-scale shift of the mean climate in the future, its implications for climate variability are not well constrained. Here we quantify changes in temperature variability as climate shifted from the Last Glacial cold to the Holocene warm period. Greenland ice core oxygen isotope records provide evidence of this climatic shift, and are used as reference datasets in many palaeoclimate studies worldwide. A striking feature in these records is pronounced millennial variability in the Glacial, and a distinct reduction in variance in the Holocene. We present quantitative estimates of the change in variability on 500- to 1500-year timescales based on a global compilation of high-resolution proxy records for temperature which span both the Glacial and the Holocene. The estimates are derived based on power spectral analysis, and corrected using estimates of the proxy signal-to-noise ratios. We show that, on a global scale, variability at the Glacial maximum is five times higher than during the Holocene, with a possible range of 3-10 times. The spatial pattern of the variability change is latitude-dependent. While the tropics show no changes in variability, mid-latitude changes are higher. A slight overall reduction in variability in the centennial to millennial range is found in Antarctica. The variability decrease in the Greenland ice core oxygen isotope records is larger than in any other proxy dataset. These results therefore contradict the view of a globally quiescent Holocene following the unstable Glacial, and imply that, in terms of centennial to millennial temperature variability, the two states may be more similar than previously thought.
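The variability estimates rest on integrating power spectra over the 500- to 1500-year band. The sketch below, with synthetic "proxy" series and assumed amplitudes, shows how such a band-limited variance ratio is computed; the signal-to-noise correction applied in the study is omitted.

```python
import numpy as np

def band_variance(x, dt, t_min=500.0, t_max=1500.0):
    """Variance of x contributed by timescales between t_min and t_max,
    read off a one-sided periodogram."""
    x = np.asarray(x) - np.mean(x)
    freqs = np.fft.rfftfreq(len(x), d=dt)
    power = 2.0 * np.abs(np.fft.rfft(x)) ** 2 / len(x) ** 2
    band = (freqs >= 1.0 / t_max) & (freqs <= 1.0 / t_min)
    return power[band].sum()

# Synthetic 50-yr-resolution records over 20 kyr: a strong millennial
# cycle in the 'Glacial' segment, a weak one in the 'Holocene'.
rng = np.random.default_rng(5)
dt = 50.0
t = np.arange(0.0, 20_000.0, dt)
glacial = 2.0 * np.sin(2 * np.pi * t / 1000.0) + rng.normal(0, 0.5, t.size)
holocene = 0.9 * np.sin(2 * np.pi * t / 1000.0) + rng.normal(0, 0.5, t.size)

ratio = band_variance(glacial, dt) / band_variance(holocene, dt)
print(ratio)   # the amplitudes were chosen to give a ratio of a few
```

For real proxies, the raw ratio must additionally be corrected for proxy noise, since uncorrelated noise inflates band variance in both segments and biases the ratio toward one.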
Heritability of mandibular cephalometric variables in twins with completed craniofacial growth.
Šidlauskas, Mantas; Šalomskienė, Loreta; Andriuškevičiūtė, Irena; Šidlauskienė, Monika; Labanauskas, Žygimantas; Vasiliauskas, Arūnas; Kupčinskas, Limas; Juzėnas, Simonas; Šidlauskas, Antanas
2016-10-01
To determine the genetic and environmental impact on mandibular morphology using lateral cephalometric analysis of twins with completed mandibular growth and deoxyribonucleic acid (DNA) based zygosity determination. The 39 cephalometric variables of 141 same-gender adult twin pairs were analysed. Zygosity was determined using 15 specific DNA markers, and the cervical vertebral maturation method was used to assess completion of mandibular growth. A genetic analysis was performed using maximum likelihood genetic structural equation modelling (GSEM). The genetic heritability estimates of angular variables describing horizontal mandibular position relative to the cranial base and maxilla were considerably higher than those describing vertical position. The mandibular skeletal cephalometric variables also showed high heritability estimates, with angular measurements considerably higher than linear ones. The results of this study indicate that the angular measurements representing mandibular skeletal morphology (mandibular form) have greater genetic determination than the linear measurements (mandibular size). The shape and sagittal position of the mandible are under stronger genetic control than are its size and vertical relationship to the cranial base. © The Author 2015. Published by Oxford University Press on behalf of the European Orthodontic Society. All rights reserved. For permissions, please email: journals.permissions@oup.com.
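A classical shortcut for twin-based heritability is Falconer's formula h² = 2(r_MZ − r_DZ), which only approximates the full ACE structural equation modelling the study actually uses. The simulated twin data below are hypothetical.

```python
import numpy as np

def falconer_h2(mz, dz):
    """Falconer's estimate h^2 = 2(r_MZ - r_DZ) from twin-pair arrays
    of shape (n_pairs, 2)."""
    r_mz = np.corrcoef(mz[:, 0], mz[:, 1])[0, 1]
    r_dz = np.corrcoef(dz[:, 0], dz[:, 1])[0, 1]
    return 2.0 * (r_mz - r_dz)

# Simulated pairs with true heritability 0.8: MZ twins share all additive
# genetic variance, DZ twins share half of it.
rng = np.random.default_rng(3)
h2, n = 0.8, 10_000
g = rng.normal(0, np.sqrt(h2), n)
mz = np.column_stack([g + rng.normal(0, np.sqrt(1 - h2), n)
                      for _ in range(2)])
gs = rng.normal(0, np.sqrt(h2 / 2), n)
dz = np.column_stack([gs + rng.normal(0, np.sqrt(h2 / 2), n)
                      + rng.normal(0, np.sqrt(1 - h2), n)
                      for _ in range(2)])
print(falconer_h2(mz, dz))   # close to 0.8
```

GSEM improves on this by decomposing variance into additive genetic, common and unique environmental components with likelihood-based confidence intervals rather than a point formula.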
NASA Astrophysics Data System (ADS)
Gerlitz, Lars; Gafurov, Abror; Apel, Heiko; Unger-Sayesteh, Katy; Vorogushyn, Sergiy; Merz, Bruno
2016-04-01
Statistical climate forecast applications typically utilize a small set of large-scale SST or climate indices, such as ENSO, PDO or AMO, as predictor variables. If the predictive skill of these large-scale modes is insufficient, specific predictor variables such as customized SST patterns are frequently included. Hence statistically based climate forecast models are either based on a fixed number of climate indices (and thus might not consider important predictor variables) or are highly site-specific and barely transferable to other regions. With the aim of developing an operational seasonal forecast model that is easily transferable to any region in the world, we present a generic data mining approach which automatically selects potential predictors from gridded SST observations and reanalysis-derived large-scale atmospheric circulation patterns and generates robust statistical relationships with posterior precipitation anomalies for user-selected target regions. Potential predictor variables are derived by means of a cellwise correlation analysis of precipitation anomalies with gridded global climate variables under consideration of varying lead times. Significantly correlated grid cells are subsequently aggregated into predictor regions by means of a variability-based cluster analysis. Finally, for every month and lead time, an individual random-forest-based forecast model is automatically calibrated and evaluated by means of the previously generated predictor variables. The model is exemplarily applied and evaluated for selected headwater catchments in Central and South Asia. Particularly for the winter and spring precipitation (which is associated with westerly disturbances in the entire target domain), the model shows solid results with correlation coefficients up to 0.7, although the variability of precipitation rates is highly underestimated.
Likewise, for the monsoonal precipitation amounts in the South Asian target areas a certain skill of the model could be detected. The skill of the model for the dry summer season in Central Asia and the transition seasons over South Asia is found to be low. A sensitivity analysis by means of well-known climate indices reveals the major large-scale controlling mechanisms for the seasonal precipitation climate of each target area. For the Central Asian target areas, both the El Nino Southern Oscillation (ENSO) and the North Atlantic Oscillation (NAO) are identified as important controlling factors for precipitation totals during the moist spring season. Drought conditions are found to be triggered by a warm ENSO phase in combination with a positive phase of the NAO. For the monsoonal summer precipitation amounts over Southern Asia, the model suggests a distinct negative response to El Nino events.
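The cellwise correlation screening that seeds the predictor selection can be sketched as follows; the synthetic SST field, the 2x2 "driver" region, the lead time and the correlation threshold are all assumptions for illustration, and the clustering and random-forest stages are omitted.

```python
import numpy as np

def screen_predictor_cells(sst, precip, lead, r_crit=0.5):
    """Correlate the target precipitation series with every SST grid cell
    at the given lead time; return the mask of strongly correlated cells."""
    y = precip[lead:]
    nt = len(y)
    r = np.empty(sst.shape[1:])
    for i in range(r.shape[0]):
        for j in range(r.shape[1]):
            r[i, j] = np.corrcoef(sst[:nt, i, j], y)[0, 1]
    return np.abs(r) >= r_crit

# Synthetic setup: a 2x2 SST region drives precipitation 3 months later.
rng = np.random.default_rng(4)
T, lead = 240, 3
driver = rng.normal(size=T)
sst = rng.normal(0.0, 0.5, size=(T, 10, 10))
sst[:, 4:6, 4:6] += driver[:, None, None]
precip = np.empty(T)
precip[:lead] = rng.normal(size=lead)
precip[lead:] = driver[:-lead] + 0.5 * rng.normal(size=T - lead)

mask = screen_predictor_cells(sst, precip, lead)
print(int(mask.sum()))   # 4: exactly the driving 2x2 block
```

In the full method, such masks are aggregated into coherent predictor regions by cluster analysis before being fed to the per-month forecast models.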
Sun, Ting; Xing, Fei; You, Zheng; Wang, Xiaochu; Li, Bin
2014-03-10
The star tracker is one of the most promising attitude measurement devices widely used in spacecraft for its high accuracy. High dynamic performance is becoming its major restriction, and requires immediate focus and promotion. A star image restoration approach based on the motion degradation model of variable angular velocity is proposed in this paper. This method can overcome the problem of energy dispersion and signal to noise ratio (SNR) decrease resulting from the smearing of the star spot, thus preventing failed extraction and decreased star centroid accuracy. Simulations and laboratory experiments are conducted to verify the proposed methods. The restoration results demonstrate that the described method can recover the star spot from a long motion trail to the shape of Gaussian distribution under the conditions of variable angular velocity and long exposure time. The energy of the star spot can be concentrated to ensure high SNR and high position accuracy. These features are crucial to the subsequent star extraction and the whole performance of the star tracker.
High dimensional model representation method for fuzzy structural dynamics
NASA Astrophysics Data System (ADS)
Adhikari, S.; Chowdhury, R.; Friswell, M. I.
2011-03-01
Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher-order variable correlations are weak, thereby permitting the input-output relationship to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters is used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions are used and the results are validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
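A minimal sketch of the core HDMR idea (not the authors' fuzzy α-cut implementation): a first-order cut-HDMR surrogate built from univariate sweeps around a reference point, exact for additive functions and needing only a polynomial number of function calls:

```python
import numpy as np

def first_order_hdmr(f, cut, grids):
    """First-order cut-HDMR surrogate of f around the cut point.

    f     : callable taking a 1-D array of length d
    cut   : reference (cut) point, shape (d,)
    grids : list of d 1-D arrays of sample values per variable
    The univariate components are f_i(x_i) = f(cut with x_i varied) - f0,
    interpolated linearly between grid nodes.
    """
    d = len(cut)
    f0 = f(cut)
    comps = []
    for i in range(d):
        vals = []
        for xi in grids[i]:
            p = cut.copy()
            p[i] = xi            # vary one variable, hold the rest at the cut
            vals.append(f(p) - f0)
        comps.append(np.array(vals))

    def surrogate(x):
        s = f0
        for i in range(d):
            s += np.interp(x[i], grids[i], comps[i])
        return s

    return surrogate

# For an additive function the first-order expansion is exact.
f = lambda x: 2.0 * x[0] + np.sin(x[1])
cut = np.array([0.0, 0.0])
grids = [np.linspace(-1, 1, 21), np.linspace(-1, 1, 21)]
g = first_order_hdmr(f, cut, grids)
```

Only 1 + 21·2 = 43 function evaluations are needed here, versus 21² = 441 for a full tensor grid; this is the polynomial-versus-exponential scaling the abstract refers to.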
Data re-arranging techniques leading to proper variable selections in high energy physics
NASA Astrophysics Data System (ADS)
Kůs, Václav; Bouř, Petr
2017-12-01
We introduce a new data-based approach to homogeneity testing and variable selection carried out in high energy physics experiments, where one of the basic tasks is to test the homogeneity of weighted samples, mainly Monte Carlo simulations (weighted) and real data measurements (unweighted). This technique, called ’data re-arranging’, enables variable selection by means of classical statistical homogeneity tests such as the Kolmogorov-Smirnov, Anderson-Darling, or Pearson’s chi-square divergence test. P-values of our variants of the homogeneity tests are investigated, and empirical verification on 46-dimensional high energy particle physics data sets is accomplished under the newly proposed (equiprobable) quantile binning. In particular, the homogeneity testing procedure is applied to re-arranged Monte Carlo samples and real DATA sets measured at the Tevatron particle accelerator at Fermilab in the DØ experiment, originating from top-antitop quark pair production in two decay channels (electron, muon) with 2, 3, or 4+ jets detected. Finally, the variable selections in the electron and muon channels induced by the re-arranging procedure for homogeneity testing are provided for the Tevatron top-antitop quark data sets.
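A hedged sketch of the kind of test described: Pearson's chi-square homogeneity test between an unweighted data sample and a weighted Monte Carlo sample under equiprobable quantile binning. The bin count, weights, and normalization rule below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np
from scipy.stats import chi2

def quantile_chi2_homogeneity(data, mc, mc_weights, n_bins=10):
    """Chi-square homogeneity test between an unweighted sample (data)
    and a weighted sample (mc), using equiprobable quantile bins
    derived from the unweighted sample."""
    edges = np.quantile(data, np.linspace(0, 1, n_bins + 1))
    # Widen the outer edges so every event falls inside a bin.
    edges[0] = min(data.min(), mc.min()) - 1.0
    edges[-1] = max(data.max(), mc.max()) + 1.0
    obs, _ = np.histogram(data, bins=edges)
    exp_w, _ = np.histogram(mc, bins=edges, weights=mc_weights)
    exp = exp_w * obs.sum() / exp_w.sum()   # normalise weighted counts
    stat = np.sum((obs - exp) ** 2 / exp)
    p_value = chi2.sf(stat, df=n_bins - 1)
    return stat, p_value

rng = np.random.default_rng(0)
data = rng.normal(size=2000)          # "real data" (unweighted)
mc = rng.normal(size=5000)            # "Monte Carlo" (weighted)
w = np.full(5000, 0.4)                # constant event weights, for illustration
stat, p = quantile_chi2_homogeneity(data, mc, w)
```

Because both samples are drawn from the same distribution here, the test should typically not reject homogeneity.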
Impact of Monsoon to Aquatic Productivity and Fish Landing at Pesawaran Regency Waters
NASA Astrophysics Data System (ADS)
Kunarso; Zainuri, Muhammad; Ario, Raden; Munandar, Bayu; Prayogi, Harmon
2018-02-01
Monsoon variability influences productivity processes in the ocean and elicits different responses in each body of water. Furthermore, variability in marine productivity affects fluctuations in fisheries resources. This research was conducted using a descriptive method to investigate the consequences of monsoon variability for aquatic productivity, sea surface temperature (SST), fish catches, and fish season periods in the waters of Pesawaran Regency, Lampung. Variability of aquatic productivity was determined based on the chlorophyll-a indicator from MODIS satellite images. Monsoon variability was determined from wind parameters, and fish catches were taken from the landing data of the Pesawaran fish market. The results showed that monsoon variability affected aquatic productivity, SST, and fish catches in Pesawaran Regency waters. Maximum wind speed and lowest SST occurred twice a year, from December to March and from August to October, with peaks in January (wind speed of 2.55 m/s and SST of 29.66°C) and September (wind speed of 2.44 m/s and SST of 29.06°C). Maximum aquatic productivity occurred from January to March and from July to September, coinciding with maximum wind speed, with peaks of 0.74 mg/m3 in February and 0.78 mg/m3 in August. The data showed that fish catches decreased with strong wind speed and low SST, and increased with weak wind speed and high SST. The correlations of Catch per Unit Effort (CPUE) with SST, wind speed, and chlorophyll-a were 0.76, -0.67, and -0.70, respectively. The highest fish catches in Pesawaran occurred in March-May and September-December.
NASA Astrophysics Data System (ADS)
Krishna, Shubham; Schartau, Markus
2017-04-01
The effect of ocean acidification on growth and calcification of the marine algae Emiliania huxleyi was investigated in a series of mesocosm experiments where enclosed water volumes that comprised a natural plankton community were exposed to different carbon dioxide (CO2) concentrations. Calcification rates observed during those experiments were found to be highly variable, even among replicate mesocosms that were subject to similar CO2 perturbations. Here, data from an ocean acidification mesocosm experiment are reanalysed with an optimality-based dynamical plankton model. According to our model approach, cellular calcite formation is sensitive to variations in CO2 at the organism level. We investigate the temporal changes and variability in observations, with a focus on resolving observed differences in total alkalinity and particulate inorganic carbon (PIC). We explore how much of the variability in the data can be explained by variations of the initial conditions and by the level of CO2 perturbation. Nine mesocosms of one experiment were sorted into three groups of high, medium, and low calcification rates and analysed separately. The spread of the three optimised ensemble model solutions captures most of the observed variability. Our results show that small variations in initial abundance of coccolithophores and the prevailing physiological acclimation states generate differences in calcification that are larger than those induced by ocean acidification. Accordingly, large deviations between optimal mass flux estimates of carbon and of nitrogen are identified even between mesocosms that were subject to similar ocean acidification conditions. With our model-based data analysis we document how an ocean acidification response signal in calcification can be disentangled from the observed variability in PIC.
Cook, Andrea M; Moritz, Andreas; Freeman, Kathleen P; Bauer, Natali
2016-09-01
Scarce information exists about quality requirements and objective evaluation of performance of large veterinary bench top hematology analyzers. The study was aimed at comparing the observed total error (TEobs) derived from meta-analysis of published method validation data to the total allowable error (TEa) for veterinary hematology variables in small animals based on experts' opinions. Ideally, TEobs should be < TEa. An online survey was sent to veterinary experts in clinical pathology and small animal internal medicine for providing the maximal allowable deviation from a given result for each variable. Percent of TEa = (allowable median deviation/clinical threshold) * 100%. Second, TEobs for 3 laser-based bench top hematology analyzers (ADVIA 2120; Sysmex XT2000iV, and CellDyn 3500) was calculated based on method validation studies published between 2005 and 2013 (n = 4). The percent TEobs = 2 * CV (%) + bias (%). The CV was derived from published studies except for the ADVIA 2120 (internal data), and bias was estimated from the regression equation. A total of 41 veterinary experts (19 diplomates, 8 residents, 10 postgraduate students, 4 anonymous specialists) responded. The proposed range of TEa was wide, but generally ≤ 20%. The TEobs was < TEa for all variables and analyzers except for canine and feline HGB (high bias, low CV) and platelet counts (high bias, high CV). Overall, veterinary bench top analyzers fulfilled experts' requirements except for HGB due to method-related bias, and platelet counts due to known preanalytic/analytic issues. © 2016 American Society for Veterinary Clinical Pathology.
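The two total-error formulas quoted above can be applied directly; the numbers below are hypothetical, chosen to mimic the HGB case where a low CV is outweighed by a high bias:

```python
def percent_tea(allowable_median_deviation, clinical_threshold):
    """Total allowable error (%) from the expert-derived allowable deviation:
    TEa = (allowable median deviation / clinical threshold) * 100%."""
    return allowable_median_deviation / clinical_threshold * 100.0

def percent_teobs(cv_percent, bias_percent):
    """Observed total error (%): TEobs = 2 * CV(%) + bias(%)."""
    return 2.0 * cv_percent + bias_percent

# Hypothetical numbers for an HGB measurement (not from the study):
tea = percent_tea(allowable_median_deviation=1.0, clinical_threshold=10.0)
teobs = percent_teobs(cv_percent=1.5, bias_percent=8.0)
meets_requirement = teobs < tea   # False: bias alone nearly exhausts TEa
```

With these illustrative values TEa is 10% but TEobs is 11%, so the analyzer would fail the requirement despite its low imprecision, exactly the high-bias/low-CV pattern reported for HGB.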
Zhao, Tian; Villéger, Sébastien; Lek, Sovan; Cucherousset, Julien
2014-01-01
Investigations on the functional niche of organisms have primarily focused on differences among species and tended to neglect the potential effects of intraspecific variability, despite the fact that its potential ecological and evolutionary importance is now widely recognized. In this study, we measured the distribution of functional traits in an entire population of largemouth bass (Micropterus salmoides) to quantify the magnitude of intraspecific variability in functional traits and niche (size, position, and overlap) between age classes. Stable isotope analyses (δ13C and δ15N) were also used to determine the association between individual trophic ecology and intraspecific functional trait variability. We observed that functional traits were highly variable within the population (mean coefficient of variation: 15.62% ± 1.78% SE) and predominantly different between age classes. In addition, functional and trophic niche overlap between age classes was extremely low. Differences in functional niche between age classes were associated with strong changes in trophic niche occurring during ontogeny while, within age classes, differences among individuals were likely driven by trophic specialization. Each age class filled only a small portion of the total functional niche of the population, and age classes occupied distinct portions of the functional space, indicating the existence of ontogenetic specialists with different functional roles within the population. The high amplitude of intraspecific variability in functional traits and the differences in functional niche position among individuals reported here support the recent claims for an individual-based approach in functional ecology. PMID:25558359
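The reported trait variability is summarized by the coefficient of variation; a minimal sketch of that computation (the trait values below are made up for illustration):

```python
import numpy as np

def coefficient_of_variation(trait_values):
    """Coefficient of variation (%) = 100 * sample standard deviation / mean."""
    x = np.asarray(trait_values, dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()

# Hypothetical body-depth measurements (cm) across five individuals:
cv = coefficient_of_variation([4.1, 4.8, 3.9, 5.2, 4.5])
```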
Evaluation of a pilot workload metric for simulated VTOL landing tasks
NASA Technical Reports Server (NTRS)
North, R. A.; Graffunder, K.
1979-01-01
A methodological approach to measuring workload was investigated for evaluation of new concepts in VTOL aircraft displays. Multivariate discriminant functions were formed from conventional flight performance and/or visual response variables to maximize detection of experimental differences. The flight performance variable discriminant showed maximum differentiation between crosswind conditions. The visual response measure discriminant maximized differences between fixed vs. motion base conditions and experimental displays. Physiological variables were used to attempt to predict the discriminant function values for each subject/condition/trial. The weights of the physiological variables in these equations showed agreement with previous studies. High muscle tension, light but irregular breathing patterns, and higher heart rate with low amplitude all produced higher scores on this scale and thus, represented higher workload levels.
NASA Astrophysics Data System (ADS)
Schmidt, Thomas; Kalisch, John; Lorenz, Elke; Heinemann, Detlev
2016-03-01
Clouds are the dominant source of small-scale variability in surface solar radiation and uncertainty in its prediction. However, the increasing share of solar energy in the worldwide electric power supply increases the need for accurate solar radiation forecasts. In this work, we present results of a very short term global horizontal irradiance (GHI) forecast experiment based on hemispheric sky images. A 2-month data set with images from one sky imager and high-resolution GHI measurements from 99 pyranometers distributed over 10 km by 12 km is used for validation. We developed a multi-step model and processed GHI forecasts up to 25 min with an update interval of 15 s. A cloud type classification is used to separate the time series into different cloud scenarios. Overall, the sky-imager-based forecasts do not outperform the reference persistence forecasts. Nevertheless, we find that analysis and forecast performance depends strongly on the predominant cloud conditions. Especially convective type clouds lead to high temporal and spatial GHI variability. For cumulus cloud conditions, the analysis error is found to be lower than that introduced by a single pyranometer if it is used representatively for the whole area in distances from the camera larger than 1-2 km. Moreover, forecast skill is much higher for these conditions compared to overcast or clear sky situations causing low GHI variability, which is easier to predict by persistence. In order to generalize the cloud-induced forecast error, we identify a variability threshold indicating conditions with positive forecast skill.
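Forecast skill relative to persistence, as used above, is commonly defined from an error metric such as RMSE; a minimal sketch with hypothetical GHI values (the paper's own skill metric may differ in detail):

```python
import numpy as np

def rmse(pred, obs):
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)))

def forecast_skill(pred, obs, reference):
    """Skill score relative to a reference forecast: 1 - RMSE_f / RMSE_ref.
    Positive values mean the forecast beats the reference (here persistence)."""
    return 1.0 - rmse(pred, obs) / rmse(reference, obs)

# Persistence forecast: the next step's GHI equals the current measurement.
ghi = np.array([420.0, 450.0, 300.0, 480.0, 500.0])   # observed series (W/m^2)
persistence = ghi[:-1]                                 # shifted by one step
model = np.array([445.0, 310.0, 470.0, 495.0])         # hypothetical model output
skill = forecast_skill(model, ghi[1:], persistence)
```

Under highly variable (convective) conditions persistence performs poorly, so a sky-imager forecast can achieve positive skill; under clear or overcast skies persistence is hard to beat and the skill tends toward zero or below.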
Added-values of high spatiotemporal remote sensing data in crop yield estimation
NASA Astrophysics Data System (ADS)
Gao, F.; Anderson, M. C.
2017-12-01
Timely and accurate estimation of crop yield before harvest is critical for food markets and administrative planning. Remote sensing derived parameters have been used for estimating crop yield in either empirical or crop growth models. The use of remote sensing vegetation indices (VI) in crop yield modeling has typically been evaluated at regional and country scales using coarse spatial resolution data (a few hundred meters to kilometers) or assessed over small regions at field level using moderate spatial resolution data (10-100 m). Both data sources have shown great potential in capturing spatial and temporal variability in crop yield. However, the added value of data with both high spatial and high temporal resolution has not been evaluated due to the lack of such a data source with routine, global coverage. In recent years, more moderate resolution data have become freely available, and data fusion approaches that combine data acquired at different spatial and temporal resolutions have been developed. These advances make monitoring crop condition and estimating crop yield at field scale possible. Here we investigate the added value of high spatial and temporal resolution VI for describing variability in crop yield. The explanatory ability of crop yield based on high spatial and temporal resolution remote sensing data was evaluated in a rain-fed agricultural area in the U.S. Corn Belt. Results show that the fused Landsat-MODIS (high spatial and temporal resolution) VI explains yield variability better than either single data source (Landsat or MODIS alone), with EVI2 performing slightly better than NDVI. The maximum VI describes yield variability better than the cumulative VI. Even though VI is effective in explaining yield variability within a season, the inter-annual variability is more complex and needs additional information (e.g. weather, water use, and management).
Our findings underscore the importance of high spatiotemporal remote sensing data and support new moderate resolution satellite missions for agricultural applications.
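The two vegetation indices compared above have standard band formulations; a minimal sketch computing NDVI and the two-band EVI2 from red and near-infrared reflectance (the reflectance values are hypothetical):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def evi2(nir, red):
    """Two-band Enhanced Vegetation Index: 2.5 (NIR - Red)/(NIR + 2.4 Red + 1)."""
    return 2.5 * (nir - red) / (nir + 2.4 * red + 1.0)

# Illustrative surface reflectances for a dense green canopy (made up):
nir_refl, red_refl = 0.50, 0.10
ndvi_val = ndvi(nir_refl, red_refl)
evi2_val = evi2(nir_refl, red_refl)
```

EVI2 is less prone to saturation over dense canopies than NDVI, which is one plausible reason it tracked yield variability slightly better in this study.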
NASA Astrophysics Data System (ADS)
Fujiki, Shogoro; Okada, Kei-ichi; Nishio, Shogo; Kitayama, Kanehiro
2016-09-01
We developed a new method to estimate stand ages of secondary vegetation in the Bornean montane zone, where local people conduct traditional shifting cultivation and protected areas are surrounded by patches of recovering secondary vegetation of various ages. Identifying stand ages at the landscape level is critical to improve conservation policies. We combined a high-resolution satellite image (WorldView-2) with time-series Landsat images. We extracted stand ages (the time elapsed since the most recent slash and burn) from a change-detection analysis with Landsat time-series images and superimposed the derived stand ages on the segments classified by object-based image analysis using WorldView-2. We regarded stand ages as a response variable, and object-based metrics as independent variables, to develop regression models that explain stand ages. Subsequently, we classified the vegetation of the target area into six age units and one rubber plantation unit (1-3 yr, 3-5 yr, 5-7 yr, 7-30 yr, 30-50 yr, >50 yr and 'rubber plantation') using regression models and linear discriminant analyses. Validation demonstrated an accuracy of 84.3%. Our approach is particularly effective in classifying highly dynamic pioneer vegetation younger than 7 years into 2-yr intervals, suggesting that rapid changes in vegetation canopies can be detected with high accuracy. The combination of a spectral time-series analysis and object-based metrics based on high-resolution imagery enabled the classification of dynamic vegetation under intensive shifting cultivation and yielded an informative land cover map based on stand ages.
NASA Astrophysics Data System (ADS)
Chouaib, Wafa; Caldwell, Peter V.; Alila, Younes
2018-04-01
This paper advances the physical understanding of the regional variation of the flow duration curve (FDC). It provides a process-based analysis of the interaction between climate and landscape properties to explain disparities in FDC shapes. We used (i) long-term measured flow and precipitation data over 73 catchments from the eastern US, and (ii) the Sacramento model (SAC-SMA), calibrated to simulate soil moisture and flow-component FDCs. The catchment classification based on storm characteristics pointed to the effect of catchment landscape properties on precipitation variability and consequently on FDC shapes. The landscape effect was pronounced: low values of the slope of the FDC (SFDC), hinting at limited flow variability, were present in regions of high precipitation variability, whereas in regions with low precipitation variability the SFDCs were larger. The topographic index distribution at the catchment scale indicated that saturation excess overland flow mitigated flow variability under conditions of low elevation with large soil moisture storage capacity and high infiltration rates. The SFDCs increased due to predominant subsurface stormflow in catchments at high elevations with limited soil moisture storage capacity and low infiltration rates. Our analyses also highlighted the major role of soil infiltration rates on the FDC, beyond the impact of the predominant runoff generation mechanism and catchment elevation. Under slow infiltration rates in soils of large moisture storage capacity (at low elevations) with predominant saturation excess, the SFDCs were larger. On the other hand, the SFDCs decreased in catchments with prevalent subsurface stormflow and poorly drained soils of small moisture storage capacity. The analysis of the flow-component FDCs demonstrated that the interflow contribution to the response was highest in catchments with large SFDC values.
The surface flow FDC was the most affected by precipitation, as it tracked the precipitation duration curve (PDC). In catchments with low SFDCs this became less applicable, as the surface flow FDC diverged from the PDC at the upper tail (> 40% flow percentile). The interflow and baseflow FDCs best illustrated the filtering effect on precipitation. The process understanding achieved in this study is key for flow simulation and assessment, as well as for future work focusing on process-based FDC predictions.
Ecosystem functioning is enveloped by hydrometeorological variability.
Pappas, Christoforos; Mahecha, Miguel D; Frank, David C; Babst, Flurin; Koutsoyiannis, Demetris
2017-09-01
Terrestrial ecosystem processes, and the associated vegetation carbon dynamics, respond differently to hydrometeorological variability across timescales, and so does our scientific understanding of the underlying mechanisms. Long-term variability of the terrestrial carbon cycle is not yet well constrained and the resulting climate-biosphere feedbacks are highly uncertain. Here we present a comprehensive overview of hydrometeorological and ecosystem variability from hourly to decadal timescales integrating multiple in situ and remote-sensing datasets characterizing extra-tropical forest sites. We find that ecosystem variability at all sites is confined within a hydrometeorological envelope across sites and timescales. Furthermore, ecosystem variability demonstrates long-term persistence, highlighting ecological memory and slow ecosystem recovery rates after disturbances. However, simulation results with state-of-the-art process-based models do not reflect this long-term persistent behaviour in ecosystem functioning. Accordingly, we develop a cross-time-scale stochastic framework that captures hydrometeorological and ecosystem variability. Our analysis offers a perspective for terrestrial ecosystem modelling and paves the way for new model-data integration opportunities in Earth system sciences.
Overcoming multicollinearity in multiple regression using correlation coefficient
NASA Astrophysics Data System (ADS)
Zainodin, H. J.; Yap, S. J.
2013-09-01
Multicollinearity happens when there are high correlations among independent variables. In this case, it is difficult to distinguish between the contributions of these independent variables to the dependent variable, as they may compete to explain much of the same variance. Besides, multicollinearity also violates an assumption of multiple regression: that there is no collinearity among the possible independent variables. Thus, an alternative approach is introduced to overcome the multicollinearity problem and ultimately achieve a well-represented model. This approach removes the multicollinearity source variables on the basis of the correlation coefficient values from the full correlation matrix. Using the full correlation matrix facilitates the use of Excel functions in removing the multicollinearity source variables. This procedure is found to be easier and time-saving, especially when dealing with a greater number of independent variables in a model and a large number of possible models. Hence, in this paper the procedure is described in detail, compared, and implemented.
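A minimal sketch of the general idea, removing variables implicated in the strongest pairwise correlations of the full correlation matrix. The drop rule and threshold below are illustrative assumptions, and the paper performs the procedure in Excel rather than code:

```python
import numpy as np

def drop_collinear(X, names, threshold=0.95):
    """Iteratively drop one variable from the most strongly correlated pair
    until no pairwise |r| exceeds the threshold.

    X: (n_samples, n_vars) array; names: list of variable names.
    Returns the surviving variable names."""
    names = list(names)
    X = np.asarray(X, dtype=float)
    while X.shape[1] > 1:
        r = np.corrcoef(X, rowvar=False)     # full correlation matrix
        np.fill_diagonal(r, 0.0)             # ignore self-correlations
        i, j = np.unravel_index(np.abs(r).argmax(), r.shape)
        if abs(r[i, j]) < threshold:
            break
        drop = max(i, j)                     # drop the later of the pair
        X = np.delete(X, drop, axis=1)
        del names[drop]
    return names

rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = x1 + 1e-3 * rng.normal(size=200)        # nearly collinear with x1
x3 = rng.normal(size=200)
kept = drop_collinear(np.column_stack([x1, x2, x3]), ["x1", "x2", "x3"])
```

Here x2 is removed as a multicollinearity source, leaving x1 and x3 as candidate regressors.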
Yan, Zhengbing; Kuang, Te-Hui; Yao, Yuan
2017-09-01
In recent years, multivariate statistical monitoring of batch processes has become a popular research topic, wherein multivariate fault isolation is an important step aimed at identifying the faulty variables contributing most to the detected process abnormality. Although contribution plots have been commonly used in statistical fault isolation, such methods suffer from the smearing effect between correlated variables. In particular, in batch process monitoring, the high autocorrelations and cross-correlations that exist in variable trajectories make the smearing effect unavoidable. To address this problem, a variable selection-based fault isolation method is proposed in this research, which transforms the fault isolation problem into a variable selection problem in partial least squares discriminant analysis and solves it by calculating a sparse partial least squares model. Unlike the traditional methods, the proposed method emphasizes the relative importance of each process variable. Such information may help process engineers in conducting root-cause diagnosis. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Robustness of quantum key distribution with discrete and continuous variables to channel noise
NASA Astrophysics Data System (ADS)
Lasota, Mikołaj; Filip, Radim; Usenko, Vladyslav C.
2017-06-01
We study the robustness of quantum key distribution protocols using discrete or continuous variables to channel noise. We introduce a model of such noise, based on coupling of the signal to a thermal reservoir (typical for continuous-variable quantum key distribution), to the discrete-variable case. We then compare the bounds on the tolerable channel noise between these two kinds of protocols using the same noise parametrization, assuming an otherwise perfect implementation. The obtained results show that continuous-variable protocols can exhibit similar robustness to channel noise when the transmittance of the channel is relatively high. However, for strong loss, discrete-variable protocols are superior and can overcome even the infinite-squeezing continuous-variable protocol while using limited nonclassical resources. The requirement on the single-photon production probability that a practical photon source would have to fulfill in order to demonstrate such superiority is feasible thanks to the recent rapid development in this field.
MHA admission criteria and program performance: do they predict career performance?
Porter, J; Galfano, V J
1987-01-01
The purpose of this study was to determine to what extent admission criteria predict graduate school and career performance. The study also analyzed which objective and subjective criteria served as the best predictors. MHA graduates of the University of Minnesota from 1974 to 1977 were surveyed to assess career performance. Student files served as the database on admission criteria and program performance. Career performance was measured by four variables: total compensation, satisfaction, fiscal responsibility, and level of authority. High levels of MHA program performance were associated with women who had high undergraduate GPAs from highly selective undergraduate colleges, were undergraduate business majors, and participated in extracurricular activities. High levels of compensation were associated with relatively low undergraduate GPAs, high levels of participation in undergraduate extracurricular activities, and being single at admission to graduate school. Admission to MHA programs should be based upon both objective and subjective criteria. Emphasis should be placed upon the selection process for MHA students, since admission criteria are shown to explain 30 percent of the variability in graduate program performance, and as much as 65 percent of the variance in level of position authority.
Ross Sea Till Properties: Implications for Ice Sheet Bed Interaction
NASA Astrophysics Data System (ADS)
Halberstadt, A. R.; Anderson, J. B.; Simkins, L.; Prothro, L. O.; Bart, P. J.
2015-12-01
Since the discovery of a pervasive shearing till layer underlying Ice Stream B, the scientific community has categorized subglacial diamictons as either deformation till or lodgement till primarily based on shear strength. Deformation till is associated with streaming ice, formed through subglacial deformation of unconsolidated sediments. Lodgement till is believed to be deposited by the plastering of sediment entrained at the base of slow-flowing ice onto a rigid bed. Unfortunately, there has been a paucity of quantitative data on the spatial distribution of shear strength across the continental shelf. Cores collected from the Ross Sea on cruises NBP1502 and NBP9902 provide a rich dataset that can be used to interpret till shear strength variability. Till strengths are analyzed within the context of: (1) geologic substrate; (2) water content and other geotechnical properties; (3) ice sheet retreat history; and (4) geomorphic framework. Tills display a continuum of shear strengths rather than a bimodal distribution, suggesting that shear strength cannot be used to distinguish between lodgement and deformation till. Where the substrate below the LGM unconformity is comprised of older lithified deposits, till shear strengths are both highly variable within the till unit, as well as highly variable between cores. Conversely, where ice streams flowed across unconsolidated Plio-Pleistocene deposits, shear strengths are low and less variable within the unit and between cores. This suggests greater homogenization of cannibalized tills, and possibly a deeper pervasive shear layer. Coarser-grained tills are observed on banks and bank slopes, with finer tills in troughs. Highly variable and more poorly sorted tills are found in close proximity to sediment-based subglacial meltwater channels, attesting to a change in ice-bed interaction as subglacial water increases. 
Pellets (rounded sedimentary clasts of till matrix) are observed in Ross Sea cores, suggesting a history of deformation responsible for pellet formation. Till strength was measured in a variety of environments, including mega-scale lineations and grounding zone wedges; ongoing work focuses on evaluating till shear strengths within a geomorphic context. These analyses are used to re-evaluate till genesis, transport, and characterization.
Semi-Active Control of Precast RC Columns under Seismic Action
NASA Astrophysics Data System (ADS)
Caterino, Nicola; Spizzuoco, Mariacristina
2017-10-01
This work is inspired by the idea of dissipating seismic energy at the base of prefabricated RC columns via semi-active (SA) variable dampers exploiting the base rocking. A wide numerical campaign was performed to investigate the seismic behaviour of a precast RC column with a variable base restraint. The latter is based on the combined use of a hinge, elastic springs, and magnetorheological (MR) dampers remotely controlled according to the instantaneous response of the structural component. The MR devices are driven by a SA control algorithm purposely written to modulate the dissipative capability so as to reduce the base bending moment without causing excessive displacement at the top. The proposed strategy proves promising: the relaxation of the base restraint, which favours a reduction in the base moment demand, is accompanied by a marked enhancement of the energy dissipated through rocking, which can even reduce the top displacement with respect to the “fixed base rotation” condition.
Adolescent religiosity and attitudes to HIV and AIDS in Ghana.
Amoako-Agyeman, Kofi Nyame
2012-11-01
This study investigated the relationships between adolescent religiosity and attitudes to HIV/AIDS based on two major techniques of analysis, factor and regression analysis, with a view to informing preventive school education strategies. Using cross-sectional data on 448 adolescents in junior high school, the study employed a survey in the form of a self-administered questionnaire and sought to identify underlying factors that affect pupils' responses, delineate the pattern of relationships between variables, and select models which best explain and predict relationships among variables. A seven-factor solution described the 'attitude' construct, including abstinence and protection, and a six-factor solution described 'religiosity'. The results showed relatively high levels of religiosity and a preference for private religiosity as opposed to organisational religiosity. The regression analysis produced significant relationships between factors of attitudes to HIV/AIDS and of religiosity. Adolescents with very high private religiosity are more likely to abstain from sex but less likely to use condoms once they initiate sex: protection is inversely related to religiosity. The findings suggest that religion-based adolescent interventions should focus on intrinsic religiosity. Additionally, increasing HIV prevention information and incorporating culturally relevant and socially acceptable values might lend support to improved adolescent school-based HIV/AIDS prevention programmes.
Variable input observer for structural health monitoring of high-rate systems
NASA Astrophysics Data System (ADS)
Hong, Jonathan; Laflamme, Simon; Cao, Liang; Dodson, Jacob
2017-02-01
The development of high-rate structural health monitoring methods is intended to provide damage detection on timescales of 10 µs to 10 ms, where speed of detection is critical to maintaining structural integrity. Here, a novel Variable Input Observer (VIO) coupled with an adaptive observer is proposed as a potential solution for complex high-rate problems. The VIO is designed to adapt its input space based on real-time identification of the system's essential dynamics. By selecting appropriate time-delayed coordinates, defined by a time delay and an embedding dimension, the proper input space is chosen, allowing more accurate estimation of the current state and a shorter convergence time. The optimal time delay is estimated from mutual information, and the embedding dimension from false nearest neighbors. A simulation of the VIO is conducted on a two-degree-of-freedom system with simulated damage. Results are compared with an adaptive Luenberger observer, a fixed time-delay observer, and a Kalman filter. Under its preliminary design, the VIO converges significantly faster than the Luenberger and fixed time-delay observers. It performed similarly to the Kalman filter in terms of convergence, but with greater accuracy.
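The delay-coordinate construction described above can be sketched in a few lines. The histogram-based mutual-information estimator, the lag range, and the sinusoidal test signal below are illustrative assumptions, not the authors' implementation; the false-nearest-neighbors selection of the embedding dimension is omitted for brevity.

```python
import numpy as np

def average_mutual_information(x, lag, bins=16):
    """Histogram estimate of the mutual information between x(t) and x(t+lag)."""
    a, b = x[:-lag], x[lag:]
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def first_minimum_delay(x, max_lag=50):
    """Pick the embedding delay at the first local minimum of the AMI curve."""
    ami = [average_mutual_information(x, lag) for lag in range(1, max_lag + 1)]
    for i in range(1, len(ami) - 1):
        if ami[i] < ami[i - 1] and ami[i] <= ami[i + 1]:
            return i + 1          # lags start at 1
    return int(np.argmin(ami)) + 1  # fall back to the global minimum

def delay_embed(x, dim, tau):
    """Build the time-delayed coordinate matrix [x(t), x(t+tau), ...]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

t = np.linspace(0, 20 * np.pi, 4000)
x = np.sin(t)                       # simple oscillator as a stand-in signal
tau = first_minimum_delay(x)
coords = delay_embed(x, dim=2, tau=tau)
```

The embedded coordinates then serve as the observer's input space in place of the raw scalar measurement.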
Gold-standard for computer-assisted morphological sperm analysis.
Chang, Violeta; Garcia, Alejandra; Hitschfeld, Nancy; Härtel, Steffen
2017-04-01
Published algorithms for classification of human sperm heads are based on relatively small image databases that are not open to the public, so no direct comparison is available for competing methods. We describe a gold standard for morphological sperm analysis (SCIAN-MorphoSpermGS), a dataset of sperm head images with expert classification labels in one of the following classes: normal, tapered, pyriform, small, or amorphous. This gold standard is for evaluating and comparing known techniques and future improvements to present approaches for classification of human sperm heads for semen analysis. Although this paper does not provide a computational tool for morphological sperm analysis, we present a set of experiments comparing common techniques for sperm head description and classification. This classification baseline is intended as a reference for future improvements to present approaches for human sperm head classification. The gold standard provides a label for each sperm head, obtained by majority voting among experts. The classification baseline compares four supervised learning methods (1-nearest neighbor, naive Bayes, decision trees, and support vector machine (SVM)) and three shape-based descriptors (Hu moments, Zernike moments, and Fourier descriptors), reporting the accuracy and the true positive rate for each experiment. We used Fleiss' kappa coefficient to evaluate inter-expert agreement, and Fisher's exact test for inter-expert variability and for statistically significant differences between descriptors and learning techniques. Our results confirm the high degree of inter-expert variability in morphological sperm analysis. Regarding the classification baseline, we show that none of the standard descriptors or classification approaches is well suited to the problem of sperm head classification.
We found that the correct classification rate was highly variable when discriminating among non-normal sperm heads. Using the Fourier descriptor and SVM, we achieved the best mean correct classification rate: only 49%. We conclude that SCIAN-MorphoSpermGS will provide a standard tool for evaluating characterization and classification approaches for human sperm heads. Indeed, there is a clear need for a shape-based descriptor specific to human sperm heads and for a classification approach that tackles the high variability within subcategories of abnormal sperm cells.
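Fleiss' kappa, used above to quantify inter-expert agreement, can be computed directly from a subjects-by-categories count matrix; the toy counts below (five images, three experts, three categories) are invented for illustration.

```python
import numpy as np

def fleiss_kappa(ratings):
    """Fleiss' kappa for a (subjects x categories) matrix of rating counts.

    ratings[i, j] = number of experts who assigned subject i to category j;
    every row must sum to the same number of raters n.
    """
    ratings = np.asarray(ratings, dtype=float)
    n = ratings.sum(axis=1)[0]             # raters per subject
    N = ratings.shape[0]                   # number of subjects
    p_j = ratings.sum(axis=0) / (N * n)    # overall category proportions
    P_i = ((ratings ** 2).sum(axis=1) - n) / (n * (n - 1))  # per-subject agreement
    P_bar = P_i.mean()                     # observed agreement
    P_e = (p_j ** 2).sum()                 # chance agreement
    return (P_bar - P_e) / (1.0 - P_e)

# Toy example: 5 sperm-head images rated by 3 experts into 3 categories.
counts = np.array([
    [3, 0, 0],
    [0, 3, 0],
    [2, 1, 0],
    [0, 2, 1],
    [3, 0, 0],
])
kappa = fleiss_kappa(counts)
```

Values near 1 indicate near-perfect agreement; the high inter-expert variability reported above corresponds to much lower kappa values.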
Climate-based archetypes for the environmental fate assessment of chemicals.
Ciuffo, Biagio; Sala, Serenella
2013-11-15
Emissions of chemicals have been on the rise for years, and their impacts are greatly influenced by spatial differentiation. Chemicals are usually emitted locally, but their impact can be felt both locally and globally, owing to their chemical properties and persistence. The variability of environmental parameters in the emission compartment may affect a chemical's fate and exposure by orders of magnitude. Assessing the environmental fate of chemicals and the inherent spatial differentiation requires multimedia models at various levels of complexity (from simple box models to complex computational, high-spatial-resolution models). The objective of these models is to support ecological and human health risk assessment by reducing the uncertainty of chemical impact assessments. The parameterisation of spatially resolved multimedia models is usually based on scenarios of evaluative environments, or on geographical resolutions related to administrative boundaries (e.g. countries/continents) or landscape areas (e.g. watersheds, eco-regions). The choice of the most appropriate scale and scenario is important from a management perspective, as a balance should be struck between a simplified approach and computationally intensive multimedia models. In this paper, aiming to go beyond the traditional approach based on scale/resolution (cell, country, and basin), we propose and assess climate-based archetypes for the impact assessment of chemicals released into air. We define the archetypes based on the main drivers of spatial variability, which we identify systematically using global sensitivity analysis techniques. A case study using the high-resolution multimedia model MAPPE (Multimedia Assessment of Pollutant Pathways in the Environment) is presented.
The analysis showed that suitable archetypes should be both climate- and chemical-specific, as different chemicals (or groups of chemicals) have different traits that influence their spatial variability. This hypothesis was tested by comparing the variability of MAPPE output for four different climatic zones on four different continents for four different chemicals (representing different combinations of physical and chemical properties). The results showed that climate-based archetypes are highly suitable for assessing the impacts of chemicals released into air, although further research is still necessary to test these findings.
NASA Astrophysics Data System (ADS)
Zhang, Yao; Xiao, Xiangming; Guanter, Luis; Zhou, Sha; Ciais, Philippe; Joiner, Joanna; Sitch, Stephen; Wu, Xiaocui; Nabel, Julia; Dong, Jinwei; Kato, Etsushi; Jain, Atul K.; Wiltshire, Andy; Stocker, Benjamin D.
2016-12-01
Carbon uptake by terrestrial ecosystems is increasing along with rising atmospheric CO2 concentrations. Embedded in this trend, recent studies have suggested that the interannual variability (IAV) of global carbon fluxes may be dominated by semi-arid ecosystems, but the mechanisms underlying the high variability of these regions are not well known. Here we derive an ensemble of gross primary production (GPP) estimates using the average of three data-driven models and eleven process-based models, weighted by their spatial representativeness of satellite-based solar-induced chlorophyll fluorescence (SIF). We then use this weighted GPP ensemble to investigate GPP variability under different aridity regimes. We show that semi-arid regions contribute 57% of the detrended IAV of global GPP. Moreover, in regions with higher GPP variability, GPP fluctuations are mostly controlled by precipitation and strongly coupled with evapotranspiration (ET). The higher GPP IAV in semi-arid regions is co-limited by supply (precipitation)-induced ET variability and GPP-ET coupling strength. Our results demonstrate the importance of semi-arid regions to the global terrestrial carbon cycle and suggest larger GPP and ET variations in the future with changing precipitation patterns and dryland expansion.
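The paper's model-weighting scheme is not reproduced here, but a region's "contribution" to global detrended IAV is commonly computed (an assumption for this sketch) as the covariance of the region's detrended anomaly with the global total, normalized by the global variance, so the shares sum to one.

```python
import numpy as np

def detrend(y):
    """Remove a linear trend (least-squares fit against time)."""
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, y, 1)
    return y - (slope * t + intercept)

def iav_contribution(regional_gpp):
    """Fraction of global IAV attributed to each region.

    regional_gpp: (regions x years) array of annual GPP totals.
    Shares are cov(region, global) / var(global) and sum to 1.
    """
    anomalies = np.array([detrend(r) for r in regional_gpp])
    global_anom = anomalies.sum(axis=0)
    return anomalies @ global_anom / (global_anom @ global_anom)

# Two hypothetical regions: a high-variability semi-arid one and a humid one.
rng = np.random.default_rng(0)
years = 30
semi_arid = 3.0 * rng.standard_normal(years)
humid = 1.0 * rng.standard_normal(years)
shares = iav_contribution(np.vstack([semi_arid, humid]))
```

Under this definition the more variable region dominates the global IAV share, mirroring the semi-arid result reported above.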
Bisson, P.A.; Dunham, J.B.; Reeves, G.H.
2009-01-01
In spite of numerous habitat restoration programs in fresh waters with an aggregate annual funding of millions of dollars, many populations of Pacific salmon remain significantly imperiled. Habitat restoration strategies that address limited environmental attributes and partial salmon life-history requirements or approaches that attempt to force aquatic habitat to conform to idealized but ecologically unsustainable conditions may partly explain this lack of response. Natural watershed processes generate highly variable environmental conditions and population responses, i.e., multiple life histories, that are often not considered in restoration. Examples from several locations underscore the importance of natural variability to the resilience of Pacific salmon. The implication is that habitat restoration efforts will be more likely to foster salmon resilience if they consider processes that generate and maintain natural variability in fresh water. We identify three specific criteria for management based on natural variability: the capacity of aquatic habitat to recover from disturbance, a range of habitats distributed across stream networks through time sufficient to fulfill the requirements of diverse salmon life histories, and ecological connectivity. In light of these considerations, we discuss current threats to habitat resilience and describe how regulatory and restoration approaches can be modified to better incorporate natural variability. ?? 2009 by the author(s).
Health and psychosocial effects of flexible working hours.
Janssen, Daniela; Nachreiner, Friedhelm
2004-12-01
To examine whether any impairments in health and social lives can be found under different kinds of flexible working hours, and whether such effects are related to specific characteristics of these working hours. Two studies -- a company based survey (N=660) and an internet survey (N=528) -- have been conducted. The first one was a questionnaire study (paper and pencil) on employees working under some 'typical' kinds of different flexible working time arrangements in different companies and different occupational fields (health care, manufacturing, retail, administration, call centres). The second study was an internet-based survey, using an adaptation of the questionnaire from the first study. The results of both studies consistently show that high variability of working hours is associated with increased impairments in health and well-being and this is especially true if this variability is company controlled. These effects are less pronounced if variability is self-controlled; however, autonomy does not compensate the effects of variability. Recommendations for an appropriate design of flexible working hours should be developed in order to minimize any impairing effects on health and psychosocial well-being; these recommendations should include -- besides allowing for discretion in controlling one's (flexible) working hours -- that variability in flexible working hours should be kept low (or at least moderate), even if this variability is self-controlled.
Lindholm, Daniel; Lindbäck, Johan; Armstrong, Paul W; Budaj, Andrzej; Cannon, Christopher P; Granger, Christopher B; Hagström, Emil; Held, Claes; Koenig, Wolfgang; Östlund, Ollie; Stewart, Ralph A H; Soffer, Joseph; White, Harvey D; de Winter, Robbert J; Steg, Philippe Gabriel; Siegbahn, Agneta; Kleber, Marcus E; Dressel, Alexander; Grammer, Tanja B; März, Winfried; Wallentin, Lars
2017-08-15
Currently, there is no generally accepted model to predict outcomes in stable coronary heart disease (CHD). This study evaluated and compared the prognostic value of biomarkers and clinical variables to develop a biomarker-based prediction model in patients with stable CHD. In a prospective, randomized trial cohort of 13,164 patients with stable CHD, we analyzed several candidate biomarkers and clinical variables and used multivariable Cox regression to develop a clinical prediction model based on the most important markers. The primary outcome was cardiovascular (CV) death, but model performance was also explored for other key outcomes. The model was internally validated by bootstrapping and externally validated in 1,547 patients from another study. During a median follow-up of 3.7 years, there were 591 cases of CV death. The 3 most important biomarkers were N-terminal pro-B-type natriuretic peptide (NT-proBNP), high-sensitivity cardiac troponin T (hs-cTnT), and low-density lipoprotein cholesterol, with NT-proBNP and hs-cTnT having greater prognostic value than any other biomarker or clinical variable. The final prediction model included age (A), biomarkers (B) (NT-proBNP, hs-cTnT, and low-density lipoprotein cholesterol), and clinical variables (C) (smoking, diabetes mellitus, and peripheral arterial disease). This "ABC-CHD" model had high discriminatory ability for CV death (c-index 0.81 in the derivation cohort, 0.78 in the validation cohort), with adequate calibration in both cohorts. The model provides a robust tool for predicting CV death in patients with stable CHD. As it is based on a small number of readily available biomarkers and clinical factors, it can be widely employed to complement clinical assessment and guide management based on CV risk. (The Stabilization of Atherosclerotic Plaque by Initiation of Darapladib Therapy Trial [STABILITY]; NCT00799903).
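The c-index reported above measures discrimination: among comparable patient pairs, the fraction in which the patient with the higher predicted risk experiences the event first. A minimal sketch on toy survival data (censored subjects make a pair non-comparable when censoring comes first; tied event times are skipped for simplicity):

```python
import itertools
import numpy as np

def concordance_index(time, event, risk):
    """Harrell's c-index: fraction of comparable pairs in which the
    higher-risk subject experienced the event first (ties in risk = 0.5)."""
    concordant = 0.0
    comparable = 0
    for i, j in itertools.combinations(range(len(time)), 2):
        if time[j] < time[i]:
            i, j = j, i                  # make i the earlier subject
        if not event[i]:
            continue                     # earlier subject censored: not comparable
        if time[i] == time[j] and event[j]:
            continue                     # tied event times: skipped here
        comparable += 1
        if risk[i] > risk[j]:
            concordant += 1.0
        elif risk[i] == risk[j]:
            concordant += 0.5
    return concordant / comparable

# Toy data: follow-up times, event indicators (1 = death), predicted risks.
time = np.array([2.0, 4.0, 3.0, 5.0, 1.0])
event = np.array([1, 1, 0, 1, 1])
risk = np.array([0.9, 0.4, 0.5, 0.45, 0.95])
cindex = concordance_index(time, event, risk)
```

A c-index of 0.5 corresponds to chance-level discrimination and 1.0 to perfect ranking; the ABC-CHD values of 0.78-0.81 sit in the range usually considered strong for clinical risk models.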
NASA Astrophysics Data System (ADS)
Shi, Jing; Shi, Yunli; Tan, Jian; Zhu, Lei; Li, Hu
2018-02-01
Traditional power forecasting models cannot efficiently take the many relevant factors into account, nor identify which factors matter most. In this paper, mutual information from information theory and the random forests algorithm from artificial intelligence are introduced into medium- and long-term electricity demand prediction. Mutual information can identify highly related factors based on the average mutual information between candidate variables and electricity demand; different industries may be strongly associated with different variables. The random forests algorithm was then used to build a separate forecasting model for each industry according to its correlated factors. Electricity consumption data for Jiangsu Province is taken as a practical example, and the above method is compared with methods that disregard mutual information and industry segmentation. The simulation results show that the method is scientific and effective and provides higher prediction accuracy.
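The factor-identification step can be sketched as follows, assuming a histogram estimate of mutual information and invented driver variables (the random-forest modelling stage is not shown here):

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram estimate of I(X; Y) in nats between two continuous series."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# Hypothetical drivers: demand depends strongly on the first, not the second.
rng = np.random.default_rng(1)
n = 2000
gdp = rng.normal(size=n)
temperature = rng.normal(size=n)
demand = 2.0 * gdp + 0.1 * rng.normal(size=n)

scores = {"gdp": mutual_information(gdp, demand),
          "temperature": mutual_information(temperature, demand)}
ranked = sorted(scores, key=scores.get, reverse=True)
```

Factors ranked highest by mutual information would then feed the per-industry random-forest models.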
VLSI implementation of a new LMS-based algorithm for noise removal in ECG signal
NASA Astrophysics Data System (ADS)
Satheeskumaran, S.; Sabrigiriraj, M.
2016-06-01
Least mean square (LMS)-based adaptive filters are widely deployed for removing artefacts from the electrocardiogram (ECG) because of their low computational cost, but they exhibit a high mean square error (MSE) in noisy environments. The transform-domain variable step-size LMS algorithm reduces the MSE at the cost of computational complexity. In this paper, a variable step-size delayed LMS adaptive filter is used to remove artefacts from the ECG signal for improved feature extraction. Dedicated digital signal processors provide fast processing, but they are not flexible; with field-programmable gate arrays, pipelined architectures can be used to enhance system performance. A pipelined architecture improves the operating efficiency of the adaptive filter and reduces power consumption. The proposed technique provides a high signal-to-noise ratio and low MSE with reduced computational complexity; hence, it is a useful method for monitoring patients with heart-related problems.
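A minimal variable step-size LMS noise canceller in this spirit: the step-size update rule, its constants, and the sinusoidal stand-in for the ECG are illustrative assumptions, not the paper's algorithm or its VLSI realization.

```python
import numpy as np

def vss_lms(desired, reference, taps=8, mu0=0.005, alpha=0.97,
            gamma=0.0005, mu_max=0.01):
    """Adaptive noise canceller with a variable step size: the step grows
    with the instantaneous error energy and decays geometrically, so the
    filter adapts quickly at first and has little misadjustment later."""
    n = len(desired)
    w = np.zeros(taps)
    mu = mu0
    cleaned = np.zeros(n)
    for k in range(taps - 1, n):
        x = reference[k - taps + 1:k + 1][::-1]  # tap-delay line, newest first
        e = desired[k] - w @ x                   # error = artefact-free estimate
        mu = min(alpha * mu + gamma * e * e, mu_max)
        w = w + mu * e * x                       # LMS weight update
        cleaned[k] = e
    return cleaned

rng = np.random.default_rng(2)
n = 4000
noise = rng.normal(size=n)                       # reference noise input
ecg = np.sin(2 * np.pi * np.arange(n) / 200.0)   # crude stand-in for an ECG
corrupted = ecg + 0.5 * noise                    # primary input: signal + artefact
cleaned = vss_lms(corrupted, noise)
```

After convergence the error output tracks the clean signal, which is the standard adaptive-noise-cancellation configuration this family of filters implements.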
Integrated Microfluidic Variable Optical Attenuator
2005-11-28
[Abstract not recoverable; only fragments of the paper's reference list survive extraction. The fragments cite work on a variable optical attenuator based on a straight polymer-silica waveguide, bottom-up soft-lithographic fabrication of three-dimensional multilayer polymer structures, and a fabrication process that preserves quality without damaging polymer materials at high temperatures, yielding a core index of 1.561 and a cladding index of 1.546.]
Sartipi, Majid; Nedjat, Saharnaz; Mansournia, Mohammad Ali; Baigi, Vali; Fotouhi, Akbar
2016-11-01
Some variables, like socioeconomic status (SES), cannot be measured directly; instead, such 'latent variables' are measured indirectly through tangible items. There are different methods for measuring latent variables, such as data-reduction methods, e.g. Principal Component Analysis (PCA), and Latent Class Analysis (LCA). The purpose of our study was to measure an assets index, as a representative of SES, with two methods, Non-Linear PCA (NLPCA) and LCA, and to compare them in order to choose the most appropriate model. This was a cross-sectional study in which 1995 respondents in Tehran filled in questionnaires about their assets. The data were analyzed with SPSS 19 (CATPCA command) and SAS 9.2 (PROC LCA command) to estimate socioeconomic status, and the results were compared using the intra-class correlation coefficient (ICC). The 6 classes derived from LCA based on BIC were highly consistent with the 6 classes from CATPCA (categorical PCA) (ICC = 0.87, 95% CI: 0.86-0.88). There is no gold standard for measuring SES, so it is not possible to say definitively that one method is better than another. LCA is a more complex method that presents detailed information about latent variables and requires only one assumption (local independence), while NLPCA is a simpler method that requires more assumptions. Overall, NLPCA seems an acceptable method of analysis because of its simplicity and high agreement with LCA.
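A rough sketch of the PCA side of this comparison, using plain linear PCA on synthetic binary asset indicators (the study's CATPCA additionally applies optimal scaling to categorical items, and the LCA side is not shown):

```python
import numpy as np

def pca_asset_index(assets):
    """Score each household by the first principal component of its
    standardized asset indicators (plain linear PCA; CATPCA would also
    optimally rescale the categorical items)."""
    X = np.asarray(assets, dtype=float)
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)  # rows of vt = principal axes
    return X @ vt[0]

rng = np.random.default_rng(3)
n = 500
wealth = rng.normal(size=n)      # latent SES (unobservable in practice)
# six hypothetical binary asset indicators, each more likely for wealthier households
assets = (rng.normal(size=(n, 6)) < wealth[:, None]).astype(float)
score = pca_asset_index(assets)
# split the score into five SES classes (quintiles)
ses_class = np.digitize(score, np.quantile(score, [0.2, 0.4, 0.6, 0.8]))
```

The sign of a principal component is arbitrary, so in practice the score is oriented so that higher values correspond to more assets before forming classes.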
Weather variability, tides, and Barmah Forest virus disease in the Gladstone region, Australia.
Naish, Suchithra; Hu, Wenbiao; Nicholls, Neville; Mackenzie, John S; McMichael, Anthony J; Dale, Pat; Tong, Shilu
2006-05-01
In this study we examined the impact of weather variability and tides on the transmission of Barmah Forest virus (BFV) disease and developed a weather-based forecasting model for BFV disease in the Gladstone region, Australia. We used seasonal autoregressive integrated moving-average (SARIMA) models to determine the contribution of weather variables to BFV transmission after the time-series data for the response and explanatory variables were made stationary through seasonal differencing. We obtained data on monthly counts of BFV cases, weather variables (e.g., mean minimum and maximum temperature, total rainfall, and mean relative humidity), high and low tides, and population size in the Gladstone region between January 1992 and December 2001 from the Queensland Department of Health, the Australian Bureau of Meteorology, the Queensland Department of Transport, and the Australian Bureau of Statistics, respectively. The SARIMA model shows that the 5-month moving average of minimum temperature (b=0.15, p<0.001) was significantly and positively associated with BFV disease, whereas high tide in the current month (b=-1.03, p=0.04) was significantly and inversely associated with it; no significant association was found for the other variables. These results may be applied to forecast the occurrence of BFV disease and to guide the use of public health resources in BFV control and prevention.
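The seasonal-differencing step used to make the monthly series stationary before SARIMA fitting can be illustrated on a synthetic series (the BFV case counts themselves are not reproduced here):

```python
import numpy as np

def seasonal_difference(y, period=12):
    """Seasonal differencing y_t - y_{t-s}: removes a repeating annual
    cycle so the series is closer to stationary before SARIMA fitting."""
    y = np.asarray(y, dtype=float)
    return y[period:] - y[:-period]

rng = np.random.default_rng(4)
months = np.arange(120)                            # ten years of monthly data
seasonal = 10 * np.sin(2 * np.pi * months / 12)    # annual cycle
trend = 0.05 * months                              # slow upward drift
cases = seasonal + trend + rng.normal(size=120)    # synthetic disease series
diffed = seasonal_difference(cases)
```

The annual cycle cancels exactly and the linear trend collapses to a constant (0.05 x 12 = 0.6 here), leaving a far less variable series for the ARMA terms to model.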
Using computer-based video analysis in the study of fidgety movements.
Adde, Lars; Helbostad, Jorunn L; Jensenius, Alexander Refsum; Taraldsen, Gunnar; Støen, Ragnhild
2009-09-01
Absence of fidgety movements (FMs) in high-risk infants is a strong marker for later cerebral palsy (CP). FMs can be classified by the General Movement Assessment (GMA), based on Gestalt perception of the infant's movement pattern; computer-based technology may provide more objective movement analysis. The aim of this study was to explore the feasibility of computer-based video analysis of infants' spontaneous movements for classifying non-fidgety versus fidgety movements. GMA was performed on video material from the fidgety period in 82 term and preterm infants at low and high risk of developing CP. The same videos were analysed using the developed software, the General Movement Toolbox (GMT), which visualises the infant's movements for qualitative analysis. Variables derived from the displacement of pixels from one video frame to the next were used for quantitative analysis. Visual representations from GMT showed easily recognisable patterns of FMs. Of the eight quantitative variables derived, the variability in displacement of the spatial centre of active pixels in the image had the highest sensitivity (81.5%) and specificity (70.0%) in classifying FMs. Setting triage thresholds at 90% sensitivity and specificity for FMs reduced the need for further referral by 70%. Video recordings can thus be used for qualitative and quantitative analysis of FMs with GMT, which is easy to implement in clinical practice and may assist in detecting infants without FMs.
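A sketch of the kind of displacement-based feature described above: threshold the frame-to-frame intensity change to find active pixels, track their centroid, and take the standard deviation of that track. The synthetic frames, threshold, and patch size are arbitrary illustrations, not the GMT implementation.

```python
import numpy as np

def centroid_variability(frames, threshold=10.0):
    """For each pair of consecutive frames, find 'active' pixels (large
    intensity change), take their centroid, and return the standard
    deviation of the centroid track across the clip."""
    centroids = []
    for prev, cur in zip(frames[:-1], frames[1:]):
        active = np.abs(cur - prev) > threshold
        ys, xs = np.nonzero(active)
        if len(xs) == 0:
            continue                      # no motion between these frames
        centroids.append((xs.mean(), ys.mean()))
    track = np.array(centroids)
    return float(track.std(axis=0).mean())

rng = np.random.default_rng(5)

def make_clip(jitter, size=64, length=40):
    """Synthetic clip: a bright 8x8 patch whose position jitters by +-jitter."""
    frames = []
    for _ in range(length):
        img = np.zeros((size, size))
        c = 28 + int(rng.integers(-jitter, jitter + 1))
        img[c:c + 8, c:c + 8] = 255.0
        frames.append(img)
    return frames

fidgety_like = centroid_variability(make_clip(jitter=6))
still_like = centroid_variability(make_clip(jitter=1))
```

A clip with larger, more variable motion of the active region yields a higher centroid variability, which is the direction of the association reported above.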
Evaluation of Rgb-Based Vegetation Indices from Uav Imagery to Estimate Forage Yield in Grassland
NASA Astrophysics Data System (ADS)
Lussem, U.; Bolten, A.; Gnyp, M. L.; Jasper, J.; Bareth, G.
2018-04-01
Monitoring forage yield throughout the growing season is of key importance in supporting management decisions on grasslands and pastures. Especially on intensively managed grasslands, where nitrogen fertilizer and/or manure are applied regularly, precision agriculture applications are beneficial for supporting sustainable, site-specific decisions on fertilizer treatment, grazing management, and yield forecasting, and for mitigating potential negative impacts. These decisions require timely and accurate information on plant parameters (e.g. forage yield) at high spatial and temporal resolution. However, in highly heterogeneous plant communities such as grasslands, assessing in-field variability non-destructively, e.g. to determine adequate fertilizer application, remains challenging. Biomass/yield estimation in particular, an important parameter of grassland quality and quantity, is rather laborious: forage yield (dry or fresh matter) is mostly measured manually with rising plate meters (RPM) or ultrasonic sensors (handheld or mounted on vehicles), so in-field variability either cannot be assessed for the entire field or only at the cost of disturbing the sward. Using unmanned aerial vehicles (UAVs) equipped with consumer-grade RGB cameras, in-field variability can be assessed by computing RGB-based vegetation indices. In this contribution we test and evaluate the robustness of RGB-based vegetation indices for estimating dry matter forage yield on a recently established experimental grassland site in Germany, and compare the RGB-based VIs to indices computed from the Yara N-Sensor. The results show a good correlation of forage yield with RGB-based VIs such as the NGRDI, with an R2 value of 0.62.
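The NGRDI used above is a simple per-pixel band ratio, (G - R) / (G + R), computed from the red and green channels of the RGB orthomosaic; a minimal sketch on an invented patch:

```python
import numpy as np

def ngrdi(red, green):
    """Normalized Green-Red Difference Index: (G - R) / (G + R) per pixel.
    A small epsilon guards against division by zero on dark pixels."""
    red = np.asarray(red, dtype=float)
    green = np.asarray(green, dtype=float)
    return (green - red) / (green + red + 1e-9)

# Hypothetical 4x4 patch of digital numbers from an RGB orthomosaic:
# left half vigorous canopy (green > red), right half bare soil (red > green).
red = np.array([[60.0, 60, 120, 120]] * 4)
green = np.array([[120.0, 120, 60, 60]] * 4)
index = ngrdi(red, green)
```

Plot-mean index values would then be regressed against measured dry matter yield to obtain the kind of R2 reported above.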
Prediction of Psilocybin Response in Healthy Volunteers
Studerus, Erich; Gamma, Alex; Kometer, Michael; Vollenweider, Franz X.
2012-01-01
Responses to hallucinogenic drugs, such as psilocybin, are believed to be critically dependent on the user's personality, current mood state, drug pre-experiences, expectancies, and social and environmental variables. However, little is known about the order of importance of these variables and their effect sizes in comparison to drug dose. Hence, this study investigated the effects of 24 predictor variables, including age, sex, education, personality traits, drug pre-experience, mental state before drug intake, experimental setting, and drug dose on the acute response to psilocybin. The analysis was based on the pooled data of 23 controlled experimental studies involving 409 psilocybin administrations to 261 healthy volunteers. Multiple linear mixed effects models were fitted for each of 15 response variables. Although drug dose was clearly the most important predictor for all measured response variables, several non-pharmacological variables significantly contributed to the effects of psilocybin. Specifically, having a high score in the personality trait of Absorption, being in an emotionally excitable and active state immediately before drug intake, and having experienced few psychological problems in past weeks were most strongly associated with pleasant and mystical-type experiences, whereas high Emotional Excitability, low age, and an experimental setting involving positron emission tomography most strongly predicted unpleasant and/or anxious reactions to psilocybin. The results confirm that non-pharmacological variables play an important role in the effects of psilocybin. PMID:22363492
Valle, Susanne Collier; Støen, Ragnhild; Sæther, Rannei; Jensenius, Alexander Refsum; Adde, Lars
2015-10-01
A computer-based video analysis has recently been presented for quantitative assessment of general movements (GMs). This method's test-retest reliability, however, has not yet been evaluated. The aim of the current study was to evaluate the test-retest reliability of computer-based video analysis of GMs and to explore the association between computer-based video analysis and the temporal organization of fidgety movements (FMs). Test-retest reliability study. 75 healthy, term-born infants were recorded twice on the same day during the FMs period using a standardized video set-up. The computer-based movement variables "quantity of motion mean" (Qmean) and "quantity of motion standard deviation" (QSD), reflecting the amount of motion, and "centroid of motion standard deviation" (CSD), reflecting the variability of the spatial center of motion of the infant, were analyzed. In addition, the association between CSD and the temporal organization of FMs was explored. Intraclass correlation coefficients (ICC 1.1 and ICC 3.1) were calculated to assess test-retest reliability. The ICC values for CSD, Qmean and QSD were 0.80, 0.80 and 0.86 for ICC (1.1), respectively, and 0.80, 0.86 and 0.90 for ICC (3.1), respectively. CSD values were significantly lower in recordings with continual FMs than in recordings with intermittent FMs (p<0.05). This study showed high test-retest reliability of computer-based video analysis of GMs, and a significant association between our computer-based video analysis and the temporal organization of FMs.
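The ICC variants reported above follow the standard Shrout and Fleiss ANOVA formulas; a sketch for the two-recordings-per-infant design, on synthetic scores invented for illustration:

```python
import numpy as np

def icc(ratings):
    """ICC(1,1) and ICC(3,1) (Shrout & Fleiss) for an n-subjects x k-raters
    matrix; here each 'rater' is one of the two same-day recordings."""
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    subj_means = Y.mean(axis=1)
    sess_means = Y.mean(axis=0)
    msb = k * ((subj_means - grand) ** 2).sum() / (n - 1)   # between subjects
    within_ss = ((Y - subj_means[:, None]) ** 2).sum()
    msw = within_ss / (n * (k - 1))                         # one-way within
    ssr = n * ((sess_means - grand) ** 2).sum()             # session effect
    mse = (within_ss - ssr) / ((n - 1) * (k - 1))           # two-way residual
    icc11 = (msb - msw) / (msb + (k - 1) * msw)
    icc31 = (msb - mse) / (msb + (k - 1) * mse)
    return icc11, icc31

session1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # e.g. Qmean, first recording
session2 = session1 + 0.5                        # same ranking, constant shift
icc11, icc31 = icc(np.column_stack([session1, session2]))
```

A constant between-session offset lowers ICC(1,1) but not ICC(3,1), which treats the session effect as systematic; this is why the two variants can differ, as in the Qmean and QSD values above.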
Variability survey of brightest stars in selected OB associations
NASA Astrophysics Data System (ADS)
Laur, Jaan; Kolka, Indrek; Eenmäe, Tõnis; Tuvikene, Taavi; Leedjärv, Laurits
2017-02-01
Context. The stellar evolution theory of massive stars remains uncalibrated with high-precision photometric observational data mainly due to a small number of luminous stars that are monitored from space. Automated all-sky surveys have revealed numerous variable stars but most of the luminous stars are often overexposed. Targeted campaigns can improve the time base of photometric data for those objects. Aims: The aim of this investigation is to study the variability of luminous stars at different timescales in young open clusters and OB associations. Methods: We monitored 22 open clusters and associations from 2011 to 2013 using a 0.25-m telescope. Variable stars were detected by comparing the overall light-curve scatter with measurement uncertainties. Variability was analysed by the light curve feature extraction tool FATS. Periods of pulsating stars were determined using the discrete Fourier transform code SigSpec. We then classified the variable stars based on their pulsation periods and available spectral information. Results: We obtained light curves for more than 20 000 sources of which 354 were found to be variable. Amongst them we find 80 eclipsing binaries, 31 α Cyg, 13 β Cep, 62 Be, 16 slowly pulsating B, 7 Cepheid, 1 γ Doradus, 3 Wolf-Rayet and 63 late-type variable stars. Up to 55% of these stars are potential new discoveries as they are not present in the Variable Star Index (VSX) database. We find the cluster membership fraction for variable stars to be 13% with an upper limit of 35%. Variable star catalogue (Tables A.1-A.10) and light curves are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/598/A108
Nilsson, Daniel; Lindman, Magdalena; Victor, Trent; Dozza, Marco
2018-04-01
Single-vehicle run-off-road crashes are a major traffic safety concern, as they are associated with a high proportion of fatal outcomes. In addressing run-off-road crashes, the development and evaluation of advanced driver assistance systems requires test scenarios that are representative of the variability found in real-world crashes. We apply hierarchical agglomerative cluster analysis to identify similarities in a set of crash data variables; the resulting clusters can then be used as the basis for test scenario development. Out of 13 clusters, nine test scenarios are derived, corresponding to crashes characterised by: drivers drifting off the road in daytime and night-time, high-speed departures, high-angle departures on narrow roads, highways, snowy roads, loss of control on wet roadways, sharp curves, and high speeds on roads with severe road surface conditions. In addition, each cluster was analysed with respect to crash variables related to the crash cause and the reason for the unintended lane departure. The study shows that cluster analysis of representative data provides a statistically based method to identify relevant properties for run-off-road test scenarios, supporting the development of vehicle-based run-off-road countermeasures and driver behaviour models used in virtual testing. Future studies should use driver behaviour from naturalistic driving data to further define how test scenarios and behavioural causation mechanisms should be included. Copyright © 2018 Elsevier Ltd. All rights reserved.
Liu, Rui; Liang, Xiao; Xiang, Dandan; Guo, Yirong; Liu, Yihua; Zhu, Guonian
2016-01-01
Triazophos is a widely used organophosphorous insecticide that has potentially adverse effects to organisms. In the present study, a high-affinity single-chain variable fragment (scFv) antibody with a specific lambda light chain was developed for residue monitoring. First, the specific variable regions were correctly amplified from a hybridoma cell line 8C10 that secreted monoclonal antibody (mAb) against triazophos. The regions were then assembled as scFv via splicing by overlap extension polymerase chain reaction. Subsequently, the recombinant anti-triazophos scFv-8C10 was successfully expressed in Escherichia coli strain HB2151 in soluble form, purified through immobilized metal ion affinity chromatography, and verified via Western blot and peptide mass fingerprinting analyses. Afterward, an indirect competitive enzyme-linked immunosorbent assay was established based on the purified anti-triazophos scFv-8C10 antibody. The assay exhibited properties similar to those based on the parent mAb, with a high sensitivity (IC50 of 1.73 ng/mL) to triazophos and no cross-reaction with other organophosphorus pesticides; it was reliable in detecting triazophos residues in spiked water samples. Moreover, kinetic measurement using a surface plasmon resonance biosensor indicated that the purified scFv-8C10 antibody had a high affinity of 1.8 × 10⁻¹⁰ M and exhibited good binding stability. Results indicated that the recombinant high-affinity scFv-8C10 antibody was an effective detection material that would be promising for monitoring triazophos residues in environmental samples. PMID:27338340
[Predicting individual risk of high healthcare cost to identify complex chronic patients].
Coderch, Jordi; Sánchez-Pérez, Inma; Ibern, Pere; Carreras, Marc; Pérez-Berruezo, Xavier; Inoriza, José M
2014-01-01
To develop a predictive model for the risk of high consumption of healthcare resources, and assess the ability of the model to identify complex chronic patients. A cross-sectional study was performed within a healthcare management organization by using individual data from 2 consecutive years (88,795 people). The dependent variable consisted of healthcare costs above the 95th percentile (P95), including all services provided by the organization and pharmaceutical consumption outside of the institution. The predictive variables were age, sex, morbidity-based on clinical risk groups (CRG)-and selected data from previous utilization (use of hospitalization, use of high-cost drugs in ambulatory care, pharmaceutical expenditure). A univariate descriptive analysis was performed. We constructed a logistic regression model with a 95% confidence level and analyzed sensitivity, specificity, positive predictive values (PPV), and the area under the ROC curve (AUC). Individuals incurring costs >P95 accumulated 44% of total healthcare costs and were concentrated in ACRG3 (aggregated CRG level 3) categories related to multiple chronic diseases. All variables were statistically significant except for sex. The model had a sensitivity of 48.4% (CI: 46.9%-49.8%), specificity of 97.2% (CI: 97.0%-97.3%), PPV of 46.5% (CI: 45.0%-47.9%), and an AUC of 0.897 (CI: 0.892 to 0.902). High consumption of healthcare resources is associated with complex chronic morbidity. A model based on age, morbidity, and prior utilization is able to predict high-cost risk and identify a target population requiring proactive care. Copyright © 2013 SESPAS. Published by Elsevier Espana. All rights reserved.
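The screening metrics reported above all derive from a 2x2 confusion matrix. The sketch below shows that arithmetic; the counts are illustrative values chosen to reproduce the study's pattern of high specificity with moderate sensitivity, not the study's actual data.

```python
# Screening metrics from a 2x2 confusion matrix (illustrative counts only).
def screening_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)  # share of true high-cost patients detected
    specificity = tn / (tn + fp)  # share of low-cost patients correctly excluded
    ppv = tp / (tp + fp)          # share of flagged patients who are truly high-cost
    return sensitivity, specificity, ppv

sens, spec, ppv = screening_metrics(tp=48, fp=55, fn=52, tn=845)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f}")
```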
ERIC Educational Resources Information Center
Edwards, Patricia Thomas
2010-01-01
The purpose of this research study was to investigate if there were differences in students' school climate perceptions based on the independent variables, which were measured on a nominal scale and included school diversity (highly, moderately, minimally), ethnicity (Black, Hispanic, White, Other), educational category (general education, special…
Testing Causal Impacts of a School-Based SEL Intervention Using Instrumental Variable Techniques
ERIC Educational Resources Information Center
Torrente, Catalina; Nathanson, Lori; Rivers, Susan; Brackett, Marc
2015-01-01
Children's social-emotional skills, such as conflict resolution and emotion regulation, have been linked to a number of highly regarded academic and social outcomes. The current study presents preliminary results from a causal test of the theory of change of RULER, a universal school-based approach to social and emotional learning (SEL).…
Analysis of the Microstructure of Titles in the INSPEC Data-Base
ERIC Educational Resources Information Center
Lynch, Michael F.; And Others
1973-01-01
A high degree of constancy has been found in the microstructure of titles of samples of the INSPEC data base taken over a three-year period. Character and digram frequencies are relatively stable, while variable-length character-strings characterizing samples separated by three years in time show close similarities. (2 references) (Author/SJ)
Matthew Parks; Aaron Liston; Rich Cronn
2011-01-01
Primers were designed to amplify the highly variable locus ycf1 from all 11 subsections of Pinus to facilitate plastome assemblies based on short sequence reads as well as future phylogenetic and population genetic analyses. Primer design was based on alignment of 33 Pinus and four Pinaceae plastomes with...
Poverty Lines Based on Fuzzy Sets Theory and Its Application to Malaysian Data
ERIC Educational Resources Information Center
Abdullah, Lazim
2011-01-01
Defining the poverty line has been acknowledged as highly variable by the majority of published literature. Despite long discussions and successes, the poverty line has a number of problems due to its arbitrary nature. This paper proposes three measurements of poverty lines using membership functions based on fuzzy set theory. The three…
NASA Technical Reports Server (NTRS)
Mavris, Dimitri N.; Bandte, Oliver; Schrage, Daniel P.
1996-01-01
This paper outlines an approach for the determination of economically viable robust design solutions using the High Speed Civil Transport (HSCT) as a case study. Furthermore, the paper states the advantages of a probability-based aircraft design over the traditional point design approach. It also proposes a new methodology called Robust Design Simulation (RDS) which treats customer satisfaction as the ultimate design objective. RDS is based on a probabilistic approach to aerospace systems design, which views the chosen objective as a distribution function introduced by so-called noise or uncertainty variables. Since the designer has no control over these variables, a variability distribution is defined for each one of them. The cumulative effect of all these distributions causes the overall variability of the objective function. For cases where the selected objective function depends heavily on these noise variables, it may be desirable to obtain a design solution that minimizes this dependence. The paper outlines a step-by-step approach on how to achieve such a solution for the HSCT case study and introduces an evaluation criterion which guarantees the highest customer satisfaction. This customer satisfaction is expressed by the probability of achieving objective function values less than a desired target value.
A meteorological distribution system for high-resolution terrestrial modeling (MicroMet)
Glen E. Liston; Kelly Elder
2006-01-01
An intermediate-complexity, quasi-physically based, meteorological model (MicroMet) has been developed to produce high-resolution (e.g., 30-m to 1-km horizontal grid increment) atmospheric forcings required to run spatially distributed terrestrial models over a wide variety of landscapes. The following eight variables, required to run most terrestrial models, are...
Psychometric Properties of the RMARS Scale in High School Students
ERIC Educational Resources Information Center
García-Santillán, Arturo; Martínez-Rodríguez, Valeria; Santana, Josefina C.
2018-01-01
The purpose of this study was to determine if there is a structure of variables that allows us to understand the level of Anxiety towards Mathematics in high school students from the municipalities of Zacatal and Jamapa, Veracruz, Mexico. This was based on the seminal works of Richardson and Suinn [1972], who developed the Mathematics Anxiety…
Learning Objects and the E-Learning Cost Dilemma
ERIC Educational Resources Information Center
Weller, Martin
2004-01-01
The creation of quality e-learning material creates a cost dilemma for many institutions, since it has both high variable and high fixed costs. This cost dilemma means that economies of scale are difficult to achieve, which may result in a consequent reduction in the quality of the learning material. Based on the experience of creating a masters…
Rixen, D; Raum, M; Bouillon, B; Schlosser, L E; Neugebauer, E
2001-03-01
On hospital admission, numerous variables are documented for multiple trauma patients, but the value of these variables for outcome prediction remains controversial. The aim was to determine, already at admission, the probability of death of multiple trauma patients. Thus, a multivariate probability model was developed based on data obtained from the trauma registry of the Deutsche Gesellschaft für Unfallchirurgie (DGU). On hospital admission, the DGU trauma registry prospectively collects more than 30 variables. In the first step of the analysis, those variables were selected that the literature suggests are clinical predictors of outcome. In the second step, a univariate analysis of these variables was performed. For all primary variables with univariate significance in outcome prediction, a multivariate logistic regression was performed in the third step and a multivariate prognostic model was developed. 2069 patients from 20 hospitals were prospectively included in the trauma registry from 01.01.1993 to 31.12.1997 (age 39 +/- 19 years; 70.0% males; ISS 22 +/- 13; 18.6% lethality). Of the more than 30 initially documented variables, age, GCS, ISS, base excess (BE) and prothrombin time were the most important prognostic factors for predicting the probability of death, P(death). The following prognostic model was developed: P(death) = 1 / (1 + e^-(k + beta1*age + beta2*GCS + beta3*ISS + beta4*BE + beta5*prothrombin time)), where k = -0.1551, beta1 = 0.0438 (p < 0.0001), beta2 = -0.2067 (p < 0.0001), beta3 = 0.0252 (p = 0.0071), beta4 = -0.0840 (p < 0.0001) and beta5 = -0.0359 (p < 0.0001). Each of the five variables contributed significantly to the multifactorial model. These data show that age, GCS, ISS, base excess and prothrombin time are potentially important predictors for initially identifying multiple trauma patients with a high risk of lethality.
Because base excess and prothrombin time are the only variables in this multifactorial model that can be influenced therapeutically, it might be possible to use them to better guide early and aggressive therapy.
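The published coefficients allow the prognostic formula to be transcribed directly. The sketch below is a plain transcription of the model as stated in the abstract; the patient values in the example are hypothetical.

```python
import math

# Transcription of the published DGU prognostic model:
# P(death) = 1 / (1 + exp(-(k + b1*age + b2*GCS + b3*ISS + b4*BE + b5*PT)))
K = -0.1551
B_AGE, B_GCS, B_ISS, B_BE, B_PT = 0.0438, -0.2067, 0.0252, -0.0840, -0.0359

def p_death(age, gcs, iss, base_excess, prothrombin_time):
    z = (K + B_AGE * age + B_GCS * gcs + B_ISS * iss
         + B_BE * base_excess + B_PT * prothrombin_time)
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical patient: 40 years, GCS 12, ISS 25, BE -4 mmol/L, prothrombin time 80%
print(round(p_death(40, 12, 25, -4, 80), 3))
```

Note that the coefficient signs match clinical intuition: risk rises with age and ISS, and falls with higher GCS, base excess, and prothrombin time.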
Contextual mediation of perceptions in hauntings and poltergeist-like experiences.
Lange, R; Houran, J; Harte, T M; Havens, R A
1996-06-01
The content of perceived apparitions, e.g., bereavement hallucinations, cannot be explained entirely in terms of electromagnetically induced neurochemical processes. It was shown that contextual variables influential in hallucinatory and hypnotic states also structured reported haunting experiences. As predicted, high congruency was found between the experiential content and the nature of the contextual variables. Further, the number of contextual variables involved in an experience was related to the type of experience and the state of arousal preceding the experience. Based on these findings we argue that a more complete explanation of haunting experiences should take into account both electromagnetically induced neurochemical processes and factors related to contextual mediation.
NASA Astrophysics Data System (ADS)
Glover, K. C.; MacDonald, G. M.; Kirby, M.
2016-12-01
Hydroclimatic variability is especially important in California, a water-stressed and increasingly populous region. We assess the range of past hydroclimatic sensitivity and variability in the San Bernardino Mountains of Southern California based on 125 ka of lacustrine sediment records. Geochemistry, charcoal and pollen highlight periods of sustained moisture, aridity and sudden variability driven by orbital and oceanic variations. Marine Isotope Stage 3 (MIS 3) is one such period of greater moisture availability that lasted c. 30 kyr, with smaller-scale perturbations likely reflecting North Atlantic Dansgaard-Oeschger events. Past glacial periods, MIS 4 and MIS 2, display high-amplitude changes. These include periods of reduced forest cover that span millennia, indicating long-lasting aridity. Rapid forest expansion also occurs, marking sudden shifts towards wet conditions. Fire regimes have also changed in tandem with hydroclimate and vegetation. Higher-resolution analysis of the past 10 ka shows that Southern California hydroclimate was broadly similar to other regions of the Southwest and Great Basin, including an orbital and oceanic-driven wet Early Holocene, dry Mid-Holocene, and highly variable Late Holocene. Shorter-term pluvial conditions occur throughout the Holocene, with episodic moisture likely derived from a Pacific source.
The effects of modeling instruction on high school physics academic achievement
NASA Astrophysics Data System (ADS)
Wright, Tiffanie L.
The purpose of this study was to explore whether Modeling Instruction, compared to traditional lecturing, is an effective instructional method to promote academic achievement in selected high school physics classes at a rural middle Tennessee high school. This study used an ex post facto, quasi-experimental research methodology. The independent variables in this study were the instructional methods of teaching. The treatment variable was Modeling Instruction and the control variable was traditional lecture instruction. The Treatment Group consisted of participants in Physical World Concepts who received Modeling Instruction. The Control Group consisted of participants in Physical Science who received traditional lecture instruction. The dependent variable was gain scores on the Force Concept Inventory (FCI). The participants for this study were 133 students each in both the Treatment and Control Groups (n = 266), who attended a public high school in rural middle Tennessee. The participants were administered the Force Concept Inventory (FCI) prior to being taught the mechanics of physics. The FCI data were entered into the computer-based Statistical Package for the Social Sciences (SPSS). Two independent samples t-tests were conducted to answer the research questions. There was a statistically significant difference between the treatment and control groups concerning the instructional method. Modeling Instructional methods were found to be effective in increasing the academic achievement of students in high school physics. There was no statistically significant difference between FCI gain scores for gender. Gender was found to have no effect on the academic achievement of students in high school physics classes. However, even though there was not a statistically significant difference, female students' gain scores were higher than male students' gain scores when Modeling Instructional methods of teaching were used.
Based on these findings, it is recommended that high school science teachers should use Modeling Instructional methods of teaching daily in their classrooms. A recommendation for further research is to expand the Modeling Instructional methods of teaching into different content areas, (i.e., reading and language arts) to explore academic achievement gains.
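The independent-samples t statistic used to compare group gain scores can be written out in a few lines. The sketch below uses the pooled-variance form with toy gain scores of our own invention, purely to illustrate the computation.

```python
import statistics

# Independent-samples t statistic (pooled variance), as used to compare
# gain scores between two groups. The toy samples are illustrative only.
def independent_t(sample_a, sample_b):
    na, nb = len(sample_a), len(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    se = (pooled * (1 / na + 1 / nb)) ** 0.5                  # standard error
    return (statistics.mean(sample_a) - statistics.mean(sample_b)) / se

treatment_gains = [8, 10, 12, 9, 11, 13, 10]  # hypothetical gain scores
control_gains = [5, 7, 6, 8, 6, 7, 5]
print(round(independent_t(treatment_gains, control_gains), 2))
```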
Validity of Factors of the Psychopathy Checklist–Revised in Female Prisoners
Kennealy, Patrick J.; Hicks, Brian M.; Patrick, Christopher J.
2008-01-01
The validity of the Psychopathy Checklist–Revised (PCL-R) has been examined extensively in men, but its validity for women remains understudied. Specifically, the correlates of the general construct of psychopathy and its components as assessed by PCL-R total, factor, and facet scores have yet to be examined in depth. Based on previous research conducted with male offenders, a large female inmate sample was used to examine the patterns of relations between total, factor, and facet scores on the PCL-R and various criterion variables. These variables include ratings of psychopathy based on Cleckley’s criteria, symptoms of antisocial personality disorder, and measures of substance use and abuse, criminal behavior, institutional misconduct, interpersonal aggression, normal range personality, intellectual functioning, and social background variables. Results were highly consistent with past findings in male samples and provide further evidence for the construct validity of the PCL-R two-factor and four-facet models across genders. PMID:17986651
Minozzi, Clémentine; Caron, Antoine; Grenier-Petel, Jean-Christophe; Santandrea, Jeffrey; Collins, Shawn K
2018-05-04
A library of 50 copper-based complexes derived from bisphosphines and diamines was prepared and evaluated in three mechanistically distinct photocatalytic reactions. In all cases, a copper-based catalyst was identified to afford high yields, where new heteroleptic complexes derived from the bisphosphine BINAP displayed high efficiency across all reaction types. Importantly, the evaluation of the library of copper complexes revealed that even when photophysical data is available, it is not always possible to predict which catalyst structure will be efficient or inefficient in a given process, emphasizing the advantages for catalyst structures with high modularity and structural variability. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glosser, D.; Kutchko, B.; Benge, G.
Foamed cement is a critical component for wellbore stability. The mechanical performance of a foamed cement depends on its microstructure, which in turn depends on the preparation method and attendant operational variables. Determination of cement stability for field use is based on laboratory testing protocols governed by API Recommended Practice 10B-4 (API RP 10B-4, 2015). However, laboratory and field operational variables contrast considerably in terms of scale, as well as slurry mixing and foaming processes. In this paper, laboratory and field operational processes are characterized within a physics-based framework. It is shown that the "atomization energy" imparted by the high-pressure injection of nitrogen gas into the field-mixed foamed cement slurry is, by a significant margin, the highest-energy process, and has a major impact on the void system in the cement slurry. There is no analog for this high energy exchange in current laboratory cement preparation and testing protocols. Quantifying the energy exchanges across the laboratory and field processes provides a basis for understanding the relative impacts of these variables on cement structure, and can ultimately lead to the development of practices to improve cement testing and performance.
Recent developments in clopidogrel pharmacology and their relation to clinical outcomes.
Gurbel, Paul A; Antonino, Mark J; Tantry, Udaya S
2009-08-01
Oral antiplatelet therapy with clopidogrel and aspirin is an important and widely prescribed strategy to prevent ischemic events in patients with cardiovascular diseases. However, the occurrence of thrombotic events including stent thrombosis is still high (> 10%). Current practice guidelines are mainly based on large-scale trials focusing on clinical endpoints and 'one size fits all' strategies of treating all patients with the same clopidogrel doses. Pharmacodynamic studies have demonstrated that the latter strategy is associated with wide response variability, in which a substantial percentage of patients show nonresponsiveness. Translational research studies have established the relation of clopidogrel nonresponsiveness, or high on-treatment platelet reactivity, to adverse clinical events, thereby establishing clopidogrel nonresponsiveness as an important emerging clinical entity. Clopidogrel response variability is primarily a pharmacokinetic phenomenon associated with insufficient active metabolite generation that is secondary to i) limited intestinal absorption affected by an ABCB1 gene polymorphism; ii) functional variability in P450 isoenzyme activity; and iii) a genetic polymorphism of CYP450 isoenzymes. Personalized antiplatelet treatment with higher clopidogrel doses in selected patients or with newer, more potent P2Y(12) receptor blockers based on individual platelet function measurement can overcome some of the limitations of current clopidogrel treatment.
A study of variable thrust, variable specific impulse trajectories for solar system exploration
NASA Astrophysics Data System (ADS)
Sakai, Tadashi
A study has been performed to determine the advantages and disadvantages of variable thrust and variable Isp (specific impulse) trajectories for solar system exploration. There have been several numerical research efforts for variable thrust, variable Isp, power-limited trajectory optimization problems. All of these results conclude that variable thrust, variable Isp (variable specific impulse, or VSI) engines are superior to constant thrust, constant Isp (constant specific impulse, or CSI) engines. However, most of these research efforts assume a mission from Earth to Mars, and some of them further assume that these planets are circular and coplanar. Hence they still lack generality. This research has been conducted to answer the following questions: (1) Is a VSI engine always better than a CSI engine or a high thrust engine for any mission to any planet with any time of flight, considering lower propellant mass as the sole criterion? (2) If a planetary swing-by is used for a VSI trajectory, is the fuel savings of a VSI swing-by trajectory better than that of a CSI swing-by or high thrust swing-by trajectory? To support this research, a unique, new computer-based interplanetary trajectory calculation program has been created. This program utilizes a calculus of variations algorithm to perform overall optimization of thrust, Isp, and thrust vector direction along a trajectory that minimizes fuel consumption for interplanetary travel. It is assumed that the propulsion system is power-limited, and thus the compromise between thrust and Isp is a variable to be optimized along the flight path. This program is capable of optimizing not only variable thrust trajectories but also constant thrust trajectories in 3-D space using a planetary ephemeris database. It is also capable of conducting planetary swing-bys.
Using this program, various Earth-originating trajectories have been investigated and the optimized results have been compared to traditional CSI and high thrust trajectory solutions. Results show that VSI rocket engines reduce fuel requirements for any mission compared to CSI rocket engines. Fuel can be saved by applying swing-by maneuvers with VSI engines, but the benefit of swing-bys for VSI engines is smaller than for CSI or high thrust engines.
An internal variable constitutive model for the large deformation of metals at high temperatures
NASA Technical Reports Server (NTRS)
Brown, Stuart; Anand, Lallit
1988-01-01
The advent of large deformation finite element methodologies is beginning to permit the numerical simulation of hot working processes whose design until recently has been based on prior industrial experience. Proper application of such finite element techniques requires realistic constitutive equations which more accurately model material behavior during hot working. A simple constitutive model for hot working is the single scalar internal variable model for isotropic thermal elastoplasticity proposed by Anand. The model is recalled, and the specific scalar functions for the equivalent plastic strain rate and the evolution equation for the internal variable are presented; they are slight modifications of those proposed by Anand. The modified functions are better able to represent high temperature material behavior. The monotonic constant true strain rate and strain rate jump compression experiments on a 2 percent silicon iron are briefly described. The model is implemented in the general purpose finite element program ABAQUS.
An oilspill trajectory analysis model with a variable wind deflection angle
Samuels, W.B.; Huang, N.E.; Amstutz, D.E.
1982-01-01
The oilspill trajectory movement algorithm consists of a vector sum of the surface drift component due to wind and the surface current component. In the U.S. Geological Survey oilspill trajectory analysis model, the surface drift component is assumed to be 3.5% of the wind speed and is rotated 20 degrees clockwise to account for Coriolis effects in the Northern Hemisphere. Field and laboratory data suggest, however, that the deflection angle of the surface drift current can be highly variable. An empirical formula, based on field observations and theoretical arguments relating wind speed to deflection angle, was used to calculate a new deflection angle at each time step in the model. Comparisons of oilspill contact probabilities to coastal areas calculated for constant and variable deflection angles showed that the model is insensitive to this changing angle at low wind speeds. At high wind speeds, some statistically significant differences in contact probabilities did appear. © 1982.
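The constant-angle drift term described above (3.5% of the wind vector, deflected 20 degrees clockwise) is easy to sketch. The function below is our illustration of that geometry, not the USGS model's actual code; the variable-angle variant would simply replace the fixed deflection with the empirical wind-speed formula.

```python
import math

# Sketch of a USGS-style surface drift term: 3.5% of the wind vector,
# rotated 20 degrees clockwise for the Northern Hemisphere Coriolis deflection.
def surface_drift(wind_east, wind_north, factor=0.035, deflection_deg=20.0):
    theta = math.radians(deflection_deg)
    # Clockwise rotation of the scaled wind vector
    drift_east = factor * (wind_east * math.cos(theta) + wind_north * math.sin(theta))
    drift_north = factor * (-wind_east * math.sin(theta) + wind_north * math.cos(theta))
    return drift_east, drift_north

# A 10 m/s northward wind yields a 0.35 m/s drift, deflected 20 degrees to the right
de, dn = surface_drift(0.0, 10.0)
print(round(de, 3), round(dn, 3))
```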
Moreno-Martinez, Francisco Javier; Montoro, Pedro R; Laws, Keith R
2011-05-01
This paper presents a new corpus of 140 high quality colour images belonging to 14 subcategories and covering a range of naming difficulty. One hundred and six Spanish speakers named the items and provided data for several psycholinguistic variables: age of acquisition, familiarity, manipulability, name agreement, typicality and visual complexity. Furthermore, we also present lexical frequency data derived from internet search hits. Apart from the large number of variables evaluated, these stimuli present an important advantage with respect to other comparable image corpora insofar as naming performance in healthy individuals is less prone to ceiling effect problems. Reliability and validity indexes showed that our items display similar psycholinguistic characteristics to those of other corpora. In sum, this set of ecologically valid stimuli provides a useful tool for scientists engaged in cognitive and neuroscience-based research.
A Bayesian Account of Vocal Adaptation to Pitch-Shifted Auditory Feedback
Hahnloser, Richard H. R.
2017-01-01
Motor systems are highly adaptive. Both birds and humans compensate for synthetically induced shifts in the pitch (fundamental frequency) of auditory feedback stemming from their vocalizations. Pitch-shift compensation is partial in the sense that large shifts lead to smaller relative compensatory adjustments of vocal pitch than small shifts. Also, compensation is larger in subjects with high motor variability. To formulate a mechanistic description of these findings, we adapt a Bayesian model of error relevance. We assume that vocal-auditory feedback loops in the brain cope optimally with known sensory and motor variability. Based on measurements of motor variability, optimal compensatory responses in our model provide accurate fits to published experimental data. Optimal compensation correctly predicts sensory acuity, which has been estimated in psychophysical experiments as just-noticeable pitch differences. Our model extends the utility of Bayesian approaches to adaptive vocal behaviors. PMID:28135267
A particle swarm optimization variant with an inner variable learning strategy.
Wu, Guohua; Pedrycz, Witold; Ma, Manhao; Qiu, Dishan; Li, Haifeng; Liu, Jin
2014-01-01
Although Particle Swarm Optimization (PSO) has demonstrated competitive performance in solving global optimization problems, it exhibits some limitations when dealing with optimization problems with high dimensionality and complex landscape. In this paper, we integrate some problem-oriented knowledge into the design of a certain PSO variant. The resulting novel PSO algorithm with an inner variable learning strategy (PSO-IVL) is particularly efficient for optimizing functions with symmetric variables. Symmetric variables of the optimized function have to satisfy a certain quantitative relation. Based on this knowledge, the inner variable learning (IVL) strategy helps the particle to inspect the relation among its inner variables, determine the exemplar variable for all other variables, and then make each variable learn from the exemplar variable in terms of their quantitative relations. In addition, we design a new trap detection and jumping out strategy to help particles escape from local optima. The trap detection operation is employed at the level of individual particles whereas the trap jumping out strategy is adaptive in its nature. Experimental simulations completed for some representative optimization functions demonstrate the excellent performance of PSO-IVL. The effectiveness of PSO-IVL underscores the usefulness of augmenting evolutionary algorithms with problem-oriented domain knowledge.
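For orientation, the global-best PSO that such variants build on can be sketched compactly. The code below is only the standard baseline on a toy 2-D sphere function; the paper's inner variable learning and trap-jumping strategies are not reproduced here.

```python
import random

# Baseline global-best PSO on the 2-D sphere function f(x) = x1^2 + x2^2.
# Standard inertia/acceleration settings; this is NOT the PSO-IVL variant.
def pso_sphere(n_particles=30, n_iters=300, seed=1):
    rng = random.Random(seed)
    f = lambda p: sum(x * x for x in p)
    w, c1, c2 = 0.72, 1.49, 1.49
    pos = [[rng.uniform(-5, 5) for _ in range(2)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    gbest = min(pbest, key=f)[:]                # global best position
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(2):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest, f(gbest)

best, best_val = pso_sphere()
print(best_val)
```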
NASA Astrophysics Data System (ADS)
Chen, Y.; Luo, M.; Xu, L.; Zhou, X.; Ren, J.; Zhou, J.
2018-04-01
The RF method based on grid-search parameter optimization achieved a classification accuracy of 88.16 % in the classification of images with multiple feature variables. This classification accuracy was higher than that of SVM and ANN under the same feature variables. In terms of efficiency, the RF classification method performs better than SVM and ANN; it is more capable of handling multidimensional feature variables. The RF method combined with an object-based analysis approach improved the classification accuracy further. The multiresolution segmentation approach, on the basis of ESP scale parameter optimization, was used to obtain six scales for image segmentation; when the segmentation scale was 49, the classification accuracy reached the highest value of 89.58 %. The classification accuracy of object-based RF classification was 1.42 % higher than that of pixel-based classification (88.16 %). Therefore, the RF classification method combined with an object-based analysis approach could achieve relatively high accuracy in the classification and extraction of land use information for industrial and mining reclamation areas. Moreover, the interpretation of remotely sensed imagery using the proposed method could provide technical support and theoretical reference for remotely sensed monitoring of land reclamation.
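The grid-search-with-cross-validation procedure itself can be illustrated generically. The sketch below tunes a toy 1-D k-nearest-neighbour classifier over a small parameter grid with 3-fold cross-validation; it shows only the search procedure, not the random forest or image features used in the study.

```python
import random

# Generic grid-search parameter optimization with k-fold cross-validation,
# illustrated on a toy 1-D k-NN classifier (the study tuned a random forest).
def knn_predict(train, x, k):
    nearest = sorted(train, key=lambda t: abs(t[0] - x))[:k]
    votes = sum(label for _, label in nearest)
    return 1 if votes * 2 >= k else 0  # majority vote, ties go to class 1

def cv_accuracy(data, k, n_folds=3):
    folds = [data[i::n_folds] for i in range(n_folds)]
    correct = total = 0
    for i in range(n_folds):
        test = folds[i]
        train = [d for j, f in enumerate(folds) if j != i for d in f]
        correct += sum(knn_predict(train, x, k) == y for x, y in test)
        total += len(test)
    return correct / total

rng = random.Random(0)
# Two well-separated classes on the line, with mild Gaussian noise
data = [(rng.gauss(0, 1), 0) for _ in range(60)] + [(rng.gauss(4, 1), 1) for _ in range(60)]
rng.shuffle(data)
grid = [1, 3, 5, 7, 9]                             # candidate parameter values
best_k = max(grid, key=lambda k: cv_accuracy(data, k))
print(best_k, round(cv_accuracy(data, best_k), 2))
```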
The Navy’s Application of Ocean Forecasting to Decision Support
2014-09-01
Prediction Center (OPC) website for graphics or the National Operational Model Archive and Distribution System (NOMADS) for data files. Regional... inputs:
» GLOBE = Global Land One-km Base Elevation
» WVS = World Vector Shoreline
» DBDB2 = Digital Bathymetry Data Base, 2-minute resolution
» DBDBV = Digital Bathymetry Data Base, variable resolution
Oceanography | Vol. 27, No. 3
Very High-Resolution Coastal Circulation Models Nearshore
Scholz, Miklas; Uzomah, Vincent C
2013-08-01
The retrofitting of sustainable drainage systems (SuDS) such as permeable pavements is currently undertaken ad hoc, using expert experience supported by minimal guidance based predominantly on hard engineering variables. There is a lack of practical decision support tools for rapidly assessing the potential ecosystem services of retrofitting permeable pavements in urban areas that either feature existing trees or should be planted with trees in the near future. The aim of this paper is thus to develop an innovative rapid decision support tool based on novel ecosystem service variables for retrofitting permeable pavement systems close to trees. This unique tool proposes the retrofitting of the permeable pavements that obtained the highest ecosystem service score for a specific urban site enhanced by the presence of trees. This approach is based on a novel ecosystem service philosophy adapted to permeable pavements, rather than on traditional engineering judgement associated with variables based on quick community and environment assessments. For an example case study area such as Greater Manchester, which was dominated by Sycamore and Common Lime, a comparison with the traditional approach of determining community and environment variables indicates that permeable pavements are generally a preferred SuDS option. Permeable pavements combined with urban trees received relatively high scores because of their great potential impact on water and air quality improvement and flood control. The outcomes of this paper are likely to lead to more combined permeable pavement and tree systems in the urban landscape, which are beneficial for humans and the environment. Copyright © 2013 Elsevier B.V. All rights reserved.
Predictors of Start of Different Antidepressants in Patient Charts among Patients with Depression
Kim, Hyungjin Myra; Zivin, Kara; Choe, Hae Mi; Stano, Clare M.; Ganoczy, Dara; Walters, Heather; Valenstein, Marcia
2016-01-01
Background: In usual psychiatric care, antidepressant treatments are selected based on physician and patient preferences rather than being randomly allocated, which can produce spurious treatment-outcome associations in observational studies. Objectives: To identify factors recorded in electronic medical chart progress notes that predict antidepressant selection among patients diagnosed with depression. Methods: This retrospective study sample consisted of 556 randomly selected Veterans Health Administration (VHA) patients diagnosed with depression from April 1, 1999 to September 30, 2004, stratified by antidepressant agent, geographic region, gender, and year of depression cohort entry. Predictors were obtained from administrative data, and additional variables were abstracted from electronic chart notes from the year prior to the start of the antidepressant, in five categories: clinical symptoms and diagnoses, substance use, life stressors, behavioral/ideation measures (e.g., suicide attempts), and treatments received. Multinomial logistic regression was used to assess the predictors associated with prescribing different antidepressants, and adjusted relative risk ratios (RRR) are reported. Results: Among the administrative data-based variables, gender, age, illicit drug abuse or dependence, and the number of psychiatric medications in the prior year were significantly associated with antidepressant selection. After adjusting for administrative data-based variables, sleep problems (RRR = 2.47) or marital issues (RRR = 2.64) identified in the charts were significantly associated with prescribing mirtazapine rather than sertraline; no other chart-based variable showed a significant or large association. Conclusion: Some chart-based variables predicted antidepressant selection, but they were few and not highly predictive in patients treated for depression.
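As a minimal illustration of what a relative risk ratio measures in this setting: the paper reports adjusted RRRs from a fitted multinomial model, whereas the unadjusted version below is computed directly from a 2 × 2 table of invented counts.

```python
import numpy as np

# Hypothetical chart-review counts (invented for illustration):
# rows = sleep problems noted (no, yes); columns = antidepressant
# started (sertraline = reference category, mirtazapine).
counts = np.array([[200, 30],
                   [ 60, 22]])

def relative_risk_ratio(table):
    """Unadjusted RRR: the ratio of (non-reference vs reference outcome)
    odds in the exposed group to the same odds in the unexposed group."""
    (ref0, alt0), (ref1, alt1) = table.astype(float)
    return (alt1 / ref1) / (alt0 / ref0)

print(round(relative_risk_ratio(counts), 2))  # 2.44
```

An RRR above 1 means the exposure (here, a sleep-problem note) shifts prescribing toward the non-reference drug; in a multinomial logistic model the adjusted RRR is the exponentiated coefficient relative to the reference outcome.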
PMID:25943003
The first search for variable stars in the open cluster NGC 6253 and its surrounding field
NASA Astrophysics Data System (ADS)
de Marchi, F.; Poretti, E.; Montalto, M.; Desidera, S.; Piotto, G.
2010-01-01
Aims: This work presents the first high-precision variability survey in the field of the intermediate-age, metal-rich open cluster NGC 6253. Clusters of this type are benchmarks for stellar evolution models. Methods: Continuous photometric monitoring of the cluster and its surrounding field was performed over a time span of ten nights using the Wide Field Imager mounted at the ESO-MPI 2.2 m telescope. High-quality time series, each composed of about 800 data points, were obtained for 250 000 stars using the ISIS and DAOPHOT packages. Candidate members were selected by using the colour-magnitude diagrams and period-luminosity-colour relations. Membership probabilities based on the proper motions were also used. The membership of all the variables discovered within a radius of 8′ from the centre is discussed by comparing the incidence of the classes in the cluster direction and in the surrounding field. Results: We discovered 595 variables and characterized most of them, providing their variability classes, periods, and amplitudes. The sample is complete for short periods: we classified 20 pulsating variables, 225 contact systems, 99 eclipsing systems (22 β Lyr type, 59 β Per type, 18 RS CVn type), and 77 rotational variables. The limited time baseline hampered the precise characterization of 173 variables with periods longer than 4-5 days. Moreover, we found a cataclysmic system undergoing an outburst of about 2.5 mag. We propose a list of 35 variable stars as probable members of NGC 6253.
Upper and lower bounds of ground-motion variabilities: implication for source properties
NASA Astrophysics Data System (ADS)
Cotton, Fabrice; Reddy-Kotha, Sreeram; Bora, Sanjay; Bindi, Dino
2017-04-01
One of the key challenges of seismology is to analyse the physical factors that control earthquake and ground-motion variabilities. Such analysis is particularly important for calibrating physics-based simulations and seismic hazard estimations at high frequencies. Within the framework of ground-motion prediction equation (GMPE) development, ground-motion residuals (differences between recorded ground motions and the values predicted by a GMPE) are computed. The exponential growth of seismological near-source records and modern GMPE analysis techniques allow these residuals to be partitioned into between-event and within-event components. In particular, the between-event term quantifies all those repeatable source effects (e.g. related to stress-drop or kappa-source variability) which have not been accounted for by the magnitude-dependent term of the model. In this presentation, we first discuss the between-event variabilities computed both in the Fourier and response-spectra domains, using recent high-quality global accelerometric datasets (e.g. NGA-West2, RESORCE, KiK-net). These analyses lead to an assessment of upper bounds on ground-motion variability. Then, we compare these upper bounds with lower bounds estimated by analysing seismic sequences which occurred on specific fault systems (e.g., located in central Italy or in Japan). We show that the lower bounds of between-event variability are surprisingly large, which indicates a large variability of earthquake dynamic properties even within the same fault system. Finally, these upper and lower bounds of ground-shaking variability are discussed in terms of the variability of earthquake physical properties (e.g., stress drop and kappa-source).
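In practice the between-/within-event partition is done with mixed-effects GMPE regression; the method-of-moments sketch below on simulated residuals conveys the idea (the variance values and the simple estimator are illustrative assumptions, not the presentation's analysis):

```python
import numpy as np

rng = np.random.default_rng(1)
n_events, n_records = 200, 15
tau_true, phi_true = 0.3, 0.5   # between- and within-event std (invented)

# Simulated residuals: a repeatable event term plus record-to-record noise.
event_term = rng.normal(0.0, tau_true, n_events)
resid = event_term[:, None] + rng.normal(0.0, phi_true,
                                         (n_events, n_records))

# Method-of-moments partition (a simple stand-in for the mixed-effects
# regression used in modern GMPE practice).
event_mean = resid.mean(axis=1)          # between-event terms
within = resid - event_mean[:, None]     # within-event deviations
phi_hat = np.sqrt((within ** 2).sum() / (n_events * (n_records - 1)))
# Event means still carry within-event noise of variance phi^2 / n_records.
tau_hat = np.sqrt(max(event_mean.var(ddof=1) - phi_hat ** 2 / n_records,
                      0.0))
print(round(tau_hat, 2), round(phi_hat, 2))
```

Here tau (between-event) and phi (within-event) are the standard deviations whose upper and lower bounds the presentation discusses.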
Ferri, D V; Munhoz, C F; Neves, P M O; Ferracin, L M; Sartori, D; Vieira, M L C; Fungaro, M H P
2012-12-01
The banana weevil Cosmopolites sordidus (Germar) is one of a number of pests that attack banana crops. The use of the entomopathogenic fungus Beauveria bassiana as a biological control agent for this pest may contribute towards reducing the application of chemical insecticides on banana crops. In this study, the genetic variability of a collection of Brazilian isolates of B. bassiana was evaluated. Samples were obtained from various geographic regions of Brazil, and from different hosts of the Curculionidae family. Based on the DNA fingerprints generated by RAPD and AFLP, we found that 92 and 88 % of the loci were polymorphic, respectively. The B. bassiana isolates were attributed to two genotypic clusters based on the RAPD data, and to three genotypic clusters, when analyzed with AFLP. The nucleotide sequences of nuclear ribosomal DNA intergenic spacers confirmed that all isolates are in fact B. bassiana. Analysis of molecular variance showed that variability among the isolates was not correlated with geographic origin or hosts. A RAPD-specific marker for isolate CG 1024, which is highly virulent to C. sordidus, was cloned and sequenced. Based on the sequences obtained, specific PCR primers BbasCG1024F (5'-TGC GGC TGA GGA GGA CT-3') and BbasCG1024R (5'-TGC GGC TGA GTG TAG AAC-3') were designed for detecting and monitoring this isolate in the field.
NASA Astrophysics Data System (ADS)
Herkül, Kristjan; Peterson, Anneliis; Paekivi, Sander
2017-06-01
Both basic science and marine spatial planning are in need of high-resolution, spatially continuous data on seabed habitats and biota. As conventional point-wise sampling is unable to cover large spatial extents in high detail, it must be supplemented with remote sensing and modeling in order to fulfill the scientific and management needs. The combined use of in situ sampling, sonar scanning, and mathematical modeling is becoming the main method for mapping both abiotic and biotic seabed features. Further development and testing of the methods in varying locations and environmental settings is essential for moving towards a unified and generally accepted methodology. To fill the relevant research gap in the Baltic Sea, we used multibeam sonar and mathematical modeling methods - generalized additive models (GAM) and random forest (RF) - together with underwater video to map seabed substrate and epibenthos of offshore shallows. In addition to testing the general applicability of the proposed complex of techniques, the predictive power of different sonar-based variables and modeling algorithms was tested. Mean depth, followed by mean backscatter, were the most influential variables in most of the models. Generally, mean values of sonar-based variables had higher predictive power than their standard deviations. The predictive accuracy of RF was higher than that of GAM. To conclude, we found the method to be feasible, with predictive accuracy similar to previous studies of sonar-based mapping.
NASA Astrophysics Data System (ADS)
Zhang, Ya-feng; Wang, Xin-ping; Hu, Rui; Pan, Yan-xia
2016-08-01
Throughfall is known to be a critical component of the hydrological and biogeochemical cycles of forested ecosystems, with inherent temporal and spatial variability. Yet little is understood concerning the throughfall variability of shrubs and the associated controlling factors in arid desert ecosystems. Here we systematically investigated the variability of throughfall of two morphologically distinct xerophytic shrubs (Caragana korshinskii and Artemisia ordosica) within a re-vegetated arid desert ecosystem, and evaluated the effects of shrub structure and rainfall characteristics on throughfall based on densely gauged throughfall measurements at the event scale. We found that the morphological differences between the two studied shrub species were not sufficient to generate a statistically significant difference (at the P < 0.05 level) in throughfall under the same rainfall and meteorological conditions in our study area, with a throughfall percentage of 69.7% for C. korshinskii and 64.3% for A. ordosica. We also observed a highly variable patchy pattern of throughfall beneath individual shrub canopies, but the spatial patterns appeared to be stable among rainfall events based on time stability analysis. Throughfall increased linearly with distance from the shrub base for both shrubs, and the radial direction beneath shrub canopies had a pronounced impact on throughfall. Throughfall variability, expressed as the coefficient of variation (CV) of throughfall, tended to decline with increasing rainfall amount, intensity and duration, and stabilized beyond a certain threshold. Our findings highlight the great variability of throughfall beneath the canopies of xerophytic shrubs and the time stability of the throughfall pattern among rainfall events. The spatially heterogeneous but temporally stable throughfall is expected to generate a dynamic patchy distribution of soil moisture beneath shrub canopies within arid desert ecosystems.
NASA Astrophysics Data System (ADS)
Dennis, L.; Roesler, E. L.; Guba, O.; Hillman, B. R.; McChesney, M.
2016-12-01
The Atmospheric Radiation Measurement (ARM) climate research facility has three sites located on the North Slope of Alaska (NSA): Barrow, Oliktok, and Atqasuk. These sites, in combination with one other at Toolik Lake, have the potential to become a "megasite" which would combine observational data and high-resolution modeling to produce high-resolution data products for the climate community. Such a data product requires high-resolution modeling over the area of the megasite. We present three variable-resolution atmospheric general circulation model (AGCM) configurations as potential alternatives to stand-alone high-resolution regional models. Each configuration is based on a global cubed-sphere grid with an effective resolution of 1 degree, refined down to 1/8 degree over an area surrounding the ARM megasite. The three grids vary in the size of the refined area, with 13k, 9k, and 7k elements. SquadGen, NCL, and GIMP are used to create the grids. Grids vary based upon the selection of areas of refinement that capture climate and weather processes that may affect a proposed NSA megasite. A smaller area of high resolution may not fully resolve climate and weather processes before they reach the NSA; however, grids with smaller areas of refinement have a significantly reduced computational cost compared with grids with larger areas of refinement. The optimal size and shape of the area of refinement for a variable-resolution model at the NSA is investigated.
Remote sensing of Sonoran Desert vegetation structure and phenology with ground-based LiDAR
Sankey, Joel B.; Munson, Seth M.; Webb, Robert H.; Wallace, Cynthia S.A.; Duran, Cesar M.
2015-01-01
Long-term vegetation monitoring efforts have become increasingly important for understanding ecosystem response to global change. Many traditional methods for monitoring can be infrequent and limited in scope. Ground-based LiDAR is one remote sensing method that offers a clear advancement to monitor vegetation dynamics at high spatial and temporal resolution. We determined the effectiveness of LiDAR to detect intra-annual variability in vegetation structure at a long-term Sonoran Desert monitoring plot dominated by cacti, deciduous and evergreen shrubs. Monthly repeat LiDAR scans of perennial plant canopies over the course of one year had high precision. LiDAR measurements of canopy height and area were accurate with respect to total station survey measurements of individual plants. We found an increase in the number of LiDAR vegetation returns following the wet North American Monsoon season. This intra-annual variability in vegetation structure detected by LiDAR was attributable to a drought deciduous shrub Ambrosia deltoidea, whereas the evergreen shrub Larrea tridentata and cactus Opuntia engelmannii had low variability. Benefits of using LiDAR over traditional methods to census desert plants are more rapid, consistent, and cost-effective data acquisition in a high-resolution, 3-dimensional context. We conclude that repeat LiDAR measurements can be an effective method for documenting ecosystem response to desert climatology and drought over short time intervals and at detailed-local spatial scale.
Khormi, Hassan M; Kumar, Lalit
2012-05-01
An important option in preventing the spread of dengue fever (DF) is to control and monitor its vector (Aedes aegypti) as well as to locate and destroy suitable mosquito breeding environments. The aim of the present study was to use a combination of environmental and socioeconomic variables to model areas at risk of DF. These variables include clinically confirmed DF cases, mosquito counts, population density in inhabited areas, total populations per district, water access, neighbourhood quality and the spatio-temporal risk of DF based on the average weekly frequency of DF incidence. Out of 111 districts investigated, 17 (15%), covering a total area of 121 km2, were identified as of high risk, 25 (22%), covering 133 km2, were identified as of medium risk, 18 (16%), covering 180 km2, were identified as of low risk and 51 (46%), covering 726 km2, were identified as of very low risk. The resultant model shows that most areas at risk of DF were concentrated in the central part of Jeddah county, Saudi Arabia. The methods used can be implemented as routine procedures for control and prevention. A concerted intervention in the medium- and high-risk districts identified in this study could be highly effective in reducing transmission of DF in the area as a whole.
NASA Astrophysics Data System (ADS)
Berger, Sophie; Drews, Reinhard; Helm, Veit; Sun, Sainan; Pattyn, Frank
2017-11-01
Ice shelves control the dynamic mass loss of ice sheets through buttressing, and their integrity depends on the spatial variability of their basal mass balance (BMB), i.e. the difference between refreezing and melting. Here, we present an improved technique - based on satellite observations - to capture the small-scale variability in the BMB of ice shelves. As a case study, we apply the methodology to the Roi Baudouin Ice Shelf, Dronning Maud Land, East Antarctica, and derive its yearly averaged BMB at 10 m horizontal gridding. We use mass conservation in a Lagrangian framework based on high-resolution surface velocities, atmospheric-model surface mass balance and hydrostatic ice-thickness fields (derived from TanDEM-X surface elevation). Spatial derivatives are implemented using total-variation differentiation, which preserves abrupt changes in flow velocities and their spatial gradients. Such changes may reflect a dynamic response to localized basal melting and should be included in the mass budget. Our BMB field exhibits much spatial detail and ranges from -14.7 to 8.6 m a-1 ice equivalent. The highest melt rates are found close to the grounding line, where the pressure melting point is high and the ice shelf slope is steep. The BMB field agrees well with on-site measurements from phase-sensitive radar, although independent radar profiling indicates unresolved spatial variations in firn density. We show that an elliptical surface depression (10 m deep and with an extent of 0.7 km × 1.3 km) lowers by 0.5 to 1.4 m a-1, which we tentatively attribute to a transient adaptation to hydrostatic equilibrium. We find evidence for elevated melting beneath ice shelf channels (with melting concentrated on the channels' flanks). However, farther downstream from the grounding line, the majority of ice shelf channels advect passively (i.e. neither melting nor refreezing) toward the ice shelf front.
Although the absolute, satellite-based BMB values remain uncertain, we have high confidence in the spatial variability on sub-kilometre scales. This study highlights expected challenges for a full coupling between ice and ocean models.
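In its Eulerian steady-state form, the mass budget behind a BMB estimate of this kind reduces to BMB = SMB - dH/dt - div(Hv). A toy 1-D flowline (all geometry and numbers invented, and far simpler than the paper's Lagrangian, total-variation-regularized pipeline) shows the bookkeeping:

```python
import numpy as np

dx = 100.0                      # grid spacing (m)
x = np.arange(0.0, 5000.0, dx)
H = 300.0 - 0.02 * x            # ice thickness (m), thinning seaward
v = 300.0 + 0.05 * x            # flow speed (m/yr), accelerating seaward
smb = np.full_like(x, 0.3)      # surface mass balance (m ice eq / yr)
dHdt = np.zeros_like(x)         # steady state assumed

# Flux-divergence form of mass conservation: BMB = SMB - dH/dt - d(Hv)/dx.
bmb = smb - dHdt - np.gradient(H * v, dx)
print(round(float(bmb[1]), 2))  # -8.5: strong melting near the inland edge
```

With these invented profiles the inferred melt is strongest near the inland (grounding-line) end and weakens downstream, the qualitative pattern the abstract reports.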
NASA Astrophysics Data System (ADS)
Montzka, C.; Rötzer, K.; Bogena, H. R.; Vereecken, H.
2017-12-01
Improving the coarse spatial resolution of global soil moisture products from SMOS, SMAP and ASCAT is a topic of active research. Soil texture heterogeneity is known to be one of the main sources of soil moisture spatial variability. A method has been developed that predicts the soil moisture standard deviation as a function of the mean soil moisture based on soil texture information. It is a closed-form expression using stochastic analysis of 1D unsaturated gravitational flow in an infinitely long vertical profile based on the Mualem-van Genuchten model and first-order Taylor expansions. With the recent development of high-resolution maps of basic soil properties such as soil texture and bulk density, relevant information to estimate soil moisture variability within a satellite product grid cell is available. Here, we predict for each SMOS, SMAP and ASCAT grid cell the sub-grid soil moisture variability based on the SoilGrids1km data set. We provide a look-up table that indicates the soil moisture standard deviation for any given soil moisture mean. The resulting data set provides important information for downscaling coarse soil moisture observations of the SMOS, SMAP and ASCAT missions. Downscaling SMAP data by a field capacity proxy indicates adequate accuracy of the sub-grid soil moisture patterns.
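The closed-form expression itself is not reproduced in the abstract; the look-up-table idea can be sketched with invented values that follow the commonly reported convex mean-variance relationship (variability peaking at intermediate wetness):

```python
import numpy as np

# Hypothetical look-up table for one grid cell (values invented, not the
# closed-form Mualem-van Genuchten result): mean volumetric soil
# moisture vs expected sub-grid standard deviation.
mean_sm = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40])
std_sm = np.array([0.010, 0.025, 0.038, 0.045, 0.047, 0.043, 0.033,
                   0.018])

def subgrid_std(mean_theta):
    """Interpolate the expected sub-grid soil moisture standard deviation
    for a given grid-cell mean (clipped to the table's support)."""
    return np.interp(mean_theta, mean_sm, std_sm)

print(subgrid_std(0.22))  # falls between the 0.20 and 0.25 entries
```

A downscaling scheme would evaluate such a table per satellite grid cell, with the table entries derived from the cell's soil-texture information.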
Krieger, M; Schwabenbauer, E-M; Hoischen-Taubner, S; Emanuelson, U; Sundrum, A
2018-03-01
Production diseases in dairy cows are multifactorial: they emerge from complex interactions between many different farm variables. Variables with a large impact on production diseases can be identified for groups of farms using statistical models, but these methods cannot identify the most influential variables on individual farms. This, however, is necessary for herd health planning, because farm conditions and associated health problems vary greatly between farms. The aim of this study was to rank variables according to their anticipated effect on production diseases at the farm level by applying a graph-based impact analysis on 192 European organic dairy farms. Direct impacts between 13 pre-defined variables were estimated for each farm during a round-table discussion attended by practitioners, that is, the farmer, the veterinarian and the herd advisor. Indirect impacts were derived through graph analysis, taking into account impact strengths. Across farms, the factors exerting the most influence on production diseases were 'feeding', 'hygiene' and 'treatment' (direct impacts), as well as 'knowledge and skills' and 'herd health monitoring' (indirect impacts). Factors strongly influenced by production diseases were 'milk performance', 'financial resources' and 'labour capacity' (directly and indirectly). Ranking variables at the farm level revealed considerable differences between farms in terms of their most influential and most influenced factors. Consequently, very different strategies may be required to reduce production diseases on these farms. The method is based on perceptions and estimations and is thus prone to errors. From our point of view, however, this weakness is clearly outweighed by the ability to assess and analyse farm-specific relationships and thus to complement general knowledge with contextual knowledge. 
Therefore, we conclude that graph-based impact analysis represents a promising decision support tool for herd health planning. The next steps include testing the method using more specific and problem-oriented variables as well as evaluating its effectiveness.
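The abstract does not specify the graph algorithm, so the following is a hedged MICMAC-style sketch: direct impacts sit in a matrix D, indirect impacts are read off two-step paths (D @ D), and variables are ranked by total outgoing influence. The four variables and all impact strengths are invented (the study uses 13 variables).

```python
import numpy as np

# Hypothetical direct-impact matrix: D[i, j] = estimated strength of
# variable i's direct impact on variable j (0 = none, 3 = strong).
labels = ["feeding", "hygiene", "knowledge", "prod. diseases"]
D = np.array([[0, 0, 0, 3],
              [0, 0, 0, 2],
              [2, 2, 0, 0],   # knowledge acts on feeding and hygiene
              [0, 0, 0, 0]], dtype=float)

# Indirect impacts flow along two-step paths; total influence is the
# row sum of direct plus indirect impacts.
indirect = D @ D
influence = (D + indirect).sum(axis=1)
ranking = [labels[i] for i in np.argsort(-influence)]
print(ranking)
```

With these invented numbers, 'knowledge' ranks first despite having no direct impact on production diseases, mirroring the study's finding that indirectly acting factors can dominate a farm's ranking.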
Superior Intraparietal Sulcus Controls the Variability of Visual Working Memory Precision.
Galeano Weber, Elena M; Peters, Benjamin; Hahn, Tim; Bledowski, Christoph; Fiebach, Christian J
2016-05-18
Limitations of working memory (WM) capacity depend strongly on the cognitive resources that are available for maintaining WM contents in an activated state. Increasing the number of items to be maintained in WM has been shown to reduce the precision of WM and to increase the variability of WM precision over time. Although WM precision was recently associated with neural codes, particularly in early sensory cortex, we so far have no understanding of the neural bases underlying the variability of WM precision, and how WM precision is preserved under high load. To fill this gap, we combined human fMRI with computational modeling of behavioral performance in a delayed color-estimation WM task. Behavioral results replicate a reduction of WM precision and an increase of precision variability under high loads (5 > 3 > 1 colors). Load-dependent BOLD signals in primary visual cortex (V1) and superior intraparietal sulcus (IPS), measured during the WM task at 2-4 s after sample onset, were modulated by individual differences in load-related changes in the variability of WM precision. Whereas a stronger load-related BOLD increase in superior IPS was related to smaller increases in precision variability, thus stabilizing WM performance, the reverse was observed for V1. Finally, the detrimental effect of load on behavioral precision and precision variability was accompanied by a load-related decline in the accuracy of decoding the memory stimuli (colors) from left superior IPS. We suggest that the superior IPS may contribute to stabilizing visual WM performance by reducing the variability of memory precision in the face of higher load. This study investigates the neural bases of capacity limitations in visual working memory by combining fMRI with cognitive modeling of behavioral performance in human participants. 
It provides evidence that the superior intraparietal sulcus (IPS) is a critical brain region that influences the variability of visual working memory precision between and within individuals (Fougnie et al., 2012; van den Berg et al., 2012) under increased memory load, possibly in cooperation with perceptual systems of the occipital cortex. These findings substantially extend our understanding of the nature of capacity limitations in visual working memory and their neural bases. Our work underlines the importance of integrating cognitive modeling with univariate and multivariate methods in fMRI research, thus improving our knowledge of brain-behavior relationships. Copyright © 2016 the authors 0270-6474/16/365623-13$15.00/0.
NASA Astrophysics Data System (ADS)
Blaen, Phillip; Khamis, Kieran; Lloyd, Charlotte; Krause, Stefan
2017-04-01
At the river catchment scale, storm events can drive highly variable behaviour in nutrient and water fluxes, yet short-term dynamics are frequently missed by low-resolution sampling regimes. In addition, nutrient source contributions can vary significantly within and between storm events. Our inability to identify and characterise time-dynamic source zone contributions severely hampers the adequate design of land use management practices to control nutrient exports from agricultural landscapes. Here, we utilise an 8-month high-frequency (hourly) time series of streamflow, nitrate concentration (NO3) and fluorescent dissolved organic matter concentration (FDOM) derived from optical in-situ sensors located in a headwater agricultural catchment. We characterised variability in flow and nutrient dynamics across 29 storm events. Storm events represented 31% of the time series and contributed disproportionately to nutrient loads (43% of NO3 and 36% of FDOM) relative to their duration. Principal components analysis of potential hydroclimatological controls on nutrient fluxes demonstrated that a small number of components, representing >90% of variance in the dataset, were highly significant model predictors of inter-event variability in catchment nutrient export. Hysteresis analysis of nutrient concentration-discharge relationships suggested that spatially discrete source zones existed for NO3 and FDOM, and that activation of these zones varied on an event-specific basis. Our results highlight the benefits of high-frequency in-situ monitoring for characterising complex short-term nutrient dynamics and unravelling connections between hydroclimatological variability and river nutrient export and source zone activation under extreme flow conditions. These new process-based insights are fundamental to underpinning the development of targeted management measures to reduce nutrient loading of surface waters.
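Hysteresis analysis compares concentrations on the rising and falling limbs of a storm at matched discharge levels. The normalized index below is one common formulation, sketched here as an assumption rather than the authors' exact metric:

```python
import numpy as np

def hysteresis_index(q, c, n_levels=10):
    """Sketch of a normalized hysteresis index: mean difference between
    rising- and falling-limb concentration at matched discharge levels
    (positive = clockwise loop, i.e. concentration peaks early)."""
    peak = int(np.argmax(q))
    q_norm = (q - q.min()) / (q.max() - q.min())
    c_norm = (c - c.min()) / (c.max() - c.min())
    levels = np.linspace(0.1, 0.9, n_levels)
    # Interpolate each limb onto common normalized discharge levels.
    rise = np.interp(levels, q_norm[:peak + 1], c_norm[:peak + 1])
    fall = np.interp(levels, q_norm[peak:][::-1], c_norm[peak:][::-1])
    return float(np.mean(rise - fall))

# Synthetic storm: discharge peaks mid-event, concentration peaks
# earlier, producing a clockwise (flushing-type) loop.
t = np.linspace(0.0, 1.0, 101)
q = np.exp(-((t - 0.5) / 0.15) ** 2)
c = np.exp(-((t - 0.4) / 0.15) ** 2)
print(hysteresis_index(q, c) > 0)  # True
```

A clockwise loop (positive index) is commonly read as a near-channel source flushed early in the event; an anticlockwise loop suggests a more distal source, which is the kind of contrast used to infer spatially discrete source zones.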
Kogut, Katherine; Eisen, Ellen A.; Jewell, Nicholas P.; Quirós-Alcalá, Lesliam; Castorina, Rosemary; Chevrier, Jonathan; Holland, Nina T.; Barr, Dana Boyd; Kavanagh-Baird, Geri; Eskenazi, Brenda
2012-01-01
Background: Dialkyl phosphate (DAP) metabolites in spot urine samples are frequently used to characterize children’s exposures to organophosphorous (OP) pesticides. However, variable exposure and short biological half-lives of OP pesticides could result in highly variable measurements, leading to exposure misclassification. Objective: We examined within- and between-child variability in DAP metabolites in urine samples collected during 1 week. Methods: We collected spot urine samples over 7 consecutive days from 25 children (3–6 years of age). On two of the days, we collected 24-hr voids. We assessed the reproducibility of urinary DAP metabolite concentrations and evaluated the sensitivity and specificity of spot urine samples as predictors of high (top 20%) or elevated (top 40%) weekly average DAP metabolite concentrations. Results: Within-child variance exceeded between-child variance by a factor of two to eight, depending on metabolite grouping. Although total DAP concentrations in single spot urine samples were moderately to strongly associated with concentrations in same-day 24-hr samples (r ≈ 0.6–0.8, p < 0.01), concentrations in spot samples collected > 1 day apart and in 24-hr samples collected 3 days apart were weakly correlated (r ≈ –0.21 to 0.38). Single spot samples predicted high (top 20%) and elevated (top 40%) full-week average total DAP excretion with only moderate sensitivity (≈ 0.52 and ≈ 0.67, respectively) but relatively high specificity (≈ 0.88 and ≈ 0.78, respectively). Conclusions: The high variability we observed in children’s DAP metabolite concentrations suggests that single-day urine samples provide only a brief snapshot of exposure. Sensitivity analyses suggest that classification of cumulative OP exposure based on spot samples is prone to type 2 classification errors. PMID:23052012
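The within-/between-child comparison and the spot-sample sensitivity analysis can be sketched on simulated log-metabolite data. All numbers are invented, and the one-way ANOVA estimator below is a standard stand-in for whatever variance-components model the authors used; the within-child SD is set larger than the between-child SD, consistent with the reported factor of two to eight in variance terms.

```python
import numpy as np

rng = np.random.default_rng(0)
n_children, n_days = 25, 7
sd_between, sd_within = 1.0, 2.0   # within-child variance 4x between

child_mean = rng.normal(3.0, sd_between, n_children)
log_dap = child_mean[:, None] + rng.normal(0.0, sd_within,
                                           (n_children, n_days))

# One-way ANOVA variance components (children = groups).
day_mean = log_dap.mean(axis=1)
ms_between = (n_days * ((day_mean - log_dap.mean()) ** 2).sum()
              / (n_children - 1))
ms_within = (((log_dap - day_mean[:, None]) ** 2).sum()
             / (n_children * (n_days - 1)))
var_between = max((ms_between - ms_within) / n_days, 0.0)
icc = var_between / (var_between + ms_within)

# Can a single spot sample flag the top-20% weekly averages?
high = day_mean >= np.quantile(day_mean, 0.8)
flagged = log_dap[:, 0] >= np.quantile(log_dap[:, 0], 0.8)
sensitivity = (flagged & high).sum() / high.sum()
print(round(icc, 2), round(sensitivity, 2))
```

With within-child variance dominating, the intraclass correlation is low and a single spot sample flags high weekly exposure only imperfectly, which is the exposure-misclassification mechanism the paper describes.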
Global Gradients of Coral Exposure to Environmental Stresses and Implications for Local Management
Maina, Joseph; McClanahan, Tim R.; Venus, Valentijn; Ateweberhan, Mebrahtu; Madin, Joshua
2011-01-01
Background The decline of coral reefs globally underscores the need for a spatial assessment of their exposure to multiple environmental stressors to estimate vulnerability and evaluate potential counter-measures. Methodology/Principal Findings This study combined global spatial gradients of coral exposure to radiation stress factors (temperature, UV light and doldrums), stress-reinforcing factors (sedimentation and eutrophication), and stress-reducing factors (temperature variability and tidal amplitude) to produce a global map of coral exposure and identify areas where exposure depends on factors that can be locally managed. A systems analytical approach was used to define interactions between radiation stress variables, stress-reinforcing variables and stress-reducing variables. Fuzzy logic and spatial ordinations were employed to quantify coral exposure to these stressors. Globally, corals are exposed to radiation and reinforcing stress, albeit with high spatial variability within regions. Based on ordination of exposure grades, regions group into two clusters. The first cluster was composed of severely exposed regions with high radiation and low reducing stress scores (South East Asia, Micronesia, the Eastern Pacific and the central Indian Ocean) or alternatively high reinforcing stress scores (the Middle East and Western Australia). The second cluster was composed of moderately to highly exposed regions with moderate to high scores in both radiation and reducing factors (the Caribbean, Great Barrier Reef (GBR), Central Pacific, Polynesia and the western Indian Ocean), where the GBR was strongly associated with reinforcing stress. Conclusions/Significance Despite radiation stress being the most dominant stressor, the exposure of coral reefs could be reduced by locally managing chronic human impacts that act to reinforce radiation stress. 
Future research and management efforts should focus on incorporating the factors that mitigate the effect of coral stressors until long-term carbon reductions are achieved through global negotiations. PMID:21860667
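The fuzzy-logic grading used in the study above rests on membership functions that map a raw stressor value to a degree of membership in a class such as "high exposure". A minimal sketch of one such function follows; the thresholds are invented for illustration and are not the study's calibrated values.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 outside [a, d], 1 on [b, c],
    with linear ramps in between. Grades how strongly x belongs to a
    class such as 'high thermal stress'."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical 'high thermal stress' grade for sea surface temperature (deg C)
for sst in (24.0, 27.0, 29.0, 31.0):
    print(sst, trapezoid(sst, 25.0, 28.0, 30.0, 33.0))
```

Per-stressor grades like this can then be combined (e.g. by fuzzy AND/OR operators) into the composite exposure scores that the ordination works on.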
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bond-Lamberty, Benjamin; Bunn, Andrew G.; Thomson, Allison M.
High-latitude northern ecosystems are experiencing rapid climate changes, and represent a large potential climate feedback because of their high soil carbon densities and shifting disturbance regimes. A significant carbon flow from these ecosystems is soil respiration (RS, the flow of carbon dioxide, generated by plant roots and soil fauna, from the soil surface to atmosphere), and any change in the high-latitude carbon cycle might thus be reflected in RS observed in the field. This study used two variants of a machine-learning algorithm and least squares regression to examine how remotely-sensed canopy greenness (NDVI), climate, and other variables are coupled to annual RS based on 105 observations from 64 circumpolar sites in a global database. The addition of NDVI roughly doubled model performance, with the best-performing models explaining ~62% of observed RS variability.
Various Approaches for Targeting Quasar Candidates
NASA Astrophysics Data System (ADS)
Zhang, Y.; Zhao, Y.
2015-09-01
With the establishment and development of space-based and ground-based observational facilities, improving the scientific output of high-cost facilities remains a pressing issue for astronomers. The discovery of new and rare quasars attracts much attention, and many different methods to select quasar candidates have been developed. Among them, some are based on color cuts, some on multiwavelength data, some rely on the variability of quasars, some are based on data mining, and some depend on ensemble methods.
Effects of Topography-driven Micro-climatology on Evaporation
NASA Astrophysics Data System (ADS)
Adams, D. D.; Boll, J.; Wagenbrenner, N. S.
2017-12-01
The effects of spatial-temporal variation of climatic conditions on evaporation in micro-climates are not well defined. Current spatially-based remote sensing and modeling of evaporation is limited at high resolutions and over complex topographies. We investigated the effect of topography-driven micro-climatology on evaporation, supported by field measurements and modeling. Fourteen anemometers and thermometers were installed in intersecting transects over the complex topography of the Cook Agronomy Farm, Pullman, WA. WindNinja was used to create 2-D vector maps based on recorded wind observations. Spatial analysis of the vector maps using ArcGIS was performed to analyze wind patterns and variation. Based on the field measurements, wind speed and direction showed considerable variability with hill-slope location in this complex topography: wind speed and wind direction varied up to threefold and by more than 45 degrees, respectively, for a given time interval. The use of existing wind models enables prediction of wind variability over the landscape and subsequently of topography-driven evaporation patterns relative to wind. The magnitude of the spatial-temporal variability of wind therefore resulted in variable evaporation rates over the landscape. These variations may contribute to the uneven crop development patterns observed during the late growth stages of the agricultural crops at the study location. Use of hill-slope location indexes and appropriate methods for estimating actual evaporation support development of methodologies to better define topography-driven heterogeneity in evaporation. The cumulative effects of spatially-variable climatic factors on evaporation are important to quantify the localized water balance and inform precision farming practices.
Liu, Ruimin; Men, Cong; Yu, Wenwen; Xu, Fei; Wang, Qingrui; Shen, Zhenyao
2018-01-01
To examine the variabilities of source contributions in the Yangtze River Estuary (YRE), an uncertainty analysis based on positive matrix factorization (PMF) was applied to the source apportionment of the 16 priority PAHs in 120 surface sediment samples from four seasons. Based on signal-to-noise ratios, the PAHs categorized as "Bad" were excluded from the bootstrap estimation. Next, the spatial variability of the residuals was used to determine which species with non-normal curves should be excluded. The median values from the bootstrapped solutions were chosen as the best estimate of the true factor contributions, and the intervals from the 5th to 95th percentile represent the variability in each sample factor contribution. Based on the results, the median factor contributions of wood-grass combustion and coke plant emissions were highly correlated with the variability (R2 = 0.6797-0.9937) in every season. Meanwhile, the factor of coal and gasoline combustion had large variability with lower R2 values in every season, especially in summer (0.4784) and winter (0.2785). The coefficient of variation (CV) values based on the bootstrap (BS) simulations were used to indicate the uncertainties of the PAHs in every factor of each season. Acy, NaP and BgP consistently showed higher CV values, which suggested higher uncertainties in the BS simulations, and the PAH with the lowest concentration among all PAHs usually became the species with the higher uncertainty. Copyright © 2017 Elsevier Ltd. All rights reserved.
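The bootstrap coefficient-of-variation diagnostic described above can be sketched in a few lines. This is a minimal illustration with hypothetical contribution values, not the study's actual PMF workflow, and the function name is invented for the example.

```python
import random
import statistics

def bootstrap_cv(contributions, n_boot=1000, seed=42):
    """Coefficient of variation (CV) of the mean factor contribution,
    estimated from bootstrap resamples of the per-sample contributions:
    a higher CV signals higher uncertainty in that species' contribution."""
    rng = random.Random(seed)
    n = len(contributions)
    boot_means = []
    for _ in range(n_boot):
        resample = [contributions[rng.randrange(n)] for _ in range(n)]
        boot_means.append(statistics.fmean(resample))
    return statistics.stdev(boot_means) / statistics.fmean(boot_means)

# Hypothetical per-sample contributions of two PAHs to one factor
acy = [0.8, 1.5, 0.3, 2.9, 0.6, 1.1, 0.2, 3.4]          # widely scattered
nap = [1.00, 1.10, 0.90, 1.05, 0.95, 1.00, 1.02, 0.98]  # tightly clustered
print(bootstrap_cv(acy) > bootstrap_cv(nap))  # True: the noisier species has the higher CV
```

The 5th-95th percentile interval reported in the study comes from the same bootstrap distribution of factor contributions, just summarized by percentiles instead of the CV.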
NASA Astrophysics Data System (ADS)
Luo, Jiannan; Lu, Wenxi
2014-06-01
Sobol' sensitivity analyses based on different surrogates were performed on a trichloroethylene (TCE)-contaminated aquifer to assess the sensitivity of the design variables of remediation duration, surfactant concentration and injection rates at four wells to remediation efficiency. First, surrogate models of a multi-phase flow simulation model were constructed by applying radial basis function artificial neural network (RBFANN) and Kriging methods, and the two models were then compared. Based on the developed surrogate models, the Sobol' method was used to calculate the sensitivity indices of the design variables which affect the remediation efficiency. The coefficient of determination (R2) and the mean square error (MSE) of these two surrogate models demonstrated that both models had acceptable approximation accuracy; furthermore, the approximation accuracy of the Kriging model was slightly better than that of the RBFANN model. Sobol' sensitivity analysis results demonstrated that the remediation duration was the most important variable influencing remediation efficiency, followed by the rates of injection at wells 1 and 3, while the rates of injection at wells 2 and 4 and the surfactant concentration had negligible influence on remediation efficiency. In addition, high-order sensitivity indices were all smaller than 0.01, which indicates that interaction effects of these six factors were practically insignificant. The proposed surrogate-based Sobol' sensitivity analysis is an effective tool for calculating sensitivity indices, because it shows the relative contribution of the design variables (individually and in interaction) to the output performance variability with a limited number of runs of a computationally expensive simulation model. The sensitivity analysis results lay a foundation for optimizing the groundwater remediation process.
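The Sobol' estimator itself is compact. The sketch below uses the common Saltelli pick-and-freeze Monte Carlo form on the unit hypercube; a cheap analytic function stands in for the Kriging/RBFANN surrogate (the flow simulator and surrogates are not reproduced here, and the function names are invented).

```python
import random

def sobol_first_order(f, dim, n=20000, seed=1):
    """Saltelli pick-and-freeze Monte Carlo estimate of the first-order
    Sobol' indices S_i = Var(E[f|x_i]) / Var(f) on the unit hypercube."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    mean = sum(fA + fB) / (2 * n)
    var = sum((y - mean) ** 2 for y in fA + fB) / (2 * n)
    S = []
    for i in range(dim):
        # A with column i replaced by B's column i
        fABi = [f(A[k][:i] + [B[k][i]] + A[k][i + 1:]) for k in range(n)]
        # V_i ~ (1/n) * sum f(B) * (f(A with column i from B) - f(A))
        Vi = sum(fB[k] * (fABi[k] - fA[k]) for k in range(n)) / n
        S.append(Vi / var)
    return S

# Stand-in surrogate: an additive response, so S ~ [0.8, 0.2] analytically
S = sobol_first_order(lambda x: 2.0 * x[0] + x[1], 2)
print(S)  # S[0] ~ 0.8, S[1] ~ 0.2 up to Monte Carlo error
```

The attraction noted in the abstract is visible here: the indices cost only a few batches of surrogate evaluations rather than repeated runs of the expensive simulator.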
Strategies for minimizing sample size for use in airborne LiDAR-based forest inventory
Junttila, Virpi; Finley, Andrew O.; Bradford, John B.; Kauranne, Tuomo
2013-01-01
Recently airborne Light Detection And Ranging (LiDAR) has emerged as a highly accurate remote sensing modality to be used in operational scale forest inventories. Inventories conducted with the help of LiDAR are most often model-based, i.e. they use variables derived from LiDAR point clouds as the predictive variables that are to be calibrated using field plots. The measurement of the necessary field plots is a time-consuming and statistically sensitive process. Because of this, current practice often presumes hundreds of plots to be collected. But since these plots are only used to calibrate regression models, it should be possible to minimize the number of plots needed by carefully selecting the plots to be measured. In the current study, we compare several systematic and random methods for calibration plot selection, with the specific aim that they be used in LiDAR based regression models for forest parameters, especially above-ground biomass. The primary criteria compared are based on both spatial representativity as well as on their coverage of the variability of the forest features measured. In the former case, it is important also to take into account spatial auto-correlation between the plots. The results indicate that choosing the plots in a way that ensures ample coverage of both spatial and feature space variability improves the performance of the corresponding models, and that adequate coverage of the variability in the feature space is the most important condition that should be met by the set of plots collected.
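One simple way to choose calibration plots that cover feature-space variability is greedy farthest-point sampling on standardized LiDAR metrics. This is an illustrative heuristic only, not the specific selection designs compared in the study, and the plot features are hypothetical.

```python
import math
import random

def farthest_point_plots(features, k, seed=0):
    """Greedy farthest-point sampling: choose k plots so that each newly
    added plot is as far as possible (in feature space) from those already
    chosen, spreading the sample over the observed variability."""
    rng = random.Random(seed)
    chosen = [rng.randrange(len(features))]
    while len(chosen) < k:
        best_i, best_d = None, -1.0
        for i, x in enumerate(features):
            if i in chosen:
                continue
            d = min(math.dist(x, features[j]) for j in chosen)
            if d > best_d:
                best_i, best_d = i, d
        chosen.append(best_i)
    return chosen

# Hypothetical plots described by two standardized LiDAR metrics
# (mean canopy height, canopy cover fraction)
plots = [(h / 10.0, c / 100.0) for h in range(0, 30, 3) for c in (20, 50, 80)]
print(farthest_point_plots(plots, 5))
```

A spatial-coverage criterion can be handled the same way by running the selection on plot coordinates instead of (or jointly with) the feature vectors.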
Variable camber wing based on pneumatic artificial muscles
NASA Astrophysics Data System (ADS)
Yin, Weilong; Liu, Libo; Chen, Yijin; Leng, Jinsong
2009-07-01
As a novel bionic actuator, the pneumatic artificial muscle has a high power-to-weight ratio. In this paper, a variable camber wing actuated by pneumatic artificial muscles is developed. Firstly, an experimental setup to measure the static output force of the pneumatic artificial muscle is designed, and the relationship between the static output force and the air pressure is investigated. Experimental results show that the static output force of the pneumatic artificial muscle decreases nonlinearly with increasing contraction ratio. Secondly, a finite element model of the variable camber wing is developed. Numerical results show that the tip displacement of the trailing edge increases linearly with increasing external load and is limited by the maximum static output force of the pneumatic artificial muscles. Finally, the variable camber wing model is manufactured to validate the variable camber concept. Experimental results show that the wing camber increases with increasing air pressure and that it compares well with the FEM results.
Characterization of suicidal behaviour with self-organizing maps.
Leiva-Murillo, José M; López-Castromán, Jorge; Baca-García, Enrique
2013-01-01
The study of the variables involved in suicidal behavior is important from a social, medical, and economic point of view. Given the high number of potential variables of interest, a large population of subjects must be analysed in order to get conclusive results. In this paper, we describe a method based on self-organizing maps (SOMs) for finding the most relevant variables even when their relation to suicidal behavior is strongly nonlinear. We applied the method to a cohort with more than 8,000 subjects and 600 variables and discovered four groups of variables involved in suicidal behavior. According to the results, there are four main groups of risk factors that characterize the population of suicide attempters: mental disorders, alcoholism, impulsivity, and childhood abuse. The identification of specific subpopulations of suicide attempters is consistent with current medical knowledge and may provide a new avenue of research to improve the management of suicidal cases.
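A toy version of the SOM underlying such a method can be written in plain Python. The 1-D map on synthetic 2-D points below only illustrates the competitive-learning update (best-matching unit plus a shrinking neighborhood), not the 600-variable clinical analysis; all names and data are invented.

```python
import math
import random

def train_som(data, grid=5, epochs=30, seed=0):
    """Minimal 1-D self-organizing map: 'grid' units compete for each input;
    the winner and its neighbors are pulled toward the input, with learning
    rate and neighborhood radius decaying over epochs."""
    rng = random.Random(seed)
    dim = len(data[0])
    w = [[rng.random() for _ in range(dim)] for _ in range(grid)]
    for t in range(epochs):
        lr = 0.5 * (1 - t / epochs)                       # decaying learning rate
        radius = max(grid / 2 * (1 - t / epochs), 0.5)    # shrinking neighborhood
        for x in data:
            bmu = min(range(grid), key=lambda i: math.dist(w[i], x))
            for i in range(grid):
                h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
                w[i] = [wi + lr * h * (xi - wi) for wi, xi in zip(w[i], x)]
    return w

def quantization_error(data, w):
    """Mean distance from each input to its nearest map unit."""
    return sum(min(math.dist(u, x) for u in w) for x in data) / len(data)

# Two synthetic clusters; after training, units settle near the data
data = [(0.1, 0.1), (0.12, 0.08), (0.09, 0.11), (0.9, 0.9), (0.88, 0.92), (0.91, 0.89)]
units = train_som(data)
print(round(quantization_error(data, units), 3))
```

On real data, variable relevance is then read off the trained map, e.g. by inspecting the component planes of each input variable.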
Model selection bias and Freedman's paradox
Lukacs, P.M.; Burnham, K.P.; Anderson, D.R.
2010-01-01
In situations where limited knowledge of a system exists and the ratio of data points to variables is small, variable selection methods can often be misleading. Freedman (Am Stat 37:152-155, 1983) demonstrated how common it is to select completely unrelated variables as highly "significant" when the number of data points is similar in magnitude to the number of variables. A new type of model averaging estimator based on model selection with Akaike's AIC is used with linear regression to investigate the problems of likely inclusion of spurious effects and model selection bias, the bias introduced while using the data to select a single seemingly "best" model from an (often large) set of models employing many predictor variables. The new model averaging estimator helps reduce these problems and provides confidence interval coverage at the nominal level, while traditional stepwise selection has poor inferential properties. © The Institute of Statistical Mathematics, Tokyo 2009.
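Akaike weights and a model-averaged estimate, the core ingredients of an AIC-based averaging estimator, reduce to a few lines. This is a generic sketch, not the authors' exact estimator; the AIC values and coefficients are hypothetical.

```python
import math

def akaike_weights(aics):
    """Akaike weights: each model's relative likelihood exp(-delta/2),
    with delta = AIC_i - AIC_min, normalized to sum to one."""
    a_min = min(aics)
    rel = [math.exp(-0.5 * (a - a_min)) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

def model_averaged_estimate(estimates, aics):
    """Weight each candidate model's coefficient estimate by its Akaike
    weight, rather than trusting the single 'best' model from selection."""
    return sum(w * e for w, e in zip(akaike_weights(aics), estimates))

# Hypothetical: three candidate regressions, their AICs and slope estimates
aics = [100.0, 102.0, 110.0]
print(akaike_weights(aics))                         # weights fall as AIC rises
print(model_averaged_estimate([1.4, 1.1, 0.2], aics))
```

Because the averaged estimate never commits to one model, it carries the selection uncertainty that stepwise procedures silently discard.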
NASA Astrophysics Data System (ADS)
Blackstock, J. M.; Covington, M. D.; Williams, S. G. W.; Myre, J. M.; Rodriguez, J.
2017-12-01
Variability in CO2 fluxes within Earth's Critical Zone occurs over a wide range of timescales. Resolving this variability and its drivers requires high-temporal-resolution monitoring of CO2 in both soil and aquatic environments. High-cost (> 1,000 USD) gas analyzers and data loggers present cost barriers for investigations with limited budgets, particularly if high spatial resolution is desired. To overcome these costs, we developed an Arduino-based CO2 measuring platform (i.e. gas analyzer and data logger). The platform was deployed at multiple sites within the Critical Zone overlying the Springfield Plateau aquifer in Northwest Arkansas, USA. The CO2 gas analyzer used in this study was a relatively low-cost SenseAir K30. The analyzer's optical housing was covered by a PTFE semi-permeable membrane allowing for gas exchange between the analyzer and the environment. The total approximate cost of the monitoring platform was 200 USD (2% detection limit) to 300 USD (10% detection limit) depending on the K30 model used. For testing purposes, we deployed the Arduino-based platform alongside a commercial monitoring platform; the CO2 concentration time series were nearly identical. Notably, CO2 cycles at the surface water site, which operated from January to April 2017, displayed a systematic increase in daily CO2 amplitude. Preliminary interpretation suggests a seasonal increase in stream metabolic function. Other interpretations of the observed cyclical and event-based behavior are out of the scope of this study; however, the presented method provides an accurate near-hourly characterization of CO2 variability. The platform has been shown to be operational for several months, and we infer reliable operation for much longer deployments (> 1 year) given adequate environmental protection and power supply. Considering the cost savings, this platform is an attractive option for continuous, accurate, low-power, and low-cost CO2 monitoring at remote locations, globally.
NASA Astrophysics Data System (ADS)
Nche-Fambo, F. A.; Scharler, U. M.; Tirok, K.
2015-06-01
In South African estuaries, there is no knowledge on the resilience and variability in phytoplankton communities under conditions of hypersalinity, extended droughts and reverse salinity gradients. Phytoplankton composition, abundance and biomass vary with changes in environmental variables and taxa richness declines specifically under hypersaline conditions. This research thus investigated the phytoplankton community composition, its resilience and variability under highly variable and extreme environmental conditions in an estuarine lake system (Lake St. Lucia, South Africa) over one year. The lake system was characterised by a reverse salinity gradient with hypersalinity furthest from the estuarine inlet during the study period. During this study, 78 taxa were recorded: 56 diatoms, eight green algae, one cryptophyte, seven cyanobacteria and six dinoflagellates. Taxon variability and resilience depended on their ability to tolerate high salinities. Consequently, the phytoplankton communities as well as total abundance and biomass differed along the salinity gradient and over time with salinity as the main determinant. Cyanobacteria were dominant in hypersaline conditions, dinoflagellates in marine-brackish salinities, green algae and cryptophytes in lower salinities (brackish) and diatoms were abundant in marine-brackish salinities but survived in hypersaline conditions. Total abundance and biomass ranged from 3.66 × 10³ to 1.11 × 10⁹ cells/L and 1.21 × 10⁶ to 1.46 × 10¹⁰ pgC/L respectively, with the highest values observed under hypersaline conditions. Therefore, even under highly variable, extreme environmental conditions and hypersalinity the phytoplankton community as a whole was resilient enough to maintain a relatively high biomass throughout the study period.
The resilience of a few dominant taxa, such as Cyanothece, Spirulina, Protoperidinium and Nitzschia, and the dominance of other common genera such as Chlamydomonas, Chroomonas, Navicula, Gyrosigma, Oxyrrhis, and Prorocentrum, provided the carbon at the base of the food web in the system and showed that even during the extended period of drought, a foundation for productivity can be provided once conditions improve.
NASA Technical Reports Server (NTRS)
Marley, Mark Scott
2016-01-01
Over the past several years a number of high cadence photometric observations of solar system giant planets have been acquired by various platforms. Such observations are of interest as they provide points of comparison to the already expansive set of brown dwarf variability observations and the small, but growing, set of exoplanet variability observations. By measuring how rapidly the integrated light from solar system giant planets can evolve, variability observations of substellar objects that are unlikely to ever be resolved can be placed in a fuller context. Examples of brown dwarf variability observations include extensive work from the ground (e.g., Radigan et al. 2014), Spitzer (e.g., Metchev et al. 2015), Kepler (Gizis et al. 2015), and HST (Yang et al. 2015). Variability has been measured on the planetary mass companion to the brown dwarf 2MASS 1207b (Zhou et al. 2016) and further searches are planned in thermal emission for the known directly imaged planets with ground based telescopes (Apai et al. 2016) and in reflected light with future space based telescopes. Recent solar system variability observations include Kepler monitoring of Neptune (Simon et al. 2016) and Uranus, Spitzer observations of Neptune (Stauffer et al. 2016), and Cassini observations of Jupiter (West et al. in prep). The Cassini observations are of particular interest as they measured the variability of Jupiter at a phase angle of approximately 60 deg, comparable to the viewing geometry expected for space based direct imaging of cool extrasolar Jupiters in reflected light. These solar system analog observations capture many of the characteristics seen in brown dwarf variability, including large amplitudes and rapid light curve evolution on timescales as short as a few rotation periods. Simon et al. (2016) attribute such variations at Neptune to a combination of large scale, stable cloud structures along with smaller, more rapidly varying, cloud patches.
The observed brown dwarf and exoplanet variability may well arise from comparable cloud structures. In my presentation I will compare and contrast the nature of the variability observed for the various solar system and other substellar objects and present a wish list for future observations.
NASA Astrophysics Data System (ADS)
Marley, Mark S.; Kepler Giant Planet Variability Team, Spitzer Ice Giant Variability Team
2016-10-01
Over the past several years a number of high cadence photometric observations of solar system giant planets have been acquired by various platforms. Such observations are of interest as they provide points of comparison to the already expansive set of brown dwarf variability observations and the small, but growing, set of exoplanet variability observations. By measuring how rapidly the integrated light from solar system giant planets can evolve, variability observations of substellar objects that are unlikely to ever be resolved can be placed in a fuller context. Examples of brown dwarf variability observations include extensive work from the ground (e.g., Radigan et al. 2014), Spitzer (e.g., Metchev et al. 2015), Kepler (Gizis et al. 2015), and HST (Yang et al. 2015). Variability has been measured on the planetary mass companion to the brown dwarf 2MASS 1207b (Zhou et al. 2016) and further searches are planned in thermal emission for the known directly imaged planets with ground based telescopes (Apai et al. 2016) and in reflected light with future space based telescopes. Recent solar system variability observations include Kepler monitoring of Neptune (Simon et al. 2016) and Uranus, Spitzer observations of Neptune (Stauffer et al. 2016), and Cassini observations of Jupiter (West et al. in prep). The Cassini observations are of particular interest as they measured the variability of Jupiter at a phase angle of ~60°, comparable to the viewing geometry expected for space based direct imaging of cool extrasolar Jupiters in reflected light. These solar system analog observations capture many of the characteristics seen in brown dwarf variability, including large amplitudes and rapid light curve evolution on timescales as short as a few rotation periods. Simon et al. (2016) attribute such variations at Neptune to a combination of large scale, stable cloud structures along with smaller, more rapidly varying, cloud patches.
The observed brown dwarf and exoplanet variability may well arise from comparable cloud structures. In my presentation I will compare and contrast the nature of the variability observed for the various solar system and other substellar objects and present a wish list for future observations.
Cross-country transferability of multi-variable damage models
NASA Astrophysics Data System (ADS)
Wagenaar, Dennis; Lüdtke, Stefan; Kreibich, Heidi; Bouwer, Laurens
2017-04-01
Flood damage assessment is often done with simple damage curves based only on flood water depth. Additionally, damage models are often transferred in space and time, e.g. from region to region or from one flood event to another. Validation has shown that depth-damage curve estimates are associated with high uncertainties, particularly when applied in regions outside the area where the data for curve development were collected. Recently, progress has been made with multi-variable damage models created with data-mining techniques, i.e. Bayesian networks and random forests. However, it is still unknown to what extent and under which conditions model transfers are possible and reliable. Model validations in different countries will provide valuable insights into the transferability of multi-variable damage models. In this study we compare multi-variable models developed on the basis of flood damage datasets from Germany as well as from The Netherlands. Data from several German floods were collected using computer-aided telephone interviews. Data from the 1993 Meuse flood in the Netherlands are available, based on compensations paid by the government. The Bayesian network and random forest based models are applied and validated in both countries on the basis of the individual datasets. A major challenge was the harmonization of the variables between both datasets due to factors like differences in variable definitions, and regional and temporal differences in flood hazard and exposure characteristics. Results of the model validations and comparisons in both countries are discussed, particularly with respect to encountered challenges and possible solutions for an improvement of model transferability.
Opfer, Roland; Suppa, Per; Kepp, Timo; Spies, Lothar; Schippling, Sven; Huppertz, Hans-Jürgen
2016-05-01
Fully-automated regional brain volumetry based on structural magnetic resonance imaging (MRI) plays an important role in quantitative neuroimaging. In clinical trials as well as in clinical routine multiple MRIs of individual patients at different time points need to be assessed longitudinally. Measures of inter- and intrascanner variability are crucial to understand the intrinsic variability of the method and to distinguish volume changes due to biological or physiological effects from inherent noise of the methodology. To measure regional brain volumes an atlas based volumetry (ABV) approach was deployed using a highly elastic registration framework and an anatomical atlas in a well-defined template space. We assessed inter- and intrascanner variability of the method in 51 cognitively normal subjects and 27 Alzheimer dementia (AD) patients from the Alzheimer's Disease Neuroimaging Initiative by studying volumetric results of repeated scans for 17 compartments and brain regions. Median percentage volume differences of scan-rescans from the same scanner ranged from 0.24% (whole brain parenchyma in healthy subjects) to 1.73% (occipital lobe white matter in AD), with generally higher differences in AD patients as compared to normal subjects (e.g., 1.01% vs. 0.78% for the hippocampus). Minimum percentage volume differences detectable with an error probability of 5% were in the one-digit percentage range for almost all structures investigated, with most of them being below 5%. Intrascanner variability was independent of magnetic field strength. The median interscanner variability was up to ten times higher than the intrascanner variability. Copyright © 2016 Elsevier Inc. All rights reserved.
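The scan-rescan comparison above reduces to two quantities: the percentage volume difference per region, and the smallest difference distinguishable from measurement noise at the 5% level. The sketch below uses the common ~1.96 x SD repeatability convention, which may differ in detail from the authors' derivation; the volumes are hypothetical.

```python
import statistics

def pct_volume_diff(v1, v2):
    """Signed scan-rescan volume difference as a percentage of the mean volume."""
    return (v2 - v1) / ((v1 + v2) / 2.0) * 100.0

def smallest_detectable_diff(diffs):
    """Repeatability-style threshold: ~1.96 x SD of the observed scan-rescan
    differences. Smaller changes cannot be separated from measurement noise
    at an error probability of 5%."""
    return 1.96 * statistics.stdev(diffs)

# Hypothetical hippocampus scan-rescan volumes (mL) for four subjects
pairs = [(3.10, 3.13), (2.95, 2.92), (3.40, 3.38), (3.05, 3.11)]
diffs = [pct_volume_diff(a, b) for a, b in pairs]
print(round(statistics.median(abs(d) for d in diffs), 2))  # median % difference
print(round(smallest_detectable_diff(diffs), 2))           # minimum detectable change, %
```

Any longitudinal volume change smaller than this threshold should be attributed to method variability rather than to atrophy.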
Improved Sparse Multi-Class SVM and Its Application for Gene Selection in Cancer Classification
Huang, Lingkang; Zhang, Hao Helen; Zeng, Zhao-Bang; Bushel, Pierre R.
2013-01-01
Background Microarray techniques provide promising tools for cancer diagnosis using gene expression profiles. However, molecular diagnosis based on high-throughput platforms presents great challenges due to the overwhelming number of variables versus the small sample size and the complex nature of multi-type tumors. Support vector machines (SVMs) have shown superior performance in cancer classification due to their ability to handle high dimensional low sample size data. The multi-class SVM algorithm of Crammer and Singer provides a natural framework for multi-class learning. Despite its effective performance, the procedure utilizes all variables without selection. In this paper, we propose to improve the procedure by imposing shrinkage penalties in learning to enforce solution sparsity. Results The original multi-class SVM of Crammer and Singer is effective for multi-class classification but does not conduct variable selection. We improved the method by introducing soft-thresholding type penalties to incorporate variable selection into multi-class classification for high dimensional data. The new methods were applied to simulated data and two cancer gene expression data sets. The results demonstrate that the new methods can select a small number of genes for building accurate multi-class classification rules. Furthermore, the important genes selected by the methods overlap significantly, suggesting general agreement among different variable selection schemes. Conclusions High accuracy and sparsity make the new methods attractive for cancer diagnostics with gene expression data and defining targets of therapeutic intervention. Availability: The source MATLAB code is available from http://math.arizona.edu/~hzhang/software.html. PMID:23966761
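The soft-thresholding operator that drives gene selection in such penalized methods is a one-liner. Below it is applied to a hypothetical coefficient vector; this illustrates the operator itself, not the full Crammer-Singer training procedure.

```python
def soft_threshold(coeffs, lam):
    """Soft-thresholding S(w, lam) = sign(w) * max(|w| - lam, 0): shrinks
    every coefficient toward zero and zeroes out the small ones, which is
    what removes uninformative genes from the classifier."""
    return [(abs(w) - lam) * (1.0 if w > 0 else -1.0) if abs(w) > lam else 0.0
            for w in coeffs]

# Hypothetical per-gene coefficients for one class
w = [3.0, -0.4, 1.2, 0.05, -2.1]
print(soft_threshold(w, 0.5))  # prints [2.5, 0.0, 0.7, 0.0, -1.6]
```

Genes whose coefficients are zeroed across all classes drop out of the decision rule entirely, giving the sparse classifiers described above.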
Managing Financial Risk to Hydropower in Snow Dominated Systems: A Hetch Hetchy Case Study
NASA Astrophysics Data System (ADS)
Hamilton, A. L.; Characklis, G. W.; Reed, P. M.
2017-12-01
Hydropower generation in snow dominated systems is vulnerable to severe shortfalls in years with low snowpack. Meanwhile, generators are also vulnerable to variability in electricity demand and wholesale electricity prices, both of which can be impacted by factors such as temperature and natural gas price. Year-to-year variability in these underlying stochastic variables leads to financial volatility and the threat of low revenue periods, which can be highly disruptive for generators with large fixed operating costs and debt service. In this research, the Hetch Hetchy Power system is used to characterize financial risk in a snow dominated hydropower system. Owned and operated by the San Francisco Public Utilities Commission, Hetch Hetchy generates power for its own municipal operations and sells excess power to irrigation districts, as well as on the wholesale market. This investigation considers the effects of variability in snowpack, temperature, and natural gas price on Hetch Hetchy Power's yearly revenues. This information is then used to evaluate the effectiveness of various financial risk management tools for hedging against revenue variability. These tools are designed to mitigate against all three potential forms of financial risk (i.e. low hydropower generation, low electricity demand, and low/high electricity price) and include temperature-based derivative contracts, natural gas price-based derivative contracts, and a novel form of snowpack-based index insurance contract. These are incorporated into a comprehensive risk management portfolio, along with self-insurance in which the utility buffers yearly revenue volatility using a contingency fund. By adaptively managing the portfolio strategy, a utility can efficiently spread yearly risks over a multi-year time horizon. 
The Borg Multiobjective Evolutionary Algorithm is used to generate a set of Pareto optimal portfolio strategies, which are used to compare the tradeoffs in objectives such as expected revenues, low revenues, revenue volatility, and portfolio complexity.
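The snowpack-index insurance component of such a portfolio has simple mechanics: payment depends only on an observed index, with no claims process. The contract terms below (strike, rate, cap) are invented for illustration and are not SFPUC's actual terms.

```python
def index_payout(index_value, strike, payout_rate, cap):
    """Linear index-insurance payout: pays payout_rate per unit the index
    (e.g. April 1 snow-water equivalent) falls below the strike, capped."""
    shortfall = max(strike - index_value, 0.0)
    return min(shortfall * payout_rate, cap)

# Hypothetical contract: strike 20 in. SWE, 1M USD per inch short, 10M USD cap
for swe in (25.0, 18.0, 5.0):
    print(swe, index_payout(swe, strike=20.0, payout_rate=1e6, cap=10e6))
```

Because the payout is tied to snowpack rather than to realized revenue losses, it hedges the low-generation risk while the temperature and gas-price derivatives cover the demand and price risks.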
Khan, Azizuddin; Sharma, Narendra K; Dixit, Shikha
2008-09-01
Prospective memory is memory for the realization of delayed intentions. Researchers distinguish 2 kinds of prospective memory: event- and time-based (G. O. Einstein & M. A. McDaniel, 1990). Taking that distinction into account, the present authors explored participants' comparative performance under event- and time-based tasks. In an experimental study of 80 participants, the authors investigated the roles of cognitive load and task condition in prospective memory. Cognitive load (low vs. high) and task condition (event- vs. time-based task) were the independent variables. Accuracy in prospective memory was the dependent variable. Results showed significant differential effects under event- and time-based tasks. However, the effect of cognitive load was more detrimental in time-based prospective memory. Results also revealed that time monitoring is critical to successful time estimation and thus to time-based prospective memory. Likewise, participants' better performance on the event-based prospective memory task showed that they acted on the basis of environmental cues. Event-based prospective memory was environmentally cued; time-based prospective memory required self-initiation.
NASA Astrophysics Data System (ADS)
Vanwalleghem, T.; Román, A.; Peña, A.; Laguna, A.; Giráldez, J. V.
2017-12-01
There is a need for better understanding the processes influencing soil formation and the resulting distribution of soil properties in the critical zone. Soil properties can exhibit strong spatial variation, even at the small catchment scale. Especially soil carbon pools in semi-arid, mountainous areas are highly uncertain because bulk density and stoniness are very heterogeneous and rarely measured explicitly. In this study, we explore the spatial variability in key soil properties (soil carbon stocks, stoniness, bulk density and soil depth) as a function of processes shaping the critical zone (weathering, erosion, soil water fluxes and vegetation patterns). We also compare the potential of traditional digital soil mapping versus a mechanistic soil formation model (MILESD) for predicting these key soil properties. Soil core samples were collected from 67 locations at 6 depths. Total soil organic carbon stocks were 4.38 kg m-2. Solar radiation proved to be the key variable controlling soil carbon distribution. Stone content was mostly controlled by slope, indicating the importance of erosion. The spatial distribution of bulk density was found to be highly random. Finally, total carbon stocks were predicted using a random forest model whose main covariates were solar radiation and NDVI. The model predicts carbon stocks that are twice as high on north-facing as on south-facing slopes. However, validation showed that these covariates only explained 25% of the variation in the dataset. Apparently, present-day landscape and vegetation properties are not sufficient to fully explain variability in the soil carbon stocks in this complex terrain under natural vegetation. This is attributed to a high spatial variability in bulk density and stoniness, key variables controlling carbon stocks. Similar results were obtained with the mechanistic soil formation model MILESD, suggesting that more complex models might be needed to further explore this high spatial variability.
Seasonal and Interannual Variabilities in Tropical Tropospheric Ozone
NASA Technical Reports Server (NTRS)
Ziemke, J. R.; Chandra, S.
1999-01-01
This paper presents a detailed characterization of seasonal and interannual variability in tropical tropospheric column ozone (TCO). TCO time series are derived from 20 years (1979-1998) of total ozone mapping spectrometer (TOMS) data using the convective cloud differential (CCD) method. Our study identifies three regions in the tropics with distinctly different zonal characteristics related to seasonal and interannual variability: the eastern Pacific, the Atlantic, and the western Pacific. Results show that in both the eastern and western Pacific the seasonal cycle of northern hemisphere (NH) TCO peaks during NH spring, whereas southern hemisphere (SH) TCO peaks during SH spring. In the Atlantic, maximum TCO in both hemispheres occurs in SH spring. These seasonal cycles are shown to be comparable to those present in ground-based ozonesonde measurements. Interannual variability in the Atlantic region indicates a quasi-biennial oscillation (QBO) signal that is out of phase with the QBO present in stratospheric column ozone (SCO). This is consistent with high pollution and high concentrations of mid-to-upper tropospheric O3-producing precursors in this region. The out-of-phase relation suggests a UV modulation of tropospheric photochemistry caused by the QBO in stratospheric O3. During El Niño events there is anomalously low TCO in the eastern Pacific and high TCO in the western Pacific, indicating the effects of convectively driven transport of low-value boundary layer O3 (reducing TCO) and O3 precursors including H2O and OH. A simplified technique is proposed to derive high-resolution maps of TCO in the tropics even in the absence of tropopause-level clouds. This promising approach requires only total ozone gridded measurements and utilizes the small variability observed in TCO near the dateline.
This technique has an advantage compared to the CCD method because the latter requires high-resolution footprint measurements of both reflectivity and total ozone in the presence of tropopause-level cloud tops.
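The core arithmetic of the CCD approach can be sketched in a few lines: total column ozone over clear scenes minus the above-cloud (stratospheric) column measured over tropopause-level cloud tops yields the tropospheric residual. The numbers below are illustrative, not TOMS retrievals.

```python
import numpy as np

# Dobson units (DU); values are illustrative assumptions
total_ozone_clear = np.array([262.0, 258.0, 265.0])   # low-reflectivity (clear) scenes
above_cloud_ozone = np.array([230.0, 228.0, 231.0])   # scenes with tropopause-level clouds

# Assume the stratospheric column is zonally invariant, as the CCD method does
stratospheric_column = above_cloud_ozone.mean()
tropospheric_column = total_ozone_clear - stratospheric_column   # TCO in DU
```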
Observations of V420 Aur (HD 34921) needed to support spectroscopy
NASA Astrophysics Data System (ADS)
Waagen, Elizabeth O.
2016-10-01
Marcella Wijngaarden and Kelly Gourdji (graduate students at the University of Amsterdam/Anton Pannekoek Institute for Astronomy) have requested AAVSO observers' assistance in providing optical photometry of V420 Aur in support of their high-resolution spectroscopy with the Mercator telescope + Hermes spectrograph in La Palma 2016 October 7 through 17. They write: "[V420 Aur (HD 34921) is] the optical Be star that is part of a peculiar High Mass X-ray Binary...[that exhibits highly] complex and variable spectra...it is difficult to construct a physical model of this HMXB system, though based on these observations, the system is thought to contain a B[e] star with a dense plasma region, an accretion disk around a neutron star, a shell and circumstellar regions of cold dust. It has been over a decade since the last spectra were taken, and, given the highly variable nature of this star, we expect new observations to yield new information that will contribute to a better understanding of this system." Observations in BVRI (preferred over other bands) are requested beginning immediately and continuing through October 24. In all cases, time series in a few bands (i.e., BVRI) are preferred over isolated observations in other bands, as it is the variability on relatively short timescales that is most important. "The target is bright so exposures should be long enough to reach good signal to noise in order to see the small variability amplitude but without saturating the target/comparison stars. We will study the variability on several timescales, so observations starting from a few per night to high cadence timeseries are useful." Finder charts with sequence may be created using the AAVSO Variable Star Plotter (https://www.aavso.org/vsp). Observations should be submitted to the AAVSO International Database. See full Alert Notice for more details.
NASA Technical Reports Server (NTRS)
Schultz, D. F.
1986-01-01
This report summarizes the work performed on a steam-cooled, rich-burn primary zone, variable geometry combustor designed for combustion of nitrogenous fuels such as heavy oils or synthetic crude oils. Steam cooling was employed to determine its feasibility and assess its usefulness as part of a ground-based gas turbine bottoming cycle. Variable combustor geometry was employed to demonstrate its ability to control primary and secondary zone equivalence ratios and overall pressure drop. Both concepts proved highly successful in achieving their desired objectives. The steam cooling reduced peak liner temperatures to less than 800 K. This low temperature offers the potential of both long life and reduced use of strategic materials for liner fabrication. Three degrees of variable geometry were successfully employed to control air flow distribution within the combustor. A variable blade angle axial flow air swirler was used to control primary zone air flow, while the secondary and tertiary zone air flows were controlled by rotating bands which regulated air flow to the secondary zone quench holes and the dilution holes, respectively.
NASA Technical Reports Server (NTRS)
Whiffen, Gregory J.
2006-01-01
Mystic software is designed to compute, analyze, and visualize optimal high-fidelity, low-thrust trajectories. The software can be used to analyze interplanetary, planetocentric, and combination trajectories. Mystic also provides utilities to assist in the operation and navigation of low-thrust spacecraft. Mystic will be used to design and navigate NASA's Dawn Discovery mission to orbit the two largest asteroids. The underlying optimization algorithm used in the Mystic software is called Static/Dynamic Optimal Control (SDC). SDC is a nonlinear optimal control method designed to optimize both 'static variables' (parameters) and dynamic variables (functions of time) simultaneously. SDC is a general nonlinear optimal control algorithm based on Bellman's principle.
Kleis, Sebastian; Rueckmann, Max; Schaeffer, Christian G
2017-04-15
In this Letter, we propose a novel implementation of continuous variable quantum key distribution that operates with a real local oscillator placed at the receiver site. In addition, pulsing of the continuous wave laser sources is not required, leading to an exceptionally practical and secure setup. It is suitable for arbitrary schemes based on modulated coherent states and heterodyne detection. The results shown include transmission experiments as well as an excess noise analysis applying a discrete 8-state phase modulation. Achievable key rates under collective attacks are estimated. The results demonstrate the high potential of the approach to achieve high secret key rates at relatively low effort and cost.
Craniodental variation in Paranthropus boisei: a developmental and functional perspective.
Wood, B; Lieberman, D E
2001-09-01
What levels and patterns of craniodental variation among a fossil hypodigm are necessary to reject the null hypothesis that only a single species is sampled? We suggest how developmental and functional criteria can be used to predict where in the skeleton of fossil hominins we should expect more, or less, within-species variation. We present and test three hypotheses about the factors contributing to craniodental variation in extant primate taxa, and then apply these results to the interpretation of the P. boisei hypodigm. Within the comparative samples of extant Homo, Pan, Gorilla, Pongo, and Colobus, variables from the cranial base, neurocranium, and face that are not subject to high magnitudes of strain have consistently lower levels of intraspecific variation than variables from regions of the face subject to high levels of strain. Dental size variables are intermediate in terms of their reliability. P. boisei is found to have a low degree of variability relative to extant primates for variables shown to be generally useful for testing taxonomic hypotheses. Contrary to the claims of Suwa et al. ([1997] Nature 389:489-492), the recently discovered material from Konso falls within the range of variation of the "pre-Konso" hypodigm of P. boisei for available conventional metrical variables. Those aspects of the Konso material that appear to extend the range of the P. boisei hypodigm involve regions of the skull predicted to be prone to high levels of within-species variation. The approach used in this study focuses on craniodental data, but it is applicable to other regions of the skeleton. Copyright 2001 Wiley-Liss, Inc.
Inter-annual variability and long term predictability of exchanges through the Strait of Gibraltar
NASA Astrophysics Data System (ADS)
Boutov, Dmitri; Peliz, Álvaro; Miranda, Pedro M. A.; Soares, Pedro M. M.; Cardoso, Rita M.; Prieto, Laura; Ruiz, Javier; García-Lafuente, Jesus
2014-03-01
Inter-annual variability of calculated barotropic (netflow) and simulated baroclinic (inflow and outflow) exchanges through the Strait of Gibraltar is analyzed and their response to the main modes of atmospheric variability is investigated. Time series of the outflow obtained by high resolution simulations and estimated from in-situ Acoustic Doppler Current Profiler (ADCP) current measurements are compared. The time coefficients (TC) of the leading empirical orthogonal function (EOF) modes that describe zonal atmospheric circulation in the vicinity of the Strait (1st and 3rd of Sea-Level Pressure (SLP) and 1st of the wind) show significant covariance with the inflow and outflow. Based on these analyses, a regression model between these SLP TCs and the outflow of Mediterranean Water was developed. The regression outflow time series was compared with estimates based on current meter observations, and the predictability and reconstruction of past exchange variability based on atmospheric pressure fields are discussed. The simple regression model reproduces the outflow evolution reasonably well, with the exception of the year 2008, which is apparently anomalous and for which no physical explanation is yet available. The exchange time series show a reduced inter-annual variability (less than 1%, 2.6% and 3.1% of total 2-day variability, for netflow, inflow and outflow, respectively). From a statistical point of view no clear long-term tendencies were revealed. Anomalously high baroclinic fluxes are reported for the years 2000-2001, coincident with a strong impact on the Alboran Sea ecosystem. The origin of the anomalous flow is associated with a strong negative anomaly (~ -9 hPa) in atmospheric pressure fields settled north of the Iberian Peninsula and extending over the central Atlantic, favoring an increased zonal circulation in winter 2000/2001. These low pressure fields forced intense and durable westerly winds in the Gulf of Cadiz-Alboran system.
The signal of this anomaly is also seen in time coefficients of the most significant EOF modes. The predictability of the exchanges for future climate is discussed.
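The EOF-plus-regression pipeline described above can be sketched with synthetic fields: decompose an SLP anomaly matrix by SVD to obtain time coefficients, then regress the outflow on the leading coefficients. The field sizes and the toy outflow relation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
ntime, nspace = 120, 50
slp = rng.normal(size=(ntime, nspace))    # synthetic SLP field (time x grid point)
slp -= slp.mean(axis=0)                   # anomalies: remove the time mean

# EOF decomposition via SVD: columns of u*s are the time coefficients (TCs)
u, s, vt = np.linalg.svd(slp, full_matrices=False)
tc = u[:, :3] * s[:3]                     # leading three TCs

# Synthetic outflow tied to TC1 and TC3 plus noise (mirrors the 1st/3rd SLP modes)
outflow = 0.8 * tc[:, 0] - 0.5 * tc[:, 2] + rng.normal(0, 0.1, ntime)

# Least-squares regression of outflow on the TCs, then reconstruction
coef, *_ = np.linalg.lstsq(tc, outflow, rcond=None)
reconstructed = tc @ coef
```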
NASA Technical Reports Server (NTRS)
Lihavainen, H.; Kerminen, V.-M.; Remer, L. A.
2009-01-01
The first aerosol indirect effect over a clean, northern high-latitude site was investigated by determining the aerosol cloud interaction (ACI) using three different approaches: ground-based in situ measurements, combined ground-based in situ measurements and satellite retrievals, and satellite retrievals alone. The obtained values of ACI were highest for in situ ground-based data, clearly lower for combined ground-based and satellite data, and lowest for data relying solely on satellite retrievals. One of the key findings of this study was the high sensitivity of ACI to the definition of the aerosol burden. We showed that at least a part of the variability in ACI can be explained by how different investigators have related different cloud properties to "aerosol burden".
ERIC Educational Resources Information Center
Clark, Heddy Kovach; Ringwalt, Chris L.; Shamblen, Stephen R.; Hanley, Sean M.
2011-01-01
Using a randomized controlled effectiveness trial, we examined the effects of Project SUCCESS on a range of secondary outcomes, including the program's mediating variables. Project SUCCESS, which is based both on the Theory of Reasoned Action and on Cognitive Behavior Theory, is a school-based substance use prevention program that targets…
Efficiency Study of NLS Base-Year Design. RTI-22U-884-3.
ERIC Educational Resources Information Center
Moore, R. P.; And Others
An efficiency study was conducted of the base year design used for the National Longitudinal Study of the High School Class of 1972 (NLS). Finding the optimal design involved a search for the numbers of sample schools and students that would minimize the variance at a given cost. Twenty-one variables describing students' plans, attitudes,…
Estimating Agricultural Nitrous Oxide Emissions
USDA-ARS?s Scientific Manuscript database
Nitrous oxide emissions are highly variable in space and time and different methodologies have not agreed closely, especially at small scales. However, as scale increases, so does the agreement between estimates based on soil surface measurements (bottom up approach) and estimates derived from chang...
Estimation of Wild Fire Risk Area based on Climate and Maximum Entropy in the Korean Peninsula
NASA Astrophysics Data System (ADS)
Kim, T.; Lim, C. H.; Song, C.; Lee, W. K.
2015-12-01
The number of forest fires, and the accompanying human injuries and property damage, has increased with frequent droughts. In this study, forest fire danger zones of Korea are estimated to predict and prepare for future forest fire hazard regions. The MaxEnt (Maximum Entropy) model, which estimates the probability distribution of occurrence, is used to identify forest fire hazard regions. The MaxEnt model is primarily used for the analysis of species distributions, but its applicability to various natural disasters is gaining recognition. Detailed forest fire occurrence data collected by MODIS over the past 5 years (2010-2014) are used as occurrence data for the model. Meteorological, topographic, and vegetation data are used as environmental variables. In particular, various meteorological variables are used to assess climatic impacts, such as annual average temperature, annual precipitation, dry-season precipitation, annual effective humidity, dry-season effective humidity, and aridity index. The result was valid based on the AUC (Area Under the Curve) value (0.805), which is used to assess prediction accuracy in the MaxEnt model. Predicted forest fire locations also corresponded closely with the actual forest fire distribution map. Meteorological variables such as effective humidity showed the greatest contribution, and topographic variables such as TWI (Topographic Wetness Index) and slope also contributed to forest fire occurrence. As a result, the east coast and the southern part of the Korean peninsula were predicted to be at high risk of forest fire. In contrast, high-altitude mountain areas and the west coast appeared to be at low risk. The results of this study are similar to those of former studies, indicating high forest fire risk in accessible areas and reflecting the climatic characteristics of the eastern and southern regions in the dry season.
In summary, we estimated forest fire hazard zones from existing forest fire locations and environmental variables, and obtained meaningful results reflecting both anthropogenic and natural effects. The approach is expected to predict future forest fire risk using future climate variables as the climate changes.
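The AUC-based validation step reported above can be sketched as follows. MaxEnt itself is closely related to logistic regression on presence/background data, so a logistic classifier stands in for it here; the covariates, the synthetic occurrence rule, and all parameter values are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 400
humidity = rng.uniform(20, 90, n)   # effective humidity (%), assumed range
slope = rng.uniform(0, 40, n)       # slope (degrees), assumed range

# Synthetic rule: fires more likely when dry and steep
p = 1.0 / (1.0 + np.exp(0.15 * (humidity - 50) - 0.05 * slope))
y = rng.random(n) < p               # presence (True) vs background (False)

X = np.column_stack([humidity, slope])
clf = LogisticRegression(max_iter=1000).fit(X, y)
auc = roc_auc_score(y, clf.predict_proba(X)[:, 1])   # analogous to the 0.805 AUC check
```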
Pedestrian visual recommendation in Kertanegara - Semeru corridor in Malang City
NASA Astrophysics Data System (ADS)
Cosalia, V. B.
2017-06-01
A streetscape can form the first impression of an urban area. One streetscape that deserves attention is the Jl. Kertanegara - Semeru corridor, a road corridor with a strong character that also serves as one of the main axes of Malang city. This research aims to determine the visual quality of the corridor and to formulate appropriate design recommendations for Jl. Kertanegara - Semeru based on the pedestrian's visual experience. The methods used in this research are Scenic Beauty Estimation (SBE) and historical study. Several variables were used: spatial scale, visual flexibility, beauty, emphasis, balance and dominance. Based on those variables, pedestrians acting as respondents carried out the assessment. The SBE results showed that the visual quality of the Kertanegara - Semeru corridor is fairly good: 10 photos of low visual quality were located on Jl. Semeru, while 14 photos of high visual quality were located on Jl. Kertanegara, Jl. Tugu and Jl. Kahuripan. Drawing on the historical study and the high-visual-quality references, design recommendations were made for the parts of the landscape with low visual quality.
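The SBE scoring step can be sketched numerically: each respondent's photo ratings are standardized to z-scores (removing individual rating-scale bias), then averaged per photo and scaled by 100. The rating matrix below is an illustrative assumption, not the study's survey data.

```python
import numpy as np

# rows = respondents, columns = photos, ratings on a 1-10 scale (assumed)
ratings = np.array([
    [3, 7, 8, 2, 9],
    [4, 6, 9, 3, 8],
    [2, 8, 7, 4, 9],
], dtype=float)

# z-score within each respondent (row) to remove personal scale bias
z = (ratings - ratings.mean(axis=1, keepdims=True)) / ratings.std(axis=1, keepdims=True)

sbe = 100 * z.mean(axis=0)          # one SBE score per photo
ranking = np.argsort(sbe)[::-1]     # photos from highest to lowest visual quality
```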
Validation of China-wide interpolated daily climate variables from 1960 to 2011
NASA Astrophysics Data System (ADS)
Yuan, Wenping; Xu, Bing; Chen, Zhuoqi; Xia, Jiangzhou; Xu, Wenfang; Chen, Yang; Wu, Xiaoxu; Fu, Yang
2015-02-01
Temporally and spatially continuous meteorological variables are increasingly in demand to support many different types of applications related to climate studies. Using measurements from 600 climate stations, a thin-plate spline method was applied to generate daily gridded climate datasets for mean air temperature, maximum temperature, minimum temperature, relative humidity, sunshine duration, wind speed, atmospheric pressure, and precipitation over China for the period 1961-2011. A comprehensive evaluation of the interpolated climate was conducted at 150 independent validation sites. The results showed superior performance for most of the estimated variables. Except for wind speed, determination coefficients (R²) varied from 0.65 to 0.90, and interpolations showed high consistency with observations. Most of the estimated climate variables showed relatively consistent accuracy among all seasons according to the root mean square error, R², and relative predictive error. The interpolated data correctly predicted the occurrence of daily precipitation at validation sites with an accuracy of 83%. Moreover, the interpolated data successfully explained the interannual variability trend for the eight meteorological variables at most validation sites. Consistent interannual variability trends were observed at 66-95% of the sites for the eight meteorological variables. Accuracy in distinguishing extreme weather events differed substantially among the meteorological variables. The interpolated data identified extreme events for the three temperature variables, relative humidity, and sunshine duration with an accuracy ranging from 63 to 77%. However, for wind speed, air pressure, and precipitation, the interpolation model correctly identified only 41, 48, and 58% of extreme events, respectively.
The validation indicates that the interpolations can be applied with high confidence for the three temperature variables, as well as relative humidity and sunshine duration, based on the performance of these variables in estimating daily variations, interannual variability, and extreme events. Although longitude, latitude, and elevation data are included in the model, additional information, such as topography and cloud cover, should be integrated into the interpolation algorithm to improve performance in estimating wind speed, atmospheric pressure, and precipitation.
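The thin-plate spline interpolation at the heart of this dataset can be sketched with SciPy's `RBFInterpolator`, which supports a thin-plate-spline kernel. The station coordinates and the smooth synthetic temperature field below are assumptions, not the 600-station Chinese network.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)
stations = rng.uniform(0, 10, size=(60, 2))                 # lon/lat-like coordinates
temperature = 15 + stations[:, 0] - 0.5 * stations[:, 1]    # smooth synthetic field

# Thin-plate spline interpolant fitted to the station values
interp = RBFInterpolator(stations, temperature, kernel='thin_plate_spline')

targets = np.array([[5.0, 5.0], [2.0, 8.0]])                # ungauged validation points
estimates = interp(targets)
```

Because the synthetic field is exactly linear, the thin-plate spline (which includes a degree-1 polynomial term) reproduces it essentially exactly at the target points.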
Song, Ruiguang; Hall, H Irene; Harrison, Kathleen McDavid; Sharpe, Tanya Telfair; Lin, Lillian S; Dean, Hazel D
2011-01-01
We developed a statistical tool that brings together standard, accessible, and well-understood analytic approaches and uses area-based information and other publicly available data to identify social determinants of health (SDH) that significantly affect the morbidity of a specific disease. We specified AIDS as the disease of interest and used data from the American Community Survey and the National HIV Surveillance System. Morbidity and socioeconomic variables in the two data systems were linked through geographic areas that can be identified in both systems. Correlation and partial correlation coefficients were used to measure the impact of socioeconomic factors on AIDS diagnosis rates in certain geographic areas. We developed an easily explained approach that can be used by a data analyst with access to publicly available datasets and standard statistical software to identify the impact of SDH. We found that the AIDS diagnosis rate was highly correlated with the distribution of race/ethnicity, population density, and marital status in an area. The impact of poverty, education level, and unemployment depended on other SDH variables. Area-based measures of socioeconomic variables can be used to identify risk factors associated with a disease of interest. When correlation analysis is used to identify risk factors, potential confounding from other variables must be taken into account.
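The correlation-versus-partial-correlation comparison described above can be sketched as follows: the association between an area-level rate and one socioeconomic variable is re-examined after regressing out a third variable. The variable names and the synthetic relationships are illustrative assumptions, not ACS or surveillance data.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300
density = rng.normal(size=n)                                   # population density (standardized)
poverty = 0.6 * density + rng.normal(scale=0.8, size=n)        # poverty, partly driven by density
rate = 1.0 * density + 0.1 * poverty + rng.normal(scale=0.5, size=n)  # area diagnosis rate

def residuals(y, x):
    """Residuals of y after ordinary least squares on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Simple correlation vs. partial correlation controlling for density
simple_r = np.corrcoef(rate, poverty)[0, 1]
partial_r = np.corrcoef(residuals(rate, density), residuals(poverty, density))[0, 1]
```

Here the simple correlation is inflated by the shared dependence on density, and the partial correlation shrinks once density is controlled for, mirroring the paper's finding that the impact of poverty depends on other SDH variables.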
Piedra, Jose; Ontiveros, Maria; Miravet, Susana; Penalva, Cristina; Monfar, Mercè; Chillon, Miguel
2015-02-01
Recombinant adeno-associated viruses (rAAVs) are promising vectors in preclinical and clinical assays for the treatment of diseases with gene therapy strategies. Recent technological advances in amplification and purification have allowed the production of highly purified rAAV vector preparations. Although quantitative polymerase chain reaction (qPCR) is the current method of choice for titrating rAAV genomes, it shows high variability. In this work, we report a rapid and robust rAAV titration method based on the quantitation of encapsidated DNA with the fluorescent dye PicoGreen®. This method allows detection from 3×10^10 viral genomes/ml up to 2.4×10^13 viral genomes/ml in a linear range. Contrasted with dot blot or qPCR, the PicoGreen-based assay has less intra- and interassay variability. Moreover, quantitation is rapid, does not require specific primers or probes, and is independent of the rAAV pseudotype analyzed. In summary, development of this universal rAAV-titering method may have substantive implications in rAAV technology.
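The quantitation logic behind such a fluorescence assay is a linear standard curve: fit fluorescence against known DNA concentrations, then invert the fit for an unknown sample. The standards and readings below are illustrative assumptions, not the paper's calibration data (converting a DNA concentration to viral genomes/ml would additionally require the vector genome length).

```python
import numpy as np

# Known DNA standards (ng/ml) and their fluorescence readings (arbitrary units)
std_conc = np.array([0.0, 25.0, 50.0, 100.0, 200.0])
std_fluor = np.array([5.0, 130.0, 255.0, 505.0, 1005.0])

# Linear fit: fluorescence = slope * concentration + intercept
slope, intercept = np.polyfit(std_conc, std_fluor, 1)

# Invert the curve for an unknown sample reading
sample_fluor = 380.0
sample_conc = (sample_fluor - intercept) / slope   # ng/ml encapsidated DNA
```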
Choi, Hongyoon; Ha, Seunggyun; Im, Hyung Jun; Paek, Sun Ha; Lee, Dong Soo
2017-01-01
Dopaminergic degeneration is a pathologic hallmark of Parkinson's disease (PD), which can be assessed by dopamine transporter imaging such as FP-CIT SPECT. Until now, imaging has been routinely interpreted by humans, though this can show interobserver variability and result in inconsistent diagnoses. In this study, we developed a deep learning-based FP-CIT SPECT interpretation system to refine the imaging diagnosis of Parkinson's disease. This system, trained on SPECT images of PD patients and normal controls, shows high classification accuracy comparable with the experts' evaluation referring to quantification results. Its high accuracy was validated in an independent cohort composed of patients with PD and nonparkinsonian tremor. In addition, we showed that some patients clinically diagnosed as PD who have scans without evidence of dopaminergic deficit (SWEDD), an atypical subgroup of PD, could be reclassified by our automated system. Our results suggest that the deep learning-based model can accurately interpret FP-CIT SPECT and overcome the variability of human evaluation. It could help imaging diagnosis of patients with uncertain Parkinsonism and provide objective patient group classification, particularly for SWEDD, in further clinical studies.
Vulnerability mapping in kelud volcano based on village information
NASA Astrophysics Data System (ADS)
Hisbaron, D. R.; Wijayanti, H.; Iffani, M.; Winastuti, R.; Yudinugroho, M.
2018-04-01
Kelud Volcano is a basaltic-andesitic stratovolcano situated 27 km east of Kediri, Indonesia. Historically, Kelud Volcano has erupted with a return period of 9-75 years, placing nearly 160,000 people living in Tulungagung, Blitar and Kediri Districts in high-risk areas. This study aims to map vulnerability to lava flows in Kediri and Malang at a detailed scale. There are four major variables, namely demographic, asset, hazard, and land use variables. PGIS (Participatory Geographic Information System) is employed to collect data, while ancillary data are derived from statistical information, interpretation of high resolution satellite imagery and Unmanned Aerial Vehicles (UAVs). Data were obtained from field checks and some from high resolution satellite imagery and UAVs. The output of this research is village-based vulnerability information that becomes a valuable input for local stakeholders to improve preparedness and disaster resilience in prone areas. The results indicated that the highest vulnerability to lava flood disaster at Kelud Volcano is found in Kandangan Hamlet, Pandean Hamlet and Kacangan Hamlet, because these hamlets fall in the dominant high-vulnerability position in 3 out of 4 scenarios (economic, social and equal).
Yaminfirooz, Mousa; Ardali, Farzaneh Raeesi
2018-01-01
Nowadays, publishing highly-cited papers is important for researchers and editors. In this evidence-based study, the factors influencing the citability of published papers in the field of medicine were identified. 200 papers indexed in Scopus (in two groups, highly-cited and lowly-cited, with 100 papers in each) were studied. The required data were manually collected with a researcher-made checklist. Data analysis was done in SPSS using descriptive and inferential statistics. Variables such as journal IF, journal rank, journal subject quartile, the first/corresponding author's h-index, the number of documents produced by the first/corresponding author, SJR and SNIP had significantly positive correlations with paper citability (p < .05). Other variables, including paper age, paper type, the number of references, the number of authors, indexing institute and journal kind, showed no relationship with paper citability (p > .05). The factors affecting citability are among indicators relating to authors, publishing journals and published papers. Determining the extent to which these factors influence the citability of a paper needs further large-scale research. Authors and editors seeking high citation counts should consider these factors when authoring and publishing papers.
Efficient SRAM yield optimization with mixture surrogate modeling
NASA Astrophysics Data System (ADS)
Zhongjian, Jiang; Zuochang, Ye; Yan, Wang
2016-12-01
Largely repeated cells such as SRAM cells usually require an extremely low failure rate to ensure a moderate chip yield. Though fast Monte Carlo methods such as importance sampling and its variants can be used for yield estimation, they are still very expensive if one needs to perform optimization based on such estimations. Typically the yield calculation requires a great deal of SPICE simulation, and the circuit SPICE simulation accounts for the largest proportion of time in the yield calculation. In this paper, a new method is proposed to address this issue. The key idea is to establish an efficient mixture surrogate model based on the design variables and process variables. The model is constructed by running SPICE simulation to obtain a set of sample points, from which the mixture surrogate model is trained with the lasso algorithm. Experimental results show that the proposed model is able to calculate yield accurately and brings significant speed-ups to the calculation of the failure rate. Based on the model, we developed a further accelerated algorithm to further enhance the speed of the yield calculation. It is suitable for high-dimensional process variables and multi-performance applications.
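The surrogate-fitting idea above can be sketched as a sparse polynomial model trained with the lasso on simulator samples. The toy "SPICE" response function, the number of variables, and the regularization strength are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(5)
n = 500
X = rng.normal(size=(n, 4))   # e.g. 2 design + 2 process variables (standardized)

# Toy simulator response that depends on only a few terms
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] * X[:, 2] + rng.normal(0, 0.05, n)

# Quadratic polynomial basis, then a sparse (lasso) fit picks the active terms
features = PolynomialFeatures(degree=2, include_bias=False)
Xp = features.fit_transform(X)
surrogate = Lasso(alpha=0.01).fit(Xp, y)

r2 = surrogate.score(Xp, y)
n_active = int(np.sum(surrogate.coef_ != 0))   # sparsity of the surrogate
```

Once fitted, the cheap surrogate replaces most SPICE calls inside the yield-estimation loop, which is where the reported speed-up comes from.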
Applying an economical scale-aware PDF-based turbulence closure model in NOAA NCEP GCMs
NASA Astrophysics Data System (ADS)
Belochitski, A.; Krueger, S. K.; Moorthi, S.; Bogenschutz, P.; Pincus, R.
2016-12-01
A novel unified representation of sub-grid scale (SGS) turbulence, cloudiness, and shallow convection is being implemented into the NOAA NCEP Global Forecast System (GFS) general circulation model. The approach, known as Simplified High Order Closure (SHOC), is based on predicting a joint PDF of SGS thermodynamic variables and vertical velocity and using it to diagnose turbulent diffusion coefficients, SGS fluxes, condensation and cloudiness. Unlike other similar methods, only one new prognostic variable, turbulent kinetic energy (TKE), needs to be introduced, making the technique computationally efficient. SHOC is now incorporated into a version of GFS, as well as into the next generation of the NCEP global model, the NOAA Environmental Modeling System (NEMS). Turbulent diffusion coefficients computed by SHOC are now used in place of those produced by the boundary layer turbulence and shallow convection parameterizations. The large-scale microphysics scheme is no longer used to calculate cloud fraction or the large-scale condensation/deposition; instead, SHOC provides these variables. The radiative transfer parameterization uses cloudiness computed by SHOC. Outstanding problems include high-level tropical cloud fraction being too high in SHOC runs, possibly related to the interaction of SHOC with condensate detrained from deep convection. Future work will consist of evaluating model performance and tuning the physics if necessary, by performing medium-range NWP forecasts with prescribed initial conditions, and AMIP-type climate tests with prescribed SSTs. Depending on the results, the model will be tuned or parameterizations modified. Next, SHOC will be implemented in the NCEP CFS, and tuned and evaluated for climate applications: seasonal prediction and long coupled climate runs. The impact of the new physics on ENSO, MJO, ISO, monsoon variability, etc. will be examined.
Sun, Guibo; Webster, Chris; Ni, Michael Y; Zhang, Xiaohu
2018-05-07
Uncertainty with respect to built environment (BE) data collection, measure conceptualization and spatial scales is evident in urban health research, but most findings are from relatively low-density contexts. We selected Hong Kong, an iconic high-density city, as the study area, as limited research has been conducted on uncertainty in such areas. We used geocoded home addresses (n=5732) from a large population-based cohort in Hong Kong to extract BE measures for the participants' place of residence based on an internationally recognized BE framework. Variability of the measures was mapped and Spearman's rank correlation calculated to assess how well the relationships among indicators are preserved across variables and spatial scales. We found extreme variations and uncertainties for the 180 measures collected using comprehensive data and advanced geographic information systems modelling techniques. We highlight the implications of methodological selection and spatial scales of the measures. The results suggest that more robust information regarding urban health research in high-density cities would emerge if greater consideration were given to BE data, design methods and spatial scales of the BE measures.
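The rank-preservation check described above can be sketched with a Spearman correlation between the same measure computed at two spatial scales. The measure (a density indicator in two buffer sizes) and its noise structure are synthetic assumptions.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(6)
n = 500
# Synthetic BE measure at a fine buffer, and a noisier version at a coarse buffer
density_200m = rng.lognormal(mean=3.0, sigma=0.5, size=n)
density_800m = density_200m * rng.lognormal(mean=0.0, sigma=0.3, size=n)

# Spearman's rho: how well is the ranking of residences preserved across scales?
rho, pval = spearmanr(density_200m, density_800m)
```

A rho well below 1 despite the two measures nominally describing the same quantity is exactly the kind of scale-dependent uncertainty the study maps.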
Novel high-frequency, high-power, pulsed oscillator based on a transmission line transformer.
Burdt, R; Curry, R D
2007-07-01
Recent analysis and experiments have demonstrated the potential for transmission line transformers to be employed as compact, high-frequency, high-power, pulsed oscillators with variable rise time, high output impedance, and high operating efficiency. A prototype system was fabricated and tested that generates a damped sinusoidal waveform at a center frequency of 4 MHz into a 200 Ω load, with operating efficiency above 90% and peak power on the order of 10 MW. The initial rise time of the pulse is variable, and two experiments were conducted to demonstrate initial rise times of 12 and 3 ns, corresponding to spectral content from 4-30 and from 4-100 MHz, respectively. A SPICE model has been developed to accurately predict the circuit behavior, and scaling laws have been identified to allow for circuit design at higher frequencies and higher peak power. The applications, circuit analysis, test stand, experimental results, circuit modeling, and design of future systems are all discussed.
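The reported output can be modeled as a damped sinusoid, v(t) = V0·exp(-t/τ)·sin(2πf0·t), with peak power V0²/R into the 200 Ω load. The decay constant and peak voltage below are illustrative assumptions chosen to match the quoted 4 MHz / ~10 MW figures, not measured circuit values.

```python
import numpy as np

f0 = 4e6      # center frequency (Hz), from the paper
R = 200.0     # load impedance (ohm), from the paper
tau = 1e-6    # assumed decay constant (s)
v0 = 45e3     # assumed peak voltage: v0**2 / R is on the order of 10 MW

t = np.arange(0.0, 5e-6, 1e-9)
v = v0 * np.exp(-t / tau) * np.sin(2 * np.pi * f0 * t)

peak_power = v0**2 / R                      # instantaneous peak power into the load

# Locate the spectral peak of the synthetic waveform
spectrum = np.abs(np.fft.rfft(v))
freqs = np.fft.rfftfreq(t.size, d=1e-9)
f_peak = freqs[np.argmax(spectrum)]
```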
Evidence and mapping of extinction debts for global forest-dwelling reptiles, amphibians and mammals
NASA Astrophysics Data System (ADS)
Chen, Youhua; Peng, Shushi
2017-03-01
Evidence of extinction debts in the global distributions of forest-dwelling reptiles, mammals and amphibians was tested, and the debt magnitude was estimated and mapped. Using different correlation tests and variable importance analysis, the results showed that spatial richness patterns for the three forest-dwelling terrestrial vertebrate groups had significant and stronger correlations with past forest cover area and other variables in the 1500s, implying evidence of extinction debts. Moreover, the extinction debts have likely been partially paid, given that global richness patterns were also significantly correlated with contemporary forest variables in the 2000s (although the absolute magnitudes of the correlation coefficients were usually smaller than those calculated for the historical forest variables). Using species-area relationships, spatial extinction-debt magnitudes for the three vertebrate groups were estimated at the global scale and hotspots of extinction debt were identified. These high-debt hotspots were generally situated in areas that largely did not overlap spatially with hotspots of species richness or with high extinction-risk areas based on IUCN threat status. This spatial mismatch suggests that conservation efforts should be directed toward high-debt areas that are currently overlooked.
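The species-area calculation behind the debt estimate can be sketched directly. The constants c and z and the forest areas below are illustrative, not the study's fitted values.

```python
# Sketch of the species-area relationship (SAR) approach to extinction debt:
# richness S = c * A**z. The debt is the richness supported by past forest
# area minus the equilibrium richness for today's (smaller) area.

def sar_richness(area, c=10.0, z=0.25):
    """Equilibrium species richness for a given habitat area (power-law SAR)."""
    return c * area ** z

area_1500s = 100.0  # past forest area (arbitrary units, assumed)
area_2000s = 60.0   # contemporary forest area (assumed)

s_past = sar_richness(area_1500s)
s_equilibrium = sar_richness(area_2000s)
extinction_debt = s_past - s_equilibrium  # species "committed to extinction"
```

A positive debt means current richness still reflects the larger historical habitat and is expected to decline toward the new equilibrium.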
Time-resolved High Spectral Resolution Observation of 2MASSW J0746425+200032AB
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Ji; Mawet, Dimitri; Prato, Lisa, E-mail: ji.wang@caltech.edu
Many brown dwarfs (BDs) exhibit photometric variability at levels from tenths of a percent to tens of percent. The photometric variability is related to magnetic activity or to the patchy cloud coverage characteristic of BDs near the L–T transition. Time-resolved spectral monitoring of BDs provides diagnostics of cloud distribution and condensate properties. However, current time-resolved spectral studies of BDs are limited to low spectral resolution (R ∼ 100), with the exception of the study of Luhman 16 AB at a resolution of 100,000 using the VLT+CRIRES. That work yielded the first map of BD surface inhomogeneity, highlighting the importance and unique contribution of high spectral resolution observations. Here, we report time-resolved high spectral resolution observations of a nearby BD binary, 2MASSW J0746425+200032AB. We find no coherent spectral variability that is modulated with rotation. Based on simulations, we conclude that the coverage of a single spot on 2MASSW J0746425+200032AB is smaller than 1% or 6.25% if the spot contrast is 50% or 80% of the surrounding flux, respectively. Future high spectral resolution observations aided by adaptive optics systems can put tighter constraints on the spectral variability of 2MASSW J0746425+200032AB and other nearby BDs.
A diagnostic model for chronic hypersensitivity pneumonitis.
Johannson, Kerri A; Elicker, Brett M; Vittinghoff, Eric; Assayag, Deborah; de Boer, Kaïssa; Golden, Jeffrey A; Jones, Kirk D; King, Talmadge E; Koth, Laura L; Lee, Joyce S; Ley, Brett; Wolters, Paul J; Collard, Harold R
2016-10-01
The objective of this study was to develop a diagnostic model that allows for a highly specific diagnosis of chronic hypersensitivity pneumonitis using clinical and radiological variables alone. Chronic hypersensitivity pneumonitis and other interstitial lung disease cases were retrospectively identified from a longitudinal database. High-resolution CT scans were blindly scored for radiographic features (e.g., ground-glass opacity, mosaic perfusion) as well as the radiologist's diagnostic impression. Candidate models were developed and then evaluated using clinical and radiographic variables and assessed by the cross-validated C-statistic. Forty-four chronic hypersensitivity pneumonitis and eighty other interstitial lung disease cases were identified. Two models were selected based on their statistical performance, clinical applicability and face validity. Key model variables included age, down feather and/or bird exposure, radiographic presence of ground-glass opacity and mosaic perfusion, and moderate or high confidence in the radiographic impression of chronic hypersensitivity pneumonitis. Models were internally validated with good performance, and cut-off values were established that resulted in high specificity for a diagnosis of chronic hypersensitivity pneumonitis.
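The C-statistic used to assess the models has a simple concordance interpretation that can be computed directly: the probability that a randomly chosen case receives a higher model score than a randomly chosen control. The scores below are invented for illustration.

```python
# Sketch: the C-statistic (area under the ROC curve) as a concordance
# probability between case and control model scores. Scores are illustrative,
# not outputs of the study's diagnostic models.

def c_statistic(case_scores, control_scores):
    """P(case score > control score), counting ties as half."""
    wins = ties = 0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                wins += 1
            elif c == k:
                ties += 1
    return (wins + 0.5 * ties) / (len(case_scores) * len(control_scores))

chp = [0.9, 0.8, 0.75, 0.6]       # hypothetical scores for cHP cases
other_ild = [0.7, 0.4, 0.3, 0.2]  # hypothetical scores for other ILD cases
auc = c_statistic(chp, other_ild)
```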
Large-scale human skin lipidomics by quantitative, high-throughput shotgun mass spectrometry.
Sadowski, Tomasz; Klose, Christian; Gerl, Mathias J; Wójcik-Maciejewicz, Anna; Herzog, Ronny; Simons, Kai; Reich, Adam; Surma, Michal A
2017-03-07
The lipid composition of human skin is essential for its function; however, the simultaneous quantification of a wide range of stratum corneum (SC) and sebaceous lipids is not trivial. We developed and validated a quantitative, high-throughput, shotgun mass spectrometry-based platform for lipid analysis of tape-stripped SC skin samples. It features coverage of 16 lipid classes, total quantification to the level of individual lipid molecules, high reproducibility, and high-throughput capability. With this method we conducted a large lipidomic survey of 268 human SC samples, in which we investigated the relationship between sampling depth and lipid composition, lipidome variability in samples from 14 different sampling sites on the human body and, finally, the impact of age and sex on lipidome variability in 104 healthy subjects. We found sebaceous lipids to constitute an abundant component of the SC lipidome, as they diffuse into the topmost SC layers, forming a gradient. Lipidomic variability with respect to sampling depth, site and subject is considerable and is mainly attributable to sebaceous lipids, while stratum corneum lipids vary less. This stresses the importance of sampling design and the role of sebaceous lipids in skin studies.
Cardenas, M.B.; Harvey, J.W.; Packman, A.I.; Scott, D.T.
2008-01-01
Temperature is a primary physical and biogeochemical variable in aquatic systems. Field-based measurement of temperature at discrete sampling points has revealed temperature variability in fluvial systems, but traditional techniques do not readily allow synoptic sampling schemes that can address temperature-related questions with broad yet detailed coverage. We present results of thermal infrared imaging at different stream discharge conditions (base flow and peak flood) using a handheld IR camera. Remotely sensed temperatures compare well with those measured with a digital thermometer. The thermal images show that periphyton, wood, and sandbars induce significant thermal heterogeneity during low stages. Moreover, the images indicate temperature variability within the periphyton community and within the partially submerged bars. This thermal heterogeneity was diminished during flood inundation, although areas of more slowly moving water at the stream margins still differed in temperature. The results have consequences for thermally sensitive hydrological processes and implications for models of those processes, especially those that assume an effective stream temperature. Copyright © 2008 John Wiley & Sons, Ltd.
Caffo, Brian; Diener-West, Marie; Punjabi, Naresh M.; Samet, Jonathan
2010-01-01
This manuscript considers a data-mining approach for the prediction of mild obstructive sleep disordered breathing, defined as an elevated respiratory disturbance index (RDI), in 5,530 participants in a community-based study, the Sleep Heart Health Study. The prediction algorithm was built using modern ensemble learning algorithms, specifically boosting, which allowed potential high-dimensional interactions between predictor variables, or classifiers, to be assessed. To evaluate the performance of the algorithm, the data were split into training and validation sets for varying thresholds for predicting the probability of a high RDI (≥ 7 events per hour in the given results). Based on a moderate classification threshold from the boosting algorithm, the estimated post-test odds of a high RDI were 2.20 times higher than the pre-test odds given a positive test, while the corresponding post-test odds were decreased by 52% given a negative test (sensitivity and specificity of 0.66 and 0.70, respectively). In rank order, the following variables had the largest impact on prediction performance: neck circumference, body mass index, age, snoring frequency, waist circumference, and snoring loudness. Citation: Caffo B; Diener-West M; Punjabi NM; Samet J. A novel approach to prediction of mild obstructive sleep disordered breathing in a population-based sample: the Sleep Heart Health Study. SLEEP 2010;33(12):1641-1648. PMID:21120126
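The reported post-test odds follow from the sensitivity and specificity via likelihood ratios, which can be checked in a few lines of arithmetic:

```python
# Worked check of the reported numbers: with sensitivity 0.66 and specificity
# 0.70, a positive test multiplies the pre-test odds by LR+ ~ 2.2, and a
# negative test multiplies them by LR- ~ 0.49 (roughly a 52% reduction).

sens, spec = 0.66, 0.70
lr_pos = sens / (1 - spec)   # post-test odds = pre-test odds * LR+
lr_neg = (1 - sens) / spec   # post-test odds = pre-test odds * LR-
decrease = 1 - lr_neg        # fractional odds reduction given a negative test
```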
NASA Astrophysics Data System (ADS)
Mulkerrin, Elizabeth A.
The purpose of this study was to determine the effect of an 11th- and 12th-grade zoo-based experiential academic high school science program, compared to a same-district school-based experiential academic high school science program, on students' pretest and posttest science, math, and reading achievement, and on student perceptions of program relevance, rigor, and relationships. Science coursework delivery site served as the study's independent variable for two naturally formed groups: students (n = 18) who completed the zoo-based program and students (n = 18) who completed the school-based program. Students in the zoo-based program completed real-world, hands-on projects at the zoo, while students in the school-based program completed real-world, simulated projects in the classroom. These groups comprised the two research arms of the study, and both were drawn from the same school district. The study's two dependent variables were achievement and school climate. Achievement was analyzed using norm-referenced 11th-grade pretest PLAN and 12th-grade posttest ACT composite scores. Null hypotheses were rejected in the direction of improved test scores for both groups: students who completed the zoo-based program (p < .001) and students who completed the school-based program (p < .001). The posttest ACT composite score comparison was not statistically different (p = .93), indicating program equipoise for students enrolled in both science programs. 
No overall weighted grade point average improvement was observed for students in either group; however, null hypotheses were rejected in the direction of improved science grade point averages for 11th-grade (p < .01) and 12th-grade (p = .01) students who completed the zoo-based program. Null hypotheses were not rejected for between-group posttest science grade point averages or for school district criterion-referenced math and reading test scores. Finally, students who completed the zoo-based program had statistically improved pretest-posttest perceptions of program relationships (p < .05) and, compared to students who completed the school-based program, had statistically greater posttest perceptions of program relevance (p < .001), rigor (p < .001), and relationships (p < .001).
Predicting Power Outages Using Multi-Model Ensemble Forecasts
NASA Astrophysics Data System (ADS)
Cerrai, D.; Anagnostou, E. N.; Yang, J.; Astitha, M.
2017-12-01
Every year, power outages affect millions of people in the United States, harming the economy and disrupting everyday life. An Outage Prediction Model (OPM) has been developed at the University of Connecticut to help utilities restore outages quickly and limit their adverse consequences for the population. The OPM, operational since 2015, combines several non-parametric machine learning (ML) models that use historical weather storm simulations, high-resolution weather forecasts, satellite remote sensing data, and infrastructure and land cover data to predict the number and spatial distribution of power outages. This study presents a new methodology, developed to improve outage model performance, that combines weather- and soil-related variables from three different weather models (WRF 3.7, WRF 3.8 and RAMS/ICLAMS). First, we present a performance evaluation of each model variable, comparing historical weather analyses with station data or reanalysis over the entire storm data set. Each variable of the new outage model version is then extracted from the weather model that performs best for that variable, and sensitivity tests are performed to identify the most efficient variable combination for outage prediction. Although the final variable combination is drawn from different weather models, this multi-weather-forcing, multi-statistical-model ensemble for power outage prediction outperforms the currently operational OPM version based on a single weather forcing (WRF 3.7), because each model component is closest to the actual atmospheric state.
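The per-variable model-selection step can be sketched as picking, for each predictor, the weather model with the lowest historical error. The model names match the text; the variable names and error numbers are invented for illustration.

```python
# Sketch of the multi-model selection step: for each predictor, keep the
# weather model whose historical analyses score best (lowest RMSE) against
# observations. Error values below are invented.

def best_model(errors_by_model):
    """Return the model name with the smallest error."""
    return min(errors_by_model, key=errors_by_model.get)

rmse_scores = {
    "wind_gust":     {"WRF 3.7": 2.1,  "WRF 3.8": 1.8,  "RAMS/ICLAMS": 2.4},
    "precipitation": {"WRF 3.7": 0.9,  "WRF 3.8": 1.1,  "RAMS/ICLAMS": 0.7},
    "soil_moisture": {"WRF 3.7": 0.15, "WRF 3.8": 0.12, "RAMS/ICLAMS": 0.18},
}
best_source = {var: best_model(models) for var, models in rmse_scores.items()}
# The ensemble's inputs then mix sources, one per variable.
```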
Yasuda, Akihito; Onuki, Yoshinori; Obata, Yasuko; Takayama, Kozo
2015-01-01
The "quality by design" concept in pharmaceutical formulation development requires the establishment of a science-based rationale and design space. In this article, we integrate thin-plate spline (TPS) interpolation, Kohonen's self-organizing map (SOM) and a Bayesian network (BN) to visualize the latent structure underlying causal factors and pharmaceutical responses. As a model pharmaceutical product, theophylline tablets were prepared using a standard formulation. We measured the tensile strength and disintegration time as response variables and the compressibility, cohesion and dispersibility of the pretableting blend as latent variables. We predicted these variables quantitatively using nonlinear TPS, generated a large amount of data on pretableting blends and tablets, and clustered these data using a SOM. Our results show that the experimental values of the latent and response variables can be predicted with a high degree of accuracy and that the tablet data can be classified into several distinct clusters. In addition, to visualize the latent structure between the causal and latent factors and the response variables, we applied a BN method to the SOM clustering results. We found that, even with latent variables inserted between the causal factors and response variables, the relations remain consistent with the SOM clustering results, and thus the underlying latent structure can be explained. Consequently, this technique provides a better understanding of the relationships between causal factors and pharmaceutical responses in theophylline tablet formulation.
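The SOM clustering step can be illustrated with a minimal one-dimensional Kohonen map: each sample pulls its best-matching unit (and that unit's map neighbours) toward it. Two features stand in for, e.g., tensile strength and disintegration time; all numbers, sizes and rates are illustrative assumptions.

```python
# Minimal sketch of a 1-D Kohonen self-organizing map (SOM) update rule.
# Not the article's implementation; a generic SOM with invented data.
import math
import random

def train_som(data, n_units=4, epochs=50, lr=0.3, sigma=1.0, seed=0):
    rng = random.Random(seed)
    units = [[rng.random(), rng.random()] for _ in range(n_units)]
    for _ in range(epochs):
        for x in data:
            # best-matching unit = closest unit in feature space
            bmu = min(range(n_units),
                      key=lambda i: sum((u - v) ** 2 for u, v in zip(units[i], x)))
            for i in range(n_units):
                # Gaussian neighbourhood on the 1-D map topology
                h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
                units[i] = [u + lr * h * (v - u) for u, v in zip(units[i], x)]
    return units

# two invented clusters of (scaled) tablet measurements
data = [[0.1, 0.2], [0.15, 0.25], [0.9, 0.8], [0.85, 0.9]]
units = train_som(data)
```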
Novel Strategy to Evaluate Infectious Salmon Anemia Virus Variants by High Resolution Melting
Sepúlveda, Dagoberto; Cárdenas, Constanza; Carmona, Marisela; Marshall, Sergio H.
2012-01-01
Genetic variability is a key problem in the prevention and therapy of RNA-virus infections. Infectious Salmon Anemia virus (ISAv) is an RNA virus that aggressively attacks salmon farms worldwide, and in Chile in particular. As with most orthomyxoviruses, ISAv displays high genomic variability, which is reflected in a wider infection potential and hampers management and prevention of the disease. Although a number of widely validated detection procedures exist, characterizing virus variability requires a more refined approach. We have adapted a High Resolution Melting (HRM) procedure as a fine-tuning technique to fully differentiate the viral variants detected in Chile, with projection to other infective variants reported elsewhere. Of the eight viral coding segments, the technique was adapted using natural Chilean variants for two, segments 5 and 6, recognized as virulence-associated factors. Our work demonstrates the versatility of the technique as well as its superior resolution compared with the standard techniques currently in use as key diagnostic tools. PMID:22719837
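The core HRM readout can be sketched numerically: fluorescence drops as the amplicon melts, and the melting temperature Tm is the peak of -dF/dT; a sequence variant shifts Tm slightly. The sigmoidal curve, temperatures and Tm shift below are simulated, not ISAv data.

```python
# Sketch of High Resolution Melting (HRM) analysis: locate Tm as the peak of
# the negative derivative of a (simulated) fluorescence melt curve.
import math

def fluorescence(t, tm, width=1.5):
    # simple sigmoidal melt curve (illustrative, not a thermodynamic model)
    return 1.0 / (1.0 + math.exp((t - tm) / width))

def tm_from_curve(temps, fl):
    """Tm = temperature at the peak of -dF/dT (central finite differences)."""
    derivs = [-(fl[i + 1] - fl[i - 1]) / (temps[i + 1] - temps[i - 1])
              for i in range(1, len(fl) - 1)]
    return temps[1 + max(range(len(derivs)), key=derivs.__getitem__)]

temps = [70 + 0.1 * i for i in range(200)]           # 70-90 degC sweep
wild_type = [fluorescence(t, tm=80.0) for t in temps]
variant = [fluorescence(t, tm=80.6) for t in temps]  # variant melts higher

tm_wt = tm_from_curve(temps, wild_type)
tm_var = tm_from_curve(temps, variant)
```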
Variation of surface ozone in Campo Grande, Brazil: meteorological effect analysis and prediction.
Pires, J C M; Souza, A; Pavão, H G; Martins, F G
2014-09-01
The effect of meteorological variables on surface ozone (O3) concentrations was analysed based on the temporal variation of linear correlations and on artificial neural network (ANN) models defined by genetic algorithms (GAs). ANN models were also used to predict the daily average concentration of this air pollutant in Campo Grande, Brazil. Three methodologies were applied using GAs, two of them considering threshold models. In these models, the variables selected to define the different regimes were daily average O3 concentration, relative humidity and solar radiation. The threshold model that considers two O3 regimes was the one that correctly described the effect of important meteorological variables on O3 behaviour, while also showing good predictive performance. Solar radiation, relative humidity and rainfall were significant for both O3 regimes; however, wind speed (a dispersion effect) was only significant for high concentrations. According to this model, high O3 concentrations corresponded to high solar radiation, low relative humidity and low wind speed. The model proved to be a powerful tool for interpreting O3 behaviour and is useful for defining policy strategies for the protection of human health from air pollution.
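The two-regime idea can be sketched with simple linear stand-ins for the GA-selected ANN models: one sub-model for low-O3 conditions and one for high-O3 conditions, with wind speed entering only the high regime, as in the findings above. The threshold and all coefficients are invented for illustration.

```python
# Sketch of a two-regime threshold model for daily O3 (illustrative linear
# stand-ins for the paper's GA-defined ANN models; all numbers invented).

THRESHOLD = 40.0  # assumed regime split on the previous day's O3 (ug/m3)

def predict_o3(prev_o3, solar, rel_humidity, wind_speed):
    if prev_o3 <= THRESHOLD:  # low regime: wind speed not significant
        return 5.0 + 0.08 * solar - 0.15 * rel_humidity
    # high regime: wind speed (dispersion) enters with a negative sign
    return 10.0 + 0.10 * solar - 0.20 * rel_humidity - 1.5 * wind_speed

low = predict_o3(prev_o3=30, solar=400, rel_humidity=60, wind_speed=3)
high = predict_o3(prev_o3=55, solar=700, rel_humidity=30, wind_speed=1)
```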
Statistical and Biophysical Models for Predicting Total and Outdoor Water Use in Los Angeles
NASA Astrophysics Data System (ADS)
Mini, C.; Hogue, T. S.; Pincetl, S.
2012-04-01
Modeling water demand is a complex exercise in the choice of functional form, techniques and variables to include in the model. The goal of the current research is to identify the determinants that control total and outdoor residential water use in semi-arid cities and to use that information to develop statistical and biophysical models that can forecast spatial and temporal urban water use. The City of Los Angeles is unique in its highly diverse socio-demographic, economic and cultural characteristics across neighborhoods, which introduces significant challenges in modeling water use. Increasing climate variability also contributes to uncertainty in water use predictions in urban areas. Monthly individual water use records were acquired from the Los Angeles Department of Water and Power (LADWP) for the 2000 to 2010 period. Study predictors of residential water use include socio-demographic, economic, climate and landscaping variables at the zip code level collected from the US Census database. Climate variables are estimated from ground-based observations and calculated at the centroid of each zip code using an inverse-distance weighting method. Remotely sensed products of vegetation biomass and landscape land cover are also utilized. Two linear regression models were developed based on the panel data and variables described: a pooled-OLS regression model and a linear mixed effects model. Both models show income per capita and the percentage of landscaped area in each zip code to be statistically significant predictors. The pooled-OLS model tends to over-estimate higher water use zip codes, and both models provide similar RMSE values. Outdoor water use was estimated at the census tract level as the residual between total water use and indoor use. This residual is being compared with the output of a biophysical model that includes tree and grass cover areas, climate variables and estimates of evapotranspiration at very high spatial resolution. 
A genetic algorithm-based model (Shuffled Complex Evolution; SCE-UA) is also being developed to provide estimates of prediction and parameter uncertainties and to compare against the linear regression models. Ultimately, models will be selected to undertake predictions for a range of climate change and landscape scenarios. Finally, project results will contribute to a better understanding of water demand, helping to predict future water use and to implement targeted landscape conservation programs that maintain sustainable water supplies for a growing population under uncertain climate variability.
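The pooled-OLS idea above can be sketched with a one-predictor least-squares fit and its RMSE. One predictor and made-up data keep the example minimal; the study used panel data with several predictors and a mixed-effects alternative.

```python
# Sketch: ordinary least squares of zip-code water use on income per capita,
# with RMSE as the fit metric. Data are invented for illustration.

def ols_fit(x, y):
    """Return (intercept, slope) minimizing squared error."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))
    alpha = my - beta * mx
    return alpha, beta

def rmse(x, y, alpha, beta):
    n = len(x)
    return (sum((b - (alpha + beta * a)) ** 2 for a, b in zip(x, y)) / n) ** 0.5

income = [20, 35, 50, 65, 80]     # income per capita (k$, hypothetical)
water = [95, 118, 150, 178, 205]  # monthly water use (arbitrary units)
alpha, beta = ols_fit(income, water)
err = rmse(income, water, alpha, beta)
```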
NASA Astrophysics Data System (ADS)
Franke, Jasper G.; Werner, Johannes; Donner, Reik V.
2017-04-01
The increasing availability of high-resolution North Atlantic paleoclimate proxies makes it possible not only to study local climate variations in time, but also temporal changes in spatial variability patterns across the entire region, possibly controlled by large-scale coherent variability modes such as the North Atlantic Oscillation (NAO) and the Atlantic Multidecadal Oscillation. In this study, we use functional paleoclimate network analysis [1,2] to investigate changes in the statistical similarity patterns among an ensemble of high-resolution terrestrial paleoclimate records from Northern Europe included in the Arctic 2k database. Specifically, we construct complex networks capturing the mutual statistical similarity of inter-annual temperature variability recorded in tree ring records, ice cores and lake sediments for multidecadal time windows covering the last two millennia. The observed patterns of co-variability are ultimately connected to the North Atlantic atmospheric circulation and most prominently to multidecadal variations of the NAO. Based on the inferred networks, we study the dynamical similarity between regional clusters of archives defined according to present-day inter-annual temperature variations across the study region. This analysis identifies the time-dependent inter-regional linkages that are most informative about leading-order North Atlantic climate variability according to a recent NAO reconstruction for the last millennium [3]. Based on these linkages, we extend the existing reconstruction to obtain qualitative information on multidecadal- to centennial-scale North Atlantic climate variability over the last two millennia. In general, we find a tendency towards a dominant positive NAO phase interrupted by pronounced and extended intervals of negative NAO. Relatively rapid transitions between the two types of behaviour are present during distinct periods, including the Little Ice Age, the Medieval Climate Anomaly and the Dark Ages Little Ice Age. 
[1] K. Rehfeld, N. Marwan, S.F.M. Breitenbach, J. Kurths: Late Holocene Asian summer monsoon dynamics from small but complex networks of paleoclimate data. Climate Dynamics 41, 3-19, 2013
[2] J.L. Oster, N.P. Kelley: Tracking regional and global teleconnections recorded by western North American speleothem records. Quaternary Science Reviews 149, 18-33, 2016
[3] P. Ortega, F. Lehner, D. Swingedouw, V. Masson-Delmotte, C.C. Raible, M. Casado, P. Yiou: A model-tested North Atlantic Oscillation reconstruction for the past millennium. Nature 523, 71-74, 2015
Omari, Taher I.; Savilampi, Johanna; Kokkinn, Karmen; Schar, Mistyka; Lamvik, Kristin; Doeltgen, Sebastian; Cock, Charles
2016-01-01
Purpose. We evaluated the intra- and interrater agreement and test-retest reliability of analyst derivation of swallow function variables based on repeated high resolution manometry with impedance (HRIM) measurements. Methods. Five subjects swallowed 10 × 10 mL saline on two occasions one week apart, producing a database of 100 swallows. Swallows were repeat-analysed by six observers using software. Swallow variables were indicative of contractility, intrabolus pressure, and flow timing. Results. The average intraclass correlation coefficients (ICC) for intra- and interrater comparisons of all variable means showed substantial to excellent agreement (intrarater ICC 0.85–1.00; mean interrater ICC 0.77–1.00). Test-retest results were less reliable. ICC for test-retest comparisons ranged from slight to excellent depending on the class of variable. Contractility variables differed most in terms of test-retest reliability. Amongst contractility variables, UES basal pressure showed excellent test-retest agreement (mean ICC 0.94), measures of UES postrelaxation contractile pressure showed moderate to substantial test-retest agreement (mean ICC 0.47–0.67), and test-retest agreement of pharyngeal contractile pressure ranged from slight to substantial (mean ICC 0.15–0.61). Conclusions. Test-retest reliability of HRIM measures depends on the class of variable. Measures of bolus distension pressure and flow timing appear to be more test-retest reliable than measures of contractility. PMID:27190520
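An intraclass correlation of the kind graded above can be computed from a subjects-by-raters table; the sketch below implements the simple one-way random-effects form, ICC(1,1), with invented pressure values (the study's exact ICC variant is not specified here).

```python
# Sketch of a one-way random-effects intraclass correlation, ICC(1,1):
# between-subject variance relative to total variance. Rows = swallows,
# columns = two analysts; values are illustrative pressures, not study data.

def icc_1_1(table):
    n = len(table)     # subjects
    k = len(table[0])  # raters per subject
    grand = sum(sum(row) for row in table) / (n * k)
    row_means = [sum(row) / k for row in table]
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)  # between
    msw = (sum((v - m) ** 2 for row, m in zip(table, row_means) for v in row)
           / (n * (k - 1)))                                       # within
    return (msb - msw) / (msb + (k - 1) * msw)

ratings = [[10.2, 10.5], [14.8, 15.1], [9.9, 10.0], [12.4, 12.0], [16.1, 16.4]]
icc = icc_1_1(ratings)  # near 1 = raters agree closely relative to subject spread
```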
NASA Astrophysics Data System (ADS)
Zhang, X.; Roman, M.; Kimmel, D.; McGilliard, C.; Boicourt, W.
2006-05-01
High-resolution, axial sampling surveys were conducted in Chesapeake Bay during April, July, and October from 1996 to 2000 using a towed sampling device equipped with sensors for depth, temperature, conductivity, oxygen, fluorescence, and an optical plankton counter (OPC). The results suggest that the axial distribution and variability of hydrographic and biological parameters in Chesapeake Bay were primarily influenced by the source and magnitude of freshwater input. Bay-wide spatial trends in the water column-averaged values of salinity were linear functions of distance from the main source of freshwater, the Susquehanna River, at the head of the bay. However, spatial trends in the water column-averaged values of temperature, dissolved oxygen, chlorophyll-a and zooplankton biomass were nonlinear along the axis of the bay. Autocorrelation analysis and the residuals of linear and quadratic regressions between each variable and latitude were used to quantify the patch sizes for each axial transect. The patch sizes of each variable depended on whether the data were detrended, and the detrending techniques applied. However, the patch size of each variable was generally larger using the original data compared to the detrended data. The patch sizes of salinity were larger than those for dissolved oxygen, chlorophyll-a and zooplankton biomass, suggesting that more localized processes influence the production and consumption of plankton. This high-resolution quantification of the zooplankton spatial variability and patch size can be used for more realistic assessments of the zooplankton forage base for larval fish species.
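The detrend-then-autocorrelate logic used to quantify patch sizes can be sketched on a synthetic along-axis series: remove a linear trend, then take the lag at which the autocorrelation first turns negative as a patch-length scale. The series, its trend and its "patch" period are invented.

```python
# Sketch of patch-size estimation: linear detrending followed by the first
# zero crossing of the autocorrelation function. Synthetic data only.
import math

def detrend_linear(y):
    n = len(y)
    x = list(range(n))
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((a - mx) * (c - my) for a, c in zip(x, y))
         / sum((a - mx) ** 2 for a in x))
    return [c - (my + b * (a - mx)) for a, c in zip(x, y)]

def acf(y, lag):
    n = len(y)
    m = sum(y) / n
    var = sum((v - m) ** 2 for v in y)
    return sum((y[i] - m) * (y[i + lag] - m) for i in range(n - lag)) / var

def first_zero_crossing(y):
    for lag in range(1, len(y) // 2):
        if acf(y, lag) < 0:
            return lag
    return None

# salinity-like linear trend plus a periodic "patch" signal (period 20)
series = [0.05 * i + math.sin(2 * math.pi * i / 20) for i in range(100)]
patch_lag = first_zero_crossing(detrend_linear(series))
```

Consistent with the abstract, skipping the detrending step inflates the apparent patch size, since the trend keeps the autocorrelation positive over longer lags.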
Weight-based discrimination: an ubiquitary phenomenon?
Sikorski, C; Spahlholz, J; Hartlev, M; Riedel-Heller, S G
2016-02-01
Despite strong indications of a high prevalence of weight-related stigmatization in individuals with obesity, limited attention has been given to the role of weight discrimination in examining the stigma of obesity. Studies to date rely on a limited number of data sets, and additional studies are needed to confirm previous findings. In particular, data for Europe are lacking, and are needed in light of a recent ruling of the European Court of Justice that addressed weight-based discrimination. The data were derived from a large representative telephone survey in Germany (n=3003). The dependent variable, weight-based discrimination, was assessed with a one-item question. The lifetime prevalence of weight discrimination across different sociodemographic variables was determined. Logistic regression models were used to assess the association of independent and dependent variables. A sub-group analysis was conducted for all participants with a body mass index ≥25 kg m⁻². The overall prevalence of weight-based discrimination was 7.3%. Large differences, however, were observed by weight status. In normal-weight and overweight participants the prevalence was 5.6%, but this number doubled in participants with obesity class I (10.2%), and quadrupled in participants with obesity class II (18.7%) and underweight (19.7%). Among participants with obesity class III, every third participant reported weight-based discrimination (38%). In regression models, after adjustment, the associations with weight status and female gender (odds ratio: 2.59, P<0.001) remained highly significant. Discrimination appears to be a ubiquitous phenomenon, at least for groups at special risk such as heavier individuals and women. Our findings therefore emphasize the need for research and intervention on weight discrimination among adults with obesity, including anti-discrimination legislation.
eClims: An Extensible and Dynamic Integration Framework for Biomedical Information Systems.
Savonnet, Marinette; Leclercq, Eric; Naubourg, Pierre
2016-11-01
Biomedical information systems (BIS) must address three types of variability: data variability induced by new high-throughput technologies, schema or model variability induced by large-scale studies or new fields of research, and knowledge variability resulting from new discoveries. Beyond data heterogeneity, managing variability in the context of BIS requires an extensible and dynamic integration process. In this paper, we focus on data and schema variability and propose an integration framework based on ontologies, master data, and semantic annotations. The framework addresses issues related to: 1) collaborative work through a dynamic integration process; 2) variability among studies through an annotation mechanism; and 3) quality control over data and semantic annotations. Our approach relies on two levels of knowledge: BIS-related knowledge is modeled using an application ontology coupled with UML models that allow data completeness and consistency to be controlled, and domain knowledge is described by a domain ontology, which ensures data coherence. A system built with the eClims framework has been implemented and evaluated in the context of a proteomic platform.
Evaluation of the FIR Example using Xilinx Vivado High-Level Synthesis Compiler
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Zheming; Finkel, Hal; Yoshii, Kazutomo
Compared to central processing units (CPUs) and graphics processing units (GPUs), field programmable gate arrays (FPGAs) have major advantages in reconfigurability and performance achieved per watt. The FPGA development flow has been augmented with a high-level synthesis (HLS) flow that can convert programs written in a high-level programming language to a Hardware Description Language (HDL). Using high-level programming languages such as C, C++, and OpenCL for FPGA-based development allows software developers with little FPGA knowledge to take advantage of FPGA-based application acceleration. This improves developer productivity and makes FPGA-based acceleration accessible to both hardware and software developers. The Xilinx Vivado HLS compiler is a high-level synthesis tool that enables C, C++, and SystemC specifications to be targeted directly at Xilinx FPGAs without the need to create RTL manually. A white paper [1] recently published by Xilinx uses a finite impulse response (FIR) example to demonstrate the variable-precision features in the Vivado HLS compiler and the resource and power benefits of converting a design from floating point to fixed point. To better understand the variable-precision features in terms of resource usage and performance, this report presents the experimental results of evaluating the FIR example using Vivado HLS 2017.1 and a Kintex UltraScale FPGA. In addition, we evaluated the half-precision floating-point data type against the double-precision and single-precision data types and present the detailed results.
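The floating-point to fixed-point conversion that the report evaluates can be illustrated with a plain-software sketch (the Q15 scaling, tap count, and coefficient values below are illustrative assumptions, not the parameters of the Xilinx FIR example):

```python
def fir_float(x, coeffs):
    """Direct-form FIR filter in floating point (reference behaviour)."""
    out, hist = [], [0.0] * len(coeffs)
    for sample in x:
        hist = [sample] + hist[:-1]
        out.append(sum(c * h for c, h in zip(coeffs, hist)))
    return out

def fir_fixed(x, coeffs, frac_bits=15):
    """Same FIR with coefficients and samples quantized to Q15 integers,
    mimicking the fixed-point arithmetic synthesized on the FPGA."""
    scale = 1 << frac_bits
    qc = [round(c * scale) for c in coeffs]      # quantized taps
    out, hist = [], [0] * len(qc)
    for sample in x:
        hist = [round(sample * scale)] + hist[:-1]
        acc = sum(c * h for c, h in zip(qc, hist))   # wide accumulator
        out.append(acc / (scale * scale))            # rescale for comparison
    return out

coeffs = [0.1, 0.3, 0.3, 0.1]                    # assumed low-pass taps
signal = [1.0, 0.5, -0.25, 0.75, 0.0]
ref = fir_float(signal, coeffs)
fix = fir_fixed(signal, coeffs)
max_err = max(abs(a - b) for a, b in zip(ref, fix))
```

The quantization error stays far below the signal level, which is the trade the white paper quantifies against the resource and power savings of fixed-point hardware.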
NASA Astrophysics Data System (ADS)
Zhou, Zongchuan; Dang, Dongsheng; Qi, Caijuan; Tian, Hongliang
2018-02-01
It is of great significance to make accurate forecasts of the power consumption of high energy-consuming industries. A forecasting model for the power consumption of high energy-consuming industries based on system dynamics is proposed in this paper. First, several factors that have influenced the development of high energy-consuming industries in recent years are carefully dissected. Next, by analysing the relationship between each factor and power consumption, a system dynamics flow diagram and equations are set up to reflect the relevant relationships among the variables. Finally, the validity of the model is verified by forecasting the power consumption of the electrolytic aluminium industry in Ningxia using the proposed model.
Weather Variability, Tides, and Barmah Forest Virus Disease in the Gladstone Region, Australia
Naish, Suchithra; Hu, Wenbiao; Nicholls, Neville; Mackenzie, John S.; McMichael, Anthony J.; Dale, Pat; Tong, Shilu
2006-01-01
In this study we examined the impact of weather variability and tides on the transmission of Barmah Forest virus (BFV) disease and developed a weather-based forecasting model for BFV disease in the Gladstone region, Australia. We used seasonal autoregressive integrated moving-average (SARIMA) models to determine the contribution of weather variables to BFV transmission after the time-series data of the response and explanatory variables were made stationary through seasonal differencing. We obtained data on the monthly counts of BFV cases, weather variables (e.g., mean minimum and maximum temperature, total rainfall, and mean relative humidity), high and low tides, and the population size in the Gladstone region between January 1992 and December 2001 from the Queensland Department of Health, Australian Bureau of Meteorology, Queensland Department of Transport, and Australian Bureau of Statistics, respectively. The SARIMA model shows that the 5-month moving average of minimum temperature (β = 0.15, p-value < 0.001) was statistically significantly and positively associated with BFV disease, whereas high tide in the current month (β = −1.03, p-value = 0.04) was statistically significantly and inversely associated with it. However, no significant association was found for the other variables. These results may be applied to forecast the occurrence of BFV disease and to allocate public health resources for BFV control and prevention. PMID:16675420
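The seasonal differencing used to make the SARIMA inputs stationary can be sketched as follows (the synthetic series is an invented illustration, not the Gladstone data):

```python
def seasonal_difference(series, lag=12):
    """Remove a seasonal cycle of period `lag` by differencing y_t - y_(t-lag),
    the transformation used to make SARIMA inputs stationary."""
    return [series[t] - series[t - lag] for t in range(lag, len(series))]

# Synthetic monthly series: linear trend plus a repeating annual cycle.
series = [t + (t % 12) for t in range(36)]
# After 12-month differencing, trend and cycle collapse to a constant.
diff = seasonal_difference(series)
```

A constant differenced series is trivially stationary; on real case counts the differencing removes the annual cycle so that the remaining autocorrelation can be modelled by the SARIMA terms.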
Quasar spectral variability from the XMM-Newton serendipitous source catalogue
NASA Astrophysics Data System (ADS)
Serafinelli, R.; Vagnetti, F.; Middei, R.
2017-04-01
Context. X-ray spectral variability analyses of active galactic nuclei (AGN) with moderate luminosities and redshifts typically show a "softer when brighter" behaviour. Such a trend has rarely been investigated for high-luminosity AGNs (Lbol ≳ 10^44 erg/s), nor for a wider redshift range (e.g. 0 ≲ z ≲ 5). Aims: We present an analysis of spectral variability based on a large sample of 2700 quasars, measured at several different epochs, extracted from the fifth release of the XMM-Newton Serendipitous Source Catalogue. Methods: We quantified the spectral variability through the parameter β, defined as the ratio between the change in the photon index Γ and the corresponding logarithmic flux variation, β = -ΔΓ/Δlog FX. Results: Our analysis confirms a softer when brighter behaviour for our sample, extending the previously found general trend to high luminosity and redshift. We estimate an ensemble value of the spectral variability parameter β = -0.69 ± 0.03. We do not find a dependence of β on redshift, X-ray luminosity, black hole mass, or Eddington ratio. A subsample of radio-loud sources shows a smaller spectral variability parameter. There is also some change with X-ray flux, with smaller β (in absolute value) for brighter sources. We also find significant correlations for a small number of individual sources, indicating more negative values for some sources.
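A minimal sketch of the spectral variability parameter defined in the Methods (the two-epoch values below are invented for illustration):

```python
import math

def spectral_beta(gamma1, flux1, gamma2, flux2):
    """beta = -(Gamma2 - Gamma1) / (log10 F2 - log10 F1).
    beta < 0 means the spectrum softens (Gamma rises) as the source
    brightens, i.e. the softer-when-brighter behaviour."""
    return -(gamma2 - gamma1) / (math.log10(flux2) - math.log10(flux1))

# Invented two-epoch measurement: the source doubles in flux and softens.
beta = spectral_beta(1.8, 1.0e-13, 2.0, 2.0e-13)
```

With ΔΓ = +0.2 over a factor-of-two flux increase, β ≈ -0.66, close in magnitude to the ensemble value quoted in the abstract.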
Zhang, Hanyuan; Tian, Xuemin; Deng, Xiaogang; Cao, Yuping
2018-05-16
As an attractive nonlinear dynamic data analysis tool, global preserving kernel slow feature analysis (GKSFA) has achieved great success in extracting the high nonlinearity and inherently time-varying dynamics of batch processes. However, GKSFA is an unsupervised feature extraction method and lacks the ability to utilize batch process class label information, which may not offer the most effective means for batch process monitoring. To overcome this problem, we propose a novel batch process monitoring method based on a modified GKSFA, referred to as discriminant global preserving kernel slow feature analysis (DGKSFA), which closely integrates discriminant analysis and GKSFA. The proposed DGKSFA method can extract discriminant features of a batch process as well as preserve the global and local geometrical structure information of the observed data. For the purpose of fault detection, a monitoring statistic is constructed based on the distance between the optimal kernel feature vectors of test data and normal data. To tackle the challenging issue of nonlinear fault variable identification, a new nonlinear contribution plot method is also developed to help identify the fault variables after a fault is detected, derived from the idea of variable pseudo-sample trajectory projection in the DGKSFA nonlinear biplot. Simulation results on a numerical nonlinear dynamic system and the benchmark fed-batch penicillin fermentation process demonstrate that the proposed process monitoring and fault diagnosis approach can effectively detect faults and distinguish fault variables from normal variables. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations
NASA Technical Reports Server (NTRS)
Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.
2017-01-01
A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Numerical results verify accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
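The complex-variable approach used for the DYMORE structural sensitivities rests on the complex-step derivative approximation; a minimal sketch with a toy function (not DYMORE's actual residual equations):

```python
def complex_step(f, x, h=1e-30):
    """df/dx ~= Im(f(x + i*h)) / h.  Unlike finite differences there is
    no subtractive cancellation, so the step h can be taken extremely
    small and the derivative is accurate to machine precision."""
    return f(complex(x, h)).imag / h

f = lambda z: z**3 + 2.0 * z      # toy function; analytically f'(x) = 3x^2 + 2
deriv = complex_step(f, 2.0)      # analytic value at x = 2 is 14.0
```

This is why complex-variable sensitivities serve as a trusted reference for verifying the adjoint formulations: they match analytic derivatives to roundoff without the step-size tuning that plagues real-valued finite differences.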
Research on the Diesel Engine with Sliding Mode Variable Structure Theory
NASA Astrophysics Data System (ADS)
Ma, Zhexuan; Mao, Xiaobing; Cai, Le
2018-05-01
This study constructed a nonlinear mathematical model of the diesel engine high-pressure common rail (HPCR) system through two-polynomial fitting, treating it as an affine nonlinear system. Based on sliding-mode variable structure control (SMVSC) theory, a sliding-mode controller for affine nonlinear systems was designed to control the common rail pressure and the diesel engine's rotational speed. Finally, the designed nonlinear HPCR system was simulated on the MATLAB simulation platform. The simulation results demonstrated that the sliding-mode variable structure control algorithm shows favourable control performance, overcoming the shortcomings of traditional PID control in overshoot, parameter tuning, system precision, settling time, and rise time.
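A minimal sketch of the sliding-mode switching law (the plant here is a generic double integrator with invented gains, not the paper's HPCR model):

```python
def smc_control(e, e_dot, lam=5.0, k=10.0):
    """Sliding surface s = e_dot + lam*e; switching law u = -k*sign(s).
    Once trajectories reach s = 0 they slide along it, so the error
    decays as e_dot = -lam*e regardless of matched disturbances."""
    s = e_dot + lam * e
    return -k * (1.0 if s > 0 else -1.0 if s < 0 else 0.0)

# Regulate a double integrator x'' = u from x = 1 back to the origin.
x, v, dt = 1.0, 0.0, 0.001
for _ in range(5000):
    u = smc_control(x, v)
    v += u * dt
    x += v * dt
```

The discontinuous sign term is what gives sliding-mode control its robustness, at the cost of chattering; practical designs smooth it with a boundary layer.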
NASA Astrophysics Data System (ADS)
Harris, B. J.; Sun, S. S.; Li, W. H.
2017-03-01
With the growing need for effective intercity transport, the need for more advanced rail vehicle technology has never been greater. The conflicting primary longitudinal suspension requirements of high-speed stability and curving performance limit the development of rail vehicle technology. This paper presents a novel magnetorheological fluid (MRF) based joint with variable stiffness characteristics for the purpose of overcoming this parameter conflict. Firstly, the joint design and working principle are developed. Following this, a prototype is tested on an MTS machine to characterize its variable stiffness properties under a range of conditions. Lastly, the performance of the proposed MRF rubber joint with regard to improving train stability and curving performance is numerically evaluated.
Data splitting for artificial neural networks using SOM-based stratified sampling.
May, R J; Maier, H R; Dandy, G C
2010-03-01
Data splitting is an important consideration during artificial neural network (ANN) development, where hold-out cross-validation is commonly employed to ensure generalization. Even for a moderate sample size, the sampling methodology used for data splitting can have a significant effect on the quality of the subsets used for training, testing and validating an ANN. Poor data splitting can result in inaccurate and highly variable model performance; however, the choice of sampling methodology is rarely given due consideration by ANN modellers. Increased confidence in the sampling is of paramount importance, since the hold-out sampling is generally performed only once during ANN development. This paper considers the variability in the quality of subsets that are obtained using different data splitting approaches. A novel approach to stratified sampling, based on Neyman sampling of the self-organizing map (SOM), is developed, with several guidelines identified for setting the SOM size and sample allocation in order to minimize the bias and variance in the datasets. Using an example ANN function approximation task, the SOM-based approach is evaluated in comparison with random sampling, DUPLEX, systematic stratified sampling, and trial-and-error sampling to minimize the statistical differences between data sets. Of these approaches, DUPLEX is found to provide benchmark performance, yielding good model performance with no variability. The results show that the SOM-based approach also reliably generates high-quality samples and can therefore be used with greater confidence than other approaches, especially in the case of non-uniform datasets, with the benefit of scalability to perform data splitting on large datasets. Copyright 2009 Elsevier Ltd. All rights reserved.
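The Neyman sampling underlying the SOM-based approach allocates samples to strata in proportion to stratum size times stratum standard deviation; a minimal sketch with invented strata:

```python
def neyman_allocation(n, sizes, stds):
    """Neyman optimal allocation: n_h proportional to N_h * S_h, i.e.
    larger and more variable strata receive more of the n samples."""
    weights = [N * s for N, s in zip(sizes, stds)]
    total = sum(weights)
    return [round(n * w / total) for w in weights]

# Three invented SOM-derived strata: the smallest stratum is the most
# variable, so it receives proportionally more samples.
alloc = neyman_allocation(100, sizes=[500, 300, 200], stds=[1.0, 2.0, 4.0])
```

In the paper's scheme each SOM unit acts as a stratum, so the allocation concentrates the hold-out sample where the data are most heterogeneous.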
Mechanism-Based Design for High-Temperature, High-Performance Composites. Book 3.
1997-09-01
[Garbled OCR of equations (77) and (78), defining strain measures in terms of n = e2 and ß = I − nn = e1e1 + e3e3; not recoverable.] ...the particles most susceptible to fracture are those at the larger size range of the population. Thus, with increasing standard deviation of... strength variability is associated exclusively with a single population of flaws. The second is based on comparisons of mean strengths of two or more
NASA Astrophysics Data System (ADS)
Saigo, Barbara Woodworth
The researcher collaborated with four high school biology teachers who had been involved for 2-1/2 years in a constructivism-based professional development experience that emphasized teaching for conceptual change and using classroom-based inquiry as a basis for making instructional decisions. The researcher and teachers designed a five-day instructional unit on biosystematics using two contrasting approaches, comprising the treatment variable. The "traditional" unit emphasized lecture, written materials, and some laboratory activities. The "constructivist" unit emphasized a specific, inquiry-based, conceptual change strategy and collaborative learning. The study used a quasi-experimental, factorial design to explore the impact of instructional approach (the treatment variable) on student performance (the dependent variable) on repeated measures (three) of a biology concept test. Additional independent variables considered were gender, cumulative GPA, and the section in which students were enrolled. Scores on the biology concept test were compiled for the 3 constructivist sections (N = 44) and the 3 traditional sections (N = 42), and Analysis of Covariance (ANCOVA) was applied. The main findings regarding the primary research question were that instructional approach did not have a significant relationship to immediate post-test scores or gain, but that one month after instruction students in the constructivist group demonstrated less loss of gain than those in the traditional group; i.e., their longer-term retention was greater. Also, GPA*instructional approach effects were detected for post-post-test gain. GPA and gender were significantly associated with pre-test, post-test, and post-post-test scores; however, in terms of change (gain) from pre-test to post-test and pre-test to post-post-test, GPA and gender were not significant effects. Section was a significant effect for all three tests, in terms of both score and gain.
Gender*section effects were detected for post-test gain and post-post-test scores.
NASA Astrophysics Data System (ADS)
Leijala, U.; Bjorkqvist, J. V.; Pellikka, H.; Johansson, M. M.; Kahma, K. K.
2017-12-01
Predicting the behaviour of the joint effect of sea level and wind waves is of great significance due to the major impact of flooding events in densely populated coastal regions. As mean sea level rises, the effect of sea level variations accompanied by waves will be even more harmful in the future. The main challenge when evaluating the joint effect of waves and sea level variations is that long time series of both variables rarely exist. Wave statistics are also highly location-dependent, thus requiring wave buoy measurements and/or high-resolution wave modelling. As an initial approximation of the joint effect, the variables may be treated as independent random variables to obtain the probability distribution of their sum. We present results of a case study based on three probability distributions: 1) wave run-up constructed from individual wave buoy measurements, 2) short-term sea level variability based on tide gauge data, and 3) mean sea level projections based on up-to-date regional scenarios. The wave measurements were conducted during 2012-2014 on the coast of the city of Helsinki, located in the Gulf of Finland in the Baltic Sea. The short-term sea level distribution contains the last 30 years (1986-2015) of hourly data from the Helsinki tide gauge, and the mean sea level projections are scenarios adjusted for the Gulf of Finland. Additionally, we present a sensitivity test based on six different theoretical wave height distributions representing different wave behaviour in relation to sea level variations. As these wave distributions are merged with one common sea level distribution, we can study how the different shapes of the wave height distribution affect the distribution of the sum, and which of the components dominates under different wave conditions.
As an outcome of the method, we obtain a probability distribution of the maximum elevation of the continuous water mass, providing a flexible tool for evaluating different risk levels in the current and future climate.
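Treating wave run-up and short-term sea level as independent random variables, the distribution of their sum is the convolution of the two distributions. A discrete sketch (the two PMFs below are invented, not the Helsinki measurements):

```python
def convolve_pmf(p, q):
    """PMF of X + Y for independent X and Y, where p[i] = P(X = i) and
    q[j] = P(Y = j) on a common integer grid (e.g. elevation bins)."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj   # every (i, j) pair contributes to bin i+j
    return out

run_up = [0.5, 0.5]               # invented P(run-up = 0 or 1 bins)
sea_level = [0.25, 0.5, 0.25]     # invented P(sea level = 0, 1 or 2 bins)
total = convolve_pmf(run_up, sea_level)   # P(total elevation = 0..3 bins)
```

From the summed distribution, exceedance probabilities for any flooding threshold follow by summing the tail, which is how such a tool supports risk-level evaluation.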
NASA Astrophysics Data System (ADS)
Tada, R.; Seki, A.; Ikeda, M.; Irino, T.; Ikehara, K.; Karasuda, A.; Sugisaki, S.; Sagawa, T.; Itaki, T.; Kubota, Y.; Murayama, M.; Lu, S.; Murray, R. W.; Alvarez Zarikian, C. A.
2017-12-01
It is well-known that Dansgaard-Oeschger Cycles (DOC) and the East Asian Summer Monsoon (EASM) were closely linked during the last glacial period, and that the Atlantic Meridional Overturning Circulation (AMOC) played a key role in amplifying and propagating the DOC signal. Climate model studies have also suggested that switching the AMOC on and off caused simultaneous north-south shifts of the westerly jets (WJ) in both hemispheres and of the ITCZ. Since the WJ over East Asia bounds the northern limit of the EASM front, it is likely that N-S shifts of the WJ caused millennial-scale variability in the EASM precipitation distribution. This linkage can be traced back to ca. 0.4 Ma based on comparison of the synthetic Greenland temperature record of Barker et al. (2011) with the δ18O record of Chinese speleothems, and back to 0.8 Ma based on comparison of the synthetic Greenland temperature record with the Br profile of the hemipelagic sediments of the Japan Sea (reflecting marine organic carbon content and considered a proxy of the EASM) retrieved from Site U1424 during IODP Exp. 346. The Br profile of the Japan Sea sediments also implies that millennial-scale variability of the EASM has been persistent since ca. 1.45 Ma, probably linked with AMOC variability. However, the presence or absence of millennial-scale EASM variability, and the possibility of its linkage with AMOC variability, are not known for the period before 1.45 Ma. Here we extend our Br record of Site U1424 back to ca. 3 Ma and demonstrate intermittent occurrence of millennial-scale EASM variability since ca. 2.5 Ma, when the LR04 glacial δ18O value first exceeded ca. 4 permil. This may suggest the presence of an ice-volume threshold for millennial-scale variability of the AMOC and EASM.
Comparative and Evolutionary Analyses of Meloidogyne spp. Based on Mitochondrial Genome Sequences
García, Laura Evangelina; Sánchez-Puerta, M. Virginia
2015-01-01
Molecular taxonomy and evolution of nematodes have recently been the focus of several studies. Mitochondrial sequences were proposed as an alternative for precise identification of Meloidogyne species, to study intraspecific variability, and to follow maternal lineages. We characterized the mitochondrial genomes (mtDNAs) of the root-knot nematodes M. floridensis, M. hapla and M. incognita. These were AT-rich (81–83%) and highly compact, encoding 12 proteins, 2 rRNAs, and 22 tRNAs. Comparisons with published mtDNAs of M. chitwoodi, M. incognita (another strain) and M. graminicola revealed that they share protein and rRNA gene order but differ in the order of tRNAs. The mtDNAs of M. floridensis and M. incognita were strikingly similar (97–100% identity for all coding regions). In contrast, M. floridensis, M. chitwoodi, M. hapla and M. graminicola showed 65–84% nucleotide identity for coding regions. Variable mitochondrial sequences are potentially useful for evolutionary and taxonomic studies. We developed a molecular taxonomic marker by sequencing a highly variable ~2 kb mitochondrial region, nad5-cox1, from 36 populations of root-knot nematodes to elucidate relationships within the genus Meloidogyne. Isolates of five species formed monophyletic groups and showed little intraspecific variability. We also present a thorough analysis of the mitochondrial region cox2-rrnS. Phylogenies based on either mitochondrial region had good discrimination power but could not discriminate between M. arenaria, M. incognita and M. floridensis. PMID:25799071
Gálvez, Laura; Urbaniak, Monika; Waśkiewicz, Agnieszka; Stępień, Łukasz; Palmero, Daniel
2017-10-01
Fusarium proliferatum is a fungal pathogen occurring worldwide and affecting several crops, including garlic. In Spain, it is the most frequent pathogenic fungus associated with garlic rot during storage. Moreover, F. proliferatum is an important mycotoxigenic species, producing a broad range of toxins that may pose a risk to food safety. The aim of this study was to assess the intraspecific variability of the garlic pathogen in Spain through analyses of translation elongation factor (tef-1α) and FUM1 gene sequences, as well as differences in growth rates. Phylogenetic characterization has been complemented with the characterization of mating-type alleles and of the species' potential as a toxin producer. Phylogenetic trees based on the sequences of the translation elongation factor and FUM1 genes from seventy-nine isolates from garlic revealed considerable intraspecific variability as well as a high level of diversity in growth speed. Based on the MAT alleles amplified by PCR, F. proliferatum isolates were separated into different groups on both trees. All isolates collected from garlic in Spain proved to be fumonisin B1, B2, and B3 producers. Quantitative analyses of fumonisins, beauvericin, and moniliformin (common secondary metabolites of F. proliferatum) showed no correlation with either the phylogenetic analysis or mycelial growth. This pathogen presents high intraspecific variability within the same geographical region and host, which must be considered in the management of the disease. Copyright © 2017 Elsevier Ltd. All rights reserved.
Toward a Unified View of Black-Hole High-Energy States
NASA Technical Reports Server (NTRS)
Nowak, Michael A.
1995-01-01
We present here a review of high-energy (greater than 1 keV) observations of seven black-hole candidates, six of which have estimated masses. In this review we focus on two parameters of interest: the ratio of 'nonthermal' to total luminosity as a function of the total luminosity divided by the Eddington luminosity, and the root-mean-square (rms) variability as a function of the nonthermal-to-total luminosity ratio. Below approx. 10% Eddington luminosity, the sources tend to be strictly nonthermal (the so-called 'off' and 'low' states). Above this luminosity the sources become mostly thermal (the 'high' state), with the nonthermal component increasing with luminosity (the 'very high' and 'flare' states). There are important exceptions to this behavior, however, and no steady - as opposed to transient - source has been observed over a wide range of parameter space. In addition, the rms variability is positively correlated with the ratio of nonthermal to total luminosity, although there may be a minimum level of variability associated with 'thermal' states. We discuss these results in light of theoretical models and find that currently no single model describes the full range of black-hole high-energy behavior. In fact, the observations are exactly opposite from what one expects based upon simple notions of accretion disk instabilities.
Development and Validation of a High-Quality Composite Real-World Mortality Endpoint.
Curtis, Melissa D; Griffith, Sandra D; Tucker, Melisa; Taylor, Michael D; Capra, William B; Carrigan, Gillis; Holzman, Ben; Torres, Aracelis Z; You, Paul; Arnieri, Brandon; Abernethy, Amy P
2018-05-14
To create a high-quality electronic health record (EHR)-derived mortality dataset for retrospective and prospective real-world evidence generation. Oncology EHR data, supplemented with external commercial and US Social Security Death Index data, benchmarked to the National Death Index (NDI). We developed a recent, linkable, high-quality mortality variable amalgamated from multiple data sources to supplement EHR data, benchmarked against the most complete US mortality data source, the NDI. Data quality of version 2.0 of the mortality variable is reported here. For advanced non-small-cell lung cancer, sensitivity of mortality information improved from 66 percent in EHR structured data to 91 percent in the composite dataset, with high date agreement compared to the NDI. For advanced melanoma, metastatic colorectal cancer, and metastatic breast cancer, sensitivity of the final variable was 85 to 88 percent. Kaplan-Meier survival analyses showed that improving mortality data completeness minimized overestimation of survival relative to NDI-based estimates. For EHR-derived data to yield reliable real-world evidence, they need to be of known and sufficiently high quality. Considering the impact of mortality data completeness on survival endpoints, we highlight the importance of data quality assessment and advocate benchmarking to the NDI. © 2018 The Authors. Health Services Research published by Wiley Periodicals, Inc. on behalf of Health Research and Educational Trust.
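The Kaplan-Meier survival analyses mentioned above can be illustrated with a minimal estimator (the event times and censoring flags below are invented):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimates; events: 1 = death, 0 = censored.
    Returns [(event_time, S(t))], stepping down at each death time.
    Missed deaths (recorded as censored) inflate S(t), which is why
    mortality-data completeness matters for survival endpoints."""
    pairs = sorted(zip(times, events))
    at_risk = len(pairs)
    s, curve = 1.0, []
    idx = 0
    while idx < len(pairs):
        t = pairs[idx][0]
        group = [e for tt, e in pairs if tt == t]
        deaths = sum(group)
        if deaths:
            s *= (at_risk - deaths) / at_risk   # survive this event time
            curve.append((t, s))
        at_risk -= len(group)
        idx += len(group)
    return curve

# Invented follow-up times in months; 0 marks a censored patient.
curve = kaplan_meier([2, 3, 3, 5, 8], [1, 0, 1, 1, 0])
```

Re-running this with a death misrecorded as censored shows directly how incomplete mortality capture biases the curve upward, the overestimation the authors report minimizing.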
On-chip continuous-variable quantum entanglement
NASA Astrophysics Data System (ADS)
Masada, Genta; Furusawa, Akira
2016-09-01
Entanglement is an essential feature of quantum theory and the core of the majority of quantum information science and technologies. Quantum computing is one of the most important fruits of quantum entanglement and requires not only a bipartite entangled state but also more complicated multipartite entanglement. In previous experimental work demonstrating various kinds of entanglement-based quantum information processing, light has been used extensively. Experiments utilizing such complicated states need highly complex optical circuits to propagate optical beams, and a high level of spatial interference between different light beams, to generate quantum entanglement or to efficiently perform balanced homodyne measurement. Such experiments have so far been performed in conventional free-space optics with large numbers of optical components and relatively large optical setups, and are therefore limited in stability and scalability. Integrated photonics offers new tools and additional capabilities for manipulating light in quantum information technology. Owing to integrated waveguide circuits, it is possible to stabilize and miniaturize complex optical circuits and achieve high interference of light beams. Integrated circuits were first developed for discrete-variable systems and then applied to continuous-variable systems. In this article, we review the currently developed scheme for generation and verification of continuous-variable quantum entanglement, such as Einstein-Podolsky-Rosen beams, using a photonic chip on which waveguide circuits are integrated. This includes balanced homodyne measurement of a squeezed state of light. As a simple example, we also review an experiment for generating discrete-variable quantum entanglement using integrated waveguide circuits.
Shi, Yuan; Lau, Kevin Ka-Lun; Ng, Edward
2017-08-01
Urban air quality is an important determinant of the quality of urban life. Land use regression (LUR) modelling of air quality is essential for conducting health impact assessments but is more challenging in a mountainous, high-density urban scenario due to the complexities of the urban environment. In this study, a total of 21 LUR models are developed for seven air pollutants (the gaseous pollutants CO, NO2, NOx, O3 and SO2, and the particulate pollutants PM2.5 and PM10) with reference to three different time periods (summertime, wintertime, and the annual average of 5 years of long-term hourly monitoring data from the local air quality monitoring network) in Hong Kong. For this mountainous high-density urban scenario, we improved the traditional LUR modelling method by incorporating wind availability information into LUR modelling based on surface geomorphometrical analysis. In total, 269 independent variables were examined to develop the LUR models, using the "ADDRESS" independent variable selection method and stepwise multiple linear regression (MLR). Cross-validation was performed for each resultant model. The results show that wind-related variables are included in most of the resultant models as statistically significant independent variables. Compared with the traditional method, a maximum increase of 20% was achieved in the prediction performance of the annual averaged NO2 concentration level by incorporating wind-related variables into LUR model development. Copyright © 2017 Elsevier Inc. All rights reserved.
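The first step of a stepwise selection like the one used here is to pick the candidate predictor most correlated with the measured pollutant; a minimal sketch with invented predictors (names such as road_density and wind_index are illustrative placeholders, not the study's 269 variables):

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def forward_step(y, candidates):
    """Return the candidate variable with the largest |r| against y,
    i.e. the variable a forward stepwise MLR would enter first."""
    return max(candidates, key=lambda name: abs(pearson(candidates[name], y)))

no2 = [3.0, 5.0, 7.0, 9.0]                    # invented NO2 levels at 4 sites
candidates = {
    "road_density": [1.0, 2.0, 3.0, 4.0],     # perfectly tracks no2 here
    "wind_index":   [4.0, 1.0, 3.0, 2.0],     # weakly related
}
best = forward_step(no2, candidates)
```

A full stepwise MLR repeats this on the regression residuals and applies entry/exit significance tests, but the variable-ranking core is the same.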
NASA Astrophysics Data System (ADS)
O'Mara, N. A.; Kelly, C. S.; Herbert, T.
2017-12-01
Laminated sediment cores taken from the San Lazaro Basin (SLB) (25.18N, 112.66W) located off the coast of Baja California in the subtropical eastern Pacific were geochemically analyzed for alkenone and sterol biomarkers to reconstruct sea surface temperature (SST) and marine productivity from 850-1980 CE. High sedimentation rates, low bottom water dissolved oxygen, and high marine productivity in combination with the San Lazaro Basin's location within the dynamic transition zone between the tropical and subtropical eastern Pacific, make it a prime location to study variability of tropical and subtropical modes of climate variability. This study focuses on the impacts and variability of the El Niño Southern Oscillation and the Pacific Decadal Oscillation on the subtropical eastern Pacific. SST and coccolithophore productivity (n=730) for 2 mm sections of sediment corresponding to 1 measurement every 1.8 years were reconstructed using the Uk'37 unsaturation index and C37 alkenone concentration. The high resolution of this record allowed for the analysis of variability of SST and productivity on decadal timescales. Brassicasterol concentrations were calculated for a limited number of samples (n=44) to assess diatom productivity. High spectral power was found at periods of 20-30 years in SST and productivity records indicating a strong influence of the PDO on the SLB, making this the first marine based record directly relevant to PDO reconstructions that continuously spans the last millennium. Cool and productive (warm and less productive) waters were observed in the southern California Current in the Medieval Climate Anomaly 900-1200 CE (Little Ice Age 1400-1800 CE) supporting previous reconstructions that warmer (cooler) SST are linked to both reduced (enhanced) phytoplankton productivity. 
Additionally, cool (warm) SSTs were also associated with dry (wet) conditions in the American Southwest, indicating that changes in the PDO have had a significant impact on drought in this region over the past millennium.
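The 20-30 year spectral power attributed to the PDO can be identified with a simple periodogram. A hedged sketch on a synthetic annual SST series with an imposed 25-year cycle (the record itself is at ~1.8-year resolution; annual spacing here is an assumption made for simplicity):

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(850, 1981)                 # annual resolution, as a stand-in
sst = 0.5 * np.sin(2 * np.pi * years / 25.0) + 0.2 * rng.normal(size=years.size)

# Periodogram: squared FFT magnitude at each frequency; period = 1/frequency
freqs = np.fft.rfftfreq(years.size, d=1.0)[1:]          # drop zero frequency
power = np.abs(np.fft.rfft(sst - sst.mean()))[1:] ** 2
dominant_period = 1.0 / freqs[np.argmax(power)]         # years
```

Real paleoclimate spectra would use tapering (e.g. multitaper) and red-noise significance tests rather than a raw periodogram.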
Groundwater level responses to precipitation variability in Mediterranean insular aquifers
NASA Astrophysics Data System (ADS)
Lorenzo-Lacruz, Jorge; Garcia, Celso; Morán-Tejeda, Enrique
2017-09-01
Groundwater is one of the largest and most important sources of fresh water in many regions under Mediterranean climate conditions, which are exposed to large precipitation variability that includes frequent meteorological drought episodes, and present high evapotranspiration rates and water demand during the dry season. The dependence on groundwater increases in areas with predominantly permeable lithologies, which contribute to aquifer recharge and the abundance of ephemeral streams. The increasing pressure of tourism on water resources in many Mediterranean coastal areas, and uncertainty related to future precipitation and water availability, make it urgent to understand the spatio-temporal response of groundwater bodies to precipitation variability if sustainable use of the resource is to be achieved. We present an assessment of the response of aquifers to precipitation variability based on correlations between the Standardized Precipitation Index (SPI) at various time scales and the Standardized Groundwater Index (SGI) across a Mediterranean island. We detected three main responses of aquifers to accumulated precipitation anomalies: (i) at short time scales of the SPI (<6 months); (ii) at medium time scales (6-24 months); and (iii) at long time scales (>24 months). The differing responses were mainly explained by differences in lithology and the percentage of highly permeable rock strata in the aquifer recharge areas. We also identified differences in the months and seasons when aquifer storages are more dependent on precipitation; these were related to climate seasonality and the degree of aquifer exploitation or underground water extraction. The recharge of some aquifers, especially in mountainous areas, is related to precipitation variability within a limited spatial extent, whereas for aquifers located in the plains, precipitation variability influences much larger areas; the topography and geological structure of the island explain these differences. 
Results indicate large spatial variability in the response of aquifers to precipitation in a very small area, highlighting the importance of having high spatial resolution hydro-climatic databases available to enable full understanding of the effects of climate variability on scarce water resources.
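The SPI-SGI correlation analysis at multiple accumulation scales can be sketched as follows. Synthetic monthly data stand in for observations, and the "aquifer" is assumed to integrate 12 months of precipitation, so the 12-month accumulation should correlate best:

```python
import numpy as np

def standardize(x):
    return (x - x.mean()) / x.std()

def accumulate(x, k):
    """k-month backward accumulation, standardized (a simplified SPI-k)."""
    return standardize(np.convolve(x, np.ones(k), mode="valid"))

rng = np.random.default_rng(2)
precip = rng.normal(size=600)     # 50 years of synthetic monthly anomalies

# Synthetic SGI: an aquifer integrating ~12 months of recharge
sgi = accumulate(precip, 12)

# Correlate the SGI with SPI at several accumulation scales,
# aligning both series on their final months
corrs = {}
for k in [3, 6, 12, 24]:
    spi_k = accumulate(precip, k)
    L = min(len(spi_k), len(sgi))
    corrs[k] = np.corrcoef(spi_k[-L:], sgi[-L:])[0, 1]
best_scale = max(corrs, key=lambda k: abs(corrs[k]))
```

In practice the SPI and SGI are fitted to gamma/nonparametric distributions before standardization; plain z-scores are used here only to keep the sketch short.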
Arbitrary-step randomly delayed robust filter with application to boost phase tracking
NASA Astrophysics Data System (ADS)
Qin, Wutao; Wang, Xiaogang; Bai, Yuliang; Cui, Naigang
2018-04-01
Conventional filters such as the extended Kalman filter, unscented Kalman filter and cubature Kalman filter assume that the measurement is available in real time and that the measurement noise is Gaussian white noise. In practice, both assumptions are invalid. To solve this problem, a novel algorithm is proposed in four steps. First, the measurement model is modified with Bernoulli random variables to describe the random delay. Then, the expressions for the predicted measurement and covariance are reformulated, removing the restriction that the maximum delay must be one or two steps and the assumption that the probabilities of the Bernoulli random variables taking the value one are equal. Next, the arbitrary-step randomly delayed high-degree cubature Kalman filter is derived based on the 5th-degree spherical-radial rule and the reformulated expressions. Finally, this filter is modified into the arbitrary-step randomly delayed high-degree cubature Huber-based filter using the Huber technique, which is essentially an M-estimator. The proposed filter is therefore robust not only to randomly delayed measurements but also to glint noise. Application to a boost-phase tracking example demonstrates the superiority of the proposed algorithms.
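The two key ingredients, Bernoulli-modelled random delay and the Huber influence function, can be illustrated in isolation. This toy restricts the delay to one step (the paper handles arbitrary-step delays) and uses a scalar random walk; the delay probability and Huber threshold are illustrative values:

```python
import numpy as np

def huber_psi(r, delta=1.345):
    """Huber influence function: identity near zero, clipped in the tails.
    Bounding large residuals limits the effect of heavy-tailed ("glint")
    measurement noise on the filter update."""
    return np.clip(r, -delta, delta)

rng = np.random.default_rng(3)
p_delay = 0.3                                # assumed one-step delay probability
x = np.cumsum(rng.normal(size=100))          # latent state (random walk)
gamma = rng.random(100) < p_delay            # Bernoulli delay indicators
# Delayed measurement model: z_k observes x_{k-1} with probability p_delay
z = np.where(gamma, np.roll(x, 1), x) + 0.1 * rng.normal(size=100)
z[0] = x[0] + 0.1 * rng.normal()             # no earlier sample exists at k = 0
```

In the full filter, the expectation over the Bernoulli indicators enters the predicted measurement and its covariance, and the Huber weights reshape the measurement update.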
NASA Astrophysics Data System (ADS)
Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong
2016-03-01
Table look-up plays a very important role in the decoding process of context-based adaptive variable length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-ups result in heavy table memory access, which in turn leads to high table power consumption. To reduce the memory access of current methods, and hence their power consumption, a memory-efficient table look-up algorithm is presented for CAVLD. The contribution of this paper is the introduction of an index search technique that reduces memory access during table look-up, and thereby table power consumption. Specifically, our scheme uses index search to reduce the searching and matching operations for code_word by exploiting the internal relationship among the length of the zero run in code_prefix, the value of code_suffix and code_length, thus saving table look-up power. Experimental results show that the proposed index-search-based table look-up algorithm reduces memory access by about 60% compared with a sequential-search scheme, saving considerable power for CAVLD in H.264/AVC.
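The index-search idea, computing a table index directly from the zero-run length instead of sequentially matching codewords, is the same trick used for Exp-Golomb codes in H.264. A sketch on Exp-Golomb (the actual CAVLC tables are context-dependent and more intricate):

```python
def decode_exp_golomb(bits):
    """Decode one Exp-Golomb codeword from a bit list via direct index
    computation: count leading zeros, read the suffix, compute the index.
    No table row is ever scanned or compared."""
    n = 0
    i = 0
    while bits[i] == 0:          # length of the zero run in the prefix
        n += 1
        i += 1
    i += 1                       # skip the terminating 1 bit
    suffix = 0
    for b in bits[i:i + n]:      # n suffix bits
        suffix = (suffix << 1) | b
    return (1 << n) - 1 + suffix, i + n   # (symbol index, bits consumed)
```

For example, "1" decodes to index 0, "010" to 1, and "00101" to 4, each in time proportional to the codeword length rather than the table size.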
Validation of public health competencies and impact variables for low- and middle-income countries.
Zwanikken, Prisca Ac; Alexander, Lucy; Huong, Nguyen Thanh; Qian, Xu; Valladares, Laura Magana; Mohamed, Nazar A; Ying, Xiao Hua; Gonzalez-Robledo, Maria Cecilia; Linh, Le Cu; Wadidi, Marwa Se Abuzaid; Tahir, Hanan; Neupane, Sunisha; Scherpbier, Albert
2014-01-20
The number of Master of Public Health (MPH) programmes in low- and middle-income countries (LMICs) is increasing, but questions have been raised regarding the relevance of their outcomes and impacts on context. Although processes for validating public health competencies have taken place in recent years in many high-income countries, validation in LMICs is needed. Furthermore, impact variables of MPH programmes in the workplace and in society have not been developed. A set of public health competencies and impact variables in the workplace and in society was designed using the competencies and learning objectives of six participating institutions offering MPH programmes in or for LMICs, and the set of competencies of the Council on Linkages Between Academia and Public Health Practice as a reference. The resulting competencies and impact variables differ from those of the Council on Linkages in scope and emphasis on social determinants of health, context specificity and intersectoral competencies. A modified Delphi method was used in this study to validate the public health competencies and impact variables; experts and MPH alumni from China, Vietnam, South Africa, Sudan, Mexico and the Netherlands reviewed them and made recommendations. The competencies and variables were validated across two Delphi rounds, first with public health experts (N = 31) from the six countries, then with MPH alumni (N = 30). After the first expert round, competencies and impact variables were refined based on the quantitative results and qualitative comments. Both rounds showed high consensus, more so for the competencies than the impact variables. The response rate was 100%. This is the first time that public health competencies have been validated in LMICs across continents. It is also the first time that impact variables of MPH programmes have been proposed and validated in LMICs across continents. 
The high degree of consensus between experts and alumni suggests that these public health competencies and impact variables can be used to design and evaluate MPH programmes, as well as for individual and team assessment and continuous professional development in LMICs.
NASA Astrophysics Data System (ADS)
Lucas, P. W.; Smith, L. C.; Contreras Peña, C.; Froebrich, D.; Drew, J. E.; Kumar, M. S. N.; Borissova, J.; Minniti, D.; Kurtev, R.; Monguió, M.
2017-12-01
We present a catalogue of 618 high-amplitude infrared variable stars (1 < ΔK < 5 mag) detected between the two widely separated epochs of 2.2 μm data in the UKIDSS Galactic plane survey, from searches covering ∼1470 deg2. Most were discovered by a search of all fields at 30 < l < 230°. Sources include new dusty Mira variables, three new cataclysmic variable candidates, a blazar and a peculiar source that may be an interacting binary system. However, ∼60 per cent are young stellar objects (YSOs), based on spatial association with star-forming regions at distances ranging from 300 pc to over 10 kpc. This confirms our initial result in Contreras Peña et al. (Paper I) that YSOs dominate the high-amplitude infrared variable sky in the Galactic disc. It is also supported by recently published VISTA Variables in the Via Lactea (VVV) results at 295 < l < 350°. The spectral energy distributions of the YSOs indicate class I or flat-spectrum systems in most cases, as in the VVV sample. A large number of variable YSOs are associated with the Cygnus X complex and other groups are associated with the North America/Pelican nebula, the Gemini OB1 molecular cloud, the Rosette complex, the Cone nebula, the W51 star-forming region and the S86 and S236 H II regions. Most of the YSO variability is likely due to variable/episodic accretion on time-scales of years, albeit usually less extreme than classical FUors and EXors. Luminosities at the 2010 Wide-field Infrared Survey Explorer epoch range from ∼0.1 to 10^3 L⊙ but only rarely exceed 10^2.5 L⊙.
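The two-epoch high-amplitude selection (1 < ΔK < 5 mag) reduces to a simple magnitude-difference cut. A sketch on synthetic K-band magnitudes (the distribution parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
k_epoch1 = rng.uniform(11.0, 16.0, size=1000)          # synthetic K magnitudes
k_epoch2 = k_epoch1 + rng.normal(0.0, 1.2, size=1000)  # second-epoch magnitudes

delta_k = np.abs(k_epoch2 - k_epoch1)
# The survey's cut: 1 < ΔK < 5 mag (the upper bound rejects artefacts
# and mismatched sources)
high_amplitude = (delta_k > 1.0) & (delta_k < 5.0)
candidates = np.flatnonzero(high_amplitude)
```

A real pipeline would also cross-match positions between epochs and reject blended or saturated detections before applying the amplitude cut.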
Differing Roles of Functional Movement Variability as Experience Increases in Gymnastics
Busquets, Albert; Marina, Michel; Davids, Keith; Angulo-Barroso, Rosa
2016-01-01
Current theories, like Ecological Dynamics, propose that inter-trial movement variability is functional when acquiring or refining movement coordination. Here, we examined how age-based experience levels of gymnasts constrained differences in emergent movement pattern variability during task performance. Specifically, we investigated different roles of movement pattern variability when gymnasts in different age groups performed longswings on a high bar, capturing the range of experience from beginner to advanced status. We also investigated the functionality of the relationships between levels of inter-trial variability and longswing amplitude during performance. One-hundred and thirteen male gymnasts in five age groups were observed performing longswings (with three different experience levels: beginners, intermediates and advanced performers). Performance was evaluated by analysis of key events in coordination of longswing focused on the arm-trunk and trunk-thigh segmental relations. Results revealed that 10 of 18 inter-trial variability measures changed significantly as a function of increasing task experience. Four of ten variability measures conformed to a U-shaped function with age implying exploratory strategies amongst beginners and functional adaptive variability amongst advanced performers. Inter-trial variability of arm-trunk coordination variables (6 of 10) conformed to a \\-shaped curve, as values were reduced to complete the longswings. Changes in coordination variability from beginner to intermediate status were largely restrictive, with only one variability measure related to exploration. Data revealed how inter-trial movement variability in gymnastics, relative to performance outcomes, needs careful interpretation, implying different roles as task experience changes. 
Key points Inter-trial variability while performing longswings on a high bar was assessed in a large sample (113 participants) divided into five age groups (from beginners to advanced gymnasts). Longswing assessment allowed us to evaluate inter-trial variability in a representative performance context. Coordination variability presented two different configurations across experience levels depending on the variable of interest: either a U-shaped or an L- or \\-shaped graph. Increased inter-trial variability of the functional phase events offered flexibility to adapt the longswing performance in the advanced gymnasts, while decreasing variability in arm-trunk coordination modes was critical to improve the longswing and to achieve the most advanced level. In addition, the relationship between variability measures and the global performance outcome (i.e. the swing amplitude) revealed different functional roles of movement variability (exploratory or restrictive) as a function of changes in experience levels. PMID:27274664
Capacitance-Based Dosimetry of Co-60 Radiation using Fully-Depleted Silicon-on-Insulator Devices
Li, Yulong; Porter, Warren M.; Ma, Rui; Reynolds, Margaret A.; Gerbi, Bruce J.; Koester, Steven J.
2015-01-01
Capacitance-based sensing of fully-depleted silicon-on-insulator (FDSOI) variable capacitors for Co-60 gamma radiation is investigated. A linear response of the capacitance is observed for radiation doses up to 64 Gy, while the percent capacitance change per unit dose is as high as 0.24 %/Gy. An analytical model is developed to study the operational principles of the varactors, and the maximum sensitivity as a function of frequency is determined. The results show that FDSOI varactor dosimeters have the potential for extremely high sensitivity as well as high-frequency operation in applications such as wireless radiation sensing. PMID:27840451
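The reported linear response and 0.24 %/Gy sensitivity correspond to a straight-line fit of capacitance against dose. A sketch with synthetic noiseless data built from those numbers (the 1 pF baseline is an assumed value, not from the paper):

```python
import numpy as np

# Synthetic dose-response with the abstract's 0.24 %/Gy fractional
# capacitance change, linear up to 64 Gy
dose = np.linspace(0.0, 64.0, 9)          # Gy
c0 = 1.0                                   # pF, assumed baseline capacitance
capacitance = c0 * (1.0 + 0.0024 * dose)   # 0.24 %/Gy

# Recover the sensitivity from a linear fit
slope, intercept = np.polyfit(dose, capacitance, 1)
sensitivity_pct_per_gy = 100.0 * slope / intercept
```

With real measurements, the residuals of this fit are what establish the "linear up to 64 Gy" claim.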
High performance reconciliation for continuous-variable quantum key distribution with LDPC code
NASA Astrophysics Data System (ADS)
Lin, Dakai; Huang, Duan; Huang, Peng; Peng, Jinye; Zeng, Guihua
2015-03-01
Reconciliation is a significant procedure in a continuous-variable quantum key distribution (CV-QKD) system. It is employed to extract a secure secret key from the string that results from transmission over the quantum channel between two users. However, the efficiency and speed of previous reconciliation algorithms are low. These problems limit the secure communication distance and the secure key rate of CV-QKD systems. In this paper, we propose a high-speed reconciliation algorithm employing a well-structured decoding scheme based on low-density parity-check (LDPC) codes. The complexity of the proposed algorithm is significantly reduced. By using a graphics processing unit (GPU), our method may reach a reconciliation speed of 25 Mb/s for a CV-QKD system, which is currently the highest level and paves the way to high-speed CV-QKD.
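Hard-decision bit-flipping conveys the structure of parity-check-based error correction in the reconciliation step. This sketch uses the (7,4) Hamming parity-check matrix as a small stand-in; real CV-QKD reconciliation uses long LDPC codes with soft-decision belief propagation, parallelized on the GPU.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code, standing in for an
# LDPC code's sparse H. A word w is a codeword iff H @ w mod 2 == 0.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def bit_flip_decode(word, max_iters=10):
    """Gallager-style bit flipping: repeatedly flip the bit involved
    in the most unsatisfied parity checks until the syndrome clears."""
    word = word.copy()
    for _ in range(max_iters):
        syndrome = H @ word % 2
        if not syndrome.any():
            break
        votes = syndrome @ H          # unsatisfied-check count per bit
        word[np.argmax(votes)] ^= 1   # flip the worst offender
    return word

received = np.zeros(7, dtype=int)
received[6] = 1                       # single bit error on the last bit
decoded = bit_flip_decode(received)
```

Here the error touches all three checks, so the decoder flips it back in one iteration and recovers the all-zeros codeword.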
Factors predicting recall of mathematics terms by deaf students: implications for teaching.
Lang, Harry; Pagliaro, Claudia
2007-01-01
In this study of deaf high school students, imagery and familiarity were found to be the best predictors of geometry word recall, whereas neither concreteness nor signability of the terms was a significant predictor variable. Recall of high imagery terms was significantly better than for low imagery terms, and the same result was found for high- over low-familiarity and signability. Concrete terms were recalled significantly better than abstract terms. Geometry terms that could be represented with single signs were recalled significantly better than those that are usually fingerspelled or those represented by compound signs. Teachers with degrees and/or certification in mathematics had significantly higher self-ratings for the strongest predictor variables, imagery (visualization), and familiarity, as compared with those without such formal training. Based on these findings, implications for mathematics instruction, teacher education, and research are provided.
Hydrological Dynamics of Central America: Time-of-Emergence of the Global Warming Signal
NASA Astrophysics Data System (ADS)
Imbach, P. A.; Georgiou, S.; Calderer, L.; Coto, A.; Nakaegawa, T.; Chou, S. C.; Lyra, A. A.; Hidalgo, H. G.; Ciais, P.
2016-12-01
Central America is among the world's most vulnerable regions to climate variability and change. Country economies are highly dependent on the agricultural sector, and over 40 million people's rural livelihoods directly depend on the use of natural resources. Future climate scenarios show a drier outlook (higher temperatures and lower precipitation) over a region where rural livelihoods are already compromised by water availability and climate variability. Previous efforts to validate modelling of the regional hydrology have been based on high-resolution (1 km2) equilibrium models (Imbach et al., 2010) or dynamic models (Variable Infiltration Capacity) with coarse climate forcing (0.5°) (Hidalgo et al., 2013; Maurer et al., 2009). We present here: (i) validation of the hydrological outputs from high-resolution simulations (10 km2) of a dynamic vegetation model (ORCHIDEE), using 7 different sets of model input forcing data, against monthly runoff observations from 182 catchments across Central America; (ii) the first assessments of the region's hydrological variability using the historical simulations; and (iii) an estimation of the time of emergence of the climate change signal (under the SRES emission scenarios) on the water balance. We found model performance to be comparable with that from studies in other world regions (Yang et al. 2016) when forced with high-resolution precipitation data (monthly values at 5 km2, Funk et al. (2015)) and the Climatic Research Unit (CRU 3.2, Harris et al. (2014)) dataset of meteorological parameters. Validation results showed a Pearson correlation coefficient ≈ 0.6, a general underestimation of runoff of ≈ 60% and variability close to observed values (ratio of standard deviations of ≈ 0.7). Maps of historical runoff are presented to show areas where high runoff variability follows high mean annual runoff, with opposite trends over the Caribbean. 
Future scenarios show large areas where future maximum water availability will always fall below minus-one standard deviation of the historical values by mid-century. Additionally, our results highlight the time horizon left to develop adaptation strategies to cope with future reductions in water availability.
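The time-of-emergence idea, the first time the signal leaves the envelope of historical variability, can be sketched on synthetic runoff: a stationary historical period, an imposed drying trend, and a mean-minus-one-standard-deviation threshold matching the abstract's criterion. All numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
years = np.arange(1961, 2101)
baseline = years < 2006                     # assumed historical period

# Synthetic water availability: stationary history, then a drying trend
runoff = rng.normal(0.0, 1.0, size=years.size)
runoff[~baseline] -= 0.05 * (years[~baseline] - 2005)

# Emergence threshold from the historical distribution
mu, sigma = runoff[baseline].mean(), runoff[baseline].std()
threshold = mu - sigma

# Smooth to suppress year-to-year noise, then find the first future
# year the smoothed series drops below the threshold
smooth = np.convolve(runoff, np.ones(11) / 11, mode="same")
below = smooth < threshold
emergence_year = years[np.argmax(below & (years >= 2006))]
```

Stricter definitions require the signal to stay below the threshold permanently; the first-crossing version shown here is the simplest variant.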
Discrete-Choice Modeling Of Non-Working Women’s Trip-Chaining Activity Based
NASA Astrophysics Data System (ADS)
Hayati, Amelia; Pradono; Purboyo, Heru; Maryati, Sri
2018-05-01
The urban development of technology and economics is now changing the lifestyles of urban societies, and with it the travel demand needed to meet their movement needs. Nowadays, urban women, especially in Bandung, West Java, have a high and growing demand for daily travel. They have easy access to personal modes of transportation and the freedom to go anywhere to meet their personal and family needs. This also applies to non-working women, or housewives, in the city of Bandung. More than 50% of women's mobility takes place outside the home in the form of trip-chaining, from leaving to returning home in one day, reflecting the complex activities they undertake to meet the needs of family and home care. By contrast, less than 60% of men's mobility is outdoors, and it consists of simple trip-chaining or only a single trip. Trip-chaining thus differs significantly between non-working women and working men. This illustrates the mobility patterns of mothers and fathers in a family under an activity-based approach for the same purpose, i.e. family welfare. This study explains how complex the trip-chaining of non-working urban women and housewives is, using an activity-based approach applied to activities done outdoors in a week. Socio-economic and household demographic variables serve as the basis for measuring the independent variables affecting family welfare, together with the type, time and duration of activities performed by unemployed housewives. The study aims to examine the interrelationships between activity variables, especially the time of activity and travel, and socio-economic household variables that can generate the complexity of women's daily travel. Discrete choice modeling, as developed by Ben-Akiva, Chandra Bhat and others, is used to illustrate the relationship between activity and socio-economic demographic variables, based on primary survey data for 466 unemployed housewives in Bandung, West Java. 
The regression results, obtained with the Seemingly Unrelated Regression approach, showed the interrelationships among all variables, including the complexity of the trip-chaining of housewives based on their daily activities. The types of mandatory and discretionary activities, and the duration of activities performed within the series of trip chains conducted, are directed toward the fulfillment of the welfare of all family members.
Variability of attention processes in ADHD: observations from the classroom.
Rapport, Mark D; Kofler, Michael J; Alderson, R Matt; Timko, Thomas M; Dupaul, George J
2009-05-01
Classroom- and laboratory-based efforts to study the attentional problems of children with ADHD have been incongruent in elucidating attentional deficits, and none has explored within- or between-minute variability in classroom attentional processing in children with ADHD. High- and low-attention groups of children with ADHD, defined via cluster analysis, and 36 typically developing children were observed while completing academic assignments in their general education classrooms. All children oscillated between attentive and inattentive states; however, children in both ADHD groups switched states more frequently and remained attentive for shorter durations relative to typically developing children. Overall differences in attention and optimal ability to maintain attention among the groups are consistent with laboratory studies of increased ADHD-related interindividual and intergroup variability, but inconsistent with laboratory results of increased intra-individual variability and attention decrements over time.
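The two observational quantities, state-switch frequency and attentive-run duration, are easy to compute from a coded observation sequence. A sketch with made-up interval codings (1 = attentive), purely illustrative of the analysis:

```python
import numpy as np

def attention_stats(states):
    """states: binary sequence per observation interval (1 = attentive).
    Returns (number of state switches, mean attentive run length)."""
    states = np.asarray(states)
    switches = int(np.sum(states[1:] != states[:-1]))
    runs, run = [], 0                   # lengths of consecutive attentive runs
    for s in states:
        if s:
            run += 1
        elif run:
            runs.append(run)
            run = 0
    if run:
        runs.append(run)
    mean_run = float(np.mean(runs)) if runs else 0.0
    return switches, mean_run

typical = [1, 1, 1, 1, 0, 1, 1, 1, 1, 1]   # few switches, long attentive runs
adhd =    [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]   # frequent switches, short runs
```

The made-up sequences reproduce the qualitative finding: more switches and shorter attentive runs in the ADHD-like pattern.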
Climate change. Six centuries of variability and extremes in a coupled marine-terrestrial ecosystem.
Black, Bryan A; Sydeman, William J; Frank, David C; Griffin, Daniel; Stahle, David W; García-Reyes, Marisol; Rykaczewski, Ryan R; Bograd, Steven J; Peterson, William T
2014-09-19
Reported trends in the mean and variability of coastal upwelling in eastern boundary currents have raised concerns about the future of these highly productive and biodiverse marine ecosystems. However, the instrumental records on which these estimates are based are insufficiently long to determine whether such trends exceed preindustrial limits. In the California Current, a 576-year reconstruction of climate variables associated with winter upwelling indicates that variability increased over the latter 20th century to levels equaled only twice during the past 600 years. This modern trend in variance may be unique, because it appears to be driven by an unprecedented succession of extreme, downwelling-favorable, winter climate conditions that profoundly reduce productivity for marine predators of commercial and conservation interest. Copyright © 2014, American Association for the Advancement of Science.
Correlative and multivariate analysis of increased radon concentration in underground laboratory.
Maletić, Dimitrije M; Udovičić, Vladimir I; Banjanac, Radomir M; Joković, Dejan R; Dragić, Aleksandar L; Veselinović, Nikola B; Filipović, Jelena
2014-11-01
The results of an analysis, using correlative and multivariate methods as developed for data analysis in high-energy physics and implemented in the Toolkit for Multivariate Analysis software package, of the relations between variations of increased radon concentration and climate variables in a shallow underground laboratory are presented. Multivariate regression analysis identified a number of multivariate methods which can give a good evaluation of increased radon concentrations based on climate variables. The use of the multivariate regression methods will enable investigation of the relation of a specific climate variable with increased radon concentrations by analysis of regression methods, resulting in a 'mapped' underlying functional behaviour of radon concentrations depending on a wide spectrum of climate variables. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
[Influence of road on breeding habitat of Nipponia nippon based on MaxEnt model].
Zhang, Hui; Gao, Ji Xi; Ma, Meng Xiao; Shao, Fang Ze; Wang, Qiao; Li, Guang Yu; Qiu, Jie; Zhou, Ke Xin
2017-04-18
Quantitative study of the effects of roads on the suitable breeding habitats of wildlife is one of the topics that needs in-depth research in road ecology. The crested ibis (Nipponia nippon), a first-class nationally protected bird species, is the species of interest in this research. Using the Maximum Entropy Model (MaxEnt) in the Species Distribution Model (SDM) toolbox of ArcGIS, autocorrelation of environmental variables was analyzed and variables with r>0.8 were removed. Ten environmental variables were chosen as impact factors for the breeding habitat of the crested ibis, including mean temperature of the coldest quarter, landscape type, normalized difference vegetation index (NDVI), slope, aspect, distance to waters, distance to paddy field, distance to high-grade roads (expressway, national way, provincial way), and distance to low-grade roads (country road). Analysis of the contribution rate of each environmental variable showed that the mean temperature of the coldest quarter, landscape type, distance to paddy field, and distance to high-grade roads were the main factors determining the breeding habitat of the crested ibis. The suitable distribution of the crested ibis' nesting area was modeled under the following scenarios: all roads present (scenario 1), high-grade roads absent (scenario 2), and low-grade roads absent (scenario 3). The results showed that the presence of roads affected suitable nesting areas, with high-grade roads showing a larger influence than low-grade roads. The presence of high-grade roads and low-grade roads decreased the suitable nesting areas of the crested ibis by 66.23 and 35.69 km2, respectively. The crested ibis preferred to nest in areas distant from high-grade roads, with an average road avoidance distance of 1500 m. This study is of great significance for formulating management measures to protect the crested ibis and provides a reference for quantitative assessment of the impacts of engineering and construction projects on wildlife.
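The pre-screening step, dropping environmental variables with pairwise |r| > 0.8 before fitting MaxEnt, can be sketched as a greedy correlation filter. Variable names and data here are synthetic placeholders:

```python
import numpy as np

def drop_correlated(X, names, r_max=0.8):
    """Keep variables in order, dropping any whose absolute correlation
    with an already-kept variable exceeds r_max (the MaxEnt pre-screen)."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= r_max for k in keep):
            keep.append(j)
    return [names[j] for j in keep]

rng = np.random.default_rng(6)
temp = rng.normal(size=300)                          # e.g. coldest-quarter temp
elev = -0.95 * temp + 0.05 * rng.normal(size=300)    # near-duplicate of temp
ndvi = rng.normal(size=300)                          # independent variable
X = np.column_stack([temp, elev, ndvi])
kept = drop_correlated(X, ["temp_coldest_q", "elevation", "ndvi"])
```

Which member of a correlated pair to keep is a modelling choice; ecological relevance, not list order, usually decides it in practice.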
Providing written language services in the schools: the time is now.
Fallon, Karen A; Katz, Lauren A
2011-01-01
The current study was conducted to investigate the provision of written language services by school-based speech-language pathologists (SLPs). Specifically, the study examined SLPs' knowledge, attitudes, and collaborative practices in the area of written language services as well as the variables that impact provision of these services. Public school-based SLPs from across the country were solicited for participation in an online, Web-based survey. Data from 645 full-time SLPs from 49 states were evaluated using descriptive statistics and logistic regression. Many school-based SLPs reported not providing any services in the area of written language to students with written language weaknesses. Knowledge, attitudes, and collaborative practices were mixed. A logistic regression revealed three variables likely to predict high levels of service provision in the area of written language. Data from the current study revealed that many struggling readers and writers on school-based SLPs' caseloads are not receiving services from their SLPs. Implications for SLPs' preservice preparation, continuing education, and doctoral preparation are discussed.
ERIC Educational Resources Information Center
Foster, Geraldine R. K.; Tickle, Martin
2013-01-01
Background and objective: Some districts in the United Kingdom (UK), where the level of child dental caries is high and water fluoridation has not been possible, implement school-based fluoridated milk (FM) schemes. However, process variables, such as consent to drink FM and loss of children as they mature, impede the effectiveness of these…
Case Study Projects for College Mathematics Courses Based on a Particular Function of Two Variables
ERIC Educational Resources Information Center
Shi, Y.
2007-01-01
Based on a sequence of number pairs, a recent paper (Mauch, E. and Shi, Y., 2005, Using a sequence of number pairs as an example in teaching mathematics, "Mathematics and Computer Education," 39(3), 198-205) presented some interesting examples that can be used in teaching high school and college mathematics classes such as algebra, geometry,…
Jackson, B Scott
2004-10-01
Many different types of integrate-and-fire models have been designed in order to explain how it is possible for a cortical neuron to integrate over many independent inputs while still producing highly variable spike trains. Within this context, the variability of spike trains has been almost exclusively measured using the coefficient of variation of interspike intervals. However, another important statistical property that has been found in cortical spike trains and is closely associated with their high firing variability is long-range dependence. We investigate the conditions, if any, under which such models produce output spike trains with both interspike-interval variability and long-range dependence similar to those that have previously been measured from actual cortical neurons. We first show analytically that a large class of high-variability integrate-and-fire models is incapable of producing such outputs, based on the fact that their output spike trains are always mathematically equivalent to renewal processes. This class of models subsumes a majority of previously published models, including those that use excitation-inhibition balance, correlated inputs, partial reset, or nonlinear leakage to produce outputs with high variability. Next, we study integrate-and-fire models that have (non-Poissonian) renewal point process inputs instead of the Poisson point process inputs used in the preceding class of models. The confluence of our analytical and simulation results implies that the renewal-input model is capable of producing high variability and long-range dependence comparable to that seen in spike trains recorded from cortical neurons, but only if the interspike intervals of the inputs have infinite variance, a physiologically unrealistic condition. Finally, we suggest a new integrate-and-fire model that does not suffer any of the previously mentioned shortcomings. 
By analyzing simulation results for this model, we show that it is capable of producing output spike trains with interspike-interval variability and long-range dependence that match empirical data from cortical spike trains. This model is similar to the other models in this study, except that its inputs are fractional-Gaussian-noise-driven Poisson processes rather than renewal point processes. In addition to this model's success in producing realistic output spike trains, its inputs have long-range dependence similar to that found in most subcortical neurons in sensory pathways, including the inputs to cortex. Analysis of output spike trains from simulations of this model also shows that a tight balance between the amounts of excitation and inhibition at the inputs to cortical neurons is not necessary for high interspike-interval variability at their outputs. Furthermore, in our analysis of this model, we show that the superposition of many fractional-Gaussian-noise-driven Poisson processes does not approximate a Poisson process, which challenges the common assumption that the total effect of a large number of inputs on a neuron is well represented by a Poisson process.
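The interspike-interval (ISI) coefficient of variation discussed above is straightforward to estimate from a simulated spike train. The following is a minimal illustrative sketch, not any of the models in the study: a leaky integrate-and-fire neuron driven by a single Poisson input stream, with all parameter values chosen arbitrarily for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def lif_isi_cv(rate_hz=1000.0, dt=1e-4, t_max=20.0,
               tau=0.02, w=0.6, theta=10.0):
    """Simulate a leaky integrate-and-fire neuron driven by one Poisson
    input stream and return the CV of its interspike intervals."""
    n_steps = int(t_max / dt)
    v = 0.0
    spikes = []
    p = rate_hz * dt              # probability of an input event per step
    for i in range(n_steps):
        v += -v / tau * dt        # leak
        if rng.random() < p:      # Poisson input event arrives
            v += w                # excitatory increment
        if v >= theta:            # threshold crossing -> output spike
            spikes.append(i * dt)
            v = 0.0               # full reset
    isi = np.diff(spikes)
    return isi.std() / isi.mean()
```

Because the neuron integrates roughly theta/w ≈ 17 inputs per output spike, the CV comes out well below 1, illustrating the low-variability problem that the model classes described in the abstract are designed to escape.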
NASA Technical Reports Server (NTRS)
Molnar, Gyula; Susskind, Joel
2008-01-01
The AIRS instrument is currently the best space-based tool to simultaneously monitor the vertical distribution of key climatically important atmospheric parameters as well as surface properties, and has provided high quality data for more than 5 years. AIRS analysis results produced at the GODDARD/DAAC, based on Versions 4 & 5 of the AIRS retrieval algorithm, are currently available for public use. Here, first we present an assessment of interrelationships of anomalies (proxies of climate variability based on 5 full years, since Sept. 2002) of various climate parameters at different spatial scales. We also present AIRS-retrievals-based global, regional and 1x1 degree grid-scale "trend"-analyses of important atmospheric parameters for this 5-year period. Note that here "trend" simply means the linear fit to the anomaly (relative to the mean seasonal cycle) time series of various parameters at the above-mentioned spatial scales, and we present these to illustrate the usefulness of continuing AIRS-based climate observations. Preliminary validation efforts, in terms of intercomparisons of interannual variabilities with other available satellite data analysis results, will also be addressed. For example, we show that the outgoing longwave radiation (OLR) interannual spatial variabilities from the available state-of-the-art CERES measurements and from the AIRS computations are in remarkably good agreement. Version 6 of the AIRS retrieval scheme (currently under development) promises to further improve bias agreements for the absolute values by implementing a more accurate radiative transfer model for the OLR computations and by improving surface emissivity retrievals.
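The "trend" defined above, a linear fit to the anomaly time series after removing the mean seasonal cycle, can be sketched generically. This is an illustrative routine for monthly data, not the AIRS processing code:

```python
import numpy as np

def monthly_anomaly_trend(series, months):
    """Remove the mean seasonal cycle from a monthly time series and fit
    a linear trend (units per year) to the resulting anomalies.
    `series`: monthly values; `months`: calendar month (1-12) of each sample."""
    series = np.asarray(series, float)
    months = np.asarray(months)
    # Climatology: the mean value for each calendar month
    clim = np.array([series[months == m].mean() for m in range(1, 13)])
    anom = series - clim[months - 1]          # anomaly vs. seasonal cycle
    t_years = np.arange(series.size) / 12.0   # time axis in years
    slope, intercept = np.polyfit(t_years, anom, 1)
    return anom, slope
```

Applied to a synthetic series with a known seasonal cycle plus a 0.5-per-year drift, the fitted slope recovers the imposed trend to within the distortion introduced by removing the climatology from a short record.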
Müller, Dirk; Pulm, Jannis; Gandjour, Afschin
2012-01-01
To compare cost-effectiveness modeling analyses of strategies to prevent osteoporotic and osteopenic fractures either based on fixed thresholds using bone mineral density or based on variable thresholds including bone mineral density and clinical risk factors. A systematic review was performed by using the MEDLINE database and reference lists from previous reviews. On the basis of predefined inclusion/exclusion criteria, we identified relevant studies published since January 2006. Articles included for the review were assessed for their methodological quality and results. The literature search resulted in 24 analyses, 14 of them using a fixed-threshold approach and 10 using a variable-threshold approach. On average, 70% of the criteria for methodological quality were fulfilled, but almost half of the analyses did not include medication adherence in the base case. The results of variable-threshold strategies were more homogeneous and showed more favorable incremental cost-effectiveness ratios compared with those based on a fixed threshold with bone mineral density. For analyses with fixed thresholds, incremental cost-effectiveness ratios varied from €80,000 per quality-adjusted life-year in women aged 55 years to cost saving in women aged 80 years. For analyses with variable thresholds, the range was €47,000 to cost savings. Risk assessment using variable thresholds appears to be more cost-effective than selecting high-risk individuals by fixed thresholds. Although the overall quality of the studies was fairly good, future economic analyses should further improve their methods, particularly in terms of including more fracture types, incorporating medication adherence, and including or discussing unrelated costs during added life-years. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
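The incremental cost-effectiveness ratios (ICERs) compared throughout the review follow a standard definition: extra cost divided by extra quality-adjusted life-years versus a reference strategy. A minimal sketch with purely illustrative figures, not numbers from the review:

```python
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained.
    A negative result with a QALY gain means the new strategy is
    cost-saving (dominant)."""
    d_cost = cost_new - cost_ref
    d_qaly = qaly_new - qaly_ref
    if d_qaly <= 0:
        raise ValueError("new strategy gains no QALYs over the reference")
    return d_cost / d_qaly
```

For example, a strategy costing €4,000 more while adding 0.05 QALYs yields an ICER of €80,000 per QALY, the upper end of the range reported for fixed-threshold analyses in 55-year-old women.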
NASA Astrophysics Data System (ADS)
Phillips, Stephen Robert; Costa, Maycira
2017-12-01
The use of standard ocean colour reflectance based algorithms to derive surface chlorophyll may have limited applicability for optically dynamic coastal waters due to the pre-defined coefficients based on global datasets. Reflectance based algorithms adjusted to regional optical water characteristics are a promising alternative. A class-based definition of optically diverse coastal waters was investigated as a first step towards the development of temporally and spatially constrained reflectance based algorithms for optically variable coastal waters. A large set of bio-optical data were collected as part of five research cruises and bi-weekly trips aboard a ship of opportunity on the west coast of Canada, to assess the spatial and temporal variability of above-water reflectance in this optically contrasting coastal environment. To accomplish this, in situ biophysical and optical measurements were collected in conjunction with above-water hyperspectral remote sensing reflectance (Rrs) at 145 stations. The concentrations of measured biophysical data varied considerably: chlorophyll a (Chla) (mean = 1.64, range: 0.10-7.20 μg l⁻¹), total suspended matter (TSM) (3.09, 0.82-20.69 mg l⁻¹), and absorption by chromophoric dissolved organic matter (CDOM) (acdom(443 nm)) (0.525, 0.007-3.072 m⁻¹), thus representing the spatio-temporal variability of the Salish Sea. Optically, a similarly large range was found: particulate scattering (bp(650 nm)) (1.316, 0.250-7.450 m⁻¹), particulate backscattering (bbp(650 nm)) (0.022, 0.005-0.097 m⁻¹), total beam attenuation coefficient (ct(650 nm)) (1.675, 0.371-9.537 m⁻¹) and particulate absorption coefficient (ap(650 nm)) (0.345, 0.048-2.020 m⁻¹). An empirical orthogonal function (EOF) analysis revealed that Rrs variability was highly correlated to bp (r = 0.90), bbp (r = 0.82) and concentration of TSM (r = 0.80), which highlighted the dominant role of water turbidity in this region.
Hierarchical clustering analysis was applied to the normalized Rrs spectra to define optical water classes. Class 1 was defined by the highest Rrs values, particularly above 570 nm, indicating more turbid waters; Class 2 was dominated by high Chla and TSM concentrations, reflected in high Rrs at 570 nm as well as fluorescence and absorption peaks; Class 3 showed strong fluorescence signatures accompanied by low TSM influence; and Class 4 was most representative of clear waters, with a less defined absorption peak around 440 nm. By understanding the bio-optical factors that control the variability of the Rrs spectra, this study aims to develop a sub-regional characterization of this coastal region and thereby improve bio-optical algorithms for this optically complex coastal area.
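The class-definition step above (normalize each Rrs spectrum, then cluster by shape) can be sketched as follows. This is a generic illustration assuming spectra stored as rows of a matrix and Ward linkage; the study's exact normalization and linkage choices are not specified here:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_spectra(rrs, n_classes=4):
    """Normalize each Rrs spectrum to unit area so only spectral shape
    matters, then group the spectra with Ward hierarchical clustering."""
    rrs = np.asarray(rrs, float)
    norm = rrs / rrs.sum(axis=1, keepdims=True)   # remove magnitude differences
    z = linkage(norm, method="ward")              # agglomerative tree
    return fcluster(z, t=n_classes, criterion="maxclust")
```

Cutting the dendrogram at four clusters (`n_classes=4`) would yield class labels analogous to the four optical water classes described above.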
Recknagel, Friedrich; Orr, Philip T; Cao, Hongqing
2014-01-01
Seven-day-ahead forecasting models of Cylindrospermopsis raciborskii in three warm-monomictic and mesotrophic reservoirs in south-east Queensland have been developed by means of water quality data from 1999 to 2010 and the hybrid evolutionary algorithm HEA. The resulting models, using either all measured variables or only electronically measurable variables as inputs, accurately forecast the timing of C. raciborskii overgrowth and matched the high and low magnitudes of observed bloom events well, with 0.45 ≤ r² ≤ 0.61 and 0.40 ≤ r² ≤ 0.57, respectively. The models also revealed relationships and thresholds triggering bloom events that provide valuable information on synergism between water quality conditions and population dynamics of C. raciborskii. The best performing models based on all measured variables indicated electrical conductivity (EC) within the range of 206-280 mS m⁻¹ as the threshold above which fast growth and high abundances of C. raciborskii have been observed for the three lakes. The best models based on electronically measurable variables for Lakes Wivenhoe and Somerset indicated a water temperature (WT) range of 25.5-32.7°C within which fast growth and high abundances of C. raciborskii can be expected. By contrast, the model for Lake Samsonvale highlighted a turbidity (TURB) level of 4.8 NTU as an indicator for mass developments of C. raciborskii. Experiments with online measured water quality data of Lake Wivenhoe from 2007 to 2010 resulted in predictive models with 0.61 ≤ r² ≤ 0.65, in which similar levels of EC and WT were again identified as thresholds for overgrowth of C. raciborskii. The highest validity for an in situ data-based model, r² = 0.75, was achieved after applying time lags of 7 days for EC and 1 day for dissolved oxygen. These time lags were identified by a systematic screening of all possible combinations of time lags between 0 and 10 days for all electronically measurable variables.
The resulting model performs seven-day-ahead forecasts and is currently being implemented and tested for early warning of C. raciborskii blooms in the Wivenhoe reservoir. Copyright © 2013 Elsevier B.V. All rights reserved.
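The systematic lag screening described above (testing lags of 0 to 10 days for each input against the forecast target) can be sketched per variable as follows. This is a hypothetical helper, not the HEA implementation, and it scores each lag by squared Pearson correlation rather than by full model refitting:

```python
import numpy as np

def best_lag(x, y, max_lag=10):
    """Screen lags 0..max_lag (days) for input series x against target y.
    Returns (lag, r2) for the lag maximizing squared Pearson correlation."""
    best = (0, -np.inf)
    for lag in range(max_lag + 1):
        xs = x[: len(x) - lag] if lag else x   # x shifted back by `lag` days
        ys = y[lag:]                           # aligned target values
        r = np.corrcoef(xs, ys)[0, 1]
        if r * r > best[1]:
            best = (lag, r * r)
    return best
```

Screening every input variable this way, then refitting the forecasting model with the winning lags, mirrors the procedure that surfaced the 7-day EC and 1-day dissolved-oxygen lags reported above.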
Predictors of emotional and physical dating violence in a sample of serious juvenile offenders.
Sweeten, Gary; Larson, Matthew; Piquero, Alex R
2016-10-01
We estimate group-based dating violence trajectories and identify the adolescent risk factors that explain membership in each trajectory group. Using longitudinal data from the Pathways to Desistance Study, which follows a sample of 1354 serious juvenile offenders from Philadelphia, Pennsylvania and Phoenix, Arizona between mid-adolescence and early adulthood, we estimate group-based trajectory models of both emotional dating violence and physical dating violence over a span of five years in young adulthood. We then estimate multinomial logistic regression models to identify theoretically motivated risk factors that predict membership in these groups. We identified three developmental patterns of emotional dating violence: none (33%), low-level (59%) and high-level decreasing (8%). The best-fitting model for physical dating violence also had three groups: none (73%), low-level (24%) and high-level (3%). Race/ethnicity, family and psychosocial variables were among the strongest predictors of both emotional and physical dating violence. In addition, delinquency history variables predicted emotional dating violence and relationship variables predicted physical dating violence. Dating violence is quite prevalent in young adulthood among serious juvenile offenders. Numerous predictors distinguish between chronic dating violence perpetrators and other groups. These may suggest points of intervention for reducing future violence. Copyright © 2016 John Wiley & Sons, Ltd.
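The multinomial logistic regression step described above, predicting trajectory-group membership from risk factors, can be sketched in a minimal form. The gradient-descent fit and synthetic data below are purely illustrative, not the Pathways analysis:

```python
import numpy as np

def fit_multinomial_logit(X, y, n_classes, lr=0.1, n_iter=2000):
    """Minimal multinomial (softmax) logistic regression by gradient
    descent: predict group membership y in {0..K-1} from features X
    (n_samples x n_features). Returns the weight matrix W."""
    X = np.column_stack([np.ones(len(X)), X])   # add intercept column
    W = np.zeros((X.shape[1], n_classes))
    Y = np.eye(n_classes)[y]                    # one-hot targets
    for _ in range(n_iter):
        logits = X @ W
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        P = np.exp(logits)
        P /= P.sum(axis=1, keepdims=True)            # softmax probabilities
        W -= lr * X.T @ (P - Y) / len(X)             # gradient step
    return W

def predict_group(W, X):
    X = np.column_stack([np.ones(len(X)), X])
    return np.argmax(X @ W, axis=1)
```

In the study's setting, the rows of `X` would hold the adolescent risk factors and `y` the trajectory-group labels (none, low-level, high-level) estimated in the first modeling stage; standard errors and model selection would of course require a proper statistical package rather than this sketch.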