Sample records for scatterplots

  1. Scatterplot analysis of EEG slow-wave magnitude and heart rate variability: an integrative exploration of cerebral cortical and autonomic functions.

    PubMed

    Kuo, Terry B J; Yang, Cheryl C H

    2004-06-15

    To explore interactions between cerebral cortical and autonomic functions in different sleep-wake states, active waking (AW), quiet sleep (QS), and paradoxical sleep (PS) during the daytime sleep of adult male Wistar-Kyoto (WKY) rats were compared. Ten WKY rats were studied. All rats had electrodes implanted for polygraphic recordings. One week later, a 6-hour daytime sleep-wakefulness recording session was performed. A scatterplot analysis of electroencephalogram (EEG) slow-wave magnitude (0.5-4 Hz) and heart rate variability (HRV) was applied in each rat. The EEG slow-wave-RR interval scatterplot from all of the recordings revealed a propeller-like pattern. When the scatterplot was divided into AW, PS, and QS according to the corresponding EEG mean power frequency and nuchal electromyogram, the EEG slow-wave-RR interval relationship became nil, negative, and positive for AW, PS, and QS, respectively. A significant negative relationship was found for EEG slow-wave and high-frequency power of HRV (HF) coupling during PS, and for EEG slow-wave and low-frequency power of HRV to HF ratio (LF/HF) coupling during QS. The optimal time lags for the slow-wave-LF/HF relationship differed between PS and QS. The bradycardia noted in QS and PS was related to sympathetic suppression and vagal excitation, respectively. The EEG slow-wave-HRV scatterplot may provide unique insights into studies of sleep, and such a relationship may delineate the sleep-state-dependent fluctuations in autonomic nervous system activity.
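
    The LF and HF powers referred to in this record are conventionally obtained by integrating bands of the RR-interval power spectrum. A minimal Python sketch of that step follows (NumPy/SciPy assumed); the band limits below are the human convention, whereas rat studies such as this one use species-specific limits.

        import numpy as np
        from scipy.integrate import trapezoid
        from scipy.interpolate import interp1d
        from scipy.signal import welch

        def lf_hf_ratio(rr_s, fs=4.0, lf=(0.04, 0.15), hf=(0.15, 0.40)):
            """LF/HF from RR intervals in seconds (band limits are assumptions)."""
            t = np.cumsum(rr_s)                      # beat times
            grid = np.arange(t[0], t[-1], 1.0 / fs)  # evenly resampled tachogram
            rr_even = interp1d(t, rr_s)(grid)
            f, pxx = welch(rr_even - rr_even.mean(), fs=fs, nperseg=min(256, len(grid)))
            band = lambda lo, hi: trapezoid(pxx[(f >= lo) & (f < hi)], f[(f >= lo) & (f < hi)])
            return band(*lf) / band(*hf)

        rng = np.random.default_rng(0)
        rr = 1.0 + 0.05 * rng.standard_normal(300)   # ~5 min of noisy 1-s beats
        print(lf_hf_ratio(rr))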

  2. A randomized trial in a massive online open course shows people don't know what a statistically significant relationship looks like, but they can learn.

    PubMed

    Fisher, Aaron; Anderson, G Brooke; Peng, Roger; Leek, Jeff

    2014-01-01

    Scatterplots are the most common way for statisticians, scientists, and the public to visually detect relationships between measured variables. At the same time, and despite widely publicized controversy, P-values remain the most commonly used measure to statistically justify relationships identified between variables. Here we measure the ability to detect statistically significant relationships from scatterplots in a randomized trial of 2,039 students in a statistics massive open online course (MOOC). Each subject was shown a random set of scatterplots and asked to visually determine if the underlying relationships were statistically significant at the P < 0.05 level. Subjects correctly classified only 47.4% (95% CI [45.1%-49.7%]) of statistically significant relationships, and 74.6% (95% CI [72.5%-76.6%]) of non-significant relationships. Adding visual aids such as a best fit line or scatterplot smooth increased the probability a relationship was called significant, regardless of whether the relationship was actually significant. Classification of statistically significant relationships improved on repeat attempts of the survey, although classification of non-significant relationships did not. Our results suggest: (1) that evidence-based data analysis can be used to identify weaknesses in theoretical procedures in the hands of average users, (2) data analysts can be trained to improve detection of statistically significant results with practice, but (3) data analysts have incorrect intuition about what statistically significant relationships look like, particularly for small effects. We have built a web tool for people to compare scatterplots with their corresponding p-values which is available here: http://glimmer.rstudio.com/afisher/EDA/.
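
    The judgment the subjects were asked to make can be simulated directly: draw a scatterplot sample with a known underlying correlation and check whether Pearson's test reports P < 0.05. A hedged sketch in Python (the paper's actual data-generating settings are not given in the abstract):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        def make_scatter(n=100, rho=0.0):
            """One trial: n points with true correlation rho."""
            x = rng.standard_normal(n)
            y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)
            return x, y

        # the quantity subjects had to guess by eye: is P below 0.05?
        for rho in (0.0, 0.1, 0.2, 0.3):
            x, y = make_scatter(rho=rho)
            r, p = stats.pearsonr(x, y)
            print(f"true rho={rho:.1f}  sample r={r:+.2f}  P={p:.3f}  significant={p < 0.05}")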

  3. Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kleijnen, J.P.C.; Helton, J.C.

    1999-04-01

    The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (1) linear relationships with correlation coefficients, (2) monotonic relationships with rank correlation coefficients, (3) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (4) trends in variability as defined by variances and interquartile ranges, and (5) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from analysis include: (1) Type I errors are unavoidable, (2) Type II errors can occur when inappropriate analysis procedures are used, (3) physical explanations should always be sought for why statistical procedures identify variables as being important, and (4) the identification of important variables tends to be stable for independent Latin hypercube samples.
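
    The five-step battery described above maps onto standard tests. A rough SciPy sketch on synthetic Monte Carlo data; Levene's test stands in for the variance/IQR comparisons, and the authors' exact procedures may differ:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        x = rng.uniform(size=500)                           # sampled model input
        y = np.sin(3 * x) + 0.3 * rng.standard_normal(500)  # model output

        print("(1) linear   ", stats.pearsonr(x, y))
        print("(2) monotonic", stats.spearmanr(x, y))
        edges = np.quantile(x, [0.2, 0.4, 0.6, 0.8])        # five x-classes
        groups = [y[np.digitize(x, edges) == k] for k in range(5)]
        print("(3) central  ", stats.kruskal(*groups))
        print("(4) spread   ", stats.levene(*groups))
        counts, _, _ = np.histogram2d(x, y, bins=5)         # 5x5 grid of cell counts
        print("(5) random?  ", stats.chi2_contingency(counts)[:2])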

  4. A Simulation To Model Exponential Growth.

    ERIC Educational Resources Information Center

    Appelbaum, Elizabeth Berman

    2000-01-01

    Describes a simulation using dice-tossing students in a population cluster to model the growth of cancer cells. This growth is recorded in a scatterplot and compared to an exponential function graph. (KHR)
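
    The dice mechanism can be sketched in a few lines; the article's exact rule is not given in the abstract, so the divide-on-a-six rule below is an assumption, giving expected growth (1 + 1/6)^n:

        import numpy as np

        rng = np.random.default_rng(3)
        cells, history = 6, []
        for _ in range(12):
            history.append(cells)
            # assumed rule: each "cell" (student) rolls a die and divides on a six
            cells += int((rng.integers(1, 7, size=cells) == 6).sum())
        print(history)   # scatterplot these counts against round number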

  5. The Danger of Dichotomizing Continuous Variables: A Visualization

    ERIC Educational Resources Information Center

    Kuss, Oliver

    2013-01-01

    Four rather different scatterplots of two variables X and Y are given, which, after dichotomizing X and Y, result in identical fourfold-tables misleadingly showing no association. (Contains 1 table and 1 figure.)
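
    The phenomenon is easy to reproduce. In the sketch below (one illustrative case, not one of the article's four), a deterministic but non-monotonic relationship produces a fourfold table indistinguishable from independence after median splits:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        x = rng.standard_normal(1000)
        y = x**2 + 0.1 * rng.standard_normal(1000)  # strong but non-monotonic

        a, b = x > np.median(x), y > np.median(y)   # dichotomize both variables
        table = np.array([[np.sum(~a & ~b), np.sum(~a & b)],
                          [np.sum(a & ~b), np.sum(a & b)]])
        print(table)
        print(stats.chi2_contingency(table)[:2])    # ~no association in the table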

  6. Using Maslow's Hierarchy of Needs to Identify Indicators of Potential Mass Migration Events

    DTIC Science & Technology

    2016-03-23

  7. Ventricular Cycle Length Characteristics Estimative of Prolonged RR Interval during Atrial Fibrillation

    PubMed Central

    CIACCIO, EDWARD J.; BIVIANO, ANGELO B.; GAMBHIR, ALOK; EINSTEIN, ANDREW J.; GARAN, HASAN

    2014-01-01

    Background When atrial fibrillation (AF) is incessant, imaging during a prolonged ventricular RR interval may improve image quality. It was hypothesized that long RR intervals could be predicted from preceding RR values. Methods From the PhysioNet database, electrocardiogram RR intervals were obtained from 74 persistent AF patients. An RR interval lengthened by at least 250 ms beyond the immediately preceding RR interval (termed T0 and T1, respectively) was considered prolonged. A two-parameter scatterplot was used to predict the occurrence of a prolonged interval T0. The scatterplot parameters were: (1) RR variability (RRv) estimated as the average second derivative from 10 previous pairs of RR differences, T13–T2, and (2) Tm–T1, the difference between Tm, the mean from T13 to T2, and T1. For each patient, scatterplots were constructed using preliminary data from the first hour. The ranges of parameters 1 and 2 were adjusted to maximize the proportion of prolonged RR intervals within range. These constraints were used for prediction of prolonged RR in test data collected during the second hour. Results The mean prolonged event was 1.0 seconds in duration. Actual prolonged events were identified with a mean positive predictive value (PPV) of 80% in the test set. PPV was >80% in 36 of 74 patients. An average of 10.8 prolonged RR intervals per 60 minutes was correctly identified. Conclusions A method was developed to predict prolonged RR intervals using two parameters and prior statistical sampling for each patient. This or similar methodology may help improve cardiac imaging in many longstanding persistent AF patients. PMID:23998759
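
    A sketch of the two scatterplot parameters on synthetic RR data; the abstract does not give the exact formula, so averaging absolute second differences for RRv is an assumption:

        import numpy as np

        def scatter_params(rr):
            """Per beat: (RRv, Tm - T1, prolonged?) with T1..T13 the 13 prior intervals."""
            out = []
            for i in range(13, len(rr)):
                t = rr[i-13:i][::-1]                 # t[0] = T1, ..., t[12] = T13
                d2 = np.diff(t[1:], n=2)             # 10 second differences over T13..T2
                rrv = np.mean(np.abs(d2))
                tm_t1 = np.mean(t[1:]) - t[0]        # mean of T13..T2, minus T1
                out.append((rrv, tm_t1, rr[i] - t[0] >= 0.25))
            return np.array(out)

        rng = np.random.default_rng(5)
        rr = rng.uniform(0.4, 1.2, size=500)         # crude stand-in for AF variability
        params = scatter_params(rr)
        print(int(params[:, 2].sum()), "prolonged intervals among", len(params), "beats")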

  8. Visualizing Qualitative Information

    ERIC Educational Resources Information Center

    Slone, Debra J.

    2009-01-01

    The abundance of qualitative data in today's society and the need to easily scrutinize, digest, and share this information calls for effective visualization and analysis tools. Yet, no existing qualitative tools have the analytic power, visual effectiveness, and universality of familiar quantitative instruments like bar charts, scatter-plots, and…

  9. Mapping Stars with TI-83.

    ERIC Educational Resources Information Center

    Felsager, Bjorn

    2001-01-01

    Describes a mathematics and science project designed to help students gain some familiarity with constellations and trigonometry by using the TI-83 calculator as a tool. Specific constellations such as the Big Dipper (Plough) and other sets of stars are located using stereographic projection and graphed using scatterplots. (MM)
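
    The projection step can be reproduced outside the TI-83. A sketch using approximate J2000 coordinates for the Big Dipper; the resulting (x, y) pairs are what would be graphed as a scatterplot:

        import numpy as np

        # (right ascension in hours, declination in degrees), approximate J2000
        stars = {"Dubhe": (11.06, 61.75), "Merak": (11.03, 56.38),
                 "Phecda": (11.90, 53.69), "Megrez": (12.26, 57.03),
                 "Alioth": (12.90, 55.96), "Mizar": (13.40, 54.93),
                 "Alkaid": (13.79, 49.31)}

        def stereographic(ra_h, dec_deg):
            """Project from the south celestial pole onto the equatorial plane."""
            ra, dec = np.radians(ra_h * 15.0), np.radians(dec_deg)
            r = np.cos(dec) / (1.0 + np.sin(dec))
            return r * np.cos(ra), r * np.sin(ra)

        for name, (ra, dec) in stars.items():
            x, y = stereographic(ra, dec)
            print(f"{name:7s} x={x:+.3f} y={y:+.3f}")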

  10. Visualizing Concordance of Sets

    DTIC Science & Technology

    2006-01-01

  11. Applying Descriptive Statistics to Teaching the Regional Classification of Climate.

    ERIC Educational Resources Information Center

    Lindquist, Peter S.; Hammel, Daniel J.

    1998-01-01

    Describes an exercise for college and high school students that relates descriptive statistics to the regional climatic classification. The exercise introduces students to simple calculations of central tendency and dispersion, the construction and interpretation of scatterplots, and the definition of climatic regions. Forces students to engage…

  12. Using Scatterplots to Teach the Critical Power Concept

    ERIC Educational Resources Information Center

    Pettitt, Robert W.

    2012-01-01

    The critical power (CP) concept has received renewed attention and excitement in the academic community. The CP concept was originally conceived as a model derived from a series of exhaustive, constant-load, exercise bouts. All-out exercise testing has made quantification of the parameters for the two-component model easier to arrive at, which may…

  13. Arc_Mat: a Matlab-based spatial data analysis toolbox

    NASA Astrophysics Data System (ADS)

    Liu, Xingjian; Lesage, James

    2010-03-01

    This article presents an overview of Arc_Mat, a Matlab-based spatial data analysis software package whose source code has been placed in the public domain. An earlier version of the Arc_Mat toolbox was developed to extract map polygon and database information from ESRI shapefiles and provide high quality mapping in the Matlab software environment. We discuss revisions to the toolbox that: utilize enhanced computing and graphing capabilities of more recent versions of Matlab, restructure the toolbox with object-oriented programming features, and provide more comprehensive functions for spatial data analysis. The Arc_Mat toolbox functionality includes basic choropleth mapping; exploratory spatial data analysis that provides exploratory views of spatial data through various graphs, for example, histogram, Moran scatterplot, three-dimensional scatterplot, density distribution plot, and parallel coordinate plots; and more formal spatial data modeling that draws on the extensive Spatial Econometrics Toolbox functions. A brief review of the design aspects of the revised Arc_Mat is described, and we provide some illustrative examples that highlight representative uses of the toolbox. Finally, we discuss programming with and customizing the Arc_Mat toolbox functionalities.
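
    Among the exploratory views listed, the Moran scatterplot plots standardized values against their spatial lag, and its slope is Moran's I. A minimal NumPy sketch with a toy weights matrix (not Arc_Mat's own API):

        import numpy as np

        def moran_scatterplot(values, W):
            """W is a row-standardized n x n spatial weights matrix."""
            z = (values - values.mean()) / values.std()
            lag = W @ z                          # average of each area's neighbors
            return z, lag, (z @ lag) / (z @ z)   # slope of lag on z = Moran's I

        Wb = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)   # 5 areas on a line
        W = Wb / Wb.sum(axis=1, keepdims=True)
        z, lag, I = moran_scatterplot(np.array([1.0, 2.0, 2.5, 4.0, 5.0]), W)
        print("Moran's I =", round(I, 3))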

  14. A simulation study of hardwood rootstock populations in young loblolly pine plantations

    Treesearch

    David R. Weise; Glenn R. Glover

    1988-01-01

    A computer program to simulate spatial distribution of hardwood rootstock populations is presented. Nineteen 3- to 6-year-old loblolly pine (Pinus taeda L.) plantations in Alabama and Georgia were measured to provide information for the simulator. Spatial pattern, expressed as Pielou's nonrandomness index (PNI), ranged from 0.47 to 2.45. Scatterplots illustrated no...

  15. A New Way to Teach (or Compute) Pearson's "r" without Reliance on Cross-Products

    ERIC Educational Resources Information Center

    Huck, Schuyler W.; Ren, Bixiang; Yang, Hongwei

    2007-01-01

    Many students have difficulty seeing the conceptual link between bivariate data displayed in a scatterplot and the statistical summary of the relationship, "r." This article shows how to teach (and compute) "r" such that each datum's direct and indirect influences are made apparent and used in a new formula for calculating Pearson's "r."

  16. Using Pooled Data and Data Visualization to Introduce Statistical Concepts in the General Chemistry Laboratory

    ERIC Educational Resources Information Center

    Olsen, Robert J.

    2008-01-01

    I describe how data pooling and data visualization can be employed in the first-semester general chemistry laboratory to introduce core statistical concepts such as central tendency and dispersion of a data set. The pooled data are plotted as a 1-D scatterplot, a purpose-designed number line through which statistical features of the data are…

  17. Examining Student Conceptions of Covariation: A Focus on the Line of Best Fit

    ERIC Educational Resources Information Center

    Casey, Stephanie A.

    2015-01-01

    The purpose of this research study was to learn about students' conceptions concerning the line of best fit just prior to their introduction to the topic. Task-based interviews were conducted with thirty-three students, focused on five tasks that asked them to place the line of best fit on a scatterplot and explain their reasoning throughout the…

  18. Airborne Remote Sensing of Trafficability in the Coastal Zone

    DTIC Science & Technology

    2009-01-01

    validation instruments: Analytical Spectral Devices (ASD) full-range spectrometer; light weight deflectometer (LWD), which measures dynamic deflection...liquid water absorption features. The corresponding bearing strength measured by the LWD was high at the shoreline site and low at the backdune site...[Figure 7: correlation of in situ grain size, moisture, and bearing strength measurements; scatterplot of percent moisture vs. LWD.]

  1. Different rates of DNA replication at early versus late S-phase sections: multiscale modeling of stochastic events related to DNA content/EdU (5-ethynyl-2'deoxyuridine) incorporation distributions.

    PubMed

    Li, Biao; Zhao, Hong; Rybak, Paulina; Dobrucki, Jurek W; Darzynkiewicz, Zbigniew; Kimmel, Marek

    2014-09-01

    Mathematical modeling allows relating molecular events to single-cell characteristics assessed by multiparameter cytometry. In the present study we labeled newly synthesized DNA in A549 human lung carcinoma cells with 15-120 min pulses of EdU. All DNA was stained with DAPI and cellular fluorescence was measured by laser scanning cytometry. The frequency of cells in the ascending (left) side of the "horseshoe"-shaped EdU/DAPI bivariate distributions reports the rate of DNA replication at the time of entrance to S phase, while their frequency in the descending (right) side is a marker of DNA replication rate at the time of transition from S to G2 phase. To understand the connection between molecular-scale events and scatterplot asymmetry, we developed a multiscale stochastic model, which simulates DNA replication and cell cycle progression of individual cells and produces in silico EdU/DAPI scatterplots. For each S-phase cell the time points at which replication origins are fired are modeled by a non-homogeneous Poisson process (NHPP). Shifted gamma distributions are assumed for durations of cell cycle phases (G1, S, and G2/M). Depending on the rate of DNA synthesis being an increasing or decreasing function, simulated EdU/DAPI bivariate graphs show predominance of cells in the left (early-S) or right (late-S) side of the horseshoe distribution. Assuming the NHPP rate estimated from independent experiments, simulated EdU/DAPI graphs are nearly indistinguishable from those experimentally observed. This finding proves consistency between the S-phase DNA-replication rate based on molecular-scale analyses and cell population kinetics ascertained from EdU/DAPI scatterplots, and demonstrates that the DNA replication rate at entrance to S is relatively slow compared with its rather abrupt termination during the S to G2 transition. Our approach opens a possibility of similar modeling to study the effect of anticancer drugs on DNA replication/cell cycle progression and also to quantify other kinetic events that can be measured during S-phase. © 2014 International Society for Advancement of Cytometry.
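
    Firing times from a non-homogeneous Poisson process are commonly simulated by thinning (Lewis-Shedler). A sketch under an assumed decreasing rate; the paper's estimated NHPP rate is not given in the abstract:

        import numpy as np

        def nhpp_firing_times(rate, rate_max, duration, rng):
            """Event times on [0, duration] for intensity rate(t) <= rate_max."""
            t, times = 0.0, []
            while True:
                t += rng.exponential(1.0 / rate_max)    # candidate from dominating HPP
                if t > duration:
                    return np.array(times)
                if rng.uniform() < rate(t) / rate_max:  # accept w.p. rate(t)/rate_max
                    times.append(t)

        rng = np.random.default_rng(6)
        s_len = 8.0                                     # illustrative S-phase length (h)
        rate = lambda t: 20.0 * (1.0 - t / s_len)       # one decreasing-rate scenario
        print(nhpp_firing_times(rate, 20.0, s_len, rng)[:5])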

  2. Vegetation classification and soil moisture calculation using land surface temperature (LST) and vegetation index (VI)

    NASA Astrophysics Data System (ADS)

    Liu, Liangyun; Zhang, Bing; Xu, Genxing; Zheng, Lanfen; Tong, Qingxi

    2002-03-01

    In this paper, the temperature-emissivity separation (TES) method and normalized difference vegetation index (NDVI) are introduced, and hyperspectral image data are analyzed using land surface temperature (LST) and NDVI channels acquired by the Operational Modular Imaging Spectrometer (OMIS) over the Beijing Precision Agriculture Demonstration Base in Xiaotangshan town, Beijing, on 26 April 2001. Firstly, six kinds of ground targets (winter wheat in the booting and jointing stages, bare soil, water in ponds, sullage in dry ponds, and aquatic grass) are well classified using the LST and NDVI channels. Secondly, a triangle-like scatterplot is built and analyzed using the LST and NDVI channels, which is convenient for extracting information on vegetation growth and soil moisture. Compared with the scatterplot built from the red and near-infrared bands, the spectral distances between different classes are larger, and the samples within the same class are more convergent. Finally, we design a logarithmic VIT model to extract the surface soil water content (SWC) using the LST and NDVI channels, which works well; the coefficient of determination, R2, between the measured and estimated surface SWC is 0.634. The map of surface SWC in the wheat area is calculated and illustrated, which is important for scientific irrigation and precision agriculture.

  3. Evaluating and Improving the SAMA (Segmentation Analysis and Market Assessment) Recruiting Model

    DTIC Science & Technology

    2015-06-01

    ...the relationship between the calculated SAMA potential and the actual 2014 performance. The scatterplot in Figure 8 shows a strong linear relationship between the SAMA calculated potential and the contracting achievement for 2014, with an R-squared value of 0.871.

  4. Superquantile Regression: Theory, Algorithms, and Applications

    DTIC Science & Technology

    2014-12-01

    Example C: Stack loss data scatterplot matrix. [Tables of least-squares and quantile regression coefficients with associated coefficients of determination.]

  5. Boeing Michigan Aeronautical Research Center (BOMARC) Missile Shelters and Bunkers Scoping Survey Workplan

    DTIC Science & Technology

    2007-08-01

    Characterization (OHM 1998). From the plot, it is clear that the HEU dominates DU in the overall isotopic characteristic. Among the three uranium...isotopes, 234U comprised about 90% of the total activity, including naturally-occurring background sources. However, in comparison to the WGP, uranium...listed for a few sampling locations that had isotopic plutonium analysis of wipe samples. Figure A-19 contains a scatterplot of the paired Table 4-13

  6. Exploring High-D Spaces with Multiform Matrices and Small Multiples

    PubMed Central

    MacEachren, Alan; Dai, Xiping; Hardisty, Frank; Guo, Diansheng; Lengerich, Gene

    2011-01-01

    We introduce an approach to visual analysis of multivariate data that integrates several methods from information visualization, exploratory data analysis (EDA), and geovisualization. The approach leverages the component-based architecture implemented in GeoVISTA Studio to construct a flexible, multiview, tightly (but generically) coordinated, EDA toolkit. This toolkit builds upon traditional ideas behind both small multiples and scatterplot matrices in three fundamental ways. First, we develop a general, MultiForm, Bivariate Matrix and a complementary MultiForm, Bivariate Small Multiple plot in which different bivariate representation forms can be used in combination. We demonstrate the flexibility of this approach with matrices and small multiples that depict multivariate data through combinations of: scatterplots, bivariate maps, and space-filling displays. Second, we apply a measure of conditional entropy to (a) identify variables from a high-dimensional data set that are likely to display interesting relationships and (b) generate a default order of these variables in the matrix or small multiple display. Third, we add conditioning, a kind of dynamic query/filtering in which supplementary (undisplayed) variables are used to constrain the view onto variables that are displayed. Conditioning allows the effects of one or more well understood variables to be removed from the analysis, making relationships among remaining variables easier to explore. We illustrate the individual and combined functionality enabled by this approach through application to analysis of cancer diagnosis and mortality data and their associated covariates and risk factors. PMID:21947129
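
    Conditional entropy H(Y|X) can be estimated from a binned joint distribution, with lower values flagging variable pairs worth displaying. A sketch; the binning scheme is an assumption, not GeoVISTA Studio's implementation:

        import numpy as np

        def conditional_entropy(x, y, bins=8):
            """H(Y|X) in bits from a 2-D histogram; lower = Y more predictable from X."""
            joint, _, _ = np.histogram2d(x, y, bins=bins)
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)
            with np.errstate(divide="ignore", invalid="ignore"):
                return -np.nansum(pxy * np.log2(pxy / px))

        rng = np.random.default_rng(7)
        a = rng.standard_normal(2000)
        b = a**2 + 0.1 * rng.standard_normal(2000)   # strongly related to a
        c = rng.standard_normal(2000)                # unrelated to a
        print("H(b|a) =", round(conditional_entropy(a, b), 2))
        print("H(c|a) =", round(conditional_entropy(a, c), 2))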

  7. Use of locally weighted scatterplot smoothing (LOWESS) regression to study selection signatures in Piedmontese and Italian Brown cattle breeds.

    PubMed

    Pintus, Elia; Sorbolini, Silvia; Albera, Andrea; Gaspa, Giustino; Dimauro, Corrado; Steri, Roberto; Marras, Gabriele; Macciotta, Nicolò P P

    2014-02-01

    Selection is the major force affecting local levels of genetic variation in species. The availability of dense marker maps offers new opportunities for a detailed understanding of genetic diversity distribution across the animal genome. Over the last 50 years, cattle breeds have been subjected to intense artificial selection. Consequently, regions controlling traits of economic importance are expected to exhibit selection signatures. The fixation index (Fst) is an estimate of population differentiation, based on genetic polymorphism data, and it is calculated using the relationship between inbreeding and heterozygosity. In the present study, locally weighted scatterplot smoothing (LOWESS) regression and a control chart approach were used to investigate selection signatures in two cattle breeds with different production aptitudes (dairy and beef). Fst was calculated for 42 514 SNP marker loci distributed across the genome in 749 Italian Brown and 364 Piedmontese bulls. The statistical significance of Fst values was assessed using a control chart. The LOWESS technique was efficient in removing noise from the raw data and was able to highlight selection signatures in chromosomes known to harbour genes affecting dairy and beef traits. Examples include the peaks detected for BTA2 in the region where the myostatin gene is located and for BTA6 in the region harbouring the ABCG2 locus. Moreover, several loci not previously reported in cattle studies were detected. © 2013 The Authors, Animal Genetics © 2013 Stichting International Foundation for Animal Genetics.
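
    A sketch of the LOWESS-plus-control-chart idea on synthetic Fst values (statsmodels assumed); the 3-sigma limit is one common control-chart choice and may differ from the authors' exact limits:

        import numpy as np
        from statsmodels.nonparametric.smoothers_lowess import lowess

        rng = np.random.default_rng(8)
        pos = np.arange(2000)                    # marker index along a chromosome
        fst = rng.beta(1, 20, size=2000)         # noisy per-SNP Fst values
        fst[1000:1050] += 0.25                   # an embedded selection signature

        smoothed = lowess(fst, pos, frac=0.05, return_sorted=False)
        limit = smoothed.mean() + 3 * smoothed.std()   # control-chart threshold
        print(int((smoothed > limit).sum()), "markers above the control limit")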

  8. Indirect estimation of signal-dependent noise with nonadaptive heterogeneous samples.

    PubMed

    Azzari, Lucio; Foi, Alessandro

    2014-08-01

    We consider the estimation of signal-dependent noise from a single image. Unlike conventional algorithms that build a scatterplot of local mean-variance pairs from either small or adaptively selected homogeneous data samples, our proposed approach relies on arbitrarily large patches of heterogeneous data extracted at random from the image. We demonstrate the feasibility of our approach through an extensive theoretical analysis based on mixture of Gaussian distributions. A prototype algorithm is also developed in order to validate the approach on simulated data as well as on real camera raw images.

  9. Stress indicators based on airborne thermal imagery for field phenotyping a heterogeneous tree population for response to water constraints

    PubMed Central

    Virlet, Nicolas; Lebourgeois, Valentine; Martinez, Sébastien; Costes, Evelyne; Labbé, Sylvain; Regnard, Jean-Luc

    2014-01-01

    As field phenotyping of plant response to water constraints constitutes a bottleneck for breeding programmes, airborne thermal imagery can contribute to assessing the water status of a wide range of individuals simultaneously. However, the presence of mixed soil–plant pixels in heterogeneous plant cover complicates the interpretation of canopy temperature. Moran's Water Deficit Index (WDI = 1 - ET_act/ET_max), which was designed to overcome this difficulty, was compared with surface minus air temperature (T_s - T_a) as a water stress indicator. As parameterization of the theoretical equations for WDI computation is difficult, particularly when applied to genotypes with large architectural variability, a simplified procedure based on quantile regression was proposed to delineate the Vegetation Index–Temperature (VIT) scatterplot. The sensitivity of WDI to variations in wet and dry references was assessed by applying more or less stringent quantile levels. The different stress indicators tested on a series of airborne multispectral images (RGB, near-infrared, and thermal infrared) of a population of 122 apple hybrids, under two irrigation regimes, significantly discriminated the tree water statuses. For each acquisition date, the statistical method efficiently delineated the VIT scatterplot, while the limits obtained using the theoretical approach overlapped it, leading to inconsistent WDI values. Once water constraint was established, the different stress indicators were linearly correlated to the stem water potential among a tree subset. T_s - T_a showed a strong sensitivity to evaporative demand, which limited its relevancy for temporal comparisons. Finally, the statistical approach of WDI appeared the most suitable for high-throughput phenotyping. PMID:25080086
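
    A sketch of delineating the VIT scatterplot with quantile regression and computing WDI = 1 - ET_act/ET_max = (T_s - T_wet)/(T_dry - T_wet); the quantile levels and the toy pixel data are assumptions (statsmodels and pandas assumed):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(9)
        vi = rng.uniform(0.1, 0.9, 1500)              # vegetation index per pixel
        ts = 45 - 20 * vi + rng.uniform(0, 8, 1500)   # surface temperature (toy data)
        df = pd.DataFrame({"vi": vi, "ts": ts})

        dry = smf.quantreg("ts ~ vi", df).fit(q=0.95)  # statistical "dry edge"
        wet = smf.quantreg("ts ~ vi", df).fit(q=0.05)  # statistical "wet edge"
        wdi = (df.ts - wet.predict(df)) / (dry.predict(df) - wet.predict(df))
        print(wdi.describe())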

  10. Novel calcium infusion regimen after parathyroidectomy for renal hyperparathyroidism

    PubMed Central

    Tan, Jih Huei; Tan, Henry Chor Lip; Arulanantham, Sarojah A/P

    2017-01-01

    Abstract Aim Calcium infusion is used after parathyroid surgery for renal hyperparathyroidism to treat postoperative hypocalcaemia. We compared a new infusion regimen to one commonly used in Malaysia based on 2003 K/DOQI guidelines. Methods Retrospective data on serum calcium and infusion rates was collected from 2011–2015. The relationship between peak calcium efflux (PER) and time was determined using a scatterplot and linear regression. A comparison between regimens was made based on treatment efficacy (hypocalcaemia duration, total infusion amount and time) and calcium excursions (outside target range, peak and trough calcium) using bar charts and an unpaired t‐test. Results Fifty‐one and 34 patients on the original and new regimens respectively were included. Mean PER was lower (2.16 vs 2.56 mmol/h; P = 0.03) and occurred earlier (17.6 vs 23.2 h; P = 0.13) for the new regimen. Both scatterplot and regression showed a large correlation between PER and time (R‐square 0.64, SE 1.53, P < 0.001). The new regimen had shorter period of hypocalcaemia (28.9 vs 66.4 h, P = 0.04), and required less calcium infusion (67.7 vs 127.2 mmol, P = 0.02) for a shorter duration (57.3 vs 102.9 h, P = 0.001). Calcium excursions, peak and trough calcium were not significantly different between regimens. Early postoperative high excursions occurred when the infusion was started in spite of elevated peri‐operative calcium levels. Conclusion The new infusion regimen was superior to the original in that it required a shorter treatment period and resulted in less hypocalcaemia. We found that early aggressive calcium replacement is unnecessary and raises the risk of rebound hypercalcemia. PMID:26952689

  11. Assessing the role of pavement macrotexture in preventing crashes on highways.

    PubMed

    Pulugurtha, Srinivas S; Kusam, Prasanna R; Patel, Kuvleshay J

    2010-02-01

    The objective of this article is to assess the role of pavement macrotexture in preventing crashes on highways in the State of North Carolina. Laser profilometer data obtained from the North Carolina Department of Transportation (NCDOT) for highways comprising four corridors are processed to calculate pavement macrotexture at 100-m (approximately 330-ft) sections according to the American Society for Testing and Materials (ASTM) standards. Crash data collected over the same lengths of the corridors were integrated with the calculated pavement macrotexture for each section. Scatterplots were generated to assess the role of pavement macrotexture on crashes and logarithm of crashes. Regression analyses were conducted by considering predictor variables such as million vehicle miles of travel (as a function of traffic volume and length), the number of interchanges, the number of at-grade intersections, the number of grade-separated interchanges, and the number of bridges, culverts, and overhead signs along with pavement macrotexture to study the statistical significance of relationship between pavement macrotexture and crashes (both linear and log-linear) when compared to other predictor variables. Scatterplots and regression analysis conducted indicate a more statistically significant relationship between pavement macrotexture and logarithm of crashes than between pavement macrotexture and crashes. The coefficient for pavement macrotexture, in general, is negative, indicating that the number of crashes or logarithm of crashes decreases as it increases. The relation between pavement macrotexture and logarithm of crashes is generally stronger than between most other predictor variables and crashes or logarithm of crashes. Based on results obtained, it can be concluded that maintaining pavement macrotexture greater than or equal to 1.524 mm (0.06 in.) as a threshold limit would possibly reduce crashes and provide safe transportation to road users on highways.
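
    The log-linear specification reported as the stronger fit can be sketched as follows (synthetic sections; the study used several additional predictors):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(10)
        n = 400                                    # 100-m highway sections
        macrotexture = rng.uniform(0.5, 2.5, n)    # mm
        mvmt = rng.uniform(0.5, 5.0, n)            # million vehicle miles of travel
        crashes = rng.poisson(np.exp(0.4 * mvmt - 0.6 * macrotexture))

        X = sm.add_constant(np.column_stack([macrotexture, mvmt]))
        fit = sm.OLS(np.log(crashes + 1), X).fit()  # +1 handles zero-crash sections
        print(fit.params)                           # macrotexture coefficient < 0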

  12. Digital image analysis of Ki67 proliferation index in breast cancer using virtual dual staining on whole tissue sections: clinical validation and inter-platform agreement.

    PubMed

    Koopman, Timco; Buikema, Henk J; Hollema, Harry; de Bock, Geertruida H; van der Vegt, Bert

    2018-05-01

    The Ki67 proliferation index is a prognostic and predictive marker in breast cancer. Manual scoring is prone to inter- and intra-observer variability. The aims of this study were to clinically validate digital image analysis (DIA) of Ki67 using virtual dual staining (VDS) on whole tissue sections and to assess inter-platform agreement between two independent DIA platforms. Serial whole tissue sections of 154 consecutive invasive breast carcinomas were stained for Ki67 and cytokeratin 8/18 with immunohistochemistry in a clinical setting. Ki67 proliferation index was determined using two independent DIA platforms, implementing VDS to identify tumor tissue. Manual Ki67 score was determined using a standardized manual counting protocol. Inter-observer agreement between manual and DIA scores and inter-platform agreement between both DIA platforms were determined and calculated using Spearman's correlation coefficients. Correlations and agreement were assessed with scatterplots and Bland-Altman plots. Spearman's correlation coefficients were 0.94 (p < 0.001) for inter-observer agreement between manual counting and platform A, 0.93 (p < 0.001) between manual counting and platform B, and 0.96 (p < 0.001) for inter-platform agreement. Scatterplots and Bland-Altman plots revealed no skewness within specific data ranges. In the few cases with ≥ 10% difference between manual counting and DIA, results by both platforms were similar. DIA using VDS is an accurate method to determine the Ki67 proliferation index in breast cancer, as an alternative to manual scoring of whole sections in clinical practice. Inter-platform agreement between two different DIA platforms was excellent, suggesting vendor-independent clinical implementability.
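
    The agreement statistics reported here reduce to a rank correlation plus Bland-Altman bias and limits of agreement. A sketch on synthetic Ki67 scores:

        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(11)
        manual = np.clip(rng.gamma(2.0, 10.0, 154), 1, 95)    # manual Ki67 (%)
        dia = np.clip(manual + rng.normal(0, 3, 154), 1, 99)  # digital image analysis

        rho, p = spearmanr(manual, dia)
        diff = dia - manual
        bias, loa = diff.mean(), 1.96 * diff.std(ddof=1)      # Bland-Altman components
        print(f"rho={rho:.2f}  bias={bias:+.2f}  LoA=[{bias - loa:.2f}, {bias + loa:.2f}]")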

  13. An introduction to real-time graphical techniques for analyzing multivariate data

    NASA Astrophysics Data System (ADS)

    Friedman, Jerome H.; McDonald, John Alan; Stuetzle, Werner

    1987-08-01

    Orion I is a graphics system used to study applications of computer graphics - especially interactive motion graphics - in statistics. Orion I is the newest of a family of "Prim" systems, whose most striking common feature is the use of real-time motion graphics to display three dimensional scatterplots. Orion I differs from earlier Prim systems through the use of modern and relatively inexpensive raster graphics and microprocessor technology. It also delivers more computing power to its user; Orion I can perform more sophisticated real-time computations than were possible on previous such systems. We demonstrate some of Orion I's capabilities in our film: "Exploring data with Orion I".

  14. A Principled Way of Assessing Visualization Literacy.

    PubMed

    Boy, Jeremy; Rensink, Ronald A; Bertini, Enrico; Fekete, Jean-Daniel

    2014-12-01

    We describe a method for assessing the visualization literacy (VL) of a user. Assessing how well people understand visualizations has great value for research (e.g., to avoid confounds), for design (e.g., to best determine the capabilities of an audience), for teaching (e.g., to assess the level of new students), and for recruiting (e.g., to assess the level of interviewees). This paper proposes a method for assessing VL based on Item Response Theory. It describes the design and evaluation of two VL tests for line graphs, and presents the extension of the method to bar charts and scatterplots. Finally, it discusses the reimplementation of these tests for fast, effective, and scalable web-based use.

  15. Automated haematology analysis to diagnose malaria

    PubMed Central

    2010-01-01

    For more than a decade, flow cytometry-based automated haematology analysers have been studied for malaria diagnosis. Although current haematology analysers are not specifically designed to detect malaria-related abnormalities, most studies have found sensitivities that comply with WHO malaria-diagnostic guidelines, i.e. ≥ 95% in samples with > 100 parasites/μl. Establishing a correct and early malaria diagnosis is a prerequisite for an adequate treatment and to minimizing adverse outcomes. Expert light microscopy remains the 'gold standard' for malaria diagnosis in most clinical settings. However, it requires an explicit request from clinicians and has variable accuracy. Malaria diagnosis with flow cytometry-based haematology analysers could become an important adjuvant diagnostic tool in the routine laboratory work-up of febrile patients in or returning from malaria-endemic regions. Haematology analysers so far studied for malaria diagnosis are the Cell-Dyn®, Coulter® GEN·S and LH 750, and the Sysmex XE-2100® analysers. For Cell-Dyn analysers, abnormal depolarization events mainly in the lobularity/granularity and other scatter-plots, and various reticulocyte abnormalities have shown overall sensitivities and specificities of 49% to 97% and 61% to 100%, respectively. For the Coulter analysers, a 'malaria factor' using the monocyte and lymphocyte size standard deviations obtained by impedance detection has shown overall sensitivities and specificities of 82% to 98% and 72% to 94%, respectively. For the XE-2100, abnormal patterns in the DIFF, WBC/BASO, and RET-EXT scatter-plots, and pseudoeosinophilia and other abnormal haematological variables have been described, and multivariate diagnostic models have been designed with overall sensitivities and specificities of 86% to 97% and 81% to 98%, respectively. The accuracy for malaria diagnosis may vary according to species, parasite load, immunity and clinical context where the method is applied. Future developments in new haematology analysers such as considerably simplified, robust and inexpensive devices for malaria detection fitted with an automatically generated alert could improve the detection capacity of these instruments and potentially expand their clinical utility in malaria diagnosis. PMID:21118557

  16. Analysis of polarization radar returns from ice clouds

    NASA Astrophysics Data System (ADS)

    Battaglia, A.; Sturniolo, O.; Prodi, F.

    Using a modified T-matrix code, some polarimetric single-scattering radar parameters (Z_h,v, LDR_h,v, ρ_hv, Z_DR and δ_hv) from populations of ice crystals in ice phase at 94 GHz, modeled with axisymmetric prolate and oblate spheroidal shapes for a Γ-size distribution with different α parameter (α = 0, 1, 2) and characteristic dimension L_m varying from 0.1 to 1.8 mm, have been computed. Some of the results for different radar elevation angles and different orientation distributions for fixed water content are shown. Deeper analysis has been carried out for pure extensive radar polarimetric variables; all of them are strongly dependent on the shapes (characterised by the aspect ratio), the canting angle and the radar elevation angle. Quantities like Z_DR or δ_hv at side incidence, or LDR_h and ρ_hv at vertical incidence, can be used to investigate the preferred orientation of the particles and, in some cases, their habits. We analyze scatterplots using couples of pure extensive variables. The scatterplots with the most evident clustering properties for the different habits seem to be those in the (Z_DR[χ=0°], δ_hv[χ=0°]), (Z_DR[χ=0°], LDR_h[χ=90°]) and (Z_DR[χ=0°], ρ_hv[χ=90°]) planes. Among these, the most appealing one seems to be that involving the Z_DR and ρ_hv variables. To avoid the problem of having simultaneous measurements with a side- and a vertical-looking radar, we believe that measurements of these two extensive variables using a radar with an elevation angle around 45° can be an effective instrument to identify different habits. In particular, this general idea can be useful for future space-borne polarimetric radars involved in the studies of high ice clouds. It is also believed that these results can be used in the next challenge of developing probabilistic and expert methods for identifying hydrometeor types by W-band radars.

  17. Using empirical Bayes predictors from generalized linear mixed models to test and visualize associations among longitudinal outcomes.

    PubMed

    Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O

    2018-01-01

    Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and are observable and hence, plotable. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from a MGLMM, provide a good approximation and visual representation of these latent association analyses using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients that were determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing outcomes that are interrelated with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage results in different mean and covariance parameter estimates from the maximum likelihood estimates generated by a MGLMM. The potential for erroneous inference from using results from these separate models increases as the magnitude of the association among the outcomes increases. Thus if computable, scatterplots of the conditionally independent empirical Bayes predictors from a MGLMM are always preferable to scatterplots of empirical Bayes predictors generated by separate models, unless the true association between outcomes is zero.

  18. Preliminary Geologic/spectral Analysis of LANDSAT-4 Thematic Mapper Data, Wind River/bighorn Basin Area, Wyoming

    NASA Technical Reports Server (NTRS)

    Lang, H. R.; Conel, J. E.; Paylor, E. D.

    1984-01-01

    A LIDQA evaluation for geologic applications of a LANDSAT TM scene covering the Wind River/Bighorn Basin area, Wyoming, is examined. This involves a quantitative assessment of data quality including spatial and spectral characteristics. Analysis is concentrated on the 6 visible, near infrared, and short wavelength infrared bands. Preliminary analysis demonstrates that: (1) principal component images derived from the correlation matrix provide the most useful geologic information. To extract surface spectral reflectance, the TM radiance data must be calibrated. Scatterplots demonstrate that TM data can be calibrated and sensor response is essentially linear. Low instrumental offset and gain settings result in spectral data that do not utilize the full dynamic range of the TM system.

  19. EDENx

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steed, Chad Allen

    EDENx is a multivariate data visualization tool that allows interactive, user-driven analysis of large-scale data sets with high dimensionality. EDENx builds on our earlier system, called EDEN, to enable analysis of more dimensions and larger-scale data sets. EDENx provides an initial overview of summary statistics for each variable in the data set under investigation. EDENx allows the user to interact with graphical summary plots of the data to investigate subsets and their statistical associations. These plots include histograms, binned scatterplots, binned parallel coordinate plots, timeline plots, and graphical correlation indicators. From the EDENx interface, a user can select a subsample of interest and launch a more detailed data visualization via the EDEN system. EDENx is best suited for high-level, aggregate analysis tasks, while EDEN is more appropriate for detailed data investigations.

  20. Ocean heat budget analysis on sea surface temperature anomaly in western Indian Ocean during strong-weak Asian summer monsoon

    NASA Astrophysics Data System (ADS)

    Fathrio, Ibnu; Manda, Atsuyoshi; Iizuka, Satoshi; Kodama, Yasu-Masa; Ishida, Sachinobu

    2018-05-01

    This study presents an ocean heat budget analysis of sea surface temperature (SST) anomalies during strong and weak Asian summer (southwest) monsoons. As discussed in previous studies, there is a close relationship between variations of the Asian summer monsoon and SST anomalies in the western Indian Ocean. In this study we used an ocean heat budget analysis to elucidate the dominant mechanism responsible for generating SST anomalies during weak and strong boreal summer monsoons. Our results showed that ocean advection plays a more important role in initiating SST anomalies than atmospheric processes (surface heat flux). Scatterplot analysis showed that vertical advection initiated SST anomalies in the western Arabian Sea and southwestern Indian Ocean, while zonal advection initiated SST anomalies in the western equatorial Indian Ocean.

  1. Approximating scatterplots of large datasets using distribution splats

    NASA Astrophysics Data System (ADS)

    Camuto, Matthew; Crawfis, Roger; Becker, Barry G.

    2000-02-01

    Many situations exist where the plotting of large data sets with categorical attributes is desired in a 3D coordinate system. For example, a marketing company may conduct a survey involving one million subjects and then plot people's favorite car type against their weight, height, and annual income. Scatter point plotting, in which each point is individually plotted at its corresponding Cartesian location using a defined primitive, is usually used to render a plot of this type. If the dependent variable is continuous, we can discretize the 3D space into bins or voxels and retain the average value of all records falling within each voxel. Previous work employed volume rendering techniques, in particular splatting, to represent this aggregated data by mapping each average value to a representative color.
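
    The voxel-aggregation step described here is a few lines of NumPy. A sketch with an illustrative grid size and variables:

        import numpy as np

        rng = np.random.default_rng(12)
        pts = rng.uniform(0, 1, size=(1_000_000, 3))       # weight, height, income (scaled)
        val = pts[:, 0] + rng.standard_normal(1_000_000)   # continuous dependent variable

        g = 32                                             # 32^3 voxel grid
        idx = np.minimum((pts * g).astype(int), g - 1)
        flat = np.ravel_multi_index(idx.T, (g, g, g))
        sums = np.bincount(flat, weights=val, minlength=g**3)
        counts = np.bincount(flat, minlength=g**3)
        mean = np.divide(sums, counts, out=np.zeros(g**3), where=counts > 0)
        print(mean.reshape(g, g, g)[0, 0, :5])             # one "splat" value per voxel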

  2. Moving from Descriptive to Causal Analytics: Case Study of the Health Indicators Warehouse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schryver, Jack C.; Shankar, Mallikarjun; Xu, Songhua

    The KDD community has described a multitude of methods for knowledge discovery on large datasets. We consider some of these methods and integrate them into an analyst's workflow that proceeds from the data-centric descriptive level to the model-centric causal level. Examples of the workflow are shown for the Health Indicators Warehouse, which is a public database for community health information that is a potent resource for conducting data science on a medium scale. We demonstrate the potential of HIW as a source of serious visual analytics efforts by showing correlation matrix visualizations, multivariate outlier analysis, multiple linear regression of Medicare costs, and scatterplot matrices for a broad set of health indicators. We conclude by sketching the first steps toward a causal dependence hypothesis.

  3. Qualitative human body composition analysis assessed with bioelectrical impedance.

    PubMed

    Talluri, T

    1998-12-01

    Body composition analysis generally aims at quantitative estimates of fat mass, which are inadequate to assess nutritional states; these, on the other hand, are well defined by the proportion of extracellular mass to body cell mass (ECM/BCM). Direct measures performed with phase-sensitive bioelectrical impedance analyzers can be used to define the current distribution in normal and abnormal populations. The phase angle and reactance nomogram directly reflects the ECM/BCM proportions, and body impedance analysis (BIA) is also validated to estimate the individual content of body cell mass (BCM). A new body cell mass index (BCMI), obtained by dividing the weight of BCM in kilograms by the body surface in square meters, is compared with the scatterplot distribution of phase angle and reactance values obtained from controls and patients, and proposed as a qualitative approach to identify abnormal ECM/BCM ratios and nutritional states.

  4. Visualizing tumor evolution with the fishplot package for R.

    PubMed

    Miller, Christopher A; McMichael, Joshua; Dang, Ha X; Maher, Christopher A; Ding, Li; Ley, Timothy J; Mardis, Elaine R; Wilson, Richard K

    2016-11-07

    Massively-parallel sequencing at depth is now enabling tumor heterogeneity and evolution to be characterized in unprecedented detail. Tracking these changes in clonal architecture often provides insight into therapeutic response and resistance. In complex cases involving multiple timepoints, standard visualizations, such as scatterplots, can be difficult to interpret. Current data visualization methods are also typically manual and laborious, and often only approximate subclonal fractions. We have developed an R package that accurately and intuitively displays changes in clonal structure over time. It requires simple input data and produces illustrative and easy-to-interpret graphs suitable for diagnosis, presentation, and publication. The simplicity, power, and flexibility of this tool make it valuable for visualizing tumor evolution, and it has potential utility in both research and clinical settings. The fishplot package is available at https://github.com/chrisamiller/fishplot .

  5. Preterm infant thermal care: differing thermal environments produced by air versus skin servo-control incubators.

    PubMed

    Thomas, K A; Burr, R

    1999-06-01

    Incubator thermal environments produced by skin versus air servo-control were compared. Infant abdominal skin and incubator air temperatures were recorded from 18 infants in skin servo-control and 14 infants in air servo-control (26- to 29-week gestational age, 14 +/- 2 days postnatal age) for 24 hours. Differences in incubator and infant temperature, neutral thermal environment (NTE) maintenance, and infant and incubator circadian rhythm were examined using analysis of variance and scatterplots. Skin servo-control resulted in more variable air temperature, yet more stable infant temperature, and more time within the NTE. Circadian rhythm of both infant and incubator temperature differed by control mode and the relationship between incubator and infant temperature rhythms was a function of control mode. The differences between incubator control modes extend beyond temperature stability and maintenance of NTE. Circadian rhythm of incubator and infant temperatures is influenced by incubator control.

  6. Uncertainty and sensitivity analysis for two-phase flow in the vicinity of the repository in the 1996 performance assessment for the Waste Isolation Pilot Plant: Undisturbed conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HELTON,JON CRAIG; BEAN,J.E.; ECONOMY,K.

    2000-05-19

    Uncertainty and sensitivity analysis results obtained in the 1996 performance assessment for the Waste Isolation Pilot Plant are presented for two-phase flow in the vicinity of the repository under undisturbed conditions. Techniques based on Latin hypercube sampling, examination of scatterplots, stepwise regression analysis, partial correlation analysis and rank transformation are used to investigate brine inflow, gas generation, repository pressure, brine saturation, and brine and gas outflow. Of the variables under study, repository pressure is potentially the most important due to its influence on spallings and direct brine releases, with the uncertainty in its value being dominated by the extent to which the microbial degradation of cellulose takes place, the rate at which the corrosion of steel takes place, and the amount of brine that drains from the surrounding disturbed rock zone into the repository.
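
    The sampling and rank-transformation steps can be sketched with SciPy's quasi-Monte Carlo module; the three-input model below is a stand-in, not the WIPP two-phase flow model:

        import numpy as np
        from scipy.stats import qmc, spearmanr

        sampler = qmc.LatinHypercube(d=3, seed=13)
        x = sampler.random(n=300)                 # inputs already scaled to [0, 1]

        rng = np.random.default_rng(13)
        y = 2.0 * x[:, 0] + np.exp(2.0 * x[:, 1]) + 0.1 * rng.standard_normal(300)

        for j in range(3):                        # rank (Spearman) sensitivity per input
            rho, p = spearmanr(x[:, j], y)
            print(f"input {j}: rank correlation {rho:+.2f} (p={p:.1e})")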

  7. Classification of pollen species using autofluorescence image analysis.

    PubMed

    Mitsumoto, Kotaro; Yabusaki, Katsumi; Aoyagi, Hideki

    2009-01-01

    A new method to classify pollen species was developed by monitoring autofluorescence images of pollen grains. The pollens of nine species were selected, and their autofluorescence images were captured by a microscope equipped with a digital camera. The pollen size and the ratio of the blue to red pollen autofluorescence spectra (the B/R ratio) were calculated by image processing. The B/R ratios and pollen size varied among the species. Furthermore, the scatter-plot of pollen size versus the B/R ratio showed that pollen could be classified to the species level using both parameters. The pollen size and B/R ratio were confirmed by means of particle flow image analysis and the fluorescence spectra, respectively. These results suggest that a flow system capable of measuring both scattered light and the autofluorescence of particles could classify and count pollen grains in real time.
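
    Extracting the two classification features from an image is straightforward. A toy sketch with a synthetic grain (thresholds and geometry are illustrative, not the authors' pipeline):

        import numpy as np

        rng = np.random.default_rng(14)
        img = np.zeros((64, 64, 3))                         # toy RGB autofluorescence image
        yy, xx = np.ogrid[:64, :64]
        grain = (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2   # one circular grain mask
        img[grain] = [0.6, 0.3, 0.2] + 0.05 * rng.random(3)

        size_px = int(grain.sum())                              # grain area in pixels
        b_over_r = img[grain, 2].mean() / img[grain, 0].mean()  # blue-to-red ratio
        print(size_px, round(b_over_r, 2))   # one (size, B/R) point of the scatterplot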

  8. OpinionSeer: interactive visualization of hotel customer feedback.

    PubMed

    Wu, Yingcai; Wei, Furu; Liu, Shixia; Au, Norman; Cui, Weiwei; Zhou, Hong; Qu, Huamin

    2010-01-01

    The rapid development of Web technology has resulted in an increasing number of hotel customers sharing their opinions on the hotel services. Effective visual analysis of online customer opinions is needed, as it has a significant impact on building a successful business. In this paper, we present OpinionSeer, an interactive visualization system that could visually analyze a large collection of online hotel customer reviews. The system is built on a new visualization-centric opinion mining technique that considers uncertainty for faithfully modeling and analyzing customer opinions. A new visual representation is developed to convey customer opinions by augmenting well-established scatterplots and radial visualization. To provide multiple-level exploration, we introduce subjective logic to handle and organize subjective opinions with degrees of uncertainty. Several case studies illustrate the effectiveness and usefulness of OpinionSeer on analyzing relationships among multiple data dimensions and comparing opinions of different groups. Aside from data on hotel customer feedback, OpinionSeer could also be applied to visually analyze customer opinions on other products or services.

  9. Cholinesterases in Gambusia yucatana: Biochemical Characterization and its Relationship with Sex and Total Length.

    PubMed

    Rodríguez-Fuentes, Gabriela; Marín-López, Valeria; Hernández-Márquez, Esperanza

    2016-12-01

    Since several reports have indicated that cholinesterase (ChE) type and distribution are species specific, and that in some species there is a relationship among gender, size, and ChE activities, characterization has been suggested. The aim of the present study was to characterize the ChE present in the head and muscle of Gambusia yucatana (using selective substrates and inhibitors) and to examine its relationship with total length and gender. Results indicated that the ChE present in G. yucatana is an acetylcholinesterase (AChE) with high sensitivity to BW284C51 and an atypically smaller Km with butyrylthiocholine. Scatterplots indicated that there is no linear relationship between total length and AChE in male or female wild mosquitofish. There were no sex differences in AChE activities. Results also indicated significant differences between a single collection site in the Yucatan peninsula and depurated organisms. This study emphasizes the importance of characterizing ChE before its use in biomonitoring.

  10. Validation of the human activity profile questionnaire as a measure of physical activity levels in older community-dwelling women.

    PubMed

    Bastone, Alessandra de Carvalho; Moreira, Bruno de Souza; Vieira, Renata Alvarenga; Kirkwood, Renata Noce; Dias, João Marcos Domingues; Dias, Rosângela Corrêa

    2014-07-01

    The purpose of this study was to assess the validity of the Human Activity Profile (HAP) by comparing scores with accelerometer data and by objectively testing its cutoff points. This study included 120 older women (age 60-90 years). Average daily time spent in sedentary, moderate, and hard activity; counts; number of steps; and energy expenditure were measured using an accelerometer. Spearman rank order correlations were used to evaluate the correlation between the HAP scores and accelerometer variables. Significant relationships were detected (rho = .47-.75, p < .001), indicating that the HAP estimates physical activity at a group level well; however, scatterplots showed individual errors. Receiver operating characteristic curves were constructed to determine HAP cutoff points on the basis of physical activity level recommendations, and the cutoff points found were similar to the original HAP cutoff points. The HAP is a useful indicator of physical activity levels in older women.

  11. Local regression type methods applied to the study of geophysics and high frequency financial data

    NASA Astrophysics Data System (ADS)

    Mariani, M. C.; Basu, K.

    2014-09-01

    In this work we applied locally weighted scatterplot smoothing techniques (Lowess/Loess) to geophysical and high-frequency financial data. We first analyze and apply this technique to the California earthquake geological data. A spatial analysis was performed to show that the estimation of the earthquake magnitude at a fixed location is accurate to within a relative error of 0.01%. We also applied the same method to a high-frequency data set arising in the financial sector and obtained similarly satisfactory results. The application of this approach to the two different data sets demonstrates that the overall method is accurate and efficient, and that the Lowess approach is much more desirable than the Loess method. Previous works studied these data with time series analysis; in this paper our local regression models perform a spatial analysis for the geophysics data, providing different information. For the high-frequency data, our models estimate the curve of best fit, where data are dependent on time.
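
    For reference, a minimal Lowess sketch on synthetic data, assuming statsmodels is available; the geophysical and financial datasets used in the paper are not reproduced here.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt
    from statsmodels.nonparametric.smoothers_lowess import lowess

    rng = np.random.default_rng(1)
    x = np.sort(rng.uniform(0, 10, 300))
    y = np.sin(x) + rng.normal(0, 0.3, 300)

    # frac controls the span of the local weighting window (smaller = wigglier).
    smoothed = lowess(y, x, frac=0.2)

    plt.scatter(x, y, s=8, alpha=0.4, label="data")
    plt.plot(smoothed[:, 0], smoothed[:, 1], color="red", label="Lowess fit")
    plt.legend()
    plt.show()
    ```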

  12. Atmospheric correction of SeaWiFS imagery for turbid coastal and inland waters.

    PubMed

    Ruddick, K G; Ovidio, F; Rijkeboer, M

    2000-02-20

    The standard SeaWiFS atmospheric correction algorithm, designed for open ocean water, has been extended for use over turbid coastal and inland waters. Failure of the standard algorithm over turbid waters can be attributed to invalid assumptions of zero water-leaving radiance for the near-infrared bands at 765 and 865 nm. In the present study these assumptions are replaced by the assumptions of spatial homogeneity of the 765:865-nm ratios for aerosol reflectance and for water-leaving reflectance. These two ratios are imposed as calibration parameters after inspection of the Rayleigh-corrected reflectance scatterplot. The performance of the new algorithm is demonstrated for imagery of Belgian coastal waters and yields physically realistic water-leaving radiance spectra. A preliminary comparison with in situ radiance spectra for the Dutch Lake Markermeer shows significant improvement over the standard atmospheric correction algorithm. An analysis is made of the sensitivity of results to the choice of calibration parameters, and perspectives for application of the method to other sensors are briefly discussed.
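
    The homogeneity assumptions translate into a two-band linear system that can be solved per pixel. A minimal sketch under those stated assumptions, with hypothetical reflectance values and calibration ratios (eps for aerosol, alpha for water-leaving reflectance):

    ```python
    import numpy as np

    def split_nir(rho_rc_765, rho_rc_865, eps, alpha):
        """Solve the per-pixel two-band system
            rho_rc(765) = eps * rho_a(865) + alpha * rho_w(865)
            rho_rc(865) = rho_a(865) + rho_w(865)
        where eps and alpha are the spatially constant 765:865 ratios for
        aerosol and water-leaving reflectance, respectively."""
        rho_a_865 = (rho_rc_765 - alpha * rho_rc_865) / (eps - alpha)
        rho_w_865 = rho_rc_865 - rho_a_865
        return rho_a_865, rho_w_865

    # Hypothetical pixel values and calibration ratios (not the paper's numbers).
    rho_a, rho_w = split_nir(np.array([0.012]), np.array([0.010]), eps=1.1, alpha=1.9)
    print(rho_a, rho_w)
    ```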

  13. Hardiness as a predictor of mental health and well-being of Australian army reservists on and after stability operations.

    PubMed

    Orme, Geoffrey J; Kehoe, E James

    2014-04-01

    This study tested whether cognitive hardiness moderates the adverse effects of deployment-related stressors on health and well-being of soldiers on short-tour (4-7 months), peacekeeping operations. Australian Army reservists (N = 448) were surveyed at the start, end, and up to 24 months after serving as peacekeepers in Timor-Leste or the Solomon Islands. They retained sound mental health throughout (Kessler 10, Post-Traumatic Checklist-Civilian, Depression Anxiety Stress Scale 42). Ratings of either traumatic or nontraumatic stress were low. Despite range restrictions, scores on the Cognitive Hardiness Scale moderated the relationship between deployment stressors and a composite measure of psychological distress. Scatterplots revealed an asymmetric pattern for hardiness scores and measures of psychological distress. When hardiness scores were low, psychological distress scores were widely dispersed. However, when hardiness scores were higher, psychological distress scores became concentrated at a uniformly low level. Reprint & Copyright © 2014 Association of Military Surgeons of the U.S.

  14. Principal component analysis and neurocomputing-based models for total ozone concentration over different urban regions of India

    NASA Astrophysics Data System (ADS)

    Chattopadhyay, Goutami; Chattopadhyay, Surajit; Chakraborthy, Parthasarathi

    2012-07-01

    The present study deals with daily total ozone concentration time series over four metro cities of India, namely Kolkata, Mumbai, Chennai, and New Delhi, in a multivariate environment. Using the Kaiser-Meyer-Olkin measure, it is established that the data set under consideration is suitable for principal component analysis. Subsequently, by introducing the rotated component matrix for the principal components, the predictors suitable for generating an artificial neural network (ANN) for daily total ozone prediction are identified; multicollinearity is removed in this way. ANN models in the form of multilayer perceptrons trained through backpropagation learning are generated for all of the study zones, and the model outcomes are assessed statistically. Measuring various statistics such as Pearson correlation coefficients, Willmott's indices, percentage errors of prediction, and mean absolute errors, it is observed that for Mumbai and Kolkata the proposed ANN model generates very good predictions. The results are supported by the linearly distributed coordinates in the scatterplots.
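
    A rough sketch of a PCA-then-ANN pipeline of this kind, with random stand-in data; note that scikit-learn's PCA omits the varimax-style rotation of the component matrix that the authors apply.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(2)
    X = rng.normal(size=(500, 8))                           # hypothetical daily predictors
    y = X[:, 0] - 0.5 * X[:, 2] + rng.normal(0, 0.2, 500)   # stand-in ozone series

    model = make_pipeline(
        StandardScaler(),
        PCA(n_components=4),   # orthogonal components remove multicollinearity
        MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
    )
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model.fit(X_tr, y_tr)
    print("held-out Pearson r:", np.corrcoef(model.predict(X_te), y_te)[0, 1])
    ```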

  15. Characterization of cocoa butter and cocoa butter equivalents by bulk and molecular carbon isotope analyses: implications for vegetable fat quantification in chocolate.

    PubMed

    Spangenberg, J E; Dionisi, F

    2001-09-01

    The fatty acids from cocoa butters of different origins, varieties, and suppliers and a number of cocoa butter equivalents (Illexao 30-61, Illexao 30-71, Illexao 30-96, Choclin, Coberine, Chocosine-Illipé, Chocosine-Shea, Shokao, Akomax, Akonord, and Ertina) were investigated by bulk stable carbon isotope analysis and compound-specific isotope analysis. The interpretation is based on principal component analysis combining the fatty acid concentrations and the bulk and molecular isotopic data. The scatterplot of the first two principal components allowed detection of the addition of vegetable fats to cocoa butters. Enrichment in the heavy carbon isotope ((13)C) of the bulk cocoa butter and of the individual fatty acids is related to mixing with other vegetable fats and possibly to thermally or oxidatively induced degradation during processing (e.g., drying and roasting of the cocoa beans or deodorization of the pressed fat) or storage. The feasibility of the analytical approach for authenticity assessment is discussed.

  16. Foreshock Langmuir waves for unusually constant solar wind conditions: Data and implications for foreshock structure

    NASA Astrophysics Data System (ADS)

    Cairns, Iver H.; Robinson, P. A.; Anderson, Roger R.; Strangeway, R. J.

    1997-10-01

    Plasma wave data are compared with ISEE 1's position in the electron foreshock for an interval with unusually constant (but otherwise typical) solar wind magnetic field and plasma characteristics. For this period, temporal variations in the wave characteristics can be confidently separated from sweeping of the spatially varying foreshock back and forth across the spacecraft. The spacecraft's location, particularly the coordinate Df downstream from the foreshock boundary (often termed DIFF), is calculated by using three shock models and the observed solar wind magnetometer and plasma data. Scatterplots of the wave field versus Df are used to constrain viable shock models, to investigate the observed scatter in the wave fields at constant Df, and to test the theoretical predictions of linear instability theory. The scatterplots confirm the abrupt onset of the foreshock waves near the upstream boundary, the narrow width in Df of the region with high fields, and the relatively slow falloff of the fields at large Df, as seen in earlier studies, but with much smaller statistical scatter. The plots also show an offset of the high-field region from the foreshock boundary. It is shown that an adaptive, time-varying shock model with no free parameters, determined by the observed solar wind data and published shock crossings, is viable but that two alternative models are not. Foreshock wave studies can therefore remotely constrain the bow shock's location. The observed scatter in wave field at constant Df is shown to be real and to correspond to real temporal variations, not to unresolved changes in Df. By comparing the wave data with a linear instability theory based on a published model for the electron beam it is found that the theory can account qualitatively and semiquantitatively for the abrupt onset of the waves near Df=0, for the narrow width and offset of the high-field region, and for the decrease in wave intensity with increasing Df. Quantitative differences between observations and theory remain, including large overprediction of the wave fields and the slower than predicted falloff at large Df of the wave fields. These differences, as well as the unresolved issue of the electron beam speed in the high-field region of the foreshock, are discussed. The intrinsic temporal variability of the wave fields, as well as their overprediction based on homogeneous plasma theory, are indicative of stochastic growth physics, which causes wave growth to be random and varying in sign, rather than secular.

  17. Four types of ensemble coding in data visualizations.

    PubMed

    Szafir, Danielle Albers; Haroz, Steve; Gleicher, Michael; Franconeri, Steven

    2016-01-01

    Ensemble coding supports rapid extraction of visual statistics about distributed visual information. Researchers typically study this ability with the goal of drawing conclusions about how such coding extracts information from natural scenes. Here we argue that a second domain can serve as another strong inspiration for understanding ensemble coding: graphs, maps, and other visual presentations of data. Data visualizations allow observers to leverage their ability to perform visual ensemble statistics on distributions of spatial or featural visual information to estimate actual statistics on data. We survey the types of visual statistical tasks that occur within data visualizations across everyday examples, such as scatterplots, and more specialized images, such as weather maps or depictions of patterns in text. We divide these tasks into four categories: identification of sets of values, summarization across those values, segmentation of collections, and estimation of structure. We point to unanswered questions for each category and give examples of such cross-pollination in the current literature. Increased collaboration between the data visualization and perceptual psychology research communities can inspire new solutions to challenges in visualization while simultaneously exposing unsolved problems in perception research.

  18. Scanning fluorescent microscopy is an alternative for quantitative fluorescent cell analysis.

    PubMed

    Varga, Viktor Sebestyén; Bocsi, József; Sipos, Ferenc; Csendes, Gábor; Tulassay, Zsolt; Molnár, Béla

    2004-07-01

    Fluorescence measurements on cells are performed today with flow cytometry (FCM) and laser scanning cytometry. The scientific community dealing with quantitative cell analysis would benefit from the development of a new digital multichannel and virtual microscopy based scanning fluorescent microscopy technology and from its evaluation on routine standardized fluorescent beads and clinical specimens. We applied a commercial motorized fluorescent microscope system. The scanning was done at 20x (0.5 NA) magnification, on three channels (Rhodamine, FITC, Hoechst). The SFM (scanning fluorescent microscopy) software included the following features: scanning area, exposure time, and channel definition; autofocused scanning; densitometric and morphometric cellular feature determination; gating on scatterplots and frequency histograms; and preparation of galleries of the gated cells. For calibration and standardization, Immuno-Brite beads were used. With the application of shading compensation, the CV of the fluorescence of the beads decreased from 24.3% to 3.9%. Standard JPEG image compression up to a ratio of 1:150 resulted in no significant change. A change of focus influenced the CV significantly only beyond a +/-5 microm error. SFM is a valuable method for the evaluation of fluorescently labeled cells. Copyright 2004 Wiley-Liss, Inc.
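
    Two of the numerical ingredients mentioned here, a shading (flat-field) correction and the coefficient of variation it improves, can be sketched as follows; the shading field and bead intensities are synthetic placeholders.

    ```python
    import numpy as np

    def shading_correct(img, flat):
        """Divide out the illumination (shading) profile, preserving the mean level."""
        return img / flat * flat.mean()

    def cv_percent(values):
        """Coefficient of variation in percent: 100 * std / mean."""
        return 100.0 * values.std() / values.mean()

    rng = np.random.default_rng(8)
    flat = np.outer(np.hanning(64) + 1.0, np.hanning(64) + 1.0)  # invented shading field
    beads = rng.normal(1000.0, 40.0, (64, 64))                   # invented bead intensities
    img = beads * flat / flat.mean()                             # what the camera records

    # CV drops once the shading is divided out.
    print(cv_percent(img.ravel()), cv_percent(shading_correct(img, flat).ravel()))
    ```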

  19. Sequential simulation approach to modeling of multi-seam coal deposits with an application to the assessment of a Louisiana lignite

    USGS Publications Warehouse

    Olea, Ricardo A.; Luppens, James A.

    2012-01-01

    There are multiple ways to characterize uncertainty in the assessment of coal resources, but not all of them are equally satisfactory. Increasingly, the tendency is toward borrowing from the statistical tools developed in the last 50 years for the quantitative assessment of other mineral commodities. Here, we briefly review the most recent of such methods and formulate a procedure for the systematic assessment of multi-seam coal deposits taking into account several geological factors, such as fluctuations in thickness, erosion, oxidation, and bed boundaries. A lignite deposit explored in three stages is used for validating models based on comparing a first set of drill holes against data from infill and development drilling. Results were fully consistent with reality, providing a variety of maps, histograms, and scatterplots characterizing the deposit and associated uncertainty in the assessments. The geostatistical approach was particularly informative in providing a probability distribution modeling deposit wide uncertainty about total resources and a cumulative distribution of coal tonnage as a function of local uncertainty.

  20. ExAtlas: An interactive online tool for meta-analysis of gene expression data.

    PubMed

    Sharov, Alexei A; Schlessinger, David; Ko, Minoru S H

    2015-12-01

    We have developed ExAtlas, an on-line software tool for meta-analysis and visualization of gene expression data. In contrast to existing software tools, ExAtlas compares multi-component data sets and generates results for all combinations (e.g. all gene expression profiles versus all Gene Ontology annotations). ExAtlas handles both users' own data and data extracted semi-automatically from the public repository (GEO/NCBI database). ExAtlas provides a variety of tools for meta-analyses: (1) standard meta-analysis (fixed effects, random effects, z-score, and Fisher's methods); (2) analyses of global correlations between gene expression data sets; (3) gene set enrichment; (4) gene set overlap; (5) gene association by expression profile; (6) gene specificity; and (7) statistical analysis (ANOVA, pairwise comparison, and PCA). ExAtlas produces graphical outputs, including heatmaps, scatter-plots, bar-charts, and three-dimensional images. Some of the most widely used public data sets (e.g. GNF/BioGPS, Gene Ontology, KEGG, GAD phenotypes, BrainScan, ENCODE ChIP-seq, and protein-protein interaction) are pre-loaded and can be used for functional annotations.

  1. Global health inequalities and breast cancer: an impending public health problem for developing countries.

    PubMed

    Igene, Helen

    2008-01-01

    The aim of the study was to provide information on the global health inequality pattern produced by the increasing incidence of breast cancer and its relationship with the health expenditure of developing countries, with emphasis on sub-Saharan Africa. It examines the difference between the health expenditure of developed and developing countries, and how this affects breast cancer incidence and mortality. The data collected from the World Health Organization and World Bank were examined, using bivariate analysis, through scatter-plots and Pearson's product moment correlation coefficient. Multivariate analysis was carried out by multiple regression analysis. National income and health expenditure affect breast cancer incidence, particularly the differences between developed and developing countries. However, these factors do not adequately explain variations in mortality rates. The study reveals the challenge developing countries face in addressing the present and predicted burden of breast cancer, currently characterized by late presentation, inadequate health care systems, and high mortality. Findings from this study contribute to knowledge of the burden of disease in developing countries, especially sub-Saharan Africa, and how that is related to globalization and health inequalities.

  2. Titan's surface from the Cassini RADAR radiometry data during SAR mode

    USGS Publications Warehouse

    Paganelli, F.; Janssen, M.A.; Lopes, R.M.; Stofan, E.; Wall, S.D.; Lorenz, R.D.; Lunine, J.I.; Kirk, R.L.; Roth, L.; Elachi, C.

    2008-01-01

    We present initial results on the calibration and interpretation of the high-resolution radiometry data acquired during the Synthetic Aperture Radar (SAR) mode (SAR-radiometry) of the Cassini Radar Mapper during its first five flybys of Saturn's moon Titan. We construct maps of the brightness temperature at the 2-cm wavelength coincident with SAR swath imaging. A preliminary radiometry calibration shows that brightness temperature in these maps varies from 64 to 89 K. Surface features and physical properties derived from the SAR-radiometry maps and SAR imaging are strongly correlated; in general, we find that surface features with high radar reflectivity are associated with radiometrically cold regions, while surface features with low radar reflectivity correlate with radiometrically warm regions. We examined scatterplots of the normalized radar cross-section σ0 versus brightness temperature, outlining signatures that characterize various terrains and surface features. The results indicate that volume scattering is important in many areas of Titan's surface, particularly Xanadu, while other areas exhibit complex brightness temperature variations consistent with variable slopes or surface material and compositional properties. © 2007.

  3. Uncertainty and sensitivity analysis for two-phase flow in the vicinity of the repository in the 1996 performance assessment for the Waste Isolation Pilot Plant: Disturbed conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HELTON,JON CRAIG; BEAN,J.E.; ECONOMY,K.

    2000-05-22

    Uncertainty and sensitivity analysis results obtained in the 1996 performance assessment (PA) for the Waste Isolation Pilot Plant (WIPP) are presented for two-phase flow in the vicinity of the repository under disturbed conditions resulting from drilling intrusions. Techniques based on Latin hypercube sampling, examination of scatterplots, stepwise regression analysis, partial correlation analysis and rank transformations are used to investigate brine inflow, gas generation, repository pressure, brine saturation, and brine and gas outflow. Of the variables under study, repository pressure and brine flow from the repository to the Culebra Dolomite are potentially the most important in PA for the WIPP. Subsequent to a drilling intrusion, repository pressure was dominated by borehole permeability and was generally below the level (i.e., 8 MPa) that could potentially produce spallings and direct brine releases. Brine flow from the repository to the Culebra Dolomite tended to be small or nonexistent, with its occurrence and size also dominated by borehole permeability.

  4. Geomorphometric comparative analysis of Latin-American volcanoes

    NASA Astrophysics Data System (ADS)

    Camiz, Sergio; Poscolieri, Maurizio; Roverato, Matteo

    2017-07-01

    The geomorphometric classifications of three groups of volcanoes situated in the Andes Cordillera, Central America, and Mexico are performed and compared. Input data are eight local topographic gradients (i.e. elevation differences) obtained by processing each volcano raster ASTER-GDEM data. The pixels of each volcano DEM have been classified into 17 classes through a K-means clustering procedure following principal component analysis of the gradients. The spatial distribution of the classes, representing homogeneous terrain units, is shown on thematic colour maps, where colours are assigned according to mean slope and aspect class values. The interpretation of the geomorphometric classification of the volcanoes is based on the statistics of both gradients and morphometric parameters (slope, aspect and elevation). The latter were used for a comparison of the volcanoes, performed through classes' slope/aspect scatterplots and multidimensional methods. In this paper, we apply the mentioned methodology on 21 volcanoes, randomly chosen from Mexico to Patagonia, to show how it may contribute to detect geomorphological similarities and differences among them. As such, both its descriptive and graphical abilities may be a useful complement to future volcanological studies.

  5. New approaches for calculating Moran's index of spatial autocorrelation.

    PubMed

    Chen, Yanguang

    2013-01-01

    Spatial autocorrelation plays an important role in geographical analysis; however, there is still room for improvement of this method. The formula for Moran's index is complicated, and several basic problems remain to be solved. Therefore, I will reconstruct its mathematical framework using mathematical derivation based on linear algebra and present four simple approaches to calculating Moran's index. Moran's scatterplot will be ameliorated, and new test methods will be proposed. The relationship between the global Moran's index and Geary's coefficient will be discussed from two different vantage points: spatial population and spatial sample. The sphere of applications for both Moran's index and Geary's coefficient will be clarified and defined. One of the theoretical findings is that Moran's index is a characteristic parameter of spatial weight matrices, so the selection of weight functions is very significant for autocorrelation analysis of geographical systems. A case study of 29 Chinese cities in 2000 will be employed to validate the innovatory models and methods. This work is a methodological study, which will simplify the process of autocorrelation analysis. The results of this study will lay the foundation for the scaling analysis of spatial autocorrelation.
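
    For reference, the global Moran's index written as a spatial quadratic form, the formulation the abstract builds on, is short to compute directly; the weights and values below are a toy example.

    ```python
    import numpy as np

    def morans_i(x, W):
        """Global Moran's I as a spatial quadratic form:
        I = (n / S0) * (z' W z) / (z' z), with z = x - mean(x), S0 = sum(W)."""
        z = x - x.mean()
        return len(x) / W.sum() * (z @ W @ z) / (z @ z)

    # Toy example: four areas on a line with rook-contiguity weights.
    W = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    x = np.array([1.0, 2.0, 2.5, 4.0])
    print(morans_i(x, W))   # positive: similar values cluster among neighbours
    ```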

  6. Bispectral infrared forest fire detection and analysis using classification techniques

    NASA Astrophysics Data System (ADS)

    Aranda, Jose M.; Melendez, Juan; de Castro, Antonio J.; Lopez, Fernando

    2004-01-01

    Infrared cameras are well established as a useful tool for fire detection, but their use for quantitative forest fire measurements faces difficulties due to the complex spatial and spectral structure of fires. In this work it is shown that some of these difficulties can be overcome by applying classification techniques, a standard tool for the analysis of satellite multispectral images, to bi-spectral images of fires. Images were acquired by two cameras that operate in the medium infrared (MIR) and thermal infrared (TIR) bands. They provide simultaneous and co-registered images, calibrated in brightness temperatures. The MIR-TIR scatterplot of these images can be used to classify the scene into different fire regions (background, ashes, and several ember and flame regions). It is shown that classification makes it possible to obtain quantitative measurements of physical fire parameters such as rate of spread, ember temperature, and radiated power in the MIR and TIR bands. An estimation of total radiated power and heat release per unit area is also made and compared with values derived from heat of combustion and fuel consumption.

  7. A preliminary analysis of quantifying computer security vulnerability data in "the wild"

    NASA Astrophysics Data System (ADS)

    Farris, Katheryn A.; McNamara, Sean R.; Goldstein, Adam; Cybenko, George

    2016-05-01

    A system of computers, networks and software has some level of vulnerability exposure that puts it at risk to criminal hackers. Presently, most vulnerability research uses data from software vendors, and the National Vulnerability Database (NVD). We propose an alternative path forward through grounding our analysis in data from the operational information security community, i.e. vulnerability data from "the wild". In this paper, we propose a vulnerability data parsing algorithm and an in-depth univariate and multivariate analysis of the vulnerability arrival and deletion process (also referred to as the vulnerability birth-death process). We find that vulnerability arrivals are best characterized by the log-normal distribution and vulnerability deletions are best characterized by the exponential distribution. These distributions can serve as prior probabilities for future Bayesian analysis. We also find that over 22% of the deleted vulnerability data have a rate of zero, and that the arrival vulnerability data is always greater than zero. Finally, we quantify and visualize the dependencies between vulnerability arrivals and deletions through a bivariate scatterplot and statistical observations.
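
    A sketch of the distribution-fitting step with SciPy, on synthetic arrival and deletion samples standing in for the operational feed; the fitted families are the ones the abstract identifies.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    arrivals = rng.lognormal(mean=1.0, sigma=0.6, size=500)   # strictly positive
    deletions = rng.exponential(scale=2.0, size=500)

    # Fit candidate distributions and check goodness of fit with a KS test.
    s, loc, scale = stats.lognorm.fit(arrivals, floc=0)
    print("lognormal KS:", stats.kstest(arrivals, "lognorm", args=(s, loc, scale)))

    loc_e, scale_e = stats.expon.fit(deletions, floc=0)
    print("exponential KS:", stats.kstest(deletions, "expon", args=(loc_e, scale_e)))
    ```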

  8. An analysis of wildfire frequency and burned area relationships with human pressure and climate gradients in the context of fire regime

    NASA Astrophysics Data System (ADS)

    Jiménez-Ruano, Adrián; Rodrigues Mimbrero, Marcos; de la Riva Fernández, Juan

    2017-04-01

    Understanding fire regime is a crucial step towards achieving a better knowledge of the wildfire phenomenon. This study proposes a method for the analysis of fire regime based on multidimensional scatterplots (MDS). MDS are a visual approach that allows direct comparison among several variables and fire regime features, so that we are able to unravel spatial patterns and relationships within the region of analysis. Our analysis is conducted in Spain, one of the most fire-affected areas within the Mediterranean region. Specifically, the Spanish territory has been split into three regions - Northwest, Hinterland and Mediterranean - considered as representative fire regime zones according to MAGRAMA (Spanish Ministry of Agriculture, Environment and Food). The main goal is to identify key relationships of fire frequency and burnt area, two of the most common fire regime features, with socioeconomic activity and climate. In this way we are able to better characterize fire activity within each fire region. Fire data for the period 1974-2010 were retrieved from the General Statistics of Forest Fires database (EGIF). Specifically, fire frequency and burnt area were examined for each region and fire season (summer and winter). Socioeconomic activity was defined in terms of human pressure on wildlands, i.e. the presence and intensity of anthropogenic activity near wildland or forest areas. Human pressure was built from GIS spatial information about land use (wildland-agriculture and wildland-urban interfaces) and demographic potential. Climate variables (average maximum temperature and annual precipitation) were extracted from the MOTEDAS (Monthly Temperature Dataset of Spain) and MOPREDAS (Monthly Precipitation Dataset of Spain) datasets and later reclassified into ten categories. All these data were resampled to fit the 10x10 km grid used as the spatial reference for fire data. Climate and socioeconomic variables were then explored by means of MDS to find the extent to which fire frequency and burnt areas are controlled by environmental factors, human factors, or both. Results reveal a noticeable link between fire frequency and human activity, especially in the Northwest area during winter. On the other hand, in the Hinterland and Mediterranean regions, human and climate factors 'work' together in terms of their relationship with fire activity, with the concurrence of high human pressure and favourable climate conditions being the main driver. In turn, burned area shows a similar behaviour except in the Hinterland region, where fire-affected area depends mostly on climate factors. Overall, we can conclude that the visual analysis of multidimensional scatterplots is a powerful tool that facilitates the characterization and investigation of fire regimes.

  9. Foreshock Langmuir Waves for Unusually Constant Solar Wind Conditions: Data and Implications for Foreshock Structure

    NASA Technical Reports Server (NTRS)

    Cairns, Iver H.; Robinson, P. A.; Anderson, Roger R.; Strangeway, R. J.

    1997-01-01

    Plasma wave data are compared with ISEE 1's position in the electron foreshock for an interval with unusually constant (but otherwise typical) solar wind magnetic field and plasma characteristics. For this period, temporal variations in the wave characteristics can be confidently separated from sweeping of the spatially varying foreshock back and forth across the spacecraft. The spacecraft's location, particularly the coordinate D(sub f) downstream from the foreshock boundary (often termed DIFF), is calculated by using three shock models and the observed solar wind magnetometer and plasma data. Scatterplots of the wave field versus D(sub f) are used to constrain viable shock models, to investigate the observed scatter in the wave fields at constant D(sub f), and to test the theoretical predictions of linear instability theory. The scatterplots confirm the abrupt onset of the foreshock waves near the upstream boundary, the narrow width in D(sub f) of the region with high fields, and the relatively slow falloff of the fields at large D(sub f), as seen in earlier studies, but with much smaller statistical scatter. The plots also show an offset of the high-field region from the foreshock boundary. It is shown that an adaptive, time-varying shock model with no free parameters, determined by the observed solar wind data and published shock crossings, is viable but that two alternative models are not. Foreshock wave studies can therefore remotely constrain the bow shock's location. The observed scatter in wave field at constant D(sub f) is shown to be real and to correspond to real temporal variations, not to unresolved changes in D(sub f). By comparing the wave data with a linear instability theory based on a published model for the electron beam it is found that the theory can account qualitatively and semiquantitatively for the abrupt onset of the waves near D(sub f) = 0, for the narrow width and offset of the high-field region, and for the decrease in wave intensity with increasing D(sub f). Quantitative differences between observations and theory remain, including large overprediction of the wave fields and the slower than predicted falloff at large D(sub f) of the wave fields. These differences, as well as the unresolved issue of the electron beam speed in the high-field region of the foreshock, are discussed. The intrinsic temporal variability of the wave fields, as well as their overprediction based on homogeneous plasma theory, are indicative of stochastic growth physics, which causes wave growth to be random and varying in sign, rather than secular.

  10. Sampling and sensitivity analyses tools (SaSAT) for computational modelling

    PubMed Central

    Hoare, Alexander; Regan, David G; Wilson, David P

    2008-01-01

    SaSAT (Sampling and Sensitivity Analysis Tools) is a user-friendly software package for applying uncertainty and sensitivity analyses to mathematical and computational models of arbitrary complexity and context. The toolbox is built in Matlab®, a numerical mathematical software package, and utilises algorithms contained in the Matlab® Statistics Toolbox. However, Matlab® is not required to use SaSAT as the software package is provided as an executable file with all the necessary supplementary files. The SaSAT package is also designed to work seamlessly with Microsoft Excel but no functionality is forfeited if that software is not available. A comprehensive suite of tools is provided to enable the following tasks to be easily performed: efficient and equitable sampling of parameter space by various methodologies; calculation of correlation coefficients; regression analysis; factor prioritisation; and graphical output of results, including response surfaces, tornado plots, and scatterplots. Use of SaSAT is exemplified by application to a simple epidemic model. To our knowledge, a number of the methods available in SaSAT for performing sensitivity analyses have not previously been used in epidemiological modelling and their usefulness in this context is demonstrated. PMID:18304361
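
    Among the outputs SaSAT reports are correlation coefficients computed from Latin hypercube samples; the sketch below implements one common such measure, the partial rank correlation coefficient, from first principles (my construction on synthetic data, not SaSAT's own code).

    ```python
    import numpy as np
    from scipy.stats import rankdata

    def prcc(X, y):
        """Partial rank correlation of each column of X with y: rank-transform
        everything, partial out the other parameters by least squares, then
        correlate the residuals (X: n_samples x n_params)."""
        R = np.column_stack([rankdata(c) for c in X.T])
        ry = rankdata(y)
        coeffs = []
        for j in range(R.shape[1]):
            A = np.column_stack([np.ones(len(ry)), np.delete(R, j, axis=1)])
            res_x = R[:, j] - A @ np.linalg.lstsq(A, R[:, j], rcond=None)[0]
            res_y = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
            coeffs.append(np.corrcoef(res_x, res_y)[0, 1])
        return np.array(coeffs)

    # Hypothetical model: output depends strongly on parameter 0, weakly on 2.
    rng = np.random.default_rng(0)
    X = rng.uniform(size=(200, 3))
    y = 5 * X[:, 0] + X[:, 2] ** 2 + rng.normal(0, 0.1, 200)
    print(prcc(X, y))
    ```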

  11. Characteristic of Noise-induced Hearing Loss among Workers in Construction Industries

    NASA Astrophysics Data System (ADS)

    Naadia Mazlan, Ain; Yahya, Khairulzan; Haron, Zaiton; Amsharija Mohamed, Nik; Rasib, Edrin Nazri Abdul; Jamaludin, Nizam; Darus, Nadirah

    2018-03-01

    Noise-induced hearing loss (NIHL) is among the most common occupational diseases in industry. This paper investigates NIHL in construction-related industries in Malaysia, with particular emphasis on its relation to risk factors. The objectives of this research were to (1) quantify the prevalence of NIHL in construction-related industries, and (2) assess the relationship between hearing loss and risk factors and its characteristics. The study was conducted using 110 NIHL compensation records collected from the Social Security Organisation (SOCSO), Malaysia. Risk factors, namely area noise, age, temperature, smoking habit, hobby, diabetes, and cardiovascular disease, were identified and analysed. Results showed no direct relationship between area noise and hearing impairment, and only a weak relationship between age and hearing impairment. The ranges for area noise and age were 70 to 140 dB(A) and 20 to 70 years, respectively. The other risk factors were classified as categorical data and analysed using the frequency method. Grade of impairment does not depend solely on area noise but also on its combination with age and other risk factors. Characteristics of NIHL prevalent in construction-related industries were presented using scatterplots and can serve as a reference for future hazard control on site.

  12. Prevalence and Losses in Quality-Adjusted Life Years of Child Health Conditions: A Burden of Disease Analysis.

    PubMed

    Craig, Benjamin M; Hartman, John D; Owens, Michelle A; Brown, Derek S

    2016-04-01

    To estimate the prevalence and losses in quality-adjusted life years (QALYs) associated with 20 child health conditions. Using data from the 2009-2010 National Survey of Children with Special Health Care Needs, preference weights were applied to 14 functional difficulties to summarize the quality of life burden of 20 health conditions. Among the 14 functional difficulties, "a little trouble with breathing" had the highest prevalence (37.1 %), but amounted to a loss of just 0.16 QALYs from the perspective of US adults. Though less prevalent, "a lot of behavioral problems" and "chronic pain" were associated with the greatest losses (1.86 and 3.43 QALYs). Among the 20 conditions, allergies and asthma were the most prevalent but were associated with the least burden. Muscular dystrophy and cerebral palsy were among the least prevalent and most burdensome. Furthermore, a scatterplot shows the association between condition prevalence and burden. In child health, condition prevalence is negatively associated with quality of life burden from the perspective of US adults. Both should be considered carefully when evaluating the appropriate role for public health prevention and interventions.

  13. What can we learn from the Dutch cannabis coffeeshop system?

    PubMed

    MacCoun, Robert J

    2011-11-01

    To examine the empirical consequences of officially tolerated retail sales of cannabis in the Netherlands, and possible implications for the legalization debate. Available Dutch data on the prevalence and patterns of use, treatment, sanctioning, prices and purity for cannabis dating back to the 1970s are compared to similar indicators in Europe and the United States. The available evidence suggests that the prevalence of cannabis use among Dutch citizens rose and fell as the number of coffeeshops increased and later declined, but only modestly. The coffeeshops do not appear to encourage escalation into heavier use or lengthier using careers, although treatment rates for cannabis are higher than elsewhere in Europe. Scatterplot analyses suggest that Dutch patterns of use are very typical for Europe, and that the 'separation of markets' may indeed have somewhat weakened the link between cannabis use and the use of cocaine or amphetamines. Cannabis consumption in the Netherlands is lower than would be expected in an unrestricted market, perhaps because cannabis prices have remained high due to production-level prohibitions. The Dutch system serves as a nuanced alternative to both full prohibition and full legalization. © 2011 The Author, Addiction © 2011 Society for the Study of Addiction.

  14. Influence of transport and time on blood variables commonly measured for the athlete biological passport.

    PubMed

    Robinson, Neil; Giraud, Sylvain; Schumacher, Yorck Olaf; Saugy, Martial

    2016-02-01

    Some recent studies have characterized the stability of blood variables commonly measured for the Athlete Biological Passport. The aim of this study was to characterize the impact of different shipment conditions on the quality of the results returned by the haematological analyzer. Twenty-two healthy male subjects provided five EDTA tubes each. Four shipment durations (24, 36, 48, 72 h) under refrigerated conditions were tested and compared to a set of samples left in the laboratory, also under refrigerated conditions (control group). All measurements were conducted using two Sysmex XT-2000i analyzers. Haemoglobin concentration, reticulocyte percentage, and OFF-score numerical data were the same for samples analyzed just after collection and after shipment under refrigerated conditions for up to 72 h. Detailed information reported by the differential (DIFF) channel scatterplot of the Sysmex XT-2000i indicated that there were signs of blood deterioration, but these were not of relevance for the variables used in the Athlete Biological Passport. As long as the cold chain is guaranteed, the time delay between the collection and the analysis of blood variables can be extended. Copyright © 2015 John Wiley & Sons, Ltd.

  15. Titan's surface from Cassini RADAR SAR and high resolution radiometry data of the first five flybys

    USGS Publications Warehouse

    Paganelli, F.; Janssen, M.A.; Stiles, B.; West, R.; Lorenz, R.D.; Lunine, J.I.; Wall, S.D.; Callahan, P.; Lopes, R.M.; Stofan, E.; Kirk, R.L.; Johnson, W.T.K.; Roth, L.; Elachi, C.; ,

    2007-01-01

    The first five Titan flybys with Cassini's Synthetic Aperture RADAR (SAR) and radiometer are examined with emphasis on the calibration and interpretation of the high-resolution radiometry data acquired during the SAR mode (SAR-radiometry). Maps of the 2-cm wavelength brightness temperature are obtained coincident with the SAR swath imaging, with spatial resolution approaching 6 km. A preliminary calibration shows that brightness temperature in these maps varies from 64 to 89 K. Surface features and physical properties derived from the SAR-radiometry maps and SAR imaging are strongly correlated; in general, we find that surface features with high radar reflectivity are associated with radiometrically cold regions, while surface features with low radar reflectivity correlate with radiometrically warm regions. We examined scatterplots of the normalized radar cross-section σ0 versus brightness temperature, finding differing signatures that characterize various terrains and surface features. Implications for the physical and compositional properties of these features are discussed. The results indicate that volume scattering is important in many areas of Titan's surface, particularly Xanadu, while other areas exhibit complex brightness temperature variations consistent with variable slopes or surface material and compositional properties. © 2007 Elsevier Inc.

  16. Theory Can Help Structure Regression Models for Projecting Stream Conditions Under Alternative Land Use Scenarios

    NASA Astrophysics Data System (ADS)

    van Sickle, J.; Baker, J.; Herlihy, A.

    2005-05-01

    We built multiple regression models for Ephemeroptera/Plecoptera/Trichoptera (EPT) taxon richness and other indicators of biological condition in streams of the Willamette River Basin, Oregon, USA. The models were used to project the changes in condition that would be expected in all 2nd-4th order streams of the 30,000 sq km basin under alternative scenarios of future land use. In formulating the models, we invoked the theory of limiting factors to express the interactive effects of stream power and watershed land use on EPT richness. The resulting models were parsimonious, and they fit the data in our wedge-shaped scatterplots slightly better than did a naive additive-effects model. Just as theory helped formulate our regression models, the models in turn helped us identify a new research need for the Basin's streams. Our future scenarios project that conversions of agricultural to urban uses may dominate landscape dynamics in the basin over the next 50 years, but our models could not detect any difference between the effects of agricultural and urban development in watersheds on stream biota. This result points to an increased need for understanding how agricultural and urban land uses in the Basin differentially influence stream ecosystems.
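
    One simple way to encode a limiting-factor interaction, as opposed to an additive model, is to fit the minimum of two single-variable effects; the sketch below does this on invented data and is not the authors' exact specification.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def limiting(X, a, b, c, d):
        """Response is governed by whichever factor is currently limiting."""
        x1, x2 = X
        return np.minimum(a + b * x1, c + d * x2)

    rng = np.random.default_rng(4)
    x1 = rng.uniform(0, 1, 300)       # e.g. a stream-power index
    x2 = rng.uniform(0, 1, 300)       # e.g. proportion of natural land cover
    y = np.minimum(5 + 20 * x1, 2 + 25 * x2) + rng.normal(0, 1, 300)

    popt, _ = curve_fit(limiting, (x1, x2), y, p0=[1.0, 10.0, 1.0, 10.0])
    print(popt)   # recovers something near (5, 20, 2, 25)
    ```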

  17. Aerosol Plume Detection Algorithm Based on Image Segmentation of Scanning Atmospheric Lidar Data

    DOE PAGES

    Weekley, R. Andrew; Goodrich, R. Kent; Cornman, Larry B.

    2016-04-06

    An image-processing algorithm has been developed to identify aerosol plumes in scanning lidar backscatter data. The images in this case consist of lidar data in a polar coordinate system. Each full lidar scan is taken as a fixed image in time, and sequences of such scans are considered functions of time. The data are analyzed in both the original backscatter polar coordinate system and a lagged coordinate system. The lagged coordinate system is a scatterplot of two datasets, such as subregions taken from the same lidar scan (spatial delay), or two sequential scans in time (time delay). The lagged coordinate system processing allows for finding and classifying clusters of data. The classification step is important in determining which clusters are valid aerosol plumes and which are from artifacts such as noise, hard targets, or background fields. These cluster classification techniques have skill since both local and global properties are used. Furthermore, more information is available since both the original data and the lag data are used. Performance statistics are presented for a limited set of data processed by the algorithm, where results from the algorithm were compared to subjective truth data identified by a human.
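
    The lagged coordinate idea can be sketched in a few lines: scatter a signal against a delayed copy of itself and cluster the resulting point cloud. DBSCAN stands in here for the paper's more elaborate cluster-classification step, and the signal is synthetic.

    ```python
    import numpy as np
    from sklearn.cluster import DBSCAN

    rng = np.random.default_rng(5)
    background = rng.normal(0.0, 0.1, 500)     # quiescent backscatter
    plume = rng.normal(3.0, 0.3, 100)          # plume-like episode
    signal = np.concatenate([background, plume])

    lag = 1                                    # spatial or temporal delay
    pts = np.column_stack([signal[:-lag], signal[lag:]])   # (x_i, x_{i+lag}) pairs
    labels = DBSCAN(eps=0.3, min_samples=10).fit_predict(pts)
    print("clusters found:", sorted(set(labels) - {-1}))   # background vs plume
    ```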

  18. A New Methodology of Spatial Cross-Correlation Analysis

    PubMed Central

    Chen, Yanguang

    2015-01-01

    Spatial correlation modeling comprises both spatial autocorrelation and spatial cross-correlation processes. The spatial autocorrelation theory has been well-developed. It is necessary to advance the method of spatial cross-correlation analysis to supplement the autocorrelation analysis. This paper presents a set of models and analytical procedures for spatial cross-correlation analysis. By analogy with Moran’s index newly expressed in a spatial quadratic form, a theoretical framework is derived for geographical cross-correlation modeling. First, two sets of spatial cross-correlation coefficients are defined, including a global spatial cross-correlation coefficient and local spatial cross-correlation coefficients. Second, a pair of scatterplots of spatial cross-correlation is proposed, and the plots can be used to visually reveal the causality behind spatial systems. Based on the global cross-correlation coefficient, Pearson’s correlation coefficient can be decomposed into two parts: direct correlation (partial correlation) and indirect correlation (spatial cross-correlation). As an example, the methodology is applied to the relationships between China’s urbanization and economic development to illustrate how to model spatial cross-correlation phenomena. This study is an introduction to developing the theory of spatial cross-correlation, and future geographical spatial analysis might benefit from these models and indexes. PMID:25993120

  19. A new methodology of spatial cross-correlation analysis.

    PubMed

    Chen, Yanguang

    2015-01-01

    Spatial correlation modeling comprises both spatial autocorrelation and spatial cross-correlation processes. The spatial autocorrelation theory has been well-developed. It is necessary to advance the method of spatial cross-correlation analysis to supplement the autocorrelation analysis. This paper presents a set of models and analytical procedures for spatial cross-correlation analysis. By analogy with Moran's index newly expressed in a spatial quadratic form, a theoretical framework is derived for geographical cross-correlation modeling. First, two sets of spatial cross-correlation coefficients are defined, including a global spatial cross-correlation coefficient and local spatial cross-correlation coefficients. Second, a pair of scatterplots of spatial cross-correlation is proposed, and the plots can be used to visually reveal the causality behind spatial systems. Based on the global cross-correlation coefficient, Pearson's correlation coefficient can be decomposed into two parts: direct correlation (partial correlation) and indirect correlation (spatial cross-correlation). As an example, the methodology is applied to the relationships between China's urbanization and economic development to illustrate how to model spatial cross-correlation phenomena. This study is an introduction to developing the theory of spatial cross-correlation, and future geographical spatial analysis might benefit from these models and indexes.

  20. Partitioning degrees of freedom in hierarchical and other richly-parameterized models.

    PubMed

    Cui, Yue; Hodges, James S; Kong, Xiaoxiao; Carlin, Bradley P

    2010-02-01

    Hodges & Sargent (2001) developed a measure of a hierarchical model's complexity, degrees of freedom (DF), that is consistent with definitions for scatterplot smoothers, interpretable in terms of simple models, and that enables control of a fit's complexity by means of a prior distribution on complexity. DF describes complexity of the whole fitted model but in general it is unclear how to allocate DF to individual effects. We give a new definition of DF for arbitrary normal-error linear hierarchical models, consistent with Hodges & Sargent's, that naturally partitions the n observations into DF for individual effects and for error. The new conception of an effect's DF is the ratio of the effect's modeled variance matrix to the total variance matrix. This gives a way to describe the sizes of different parts of a model (e.g., spatial clustering vs. heterogeneity), to place DF-based priors on smoothing parameters, and to describe how a smoothed effect competes with other effects. It also avoids difficulties with the most common definition of DF for residuals. We conclude by comparing DF to the effective number of parameters p(D) of Spiegelhalter et al (2002). Technical appendices and a dataset are available online as supplemental materials.
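
    For a linear smoother yhat = Hy the whole-fit DF is tr(H); the flavor of the partitioning result can be illustrated on a toy ridge-penalized two-effect design, where the component traces add exactly to the total. This is my sketch, not the authors' general definition.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n = 50
    X1 = rng.normal(size=(n, 3))            # design for effect 1 (e.g. clustering)
    X2 = rng.normal(size=(n, 4))            # design for effect 2 (e.g. heterogeneity)
    X = np.hstack([X1, X2])
    lam = np.diag([1.0] * 3 + [5.0] * 4)    # different smoothing per effect

    core = np.linalg.inv(X.T @ X + lam) @ X.T   # maps y to coefficient estimates
    H = X @ core                                # full smoother: yhat = H y
    H1 = X1 @ core[:3]                          # effect 1's contribution to the fit
    H2 = X2 @ core[3:]                          # effect 2's contribution

    # tr(H) is the whole-fit DF; the component traces partition it exactly.
    print(np.trace(H), np.trace(H1) + np.trace(H2))
    ```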

  1. Spatial study of mortality in motorcycle accidents in the State of Pernambuco, Northeastern Brazil.

    PubMed

    Silva, Paul Hindenburg Nobre de Vasconcelos; Lima, Maria Luiza Carvalho de; Moreira, Rafael da Silveira; Souza, Wayner Vieira de; Cabral, Amanda Priscila de Santana

    2011-04-01

    To analyze the spatial distribution of mortality due to motorcycle accidents in the state of Pernambuco, Northeastern Brazil. A population-based ecological study using data on mortality in motorcycle accidents from 01/01/2000 to 31/12/2005. The analysis units were the municipalities. For the spatial distribution analysis, an average mortality rate was calculated, using deaths from motorcycle accidents recorded in the Mortality Information System as the numerator, and the mid-period population as the denominator. Spatial analysis techniques, including smoothing of the mortality coefficient by the local empirical Bayesian method and the Moran scatterplot, were applied to the digital cartographic base of Pernambuco. The average mortality rate for motorcycle accidents in Pernambuco was 3.47 per 100 thousand inhabitants. Of the 185 municipalities, 16 were part of five identified clusters, with average mortality rates ranging from 5.66 to 11.66 per 100 thousand inhabitants, and were considered critical areas. Three clusters are located in the area known as the sertão and two in the agreste of the state. The risk of dying from a motorcycle accident is greater in cluster areas outside the metropolitan axis, and intervention measures should consider the economic, social, and cultural contexts.
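
    A heavily hedged sketch of local empirical Bayes rate smoothing in the Marshall style, one common formulation of the smoother the abstract invokes; the exact variant the authors used may differ.

    ```python
    import numpy as np

    def local_eb_smooth(deaths, pop, W):
        """Marshall-style local empirical Bayes smoothing of rates.
        deaths, pop: length-n arrays; W: binary adjacency including self-links."""
        r = deaths / pop                                   # raw rates
        smoothed = np.empty_like(r)
        for i in range(len(r)):
            nb = W[i] > 0                                  # neighbourhood of area i
            m = deaths[nb].sum() / pop[nb].sum()           # neighbourhood mean rate
            s2 = np.average((r[nb] - m) ** 2, weights=pop[nb])
            a = max(s2 - m / pop[nb].mean(), 0.0)          # estimated prior variance
            c = a / (a + m / pop[i]) if a > 0 else 0.0     # shrinkage weight
            smoothed[i] = m + c * (r[i] - m)               # shrink toward local mean
        return smoothed
    ```

    Rates from small populations get shrunk most strongly toward the neighbourhood mean, which is what stabilizes the map in sparsely populated municipalities.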

  2. New Approaches for Calculating Moran’s Index of Spatial Autocorrelation

    PubMed Central

    Chen, Yanguang

    2013-01-01

    Spatial autocorrelation plays an important role in geographical analysis; however, there is still room for improvement of this method. The formula for Moran’s index is complicated, and several basic problems remain to be solved. Therefore, I will reconstruct its mathematical framework using mathematical derivation based on linear algebra and present four simple approaches to calculating Moran’s index. Moran’s scatterplot will be ameliorated, and new test methods will be proposed. The relationship between the global Moran’s index and Geary’s coefficient will be discussed from two different vantage points: spatial population and spatial sample. The sphere of applications for both Moran’s index and Geary’s coefficient will be clarified and defined. One of the theoretical findings is that Moran’s index is a characteristic parameter of spatial weight matrices, so the selection of weight functions is very significant for autocorrelation analysis of geographical systems. A case study of 29 Chinese cities in 2000 will be employed to validate the innovatory models and methods. This work is a methodological study, which will simplify the process of autocorrelation analysis. The results of this study will lay the foundation for the scaling analysis of spatial autocorrelation. PMID:23874592

  3. Recognition of units in coarse, unconsolidated braided-stream deposits from geophysical log data with principal components analysis

    USGS Publications Warehouse

    Morin, R.H.

    1997-01-01

    Returns from drilling in unconsolidated cobble and sand aquifers commonly do not identify lithologic changes that may be meaningful for hydrogeologic investigations. Vertical resolution of saturated, Quaternary, coarse braided-stream deposits is significantly improved by interpreting natural gamma (G), epithermal neutron (N), and electromagnetically induced resistivity (IR) logs obtained from wells at the Capital Station site in Boise, Idaho. Interpretation of these geophysical logs is simplified because these sediments are derived largely from high-gamma-producing source rocks (granitics of the Boise River drainage), contain few clays, and have undergone little diagenesis. Analysis of G, N, and IR data from these deposits with principal components analysis provides an objective means to determine whether units can be recognized within the braided-stream deposits. In particular, performing principal components analysis on G, N, and IR data from eight wells at Capital Station (1) allows the variable system dimensionality to be reduced from three to two by selecting the two eigenvectors with the greatest variance as axes for principal component scatterplots, (2) generates principal components with interpretable physical meanings, (3) distinguishes sand from cobble-dominated units, and (4) provides a means to distinguish between cobble-dominated units.

  4. On the Relation Between Sunspot Area and Sunspot Number

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.; Hathaway, David H.

    2006-01-01

    Often, the relation between monthly or yearly averages of total sunspot area, A, and sunspot number, R, has been described using the formula A = 16.7 R. Such a simple relation, however, is erroneous. The yearly ratio of A/R has varied between 5.3 in 1964 to 19.7 in 1926, having a mean of 13.1 with a standard deviation of 3.5. For 1875-1976 (corresponding to the Royal Greenwich Observatory timeframe), the yearly ratio of A/R has a mean of 14.1 with a standard deviation of 3.2, and it is found to differ significantly from the mean for 1977-2004 (corresponding to the United States Air Force/National Oceanic and Atmospheric Administration Solar Optical Observing Network timeframe), which equals 9.8 with a standard deviation of 2.1. Scatterplots of yearly values of A versus R are highly correlated for both timeframes and they suggest that a value of R = 100 implies A=1,538 +/- 174 during the first timeframe, but only A=1,076 +/- 123 for the second timeframe. Comparison of the yearly ratios adjusted for same day coverage against yearly ratios using Rome Observatory measures for the interval 1958-1998 indicates that sunspot areas during the second timeframe are inherently too low.
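
    The era dependence of the ratio can be made concrete with a short computation; the yearly values below are invented placeholders, not RGO or SOON measurements.

    ```python
    import numpy as np

    # Invented yearly values standing in for RGO-era and SOON-era measurements.
    R = np.array([30.0, 60.0, 100.0, 150.0])
    A_rgo = np.array([460.0, 905.0, 1540.0, 2300.0])    # 1875-1976-like areas
    A_soon = np.array([310.0, 620.0, 1075.0, 1610.0])   # 1977-2004-like areas

    for label, A in [("RGO era", A_rgo), ("SOON era", A_soon)]:
        ratio = A / R
        print(f"{label}: mean A/R = {ratio.mean():.1f} +/- {ratio.std(ddof=1):.1f}")
    # No single constant (e.g. A = 16.7 R) can describe both eras at once.
    ```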

  5. Modified constraint-induced movement therapy for clients with chronic stroke: interrupted time series (ITS) design.

    PubMed

    Park, JuHyung; Lee, NaYun; Cho, YongHo; Yang, YeongAe

    2015-03-01

    [Purpose] The purpose of this study was to investigate the impact of modified constraint-induced movement therapy on the upper extremity function and daily life of chronic stroke patients. [Subjects and Methods] Modified constraint-induced movement therapy was conducted for 2 stroke patients with hemiplegia. It was performed 5 days a week for 2 weeks, and the participants performed their daily living activities wearing mittens for 6 hours a day, including the 2 hours of the therapy program. The assessment was conducted 5 times in the 3 weeks before and after the intervention. Upper extremity function was measured using the box and block test and a dynamometer, and performance of daily living activities was assessed using the modified Barthel index. The results were analyzed using a scatterplot and linear regression. [Results] The upper extremity functions of both participants improved after the modified constraint-induced movement therapy. Performance of daily living activities by participant 1 showed no change, but the results of participant 2 improved after the intervention. [Conclusion] These results indicate that modified constraint-induced movement therapy is effective at improving the upper extremity function and performance of daily living activities of chronic stroke patients.

  6. Investigation of coherent structures in a superheated jet using decomposition methods

    NASA Astrophysics Data System (ADS)

    Sinha, Avick; Gopalakrishnan, Shivasubramanian; Balasubramanian, Sridhar

    2016-11-01

    A superheated turbulent jet, commonly encountered in many engineering flows, is a complex two-phase mixture of liquid and vapor. The superposition of temporally and spatially evolving coherent vortical motions, known as coherent structures (CS), governs the dynamics of such a jet. Both POD and DMD are employed to analyze these vortical motions, with PIV data used in conjunction with the decomposition methods to analyze the CS in the flow. The experiments were conducted using water discharging into a tank containing homogeneous fluid at ambient conditions. Three inlet pressures were employed in the study, all at a fixed inlet temperature. 90% of the total kinetic energy in the mean flow is contained within the first five modes. The scatterplot for any two POD coefficients predominantly showed a circular distribution, representing a strong connection between the two modes. We speculate that the velocity and vorticity contours of the spatial POD basis functions show the presence of K-H instability in the flow. From DMD, eigenvalues away from the origin are observed for all cases, indicating the presence of a non-oscillatory structure. Spatial structures are also obtained from DMD. The authors are grateful to Confederation of Indian Industry and General Electric India Pvt. Ltd. for partial funding of this project.
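    As a rough illustration of the snapshot-POD step, the sketch below decomposes a mean-subtracted PIV snapshot matrix with an SVD; the data layout and mode count are assumptions, not details taken from the paper:

    ```python
    import numpy as np

    def pod_modes(snapshots, n_modes=5):
        """Snapshot POD via SVD. `snapshots` is (n_points, n_times): each
        column holds one PIV velocity field flattened into a vector."""
        U = snapshots - snapshots.mean(axis=1, keepdims=True)  # remove mean flow
        phi, s, vt = np.linalg.svd(U, full_matrices=False)
        energy = s**2 / np.sum(s**2)         # kinetic-energy fraction per mode
        coeffs = np.diag(s) @ vt             # temporal coefficients a_k(t)
        return phi[:, :n_modes], coeffs[:n_modes], energy[:n_modes]

    # With real data, energy[:5].sum() near 0.9 would match the ~90% figure above.
    ```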

  7. StreamMap: Smooth Dynamic Visualization of High-Density Streaming Points.

    PubMed

    Li, Chenhui; Baciu, George; Han, Yu

    2018-03-01

    Interactive visualization of streaming points for real-time scatterplots and linear blending of correlation patterns is increasingly becoming the dominant mode of visual analytics for both big data and streaming data from active sensors and broadcasting media. To better visualize and interact with inter-stream patterns, it is generally necessary to smooth out gaps or distortions in the streaming data. Previous approaches either animate the points directly or present a sampled static heat-map. We propose a new approach, called StreamMap, to smoothly blend high-density streaming points and create a visual flow that emphasizes the density pattern distributions. In essence, we present three new contributions for the visualization of high-density streaming points. The first contribution is a density-based method called super kernel density estimation that aggregates streaming points using an adaptive kernel to solve the overlapping problem. The second contribution is a robust density morphing algorithm that generates several smooth intermediate frames for a given pair of frames. The third contribution is a trend representation design that can help convey the flow directions of the streaming points. The experimental results on three datasets demonstrate the effectiveness of StreamMap when dynamic visualization and visual analysis of trend patterns on streaming points are required.
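    StreamMap's adaptive "super kernel density estimation" is specific to the paper, but the underlying idea of turning a batch of points into a smooth density field can be sketched with a plain fixed-bandwidth Gaussian KDE:

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    def density_frame(points, grid_x, grid_y):
        """Rasterize one batch of streaming (x, y) points into a density field;
        a fixed-bandwidth Gaussian KDE stands in for the adaptive kernel."""
        kde = gaussian_kde(points.T)                  # points has shape (n, 2)
        gx, gy = np.meshgrid(grid_x, grid_y)
        z = kde(np.vstack([gx.ravel(), gy.ravel()]))
        return z.reshape(gx.shape)

    def blend(frame0, frame1, t):
        """Crude stand-in for density morphing: linearly blend two frames."""
        return (1.0 - t) * frame0 + t * frame1
    ```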

  8. ClustVis: a web tool for visualizing clustering of multivariate data using Principal Component Analysis and heatmap

    PubMed Central

    Metsalu, Tauno; Vilo, Jaak

    2015-01-01

    Principal Component Analysis (PCA) is a widely used method for reducing the dimensionality of high-dimensional data, often followed by visualizing two of the components in a scatterplot. Although widely used, the method lacks an easy-to-use web interface that scientists with little programming skill could use to make plots of their own data. The same applies to creating heatmaps: it is possible to add conditional formatting for Excel cells to show colored heatmaps, but for more advanced features such as clustering and experimental annotations, more sophisticated analysis tools have to be used. We present a web tool called ClustVis that aims to have an intuitive user interface. Users can upload data from a simple delimited text file that can be created in a spreadsheet program. It is possible to modify the data processing methods and the final appearance of the PCA and heatmap plots by using drop-down menus, text boxes, sliders, etc. Appropriate defaults are given to reduce the time needed by the user to specify input parameters. As output, users can download the PCA plot and heatmap in one of the preferred file formats. This web server is freely available at http://biit.cs.ut.ee/clustvis/. PMID:25969447

  9. Data visualization, bar naked: A free tool for creating interactive graphics.

    PubMed

    Weissgerber, Tracey L; Savic, Marko; Winham, Stacey J; Stanisavljevic, Dejana; Garovic, Vesna D; Milic, Natasa M

    2017-12-15

    Although bar graphs are designed for categorical data, they are routinely used to present continuous data in studies that have small sample sizes. This presentation is problematic, as many data distributions can lead to the same bar graph, and the actual data may suggest different conclusions from the summary statistics. To address this problem, many journals have implemented new policies that require authors to show the data distribution. This paper introduces a free, web-based tool for creating an interactive alternative to the bar graph (http://statistika.mfub.bg.ac.rs/interactive-dotplot/). This tool allows authors with no programming expertise to create customized interactive graphics, including univariate scatterplots, box plots, and violin plots, for comparing values of a continuous variable across different study groups. Individual data points may be overlaid on the graphs. Additional features facilitate visualization of subgroups or clusters of non-independent data. A second tool enables authors to create interactive graphics from data obtained with repeated independent experiments (http://statistika.mfub.bg.ac.rs/interactive-repeated-experiments-dotplot/). These tools are designed to encourage exploration and critical evaluation of the data behind the summary statistics and may be valuable for promoting transparency, reproducibility, and open science in basic biomedical research. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.
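    The tool itself is web based, but the kind of graphic it produces, a jittered univariate scatterplot, is easy to sketch (synthetic data; not the authors' implementation):

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    def dotplot(groups, labels):
        """Univariate scatterplot: every raw value is drawn, with horizontal
        jitter so overlapping points in small samples remain visible."""
        rng = np.random.default_rng(0)
        for i, values in enumerate(groups):
            x = i + rng.uniform(-0.08, 0.08, size=len(values))
            plt.plot(x, values, "o", alpha=0.6)
        plt.xticks(range(len(labels)), labels)
        plt.ylabel("outcome")
        plt.show()

    rng = np.random.default_rng(1)
    dotplot([rng.normal(10, 2, 8), rng.normal(12, 2, 8)], ["control", "treated"])
    ```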

  10. Laws of attraction: from perceptual forces to conceptual similarity.

    PubMed

    Ziemkiewicz, Caroline; Kosara, Robert

    2010-01-01

    Many of the pressing questions in information visualization deal with how exactly a user reads a collection of visual marks as information about relationships between entities. Previous research has suggested that people see parts of a visualization as objects, and may metaphorically interpret apparent physical relationships between these objects as suggestive of data relationships. We explored this hypothesis in detail in a series of user experiments. Inspired by the concept of implied dynamics in psychology, we first studied whether perceived gravity acting on a mark in a scatterplot can lead to errors in a participant's recall of the mark's position. The results of this study suggested that such position errors exist, but may be more strongly influenced by attraction between marks. We hypothesized that such apparent attraction may be influenced by elements used to suggest relationship between objects, such as connecting lines, grouping elements, and visual similarity. We further studied what visual elements are most likely to cause this attraction effect, and whether the elements that best predicted attraction errors were also those which suggested conceptual relationships most strongly. Our findings show a correlation between attraction errors and intuitions about relatedness, pointing towards a possible mechanism by which the perception of visual marks becomes an interpretation of data relationships.

  11. Matisse: A Visual Analytics System for Exploring Emotion Trends in Social Media Text Streams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steed, Chad A; Drouhard, Margaret MEG G; Beaver, Justin M

    Dynamically mining textual information streams to gain real-time situational awareness is especially challenging with social media systems, where throughput and velocity properties push the limits of a static analytical approach. In this paper, we describe an interactive visual analytics system, called Matisse, that aids with the discovery and investigation of trends in streaming text. Matisse addresses the challenges inherent to text stream mining through the following technical contributions: (1) robust stream data management, (2) automated sentiment/emotion analytics, (3) interactive coordinated visualizations, and (4) a flexible drill-down interaction scheme that accesses multiple levels of detail. In addition to positive/negative sentiment prediction, Matisse provides fine-grained emotion classification based on Valence, Arousal, and Dominance dimensions and a novel machine learning process. Information from the sentiment/emotion analytics is fused with raw data and summary information to feed temporal, geospatial, term frequency, and scatterplot visualizations using a multi-scale, coordinated interaction model. After describing these techniques, we conclude with a practical case study focused on analyzing the Twitter sample stream during the week of the 2013 Boston Marathon bombings. The case study demonstrates the effectiveness of Matisse at providing guided situational awareness of significant trends in social media streams by orchestrating computational power and human cognition.

  12. Inferring spatial and temporal behavioral patterns of free-ranging manatees using saltwater sensors of telemetry tags

    USGS Publications Warehouse

    Castelblanco-Martínez, Delma Nataly; Morales-Vela, Benjamin; Slone, Daniel H.; Padilla-Saldívar, Janneth Adriana; Reid, James P.; Hernández-Arana, Héctor Abuid

    2015-01-01

    Diving or respiratory behavior in aquatic mammals can be used as an indicator of physiological activity and, consequently, to infer behavioral patterns. Five Antillean manatees, Trichechus manatus manatus, were captured in Chetumal Bay and tagged with GPS tracking devices. The radios were equipped with a micropower saltwater sensor (SWS), which records the times when the tag assembly was submerged. The information was analyzed to establish individual fine-scale behaviors. For each fix, we established the following variables: distance (D), sampling interval (T), movement rate (D/T), number of dives (N), and total diving duration (TDD). We used logic criteria and simple scatterplots to distinguish between behavioral categories: ‘Travelling’ (D/T ≥ 3 km/h), ‘Surface’ (↓TDD, ↓N), ‘Bottom feeding’ (↑TDD, ↑N), and ‘Bottom resting’ (↑TDD, ↓N). Habitat categories were qualitatively assigned: Lagoon, Channels, Caye shore, City shore, Channel edge, and Open areas. The instrumented individuals displayed a daily rhythm of bottom activities, with surfacing activities more frequent during the night and early in the morning. More investigation into those cycles and other individual fine-scale behaviors related to their proximity to concentrations of human activity would be informative.
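    The logic criteria above translate directly into a rule-based classifier. Only the 3 km/h travelling cutoff is stated explicitly; the thresholds standing in for the up/down arrows (e.g., per-animal medians of N and TDD) are assumptions:

    ```python
    def classify_fix(rate_kmh, n_dives, tdd, n_cut, tdd_cut):
        """Label one GPS fix from movement rate (D/T), number of dives (N),
        and total diving duration (TDD); n_cut and tdd_cut are illustrative
        thresholds, e.g. per-individual medians."""
        if rate_kmh >= 3.0:
            return "Travelling"
        if tdd <= tdd_cut and n_dives <= n_cut:
            return "Surface"
        if tdd > tdd_cut and n_dives > n_cut:
            return "Bottom feeding"
        if tdd > tdd_cut and n_dives <= n_cut:
            return "Bottom resting"
        return "Unclassified"   # low TDD with many dives matches no category
    ```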

  13. Correcting wave predictions with artificial neural networks

    NASA Astrophysics Data System (ADS)

    Makarynskyy, O.; Makarynska, D.

    2003-04-01

    Predictions of wind waves at different lead times are necessary in a large scope of coastal and open-ocean activities. Numerical wave models, which usually provide this information, are based on deterministic equations that do not entirely account for the complexity and uncertainty of the wave generation and dissipation processes. An attempt to improve short-term forecasts of wave parameters using artificial neural networks is reported. In recent years, artificial neural networks have been used in a number of coastal engineering applications due to their ability to approximate nonlinear mathematical behavior without a priori knowledge of the interrelations among the elements within a system. Common multilayer feed-forward networks, with nonlinear transfer functions in the hidden layers, were developed and employed to forecast the wave characteristics at hourly intervals from one up to 24 hours ahead, and to correct these predictions. Three non-overlapping data sets of wave characteristics, all from a buoy moored roughly 60 miles west of the Aran Islands, west coast of Ireland, were used to train and validate the neural nets involved. The networks were trained with the error back-propagation algorithm. Time series plots and scatterplots of the wave characteristics, as well as tables of statistics, show an improvement of the results achieved due to the correction procedure employed.
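    A toy version of the correction scheme, substituting scikit-learn's feed-forward network for the authors' hand-built nets; the data are synthetic and the feature choice is an assumption:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    obs = 2.0 + 0.5 * np.sin(np.arange(500) / 12.0) + 0.1 * rng.normal(size=500)
    model_fc = obs + 0.3 + 0.05 * rng.normal(size=500)   # biased "model" forecast
    X = np.column_stack([model_fc[:-1], obs[:-1]])       # features at hour t
    y = (obs - model_fc)[1:]                             # forecast error at t+1
    net = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                       max_iter=5000, random_state=0).fit(X[:400], y[:400])
    corrected = model_fc[1:][400:] + net.predict(X[400:])  # corrected forecasts
    ```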

  14. Asthma Is More Severe in Older Adults

    PubMed Central

    Dweik, Raed A.; Comhair, Suzy A.; Bleecker, Eugene R.; Moore, Wendy C.; Peters, Stephen P.; Busse, William W.; Jarjour, Nizar N.; Calhoun, William J.; Castro, Mario; Chung, K. Fan; Fitzpatrick, Anne; Israel, Elliot; Teague, W. Gerald; Wenzel, Sally E.; Love, Thomas E.; Gaston, Benjamin M.

    2015-01-01

    Background Severe asthma occurs more often in older adult patients. We hypothesized that the greater risk for severe asthma in older individuals is due to aging, and is independent of asthma duration. Methods This is a cross-sectional study of prospectively collected data from adult participants (N=1130; 454 with severe asthma) enrolled from 2002-2011 in the Severe Asthma Research Program. Results The association between age and the probability of severe asthma, examined by applying a locally weighted scatterplot smoother, revealed an inflection point at age 45 for risk of severe asthma. The probability of severe asthma increased with each year of life until 45 years and thereafter increased at a much slower rate. Asthma duration also increased the probability of severe asthma but had less effect than aging. After adjustment for most comorbidities of aging and for asthma duration using logistic regression, asthmatics older than 45 maintained a greater probability of severe asthma [OR: 2.73 (95% CI: 1.96-3.81)]. After 45, the age-related risk of severe asthma continued to increase in men, but not in women. Conclusions Overall, the impact of age and asthma duration on risk for asthma severity in men and women is greatest between 18 and 45 years of age; age has a greater effect than asthma duration on risk of severe asthma. PMID:26200463

  15. Effect of disease progression on liver apparent diffusion coefficient and T2 values in a murine model of hepatic fibrosis at 11.7 Tesla MRI.

    PubMed

    Anderson, Stephan W; Jara, Hernan; Ozonoff, Al; O'Brien, Michael; Hamilton, James A; Soto, Jorge A

    2012-01-01

    To evaluate the effects of hepatic fibrosis on ADC and T(2) values of ex vivo murine liver specimens imaged using 11.7 Tesla (T) MRI. This animal study was IACUC approved. Seventeen male C57BL/6 mice were divided into control (n = 2) and experimental (n = 15) groups, the latter fed a 3,5-dicarbethoxy-1,4-dihydrocollidine (DDC)-supplemented diet, inducing hepatic fibrosis. Ex vivo liver specimens were imaged using an 11.7T MRI scanner. Spin-echo pulsed field gradient and multi-echo spin-echo acquisitions were used to generate parametric ADC and T(2) maps, respectively. Degrees of fibrosis were determined by the evaluation of a pathologist as well as by digital image analysis. Scatterplot graphs comparing ADC and T(2) to degrees of fibrosis were generated, and correlation coefficients were calculated. Strong correlation was found between degree of hepatic fibrosis and ADC, with higher degrees of fibrosis associated with lower hepatic ADC values. Moderate correlation between hepatic fibrosis and T(2) values was seen, with higher degrees of fibrosis associated with lower T(2) values. Inverse relationships between degree of fibrosis and both ADC and T(2) are seen, highlighting the utility of these parameters in the ongoing development of an MRI methodology to quantify hepatic fibrosis. Copyright © 2011 Wiley Periodicals, Inc.

  16. The validity of a structured interactive 24-hour recall in estimating energy and nutrient intakes in 15-month-old rural Malawian children.

    PubMed

    Thakwalakwa, Chrissie M; Kuusipalo, Heli M; Maleta, Kenneth M; Phuka, John C; Ashorn, Per; Cheung, Yin Bun

    2012-07-01

    This study aimed to compare the nutritional intake values among 15-month-old rural Malawian children obtained by weighed food record (WFR) with those obtained by modified 24-hour recall (mod 24-HR), and to develop an algorithm for adjusting mod 24-HR values so as to predict mean intake based on WFRs. The study participants were 169 15-month-old children who participated in a clinical trial. Food consumption on one day was observed and weighed (the established criterion) by a research assistant to provide estimates of energy and nutrient intakes. On the following day, another research assistant, blinded to the direct observation, conducted the structured interactive 24-hour recall (24-HR) interview (the test method). Paired t-tests and scatter-plots were used to compare intake values of the two methods. The structured interactive 24-HR method tended to overestimate energy and nutrient intakes (each P < 0.001). The regression-through-the-origin method was used to develop adjustment algorithms. Results showed that multiplying the mean energy, protein, fat, iron, zinc and vitamin A intake estimates based on the test method by 0.86, 0.80, 0.68, 0.69, 0.72 and 0.76, respectively, provides an approximation of the mean values based on WFRs. © 2011 Blackwell Publishing Ltd.
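    The adjustment factors quoted above come from regression through the origin, which has a one-line closed form; this sketch assumes paired per-child intake estimates from the two methods:

    ```python
    import numpy as np

    def origin_slope(recall, wfr):
        """Least-squares slope of wfr = b * recall with no intercept:
        b = sum(x*y) / sum(x*x). Multiplying recall-based means by b
        approximates the weighed-food-record means."""
        x = np.asarray(recall, dtype=float)
        y = np.asarray(wfr, dtype=float)
        return float((x * y).sum() / (x * x).sum())

    # A slope near 0.86 for energy would reproduce the adjustment reported above.
    ```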

  17. CProb: a computational tool for conducting conditional probability analysis.

    PubMed

    Hollister, Jeffrey W; Walker, Henry A; Paul, John F

    2008-01-01

    Conditional probability is the probability of observing one event given that another event has occurred. In an environmental context, conditional probability helps to assess the association between an environmental contaminant (i.e., the stressor) and the ecological condition of a resource (i.e., the response). These analyses, when combined with controlled experiments and other methodologies, show great promise in evaluating ecological conditions from observational data and in defining water quality and other environmental criteria. Current applications of conditional probability analysis (CPA) are largely done via scripts or cumbersome spreadsheet routines, which may prove daunting to end-users and do not provide access to the underlying scripts. Combining spreadsheets with scripts eases computation through a familiar interface (i.e., Microsoft Excel) and creates a transparent process through full accessibility to the scripts. With this in mind, we developed a software application, CProb, as an Add-in for Microsoft Excel with R, R(D)com Server, and Visual Basic for Applications. CProb calculates and plots scatterplots, empirical cumulative distribution functions, and conditional probability. In this short communication, we describe CPA, our motivation for developing a CPA tool, and our implementation of CPA as a Microsoft Excel Add-in. Further, we illustrate the use of our software with two examples: a water quality example and a landscape example. CProb is freely available for download at http://www.epa.gov/emap/nca/html/regions/cprob.
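    The central computation in CPA is compact: for a grid of stressor thresholds, estimate the probability of an impaired response given that the stressor exceeds the threshold. A minimal version (not CProb's actual Excel/R implementation):

    ```python
    import numpy as np

    def conditional_prob(stressor, impaired, thresholds):
        """For each threshold xc, estimate Pr(impaired | stressor >= xc)."""
        s = np.asarray(stressor, dtype=float)
        z = np.asarray(impaired, dtype=bool)
        return np.array([z[s >= xc].mean() if np.any(s >= xc) else np.nan
                         for xc in thresholds])
    ```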

  18. Reliability of landmark identification in cephalometric radiography acquired by a storage phosphor imaging system.

    PubMed

    Chen, Y-J; Chen, S-K; Huang, H-W; Yao, C-C; Chang, H-F

    2004-09-01

    To compare cephalometric landmark identification on softcopy and hardcopy of direct digital cephalography acquired by a storage-phosphor (SP) imaging system. Ten digital cephalograms and their conventional counterparts, hardcopies on transparent blue film, were obtained with an SP imaging system and a dye sublimation printer. Twelve orthodontic residents identified 19 cephalometric landmarks on monitor-displayed SP digital images with a computer-aided method and on their hardcopies with the conventional method. The x- and y-coordinates for each landmark, indicating the horizontal and vertical positions, were analysed to assess the reliability of landmark identification and to evaluate the concordance of the landmark locations in softcopy and hardcopy of SP digital cephalometric radiography. For each of the 19 landmarks, the location differences as well as their horizontal and vertical components were statistically significant between SP digital cephalometric radiography and its hardcopy. Smaller interobserver errors on SP digital images than on their hardcopies were noted for all landmarks, except point Go in the vertical direction. The scatter-plots demonstrate the characteristic distribution of the interobserver error in both horizontal and vertical directions. Generally, the dispersion of interobserver error on SP digital cephalometric radiography is less than that on its hardcopy with the conventional method. SP digital cephalometric radiography can yield a level of performance in landmark identification better than or comparable to that of its hardcopy, except for point Go in the vertical direction.

  19. Quantitative analysis of diffusion tensor orientation: theoretical framework.

    PubMed

    Wu, Yu-Chien; Field, Aaron S; Chung, Moo K; Badie, Benham; Alexander, Andrew L

    2004-11-01

    Diffusion-tensor MRI (DT-MRI) yields information about the magnitude, anisotropy, and orientation of water diffusion of brain tissues. Although white matter tractography and eigenvector color maps provide visually appealing displays of white matter tract organization, they do not easily lend themselves to quantitative and statistical analysis. In this study, a set of visual and quantitative tools for the investigation of tensor orientations in the human brain was developed. Visual tools included rose diagrams, which are spherical coordinate histograms of the major eigenvector directions, and 3D scatterplots of the major eigenvector angles. A scatter matrix of major eigenvector directions was used to describe the distribution of major eigenvectors in a defined anatomic region. A measure of eigenvector dispersion was developed to describe the degree of eigenvector coherence in the selected region. These tools were used to evaluate directional organization and the interhemispheric symmetry of DT-MRI data in five healthy human brains and two patients with infiltrative diseases of the white matter tracts. In normal anatomical white matter tracts, a high degree of directional coherence and interhemispheric symmetry was observed. The infiltrative diseases appeared to alter the eigenvector properties of affected white matter tracts, showing decreased eigenvector coherence and interhemispheric symmetry. This novel approach distills the rich, 3D information available from the diffusion tensor into a form that lends itself to quantitative analysis and statistical hypothesis testing. (c) 2004 Wiley-Liss, Inc.
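    One published way to summarize eigenvector coherence, via the mean dyadic tensor, can be sketched as follows; whether it matches the paper's exact dispersion measure is an assumption:

    ```python
    import numpy as np

    def dispersion(eigvecs):
        """eigvecs: (n, 3) unit major eigenvectors from a region of interest.
        The mean dyadic <e e^T> is sign-invariant (e and -e give the same
        dyad); its leading eigenvalue b1 is 1 for perfectly aligned vectors
        and 1/3 for an isotropic set, so 1 - b1 grows with dispersion."""
        E = np.asarray(eigvecs, dtype=float)
        D = np.einsum("ni,nj->ij", E, E) / len(E)   # mean dyadic tensor
        b1 = np.linalg.eigvalsh(D)[-1]              # largest eigenvalue
        return 1.0 - b1
    ```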

  20. Quantitative skeletal evaluation based on cervical vertebral maturation: a longitudinal study of adolescents with normal occlusion.

    PubMed

    Chen, L; Liu, J; Xu, T; Long, X; Lin, J

    2010-07-01

    The study aims were to investigate the correlation between vertebral shape and hand-wrist maturation and to select characteristic parameters of C2-C5 (the second to fifth cervical vertebrae) for cervical vertebral maturation determination by mixed longitudinal data. 87 adolescents (32 males, 55 females) aged 8-18 years with normal occlusion were studied. Sequential lateral cephalograms and hand-wrist radiographs were taken annually for 6 consecutive years. Lateral cephalograms were divided into 11 maturation groups according to Fishman Skeletal Maturity Indicators (SMI). 62 morphological measurements of C2-C5 at 11 different developmental stages (SMI1-11) were measured and analysed. Locally weighted scatterplot smoothing, correlation coefficient analysis and variable cluster analysis were used for statistical analysis. Of the 62 cervical vertebral parameters, 44 were positively correlated with SMI, 6 were negatively correlated and 12 were not correlated. The correlation coefficients between cervical vertebral parameters and SMI were relatively high. Characteristic parameters for quantitative analysis of cervical vertebral maturation were selected. In summary, cervical vertebral maturation could be used reliably to evaluate the skeletal stage instead of the hand-wrist radiographic method. Selected characteristic parameters offered a simple and objective reference for the assessment of skeletal maturity and timing of orthognathic surgery. Copyright 2010 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  1. Morphological analysis of Trichomycterus areolatus Valenciennes, 1846 from southern Chilean rivers using a truss-based system (Siluriformes, Trichomycteridae).

    PubMed

    Colihueque, Nelson; Corrales, Olga; Yáñez, Miguel

    2017-01-01

    Trichomycterus areolatus Valenciennes, 1846 is a small endemic catfish inhabiting the Andean river basins of Chile. In this study, the morphological variability of three T. areolatus populations, collected in two river basins from southern Chile, was assessed with multivariate analyses, including principal component analysis (PCA) and discriminant function analysis (DFA). It is hypothesized that populations must segregate morphologically from each other based on the river basin from which they were sampled, since each basin presents relatively particular hydrological characteristics. Significant morphological differences among the three populations were found with PCA (ANOSIM test, r = 0.552, p < 0.0001) and DFA (Wilks's λ = 0.036, p < 0.01). PCA accounted for a total variation of 56.16% by the first two principal components. The first Principal Component (PC1) and PC2 explained 34.72 and 21.44% of the total variation, respectively. The scatter-plot of the first two discriminant functions (DF1 on DF2) also validated the existence of three different populations. In group classification using DFA, 93.3% of the specimens were correctly classified into their original populations. Of the total of 22 transformed truss measurements, 17 exhibited highly significant (p < 0.01) differences among populations. The data support the existence of T. areolatus morphological variation across different rivers in southern Chile, likely reflecting the geographic isolation underlying the population structure of the species.

  2. On the Bimodality of ENSO Cycle Extremes

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2000-01-01

    On the basis of sea surface temperature in the El Nino 3.4 region (5 deg. N.,-5 deg. S., 120-170 deg. W.) during the interval of 1950-1997, Kevin Trenberth previously has identified some 16 El Nino and 10 La Nina, these 26 events representing the extremes of the quasi-periodic El Nino-Southern Oscillation (ENSO) cycle. Runs testing shows that the duration, recurrence period, and sequencing of these extremes vary randomly. Hence, the decade of the 1990's, especially for El Nino, is not significantly different from that of previous decadal epochs, at least, on the basis of the frequency of onsets of ENSO extremes. Additionally, the distribution of duration for both El Nino and La Nina looks strikingly bimodal, each consisting of two preferred modes, about 8- and 16-mo long for El Nino and about 9- and 18-mo long for La Nina, as does the distribution of the recurrence period for El Nino, consisting of two preferred modes about 21- and 50-mo long. Scatterplots of the recurrence period versus duration for El Nino are found to be statistically important, displaying preferential associations that link shorter (longer) duration with shorter (longer) recurrence periods. Because the last onset of El Nino occurred in April 1997 and the event was of longer than average duration, onset of the next anticipated El Nino is not expected until February 2000 or later.

  3. On The Bimodality of ENSO Cycle Extremes

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2000-01-01

    On the basis of sea surface temperature in the El Nino 3.4 region (5N.-5S., 120-170W.) during the interval of 1950-1997, Kevin Trenberth previously has identified some 16 El Nino and 10 La Nina, these 26 events representing the extremes of the quasi-periodic El Nino-Southern Oscillation (ENSO) cycle. Runs testing shows that the duration, recurrence period, and sequencing of these extremes vary randomly. Hence, the decade of the 1990's, especially for El Nino, is not significantly different from that of previous decadal epochs, at least, on the basis of the frequency of onsets of ENSO extremes. Additionally, the distribution of duration for both El Nino and La Nina looks strikingly bimodal, each consisting of two preferred modes, about 8- and 16-mo long for El Nino and about 9- and 18-mo long for La Nina, as does the distribution of the recurrence period for El Nino, consisting of two preferred modes about 21- and 50-mo long. Scatterplots of the recurrence period versus duration for El Nino are found to be statistically important, displaying preferential associations that link shorter (longer) duration with shorter (longer) recurrence periods. Because the last onset of El Nino occurred in April 1997 and the event was of longer than average duration, onset of the next anticipated El Nino is not expected until February 2000 or later.

  4. BusyBee Web: metagenomic data analysis by bootstrapped supervised binning and annotation

    PubMed Central

    Kiefer, Christina; Fehlmann, Tobias; Backes, Christina

    2017-01-01

    Metagenomics-based studies of mixed microbial communities are impacting biotechnology, life sciences and medicine. Computational binning of metagenomic data is a powerful approach for the culture-independent recovery of population-resolved genomic sequences, i.e. from individual or closely related, constituent microorganisms. Existing binning solutions often require a priori characterized reference genomes and/or dedicated compute resources. Extending currently available reference-independent binning tools, we developed the BusyBee Web server for the automated deconvolution of metagenomic data into population-level genomic bins using assembled contigs (Illumina) or long reads (Pacific Biosciences, Oxford Nanopore Technologies). A reversible compression step as well as bootstrapped supervised binning enable quick turnaround times. The binning results are represented in interactive 2D scatterplots. Moreover, bin quality estimates, taxonomic annotations and annotations of antibiotic resistance genes are computed and visualized. Ground truth-based benchmarks of BusyBee Web demonstrate performance comparable to state-of-the-art binning solutions for assembled contigs and markedly improved performance for long reads (median F1 scores: 70.02–95.21%). Furthermore, the applicability to real-world metagenomic datasets is shown. In conclusion, our reference-independent approach automatically bins assembled contigs or long reads, exhibits high sensitivity and precision, enables intuitive inspection of the results, and only requires FASTA-formatted input. The web-based application is freely accessible at: https://ccb-microbe.cs.uni-saarland.de/busybee. PMID:28472498

  5. The validity of using an electrocutaneous device for pain assessment in patients with cervical radiculopathy.

    PubMed

    Abbott, Allan; Ghasemi-Kafash, Elaheh; Dedering, Åsa

    2014-10-01

    The purpose of this study was to evaluate the validity of, and preference for, assessing pain magnitude with electrocutaneous testing (ECT) compared to the visual analogue scale (VAS) and Borg CR10 scale in men and women with cervical radiculopathy of varying sensory phenotypes. An additional purpose was to investigate ECT sensory and pain thresholds in men and women with cervical radiculopathy of varying sensory phenotypes. This is a cross-sectional study of 34 patients with cervical radiculopathy. Scatterplots and linear regression were used to investigate bivariate relationships between the ECT, VAS and Borg CR10 methods of pain magnitude measurement, as well as ECT sensory and pain thresholds. The use of the ECT pain magnitude matching paradigm for patients with cervical radiculopathy with a normal sensory phenotype shows good linear association with arm pain VAS (R(2) = 0.39), neck pain VAS (R(2) = 0.38), arm pain Borg CR10 scale (R(2) = 0.50) and neck pain Borg CR10 scale (R(2) = 0.49), suggesting acceptable validity of the procedure. For patients with hypoesthesia and hyperesthesia sensory phenotypes, the ECT pain magnitude matching paradigm does not show adequate linear association with the rating scale methods, rendering the validity of the procedure doubtful. ECT for sensory and pain threshold investigation, however, provides a method to objectively assess global sensory function in conjunction with sensory receptor-specific bedside examination measures.

  6. Morphological analysis of Trichomycterus areolatus Valenciennes, 1846 from southern Chilean rivers using a truss-based system (Siluriformes, Trichomycteridae)

    PubMed Central

    Colihueque, Nelson; Corrales, Olga; Yáñez, Miguel

    2017-01-01

    Trichomycterus areolatus Valenciennes, 1846 is a small endemic catfish inhabiting the Andean river basins of Chile. In this study, the morphological variability of three T. areolatus populations, collected in two river basins from southern Chile, was assessed with multivariate analyses, including principal component analysis (PCA) and discriminant function analysis (DFA). It is hypothesized that populations must segregate morphologically from each other based on the river basin from which they were sampled, since each basin presents relatively particular hydrological characteristics. Significant morphological differences among the three populations were found with PCA (ANOSIM test, r = 0.552, p < 0.0001) and DFA (Wilks’s λ = 0.036, p < 0.01). PCA accounted for a total variation of 56.16% by the first two principal components. The first Principal Component (PC1) and PC2 explained 34.72 and 21.44% of the total variation, respectively. The scatter-plot of the first two discriminant functions (DF1 on DF2) also validated the existence of three different populations. In group classification using DFA, 93.3% of the specimens were correctly classified into their original populations. Of the total of 22 transformed truss measurements, 17 exhibited highly significant (p < 0.01) differences among populations. The data support the existence of T. areolatus morphological variation across different rivers in southern Chile, likely reflecting the geographic isolation underlying the population structure of the species. PMID:29134012

  7. Genetic diversity and relationships among different tomato varieties revealed by EST-SSR markers.

    PubMed

    Korir, N K; Diao, W; Tao, R; Li, X; Kayesh, E; Li, A; Zhen, W; Wang, S

    2014-01-08

    The genetic diversity and relationships of 42 tomato varieties sourced from different geographic regions were examined with EST-SSR markers. The genetic diversity was between 0.18 and 0.77, with a mean of 0.49; the polymorphic information content ranged from 0.17 to 0.74, with a mean of 0.45. This indicates a fairly high degree of diversity among these tomato varieties. Based on cluster analysis using the unweighted pair-group method with arithmetic average (UPGMA), all the tomato varieties fell into 5 groups, with no obvious geographical distribution characteristics despite their diverse sources. The principal component analysis (PCA) supported the clustering result; however, relationships among varieties were more complex in the PCA scatterplot than in the UPGMA dendrogram. This information about the genetic relationships among these tomato lines helps distinguish the 42 varieties and will be useful for tomato variety breeding and selection. We confirm that the EST-SSR marker system is useful for studying genetic diversity among tomato varieties. The high degree of polymorphism and the large number of bands obtained per assay show that SSR is the most informative marker system for tomato genotyping for purposes of rights/protection and for the tomato industry in general. It is recommended that these varieties be subjected to identification using an SSR-based manual cultivar identification diagram strategy or other easy-to-use and referable methods so as to provide a complete set of information concerning genetic relationships and a readily usable means of identifying these varieties.
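    UPGMA is average-linkage hierarchical clustering on a genetic-distance matrix, which SciPy provides directly; the 0/1 marker matrix below is a synthetic stand-in for the EST-SSR band data:

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(0)
    markers = rng.integers(0, 2, size=(42, 60))   # toy presence/absence bands
    d = pdist(markers, metric="hamming")          # pairwise genetic distance
    tree = linkage(d, method="average")           # "average" linkage is UPGMA
    groups = fcluster(tree, t=5, criterion="maxclust")   # cut into 5 groups
    ```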

  8. Adjustment of pesticide concentrations for temporal changes in analytical recovery, 1992–2010

    USGS Publications Warehouse

    Martin, Jeffrey D.; Eberle, Michael

    2011-01-01

    Recovery is the proportion of a target analyte that is quantified by an analytical method and is a primary indicator of the analytical bias of a measurement. Recovery is measured by analysis of quality-control (QC) water samples that have known amounts of target analytes added ("spiked" QC samples). For pesticides, recovery is the measured amount of pesticide in the spiked QC sample expressed as a percentage of the amount spiked, ideally 100 percent. Temporal changes in recovery have the potential to adversely affect time-trend analysis of pesticide concentrations by introducing trends in apparent environmental concentrations that are caused by trends in performance of the analytical method rather than by trends in pesticide use or other environmental conditions. This report presents data and models related to the recovery of 44 pesticides and 8 pesticide degradates (hereafter referred to as "pesticides") that were selected for a national analysis of time trends in pesticide concentrations in streams. Water samples were analyzed for these pesticides from 1992 through 2010 by gas chromatography/mass spectrometry. Recovery was measured by analysis of pesticide-spiked QC water samples. Models of recovery, based on robust, locally weighted scatterplot smooths (lowess smooths) of matrix spikes, were developed separately for groundwater and stream-water samples. The models of recovery can be used to adjust concentrations of pesticides measured in groundwater or stream-water samples to 100 percent recovery to compensate for temporal changes in the performance (bias) of the analytical method.
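    A minimal sketch of the adjustment workflow, assuming spike and sample dates are expressed as decimal years; the smoothing fraction and the interpolation step are assumptions, not the report's exact parameters:

    ```python
    import numpy as np
    from statsmodels.nonparametric.smoothers_lowess import lowess

    def adjust_to_full_recovery(conc, sample_dates, spike_dates, spike_recovery_pct):
        """Model temporal recovery with a lowess smooth of matrix-spike
        results, then divide measured concentrations by the modeled recovery
        (as a fraction) to adjust them to 100 percent recovery."""
        fit = lowess(spike_recovery_pct, spike_dates, frac=0.5, return_sorted=True)
        modeled = np.interp(sample_dates, fit[:, 0], fit[:, 1])  # recovery at sample time
        return np.asarray(conc, dtype=float) / (modeled / 100.0)
    ```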

  9. H-index and academic rank in general surgery and surgical specialties in the United States.

    PubMed

    Ashfaq, Awais; Kalagara, Roshini; Wasif, Nabil

    2018-09-01

    The h-index serves as an alternative measure of academic achievement. Our objective was to study the h-index as a measure of academic attainment in general surgery and the surgical specialties. A database of all surgical programs in the United States was created. Publish or Perish software was used to determine each surgeon's h-index. A total of 134 hospitals and 3712 surgeons (79% male) were included. Overall, the mean h-index was 14.9 ± 14.8. The h-index increased linearly with academic rank: 6.8 ± 6.4 for assistant professors (n = 1557, 41.9%), 12.9 ± 9.3 for associate professors (n = 891, 24%), and 27.9 ± 17.4 for professors (n = 1170, 31.5%); P < 0.001. Thoracic surgery and surgical oncology had the highest subspecialty mean h-indices (18.7 ± 16.7 and 18.4 ± 17.6, respectively). Surgeons with additional postgraduate degrees or university affiliations, and male surgeons, had higher mean h-indices; P < 0.001. Scatterplot analysis showed a strong correlation between h-index and the number of publications (R2 = 0.817) and citations (R2 = 0.768). The h-index of academic surgeons correlates with academic rank and serves as a potential tool to measure academic productivity. Copyright © 2018 Elsevier Inc. All rights reserved.
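    The h-index itself is a small algorithm: the largest h such that h of a surgeon's papers each have at least h citations.

    ```python
    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        h = 0
        for i, c in enumerate(sorted(citations, reverse=True), start=1):
            if c >= i:
                h = i
            else:
                break
        return h

    assert h_index([10, 8, 5, 4, 3]) == 4
    ```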

  10. On the importance of geological data for hydraulic tomography analysis: Laboratory sandbox study

    NASA Astrophysics Data System (ADS)

    Zhao, Zhanfeng; Illman, Walter A.; Berg, Steven J.

    2016-11-01

    This paper investigates the importance of geological data in hydraulic tomography (HT) through sandbox experiments. In particular, four groundwater models with homogeneous geological units constructed from borehole data of varying accuracy are jointly calibrated with multiple pumping test data of two different pumping and observation densities. The results are compared to those from a geostatistical inverse model. Model calibration and validation performances are quantitatively assessed using drawdown scatterplots. We find that both accurate and inaccurate geological models can be well calibrated, despite the estimated K values for the poor geological models being quite different from the actual values. Model validation results reveal that inaccurate geological models yield poor drawdown predictions, but using more calibration data improves their predictive capability. Moreover, comparisons between the highly parameterized geostatistical model and the layer-based geological models show that (1) as the number of pumping tests and monitoring locations is reduced, the performance gap between the approaches decreases, and (2) a simplified geological model with a smaller number of layers is more reliable than one based on an incorrect description of the stratigraphy. Finally, using a geological model as prior information in geostatistical inverse models results in the preservation of geological features, especially in areas where drawdown data are not available. Overall, our sandbox results emphasize the importance of incorporating geological data in HT surveys when data from pumping tests are sparse. These findings have important implications for field applications of HT where well distances are large.

  11. Craniometric relationships among medieval Central European populations: implications for Croat migration and expansion.

    PubMed

    Slaus, Mario; Tomicić, Zeljko; Uglesić, Ante; Jurić, Radomir

    2004-08-01

    To determine the ethnic composition of the early medieval Croats, the location from which they migrated to the east coast of the Adriatic, and to separate early medieval Croats from Bijelo brdo culture members, using principal components analysis and discriminant function analysis of craniometric data from Central and South-East European medieval archaeological sites. Mean male values for 8 cranial measurements from 39 European and 5 Iranian sites were analyzed by principal components analysis. Raw data for 17 cranial measurements for 103 female and 112 male skulls were used to develop discriminant functions. The scatter-plot of the analyzed sites on the first 2 principal components showed a pattern of intergroup relationships consistent with geographical and archaeological information not included in the data set. The first 2 principal components separated the sites into 4 distinct clusters: Avaroslav sites west of the Danube, Avaroslav sites east of the Danube, Bijelo brdo sites, and Polish sites. All early medieval Croat sites were located in the cluster of Polish sites. Two discriminant functions successfully differentiated between early medieval Croats and Bijelo brdo members. Overall accuracies were high -- 89.3% for males, and 97.1% for females. Early medieval Croats seem to be of Slavic ancestry, and at one time shared a common homeland with medieval Poles. Application of unstandardized discriminant function coefficients to unclassified crania from 18 sites showed an expansion of early medieval Croats into continental Croatia during the 10th to 13th century.
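    A sketch of the discriminant-classification step with scikit-learn, using synthetic stand-ins for the 17 cranial measurements; the group means, spreads, and sample sizes are illustrative only:

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(180, 5, (50, 17)),    # "Croat" crania (toy data)
                   rng.normal(184, 5, (50, 17))])   # "Bijelo brdo" crania
    y = np.repeat([0, 1], 50)
    lda = LinearDiscriminantAnalysis().fit(X, y)
    print(lda.score(X, y))            # resubstitution accuracy, cf. 89-97% above
    label = lda.predict(X[:1])        # assign an unclassified cranium to a group
    ```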

  12. The correlation between preoperative volumetry and real graft weight: comparison of two volumetry programs.

    PubMed

    Mussin, Nadiar; Sumo, Marco; Lee, Kwang-Woong; Choi, YoungRok; Choi, Jin Yong; Ahn, Sung-Woo; Yoon, Kyung Chul; Kim, Hyo-Sin; Hong, Suk Kyun; Yi, Nam-Joon; Suh, Kyung-Suk

    2017-04-01

    Liver volumetry is a vital component in living donor liver transplantation to determine an adequate graft volume that meets the metabolic demands of the recipient and at the same time ensures donor safety. Most institutions use preoperative contrast-enhanced CT image-based software programs to estimate graft volume. The objective of this study was to evaluate the accuracy of 2 liver volumetry programs (Rapidia vs. Dr. Liver) in preoperative right liver graft estimation compared with real graft weight. Data from 215 consecutive right lobe living donors between October 2013 and August 2015 were retrospectively reviewed. One hundred seven patients were enrolled in the Rapidia group and 108 patients were included in the Dr. Liver group. Estimated graft volumes generated by both software programs were compared with real graft weight measured during surgery, and further classified into minimal difference (≤15%) and big difference (>15%). Correlation coefficients and degree of difference were determined. Linear regressions were calculated and the results depicted as scatterplots. Minimal difference was observed in 69.4% of cases from the Dr. Liver group, and big difference was seen in 44.9% of cases from the Rapidia group (P = 0.035). Linear regression analysis showed positive correlation in both groups (P < 0.01). However, the correlation coefficient was better for the Dr. Liver group (R2 = 0.719) than for the Rapidia group (R2 = 0.688). Dr. Liver can predict right liver graft size more accurately and faster than Rapidia, and can facilitate preoperative planning in living donor liver transplantation.

  13. Unconditional reference values for the amniotic fluid index measurement between 26w0d and 41w6d of gestation in low-risk pregnancies.

    PubMed

    Peixoto, Alberto Borges; Caldas, Taciana Mara Rodrigues da Cunha; Martins, Wellington P; Da Silva Costa, Fabricio; Araujo Júnior, Edward

    2016-10-01

    To establish reference values for the amniotic fluid index (AFI) measurement between 26w0d and 41w6d of gestation in a Brazilian population. We performed a cross-sectional study with 1984 low-risk singleton pregnant women between 26w0d and 41w6d of gestation. AFI was measured according to the technique proposed by Phelan et al.: the maternal abdomen was divided into four quadrants using the umbilicus and linea nigra as landmarks. The single vertical pocket in each quadrant, free of umbilical cord or fetal parts, was measured, and the AFI was generated as the sum of these four values. All ultrasound exams were performed by only two experienced examiners. AFI was expressed as median, interquartile range, mean and range in each gestational age (GA) interval. Polynomial regressions were performed to obtain the best fit, with adjustment by the determination coefficient (R(2)). Mean AFI ranged from 14.0 ± 4.1 cm (range, 9.7-14.0) at 26w0d to 8.3 ± 4.7 cm (range, 1.9-16.5) at 41w6d. The best polynomial regression fit was a first-degree curve: AFI = 16.29 - 0.125*GA (R(2) = 0.01). According to the scatterplot, AFI values practically did not vary with advancing GA. Reference values for the AFI measurement between 26w0d and 41w6d of gestation in a low-risk Brazilian population were established.

  14. Oregon Elks Children's Eye Clinic vision screening results for astigmatism.

    PubMed

    Vaughan, Joannah; Dale, Talitha; Herrera, Daniel; Karr, Daniel

    2018-04-19

    In the Elks Preschool Vision Screening program, which uses the plusoptiX S12 to screen children 36-60 months of age, the most common reason for over-referral under the 1.50 D referral criterion was found to be astigmatism. The goal of this study was to compare the accuracy of a 2.25 D referral criterion for astigmatism to the 1.50 D criterion using screening data from 2013-2014. Vision screenings were conducted on Head Start children 36-72 months of age by Head Start teachers and Elks Preschool Vision Screening staff using the plusoptiX S12. Data on 4,194 vision screenings in 2014 and 4,077 in 2013 were analyzed. Receiver operating characteristic (ROC) curve and area under the curve (AUC) analyses were performed to determine the optimal referral criteria. A t test and scatterplot analysis were performed to compare how many children required treatment under the different criteria. The medical records of 136 children (2.25 D) and 117 children (1.50 D) who were referred by the plusoptiX screening for potential astigmatism and received dilated eye examinations from their local eye doctors were reviewed retrospectively. Mean subject age was 4 years. Treatment for astigmatism was prescribed to 116 of 136 children using the 2.25 D setting, compared to 60 of 117 using the 1.50 D setting. In 2013 the program used the 1.50 D setting for astigmatism. With the astigmatism setting changed to 2.25 D, 85% of referrals required treatment, reducing false positives by 34%. Copyright © 2018. Published by Elsevier Inc.
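    Choosing between the 1.50 D and 2.25 D cutoffs is a threshold-selection problem of the kind ROC/AUC analysis addresses. A toy sketch with synthetic screening values (Youden's J is shown as one common selection criterion; the study may have used another):

    ```python
    import numpy as np
    from sklearn.metrics import roc_curve, auc

    rng = np.random.default_rng(0)
    astig = np.concatenate([rng.normal(1.2, 0.6, 200),   # no treatment needed
                            rng.normal(2.6, 0.8, 100)])  # treatment needed
    truth = np.concatenate([np.zeros(200), np.ones(100)])
    fpr, tpr, thr = roc_curve(truth, astig)
    print("AUC =", auc(fpr, tpr))
    best_cutoff = thr[np.argmax(tpr - fpr)]   # maximize Youden's J = TPR - FPR
    ```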

  15. The mitochondrial DNA makeup of Romanians: A forensic mtDNA control region database and phylogenetic characterization.

    PubMed

    Turchi, Chiara; Stanciu, Florin; Paselli, Giorgia; Buscemi, Loredana; Parson, Walther; Tagliabracci, Adriano

    2016-09-01

    To evaluate the pattern of the Romanian population from a mitochondrial perspective and to establish an appropriate mtDNA forensic database, we generated a high-quality mtDNA control region dataset from 407 Romanian subjects belonging to four major historical regions: Moldavia, Transylvania, Wallachia and Dobruja. The entire control region (CR) was analyzed by Sanger-type sequencing assays, and the resulting 306 different haplotypes were classified into haplogroups according to the most updated mtDNA phylogeny. The Romanian gene pool is mainly composed of West Eurasian lineages H (31.7%), U (12.8%), J (10.8%), R (10.1%), T (9.1%), N (8.1%), HV (5.4%), K (3.7%), HV0 (4.2%), with the exceptions of East Asian haplogroup M (3.4%) and African haplogroup L (0.7%). The pattern of mtDNA variation observed in this study indicates that the mitochondrial DNA pool is geographically homogeneous across Romania and that the haplogroup composition reveals signals of admixture of populations of different origin. The PCA scatterplot supported this scenario, with Romania located in the southeastern European area, close to Bulgaria and Hungary, and as a borderland with respect to the east Mediterranean and other eastern European countries. The high haplotype diversity (0.993) and nucleotide diversity (0.00838±0.00426), together with the low random match probability (0.0087), suggest the usefulness of this control region dataset as a forensic database in routine forensic mtDNA analysis and in the investigation of maternal genetic lineages in the Romanian population. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
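    The forensic summary statistics quoted above follow standard estimators, sketched below; that the paper used exactly these formulas is an assumption:

    ```python
    import numpy as np
    from collections import Counter

    def forensic_params(haplotypes):
        """Random match probability sum(p_i^2) and haplotype (gene) diversity
        n/(n-1) * (1 - sum(p_i^2)) from a list of mtDNA haplotype labels."""
        n = len(haplotypes)
        p = np.array([c / n for c in Counter(haplotypes).values()])
        rmp = float(np.sum(p ** 2))
        diversity = n / (n - 1) * (1.0 - rmp)
        return rmp, diversity
    ```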

  16. Reproduction accuracy of articulator mounting with an arbitrary face-bow vs. average values-a controlled, randomized, blinded patient simulator study.

    PubMed

    Ahlers, M Oliver; Edelhoff, Daniel; Jakstat, Holger A

    2018-06-21

    The benefit of positioning maxillary casts with the aid of face-bows has been questioned in the past. Therefore, the aim of this study was to investigate the reliability and validity of arbitrary face-bow transfers compared to a process based solely on orientation by means of average values. For optimized validity, the study was conducted using a controlled, randomized, anonymized, and blinded patient simulator study design. Thirty-eight undergraduate dental students were randomly divided into two groups; both groups used both methods, in opposite sequences. The investigated methods were the transfer of casts using an arbitrary face-bow and the transfer using average values based on Bonwill's triangle and the Balkwill angle. The "patient" used in this study was a patient simulator. All casts were transferred to the same individual articulator; all transferred casts were made using type IV special hard stone plaster, and type II plaster was used for attachment into the articulator. A blinded evaluation was performed based on three-dimensional measurements of three reference points. The results are presented three-dimensionally in scatterplots. Statistical analysis indicated a significantly smaller variance (Student's t test, p < 0.05) for the transfer using a face-bow, applicable to all three reference points. The use of an arbitrary face-bow significantly improves transfer reliability and hence validity. To simulate the patient situation in an individual articulator correctly, casts should be transferred at least by means of an arbitrary face-bow.

  17. Oral mucosal color changes as a clinical biomarker for cancer detection.

    PubMed

    Latini, Giuseppe; De Felice, Claudio; Barducci, Alessandro; Chitano, Giovanna; Pignatelli, Antonietta; Grimaldi, Luca; Tramacere, Francesco; Laurini, Ricardo; Andreassi, Maria Grazia; Portaluri, Maurizio

    2012-07-01

    Screening is a key tool for early cancer detection/prevention and potentially saves lives. Oral mucosal vascular aberrations and color changes have been reported in hereditary nonpolyposis colorectal cancer patients, possibly reflecting a subclinical extracellular matrix abnormality implicated in the general process of cancer development. Reasoning that physicochemical changes of a tissue should affect its optical properties, we investigated the diagnostic ability of oral mucosal color to identify patients with several types of cancer. A total of 67 patients with several histologically proven malignancies at different stages were enrolled along with a group of 60 healthy controls of comparable age and sex ratio. Oral mucosal color was measured in selected areas, and then univariate, cluster, and principal component analyses were carried out. Lower red and green and higher blue values were significantly associated with evidence of cancer (all P < 0.0001), and efficiently discriminated patients from controls. The blue color coordinate showed significantly higher sensitivity and specificity (96.66±2.77 and 97.16±3.46%, respectively) compared with the red and green coordinates. Likewise, the second principal component coordinate of the red-green clusters discriminated patients from controls with 98.2% sensitivity and 95% specificity (cut-off criterion ≤ 0.4547; P = 0.0001). The scatterplots of the chrominances revealed the formation of two well separated clusters, separating cancer patients from controls with a 99.4% probability of correct classification. These findings highlight the ability of oral color to encode clinically relevant biophysical information. In the near future, this low-cost and noninvasive method may become a useful tool for early cancer detection.

  18. The ambiguity of drought events, a bottleneck for Amazon forest drought response modelling

    NASA Astrophysics Data System (ADS)

    De Deurwaerder, Hannes; Verbeeck, Hans; Baker, Timothy; Christoffersen, Bradley; Ciais, Philippe; Galbraith, David; Guimberteau, Matthieu; Kruijt, Bart; Langerwisch, Fanny; Meir, Patrick; Rammig, Anja; Thonicke, Kirsten; Von Randow, Celso; Zhang, Ke

    2016-04-01

    Considering the important role of the Amazon forest in the global water and carbon cycle, the prognosis of altered hydrological patterns resulting from climate change provides strong incentive for apprehending the direct implications of drought on the vegetation of this ecosystem. Dynamic global vegetation models (DGVMs) have the potential to provide a useful tool to study drought impacts on various spatial and temporal scales. This, however, assumes that the models are able to properly represent drought-impact mechanisms. But how well do the models succeed in meeting this assumption? In this study, meteorological driver data and model output from four different DGVMs (ORCHIDEE, JULES, INLAND and LPGmL) are examined. Using the Palmer Drought Severity Index (PDSI) and the mean cumulative water deficit (MWD), the temporal and spatial representation of drought events is studied in the driver data and referenced to historical extreme drought events in the Amazon. Subsequently, within the resulting temporal and spatial frame, we study the drought impact on above-ground biomass (AGB) and gross primary production (GPP) fluxes. Flux tower data, field inventory data and the JUNG data-driven GPP product for the Amazon region are used for validation. Our findings not only suggest that the current state of the studied DGVMs is inadequate for representing Amazon droughts in general, but also highlight strong inter-model differences in drought responses. Using scatterplot studies and input-output correlations, we provide insight into the origin of these inter-model differences. In addition, we present directions for model development and improvement in the scope of Amazon forest drought response modelling.
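    One common formulation of the water-deficit metric used for Amazon drought studies, the cumulative water deficit with a fixed evapotranspiration term, is sketched below; whether it matches the study's exact MWD definition is an assumption:

    ```python
    import numpy as np

    def cumulative_water_deficit(precip_mm, et_mm=100.0):
        """Monthly series CWD_t = min(0, CWD_{t-1} + P_t - E), with E often
        taken as ~100 mm/month for Amazon forest; the most negative value is
        the (maximum) cumulative water deficit for the period."""
        cwd, series = 0.0, []
        for p in precip_mm:
            cwd = min(0.0, cwd + p - et_mm)
            series.append(cwd)
        return np.array(series), min(series)
    ```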

  19. Chemoprevention of Cigarette Smoke–Induced Alterations of MicroRNA Expression in Rat Lungs

    PubMed Central

    Izzotti, Alberto; Calin, George A.; Steele, Vernon E.; Cartiglia, Cristina; Longobardi, Mariagrazia; Croce, Carlo M.; De Flora, Silvio

    2015-01-01

    We previously showed that exposure to environmental cigarette smoke (ECS) for 28 days causes extensive downregulation of microRNA expression in the lungs of rats, resulting in the overexpression of multiple genes and proteins. In the present study, we evaluated by microarray the expression of 484 microRNAs in the lungs of either ECS-free or ECS-exposed rats treated with the orally administered chemopreventive agents N-acetylcysteine, oltipraz, indole-3-carbinol, 5,6-benzoflavone, and phenethyl isothiocyanate (as single agents or in combinations). This is the first study of microRNA modulation by chemopreventive agents in nonmalignant tissues. Scatterplot, hierarchical cluster, and principal component analyses of microarray and quantitative PCR data showed that none of the above chemopreventive regimens appreciably affected the baseline microRNA expression, indicating potential safety. On the other hand, all of them attenuated ECS-induced alterations, but to a variable extent and with different patterns, indicating potential preventive efficacy. The main ECS-altered functions that were modulated by chemopreventive agents included cell proliferation, apoptosis, differentiation, Ras activation, P53 functions, NF-κB pathway, transforming growth factor–related stress response, and angiogenesis. Some microRNAs known to be polymorphic in humans were downregulated by ECS and were protected by chemopreventive agents. This study provides proof-of-concept and validation of technology that we are further refining to screen and prioritize potential agents for continued development and to help elucidate their biological effects and mechanisms. Therefore, microRNA analysis may provide a new tool for predicting at early carcinogenesis stages both the potential safety and efficacy of cancer chemopreventive agents. PMID:20051373

  20. Considering body mass differences, who are the world's strongest women?

    PubMed

    Vanderburgh, P M; Dooman, C

    2000-01-01

    Allometric modeling (AM) has been used to determine the world's strongest body mass-adjusted man. Recently, however, AM was shown to demonstrate body mass bias in elite Olympic weightlifting performance. A second-order polynomial (2OP) provided a better fit than AM, with no body mass bias, for men and women. The purpose of this study was to apply both AM and 2OP models to women's world powerlifting records (more a function of pure strength, and less of power, than Olympic lifts) to determine the optimal modeling approach as well as the strongest body mass-adjusted woman in each event. Subjects were the 36 (9 per event) current women's world record holders (as of November 1997) for bench press (BP), deadlift (DL), squat (SQ), and total (TOT) lift (BP + DL + SQ) according to the International Powerlifting Federation (IPF). The 2OP model demonstrated the superior fit and no body mass bias, as indicated by the coefficient of variation and residuals scatterplot inspection, respectively, for DL, SQ, and TOT. The AM for these three lifts, however, showed favorable bias toward the middle weight classes. The 2OP and AM yielded an essentially identical fit for BP. Although body mass-adjusted world records were dependent on the model used, Carrie Boudreau (U.S., 56-kg weight class), who received top scores in TOT and DL with both models, is arguably the world's strongest woman overall. Furthermore, although the 2OP model provides a better fit than AM for this elite population, a case can still be made for AM use, particularly in light of its theoretical superiority.
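
    The two competing adjustment models are easy to contrast directly. Below is a minimal sketch of fitting an allometric model (lift = a·mass^b, linear in log-log space) against a second-order polynomial; the mass-lift pairs are invented placeholders, not the 1997 IPF records.

    ```python
    import numpy as np

    # hypothetical (body mass kg, record lift kg) pairs -- placeholders only
    mass = np.array([44.0, 48.0, 52.0, 56.0, 60.0, 67.5, 75.0, 82.5, 90.0])
    lift = np.array([140.0, 152.0, 163.0, 174.0, 183.0, 195.0, 205.0, 211.0, 214.0])

    # Allometric model (AM): lift = a * mass^b, fitted in log-log space
    b, log_a = np.polyfit(np.log(mass), np.log(lift), 1)
    am_pred = np.exp(log_a) * mass ** b

    # Second-order polynomial (2OP): lift = c2*mass^2 + c1*mass + c0
    coeffs = np.polyfit(mass, lift, 2)
    pop_pred = np.polyval(coeffs, mass)

    # Fit summarized by the coefficient of variation of the residuals;
    # plotting residuals against mass is the bias check the abstract describes
    for name, pred in [("AM", am_pred), ("2OP", pop_pred)]:
        cv = 100 * np.std(lift - pred) / np.mean(lift)
        print("%s residual CV: %.2f%%" % (name, cv))
    ```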

  1. Brightness of Solar Magnetic Elements As a Function of Magnetic Flux at High Spatial Resolution

    NASA Astrophysics Data System (ADS)

    Kahil, F.; Riethmüller, T. L.; Solanki, S. K.

    2017-03-01

    We investigate the relationship between the photospheric magnetic field of small-scale magnetic elements in the quiet Sun (QS) at disk center and the brightness at 214, 300, 313, 388, 397, and 525.02 nm. To this end, we analyzed spectropolarimetric and imaging time series acquired simultaneously by the Imaging Magnetograph eXperiment (IMaX) and the SuFI filter imager on board the balloon-borne observatory Sunrise during its first science flight in 2009, with high spatial and temporal resolution. We find a clear dependence of the contrast in the near ultraviolet and the visible on the line-of-sight component of the magnetic field, B_LOS, which is best described by a logarithmic model. This function effectively represents the relationship between the Ca II H-line emission and B_LOS and works better than the power-law fit adopted by previous studies. This, along with the high contrast reached at these wavelengths, will help with determining the contribution of small-scale elements in the QS to the irradiance changes for wavelengths below 388 nm. At all wavelengths, including the continuum at 525.40 nm, the intensity contrast does not decrease with increasing B_LOS. This result also strongly supports the view that Sunrise has resolved small strong-field magnetic elements in the internetwork, resulting in constant contrasts for large magnetic fields in our continuum contrast at 525.40 nm versus B_LOS scatterplot, unlike the turnover obtained in previous observational studies. This turnover is due to the intermixing of the bright magnetic features with the dark intergranular lanes surrounding them.
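
    As a rough illustration of the model comparison, the sketch below fits both a logarithmic and a power-law contrast model to synthetic contrast-versus-B_LOS data; the coefficients and the noise model are invented for the example, not taken from the paper.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(0)
    b_los = rng.uniform(5.0, 1500.0, 500)            # stand-in for |B_LOS|, G
    contrast = 0.04 * np.log(b_los) - 0.05 + rng.normal(0.0, 0.01, 500)

    def log_model(b, a, c):        # contrast = a*ln(B) + c
        return a * np.log(b) + c

    def power_model(b, a, k):      # contrast = a*B^k (earlier studies' form)
        return a * b ** k

    p_log, _ = curve_fit(log_model, b_los, contrast)
    p_pow, _ = curve_fit(power_model, b_los, contrast, p0=(0.01, 0.5))

    for name, f, p in [("log", log_model, p_log), ("power", power_model, p_pow)]:
        rss = np.sum((contrast - f(b_los, *p)) ** 2)
        print("%s model residual sum of squares: %.4f" % (name, rss))
    ```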

  2. Heavy metal concentrations in commercial deep-sea fish from the Rockall Trough

    NASA Astrophysics Data System (ADS)

    Mormede, S.; Davies, I. M.

    2001-05-01

    Samples of monkfish (Lophius piscatorius), black scabbard (Aphanopus carbo), blue ling (Molva dypterygia), blue whiting (Micromesistius poutassou) and hake (Merluccius merluccius) were obtained from 400 to 1150 m depth on the continental slope of the Rockall Trough, west of Scotland. Muscle, liver, gill and gonad tissue were analysed for arsenic, cadmium, copper, lead, mercury and zinc by various atomic absorption techniques. Median concentrations of arsenic ranged from 1.25 to 8.63 mg/kg wet weight in muscle tissue and from 3.04 to 5.72 mg/kg wet weight in liver tissue; cadmium from <0.002 to 0.034 mg/kg wet weight in muscle and from 0.11 to 6.98 mg/kg wet weight in liver; copper from 0.12 to 0.29 mg/kg wet weight in muscle and from 3.47 to 11.87 mg/kg wet weight in liver; and lead from <0.002 to 0.009 mg/kg wet weight in muscle, with liver concentrations below 0.05 mg/kg wet weight for all species. In general, the concentrations are similar to those previously published for deep-sea fish, and similar to or higher than those published for their shallow-water counterparts. All metal levels in black scabbard livers are much higher than in the other fish, and between 2 and 30 times higher than the limits of the European Dietary Standards and Guidelines. Differences in accumulation patterns between species and elements, as well as between organs, are described using univariate and multivariate statistics (scatterplots, discriminant analysis, triangular plots).

  3. Study of risk factors for gastric cancer by populational databases analysis

    PubMed Central

    Ferrari, Fangio; Reis, Marco Antonio Moura

    2013-01-01

    AIM: To study the association between the incidence of gastric cancer and populational exposure to risk/protective factors through an analysis of international databases. METHODS: Open-access global databases concerning the incidence of gastric cancer and its risk/protective factors were identified through an extensive search on the Web. As its distribution was neither normal nor symmetric, the cancer incidence of each country was categorized according to ranges of percentile distribution. The association of each risk/protective factor with exposure was measured between the extreme ranges of the incidence of gastric cancer (under the 25th percentile and above the 75th percentile) by use of the Mann-Whitney test, considering a significance level of 0.05. RESULTS: Varying amounts of missing data were observed across the factors under study. A visual analysis of scatterplot dispersion showed a weak or nonexistent correlation between the incidence of gastric cancer and the study variables. In contrast, an analysis of categorized incidence revealed that the countries with the highest human development index (HDI) values had the highest rates of obesity in males and the highest consumption of alcohol, tobacco, fruits, vegetables and meat, which were associated with higher incidences of gastric cancer. There was no significant difference for the risk factors of obesity in females and fish consumption. CONCLUSION: Higher HDI values, coupled with a higher prevalence of male obesity and a higher per capita consumption of alcohol, tobacco, fruits, vegetables and meat, are associated with a higher incidence of gastric cancer based on an analysis of populational global data. PMID:24409066
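
    The percentile-extremes comparison lends itself to a compact sketch; the incidence and exposure values below are simulated stand-ins for the per-country database values, so only the procedure, not the numbers, reflects the study.

    ```python
    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(1)
    incidence = rng.gamma(2.0, 4.0, 120)                    # toy per-country rates
    exposure = 0.2 * incidence + rng.normal(0.0, 2.0, 120)  # e.g., tobacco use

    lo, hi = np.percentile(incidence, [25, 75])
    low_grp = exposure[incidence < lo]     # countries under the 25th percentile
    high_grp = exposure[incidence > hi]    # countries above the 75th percentile

    stat, p = mannwhitneyu(low_grp, high_grp, alternative="two-sided")
    print("U = %.1f, P = %.4f" % (stat, p))   # compare against alpha = 0.05
    ```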

  4. The relation between visualization size, grouping, and user performance.

    PubMed

    Gramazio, Connor C; Schloss, Karen B; Laidlaw, David H

    2014-12-01

    In this paper we make the following contributions: (1) we describe how the grouping, quantity, and size of visual marks affect search time, based on the results from two experiments; (2) we report how search performance relates to self-reported difficulty in finding the target for different display types; and (3) we present design guidelines based on our findings to facilitate the design of effective visualizations. Experiments 1 and 2 both asked participants to search for a unique target in colored visualizations to test how the grouping, quantity, and size of marks affect user performance. In Experiment 1, the target square was embedded in a grid of squares, and in Experiment 2 the target was a point in a scatterplot. Search performance was faster when colors were spatially grouped than when they were randomly arranged. The quantity of marks had little effect on search time for grouped displays ("pop-out"), but increasing the quantity of marks slowed reaction time for random displays. Regardless of color layout (grouped vs. random), response times were slowest for the smallest mark size and decreased as mark size increased to a point, after which response times plateaued. In addition to these two experiments, we also discuss potential application areas and report preliminary findings from a small case study suggesting that mark size may affect how users infer the intended use of a visualization. We conclude with a list of design guidelines that focus on how best to create visualizations based on the grouping, quantity, and size of visual marks.

  5. The Effect of Patient and Surgical Characteristics on Renal Function After Partial Nephrectomy.

    PubMed

    Winer, Andrew G; Zabor, Emily C; Vacchio, Michael J; Hakimi, A Ari; Russo, Paul; Coleman, Jonathan A; Jaimes, Edgar A

    2018-06-01

    The purpose of the study was to identify patient and disease characteristics that have an adverse effect on renal function after partial nephrectomy. We conducted a retrospective review of 387 patients who underwent partial nephrectomy for renal tumors between 2006 and 2014. A line plot with locally weighted scatterplot smoothing (LOWESS) was generated to visually assess renal function over time. Univariable and multivariable longitudinal regression analyses incorporated a random intercept and slope to evaluate the association of patient and disease characteristics with renal function after surgery. Median age was 60 years, and most patients were male (255 patients [65.9%]) and white (343 patients [88.6%]). In univariable analysis, advanced age at surgery, larger tumor size, male sex, longer ischemia time, history of smoking, and hypertension were significantly associated with lower preoperative estimated glomerular filtration rate (eGFR). In multivariable analysis, independent predictors of reduced renal function after surgery included advanced age, lower preoperative eGFR, and longer ischemia time. Length of time from surgery was strongly associated with improvement in renal function among all patients. Independent predictors of postoperative decline in renal function include advanced age, lower preoperative eGFR, and longer ischemia time. A substantial number of subjects had recovery of renal function over time after surgery, which continued past the 12-month mark. These findings suggest that patients who undergo partial nephrectomy can experience long-term improvement in renal function. This improvement is most pronounced among younger patients with higher preoperative eGFR. Copyright © 2017 Elsevier Inc. All rights reserved.
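
    For readers unfamiliar with the smoothing step, the sketch below shows a generic LOWESS call of the kind used to trace renal function over follow-up time; the eGFR trajectory is simulated, not patient data.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    months = rng.uniform(0, 36, 300)    # time since partial nephrectomy
    egfr = 55 + 8 * (1 - np.exp(-months / 6)) + rng.normal(0, 5, 300)

    # locally weighted scatterplot smoothing of eGFR versus follow-up time
    smoothed = sm.nonparametric.lowess(egfr, months, frac=0.3)
    print(smoothed[:5])   # column 0: sorted months, column 1: fitted trend
    ```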

  6. A new method for correlation analysis of compositional (environmental) data - a worked example.

    PubMed

    Reimann, C; Filzmoser, P; Hron, K; Kynčlová, P; Garrett, R G

    2017-12-31

    Most data in environmental sciences and geochemistry are compositional. The very unit used to report the data (e.g., μg/l, mg/kg, wt%) implies that the analytical results for each element are not free to vary independently of the other measured variables. This is often neglected in statistical analysis, where a simple log-transformation of the single variables is insufficient to put the data into an acceptable geometry. This matters for bivariate data analysis and for correlation analysis, for which the data need to be appropriately log-ratio transformed. A new approach based on the isometric log-ratio (ilr) transformation, leading to so-called symmetric coordinates, is presented here. Summarizing the correlations in a heat map gives a powerful tool for bivariate data analysis. An application of the new method is demonstrated using a data set from a regional geochemical mapping project based on soil O- and C-horizon samples. Differences from 'classical' correlation analysis based on log-transformed data are highlighted. The fact that some expected strong positive correlations appear and remain unchanged even following a log-ratio transformation has probably led to the misconception that the special nature of compositional data can be ignored when working with trace elements. The example dataset is employed to demonstrate that using 'classical' correlation analysis and plotting XY diagrams (scatterplots) based on the original or simply log-transformed data can easily lead to severe misinterpretations of the relationships between elements. Copyright © 2017 Elsevier B.V. All rights reserved.
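
    The pitfall the authors describe can be reproduced in a few lines. The sketch below contrasts naive log-scale correlations with correlations after a centred log-ratio (clr) transform, a simpler relative of the paper's symmetric ilr coordinates; the composition matrix is simulated, and the clr is used here only as an accessible stand-in.

    ```python
    import numpy as np

    def clr(x):
        """Centred log-ratio transform, applied row-wise to a composition."""
        logx = np.log(x)
        return logx - logx.mean(axis=1, keepdims=True)

    rng = np.random.default_rng(3)
    raw = rng.lognormal(0.0, 1.0, size=(200, 4))      # four "elements"
    comp = raw / raw.sum(axis=1, keepdims=True)       # rows closed to sum 1

    print(np.corrcoef(np.log(comp), rowvar=False).round(2))  # naive log corr
    print(np.corrcoef(clr(comp), rowvar=False).round(2))     # clr-based corr
    ```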

  7. Preterm Versus Term Children: Analysis of Sedation/Anesthesia Adverse Events and Longitudinal Risk.

    PubMed

    Havidich, Jeana E; Beach, Michael; Dierdorf, Stephen F; Onega, Tracy; Suresh, Gautham; Cravero, Joseph P

    2016-03-01

    Preterm and former preterm children frequently require sedation/anesthesia for diagnostic and therapeutic procedures. Our objective was to determine the age at which children born at <37 weeks gestational age are no longer at increased risk for sedation/anesthesia adverse events. Our secondary objective was to describe the nature and incidence of adverse events. This is a prospective observational study of children receiving sedation/anesthesia for diagnostic and/or therapeutic procedures outside of the operating room by the Pediatric Sedation Research Consortium. A total of 57,227 patients 0 to 22 years of age were eligible for this study. All adverse events and descriptive terms were predefined. Logistic regression and locally weighted scatterplot smoothing were used for analysis. Preterm and former preterm children had higher adverse event rates (14.7% vs 8.5%) compared with children born at term. Our analysis revealed a biphasic pattern for the development of adverse sedation/anesthesia events. Airway and respiratory adverse events were most commonly reported. MRI scans were the most commonly performed procedures in both categories of patients. Patients born preterm are nearly twice as likely to develop sedation/anesthesia adverse events, and this risk continues up to 23 years of age. We recommend obtaining birth history during the formulation of an anesthetic/sedation plan, with heightened awareness that preterm and former preterm children may be at increased risk. Further prospective studies focusing on the etiology and prevention of adverse events in former preterm patients are warranted. Copyright © 2016 by the American Academy of Pediatrics.

  8. Functional analysis and classification of phytoplankton based on data from an automated flow cytometer.

    PubMed

    Malkassian, Anthony; Nerini, David; van Dijk, Mark A; Thyssen, Melilotus; Mante, Claude; Gregori, Gerald

    2011-04-01

    Analytical flow cytometry (FCM) is well suited for the analysis of phytoplankton communities in fresh and sea waters. The measurement of light scatter and autofluorescence properties of particles by FCM provides optical fingerprints, which enables different phytoplankton groups to be separated. A submersible version of the CytoSense flow cytometer (the CytoSub) has been designed for in situ autonomous sampling and analysis, making it possible to monitor phytoplankton at a short temporal scale and obtain accurate information about its dynamics. For data analysis, a manual clustering is usually performed a posteriori: data are displayed on histograms and scatterplots, and group discrimination is made by drawing and combining regions (gating). The purpose of this study is to provide greater objectivity in the data analysis by applying a nonmanual and consistent method to automatically discriminate clusters of particles. In other words, we seek partitioning methods based on the optical fingerprints of each particle. As the CytoSense is able to record the full pulse shape for each variable, it quickly generates a large and complex dataset to analyze. The shape, length, and area of each curve were chosen as descriptors for the analysis. To test the developed method, numerical experiments were performed on simulated curves. Then, the method was applied and validated on phytoplankton culture data. Promising results have been obtained with a mixture of various species whose optical fingerprints overlapped considerably and could not be accurately separated using manual gating. Copyright © 2011 International Society for Advancement of Cytometry.

  9. TripAdvisor^{N-D}: A Tourism-Inspired High-Dimensional Space Exploration Framework with Overview and Detail.

    PubMed

    Nam, Julia EunJu; Mueller, Klaus

    2013-02-01

    Gaining a true appreciation of high-dimensional space remains difficult since all of the existing high-dimensional space exploration techniques serialize the space travel in some way. This is not so foreign to us since we, when traveling, also experience the world in a serial fashion. But we typically have access to a map to help with positioning, orientation, navigation, and trip planning. Here, we propose a multivariate data exploration tool that compares high-dimensional space navigation with a sightseeing trip. It decomposes this activity into five major tasks: 1) Identify the sights: use a map to identify the sights of interest and their location; 2) Plan the trip: connect the sights of interest along a specifiable path; 3) Go on the trip: travel along the route; 4) Hop off the bus: experience the location, look around, zoom into detail; and 5) Orient and localize: regain bearings in the map. We describe intuitive and interactive tools for all of these tasks, both global navigation within the map and local exploration of the data distributions. For the latter, we describe a polygonal touchpad interface which enables users to smoothly tilt the projection plane in high-dimensional space to produce multivariate scatterplots that best convey the data relationships under investigation. Motion parallax and illustrative motion trails aid in the perception of these transient patterns. We describe the use of our system within two applications: 1) the exploratory discovery of data configurations that best fit a personal preference in the presence of tradeoffs and 2) interactive cluster analysis via cluster sculpting in N-D.

  10. Analysis of multispectral and hyperspectral longwave infrared (LWIR) data for geologic mapping

    NASA Astrophysics Data System (ADS)

    Kruse, Fred A.; McDowell, Meryl

    2015-05-01

    Multispectral MODIS/ASTER Airborne Simulator (MASTER) data and Hyperspectral Thermal Emission Spectrometer (HyTES) data covering the 8-12 μm spectral range (longwave infrared or LWIR) were analyzed for an area near Mountain Pass, California. Decorrelation-stretched images were initially used to highlight spectral differences between geologic materials. Both datasets were atmospherically corrected using the ISAC method, and the Normalized Emissivity approach was used to separate temperature and emissivity. The MASTER data had 10 LWIR spectral bands and approximately 35-meter spatial resolution and covered a larger area than the HyTES data, which were collected with 256 narrow (approximately 17-nm-wide) spectral bands at approximately 2.3-meter spatial resolution. Spectra for key spatially coherent, spectrally determined geologic units in overlap areas were overlain and visually compared to determine similarities and differences. Endmember spectra were extracted from both datasets using n-dimensional scatterplotting and compared to emissivity spectral libraries for identification. Endmember distributions and abundances were then mapped using Mixture-Tuned Matched Filtering (MTMF), a partial unmixing approach. Multispectral results demonstrate separation of silica-rich vs. non-silicate materials, with distinct mapping of carbonate areas and general correspondence to the regional geology. Hyperspectral results illustrate refined mapping of silicates with distinction between similar units based on the position, character, and shape of high-resolution emission minima near 9 μm. Calcite and dolomite were separated, identified, and mapped using HyTES based on a shift of the main carbonate emissivity minimum from approximately 11.3 to 11.2 μm, respectively. Both datasets demonstrate the utility of LWIR spectral remote sensing for geologic mapping.

  11. Genetic perspective of uniparental mitochondrial DNA landscape on the Punjabi population, Pakistan.

    PubMed

    Bhatti, Shahzad; Abbas, Sana; Aslamkhan, Muhammad; Attimonelli, Marcella; Trinidad, Magali Segundo; Aydin, Hikmet Hakan; de Souza, Erica Martinha Silva; Gonzalez, Gerardo Rodriguez

    2017-07-26

    To investigate the uniparental genetic structure of the Punjabi population from the mtDNA perspective and to establish an appropriate mtDNA forensic database, we studied maternally unrelated Punjabi subjects (N = 100) from two caste groups (Arain and Gujar) belonging to the territory of Punjab. The complete control region was elucidated by Sanger sequencing, and the resulting 58 distinct haplotypes were assigned to appropriate haplogroups according to the most recently updated mtDNA phylogeny. We found a homogeneous distribution of Eurasian haplogroups across Punjab Province, with a strong affinity to European populations. Punjabi castes are primarily a composite of substantial South Asian, East Asian and West Eurasian lineages. Moreover, for the first time we have defined the new sub-haplogroup M52b1, characterized by 16223T, 16275G and 16438A, in the Gujar caste. The vast array of mtDNA variants displayed in this study suggests that the haplogroup composition reflects extensive genetic intermixing, population admixture and demographic expansion from diverse origins, whereas the matrilineal gene pool was phylogeographically homogeneous across Punjab. This picture was further supported by a PCA scatterplot in which the Punjabi population clustered with other South Asian populations. Finally, the high power of discrimination (0.8819) and low random match probability (0.0085%) support the value of this mtDNA control-region dataset as a forensic database and as a means to gain deeper insight into the genetic ancestry of the contemporary matrilineal phylogeny.
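
    The two forensic summary statistics quoted at the end have standard haplotype-frequency forms, sketched below with invented counts; exact definitions vary between forensic reports, so treat this as the textbook version only, not a reproduction of the study's figures.

    ```python
    import numpy as np

    # toy counts for 58 distinct haplotypes among N = 100 subjects
    counts = np.array([12, 9, 7] + [2] * 17 + [1] * 38)
    freqs = counts / counts.sum()

    random_match_prob = np.sum(freqs ** 2)   # chance two random samples match
    power_of_discrimination = 1.0 - random_match_prob
    print("PD = %.4f, RMP = %.4f%%" % (power_of_discrimination,
                                       100 * random_match_prob))
    ```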

  12. Dose-response relationship of robot-assisted stroke motor rehabilitation: the impact of initial motor status.

    PubMed

    Hsieh, Yu-wei; Wu, Ching-yi; Lin, Keh-chung; Yao, Grace; Wu, Kuen-yuh; Chang, Ya-ju

    2012-10-01

    The increasing availability of robot-assisted therapy (RT), which provides quantifiable, reproducible, interactive, and intensive practice, holds promise for stroke rehabilitation, but data on its dose-response relation are scarce. This study used 2 different intensities of RT to examine the treatment effects of RT and the influence of the severity of initial motor deficits on outcomes. Fifty-four patients with stroke were randomized to a 4-week intervention of higher-intensity RT, lower-intensity RT, or control treatment. The primary outcome, the Fugl-Meyer Assessment, was administered at baseline, midterm, and posttreatment. Secondary outcomes included the Medical Research Council scale, the Motor Activity Log, and the physical domains of the Stroke Impact Scale. The higher-intensity RT group showed significantly greater improvements on the Fugl-Meyer Assessment than the lower-intensity RT and control treatment groups at midterm (P=0.003 and P=0.02) and at posttreatment (P=0.04 and P=0.02). Within-group gains on the secondary outcomes were significant, but the differences among the 3 groups did not reach significance. Recovery rates of the higher-intensity RT group were higher than those of the lower-intensity RT group, particularly on the Fugl-Meyer Assessment. Scatterplots with curve fitting showed that patients with moderate motor deficits gained more improvement than those with severe or mild deficits after the higher-intensity RT. This study demonstrated that the higher treatment intensity provided by RT was associated with better motor outcomes for patients with stroke, which may shape further stroke rehabilitation. Clinical Trial Registration- URL: http://clinicaltrials.gov. Unique identifier: NCT00917605.

  13. The correlation between preoperative volumetry and real graft weight: comparison of two volumetry programs

    PubMed Central

    Mussin, Nadiar; Sumo, Marco; Choi, YoungRok; Choi, Jin Yong; Ahn, Sung-Woo; Yoon, Kyung Chul; Kim, Hyo-Sin; Hong, Suk Kyun; Yi, Nam-Joon; Suh, Kyung-Suk

    2017-01-01

    Purpose Liver volumetry is a vital component in living donor liver transplantation to determine an adequate graft volume that meets the metabolic demands of the recipient and at the same time ensures donor safety. Most institutions use preoperative contrast-enhanced CT image-based software programs to estimate graft volume. The objective of this study was to evaluate the accuracy of 2 liver volumetry programs (Rapidia vs. Dr. Liver) in preoperative right liver graft estimation compared with real graft weight. Methods Data from 215 consecutive right lobe living donors between October 2013 and August 2015 were retrospectively reviewed. One hundred seven patients were enrolled in the Rapidia group and 108 patients were included in the Dr. Liver group. Estimated graft volumes generated by both software programs were compared with the real graft weight measured during surgery, and further classified into minimal difference (≤15%) and big difference (>15%). Correlation coefficients and degrees of difference were determined. Linear regressions were calculated and the results depicted as scatterplots. Results Minimal difference was observed in 69.4% of cases from the Dr. Liver group, and big difference was seen in 44.9% of cases from the Rapidia group (P = 0.035). Linear regression analysis showed positive correlation in both groups (P < 0.01). However, the correlation coefficient was better for the Dr. Liver group (R² = 0.719) than for the Rapidia group (R² = 0.688). Conclusion Dr. Liver can predict right liver graft size more accurately and faster than Rapidia, and can facilitate preoperative planning in living donor liver transplantation. PMID:28382294

  14. The dynamics of ant mosaics in tropical rainforests characterized using the Self-Organizing Map algorithm.

    PubMed

    Dejean, Alain; Azémar, Frédéric; Céréghino, Régis; Leponce, Maurice; Corbara, Bruno; Orivel, Jérôme; Compin, Arthur

    2016-08-01

    Ants, the most abundant taxa among canopy-dwelling animals in tropical rainforests, are mostly represented by territorially dominant arboreal ants (TDAs) whose territories are distributed in a mosaic pattern (arboreal ant mosaics). Large TDA colonies regulate insect herbivores, with implications for forestry and agronomy. What generates these mosaics in plant formations, which are dynamic, still needs to be better understood. Based on empirical research on three Cameroonian tree species (Lophira alata, Ochnaceae; Anthocleista vogelii, Gentianaceae; and Barteria fistulosa, Passifloraceae), we used the Self-Organizing Map (SOM, a neural network) to illustrate the succession of TDAs as their host trees grow and age. The SOM separated the trees by species and by size for L. alata, which can reach 60 m in height and live several centuries. An ontogenic succession of TDAs from saplings to mature trees is shown, and some ecological traits are highlighted for certain TDAs. Also, because the SOM permits the analysis of data with many zeroes, with no effect of outliers on the overall scatterplot distributions, we obtained ecological information on rare species. Finally, the SOM permitted us to show that functional groups cannot be selected at the genus level, as congeneric species can have very different ecological niches, something particularly true for Crematogaster spp., which include a species specifically associated with B. fistulosa, nondominant species and TDAs. Therefore, the SOM permitted the complex relationships between TDAs and their growing host trees to be analyzed, while also providing new information on the ecological traits of the ant species involved. © 2015 Institute of Zoology, Chinese Academy of Sciences.

  15. Relative importance of P and N in macrophyte and epilithic algae biomass in a wastewater-impacted oligotrophic river.

    PubMed

    Taube, Nadine; He, Jianxun; Ryan, M Cathryn; Valeo, Caterina

    2016-08-01

    Understanding the effect of nutrient loading on biomass growth in wastewater-impacted rivers is important for optimizing wastewater treatment to avoid excessive biomass growth in the receiving water body. This paper directly relates wastewater treatment plant (WWTP) effluent nutrients (including ammonia (NH3-N), nitrate (NO3-N) and total phosphorus (TP)) to the temporal and spatial distribution of epilithic algae and macrophyte biomass in an oligotrophic river. Annual macrophyte biomass, epilithic algae data and WWTP effluent nutrient data from 1980 to 2012 were statistically analysed. Because discharge can affect aquatic biomass growth, locally weighted scatterplot smoothing (LOWESS) was used to remove the influence of river discharge from the aquatic biomass (macrophyte and algae) data before further analysis was conducted. The results from LOWESS indicated that aquatic biomass did not increase beyond site-specific threshold discharge values in the river. The LOWESS-estimated biomass residuals showed a variable response to different nutrients. Macrophyte biomass residuals showed a decreasing trend concurrent with enhanced nutrient removal at the WWTP and decreased effluent P loading, whereas epilithic algae biomass residuals showed a greater response to enhanced N removal. Correlation analysis between effluent nutrient concentrations and the biomass residuals (both epilithic algae and macrophytes) suggested that aquatic biomass is nitrogen limited, especially by NH3-N, at most sampling sites. The response of aquatic biomass residuals to effluent nutrient concentrations did not change with increasing distance from the WWTP but differed between P and N, allowing for additional conclusions about nutrient limitation in specific river reaches. The data further showed that the mixing process between the effluent and the river influences the spatial distribution of biomass growth.

  16. Psychosocial wellbeing and physical health among Tamil schoolchildren in northern Sri Lanka.

    PubMed

    Hamilton, Alexander; Foster, Charlie; Richards, Justin; Surenthirakumaran, Rajendra

    2016-01-01

    Mental disorders contribute to the global disease burden and have an increased prevalence among children in emergency settings. Good physical health is crucial for mental well-being, although physical health is multifactorial and the nature of this relationship is not fully understood. Using Sri Lanka as a case study, we assessed the baseline levels of, and the association between, mental health and physical health in Tamil school children. We conducted a cross-sectional study of mental and physical health in 10 schools in Kilinochchi town in northern Sri Lanka. All Grade 8 children attending the selected schools were eligible to participate in the study. Mental health was assessed using the Sri Lankan Index for Psychosocial Stress - Child Version. Physical health was assessed using body mass index (BMI) for age, height-for-age Z scores and the Multi-stage Fitness Test. The association between physical and mental health variables was assessed using scatterplots, and correlation was assessed using Pearson's r. There were 461 participants included in the study. Girls significantly outperformed boys in the mental health testing (t(459) = 2.201, p < 0.05). Boys had significantly lower average BMI-for-age and height-for-age Z scores than girls (BMI: t(459) = -4.74, p < 0.001; height: t(459) = -3.54, p < 0.001). When compared to global averages, both sexes underperformed in the Multi-stage Fitness Test and had a higher prevalence of thinness and stunting. We identified no meaningful association between the selected variables. Our results do not support the supposition that the selected elements of physical health are related to mental health in post-conflict Sri Lanka. However, we identified a considerable physical health deficit in Tamil school children.

  17. Alterations in plasma phosphorus, red cell 2,3-diphosphoglycerate and P50 following open heart surgery.

    PubMed

    Hasan, R A; Sarnaik, A P; Meert, K L; Dabbagh, S; Simpson, P; Makimi, M

    1994-12-01

    To evaluate changes in and the correlation between plasma phosphorus, red cell 2,3-diphosphoglycerate (DPG) and adenosine triphosphate (ATP), and P50 in children following heart surgery. Prospective, observational study with factorial design. A pediatric intensive care unit in a university hospital. Twenty children undergoing open heart surgery for congenital heart defects. None. Red cell 2,3-DPG and ATP, P50, plasma phosphorus, and arterial lactate were obtained before and at 1, 8, 16, 24, 48, and 72 hours after surgery. The amount of intravenous fluid and glucose administered and the age of the blood utilized were documented. Variables were analyzed by repeated-measures analysis of variance followed by paired t-tests. To investigate the relationship between variables at each time point, scatterplot matrices and correlation coefficients were obtained. There was a reduction in plasma phosphorus, red cell 2,3-DPG, and P50 and an increase in arterial lactate at 1, 8, 16, 24, 48, and 72 hours after surgery. Red cell 2,3-DPG correlated with P50 at 1, 8 and 16 hours. The decrease in plasma phosphorus correlated with the amounts of intravenous fluid and glucose administered on the day of surgery and on the first and second postoperative days. The age of the blood utilized correlated with the decrease in red cell 2,3-DPG on the day of surgery. Reduction in red cell 2,3-DPG, P50, and plasma phosphorus occurs after open heart surgery in children. These changes can potentially contribute to impaired oxygen utilization in the postoperative period, when adequacy of tissue oxygenation is critical.

  18. Validating hospital antibiotic purchasing data as a metric of inpatient antibiotic use.

    PubMed

    Tan, Charlie; Ritchie, Michael; Alldred, Jason; Daneman, Nick

    2016-02-01

    Antibiotic purchasing data are a widely used, but unsubstantiated, measure of antibiotic consumption. To validate this source, we compared purchasing data from hospitals and external medical databases with patient-level dispensing data. Antibiotic purchasing and dispensing data from internal hospital records and purchasing data from IMS Health were obtained for two hospitals between May 2013 and April 2015. Internal purchasing data were validated against dispensing data, and IMS data were compared with both internal metrics. Scatterplots of individual antimicrobial data points were generated; Pearson's correlation and linear regression coefficients were computed. A secondary analysis re-examined these correlations over shorter calendar periods. Internal purchasing data were strongly correlated with dispensing data, with correlation coefficients of 0.90 (95% CI = 0.83-0.95) and 0.98 (95% CI = 0.95-0.99) at hospitals A and B, respectively. Although dispensing data were consistently lower than purchasing data, this was attributed to a single antibiotic at both hospitals. IMS data were favourably correlated with, but underestimated, internal purchasing and dispensing data. This difference was accounted for by eight antibiotics for which direct sales from some manufacturers were not included in the IMS database. The correlation between purchasing and dispensing data was consistent across periods as short as 3 months, but not at monthly intervals. Both internal and external antibiotic purchasing data are strongly correlated with dispensing data. If outliers are accounted for appropriately, internal purchasing data could be used for cost-effective evaluation of antimicrobial stewardship programmes, and external data sets could be used for surveillance and research across geographical regions. © The Author 2015. Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  19. Biogeochemical patterns of intermittent streams over space and time as surface flows decrease

    NASA Astrophysics Data System (ADS)

    MacNeille, R. B.; Lohse, K. A.; Godsey, S.; McCorkle, E. P.; Parsons, S.; Baxter, C.

    2016-12-01

    Climate change in the western United States is projected to lead to earlier snowmelt, increasing fire risk and potentially transitioning perennial streams to intermittent ones. Differences between perennial and intermittent streams, especially the temporal and spatial patterns of carbon and nutrient dynamics during periods of drying, are understudied. We examined spatial and temporal patterns in surface water biogeochemistry in southwest Idaho and hypothesized that as streams dry, carbon concentrations would increase due to evapoconcentration and/or increased in-stream production. Furthermore, we expected that the biogeochemical patterns of streams would become increasingly spatially heterogeneous with drying. Finally, we expected that these patterns would vary in response to fire. To test these hypotheses, we collected water samples every 50 meters from two intermittent streams, one burned and one unburned, in April, May, and June 2016 to determine surface water biogeochemistry. Results showed that average concentrations of dissolved inorganic carbon (DIC) and dissolved organic carbon (DOC) increased 3-fold from April to June at the burned site, whereas concentrations at the unburned site remained relatively constant. Interestingly, average concentrations of total nitrogen (TN) dropped substantially for the burned site over these three months, but only decreased slightly for the unburned site over the same time period. We also assessed changes in spatial correlation between the burned and unburned sites: carbon concentrations were less spatially correlated at the unburned site than at the burned site. Scatterplot matrices of DIC values indicated that at a lag distance of 300 m in April and June, the unburned site had r-values of 0.7416 and 0.5975, respectively, while the burned site had r-values of 0.9468 and 0.8783, respectively. These initial findings support our hypotheses that carbon concentrations and spatial heterogeneity increased over time.
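
    The lag-correlation figure of merit is straightforward to compute for an evenly spaced transect. The sketch below assumes the abstract's 50 m sample spacing, so a lag of six steps corresponds to the 300 m distance quoted above; the DIC series itself is simulated.

    ```python
    import numpy as np

    def lag_correlation(values, lag_steps):
        """Pearson r between samples separated by `lag_steps` intervals."""
        return np.corrcoef(values[:-lag_steps], values[lag_steps:])[0, 1]

    rng = np.random.default_rng(4)
    dic = 50 + np.cumsum(rng.normal(0, 1, 80))   # toy along-stream DIC series
    print("r at 300 m lag: %.3f" % lag_correlation(dic, 6))
    ```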

  1. Moving towards Hyper-Resolution Hydrologic Modeling

    NASA Astrophysics Data System (ADS)

    Rouf, T.; Maggioni, V.; Houser, P.; Mei, Y.

    2017-12-01

    Developing a predictive capability for terrestrial hydrology across landscapes, with water, energy and nutrients as the drivers of these dynamic systems, faces the challenge of scaling meter-scale process understanding to practical modeling scales. Hyper-resolution land surface modeling can provide a framework for addressing science questions that we are not able to answer with coarse modeling scales. In this study, we develop a hyper-resolution forcing dataset from coarser-resolution products using a physically based downscaling approach. These downscaling techniques rely on correlations with landscape variables, such as topography, roughness, and land cover. A proof of concept has been implemented over the Oklahoma domain, where high-resolution observations are available for validation purposes. Hourly NLDAS (North America Land Data Assimilation System) forcing data (i.e., near-surface air temperature, pressure, and humidity) have been downscaled to 500 m resolution over the study area from 2015 to the present. Results show that correlation coefficients between the downscaled temperature dataset and ground observations are consistently higher than those between the NLDAS temperature data at their native resolution and ground observations. Not only are the correlation coefficients higher, but the deviation around the 1:1 line in the density scatterplots is also smaller for the downscaled dataset than for the original one with respect to the ground observations. These results are encouraging, as they demonstrate that the 500 m temperature dataset agrees well with the ground information and can be adopted to force the land surface model for soil moisture estimation. The study has been expanded to wind speed and direction, incident longwave and shortwave radiation, pressure, and precipitation. Precipitation is well known to vary dramatically with elevation and orography. We are therefore pursuing a downscaling technique based on both topographical and vegetation characteristics.
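
    Physically based downscaling of temperature often starts from an elevation adjustment; the sketch below applies a fixed environmental lapse rate as a stand-in for the fuller scheme described above, which also exploits roughness and land-cover correlations. The lapse rate and the elevation values are assumptions of the example.

    ```python
    import numpy as np

    LAPSE_RATE = -6.5e-3   # K per m, standard environmental lapse rate

    def downscale_temperature(t_coarse, z_coarse, z_fine):
        """Shift a coarse-cell temperature to fine-grid elevations."""
        return t_coarse + LAPSE_RATE * (z_fine - z_coarse)

    z_fine = np.array([310.0, 355.0, 402.0, 515.0])     # 500 m pixel elevations
    print(downscale_temperature(295.0, 380.0, z_fine))  # coarse cell at 380 m
    ```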

  2. Counting Steps in Activities of Daily Living in People With a Chronic Disease Using Nine Commercially Available Fitness Trackers: Cross-Sectional Validity Study

    PubMed Central

    Beekman, Emmylou; Theunissen, Kyra; Braun, Susy; Beurskens, Anna J

    2018-01-01

    Background Measuring physical activity with commercially available activity trackers is gaining popularity. People with a chronic disease can especially benefit from knowledge about their physical activity pattern in everyday life since sufficient physical activity can contribute to wellbeing and quality of life. However, no validity data are available for this population during activities of daily living. Objective The aim of this study was to investigate the validity of 9 commercially available activity trackers for measuring step count during activities of daily living in people with a chronic disease receiving physiotherapy. Methods The selected activity trackers were Accupedo (Corusen LLC), Activ8 (Remedy Distribution Ltd), Digi-Walker CW-700 (Yamax), Fitbit Flex (Fitbit inc), Lumoback (Lumo Bodytech), Moves (ProtoGeo Oy), Fitbit One (Fitbit inc), UP24 (Jawbone), and Walking Style X (Omron Healthcare Europe BV). In total, 130 persons with chronic diseases performed standardized activity protocols based on activities of daily living that were recorded on video camera and analyzed for step count (gold standard). The validity of the trackers' step count was assessed by correlation coefficients, t tests, scatterplots, and Bland-Altman plots. Results The correlations between the number of steps counted by the activity trackers and the gold standard were low (range: –.02 to .33). For all activity trackers except for Fitbit One, a significant systematic difference with the gold standard was found for step count. Plots showed a wide range in scores for all activity trackers; Activ8 showed an average overestimation and the other 8 trackers showed underestimations. Conclusions This study showed that the validity of 9 commercially available activity trackers is low for measuring step count while individuals with chronic diseases receiving physiotherapy engage in activities of daily living. PMID:29610110
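
    A Bland-Altman comparison reduces to a mean bias and limits of agreement, as in the minimal sketch below; the step counts are toy values, not data from the 130 participants.

    ```python
    import numpy as np

    def bland_altman(device, gold):
        """Mean bias and 95% limits of agreement for paired counts."""
        diff = device - gold
        bias = diff.mean()
        half_width = 1.96 * diff.std(ddof=1)
        return bias, bias - half_width, bias + half_width

    video = np.array([820.0, 950.0, 700.0, 1010.0, 880.0, 760.0])   # gold standard
    tracker = np.array([640.0, 900.0, 510.0, 980.0, 700.0, 690.0])  # tracker output
    print("bias %.1f steps, LoA [%.1f, %.1f]" % bland_altman(tracker, video))
    ```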

  3. A Wetness Index Using Terrain-Corrected Surface Temperature and Normalized Difference Vegetation Index Derived from Standard MODIS Products: An Evaluation of Its Use in a Humid Forest-Dominated Region of Eastern Canada

    PubMed Central

    Hassan, Quazi K.; Bourque, Charles P.-A.; Meng, Fan-Rui; Cox, Roger M.

    2007-01-01

    In this paper we develop a method to estimate land-surface water content in a mostly forest-dominated (humid) and topographically varied region of eastern Canada. The approach is centered on a temperature-vegetation wetness index (TVWI) that uses standard 8-day MODIS-based image composites of land surface temperature (TS) and surface reflectance as primary input. In an attempt to improve estimates of TVWI in high-elevation areas, terrain-induced variations in TS are removed by applying gridded, digital elevation model-based calculations of vertical atmospheric pressure to the calculation of surface potential temperature (θS). Here, θS corrects TS to the value it would have at mean sea level (i.e., ∼101.3 kPa) in a neutral atmosphere. The vegetation component of the TVWI uses 8-day composites of surface reflectance in the calculation of normalized difference vegetation index (NDVI) values. The TVWI and the corresponding wet and dry edges are based on an interpretation of scatterplots generated by plotting θS as a function of NDVI. A comparison of spatially averaged field measurements of volumetric soil water content (VSWC) and TVWI for the 2003-2005 period revealed that their variation over time was similar in magnitude. Growing-season point-mean measurements of VSWC and TVWI were 31.0% and 28.8% for 2003, 28.6% and 29.4% for 2004, and 40.0% and 38.4% for 2005, respectively. An evaluation of the long-term spatial distribution of land-surface wetness generated with the new θS-NDVI function and a process-based model of soil water content showed a strong relationship (r² = 95.7%). PMID:28903212
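
    The θS correction is Poisson's equation for potential temperature. The sketch below applies it with a simple barometric pressure-from-elevation approximation; the ~8.4 km scale height is an assumption of the example, not necessarily the paper's pressure calculation.

    ```python
    import numpy as np

    P0 = 101.3      # reference mean-sea-level pressure, kPa
    KAPPA = 0.286   # R/cp for dry air

    def surface_potential_temperature(ts_kelvin, pressure_kpa):
        """Correct surface temperature TS to its mean-sea-level value."""
        return ts_kelvin * (P0 / pressure_kpa) ** KAPPA

    elev_m = np.array([50.0, 400.0, 750.0])
    p_kpa = P0 * np.exp(-elev_m / 8434.0)   # barometric approximation
    print(surface_potential_temperature(290.0, p_kpa))
    ```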

  4. Detection of selection signatures in Piemontese and Marchigiana cattle, two breeds with similar production aptitudes but different selection histories.

    PubMed

    Sorbolini, Silvia; Marras, Gabriele; Gaspa, Giustino; Dimauro, Corrado; Cellesi, Massimo; Valentini, Alessio; Macciotta, Nicolò Pp

    2015-06-23

    Domestication and selection are processes that alter the pattern of within- and between-population genetic variability. They can be investigated at the genomic level by tracing so-called selection signatures. Recently, sequence polymorphisms at the genome-wide level have been investigated in a wide range of animals. A common approach to detecting selection signatures is to compare breeds that have been selected for different breeding goals (e.g. dairy and beef cattle). However, genetic variation in breeds with similar production aptitudes and similar phenotypes can be related to differences in their selection history. In this study, we investigated selection signatures between two Italian beef cattle breeds, Piemontese and Marchigiana, using genotyping data obtained with the Illumina BovineSNP50 BeadChip. The comparison was based on the fixation index (Fst), combined with a locally weighted scatterplot smoothing (LOWESS) regression and a control chart approach. In addition, analyses of Fst were carried out to confirm candidate genes. In particular, data were processed using the varLD method, which compares the regional variation of linkage disequilibrium between populations. Genome scans confirmed the presence of selective sweeps in the genomic regions that harbour candidate genes known to affect production traits in cattle, such as DGAT1, ABCG2, CAPN3, MSTN and FTO. In addition, several new putative candidate genes (for example ALAS1, ABCB8, ACADS and SOD1) were detected. This study provided evidence of the different selection histories of two cattle breeds and the usefulness of genomic scans to detect selective sweeps even in cattle breeds bred for similar production aptitudes.

  5. Season of Sampling and Season of Birth Influence Serotonin Metabolite Levels in Human Cerebrospinal Fluid

    PubMed Central

    Luykx, Jurjen J.; Bakker, Steven C.; Lentjes, Eef; Boks, Marco P. M.; van Geloven, Nan; Eijkemans, Marinus J. C.; Janson, Esther; Strengman, Eric; de Lepper, Anne M.; Westenberg, Herman; Klopper, Kai E.; Hoorn, Hendrik J.; Gelissen, Harry P. M. M.; Jordan, Julian; Tolenaar, Noortje M.; van Dongen, Eric P. A.; Michel, Bregt; Abramovic, Lucija; Horvath, Steve; Kappen, Teus; Bruins, Peter; Keijzers, Peter; Borgdorff, Paul; Ophoff, Roel A.; Kahn, René S.

    2012-01-01

    Background Animal studies have revealed seasonal patterns in cerebrospinal fluid (CSF) monoamine (MA) turnover. In humans, no study had systematically assessed seasonal patterns in CSF MA turnover in a large set of healthy adults. Methodology/Principal Findings Standardized amounts of CSF were prospectively collected from 223 healthy individuals undergoing spinal anesthesia for minor surgical procedures. The metabolites of serotonin (5-hydroxyindoleacetic acid, 5-HIAA), dopamine (homovanillic acid, HVA) and norepinephrine (3-methoxy-4-hydroxyphenylglycol, MHPG) were measured using high performance liquid chromatography (HPLC). Concentration measurements by sampling and birth dates were modeled using a non-linear quantile cosine function and locally weighted scatterplot smoothing (LOESS, span = 0.75). The cosine model showed a unimodal season-of-sampling 5-HIAA zenith in April and a nadir in October (p-value of the amplitude of the cosine = 0.00050), with predicted maximum (PCmax) and minimum (PCmin) concentrations of 173 and 108 nmol/L, respectively, implying a 60% increase from trough to peak. Season of birth showed a unimodal 5-HIAA zenith in May and a nadir in November (p = 0.00339; PCmax = 172 and PCmin = 126). The non-parametric LOESS showed a pattern similar to the cosine in both the season-of-sampling and season-of-birth models, validating the cosine model. A final model including both sampling and birth months demonstrated that both sampling and birth seasons were independent predictors of 5-HIAA concentrations. Conclusion In subjects without mental illness, 5-HT turnover shows circannual variation by season of sampling as well as season of birth, with peaks in spring and troughs in fall. PMID:22312427
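
    A cosinor-style fit like the one described can be reproduced with a three-parameter cosine. In the sketch below, the 5-HIAA values are simulated around the reported April zenith, and the parameterization is an assumption of the example rather than the authors' exact quantile cosine model.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def seasonal_cosine(month, amp, phase, mean):
        """Annual-period cosine; `month` runs 1-12, peak at `phase`."""
        return mean + amp * np.cos(2 * np.pi * (month - phase) / 12.0)

    rng = np.random.default_rng(5)
    month = rng.integers(1, 13, 223).astype(float)
    hiaa = seasonal_cosine(month, 32.0, 4.0, 140.0) + rng.normal(0, 20, 223)

    (amp, phase, mean), _ = curve_fit(seasonal_cosine, month, hiaa,
                                      p0=(10.0, 6.0, 120.0))
    print("zenith month %.1f, nadir month %.1f" % (phase % 12, (phase + 6) % 12))
    print("predicted max %.0f, min %.0f nmol/L" % (mean + amp, mean - amp))
    ```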

  6. Weak cation magnetic separation technology and MALDI-TOF-MS in screening serum protein markers in primary type I osteoporosis.

    PubMed

    Shi, X L; Li, C W; Liang, B C; He, K H; Li, X Y

    2015-11-30

    We investigated weak cation magnetic separation technology and matrix-assisted laser desorption ionization-time of flight-mass spectrometry (MALDI-TOF-MS) for screening serum protein markers of primary type I osteoporosis. We selected 16 postmenopausal women with osteoporosis and nine postmenopausal women as controls to find a new method for screening biomarkers and establishing a diagnostic model for primary type I osteoporosis. Serum samples were obtained from controls and patients. Serum protein was extracted with the WCX protein chip system, and protein fingerprints were examined using MALDI-TOF-MS. Data preprocessing and model construction were handled by the ProteinChip system. The diagnostic models were established using a genetic algorithm combined with a support vector machine (SVM). The SVM model with the highest Youden index was selected. Combinations with the highest accuracy in distinguishing different groups of data were selected as potential biomarkers. From the two groups of serum proteins, 123 cumulative MS protein peaks were selected, and protein peaks with significant intensity differences in the 16 postmenopausal women with osteoporosis were identified. Comparison of the Youden index across the four groups of protein peaks showed that the highest-scoring peaks had mass-to-charge ratios of 8909.047, 8690.658, 13745.48, and 15114.52. A diagnostic model was established with these four markers as candidates, and its specificity and sensitivity were found to be 100%. The two groups of specimens were clearly distinguishable in the scatterplot of the SVM results. We established a diagnostic model and provide a new serological method for screening and diagnosis of osteoporosis with high sensitivity and specificity.
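
    The classifier-plus-Youden-index step can be sketched generically with scikit-learn; the peak-intensity matrix below is simulated for 9 controls and 16 patients and is not the study's spectra, and cross-validated prediction stands in for the paper's model-selection loop.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_predict
    from sklearn.svm import SVC

    rng = np.random.default_rng(6)
    # toy intensities at the four candidate m/z peaks
    X = np.vstack([rng.normal(0.0, 1.0, (9, 4)),     # controls
                   rng.normal(1.5, 1.0, (16, 4))])   # osteoporosis patients
    y = np.array([0] * 9 + [1] * 16)

    pred = cross_val_predict(SVC(kernel="rbf"), X, y, cv=5)
    sens = np.mean(pred[y == 1] == 1)
    spec = np.mean(pred[y == 0] == 0)
    print("sensitivity %.2f, specificity %.2f, Youden J %.2f"
          % (sens, spec, sens + spec - 1))
    ```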

  7. Situational judgment test as an additional tool in a medical admission test: an observational investigation.

    PubMed

    Luschin-Ebengreuth, Marion; Dimai, Hans P; Ithaler, Daniel; Neges, Heide M; Reibnegger, Gilbert

    2015-03-14

    In the framework of medical university admission procedures, the assessment of non-cognitive abilities is increasingly demanded. As a tool for assessing personal qualities or the ability to handle theoretical social constructs in complex situations, the Situational Judgment Test (SJT), among other measurement instruments, is discussed in the literature. This study focuses on the development and the results of the SJT as part of the admission test for the study of human medicine and dentistry at one medical university in Austria. It is an observational investigation focusing on the results of the SJT; 4741 applicants were included in the study. To yield comparable results for the different test parts, "relative scores" for each test part were calculated. Performance differences between women and men in the various test parts were analyzed using effect sizes based on comparisons of mean values (Cohen's d). The associations between the relative scores achieved in the various test parts were assessed by computing pairwise linear correlation coefficients between all test parts and visualized by bivariate scatterplots. Among successful candidates, men consistently outperformed women. Men performed better in physics and mathematics; women performed better in the SJT part. The least discriminatory test part was the SJT. A strong correlation between biology and chemistry and moderate correlations between the other test parts, except the SJT, were evident. The relative scores were not symmetrically distributed. The low correlations between the SJT and the other test parts point to the SJT's low cognitive loading. Adding the SJT to the admission test, in order to cover more than knowledge and understanding of the natural sciences among the applicants, has been quite successful.

  8. What has methylmercury in umbilical cords told us? - Minamata disease.

    PubMed

    Yorifuji, Takashi; Kashima, Saori; Tsuda, Toshihide; Harada, Masazumi

    2009-12-20

    Severe methylmercury poisoning occurred in Minamata and neighboring communities in the 1950s and 1960s. The exposed patients manifested neurological signs, and some patients exposed in utero were born with so-called congenital Minamata disease. In a previous report, Nishigaki and Harada evaluated the methylmercury concentrations in the umbilical cords of inhabitants and demonstrated that methylmercury actually passed through the placenta (Nishigaki and Harada, 1975). However, that report involved a limited number of cases (only 35) and did not quantitatively evaluate regional differences in the time course of methylmercury exposure. Therefore, in the present study, we evaluated the temporal and spatial distributions of methylmercury concentrations in umbilical cords, with an increased number of participants and additional descriptive analyses, and examined whether the methylmercury concentrations corresponded with the history of the Minamata disease incident. A total of 278 umbilical cord specimens collected after birth were obtained from babies born between 1925 and 1980 in four study areas exposed to methylmercury. We then conducted descriptive analyses and drew scatterplots of the methylmercury concentrations, both for all participants combined and separated by area. In the Minamata area, where the first patient was identified in 1956, the methylmercury concentration reached a peak around 1955. Subsequently, about 5 years later, the concentrations peaked in the other exposed areas, with the exposure distribution corresponding with acetaldehyde production (the origin of the methylmercury). This historical incident several decades ago in Minamata and neighboring communities clearly shows that regional pollution affected the environment in utero. Furthermore, the temporal and spatial distributions of the methylmercury concentrations in the umbilical cords trace the history of the Minamata disease incident.

  9. Spatial variation of dental caries in late holocene samples of Southern South America: A geostatistical study.

    PubMed

    Menéndez, Lumila Paula

    2016-11-01

    The spatial variation of dental caries in late Holocene southern South American populations will be analyzed using geostatistical methods. The existence of a continuous geographical pattern of dental caries variation will be tested. The author recorded dental caries in 400 individuals, collated this information with published caries data from 666 additional individuals, and calculated a Caries Index. The caries spatial distribution was evaluated by means of 2D maps and scatterplots. Geostatistical analyses were performed by calculating Moran's I, correlograms and a Procrustes analysis. There is a relatively strong latitudinal continuous gradient of dental caries variation, especially in the extremes of the distribution. Moreover, the association between dental caries and geography was relatively high (m12 = 0.6). Although northern and southern samples had the highest and lowest frequencies of dental caries, respectively, the central ones had the largest variation and had lower rates of caries than expected. The large variation in frequencies of dental caries in populations located in the center of the distribution could be explained by their subsistence strategies, characterized either by the consumption of wild cariogenic plants or cultigens (obtained locally or by exchange), a reliance on fishing, or the incorporation of plants rich in starch rather than carbohydrates. It is suggested that dental caries must be considered a multifactorial disease which results from the interaction of cultural practices and environmental factors. This can change how we understand subsistence strategies as well as how we interpret dental caries rates. Am. J. Hum. Biol. 28:825-836, 2016. © 2016 Wiley Periodicals, Inc.

  10. Mobile gait analysis via eSHOEs instrumented shoe insoles: a pilot study for validation against the gold standard GAITRite®.

    PubMed

    Jagos, Harald; Pils, Katharina; Haller, Michael; Wassermann, Claudia; Chhatwal, Christa; Rafolt, Dietmar; Rattay, Frank

    2017-07-01

    Clinical gait analysis contributes massively to rehabilitation support and improvement of in-patient care. The research project eSHOE aspires to be a useful addition to the rich variety of gait analysis systems. It was designed to fill the gap of affordable, reasonably accurate and highly mobile measurement devices, with the overall goal of enabling individual home-based monitoring and training for people suffering from chronic diseases affecting the locomotor system. Motion and pressure sensors gather movement data directly on the users' feet, store them locally and/or transmit them wirelessly to a PC. A combination of pattern recognition and feature extraction algorithms translates the motion data into standard gait parameters. The accuracy of eSHOE was evaluated against the reference system GAITRite in a clinical pilot study. Eleven hip fracture patients (78.4 ± 7.7 years) and twelve healthy subjects (40.8 ± 9.1 years) were included in these trials. All subjects performed three measurements at a comfortable walking speed over 8 m, including the 6-m-long GAITRite mat. Six standard gait parameters were extracted from a total of 347 gait cycles. Agreement was analysed via scatterplots, histograms and Bland-Altman plots. In the patient group, the average differences between eSHOE and GAITRite ranged from -0.046 to 0.045 s, and in the healthy group from -0.029 to 0.029 s. It can therefore be concluded that eSHOE delivers adequately accurate results, especially with the prospect of use as an at-home supplement or follow-up to clinical gait analysis, and compared to other state-of-the-art wearable motion analysis systems.

  11. Living donor right liver lobes: preoperative CT volumetric measurement for calculation of intraoperative weight and volume.

    PubMed

    Lemke, Arne-Jörn; Brinkmann, Martin Julius; Schott, Thomas; Niehues, Stefan Markus; Settmacher, Utz; Neuhaus, Peter; Felix, Roland

    2006-09-01

    To prospectively develop equations for the calculation of expected intraoperative weight and volume of a living donor's right liver lobe by using preoperative computed tomography (CT) for volumetric measurement. After medical ethics committee and state medical board approval, informed consent was obtained from eight female and eight male living donors (age range, 18-63 years) for participation in preoperative CT volumetric measurement of the right liver lobes by using the summation-of-area method. Intraoperatively, the graft was weighed, and the volume of the graft was determined by means of water displacement. Distributions of pre- and intraoperative data were depicted as Tukey box-and-whisker diagrams. Then, linear regressions were calculated, and the results were depicted as scatterplots. On the basis of intraoperative data, physical density of the parenchyma was calculated by dividing weight by volume of the graft. Preoperative measurement of grafts resulted in a mean volume of 929 mL +/- 176 (standard deviation); intraoperative mean weight and volume of the grafts were 774 g +/- 138 and 697 mL +/- 139, respectively. All corresponding pre- and intraoperative data correlated significantly (P < .001) with each other. Expected intraoperative volume (Vintraop) in milliliters and weight (Wintraop) in grams can be calculated with the equations Vintraop = (0.656 · Vpreop) + 87.629 mL and Wintraop = (0.678 g/mL · Vpreop) + 143.704 g, respectively, where Vpreop is the preoperative volume in milliliters. Physical density of transplanted liver lobes was 1.1172 g/mL +/- 0.1015. By using the two equations developed from the data obtained in this study, expected intraoperative weight and volume can properly be determined from CT volumetric measurements. © RSNA, 2006.
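
    As a quick check of the published equations, the snippet below applies them to the cohort's mean preoperative volume; the outputs match the reported intraoperative means (697 mL, 774 g).

```python
# Sketch: applying the two published regression equations to a preoperative
# CT volume. Input value is the cohort mean, used here only as an example.
def expected_graft(v_preop_ml):
    v_intraop = 0.656 * v_preop_ml + 87.629    # expected intraoperative volume, mL
    w_intraop = 0.678 * v_preop_ml + 143.704   # expected intraoperative weight, g
    return v_intraop, w_intraop

v, w = expected_graft(929.0)
print(f"expected volume = {v:.0f} mL, expected weight = {w:.0f} g")
```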

  12. Modeling pollen time series using seasonal-trend decomposition procedure based on LOESS smoothing

    NASA Astrophysics Data System (ADS)

    Rojo, Jesús; Rivero, Rosario; Romero-Morte, Jorge; Fernández-González, Federico; Pérez-Badia, Rosa

    2017-02-01

    Analysis of airborne pollen concentrations provides valuable information on plant phenology and is thus a useful tool in agriculture—for predicting harvests in crops such as the olive and for deciding when to apply phytosanitary treatments—as well as in medicine and the environmental sciences. Variations in airborne pollen concentrations, moreover, are indicators of changing plant life cycles. By modeling pollen time series, we can not only identify the variables influencing pollen levels but also predict future pollen concentrations. In this study, airborne pollen time series were modeled using a seasonal-trend decomposition procedure based on LOcally wEighted Scatterplot Smoothing (LOESS), known as STL. The data series—daily Poaceae pollen concentrations over the period 2006-2014—was broken up into seasonal and residual (stochastic) components. The seasonal component was compared with data on Poaceae flowering phenology obtained by field sampling. Residuals were fitted to a model generated from daily temperature and rainfall values, and daily pollen concentrations, using partial least squares regression (PLSR). This method was then applied to predict daily pollen concentrations for 2014 (independent validation data) using results for the seasonal component of the time series and estimates of the residual component for the period 2006-2013. Correlation between predicted and observed values was r = 0.79 (correlation coefficient) for the pre-peak period (i.e., the period prior to the peak pollen concentration) and r = 0.63 for the post-peak period. Separate analysis of each of the components of the pollen data series enables the sources of variability to be identified more accurately than by analysis of the original non-decomposed data series, and for this reason, this procedure has proved to be a suitable technique for analyzing the main environmental factors influencing airborne pollen concentrations.
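
    A minimal sketch of the same two-stage idea, using statsmodels' STL for the decomposition and scikit-learn's PLSRegression on the residual. The pollen series and weather predictors below are synthetic placeholders, not the Poaceae data.

```python
# Sketch: STL decomposition of a daily series, then PLSR on the residual.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
days = pd.date_range("2006-01-01", "2013-12-31", freq="D")
t = np.arange(len(days))
# Synthetic annual pollen cycle plus noise, clipped at zero.
pollen = np.clip(50 * np.sin(np.pi * t / 365.25) ** 2
                 + rng.normal(0, 5, len(t)), 0, None)

res = STL(pd.Series(pollen, index=days), period=365).fit()  # trend/seasonal/resid

# Fit the stochastic residual from daily weather predictors (synthetic here).
weather = np.column_stack([rng.normal(15, 8, len(t)),   # temperature
                           rng.gamma(1, 2, len(t))])    # rainfall
pls = PLSRegression(n_components=2).fit(weather, res.resid)
print(f"residual R^2 = {pls.score(weather, res.resid):.2f}")
```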

  13. Respiratory alkalosis and primary hypocapnia in Labrador Retrievers participating in field trials in high-ambient-temperature conditions.

    PubMed

    Steiss, Janet E; Wright, James C

    2008-10-01

    To determine whether Labrador Retrievers participating in field trials develop respiratory alkalosis and primary hypocapnia in conditions of high ambient temperature. 16 Labrador Retrievers. At each of 5 field trials, 5 to 10 dogs were monitored during a test (retrieval of birds over a variable distance on land [1,076 to 2,200 m]; 36 assessments); ambient temperatures ranged from 2.2 degrees to 29.4 degrees C. For each dog, rectal temperature was measured and a venous blood sample was collected in a heparinized syringe within 5 minutes of test completion. Blood samples were analyzed on site for Hct; pH; sodium, potassium, ionized calcium, glucose, lactate, bicarbonate, and total CO2 concentrations; and values of PvO2 and PvCO2. Scatterplots of each variable versus ambient temperature were reviewed. Regression analysis was used to evaluate the effect of ambient temperature (≤21 degrees C and >21 degrees C) on each variable. Compared with findings at ambient temperatures ≤21 degrees C, venous blood pH was increased (mean, 7.521 vs 7.349) and PvCO2 was decreased (mean, 17.8 vs 29.3 mm Hg) at temperatures >21 degrees C; rectal temperature did not differ. Two dogs developed signs of heat stress in 1 test at an ambient temperature of 29 degrees C; their rectal temperatures were higher and PvCO2 values were lower than findings in other dogs. When running distances frequently encountered at field trials, healthy Labrador Retrievers developed hyperthermia regardless of ambient temperature. Dogs developed respiratory alkalosis and hypocapnia at ambient temperatures > 21 degrees C.

  14. Can telemetry data obviate the need for sleep studies in Pierre Robin Sequence?

    PubMed

    Aaronson, Nicole Leigh; Jabbour, Noel

    2017-09-01

    This study looks to correlate telemetry data gathered on patients with Pierre Robin Sequence (PRS) with sleep study data. Strong correlation might allow obstructive sleep apnea (OSA) to be reasonably predicted without the need for a sleep study. Charts from forty-six infants with PRS who presented to our children's hospital between 2005 and 2015 and received a polysomnogram (PSG) prior to surgical intervention were retrospectively reviewed. Correlations and scatterplots were used to compare average daily oxygen nadir, overall oxygen nadir, and average number of daily desaturations from telemetry data with the apnea-hypopnea index (AHI) and oxygen nadir on sleep study. Results were also categorized into groups with AHI ≥10 or <10 and oxygen nadir ≥80% or <80% for chi-squared analysis. Our data did not show significant correlations between telemetry data and sleep study data. Patients with an O2 nadir below 80% on telemetry were not more likely to have an O2 nadir below 80% on sleep study. Patients with an average O2 nadir below 80% did show some association with having an AHI greater than 10 on sleep study, but this relationship did not reach significance. Of 22 patients who did not have any desaturations on telemetry below 80%, 16 (73%) had an AHI >10 on sleep study. In the workup of infants with PRS, the index of suspicion for OSA is high. In our series, telemetry data were not useful in ruling out severe OSA. Thus our data do not support forgoing a sleep study in patients with PRS and concern for OSA, despite normal telemetry patterns. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Modeling pollen time series using seasonal-trend decomposition procedure based on LOESS smoothing.

    PubMed

    Rojo, Jesús; Rivero, Rosario; Romero-Morte, Jorge; Fernández-González, Federico; Pérez-Badia, Rosa

    2017-02-01

    Analysis of airborne pollen concentrations provides valuable information on plant phenology and is thus a useful tool in agriculture (for predicting harvests in crops such as the olive and for deciding when to apply phytosanitary treatments) as well as in medicine and the environmental sciences. Variations in airborne pollen concentrations, moreover, are indicators of changing plant life cycles. By modeling pollen time series, we can not only identify the variables influencing pollen levels but also predict future pollen concentrations. In this study, airborne pollen time series were modeled using a seasonal-trend decomposition procedure based on LOcally wEighted Scatterplot Smoothing (LOESS), known as STL. The data series (daily Poaceae pollen concentrations over the period 2006-2014) was broken up into seasonal and residual (stochastic) components. The seasonal component was compared with data on Poaceae flowering phenology obtained by field sampling. Residuals were fitted to a model generated from daily temperature and rainfall values, and daily pollen concentrations, using partial least squares regression (PLSR). This method was then applied to predict daily pollen concentrations for 2014 (independent validation data) using results for the seasonal component of the time series and estimates of the residual component for the period 2006-2013. Correlation between predicted and observed values was r = 0.79 (correlation coefficient) for the pre-peak period (i.e., the period prior to the peak pollen concentration) and r = 0.63 for the post-peak period. Separate analysis of each of the components of the pollen data series enables the sources of variability to be identified more accurately than by analysis of the original non-decomposed data series, and for this reason, this procedure has proved to be a suitable technique for analyzing the main environmental factors influencing airborne pollen concentrations.

  16. Phonation Quotient in Women: A Measure of Vocal Efficiency Using Three Aerodynamic Instruments.

    PubMed

    Joshi, Ashwini; Watts, Christopher R

    2017-03-01

    The purpose of this study was to examine measures of vital capacity and phonation quotient across three age groups in women using three different aerodynamic instruments representing low-tech and high-tech options. This study has a prospective, repeated measures design. Fifteen women in each age group of 25-39 years, 40-59 years, and 60-79 years were assessed using maximum phonation time and vital capacity obtained from three aerodynamic instruments: a handheld analog windmill-type spirometer, a handheld digital spirometer, and the Phonatory Aerodynamic System (PAS), Model 6600. Phonation quotient was calculated using vital capacity from each instrument. Analyses of variance were performed to test for main effects of the instruments and age on vital capacity and derived phonation quotient. Pearson product moment correlation was performed to assess measurement reliability (parallel forms) between the instruments. Regression equations, scatterplots, and coefficients of determination were also calculated. Statistically significant differences were found in vital capacity measures for the digital spirometer compared with the windmill-type spirometer and PAS across age groups. Strong positive correlations were present between all three instruments for both vital capacity and derived phonation quotient measurements. Measurement precision for the digital spirometer was lower than the windmill-type spirometer compared with the PAS. However, all three instruments had strong measurement reliability. Additionally, age did not have an effect on the measurement across instruments. These results are consistent with previous literature reporting data from male speakers and support the use of low-tech options for measurement of basic aerodynamic variables associated with voice production. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
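
    Phonation quotient is conventionally derived as vital capacity divided by maximum phonation time, in mL/s. A one-line sketch with illustrative values (not study data):

```python
# Sketch: phonation quotient from vital capacity and maximum phonation time,
# using the conventional definition; instrument-specific details are not modeled.
def phonation_quotient(vital_capacity_ml, max_phonation_time_s):
    return vital_capacity_ml / max_phonation_time_s  # mL/s

print(f"PQ = {phonation_quotient(3200.0, 18.5):.0f} mL/s")  # illustrative values
```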

  17. Clinical utility of breath ammonia for evaluation of ammonia physiology in healthy and cirrhotic adults

    PubMed Central

    Spacek, Lisa A; Mudalel, Matthew; Tittel, Frank; Risby, Terence H; Solga, Steven F

    2016-01-01

    Blood ammonia is routinely used in clinical settings to assess systemic ammonia in hepatic encephalopathy and urea cycle disorders. Despite its drawbacks, blood measurement is often used as a comparator in breath studies because it is a standard clinical test. We sought to evaluate sources of measurement error and potential clinical utility of breath ammonia compared to blood ammonia. We measured breath ammonia in real time by quartz enhanced photoacoustic spectrometry and blood ammonia in 10 healthy and 10 cirrhotic participants. Each participant contributed 5 breath samples and blood for ammonia measurement within 1 h. We calculated the coefficient of variation (CV) for the 5 breath ammonia values, reported medians of healthy and cirrhotic participants, and used scatterplots to display breath and blood ammonia. For healthy participants, mean age was 22 years (±4), 70% were men, and body mass index (BMI) was 27 (±5). For cirrhotic participants, mean age was 61 years (±8), 60% were men, and BMI was 31 (±7). Median blood ammonia for healthy participants was within normal range, 10 μmol L−1 (interquartile range (IQR), 3–18) versus 46 μmol L−1 (IQR, 23–66) for cirrhotic participants. Median breath ammonia was 379 pmol mL−1 CO2 (IQR, 265–765) for healthy versus 350 pmol mL−1 CO2 (IQR, 180–1013) for cirrhotic participants. CV was 17 ± 6%. There remains an important unmet need in the evaluation of systemic ammonia, and breath measurement continues to demonstrate promise to fulfill this need. Given the many differences between breath and blood ammonia measurement, we examined biological explanations for our findings in healthy and cirrhotic participants. We conclude that, based upon these preliminary data, breath measurement may offer clinically important information that is not provided by blood ammonia. PMID:26658550
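
    The within-subject CV reported above is the sample standard deviation of the five breath values divided by their mean. A sketch with illustrative numbers:

```python
# Sketch: within-subject coefficient of variation over 5 breath ammonia samples.
import numpy as np

breaths = np.array([379.0, 402.0, 351.0, 298.0, 445.0])  # pmol/mL CO2, illustrative
cv = 100 * breaths.std(ddof=1) / breaths.mean()
print(f"CV = {cv:.1f}%")
```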

  18. Counting Steps in Activities of Daily Living in People With a Chronic Disease Using Nine Commercially Available Fitness Trackers: Cross-Sectional Validity Study.

    PubMed

    Ummels, Darcy; Beekman, Emmylou; Theunissen, Kyra; Braun, Susy; Beurskens, Anna J

    2018-04-02

    Measuring physical activity with commercially available activity trackers is gaining popularity. People with a chronic disease can especially benefit from knowledge about their physical activity pattern in everyday life, since sufficient physical activity can contribute to wellbeing and quality of life. However, no validity data are available for this population during activities of daily living. The aim of this study was to investigate the validity of 9 commercially available activity trackers for measuring step count during activities of daily living in people with a chronic disease receiving physiotherapy. The selected activity trackers were Accupedo (Corusen LLC), Activ8 (Remedy Distribution Ltd), Digi-Walker CW-700 (Yamax), Fitbit Flex (Fitbit inc), Lumoback (Lumo Bodytech), Moves (ProtoGeo Oy), Fitbit One (Fitbit inc), UP24 (Jawbone), and Walking Style X (Omron Healthcare Europe BV). In total, 130 persons with chronic diseases performed standardized activity protocols based on activities of daily living that were recorded on video camera and analyzed for step count (gold standard). The validity of the trackers' step count was assessed by correlation coefficients, t tests, scatterplots, and Bland-Altman plots. The correlations between the number of steps counted by the activity trackers and the gold standard were low (range: -.02 to .33). For all activity trackers except Fitbit One, a significant systematic difference with the gold standard was found for step count. Plots showed a wide range in scores for all activity trackers; Activ8 showed an average overestimation, and the other 8 trackers showed underestimations. This study showed that the validity of the 9 commercially available activity trackers is low for measuring step count while individuals with chronic diseases receiving physiotherapy engage in activities of daily living. ©Darcy Ummels, Emmylou Beekman, Kyra Theunissen, Susy Braun, Anna J Beurskens. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 02.04.2018.
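
    A Bland-Altman plot, as used here, plots method differences against method means, together with the bias and its 1.96-SD limits of agreement. A minimal sketch on synthetic tracker-versus-video counts (the undercount factor is an assumption for illustration):

```python
# Sketch: Bland-Altman agreement between a tracker and video-counted steps.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
video = rng.integers(50, 400, 130).astype(float)    # gold standard step counts
tracker = video * 0.8 + rng.normal(0, 25, 130)      # assumed systematic undercount

diff, mean = tracker - video, (tracker + video) / 2
bias, loa = diff.mean(), 1.96 * diff.std(ddof=1)
plt.scatter(mean, diff, s=10)
for y in (bias, bias - loa, bias + loa):            # bias and limits of agreement
    plt.axhline(y, linestyle="--")
plt.xlabel("mean of methods (steps)")
plt.ylabel("tracker - video (steps)")
plt.show()
```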

  19. Association between progression-free survival and health-related quality of life in oncology: a systematic review protocol

    PubMed Central

    Kovic, Bruno; Guyatt, Gordon; Brundage, Michael; Thabane, Lehana; Bhatnagar, Neera; Xie, Feng

    2016-01-01

    Introduction There is an increasing number of new oncology drugs being studied, approved and put into clinical practice based on improvement in progression-free survival, when no overall survival benefits exist. In oncology, the association between progression-free survival and health-related quality of life is currently unknown, despite its importance for patients with cancer, and the unverified assumption that longer progression-free survival indicates improved health-related quality of life. Thus far, only 1 study has investigated this association, providing insufficient evidence and inconclusive results. The objective of this study protocol is to provide increased transparency in supporting a systematic summary of the evidence bearing on this association in oncology. Methods and analysis Using the OVID platform in MEDLINE, Embase and Cochrane databases, we will conduct a systematic review of randomised controlled human trials addressing oncology issues published starting in 2000. A team of reviewers will, in pairs, independently screen and abstract data using standardised, pilot-tested forms. We will employ numerical integration to calculate mean incremental area under the curve between treatment groups in studies for health-related quality of life, along with total related error estimates, and a 95% CI around incremental area. To describe the progression-free survival to health-related quality of life association, we will construct a scatterplot for incremental health-related quality of life versus incremental progression-free survival. To estimate the association, we will use a weighted simple regression approach, comparing mean incremental health-related quality of life with either median incremental progression-free survival time or the progression-free survival HR, in the absence of overall survival benefit. Discussion Identifying direction and magnitude of association between progression-free survival and health-related quality of life is critically important in interpreting results of oncology trials. Systematic evidence produced from our study will contribute to improvement of patient care and practice of evidence-based medicine in oncology. PMID:27591026

  20. Normal development of human brain white matter from infancy to early adulthood: a diffusion tensor imaging study.

    PubMed

    Uda, Satoshi; Matsui, Mie; Tanaka, Chiaki; Uematsu, Akiko; Miura, Kayoko; Kawana, Izumi; Noguchi, Kyo

    2015-01-01

    Diffusion tensor imaging (DTI), which measures the magnitude of anisotropy of water diffusion in white matter, has recently been used to visualize and quantify parameters of neural tracts connecting brain regions. In order to investigate the developmental changes and sex and hemispheric differences of neural fibers in normal white matter, we used DTI to examine 52 healthy humans ranging in age from 2 months to 25 years. We extracted the following tracts of interest (TOIs) using the region of interest method: the corpus callosum (CC), cingulum hippocampus (CGH), inferior longitudinal fasciculus (ILF), and superior longitudinal fasciculus (SLF). We measured fractional anisotropy (FA), apparent diffusion coefficient (ADC), axial diffusivity (AD), and radial diffusivity (RD). Approximate values and changes in growth rates of all DTI parameters at each age were calculated and analyzed using LOESS (locally weighted scatterplot smoothing). We found that for all TOIs, FA increased with age, whereas ADC, AD and RD values decreased with age. The turning point of growth rates was at approximately 6 years. FA in the CC was greater than that in the SLF, ILF and CGH. Moreover, FA, ADC and AD of the splenium of the CC (sCC) were greater than in the genu of the CC (gCC), whereas the RD of the sCC was lower than the RD of the gCC. The FA of right-hemisphere TOIs was significantly greater than that of left-hemisphere TOIs. In infants, growth rates of both FA and RD were larger than those of AD. Our data show that developmental patterns differ by TOIs and myelination along with the development of white matter, which can be mainly expressed as an increase in FA together with a decrease in RD. These findings clarify the long-term normal developmental characteristics of white matter microstructure from infancy to early adulthood. © 2015 S. Karger AG, Basel.
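
    LOESS fits a smooth curve through a sequence of local weighted regressions, which is how the growth trajectories above were summarized. A sketch with a synthetic FA-versus-age sample of the same size (the growth-curve shape is an assumption, not the study data):

```python
# Sketch: LOESS smoothing of fractional anisotropy against age.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(4)
age = np.sort(rng.uniform(0.2, 25, 52))                        # years, n = 52
fa = 0.3 + 0.25 * (1 - np.exp(-age / 4)) + rng.normal(0, 0.02, 52)

smoothed = lowess(fa, age, frac=0.5)   # returns columns: sorted age, fitted FA
print(smoothed[:3])
```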

  1. Global Nonlinear Optimization for the Interpretation of Magnetic Anomalies Over Idealized Geological Bodies for Ore Exploration - An Insight about Uncertainty

    NASA Astrophysics Data System (ADS)

    Biswas, A.

    2016-12-01

    A Very Fast Simulated Annealing (VFSA) global optimization code was developed for the interpretation of magnetic data over various idealized bodies for mineral exploration. The nature of uncertainty in the interpretation is also analyzed in the present study. The method fits the observed data very well with simple geometric bodies in the restricted class of sphere, horizontal cylinder, thin dyke and sheet-type models. The results of the VFSA optimization reveal that the various parameters show multiple equivalent solutions when the shape of the target body is not known and the shape factor q is optimized together with the other model parameters. The study reveals that the amplitude coefficient k is strongly dependent on the shape factor, demonstrating a multi-model type of uncertainty (multiple equivalent solutions) between these two model parameters. However, the estimated values of the shape factor from different VFSA runs clearly indicate whether the subsurface structure is a sphere, horizontal cylinder, dyke or sheet-type structure. Hence, the exact shape factor (2.5 for a sphere, 2.0 for a horizontal cylinder and 1.0 for a dyke or sheet) is fixed and the optimization procedure is repeated. After fixing the shape factor, analysis of uncertainty and of scatterplots demonstrates well-defined uni-modal characteristics. The mean model computed after fixing the shape factor gives the most reliable results. Inversion of noise-free and noisy synthetic data as well as field data demonstrates the efficacy of the approach. The procedure has been carefully and practically applied to five real field cases with mineralized bodies buried at various depths in the subsurface and in complex geological settings. The method is highly applicable for mineral exploration, where the magnetic data are observed over a mineral body emplaced in the shallow or deeper subsurface, and the computation time for the entire procedure is short. Keywords: Magnetic anomaly, idealized body, uncertainty, VFSA, multiple structure, ore exploration.
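
    For orientation, the sketch below is a generic VFSA-style minimizer for a toy two-parameter anomaly model with the fast exponential cooling schedule. It simplifies the move generation (a Gaussian draw instead of VFSA's Cauchy-like, temperature-dependent draw), and the model, bounds and depth are invented; it is not the author's code.

```python
# Sketch: VFSA-style minimization of a toy anomaly misfit f(x) = k/(x^2+z^2)^q,
# with cooling schedule T(j) = T0 * exp(-c * j**(1/D)).
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(-50.0, 50.0, 101)
z = 10.0                                      # assumed known depth for the toy
obs = 5000.0 / (x**2 + z**2) ** 1.5           # synthetic anomaly, k=5000, q=1.5
obs += rng.normal(0.0, 1e-4, x.size)

def misfit(p):
    k, q = p
    return np.mean((obs - k / (x**2 + z**2) ** q) ** 2)

lo = np.array([100.0, 0.5])                   # assumed search bounds for k, q
hi = np.array([10000.0, 3.0])
p, e = rng.uniform(lo, hi), None
e = misfit(p)
T0, c, D = 1.0, 1.0, 2
for j in range(1, 20001):
    T = T0 * np.exp(-c * j ** (1.0 / D))      # fast cooling schedule
    # Simplified Gaussian move; true VFSA uses a Cauchy-like draw.
    cand = np.clip(p + rng.normal(0.0, T, 2) * (hi - lo), lo, hi)
    e_new = misfit(cand)
    if e_new < e or rng.random() < np.exp(-(e_new - e) / max(T, 1e-300)):
        p, e = cand, e_new
print(f"k = {p[0]:.0f}, q = {p[1]:.2f}, misfit = {e:.3e}")
```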

  2. SPARTA: Simple Program for Automated reference-based bacterial RNA-seq Transcriptome Analysis.

    PubMed

    Johnson, Benjamin K; Scholz, Matthew B; Teal, Tracy K; Abramovitch, Robert B

    2016-02-04

    Many tools exist in the analysis of bacterial RNA sequencing (RNA-seq) transcriptional profiling experiments to identify differentially expressed genes between experimental conditions. Generally, the workflow includes quality control of reads, mapping to a reference, counting transcript abundance, and statistical tests for differentially expressed genes. In spite of the numerous tools developed for each component of an RNA-seq analysis workflow, easy-to-use bacterially oriented workflow applications to combine multiple tools and automate the process are lacking. With many tools to choose from for each step, the task of identifying a specific tool, adapting the input/output options to the specific use-case, and integrating the tools into a coherent analysis pipeline is not a trivial endeavor, particularly for microbiologists with limited bioinformatics experience. To make bacterial RNA-seq data analysis more accessible, we developed a Simple Program for Automated reference-based bacterial RNA-seq Transcriptome Analysis (SPARTA). SPARTA is a reference-based bacterial RNA-seq analysis workflow application for single-end Illumina reads. SPARTA is turnkey software that simplifies the process of analyzing RNA-seq data sets, making bacterial RNA-seq analysis a routine process that can be undertaken on a personal computer or in the classroom. The easy-to-install, complete workflow processes whole transcriptome shotgun sequencing data files by trimming reads and removing adapters, mapping reads to a reference, counting gene features, calculating differential gene expression, and, importantly, checking for potential batch effects within the data set. SPARTA outputs quality analysis reports, gene feature counts and differential gene expression tables and scatterplots. SPARTA provides an easy-to-use bacterial RNA-seq transcriptional profiling workflow to identify differentially expressed genes between experimental conditions. This software will enable microbiologists with limited bioinformatics experience to analyze their data and integrate next generation sequencing (NGS) technologies into the classroom. The SPARTA software and tutorial are available at sparta.readthedocs.org.

  3. The relationship between burden of childhood disease and foreign aid for child health.

    PubMed

    Bavinger, J Clay; Wise, Paul; Bendavid, Eran

    2017-09-15

    We sought to examine the relationship between child specific health aid (CHA) and burden of disease. Based on existing evidence, we hypothesized that foreign aid for child health would not be proportional to burden of disease. In order to examine CHA and burden of disease, we obtained estimates of these parameters from established sources. Estimates of disability adjusted life years (DALYs) in children (0-5 years) were obtained from the World Health Organization for 2000 and 2012. The 10 most burdensome disease categories in each continent, excluding high-income countries, were identified for study. Descriptions of all foreign aid commitments between 1996 and 2009 were obtained from AidData, and an algorithm to designate the target diseases of the commitments was constructed. Data were examined in scatterplots for trends. The most burdensome childhood diseases varied by continent. In all continents, newborn diseases, vaccine-preventable diseases (lower respiratory diseases, measles, meningitis, tetanus, and pertussis), and diarrheal diseases ranked within the four most burdensome diseases. Infectious diseases such as malaria, tuberculosis, and HIV were also among the ten most burdensome diseases in sub-Saharan Africa, and non-communicable diseases were associated with much of the burden in the other continents. CHA grew from $7.4 billion in 1996 to $17.7 billion in 2009 for our study diseases. Diarrheal diseases and malnutrition received the most CHA as well as the most CHA per DALY. CHA directed at HIV increased dramatically over our study period, from $227,000 in 1996 to $3.4 billion in 2008. Little aid was directed at injuries such as drowning, car accidents, and fires, as well as complex medical diseases such as leukemia and endocrine disorders. CHA has grown significantly over the last two decades. There is no clear relationship between CHA and burden of disease. This report provides a description of foreign aid for child health, and aims to inform policy and decision-making regarding foreign aid.

  4. Diagnostic and prognostic value of cardiac troponin I assays in patients admitted with symptoms suggestive of acute coronary syndrome.

    PubMed

    Apple, Fred S; Quist, Heidi E; Murakami, MaryAnn M

    2004-04-01

    Increasing numbers of patients are presenting to emergency departments with symptoms suggestive of an acute myocardial infarction. To demonstrate the comparative performance of the Ortho Vitros Troponin I and Beckman Access AccuTnI assays used to detect myocardial infarction and to develop risk stratification schemes for all-cause death in patients who presented with myocardial ischemia symptoms that were suggestive of acute coronary syndrome (ACS). The prospective enrollment of patients with ACS and the measurement of serial plasma samples by 2 commercial cardiac troponin I (cTnI) assays. A metropolitan medical center that admitted patients with ACS during a 2-month period. The study population consisted of 200 consecutively admitted patients who presented with symptoms that were suggestive of ACS. Correlation scatterplots showed no significant bias between cTnI assays based on 659 specimens across the dynamic range of each assay. Only minor differences in slopes and intercepts were observed between assays when correlations were based across selected concentration ranges. The receiver operating characteristic curve areas for the detection of myocardial infarction were not significantly different (Ortho, 0.991; Beckman, 0.995). At the 99th percentile (Beckman, 0.04 microg/L; Ortho, 0.08 microg/L), each assay demonstrated 100% sensitivity with 78% and 80% specificity, respectively. Kaplan-Meier survival curves and the log-rank test were used to compare time-to-event data. Patients with increased baseline cTnI values had higher odds ratios of death than did those with normal concentrations: for the Ortho assay, the odds ratio was 5.9 at the 99th percentile cutoff and 10.3 at the 10% coefficient of variation cutoff; for the Beckman assay, 31.4 and 15.3, respectively. Comparable diagnostic and risk stratification abilities were demonstrated in patients with ACS by the Ortho Vitros and Beckman Access cTnI assays, with no significant analytic bias between cTnI assays.

  5. How easily can omission of patients, or selection amongst poorly-reproducible measurements, create artificial correlations? Methods for detection and implications for observational research design in cardiology.

    PubMed

    Francis, Darrel P

    2013-07-15

    When reported correlation coefficients seem too high to be true, does investigative verification of source data provide suitable reassurance? This study tests how easily omission of patients or selection amongst irreproducible measurements generate fictitious strong correlations, without data fabrication. Two forms of manipulation are applied to a pair of normally-distributed, uncorrelated variables: first, exclusion of patients least favourable to a hypothesised association and, second, making multiple poorly-reproducible measurements per patient and choosing the most supportive. Excluding patients raises correlations powerfully, from 0.0 ± 0.11 (no patients omitted) to 0.40 ± 0.11 (one-fifth omitted), 0.59 ± 0.08 (one-third omitted) and 0.78 ± 0.05 (half omitted). Study size offers no protection: omitting just one-fifth of 75 patients (i.e. publishing 60) makes 92% of correlations statistically significant. Worse, simply selecting the most favourable amongst several measurements raises correlations from 0.0 ± 0.12 (single measurement of each variable) to 0.73 ± 0.06 (best of 2), and 0.90 ± 0.03 (best of 4). 100% of correlation coefficients become statistically significant. Scatterplots may reveal a telltale "shave sign" or "bite sign". Simple statistical tests are presented for these suspicious signatures in single or multiple studies. Correlations are vulnerable to data manipulation. Cardiology is especially vulnerable to patient deletion (because cardiologists ourselves might completely control enrolment and measurement), and selection of "best" measurements (because alternative heartbeats are numerous, and some modalities poorly reproducible). Source data verification cannot detect these, but tests might highlight suspicious data and, aggregating across studies, unreliable laboratories or research fields. Cardiological correlation research needs adequately-informed planning and guarantees of integrity, with teeth. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
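
    The first manipulation is easy to reproduce. The sketch below applies one plausible omission rule (dropping the fifth of points that contribute least to a positive correlation) to truly uncorrelated data; the exact rule used in the paper may differ, but the inflation it produces is substantial.

```python
# Sketch: inflating r by omitting the least favourable fifth of 75 "patients".
import numpy as np

rng = np.random.default_rng(6)
r_all, r_kept = [], []
for _ in range(1000):
    x, y = rng.standard_normal(75), rng.standard_normal(75)  # truly uncorrelated
    score = (x - x.mean()) * (y - y.mean())   # contribution to a positive r
    keep = np.argsort(score)[15:]             # omit the 15 least favourable
    r_all.append(np.corrcoef(x, y)[0, 1])
    r_kept.append(np.corrcoef(x[keep], y[keep])[0, 1])
print(f"all data:       r = {np.mean(r_all):.2f} ± {np.std(r_all):.2f}")
print(f"after omission: r = {np.mean(r_kept):.2f} ± {np.std(r_kept):.2f}")
```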

  6. Circulating CD14+ HLA-DR-/low myeloid-derived suppressor cells predicted early recurrence of hepatocellular carcinoma after surgery.

    PubMed

    Gao, Xing-Hui; Tian, Lu; Wu, Jiong; Ma, Xiao-Lu; Zhang, Chun-Yan; Zhou, Yan; Sun, Yun-Fan; Hu, Bo; Qiu, Shuang-Jian; Zhou, Jian; Fan, Jia; Guo, Wei; Yang, Xin-Rong

    2017-09-01

    Myeloid-derived suppressor cells (MDSCs) play an important role in tumor progression. The aim of the present study was to investigate the prognostic value of MDSCs for early recurrence of hepatocellular carcinoma (HCC) in patients undergoing curative resection. Myeloid-derived suppressor cells were measured by flow cytometry. The correlation between MDSCs and tumor recurrence was analyzed using a cohort of 183 patients who underwent curative resection between February 2014 and July 2015. Prognostic significance was further assessed using Kaplan-Meier survival estimates and log-rank tests. In vivo, CD14+ HLA-DR-/low MDSCs inhibit T cell proliferation and secretion. The frequency of CD14+ HLA-DR-/low MDSCs was significantly higher in HCC patients (3.7 ± 5.3%, n = 183) than in chronic hepatitis patients (1.4 ± 0.6%, n = 25) and healthy controls (1.1 ± 0.5%, n = 50). High frequency of MDSCs was significantly correlated with recurrence (time to recurrence) (P < 0.001) and overall survival (P = 0.034). Patients with HCC in the high MDSC group were prone to more vascular invasion (P = 0.018) and high systemic immune-inflammation index (SII) (P = 0.009) than those in the low MDSC group. Scatter-plot analyses revealed a significant positive correlation between the SII level and the frequency of MDSCs (r = 0.188, P = 0.011). Patients with HCC with a high MDSC frequency and high SII level had significantly shorter time to recurrence (P < 0.001) and overall survival (P = 0.028) than those with a low MDSC frequency and low SII. An increased frequency of MDSCs was correlated with early recurrence and predicted the prognosis of patients with HCC undergoing curative resection. HCC patients with a high frequency of MDSCs should be provided more advanced management and frequent monitoring. © 2016 The Japan Society of Hepatology.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, B; Yu, H; Jara, H

    Purpose: To compare enhanced Laws texture derived from parametric proton density (PD) maps to other MRI-based surrogate markers (T2, PD, ADC) in assessing degree of liver fibrosis in a murine model of hepatic fibrosis using an 11.7T scanner. Methods: This animal study was IACUC approved. Fourteen mice were divided into control (n=1) and experimental (n=13) groups. The latter were fed a DDC-supplemented diet to induce hepatic fibrosis. Liver specimens were imaged using an 11.7T scanner; the parametric PD, T2, and ADC maps were generated from spin-echo pulsed field gradient and multi-echo spin-echo acquisitions. Enhanced Laws texture analysis was applied to the PD maps: first, hepatic blood vessels and liver margins were segmented/removed using an automated dual-clustering algorithm; secondly, an optimal thresholding algorithm was applied to reduce the partial volume artifact; next, mean and stdev were corrected to minimize grayscale variation across images; finally, Laws texture was extracted. Degree of fibrosis was assessed by an experienced pathologist and digital image analysis (%Area Fibrosis). Scatterplots comparing enhanced Laws texture, T2, PD, and ADC values to degree of fibrosis were generated and correlation coefficients were calculated. Unenhanced Laws texture was also compared to assess the effectiveness of the proposed enhancements. Results: Hepatic fibrosis and the enhanced Laws texture were strongly correlated, with higher %Area Fibrosis associated with higher Laws texture (r=0.89). Only a moderate correlation was detected between %Area Fibrosis and unenhanced Laws texture (r=0.70). Strong correlation also existed between ADC and %Area Fibrosis (r=0.86). Moderate correlations were seen between %Area Fibrosis and PD (r=0.65) and T2 (r=0.66). Conclusions: Higher degrees of hepatic fibrosis are associated with increased Laws texture. The proposed enhancements improve the accuracy of Laws texture. Enhanced Laws texture features are more accurate than PD and T2 in assessing fibrosis, and can potentially serve as an accurate surrogate marker for hepatic fibrosis.
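
    Laws texture is computed by convolving the image with 5x5 masks formed from outer products of 1D level/edge/spot vectors and then taking a local energy measure. The sketch below shows the plain (unenhanced) version on a random stand-in map; the study's enhancements (vessel removal, optimal thresholding, grayscale correction) are omitted.

```python
# Sketch: plain Laws texture energy on a 2D map (window size is an assumption).
import numpy as np
from scipy.ndimage import convolve, uniform_filter

L5 = np.array([1.0, 4.0, 6.0, 4.0, 1.0])     # level
E5 = np.array([-1.0, -2.0, 0.0, 2.0, 1.0])   # edge
S5 = np.array([-1.0, 0.0, 2.0, 0.0, -1.0])   # spot

def laws_energy(img, a, b, window=15):
    mask = np.outer(a, b)                               # 5x5 Laws mask
    filtered = convolve(img, mask, mode="reflect")
    return uniform_filter(np.abs(filtered), size=window)  # local texture energy

rng = np.random.default_rng(7)
pd_map = rng.normal(100, 10, (128, 128))     # stand-in for a parametric PD map
e5l5 = laws_energy(pd_map, E5, L5)
print(e5l5.mean())
```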

  8. Plasma and serum serotonin concentrations and surface-bound platelet serotonin expression in Cavalier King Charles Spaniels with myxomatous mitral valve disease.

    PubMed

    Cremer, Signe E; Kristensen, Annemarie T; Reimann, Maria J; Eriksen, Nynne B; Petersen, Stine F; Marschner, Clara B; Tarnow, Inge; Oyama, Mark A; Olsen, Lisbeth H

    2015-06-01

    To investigate serum and plasma serotonin concentrations, percentage of serotonin-positive platelets, level of surface-bound platelet serotonin expression (mean fluorescence intensity [MFI]), and platelet activation (CD62 expression) in platelet-rich plasma from Cavalier King Charles Spaniels with myxomatous mitral valve disease (MMVD). Healthy dogs (n = 15) and dogs with mild MMVD (18), moderate-severe MMVD (19), or severe MMVD with congestive heart failure (CHF; 10). Blood samples were collected from each dog. Serum and plasma serotonin concentrations were measured with an ELISA, and surface-bound platelet serotonin expression and platelet activation were determined by flow cytometry. Dogs with mild MMVD had higher median serum (746 ng/mL) and plasma (33.3 ng/mL) serotonin concentrations, compared with MMVD-affected dogs with CHF (388 ng/mL and 9.9 ng/mL, respectively), but no other group differences were found. Among disease groups, no differences in surface-bound serotonin expression or platelet activation were found. Thrombocytopenic dogs had lower serum serotonin concentration (482 ng/mL) than nonthrombocytopenic dogs (731 ng/mL). In 26 dogs, a flow cytometry scatterplot subpopulation (FSSP) of platelets was identified; dogs with an FSSP had a higher percentage of serotonin-positive platelets (11.0%), higher level of surface-bound serotonin expression (MFI, 32,068), and higher platelet activation (MFI, 2,363) than did dogs without an FSSP (5.7%, 1,230, and 1,165, respectively). An FSSP was present in 93.8% of thrombocytopenic dogs and in 29.5% of nonthrombocytopenic dogs. A substantive influence of circulating serotonin on MMVD stages prior to CHF development in Cavalier King Charles Spaniels was not supported by the study findings. An FSSP of highly activated platelets with pronounced serotonin binding was strongly associated with thrombocytopenia but not MMVD.

  9. Tests of Sunspot Number Sequences: 3. Effects of Regression Procedures on the Calibration of Historic Sunspot Data

    NASA Astrophysics Data System (ADS)

    Lockwood, M.; Owens, M. J.; Barnard, L.; Usoskin, I. G.

    2016-11-01

    We use sunspot-group observations from the Royal Greenwich Observatory (RGO) to investigate the effects of intercalibrating data from observers with different visual acuities. The tests are made by counting the number of groups [RB] above a variable cut-off threshold of observed total whole spot area (uncorrected for foreshortening) to simulate what a lower-acuity observer would have seen. The synthesised annual means of RB are then re-scaled to the full observed RGO group number [RA] using a variety of regression techniques. It is found that a very high correlation between RA and RB (r_{AB} > 0.98) does not prevent large errors in the intercalibration (for example sunspot-maximum values can be over 30 % too large even for such levels of r_{AB}). In generating the backbone sunspot number [R_{BB}], Svalgaard and Schatten (Solar Phys., 2016) force regression fits to pass through the scatter-plot origin, which generates unreliable fits (the residuals do not form a normal distribution) and causes sunspot-cycle amplitudes to be exaggerated in the intercalibrated data. It is demonstrated that the use of Quantile-Quantile ("Q-Q") plots to test for a normal distribution is a useful indicator of erroneous and misleading regression fits. Ordinary least-squares linear fits, not forced to pass through the origin, are sometimes reliable (although the optimum method used is shown to be different when matching peak and average sunspot-group numbers). However, other fits are only reliable if non-linear regression is used. From these results it is entirely possible that the inflation of solar-cycle amplitudes in the backbone group sunspot number as one goes back in time, relative to related solar-terrestrial parameters, is entirely caused by the use of inappropriate and non-robust regression techniques to calibrate the sunspot data.
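
    The core point (a fit forced through the origin can bias the slope and leave non-normal residuals) can be demonstrated in a few lines. Synthetic counts with a genuine offset stand in for the RGO data:

```python
# Sketch: forced-origin vs. ordinary least-squares fits, with a Q-Q check
# of the forced fit's residuals via scipy.stats.probplot.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(8)
r_b = np.linspace(1, 10, 40)                     # lower-acuity group counts
r_a = 1.1 * r_b + 1.5 + rng.normal(0, 0.4, 40)   # full counts, nonzero offset

slope_origin = (r_a @ r_b) / (r_b @ r_b)         # least squares through origin
slope, intercept = np.polyfit(r_b, r_a, 1)       # OLS with intercept

resid_origin = r_a - slope_origin * r_b
stats.probplot(resid_origin, plot=plt)           # skewed: the forced fit is biased
plt.show()
print(slope_origin, slope, intercept)
```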

  10. On the suitability of the copula types for the joint modelling of flood peaks and volumes along the Danube River

    NASA Astrophysics Data System (ADS)

    Kohnová, Silvia; Papaioannou, George; Bacigál, Tomáš; Szolgay, Ján; Hlavčová, Kamila; Loukas, Athanasios; Výleta, Roman

    2017-04-01

    Flood frequency analysis is often performed as a univariate analysis of flood peaks using a suitable theoretical probability distribution of the annual maximum flood peaks or peak over threshold values. However, other flood attributes, such as flood volume and duration, are also often necessary for the design of hydrotechnical structures and projects. In this study, the suitability of various copula families for a bivariate analysis of peak discharges and flood volumes has been tested on the streamflow data from gauging stations along the whole Danube River. Kendall's rank correlation coefficient (tau) was used to quantify the dependence between flood peak discharge and flood volume. The methodology is tested on two different data samples: 1) annual maximum flood (AMF) peaks with corresponding flood volumes, which is a typical choice for engineering studies, and 2) annual maximum flood (AMF) peaks combined with annual maximum flow volumes of fixed durations at 5, 10, 15, 20, 25, 30 and 60 days, which can be regarded as a regime analysis of the dependence between the extremes of both variables in a given year. The bivariate modelling of the peak discharge - flood volume couples is achieved with the use of the following copulas: Ali-Mikhail-Haq (AMH), Clayton, Frank, Joe, Gumbel, Hüsler-Reiss, Galambos, Tawn, Normal, Plackett and FGM. Scatterplots of the observed and simulated peak discharge - flood volume pairs and goodness-of-fit tests have been used to assess the overall applicability of the copulas as well as to observe any changes in suitable models along the Danube River. The results indicate that almost all of the considered Archimedean class copulas (e.g. Frank, Clayton and Ali-Mikhail-Haq) perform better than the other copula families selected for this study, and that for the second data sample mostly the upper-tail-flat copulas were suitable.
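
    For the Archimedean families, a common starting point is a method-of-moments fit from Kendall's tau. The sketch below uses the standard tau-theta relations for the Gumbel (tau = 1 - 1/theta) and Clayton (tau = theta/(theta + 2)) copulas on synthetic peak-volume pairs, not Danube data:

```python
# Sketch: method-of-moments copula parameters from Kendall's tau.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(9)
peaks = rng.gumbel(2000, 600, 80)                 # synthetic AMF peaks
volumes = 0.4 * peaks + rng.normal(0, 150, 80)    # correlated flood volumes

tau, _ = kendalltau(peaks, volumes)
theta_gumbel = 1.0 / (1.0 - tau)                  # inverts tau = 1 - 1/theta
theta_clayton = 2.0 * tau / (1.0 - tau)           # inverts tau = theta/(theta+2)
print(f"tau={tau:.2f}  Gumbel theta={theta_gumbel:.2f}  "
      f"Clayton theta={theta_clayton:.2f}")
```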

  11. Relationship between reactive oxygen species and water-soluble organic compounds: Time-resolved benzene carboxylic acids measurement in the coastal area during the KORUS-AQ campaign.

    PubMed

    Bae, Min-Suk; Schauer, James J; Lee, Taehyoung; Jeong, Ju-Hee; Kim, Yoo-Keun; Ro, Chul-Un; Song, Sang-Keun; Shon, Zang-Ho

    2017-12-01

    This study investigated the relationship between water-soluble organic compounds of ambient particulate matter (PM) and cellular redox activity, based on samples collected from May 28 to June 20 of 2016 at a west coastal site in the Republic of Korea during the KORea-US Air Quality (KORUS-AQ) campaign. Automated four-hour integrated samples were collected on 47 mm quartz fiber filters at a flow rate of 92 L per minute for the analysis of organic carbon (OC), water-soluble organic carbon (WSOC), elemental carbon (EC), water-soluble ions (WSIs), and benzene carboxylic acids (BCAs). The influence of atmospheric transport processes was assessed by the Weather Research and Forecasting (WRF) model. OC and EC were determined with a Sunset carbon analyzer, WSOC with a total organic carbon (TOC) analyzer, and BCAs by liquid chromatography-tandem mass spectrometry (LC-MS/MS). Twenty-four-hour integrated samples were collected for reactive oxygen species (ROS) analysis using a fluorogenic cell-based method to investigate the main chemical classes of toxicity. The results illustrate that WSOC and specific water-soluble species are associated with the oxidative potential of particulate matter. Pairwise correlation scatterplots between the daily-averaged WSOC and ROS (r2 of 0.81), and 135-BCA and ROS (r2 of 0.84), indicate that secondary organic aerosol production was highly associated with ROS activity. In addition, X-ray spectral analysis together with secondary electron images (SEIs) of PM2.5 particles collected during high ROS concentration events clearly indicates that water-soluble organic aerosols are major contributors to PM2.5 mass. This study provides insight into the components of particulate matter that are drivers of the oxidative potential of atmospheric particulate matter and potential tracers for this activity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. An approach of surface coal fire detection from ASTER and Landsat-8 thermal data: Jharia coal field, India

    NASA Astrophysics Data System (ADS)

    Roy, Priyom; Guha, Arindam; Kumar, K. Vinod

    2015-07-01

    Radiant temperature images from thermal remote sensing sensors are used to delineate surface coal fires by deriving a cut-off temperature that separates coal-fire from non-fire pixels. The temperature contrast between coal fires and background elements (rocks, vegetation, etc.) controls this cut-off temperature. This contrast varies across the coal field, as it is influenced by the variability of associated rock types, the proportion of vegetation cover and the intensity of the coal fires. We delineated coal fires from the background based on the separation of data clusters in a maximum-versus-mean radiant temperature scatterplot (13th band of ASTER and 10th band of Landsat-8), derived using randomly distributed homogeneous pixel-blocks (9 × 9 pixels for ASTER and 27 × 27 pixels for Landsat-8) covering the entire coal-bearing geological formation. For both datasets, the overall temperature variability of background and fires can be addressed using this regional cut-off. However, the summer-time ASTER data could not delineate fire pixels for one specific mine (Bhulanbararee), as opposed to the winter-time Landsat-8 data. The radiant temperature contrast between fire and background terrain elements specific to this mine differs from the regional fire-background contrast during summer, owing to greater solar heating of the background rocky outcrops, which reduces their temperature contrast with fire. The specific cut-off temperature determined for this mine, to extract this fire, differs from the regional cut-off and was derived by reducing the pixel-block size of the temperature data. Thus, the summer-time ASTER image is useful for fire detection but requires additional processing to determine a local threshold, alongside the regional threshold, to capture all the fires, whereas the winter Landsat-8 data were better for fire detection with a regional threshold alone.

  13. Time-Dependent Computed Tomographic Perfusion Thresholds for Patients With Acute Ischemic Stroke.

    PubMed

    d'Esterre, Christopher D; Boesen, Mari E; Ahn, Seong Hwan; Pordeli, Pooneh; Najm, Mohamed; Minhas, Priyanka; Davari, Paniz; Fainardi, Enrico; Rubiera, Marta; Khaw, Alexander V; Zini, Andrea; Frayne, Richard; Hill, Michael D; Demchuk, Andrew M; Sajobi, Tolulope T; Forkert, Nils D; Goyal, Mayank; Lee, Ting Y; Menon, Bijoy K

    2015-12-01

    Among patients with acute ischemic stroke, we determine computed tomographic perfusion (CTP) thresholds associated with follow-up infarction at different stroke onset-to-CTP and CTP-to-reperfusion times. Acute ischemic stroke patients with occlusion on computed tomographic angiography were acutely imaged with CTP. Noncontrast computed tomography and magnetic resonance diffusion-weighted imaging between 24 and 48 hours were used to delineate follow-up infarction. Reperfusion was assessed on conventional angiogram or 4-hour repeat computed tomographic angiography. Tmax, cerebral blood flow, and cerebral blood volume derived from delay-insensitive CTP postprocessing were analyzed using receiver operating characteristic curves to derive optimal thresholds for combined patient data (pooled analysis) and individual patients (patient-level analysis) based on time from stroke onset-to-CTP and CTP-to-reperfusion. One-way ANOVA and locally weighted scatterplot smoothing regression were used to test whether the derived optimal CTP thresholds differed by time. One hundred and thirty-two patients were included. Tmax thresholds of >16.2 and >15.8 s and absolute cerebral blood flow thresholds of <8.9 and <7.4 mL·min(-1)·100 g(-1) were associated with infarct if reperfused <90 min from CTP with onset <180 min. The discriminative ability of cerebral blood volume was modest. No statistically significant relationship was noted between stroke onset-to-CTP time and the optimal CTP thresholds for all parameters based on discrete or continuous time analysis (P>0.05). A statistically significant relationship existed between CTP-to-reperfusion time and the optimal thresholds for cerebral blood flow (P<0.001; r=0.59 and 0.77 for gray and white matter, respectively) and Tmax (P<0.001; r=-0.68 and -0.60 for gray and white matter, respectively) parameters. Optimal CTP thresholds associated with follow-up infarction depend on time from imaging to reperfusion. © 2015 American Heart Association, Inc.
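
    A common way to derive such an optimal threshold is to maximize the Youden index (sensitivity + specificity - 1) along the ROC curve. A sketch on synthetic voxel-level Tmax values (the distributions are invented, not patient data):

```python
# Sketch: ROC-derived optimal perfusion threshold via the Youden index.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(10)
tmax = np.concatenate([rng.normal(20, 5, 500),    # voxels that infarcted
                       rng.normal(8, 4, 1500)])   # voxels that survived
infarct = np.concatenate([np.ones(500), np.zeros(1500)])

fpr, tpr, thresholds = roc_curve(infarct, tmax)
youden = tpr - fpr                                # Youden J at each cutoff
print(f"optimal Tmax threshold > {thresholds[youden.argmax()]:.1f} s")
```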

  14. Prevalence of hyperuricemia and relation of serum uric acid with cardiovascular risk factors in a developing country

    PubMed Central

    Conen, D; Wietlisbach, V; Bovet, P; Shamlaye, C; Riesen, W; Paccaud, F; Burnier, M

    2004-01-01

    Background The prevalence of hyperuricemia has rarely been investigated in developing countries. The purpose of the present study was to investigate the prevalence of hyperuricemia and the association between uric acid levels and the various cardiovascular risk factors in a developing country with high average blood pressures (the Seychelles, Indian Ocean, population mainly of African origin). Methods This cross-sectional health examination survey was based on a population random sample from the Seychelles. It included 1011 subjects aged 25 to 64 years. Blood pressure (BP), body mass index (BMI), waist circumference, waist-to-hip ratio, total and HDL cholesterol, serum triglycerides and serum uric acid were measured. Data were analyzed using scatterplot smoothing techniques and gender-specific linear regression models. Results The prevalence of a serum uric acid level >420 μmol/L in men was 35.2% and the prevalence of a serum uric acid level >360 μmol/L was 8.7% in women. Serum uric acid was strongly related to serum triglycerides in men as well as in women (r = 0.73 in men and r = 0.59 in women, p < 0.001). Uric acid levels were also significantly associated but to a lesser degree with age, BMI, blood pressure, alcohol and the use of antihypertensive therapy. In a regression model, triglycerides, age, BMI, antihypertensive therapy and alcohol consumption accounted for about 50% (R2) of the serum uric acid variations in men as well as in women. Conclusions This study shows that the prevalence of hyperuricemia can be high in a developing country such as the Seychelles. Besides alcohol consumption and the use of antihypertensive therapy, mainly diuretics, serum uric acid is markedly associated with parameters of the metabolic syndrome, in particular serum triglycerides. Considering the growing incidence of obesity and metabolic syndrome worldwide and the potential link between hyperuricemia and cardiovascular complications, more emphasis should be put on the evolving prevalence of hyperuricemia in developing countries. PMID:15043756

  15. GPM Pre-Launch Algorithm Development for Physically-Based Falling Snow Retrievals

    NASA Technical Reports Server (NTRS)

    Jackson, Gail Skofronick; Tokay, Ali; Kramer, Anne W.; Hudak, David

    2008-01-01

    In this work we compare and correlate the long time series (Nov.-March) of precipitation rate measurements from the Parsivels and 2DVD with the passive (89, 150, 183+/-1, +/-3, +/-7 GHz) observations of NOAA's AMSU-B radiometer. There are approximately 5-8 AMSU-B overpass views of the CARE site per day. We separate the comparisons into categories of no precipitation, liquid rain and falling snow precipitation. Scatterplots between the Parsivel snowfall rates and AMSU-B brightness temperatures (TBs) did not show an exploitable relationship for retrievals. We further compared and contrasted brightness temperatures with other surface measurements such as temperature and relative humidity, with equally unsatisfying results. We found that there are similar TBs (especially at 89 and 150 GHz) for cases with falling snow and for non-precipitating cases. The comparisons indicate that surface emissivity contributions to the satellite-observed TB over land can add uncertainty in detecting and estimating falling snow. The newest results show that the cloud ice scattering signal in the AMSU-B data can be detected by computing clear-air TBs based on CARE radiosonde data and a rough estimate of surface emissivity. That is, the differences in computed TB and AMSU-B TB for precipitating and non-precipitating cases are distinct enough that the precipitating versus non-precipitating cases can be identified. These results require that the radiosonde releases are within an hour of the AMSU-B data and allow for three surface types: no snow on the ground, less than 5 cm of snow on the ground, and greater than 5 cm on the ground (as given by ground station data). Forest fraction and measured emissivities were combined to calculate the surface emissivities. The above work and future work to incorporate knowledge about falling snow retrievals into the framework of the expected GPM Bayesian retrievals will be described during this presentation.

  16. Tweeting PP: an analysis of the 2015–2016 Planned Parenthood controversy on Twitter

    PubMed Central

    Han, Leo; Han, Lisa; Darney, Blair; Rodriguez, Maria I.

    2018-01-01

    Objectives We analyzed Twitter tweets and Twitter-provided user data to give geographical, temporal and content insight into the use of social media in the Planned Parenthood video controversy. Methodology We randomly sampled the full Twitter repository (also known as the Firehose) (n=30,000) for tweets containing the phrase “planned parenthood” as well as group-defining hashtags “#defundpp” and “#standwithpp.” We used demographic content provided by the user and word analysis to generate charts, maps and timeline visualizations. Chi-square and t tests were used to compare differences in content, statistical references and dissemination strategies. Results From July 14, 2015, to January 30, 2016, 1,364,131 and 795,791 tweets contained “#defundpp” and “#standwithpp,” respectively. Geographically, #defundpp and #standwithpp were disproportionally distributed to the US South and West, respectively. Word analysis found that early tweets predominantly used “sensational” words and that the proportion of “political” and “call to action” words increased over time. Scatterplots revealed that #standwithpp tweets were clustered and episodic compared to #defundpp. #standwithpp users were more likely to be female [odds ratio (OR) 2.2, confidence interval (CI) 2.0–2.4] and have fewer followers (median 544 vs. 1578, p<.0001). #standwithpp and #defundpp did not differ significantly in their usage of data in tweets. #defundpp users were more likely to link to websites (OR 1.8, CI 1.7–1.9) and to other online dialogs (mean 3.3 vs. 2.0 p<.0001). Conclusion Social media analysis can be used to characterize and understand the content, tempo and location of abortion-related messages in today’s public spheres. Further research may inform proabortion efforts in terms of how information can be more effectively conveyed to the public. Implications This study has implications for how the medical community interfaces with the public with regards to abortion. It highlights how social media are actively exploited instruments for information and message dissemination. Researchers, providers and advocates should be monitoring social media and addressing the public through these modern channels. PMID:28867441
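
    The odds ratios quoted above come from 2x2 tables; a small helper computing an OR with a Wald 95% CI (the counts below are invented, not the study's) might look like this.

```python
import numpy as np

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from 2x2 counts [[a, b], [c, d]]."""
    or_ = (a * d) / (b * c)
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo, hi = np.exp(np.log(or_) - z * se), np.exp(np.log(or_) + z * se)
    return or_, lo, hi

# Illustrative counts only: female vs. not female among #standwithpp vs. #defundpp users.
print(odds_ratio_ci(1200, 800, 700, 1100))
```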

  17. An analysis of high fine aerosol loading episodes in north-central Spain in the summer 2013 - Impact of Canadian biomass burning episode and local emissions

    NASA Astrophysics Data System (ADS)

    Burgos, M. A.; Mateos, D.; Cachorro, V. E.; Toledano, C.; de Frutos, A. M.; Calle, A.; Herguedas, A.; Marcos, J. L.

    2018-07-01

    This work presents an evaluation of a surprising and unusual high-turbidity summer period in 2013 recorded in the north-central Iberian Peninsula (IP). The study covers three main pollution episodes characterized by very high aerosol optical depth (AOD) values with the presence of fine aerosol particles: the strongest long-range-transport Canadian Biomass Burning (BB) event recorded, one of the longest-lasting European Anthropogenic (A) episodes and an extremely strong regional BB. The Canadian BB episode was unusually strong, with maximum values of AOD(440 nm) ∼ 0.8, giving rise to the highest value recorded by photometer data in the IP with a clearly established Canadian origin. The anthropogenic pollution episode that originated in Europe was mainly a consequence of the strong impact of Canadian BB events over north-central Europe. As regards the local episode, a forest fire in the nature reserve near the Duero River (north-central IP) affected the population more than 200 km away from its source. These three episodes exhibited fingerprints in different aerosol columnar properties retrieved by sun-photometers of the AErosol RObotic NETwork (AERONET) as well as in particle mass surface concentrations, PMx, measured by the European Monitoring and Evaluation Programme (EMEP). Main statistics, time series and scatterplots relate aerosol loads (aerosol optical depth, AOD, and particulate matter, PM) to aerosol size quantities (Ångström Exponent and PM ratio). More detailed microphysical/optical properties retrieved by AERONET inversion products are analysed in depth to describe these events: contribution of fine and coarse particles to AOD and its ratio (the fine mode fraction), volume particle size distribution, fine volume fraction, effective radius, sphericity fraction, single scattering albedo and absorption optical depth. Due to its relevance in climate studies, the aerosol radiative effect has been quantified for the top and bottom of the atmosphere, obtaining mean daily values for this extraordinary summer period of -14.5 and -47.5 W·m⁻², respectively.

  18. Is BMI a valid measure of obesity in postmenopausal women?

    PubMed

    Banack, Hailey R; Wactawski-Wende, Jean; Hovey, Kathleen M; Stokes, Andrew

    2018-03-01

    Body mass index (BMI) is a widely used indicator of obesity status in clinical settings and population health research. However, there are concerns about the validity of BMI as a measure of obesity in postmenopausal women. Unlike BMI, which is an indirect measure of obesity and does not distinguish lean from fat mass, dual-energy x-ray absorptiometry (DXA) provides a direct measure of body fat and is considered a gold standard of adiposity measurement. The goal of this study is to examine the validity of using BMI to identify obesity in postmenopausal women relative to total body fat percent measured by DXA scan. Data from 1,329 postmenopausal women participating in the Buffalo OsteoPerio Study were used in this analysis. At baseline, women ranged in age from 53 to 85 years. Obesity was defined as BMI ≥ 30 kg/m² and body fat percent (BF%) greater than 35%, 38%, or 40%. We calculated sensitivity, specificity, positive predictive value, and negative predictive value to evaluate the validity of BMI-defined obesity relative to BF%. We further explored the validity of BMI relative to BF% using graphical tools, such as scatterplots and receiver-operating characteristic curves. Youden's J index was used to determine the empirical optimal BMI cut-point for each level of BF%-defined obesity. The sensitivity of BMI-defined obesity was 32.4% for 35% body fat, 44.6% for 38% body fat, and 55.2% for 40% body fat. Corresponding specificity values were 99.3%, 97.1%, and 94.6%, respectively. The empirical optimal BMI cut-point to define obesity is 24.9 kg/m² for 35% BF, 26.49 kg/m² for 38% BF, and 27.05 kg/m² for 40% BF according to Youden's index. Results demonstrate that a BMI cut-point of 30 kg/m² does not appear to be an appropriate indicator of true obesity status in postmenopausal women. Empirical estimates of the validity of BMI from this study may be used by other investigators to account for BMI-related misclassification in older women.
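
    Youden's J selection of an empirical BMI cut-point takes only a few lines. The data here are simulated to mimic the reported pattern, so the printed cut-point is not the study's estimate.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Simulated BMI and DXA body-fat values; obesity defined here as BF% > 38.
rng = np.random.default_rng(2)
body_fat = rng.normal(37, 5, 1000)
bmi = 0.55 * body_fat + rng.normal(6.0, 2.5, 1000)
obese = (body_fat > 38).astype(int)

# Youden's J = sensitivity + specificity - 1 = TPR - FPR at each candidate threshold.
fpr, tpr, thresholds = roc_curve(obese, bmi)
optimal_cut = thresholds[np.argmax(tpr - fpr)]
print(round(float(optimal_cut), 2))
```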

  19. Characterizing China's energy consumption with selective economic factors and energy-resource endowment: a spatial econometric approach

    NASA Astrophysics Data System (ADS)

    Jiang, Lei; Ji, Minhe; Bai, Ling

    2015-06-01

    Coupled with intricate regional interactions, the provincial disparity of energy-resource endowment and other economic conditions in China has created spatially complex energy consumption patterns that require analyses beyond the traditional ones. To distill the spatial effect of the resource and economic factors on China's energy consumption, this study recast the traditional econometric model in a spatial context. Several analytic steps were taken to reveal different aspects of the issue. Per capita energy consumption (AVEC) at the provincial level was first mapped, revealing spatial clusters of high energy consumption located in either well-developed or energy-resource-rich regions. This visual spatial autocorrelation pattern of AVEC was quantitatively tested to confirm its existence among Chinese provinces. A Moran scatterplot was employed to further display a relatively centralized trend occurring in those provinces that had parallel AVEC, revealing a spatial structure with attraction among high-high or low-low regions and repellency among high-low or low-high regions. By comparing the ordinary least squares (OLS) model with its spatial econometric counterparts, a spatial error model (SEM) was selected to analyze the impact of major economic determinants on AVEC. While the analytic results revealed a significant positive correlation between AVEC and economic development, other determinants showed some intricate influential patterns. The provinces endowed with rich energy reserves were inclined to consume much more energy than those without such endowments, whereas changing the economic structure by increasing the proportion of secondary and tertiary industries also tended to increase energy consumption. Both situations seem to underpin the fact that these provinces were largely trapped in economies supported by technologies of low energy efficiency during the period, while other parts of the country were rapidly modernized by adopting advanced technologies and more efficient industries. On the other hand, institutional change (i.e., marketization) and innovation (i.e., technological progress) exerted positive impacts on AVEC improvement, as expected in this and other studies. Finally, the model comparison indicated that SEM was capable of separating the spatial effect from the error term of OLS, so as to improve goodness-of-fit and the significance level of individual determinants.
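
    For readers unfamiliar with the Moran scatterplot, the sketch below computes global Moran's I and the scatterplot coordinates (standardized value versus spatial lag) for a toy four-region contiguity matrix; a real analysis would use province-level weights and data.

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I for values x under a row-standardized weight matrix w."""
    z = x - x.mean()
    w = w / w.sum(axis=1, keepdims=True)  # row-standardize
    return (len(x) / w.sum()) * (z @ w @ z) / (z @ z)

# Toy data: per capita energy consumption for four contiguous regions.
x = np.array([10.0, 12.0, 3.0, 2.5])
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(round(morans_i(x, w), 3))  # positive values indicate spatial clustering

# Moran scatterplot coordinates: each point is (z_i, spatial lag of z_i); the
# high-high and low-low quadrants show attraction, high-low and low-high repellency.
z = x - x.mean()
lag = (w / w.sum(axis=1, keepdims=True)) @ z
print(np.round(np.c_[z, lag], 2))
```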

  20. Association between progression-free survival and health-related quality of life in oncology: a systematic review protocol.

    PubMed

    Kovic, Bruno; Guyatt, Gordon; Brundage, Michael; Thabane, Lehana; Bhatnagar, Neera; Xie, Feng

    2016-09-02

    There is an increasing number of new oncology drugs being studied, approved and put into clinical practice based on improvement in progression-free survival, when no overall survival benefits exist. In oncology, the association between progression-free survival and health-related quality of life is currently unknown, despite its importance for patients with cancer, and the unverified assumption that longer progression-free survival indicates improved health-related quality of life. Thus far, only 1 study has investigated this association, providing insufficient evidence and inconclusive results. The objective of this study protocol is to provide increased transparency in supporting a systematic summary of the evidence bearing on this association in oncology. Using the OVID platform in MEDLINE, Embase and Cochrane databases, we will conduct a systematic review of randomised controlled human trials addressing oncology issues published starting in 2000. A team of reviewers will, in pairs, independently screen and abstract data using standardised, pilot-tested forms. We will employ numerical integration to calculate mean incremental area under the curve between treatment groups in studies for health-related quality of life, along with total related error estimates, and a 95% CI around incremental area. To describe the progression-free survival to health-related quality of life association, we will construct a scatterplot for incremental health-related quality of life versus incremental progression-free survival. To estimate the association, we will use a weighted simple regression approach, comparing mean incremental health-related quality of life with either median incremental progression-free survival time or the progression-free survival HR, in the absence of overall survival benefit. Identifying direction and magnitude of association between progression-free survival and health-related quality of life is critically important in interpreting results of oncology trials. Systematic evidence produced from our study will contribute to improvement of patient care and practice of evidence-based medicine in oncology. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
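
    The planned numerical integration of quality-of-life curves amounts to a trapezoidal area calculation; here is a minimal sketch on invented trajectories showing an incremental AUC between two arms.

```python
import numpy as np

def trapezoid_auc(y, t):
    """Trapezoidal-rule area under a curve sampled at times t."""
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(t)))

# Invented mean HRQoL scores at 0, 3, 6, 9 and 12 months for two arms.
t = np.array([0.0, 3.0, 6.0, 9.0, 12.0])
treatment = np.array([70.0, 72.0, 71.0, 69.0, 68.0])
control = np.array([70.0, 68.0, 66.0, 65.0, 63.0])

incremental_auc = trapezoid_auc(treatment, t) - trapezoid_auc(control, t)
print(incremental_auc)  # score-months gained by the treatment arm
```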

  1. Prevalence of Anti-Tuberculosis Drug Resistance in Foreign-Born Tuberculosis Cases in the U.S. and in Their Countries of Origin

    PubMed Central

    Taylor, Allison B.; Kurbatova, Ekaterina V.; Cegielski, J. Peter

    2012-01-01

    Background Foreign-born individuals comprise >50% of tuberculosis (TB) cases in the U.S. Since anti-TB drug resistance is more common in most other countries, when evaluating a foreign-born individual for TB, one must consider the risk of drug resistance. Naturally, clinicians query The Global Project on Anti-tuberculosis Drug Resistance Surveillance (Global DRS) which provides population-based data on the prevalence of anti-TB drug resistance in 127 countries starting in 1994. However, foreign-born persons in the U.S. are a biased sample of the population of their countries of origin, and Global DRS data may not accurately predict their risk of drug resistance. Since implementing drug resistance surveillance in 1993, the U.S. National TB Surveillance System (NTSS) has accumulated systematic data on over 130,000 foreign-born TB cases from more than 200 countries and territories. Our objective was to determine whether the prevalence of drug resistance among foreign-born TB cases correlates better with data from the Global DRS or with data on foreign-born TB cases in the NTSS. Methods and Findings We compared the prevalence of resistance to isoniazid and rifampin among foreign-born TB cases in the U.S., 2007–2009, with US NTSS data from 1993 to 2006 and with Global DRS data from 1994–2007 visually with scatterplots and statistically with correlation and linear regression analyses. Among foreign-born TB cases in the U.S., 2007–2009, the prevalence of isoniazid resistance and multidrug resistance (MDR, i.e. resistance to isoniazid and rifampin), correlated much better with 1993–2006 US surveillance data (isoniazid: r = 0.95, P<.001, MDR: r = 0.75, P<.001) than with Global DRS data, 1994–2007 (isoniazid: r = 0.55, P = .001; MDR: r = 0.50, P<.001). Conclusion Since 1993, the US NTSS has accumulated sufficient data on foreign-born TB cases to estimate the risk of drug resistance among such individuals better than data from the Global DRS. PMID:23145161

  2. Evaluation of NLDAS 12-km and downscaled 1-km temperature products in New York State for potential use in health exposure response studies

    NASA Astrophysics Data System (ADS)

    Estes, M. G., Jr.; Insaf, T.; Crosson, W. L.; Al-Hamdan, M. Z.

    2017-12-01

    Heat exposure metrics (maximum and minimum daily temperatures) have a close relationship with human health. While meteorological station data provide a good source of point measurements, temporally and spatially consistent temperature data are needed for health studies. Reanalysis data such as the North American Land Data Assimilation System's (NLDAS) 12-km gridded product are an effort to resolve spatio-temporal environmental data issues; however, the resolution may be too coarse to accurately capture the effects of elevation, mixed land/water areas, and urbanization. As part of this NASA Applied Sciences Program funded project, the NLDAS 12-km air temperature product has been downscaled to 1 km using MODIS Land Surface Temperature patterns. Limited validation of the native 12-km NLDAS reanalysis data has been undertaken. Our objective is to evaluate the accuracy of both the 12-km and 1-km downscaled products using US Historical Climatology Network station data geographically dispersed across New York State. Statistical methods including correlation, scatterplots, time series and summary statistics were used to determine the accuracy of the remotely-sensed maximum and minimum temperature products. The specific effects of elevation and slope on remotely-sensed temperature product accuracy were determined with 10-m digital elevation data, which were used to calculate percent slope and link with the temperature products at multiple scales. Preliminary results indicate the downscaled temperature product improves accuracy over the native 12-km temperature product, with average correlation improvements from 0.81 to 0.85 for minimum and 0.71 to 0.79 for maximum temperatures in 2009. However, the benefits vary temporally and geographically. Our results will inform health studies using remotely-sensed temperature products to determine health risk from excessive heat by providing a more robust assessment of the accuracy of the 12-km NLDAS product and the additional accuracy gained from the 1-km downscaled product. Also, the results will be shared with the National Weather Service to determine potential benefits to heat warning systems, and evaluated for inclusion in the Centers for Disease Control and Prevention (CDC) Environmental Public Health Tracking Network as a resource for the health community.

  3. Metrics for Identifying Food Security Status and the Population with Potential to Benefit from Nutrition Interventions in the Lives Saved Tool (LiST).

    PubMed

    Jackson, Bianca D; Walker, Neff; Heidkamp, Rebecca

    2017-11-01

    Background: The Lives Saved Tool (LiST) uses the poverty head-count ratio at $1.90/d as a proxy for food security to identify the percentage of the population with the potential to benefit from balanced energy supplementation and complementary feeding (CF) interventions, following the approach used for the Lancet's 2008 series on Maternal and Child Undernutrition. Because much work has been done in the development of food security indicators, a re-evaluation of the use of this indicator was warranted. Objective: The aim was to re-evaluate the use of the poverty head-count ratio at $1.90/d as the food security proxy indicator in LiST. Methods: We carried out a desk review to identify available indicators of food security. We identified 3 indicators and compared them by using scatterplots, Spearman's correlations, and Bland-Altman plot analysis. We generated LiST projections to compare the modeled impact results with the use of the different indicators. Results: There are many food security indicators available, but only 3 additional indicators met the data availability requirements to be used as the food security indicator in LiST. As expected, the analyzed food security indicators were significantly positively correlated (P < 0.001), but there was generally poor agreement between them. The disparity between the indicators also increases as the values of the indicators increase. Consequently, the choice of indicator can have a considerable effect on the impact of interventions modeled in LiST, especially in food-insecure contexts. Conclusions: No single indicator was identified that is ideal for measuring the percentage of the population that is food insecure for LiST. Thus, LiST will use the food security indicators that were used in the meta-analyses that produced the effect estimates. These are the poverty head-count ratio at $1.90/d for CF interventions and the prevalence of a low body mass index in women of reproductive age for balanced energy supplementation interventions. © 2017 American Society for Nutrition.
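
    The combination of Spearman correlation and Bland-Altman agreement used here is easy to reproduce. The sketch below uses fabricated indicator values to show how two indicators can correlate strongly yet agree poorly.

```python
import numpy as np
from scipy.stats import spearmanr

# Fabricated paired country-level values of two food-security indicators.
rng = np.random.default_rng(3)
a = rng.uniform(5, 60, 40)
b = 0.85 * a + rng.normal(0, 8, 40)  # related, but systematically diverging

rho, p = spearmanr(a, b)             # rank correlation can be high...

# ...while Bland-Altman statistics reveal bias and wide limits of agreement.
diff = a - b
bias = diff.mean()
spread = 1.96 * diff.std(ddof=1)
print(round(rho, 2), round(bias, 1),
      (round(bias - spread, 1), round(bias + spread, 1)))
```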

  4. SeeSway - A free web-based system for analysing and exploring standing balance data.

    PubMed

    Clark, Ross A; Pua, Yong-Hao

    2018-06-01

    Computerised posturography can be used to assess standing balance, and can predict poor functional outcomes in many clinical populations. A key limitation is the disparate signal filtering and analysis techniques, with many methods requiring custom computer programs. This paper discusses the creation of a freely available web-based software program, SeeSway (www.rehabtools.org/seesway), which was designed to provide powerful tools for pre-processing, analysing and visualising standing balance data in an easy-to-use, platform-independent website. SeeSway links an interactive web platform with file upload capability to software systems including LabVIEW, Matlab, Python and R to perform the data filtering, analysis and visualisation of standing balance data. Input data can consist of any signal that comprises an anterior-posterior and medial-lateral coordinate trace, such as center of pressure or mass displacement. This allows it to be used with systems including criterion-reference commercial force platforms and three-dimensional motion analysis, smartphones, accelerometers and low-cost technology such as the Nintendo Wii Balance Board and Microsoft Kinect. Filtering options include Butterworth, weighted and unweighted moving average, and discrete wavelet transforms. Analysis methods include standard techniques such as path length, amplitude, and root mean square in addition to less common but potentially promising methods such as sample entropy, detrended fluctuation analysis and multiresolution wavelet analysis. These data are visualised using scalograms (which chart the change in frequency content over time), scatterplots and standard line charts. This provides the user with a detailed understanding of their results, and how their different pre-processing and analysis method selections affect their findings. An example of the data analysis techniques is provided in the paper, with graphical representation of how advanced analysis methods can better discriminate between someone with neurological impairment and a healthy control. The goal of SeeSway is to provide a simple yet powerful educational and research tool to explore how standing balance is affected in aging and clinical populations. Copyright © 2018 Elsevier B.V. All rights reserved.
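
    As an illustration of the pre-processing and analysis options listed (not SeeSway's actual code, which runs server-side in LabVIEW/Matlab/Python/R), a Butterworth filter plus two of the standard sway metrics might look like this.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Simulated 100 Hz centre-of-pressure trace (anterior-posterior and medial-lateral, cm).
fs = 100.0
rng = np.random.default_rng(4)
ap = np.cumsum(rng.normal(0, 0.02, 3000))
ml = np.cumsum(rng.normal(0, 0.02, 3000))

# Zero-phase 4th-order low-pass Butterworth filter with a 10 Hz cutoff.
b, a = butter(4, 10.0 / (fs / 2), btype="low")
ap_f, ml_f = filtfilt(b, a, ap), filtfilt(b, a, ml)

# Path length and root-mean-square amplitude, two metrics named in the abstract.
path_length = float(np.sum(np.hypot(np.diff(ap_f), np.diff(ml_f))))
rms_ap = float(np.sqrt(np.mean((ap_f - ap_f.mean()) ** 2)))
print(round(path_length, 1), round(rms_ap, 2))
```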

  5. [Hyperspectral Estimation of Apple Tree Canopy LAI Based on SVM and RF Regression].

    PubMed

    Han, Zhao-ying; Zhu, Xi-cun; Fang, Xian-yi; Wang, Zhuo-yuan; Wang, Ling; Zhao, Geng-Xing; Jiang, Yuan-mao

    2016-03-01

    Leaf area index (LAI) is a dynamic index of crop population size. Hyperspectral technology can be used to estimate apple canopy LAI rapidly and nondestructively, providing a reference for monitoring tree growth and estimating yield. Red Fuji apple trees at the full fruit-bearing stage were the research objects. Canopy spectral reflectance and LAI values of ninety apple trees were measured with an ASD FieldSpec 3 spectrometer and an LAI-2200 in thirty orchards in the Qixia research area of Shandong Province over two consecutive years. The optimal vegetation indices were selected by correlation analysis of the original spectral reflectance and vegetation indices. Models predicting LAI were built using the multivariate regression methods of support vector machine (SVM) and random forest (RF). The new vegetation indices GNDVI527, NDVI676, RVI682, FDNVI656 and GRVI517 and the two previous main vegetation indices, NDVI670 and NDVI705, are in accordance with LAI. The calibration-set determination coefficient (C-R2) of 0.920 and validation-set determination coefficient (V-R2) of 0.889 of the RF regression model are higher than those of the SVM regression model by 0.045 and 0.033, respectively. The calibration-set root mean square error (C-RMSE) of 0.249 and validation-set root mean square error (V-RMSE) of 0.236 are lower than those of the SVM regression model by 0.054 and 0.058, respectively. The ratios of performance to deviation for the calibration and validation sets (C-RPD and V-RPD) reached 3.363 and 2.520, higher than those of the SVM regression model by 0.598 and 0.262, respectively. The slopes of the measured-versus-predicted scatterplot trend lines for the calibration and validation sets (C-S and V-S) are close to 1. The estimation result of the RF regression model is better than that of the SVM model, and the RF regression model can be used to estimate the LAI of Red Fuji apple trees in the full fruit period.
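
    A minimal sklearn analogue of the SVM-versus-RF comparison, run on synthetic vegetation-index data rather than the orchard measurements, shows how the reported statistics (R2, RMSE, RPD) are obtained.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

# Synthetic stand-in: seven vegetation indices predicting LAI for 90 canopies.
rng = np.random.default_rng(5)
X = rng.normal(size=(90, 7))
y = 2.0 + X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.3, 90)

X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("SVM", SVR()), ("RF", RandomForestRegressor(random_state=0))]:
    model.fit(X_cal, y_cal)
    pred = model.predict(X_val)
    rmse = mean_squared_error(y_val, pred) ** 0.5
    rpd = y_val.std(ddof=1) / rmse  # ratio of performance to deviation
    print(name, round(r2_score(y_val, pred), 3), round(rmse, 3), round(rpd, 2))
```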

  6. FlooDSuM - a decision support methodology for assisting local authorities in flood situations

    NASA Astrophysics Data System (ADS)

    Schwanbeck, Jan; Weingartner, Rolf

    2014-05-01

    Decision making in flood situations is a difficult task, especially in small to medium-sized mountain catchments (30 - 500 km2), which are usually characterized by complex topography, high drainage density and quick runoff response to rainfall events. Operating hydrological models driven by numerical weather prediction systems, which have a lead-time of several hours up to even a few days, would be beneficial in this case, as time for prevention could be gained. However, the spatial and quantitative accuracy of such meteorological forecasts usually decreases with increasing lead-time. In addition, the sensitivity of rainfall-runoff models to inaccuracies in estimations of areal rainfall increases with decreasing catchment size. Accordingly, decisions on flood alerts should ideally be based on areal rainfall from high-resolution, short-term numerical weather prediction, nowcasts or even real-time measurements, which is transformed into runoff by a hydrological model. In order to benefit from the best possible rainfall data while retaining enough time for alerting and prevention, the hydrological model should be fast and easily applicable by decision makers within local authorities themselves. The proposed decision support methodology FlooDSuM (Flood Decision Support Methodology) aims to meet those requirements. Applying FlooDSuM, a few successive binary decisions of increasing complexity have to be processed following a flow-chart-like structure. Prepared data and straightforwardly applicable tools are provided for each of these decisions. Maps showing the current flood disposition are used for the first step. As long as the danger of flooding cannot be excluded, increasingly complex and time-consuming methods are applied. For the final decision, a set of scatter-plots relating areal precipitation to peak flow is provided. These plots also take further decisive parameters into account, such as storm duration, the distribution of rainfall intensity in time, and the catchment's antecedent moisture conditions. The proposed approach is currently being tested in two catchments in the Swiss Pre-Alps and Alps. We will show the general setup and selected results. The findings of these case studies will lead to further improvements of the proposed approach.

  7. WebDMS: A Web-Based Data Management System for Environmental Data

    NASA Astrophysics Data System (ADS)

    Ekstrand, A. L.; Haderman, M.; Chan, A.; Dye, T.; White, J. E.; Parajon, G.

    2015-12-01

    DMS is an environmental Data Management System to manage, quality-control (QC), summarize, document chain-of-custody, and disseminate data from networks ranging in size from a few sites to thousands of sites, instruments, and sensors. The server-client desktop version of DMS is used by local and regional air quality agencies (including the Bay Area Air Quality Management District, the South Coast Air Quality Management District, and the California Air Resources Board), the EPA's AirNow Program, and the EPA's AirNow-International (AirNow-I) program, which offers countries the ability to run an AirNow-like system. As AirNow's core data processing engine, DMS ingests, QCs, and stores real-time data from over 30,000 active sensors at over 5,280 air quality and meteorological sites from over 130 air quality agencies across the United States. As part of the AirNow-I program, several instances of DMS are deployed in China, Mexico, and Taiwan. The U.S. Department of State's StateAir Program also uses DMS for five regions in China and plans to expand to other countries in the future. Recent development has begun to migrate DMS from an onsite desktop application to WebDMS, a web-based application designed to take advantage of cloud hosting and computing services to increase scalability and lower costs. WebDMS will continue to provide easy-to-use data analysis tools, such as time-series graphs, scatterplots, and wind- or pollution-rose diagrams, as well as allowing data to be exported to external systems such as the EPA's Air Quality System (AQS). WebDMS will also provide new GIS analysis features and a suite of web services through a RESTful web API. These changes will better meet air agency needs and allow for broader national and international use (for example, by the AirNow-I partners). We will talk about the challenges and advantages of migrating DMS to the web, modernizing the DMS user interface, and making it more cost-effective to enhance and maintain over time.

  8. Clinical haematology and biochemistry profiles of cattle naturally infected with Theileria orientalis Ikeda type in New Zealand.

    PubMed

    Lawrence, K E; Forsyth, S F; Vaatstra, B L; McFadden, Amj; Pulford, D J; Govindaraju, K; Pomroy, W E

    2018-01-01

    To present the haematology and biochemistry profiles for cattle in New Zealand naturally infected with Theileria orientalis Ikeda type and investigate if the results differed between adult dairy cattle and calves aged <6 months. Haematology and biochemistry results were obtained from blood samples from cattle which tested positive for T. orientalis Ikeda type by PCR, that were submitted to veterinary laboratories in New Zealand between October 2012 and November 2014. Data sets for haematology and biochemistry results were prepared for adult dairy cattle (n=62 and 28, respectively) and calves aged <6 months (n=62 and 28, respectively), which were matched on the basis of individual haematocrit (HCT). Results were compared between age groups when categorised by HCT. Selected variables were plotted against individual HCT, and locally weighted scatterplot smoothing (Loess) curves were fitted to the data for adult dairy cattle and calves <6 months old. When categorised by HCT, the proportion of samples with HCT <0.15 L/L (severe anaemia) was greater for adult dairy cattle than for beef or dairy calves, for both haematology (p<0.002) and biochemistry (p<0.001) submissions. There were differences (p<0.05) between adult dairy cattle and calves aged <6 months in the relationships between HCT and red blood cell counts, mean corpuscular volume, mean corpuscular haemoglobin, mean corpuscular haemoglobin concentrations, lymphocyte and eosinophil counts, and activities of glutamate dehydrogenase and aspartate aminotransferase. In both age groups anisocytosis was frequently recorded. The proportion of blood smears showing mild and moderate macrocytosis was greater in adults than calves (p=0.01), and mild and moderate poikilocytosis was greater in calves than adults (p=0.005). The haematology and biochemistry changes observed in cattle infected with T. orientalis Ikeda type were consistent with extravascular haemolytic anaemia. Adult dairy cattle were more likely to be severely anaemic than calves. There were differences in haematology and biochemistry profiles between adult dairy cattle and calves, but most of these differences likely had a physiological rather than pathological basis. Overall, the haematological changes in calves aged <6 months appeared less severe than in adult dairy cattle.

  9. Satellite and ground-based remote sensing of aerosols during intense haze event of October 2013 over lahore, Pakistan

    NASA Astrophysics Data System (ADS)

    Tariq, Salman; Zia, ul-Haq; Ali, Muhammad

    2016-02-01

    Due to population growth and economic development, mega-cities are facing increasingly frequent haze events, which have important effects on the regional environment and climate. Understanding these effects requires in-depth knowledge of the optical and physical properties of aerosols under intense haze conditions. In this paper an effort has been made to analyze the microphysical and optical properties of aerosols during an intense haze event over the mega-city of Lahore by using remote sensing data obtained from satellites (Terra/Aqua Moderate-resolution Imaging Spectroradiometer (MODIS) and Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO)) and a ground-based instrument (AErosol RObotic NETwork (AERONET)) during 6-14 October 2013. The instantaneous highest value of Aerosol Optical Depth (AOD) was observed to be 3.70 on 9 October 2013, followed by 3.12 on 8 October 2013. The primary cause of such high values is large-scale crop residue burning and urban-industrial emissions in the study region. AERONET observations show a daily mean AOD of 2.36, which is eight times higher than the value observed on a normal day. The observed fine-mode volume concentration is more than 1.5 times greater than the coarse-mode volume concentration on the high aerosol burden day. We also find high values (~0.95) of Single Scattering Albedo (SSA) on 9 October 2013. A scatterplot between AOD (500 nm) and Angstrom exponent (440-870 nm) reveals that biomass burning/urban-industrial aerosols are the dominant aerosol type on the heavy aerosol loading day over Lahore. A MODIS fire activity image suggests that the areas to the southeast of Lahore, across the border with India, are dominated by biomass burning activities. A Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model backward trajectory showed that winds at 1000 m above the ground are responsible for transport from the southeast biomass burning region to Lahore. CALIPSO-derived aerosol subtypes from a vertical profile taken on 10 October 2013 classify the widespread aerosol burden as smoke, polluted continental and dust aerosols.

  10. A critical review of the ESCAPE project for estimating long-term health effects of air pollution.

    PubMed

    Lipfert, Frederick W

    2017-02-01

    The European Study of Cohorts for Air Pollution Effects (ESCAPE) is a 13-nation study of long-term health effects of air pollution based on subjects pooled from up to 22 cohorts that were intended for other purposes. Twenty-five papers have been published on associations of various health endpoints with long-term exposures to NOx, NO2, traffic indicators, PM10, PM2.5 and PM constituents including absorbance (elemental carbon). Seven additional ESCAPE papers found moderate correlations (R2=0.3-0.8) between measured air quality and the land-use regression estimates that were used; personal exposures were not considered. I found no project summaries or comparisons across papers; here I conflate the 25 ESCAPE findings in the context of other recent European epidemiology studies. Because one ESCAPE cohort contributed about half of the subjects, I consider it and the other 18 cohorts separately to compare their contributions to the combined risk estimates. I emphasize PM2.5 and confirm the published hazard ratio of 1.14 (1.04-1.26) per 10μg/m3 for all-cause mortality. The ESCAPE papers found 16 statistically significant (p<0.05) risks among the 125 pollutant-endpoint combinations: 4 each for PM2.5 and PM10, 1 for PM absorbance, 5 for NO2, and 2 for traffic. No PM constituent was consistently significant. No significant associations were reported for cardiovascular mortality; low birth weight was significant for all pollutants except PM absorbance. Based on associations with PM2.5, I find large differences between all-cause death estimates and the sum of specific-cause death estimates. Scatterplots of PM2.5 mortality risks by cause show no consistency across the 18 cohorts, ostensibly because of the relatively small numbers of subjects. Overall, I find the ESCAPE project inconclusive and I question whether the efforts required to estimate exposures for small cohorts were worthwhile. I suggest that detailed studies of the large cohort using historical exposures and additional cardiovascular risk factors might be productive. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Coastal fog frequency and watershed recharge metrics for coho salmon conservation recovery

    NASA Astrophysics Data System (ADS)

    Torregrosa, A.; Flint, L. E.; Flint, A. L.

    2015-12-01

    Endangered Central California Coast coho salmon benefit from summertime occurrences of fog and low cloud cover (FLCC). Watershed hydrology is a critical factor affecting population dynamics of coho, and FLCC affects it in three ways. First, streams remain cooler in late summer when shaded by FLCC; high temperatures are lethal to coho. Second, more water reaches the stream when FLCC shades riparian vegetation, thereby reducing evapotranspiration. Third, fog drip adds water directly into streams. The increased stream flow can be a critical resource in late summer, when coastal watersheds are at their lowest subsurface discharge rate. The associated low stream flows can trap juvenile coho in pools, resulting in high rates of mortality due to higher predation exposure, overheating and, if the pool dries up, lack of habitat. The 2012 National Marine Fisheries Service Final Recovery Plan identified 75 watersheds that historically supported coho salmon. The recovery team used biological and environmental metrics to identify subwatersheds where recovery action implementation had the highest probability of improving coho salmon population survival. These subwatersheds were classified into three categories: Core (n=89), Phase I (n=93), or Phase II (n=157) (CPP). Differences among the CPP-rated subwatersheds were explored using FLCC frequency data, derived from a decade of hourly weather satellite imagery, combined with groundwater recharge metrics from the Basin Characterization Model (BCM) to provide additional environmental dimensions. Average summertime (June, July, August, and September) FLCC in the subwatersheds ranged from 2.2 to 11.3 h/day, and cumulative groundwater recharge ranged from 6 mm to 894 mm. A two-dimensional scatterplot (x = FLCC; y = recharge) of subwatersheds divided into four quadrants (low FLCC-low recharge, low-high, high-low, high-high) shows 11 Core, 6 Phase I, and 5 Phase II areas in the high-high quadrant. The majority of Phase I and II areas are in the low-low quadrant, whereas the majority of Core areas are in low-high. Future conditions will affect the capacity of these subwatershed areas to continue to support coho populations. FLCC metrics for interannual variation and future forecasts of recharge and air temperatures were used to analyze the difference in capacity (resilience) among areas.

  12. Tweeting PP: an analysis of the 2015-2016 Planned Parenthood controversy on Twitter.

    PubMed

    Han, Leo; Han, Lisa; Darney, Blair; Rodriguez, Maria I

    2017-12-01

    We analyzed Twitter tweets and Twitter-provided user data to give geographical, temporal and content insight into the use of social media in the Planned Parenthood video controversy. We randomly sampled the full Twitter repository (also known as the Firehose) (n=30,000) for tweets containing the phrase "planned parenthood" as well as group-defining hashtags "#defundpp" and "#standwithpp." We used demographic content provided by the user and word analysis to generate charts, maps and timeline visualizations. Chi-square and t tests were used to compare differences in content, statistical references and dissemination strategies. From July 14, 2015, to January 30, 2016, 1,364,131 and 795,791 tweets contained "#defundpp" and "#standwithpp," respectively. Geographically, #defundpp and #standwithpp were disproportionally distributed to the US South and West, respectively. Word analysis found that early tweets predominantly used "sensational" words and that the proportion of "political" and "call to action" words increased over time. Scatterplots revealed that #standwithpp tweets were clustered and episodic compared to #defundpp. #standwithpp users were more likely to be female [odds ratio (OR) 2.2, confidence interval (CI) 2.0-2.4] and have fewer followers (median 544 vs. 1578, p<.0001). #standwithpp and #defundpp did not differ significantly in their usage of data in tweets. #defundpp users were more likely to link to websites (OR 1.8, CI 1.7-1.9) and to other online dialogs (mean 3.3 vs. 2.0 p<.0001). Social media analysis can be used to characterize and understand the content, tempo and location of abortion-related messages in today's public spheres. Further research may inform proabortion efforts in terms of how information can be more effectively conveyed to the public. This study has implications for how the medical community interfaces with the public with regards to abortion. It highlights how social media are actively exploited instruments for information and message dissemination. Researchers, providers and advocates should be monitoring social media and addressing the public through these modern channels. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Use of multiple relocation techniques to better understand seismotectonic structure in Greece

    NASA Astrophysics Data System (ADS)

    Bozionelos, George; Ganas, Athanassios; Karastathis, Vassilios; Moshou, Alexandra

    2015-04-01

    The identification of the structure of seismicity associated with active faults is of great significance, particularly for densely populated areas of Greece such as the Gulf of Corinth, SW Peloponnese and central Crete. Manual analysis of the seismicity recorded by the Hellenic Unified Seismological Network (HUSN) in recent years provides the opportunity to determine accurate hypocentral solutions using weighted P- and S-wave arrival times for these regions. The purpose is to perform precise event location and relative relocation so as to obtain the spatial distribution of the recorded seismicity with the needed resolution. In order to investigate the influence of the velocity model on the seismicity distribution and to find the most reliable hypocentral locations, different velocity models (both 1-D and 3-D) and location schemes are adopted and thoroughly tested. Initially, to test the models, the hypocentral locations, including the determination of the location uncertainties, are obtained by applying the non-linear location tool NonLinLoc. To approximate the likelihood function, the Equal Differential Time (EDT) formulation, which is much more robust in the presence of outliers, is selected. To locate the earthquakes, the Oct-tree search is used. Histograms of the RMS error, the spatial errors and the maximum half-axis (LEN3) of the 68% confidence ellipsoid are created. Moreover, the form of density scatterplots and the difference between maximum likelihood and expectation locations are taken into account. As an additional procedure, the travel-time residuals are examined separately for each station as a function of epicentral distance. Finally, several cross sections are constructed at various azimuths, and the spatial distribution of the earthquakes is evaluated and compared with the active fault structures. In order to highlight the activated faults, an additional relocation procedure is performed, using the double-difference algorithm HYPODD and incorporating the travel-time data of the best-fitting velocity models. The accurate determination of seismicity will play a key role in revealing the mechanisms that contribute to crustal deformation and active tectonics. Note: this research was funded by the ASPIDA project.

  14. A New, More Physically Based Algorithm, for Retrieving Aerosol Properties over Land from MODIS

    NASA Technical Reports Server (NTRS)

    Levy, Robert C.; Kaufman, Yoram J.; Remer, Lorraine A.; Mattoo, Shana

    2004-01-01

    The MODerate resolution Imaging Spectroradiometer (MODIS) has been successfully retrieving aerosol properties, beginning in early 2000 from Terra and in mid 2002 from Aqua. Over land, the retrieval algorithm makes use of three MODIS channels, at blue, red and infrared wavelengths. As part of the validation exercises, retrieved spectral aerosol optical thickness (AOT) has been compared via scatterplots against spectral AOT measured by the global AErosol RObotic NETwork (AERONET). On one hand, global and long-term validation looks promising, with two-thirds (average plus and minus one standard deviation) of all points falling between published expected error bars. On the other hand, regression of these points shows a positive y-offset and a slope less than 1.0. For individual regions, such as along the U.S. East Coast, the offset and slope are even worse. Here, we introduce an overhaul of the algorithm for retrieving aerosol properties over land. Some well-known weaknesses in the current aerosol retrieval from MODIS include: a) rigid assumptions about the underlying surface reflectance, b) limited aerosol models to choose from, c) simplified (scalar) radiative transfer (RT) calculations used to simulate satellite observations, and d) the assumption that aerosol is transparent in the infrared channel. The new algorithm attempts to address all four problems: a) The new algorithm will include surface type information, instead of fixed ratios of the reflectance in the visible channels to the mid-IR reflectance. b) It will include updated aerosol optical properties to reflect the growing body of aerosol retrievals from eight-plus years of AERONET operation. c) The effects of polarization will be included using vector RT calculations. d) Most importantly, the new algorithm does not assume that aerosol is transparent in the infrared channel. It will be an inversion of reflectance observed in the three channels (blue, red, and infrared), rather than iterative single-channel retrievals. Thus, this new formulation of the MODIS aerosol retrieval over land includes more physically based surface, aerosol and radiative transfer treatments with fewer potentially erroneous assumptions.

  15. Association between shelter crowding and incidence of sleep disturbance among disaster evacuees: a retrospective medical chart review study.

    PubMed

    Kawano, Takahisa; Nishiyama, Kei; Morita, Hiroshi; Yamamura, Osamu; Hiraide, Atsuchi; Hasegawa, Kohei

    2016-01-13

    We determined whether crowding at emergency shelters is associated with a higher incidence of sleep disturbance among disaster evacuees and identified the minimum required personal space at shelters. Retrospective review of medical charts. 30 shelter-based medical clinics in Ishinomaki, Japan, during the 46 days following the Great Eastern Japan Earthquake and Tsunami in 2011. Shelter residents who visited eligible clinics. Based on the result of a locally weighted scatter-plot smoothing technique assessing the relationship between the mean space per evacuee and cumulative incidence of sleep disturbance at the shelter, eligible shelters were classified into crowded and non-crowded shelters. The cumulative incidence per 1000 evacuees was compared between groups using a Mann-Whitney U test. To assess the association between shelter crowding and the daily incidence of sleep disturbance per 1000 evacuees, a quasi-least squares method adjusting for potential confounders was used. The 30 shelters were categorised as crowded (mean space per evacuee <5.0 m², 9 shelters) or non-crowded (≥5.0 m², 21 shelters). The study included 9031 patients. Among the eligible patients, 1079 patients (11.9%) were diagnosed with sleep disturbance. Mean space per evacuee during the study period was 3.3 m² (SD, 0.8 m²) at crowded shelters and 8.6 m² (SD, 4.3 m²) at non-crowded shelters. The median cumulative incidence of sleep disturbance did not differ between the crowded shelters (2.3/1000 person-days (IQR, 1.6-5.4)) and non-crowded shelters (1.9/1000 person-days (IQR, 1.0-2.8); p=0.20). In contrast, after adjusting for potential confounders, crowded shelters had an increased daily incidence of sleep disturbance (2.6 per 1000 person-days; 95% CI 0.2 to 5.0/1000 person-days, p=0.03) compared to that at non-crowded shelters. Crowding at shelters may exacerbate sleep disruptions in disaster evacuees; therefore, appropriate evacuation space requirements should be considered. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  16. A statistical learning framework for groundwater nitrate models of the Central Valley, California, USA

    USGS Publications Warehouse

    Nolan, Bernard T.; Fienen, Michael N.; Lorenz, David L.

    2015-01-01

    We used a statistical learning framework to evaluate the ability of three machine-learning methods to predict nitrate concentration in shallow groundwater of the Central Valley, California: boosted regression trees (BRT), artificial neural networks (ANN), and Bayesian networks (BN). Machine learning methods can learn complex patterns in the data but because of overfitting may not generalize well to new data. The statistical learning framework involves cross-validation (CV) training and testing data and a separate hold-out data set for model evaluation, with the goal of optimizing predictive performance by controlling for model overfit. The order of prediction performance according to both CV testing R2 and that for the hold-out data set was BRT > BN > ANN. For each method we identified two models based on CV testing results: that with maximum testing R2 and a version with R2 within one standard error of the maximum (the 1SE model). The former yielded CV training R2 values of 0.94–1.0. Cross-validation testing R2 values indicate predictive performance, and these were 0.22–0.39 for the maximum R2 models and 0.19–0.36 for the 1SE models. Evaluation with hold-out data suggested that the 1SE BRT and ANN models predicted better for an independent data set compared with the maximum R2 versions, which is relevant to extrapolation by mapping. Scatterplots of predicted vs. observed hold-out data obtained for final models helped identify prediction bias, which was fairly pronounced for ANN and BN. Lastly, the models were compared with multiple linear regression (MLR) and a previous random forest regression (RFR) model. Whereas BRT results were comparable to RFR, MLR had low hold-out R2 (0.07) and explained less than half the variation in the training data. Spatial patterns of predictions by the final, 1SE BRT model agreed reasonably well with previously observed patterns of nitrate occurrence in groundwater of the Central Valley.
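
    The one-standard-error (1SE) selection rule described here generalizes readily. Below is a sketch using a gradient-boosted tree model (sklearn's analogue of BRT) on synthetic data; the grid, data and seed are all illustrative.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the predictor/nitrate data.
rng = np.random.default_rng(6)
X = rng.normal(size=(300, 10))
y = X[:, 0] - 0.5 * X[:, 1] ** 2 + rng.normal(0, 1, 300)

# Score a small grid of model complexities with cross-validation.
depths = [1, 2, 3, 4, 5]
means, ses = [], []
for d in depths:
    scores = cross_val_score(GradientBoostingRegressor(max_depth=d, random_state=0),
                             X, y, cv=5, scoring="r2")
    means.append(scores.mean())
    ses.append(scores.std(ddof=1) / np.sqrt(len(scores)))

best = int(np.argmax(means))
# 1SE rule: the simplest model whose mean CV R2 is within one SE of the maximum.
one_se = next(i for i in range(len(depths)) if means[i] >= means[best] - ses[best])
print("max-R2 depth:", depths[best], "1SE depth:", depths[one_se])
```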

  17. Local indicators of geocoding accuracy (LIGA): theory and application

    PubMed Central

    Jacquez, Geoffrey M; Rommel, Robert

    2009-01-01

    Background Although sources of positional error in geographic locations (e.g. geocoding error) used for describing and modeling spatial patterns are widely acknowledged, research on how such error impacts the statistical results has been limited. In this paper we explore techniques for quantifying the perturbability of spatial weights to different specifications of positional error. Results We find that a family of curves describes the relationship between perturbability and positional error, and use these curves to evaluate sensitivity of alternative spatial weight specifications to positional error both globally (when all locations are considered simultaneously) and locally (to identify those locations that would benefit most from increased geocoding accuracy). We evaluate the approach in simulation studies, and demonstrate it using a case-control study of bladder cancer in south-eastern Michigan. Conclusion Three results are significant. First, the shape of the probability distributions of positional error (e.g. circular, elliptical, cross) has little impact on the perturbability of spatial weights, which instead depends on the mean positional error. Second, our methodology allows researchers to evaluate the sensitivity of spatial statistics to positional accuracy for specific geographies. This has substantial practical implications since it makes possible routine sensitivity analysis of spatial statistics to positional error arising in geocoded street addresses, global positioning systems, LIDAR and other geographic data. Third, those locations with high perturbability (most sensitive to positional error) and high leverage (that contribute the most to the spatial weight being considered) will benefit the most from increased positional accuracy. These are rapidly identified using a new visualization tool we call the LIGA scatterplot. Herein lies a paradox for spatial analysis: For a given level of positional error increasing sample density to more accurately follow the underlying population distribution increases perturbability and introduces error into the spatial weights matrix. In some studies positional error may not impact the statistical results, and in others it might invalidate the results. We therefore must understand the relationships between positional accuracy and the perturbability of the spatial weights in order to have confidence in a study's results. PMID:19863795

  18. Vcs.js - Visualization Control System for the Web

    NASA Astrophysics Data System (ADS)

    Chaudhary, A.; Lipsa, D.; Doutriaux, C.; Beezley, J. D.; Williams, D. N.; Fries, S.; Harris, M. B.

    2016-12-01

    VCS is a general purpose visualization library, optimized for climate data, which is part of the UV-CDAT system. It provides a Python API for drawing 2D plots such as line plots, scatterplots, Taylor diagrams, data colored by scalar values, vector glyphs, isocontours and map projections. VCS is based on the VTK library. Vcs.js is the corresponding JavaScript API, designed to be as close as possible to the original VCS Python API and to provide similar functionality for the Web. Vcs.js includes additional functionality when compared with VCS. This additional API is used to introspect data files available on the server and variables available in a data file. Vcs.js can display plots in the browser window. It always works with a server that reads a data file, extracts variables from the file and subsets the data. From this point, two alternate paths are possible. First, the system can render the data on the server using VCS, producing an image which is sent to the browser to be displayed. This path works for all plot types and produces a reference image identical to the images produced by VCS. It uses the VTK-Web library. As an optimization, usable in certain conditions, a second path is possible: data is packed and sent to the browser, which uses a JavaScript plotting library, such as plotly, to display it. Line plots and scatterplots work well in the browser for any data, as do many other plot types for small data and supported grid types. As web technology matures, more plot types could be supported for rendering in the browser. Rendering can be done either on the client or on the server, and we expect that the best place to render will change depending on the available web technology, data transfer costs, server management costs and value provided to users. We intend to provide a flexible solution that allows for both client- and server-side rendering and a meaningful way to choose between the two. We provide a web-based user interface called vCdat which uses Vcs.js as its visualization library. Our paper will discuss the principles guiding our design choices for Vcs.js, present our design in detail and show a sample usage of the library.

  19. Test of a new stable isotopic fingerprinting technique (i.e. Compound Specific Stable Isotope) in an Austrian sub-catchment to establish agricultural soil source contribution to deposited sediment

    NASA Astrophysics Data System (ADS)

    Mbaye, Modou; Mabit, Lionel; Gibbs, Max; Meusburger, Katrin; Toloza, Arsenio; Resch, Christian; Klik, Andreas; Swales, Andrew; Alewell, Christine

    2017-04-01

    In order to test and refine the use of compound-specific stable isotopes (CSSI) as a fingerprinting technique, an innovative study was conducted in a sub-catchment dominated by C3 plants located 60 km north of Vienna. This experimental site consists of 4 contributing sources (i.e. 3 agricultural fields and one grassed waterway) and one sediment mixture, in which the δ13C values of the bulk soil carbon and of various fatty acids (FAs) were analysed following a cost-effective sampling strategy. Bi-scatterplots of all possible combinations of δ13C FAs, including the bulk soil carbon δ13C, showed that bulk soil carbon δ13C is a strong discriminant relative to the other FAs. Moreover, bulk soil carbon δ13C values showed the largest difference between the four sources, and the δ13C values of C24 indicated significant differences for all sources, while δ13C of C22 did not exhibit a significant difference between the first two sources. An additional correlation analysis revealed that the strongest significant linear dependencies are between δ13C16 & δ13C18 > δ13C18 & δ13C24 > δ13C16 & δ13C24. Among the variables, the bulk soil carbon δ13C was found to be the least correlated parameter, confirming that it is the most reliable discriminator for determining the sediment origins in the mixture. To summarize, only the long-chain FAs (i.e. C22 and C24) as well as the bulk soil carbon δ13C fulfilled our multivariate statistical tests. These findings were confirmed by mixing polygon tests and Principal Component Analysis. Using three different mixing models (i.e. IsoSource, CSSIAR v1.0 and MIXSIAR), the contributions of the different sources to the mixture were evaluated. All models highlighted that the third source (a field with C3 and C4 plants in rotation) and the grassed waterway were the main contributing areas, representing 25-31% and 50-57% of the deposited sediment constituting the mixture, respectively.
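
    The mixing-model step can be illustrated with a toy mass balance. The sketch below uses made-up tracer values (not the study's data) and non-negative least squares with a sum-to-one constraint, a simpler stand-in for IsoSource, CSSIAR or MIXSIAR.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # rows = tracers (bulk soil d13C, d13C of C22 and C24 FAs); columns = 4 sources
    S = np.array([[-27.0, -25.5, -26.3, -24.0],
                  [-33.1, -31.8, -32.5, -30.9],
                  [-34.0, -32.2, -33.0, -31.1]])
    mix = np.array([-25.4, -31.9, -32.0])   # tracer signature of the sediment mixture

    # augment with the sum-to-one constraint, weighted heavily so nnls honours it
    w = 100.0
    A = np.vstack([S, w * np.ones(S.shape[1])])
    b = np.concatenate([mix, [w]])
    props, _ = nnls(A, b)                   # non-negative source proportions
    print("estimated source proportions:", np.round(props / props.sum(), 3))
    ```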

  20. A Novel Hybrid Data-Driven Model for Daily Land Surface Temperature Forecasting Using Long Short-Term Memory Neural Network Based on Ensemble Empirical Mode Decomposition

    PubMed Central

    Zhang, Xike; Zhang, Qiuwen; Zhang, Gui; Nie, Zhiping; Gui, Zifan; Que, Huafei

    2018-01-01

    Daily land surface temperature (LST) forecasting is of great significance for applications in climate-related, agricultural, eco-environmental, or industrial studies. Hybrid data-driven prediction models using Ensemble Empirical Mode Decomposition (EEMD) coupled with Machine Learning (ML) algorithms are useful for these purposes because they reduce the difficulty of modeling, require less historical data, are easy to develop, and are less complex than physical models. In this article, a computationally simple, less data-intensive, fast and efficient novel hybrid data-driven model called the EEMD Long Short-Term Memory (LSTM) neural network, namely EEMD-LSTM, is proposed to reduce the difficulty of modeling and to improve prediction accuracy. The daily LST data series from the Mapoling and Zhijiang stations in the Dongting Lake basin, central south China, from 1 January 2014 to 31 December 2016, is used as a case study. The EEMD is first employed to decompose the original daily LST data series into many Intrinsic Mode Functions (IMFs) and a single residue item. Then, the Partial Autocorrelation Function (PACF) is used to obtain the number of input data sample points for the LSTM models. Next, the LSTM models are constructed to predict the decompositions. All the predicted results of the decompositions are aggregated as the final daily LST. Finally, the prediction performance of the hybrid EEMD-LSTM model is assessed in terms of the Mean Square Error (MSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), Root Mean Square Error (RMSE), Pearson Correlation Coefficient (CC) and Nash-Sutcliffe Coefficient of Efficiency (NSCE). To validate the hybrid data-driven model, the hybrid EEMD-LSTM model is compared with the Recurrent Neural Network (RNN), LSTM, Empirical Mode Decomposition (EMD) coupled with RNN (EMD-RNN), EMD-LSTM and EEMD-RNN models, and the comparison results demonstrate that the hybrid EEMD-LSTM model performs better than the other five models. The scatterplots of the predicted results of the six models versus the original daily LST data series show that the hybrid EEMD-LSTM model is superior to the other five models. It is concluded that the proposed hybrid EEMD-LSTM model is a suitable tool for temperature forecasting. PMID:29883381
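
    The pipeline described (decompose, choose lags via PACF, train one LSTM per component, aggregate) can be sketched as follows, assuming the PyEMD (EMD-signal), statsmodels and TensorFlow packages and a synthetic series in place of the station data.

    ```python
    import numpy as np
    import tensorflow as tf
    from PyEMD import EEMD
    from statsmodels.tsa.stattools import pacf

    rng = np.random.default_rng(0)
    t = np.arange(1096)                                       # three years of days
    lst = 15 + 10 * np.sin(2 * np.pi * t / 365.25) + rng.normal(0, 1, t.size)

    comps = EEMD(trials=20).eemd(lst)     # IMFs; their sum approximates the signal

    def make_xy(series, lag):
        X = np.stack([series[i:i + lag] for i in range(len(series) - lag)])
        return X[..., None], series[lag:]

    preds = []
    for comp in comps:
        # pick the largest lag whose PACF exceeds a crude 95% band (default: 1)
        p = pacf(comp, nlags=20)
        sig = np.nonzero(np.abs(p[1:]) > 1.96 / np.sqrt(len(comp)))[0]
        lag = int(sig.max()) + 1 if sig.size else 1
        X, y = make_xy(comp, lag)
        model = tf.keras.Sequential([tf.keras.Input(shape=(lag, 1)),
                                     tf.keras.layers.LSTM(16),
                                     tf.keras.layers.Dense(1)])
        model.compile(optimizer="adam", loss="mse")
        model.fit(X[:-100], y[:-100], epochs=5, verbose=0)    # hold out last 100 days
        preds.append(model.predict(X[-100:], verbose=0).ravel())

    lst_hat = np.sum(preds, axis=0)       # aggregate the component forecasts
    print("test RMSE:", np.sqrt(np.mean((lst_hat - lst[-100:]) ** 2)))
    ```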

  1. Evaluation of the MODIS Aerosol Retrievals over Ocean and Land during CLAMS.

    NASA Astrophysics Data System (ADS)

    Levy, R. C.; Remer, L. A.; Martins, J. V.; Kaufman, Y. J.; Plana-Fattori, A.; Redemann, J.; Wenny, B.

    2005-04-01

    The Chesapeake Lighthouse Aircraft Measurements for Satellites (CLAMS) experiment took place from 10 July to 2 August 2001 in a combined ocean-land region that included the Chesapeake Lighthouse [Clouds and the Earth's Radiant Energy System (CERES) Ocean Validation Experiment (COVE)] and the Wallops Flight Facility (WFF), both along coastal Virginia. This experiment was designed mainly for validating instruments and algorithms aboard the Terra satellite platform, including the Moderate Resolution Imaging Spectroradiometer (MODIS). Over the ocean, MODIS retrieved aerosol optical depths (AODs) at seven wavelengths and an estimate of the aerosol size distribution. Over the land, MODIS retrieved AOD at three wavelengths plus qualitative estimates of the aerosol size. Temporally coincident measurements of aerosol properties were made with a variety of sun photometers from ground sites and airborne sites just above the surface. The set of sun photometers provided unprecedented spectral coverage from visible (VIS) to the solar near-infrared (NIR) and infrared (IR) wavelengths. In this study, AOD and aerosol size retrieved from MODIS are compared with similar measurements from the sun photometers. Over the nearby ocean, the MODIS AOD in the VIS and NIR correlated well with sun-photometer measurements, nearly fitting a one-to-one line on a scatterplot. As one moves from ocean to land, there is a pronounced discontinuity in the MODIS AOD, where MODIS compares poorly to the sun-photometer measurements. Especially at the blue wavelength, MODIS AOD is too high in clean aerosol conditions and too low under larger aerosol loadings. Using the Second Simulation of the Satellite Signal in the Solar Spectrum (6S) radiative code to perform atmospheric correction, the authors find inconsistencies in the surface albedo assumptions used by the MODIS lookup tables. It is demonstrated how the high bias at low aerosol loadings can be corrected. By using an updated urban/industrial aerosol climatology for the MODIS lookup table over land, it is shown that the low bias at larger aerosol loadings can also be corrected. Understanding and improving MODIS retrievals over the East Coast may point to strategies for correction in other locations, thus improving the global quality of MODIS. Improvements in regional aerosol detection could also lead to the use of MODIS for monitoring air pollution.
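
    Validation against sun photometers of the kind described is, at its core, a comparison against the one-to-one line. A minimal sketch with synthetic AOD values (not CLAMS data):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    aod_sunphot = rng.gamma(2.0, 0.08, 300)          # "truth" from sun photometers
    aod_modis = 0.9 * aod_sunphot + 0.02 + rng.normal(0, 0.02, 300)  # biased retrieval

    bias = np.mean(aod_modis - aod_sunphot)
    rmse = np.sqrt(np.mean((aod_modis - aod_sunphot) ** 2))
    slope, intercept = np.polyfit(aod_sunphot, aod_modis, 1)
    r = np.corrcoef(aod_sunphot, aod_modis)[0, 1]
    # a perfect retrieval would give slope=1, intercept=0: points on the 1:1 line
    print(f"bias={bias:+.3f}  rmse={rmse:.3f}  fit: y={slope:.2f}x+{intercept:.3f}  r={r:.2f}")
    ```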

  2. Time trend in the impact of heat waves on daily mortality in Spain for a period of over thirty years (1983-2013).

    PubMed

    Díaz, J; Carmona, R; Mirón, I J; Luna, M Y; Linares, C

    2018-07-01

    Many of the studies that analyze the future impact of climate change on mortality assume that the temperature that constitutes a heat wave will not change over time. This is unlikely, however, given the process of adapting to heat, prevention plans, and improvements in social and health infrastructure. The objective of this study is to analyze whether, during the 1983-2013 period, there has been a temporal change in the maximum daily temperatures that constitute a heat wave (Tthreshold) in Spain, and to investigate whether there has been variation in the attributable risk (AR) associated with mortality due to high temperatures in this period. This study uses daily mortality data for natural causes except accidents (ICD-10: A00-R99) in municipalities of over 10,000 inhabitants in 10 Spanish provinces, and maximum temperature data from observatories located in the province capitals. The time series is divided into three periods: 1983-1992, 1993-2003 and 2004-2013. For each period and each province, the value of Tthreshold was calculated using scatterplot diagrams of the pre-whitened daily mortality series. For each period and each province capital, the number of heat waves was calculated and their impact on mortality was quantified using generalized linear models (GLMs) with the Poisson link, yielding relative risks (RR) and attributable risks (AR). Global RR and AR for the 10 provinces combined were then calculated via meta-analysis. The results show that in the first two periods RR remained constant, RR: 1.14 (95% CI: 1.09-1.19) and RR: 1.14 (95% CI: 1.10-1.18), while the third period shows a sharp decrease with respect to the prior two periods, RR: 1.01 (95% CI: 1.00-1.01); the difference is statistically significant. In Spain there has been a sharp decrease in mortality attributable to heat over the past 10 years. The observed variation in RR calls into question the results of the numerous studies that analyze the future impact of heat on mortality in different temporal scenarios while assuming it to be constant over time. Copyright © 2018 Elsevier Ltd. All rights reserved.
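
    The RR estimation step can be illustrated with a small Poisson GLM. The sketch below uses simulated counts and an assumed threshold temperature; the RR per 1°C above the threshold is exp(beta).

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    tmax = rng.normal(30, 5, 1000)                   # daily maximum temperature (°C)
    heat = np.clip(tmax - 36.0, 0, None)             # degrees above an assumed Tthreshold
    deaths = rng.poisson(np.exp(np.log(20) + 0.13 * heat))   # simulated daily deaths

    X = sm.add_constant(heat)
    fit = sm.GLM(deaths, X, family=sm.families.Poisson()).fit()
    rr = np.exp(fit.params[1])                       # relative risk per 1 °C
    lo, hi = np.exp(fit.conf_int()[1])
    print(f"RR per 1 degC above threshold: {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
    ```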

  3. Psychosocial family factors and glycemic control among children aged 1-15 years with type 1 diabetes: a population-based survey.

    PubMed

    Haugstvedt, Anne; Wentzel-Larsen, Tore; Rokne, Berit; Graue, Marit

    2011-12-20

    Being the parents of children with diabetes is demanding. Jay Belsky's determinants-of-parenting model emphasizes personal psychological resources, the characteristics of the child, and contextual sources such as parents' work, marital relations and social network support as important determinants of parenting. To better understand the factors influencing parental functioning among parents of children with type 1 diabetes, we aimed to investigate associations between the children's glycated hemoglobin (HbA1c) and 1) variables related to the parents' psychological and contextual resources, and 2) the frequency of blood glucose measurement as a marker for diabetes-related parenting behavior. Mothers (n = 103) and fathers (n = 97) of 115 children younger than 16 years old participated in a population-based survey. The questionnaire comprised the Life Orientation Test, the Oslo 3-item Social Support Scale, a single question regarding perceived social limitation because of the child's diabetes, the Relationship Satisfaction Scale, and demographic and clinical variables. We investigated associations using regression analysis. Related to the second aim, hypoglycemic events, child age, diabetes duration, insulin regimen and comorbid diseases were included as covariates. The mean HbA1c was 8.1%, and 29% had HbA1c ≤ 7.5%. In multiple regression analysis, lower HbA1c was associated with higher education and stronger perceptions of social limitation among the mothers. A higher frequency of blood glucose measurement was significantly associated with lower HbA1c in bivariate analysis. Higher child age was significantly associated with higher HbA1c in both bivariate and multivariate analysis. A scatterplot indicated this association to be linear. Most families do not reach the recommended treatment goals for their child with type 1 diabetes. Concerning contextual sources of stress and support, the families who successfully reached the treatment goals had mothers with higher education who experienced a higher degree of social limitation because of the child's diabetes. The continuous increase in HbA1c with age, including during the years before puberty, may indicate a need to further explore the associations between child characteristics, context-related variables and parenting behavior, such as factors facilitating the transfer of parents' responsibility and motivation for continued frequent treatment tasks to their growing children.

  4. A Novel Hybrid Data-Driven Model for Daily Land Surface Temperature Forecasting Using Long Short-Term Memory Neural Network Based on Ensemble Empirical Mode Decomposition.

    PubMed

    Zhang, Xike; Zhang, Qiuwen; Zhang, Gui; Nie, Zhiping; Gui, Zifan; Que, Huafei

    2018-05-21

    Daily land surface temperature (LST) forecasting is of great significance for applications in climate-related, agricultural, eco-environmental, or industrial studies. Hybrid data-driven prediction models using Ensemble Empirical Mode Decomposition (EEMD) coupled with Machine Learning (ML) algorithms are useful for these purposes because they reduce the difficulty of modeling, require less historical data, are easy to develop, and are less complex than physical models. In this article, a computationally simple, less data-intensive, fast and efficient novel hybrid data-driven model called the EEMD Long Short-Term Memory (LSTM) neural network, namely EEMD-LSTM, is proposed to reduce the difficulty of modeling and to improve prediction accuracy. The daily LST data series from the Mapoling and Zhijiang stations in the Dongting Lake basin, central south China, from 1 January 2014 to 31 December 2016, is used as a case study. The EEMD is first employed to decompose the original daily LST data series into many Intrinsic Mode Functions (IMFs) and a single residue item. Then, the Partial Autocorrelation Function (PACF) is used to obtain the number of input data sample points for the LSTM models. Next, the LSTM models are constructed to predict the decompositions. All the predicted results of the decompositions are aggregated as the final daily LST. Finally, the prediction performance of the hybrid EEMD-LSTM model is assessed in terms of the Mean Square Error (MSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), Root Mean Square Error (RMSE), Pearson Correlation Coefficient (CC) and Nash-Sutcliffe Coefficient of Efficiency (NSCE). To validate the hybrid data-driven model, the hybrid EEMD-LSTM model is compared with the Recurrent Neural Network (RNN), LSTM, Empirical Mode Decomposition (EMD) coupled with RNN (EMD-RNN), EMD-LSTM and EEMD-RNN models, and the comparison results demonstrate that the hybrid EEMD-LSTM model performs better than the other five models. The scatterplots of the predicted results of the six models versus the original daily LST data series show that the hybrid EEMD-LSTM model is superior to the other five models. It is concluded that the proposed hybrid EEMD-LSTM model is a suitable tool for temperature forecasting.

  5. Optical space weathering on Vesta: Radiative-transfer models and Dawn observations

    NASA Astrophysics Data System (ADS)

    Blewett, David T.; Denevi, Brett W.; Le Corre, Lucille; Reddy, Vishnu; Schröder, Stefan E.; Pieters, Carle M.; Tosi, Federico; Zambon, Francesca; De Sanctis, Maria Cristina; Ammannito, Eleonora; Roatsch, Thomas; Raymond, Carol A.; Russell, Christopher T.

    2016-02-01

    Exposure to ion and micrometeoroid bombardment in the space environment causes physical and chemical changes in the surface of an airless planetary body. These changes, called space weathering, can strongly influence a surface's optical characteristics, and hence complicate interpretation of composition from reflectance spectroscopy. Prior work using data from the Dawn spacecraft (Pieters, C.M. et al. [2012]. Nature 491, 79-82) found that accumulation of nanophase metallic iron (npFe0), which is a key space-weathering product on the Moon, does not appear to be important on Vesta, and instead regolith evolution is dominated by mixing with carbonaceous chondrite (CC) material delivered by impacts. In order to gain further insight into the nature of space weathering on Vesta, we constructed model reflectance spectra using Hapke's radiative-transfer theory and used them as an aid to understanding multispectral observations obtained by Dawn's Framing Cameras (FC). The model spectra, for a howardite mineral assemblage, include both the effects of npFe0 and that of a mixed CC component. We found that a plot of the 438-nm/555-nm ratio vs. the 555-nm reflectance for the model spectra helps to separate the effects of lunar-style space weathering (LSSW) from those of CC-mixing. We then constructed ratio-reflectance pixel scatterplots using FC images for four areas of contrasting composition: a eucritic area at Vibidia crater, a diogenitic area near Antonia crater, olivine-bearing material within Bellicia crater, and a light mantle unit (referred to as an 'orange patch' in some previous studies, based on its steep spectral slope in the visible) northeast of Oppia crater. In these four cases the observed spectral trends are those expected from CC-mixing, with no evidence for weathering dominated by production of npFe0. In order to survey a wider range of surfaces, we also defined a spectral parameter that is a function of the change in 438-nm/555-nm ratio and the 555-nm reflectance between fresh and mature surfaces, permitting the spectral change to be classified as LSSW-like or CC-mixing-like. When applied to 21 fresh and mature FC spectral pairs, it was found that none have changes consistent with LSSW. We discuss Vesta's lack of LSSW in relation to the possible agents of space weathering, the effects of physical and compositional differences among asteroid surfaces, and the possible role of magnetic shielding from the solar wind.
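
    The fresh-versus-mature classification can be reduced to the signs of the two spectral changes. The sketch below encodes the simplifying assumption that LSSW darkens and reddens a surface (both the 555-nm reflectance and the 438-nm/555-nm ratio drop), while CC-mixing darkens it with little ratio change; the numbers and tolerance are illustrative, not Dawn FC values.

    ```python
    def classify_change(d_refl_555, d_ratio_438_555, ratio_tol=0.01):
        """Classify a mature-minus-fresh spectral change (toy rule, see lead-in)."""
        if d_refl_555 < 0 and d_ratio_438_555 < -ratio_tol:
            return "LSSW-like"                 # darkening plus reddening
        if d_refl_555 < 0 and abs(d_ratio_438_555) <= ratio_tol:
            return "CC-mixing-like"            # darkening, ratio roughly flat
        return "other"

    print(classify_change(-0.05, -0.03))   # darkened and reddened -> LSSW-like
    print(classify_change(-0.05, 0.002))   # darkened, ratio flat  -> CC-mixing-like
    ```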

  6. Short-term association between environmental factors and hospital admissions due to dementia in Madrid.

    PubMed

    Linares, C; Culqui, D; Carmona, R; Ortiz, C; Díaz, J

    2017-01-01

    Spain has one of the highest proportions of dementia in the world among the population aged 60 years or over. Recent studies link various environmental factors to neurocognitive-type diseases. This study sought to analyse whether urban risk factors such as traffic noise, pollutants and heat waves might have a short-term impact on exacerbation of symptoms of dementia, leading to emergency hospital admission. We conducted a longitudinal ecological time-series study, with the dependent variable being the number of daily dementia-related emergency (DDE) hospital admissions to Madrid municipal hospitals (ICD-9 codes 290.0-290.2, 290.4-290.9, 294.1-294) from 01-01-2001 to 31-12-2009, as obtained from the Hospital Morbidity Survey (National Statistics Institute). The measures used were as follows: for noise pollution, Leqd, the equivalent diurnal noise level (from 8 to 22h), and Leqn, the equivalent nocturnal noise level (from 22 to 8h), in dB(A); for chemical pollution, mean daily NO2, PM2.5 and PM1, as provided by the Madrid Municipal Air Quality Monitoring Grid; and lastly, maximum daily temperature (°C), as supplied by the State Meteorological Agency. Scatterplot diagrams were plotted to assess the type of functional relationship existing between the main variable of analysis and the environmental variables. Lags of the environmental variables were calculated to analyse the timing of the effect. Poisson regression models were fitted, controlling for trends and seasonalities, to quantify relative risk (RR). During the study period, there were 1175 DDE hospital admissions. These admissions displayed a linear functional relationship, without a threshold, in the case of Leqd. The RR of DDE admissions was 1.15 (1.11-1.20) for an increase of 1 dB in Leqd, with impact at lag 0. In the case of maximum daily temperature, there was a threshold temperature of 34°C, with an increase of 1°C over this threshold yielding an RR of 1.19 (1.09-1.30) at lag 1. The only pollutant to show an association with DDE hospital admissions was O3 at lag 5, with an RR of 1.09 (1.04-1.15) for an increase of 10 µg/m3. CONCLUSIONS: Diurnal traffic noise, heat waves and tropospheric ozone may exacerbate the symptoms of dementia to the point of requiring emergency admission to hospital. Lowering exposure levels to these environmental factors could reduce dementia-related admissions in Madrid. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Descriptive Statistics: Reporting the Answers to the 5 Basic Questions of Who, What, Why, When, Where, and a Sixth, So What?

    PubMed

    Vetter, Thomas R

    2017-11-01

    Descriptive statistics are methods used to calculate, describe, and summarize collected research data in a logical, meaningful, and efficient way. Descriptive statistics are reported numerically in the manuscript text and/or in its tables, or graphically in its figures. This basic statistical tutorial discusses a series of fundamental concepts about descriptive statistics and their reporting. The mean, median, and mode are 3 measures of the center or central tendency of a set of data. In addition to a measure of its central tendency (mean, median, or mode), another important characteristic of a research data set is its variability or dispersion (ie, spread). In simplest terms, variability is how much the individual recorded scores or observed values differ from one another. The range, standard deviation, and interquartile range are 3 measures of variability or dispersion. The standard deviation is typically reported for a mean, and the interquartile range for a median. Testing for statistical significance, along with calculating the observed treatment effect (or the strength of the association between an exposure and an outcome) and generating a corresponding confidence interval, are 3 tools commonly used by researchers (and their collaborating biostatistician or epidemiologist) to validly draw inferences and more generalized conclusions from their collected data and descriptive statistics. A number of journals, including Anesthesia & Analgesia, strongly encourage or require the reporting of pertinent confidence intervals. A confidence interval can be calculated for virtually any variable or outcome measure in an experimental, quasi-experimental, or observational research study design. Generally speaking, in a clinical trial, the confidence interval is the range of values within which the true treatment effect in the population likely resides. In an observational study, the confidence interval is the range of values within which the true strength of the association between the exposure and the outcome (eg, the risk ratio or odds ratio) in the population likely resides. There are many possible ways to graphically display or illustrate different types of data. While there is often latitude as to the choice of format, ultimately, the simplest and most comprehensible format is preferred. Common examples include a histogram, bar chart, line chart or line graph, pie chart, scatterplot, and box-and-whisker plot. Valid and reliable descriptive statistics can answer basic yet important questions about a research data set, namely: "Who, What, Why, When, Where, How, How Much?"
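
    A minimal worked example of the quantities the tutorial covers (central tendency, dispersion, and a 95% confidence interval for a mean), using an arbitrary sample:

    ```python
    import numpy as np
    from scipy import stats

    x = np.array([4.1, 4.7, 5.2, 5.2, 5.6, 5.9, 6.3, 6.4, 7.0, 9.8])

    mean, median = x.mean(), np.median(x)
    mode = stats.mode(x, keepdims=False).mode          # most frequent value (5.2)
    sd = x.std(ddof=1)                                 # sample standard deviation
    iqr = stats.iqr(x)                                 # interquartile range
    data_range = x.max() - x.min()
    ci = stats.t.interval(0.95, df=len(x) - 1, loc=mean, scale=stats.sem(x))
    print(f"mean={mean:.2f} median={median:.2f} mode={mode} sd={sd:.2f} "
          f"IQR={iqr:.2f} range={data_range:.2f} 95% CI=({ci[0]:.2f}, {ci[1]:.2f})")
    ```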

  8. Alcohol-Attributable Fraction in Liver Disease: Does GDP Per Capita Matter?

    PubMed

    Kröner, Paul T; Mankal, Pavan Kumar; Dalapathi, Vijay; Shroff, Kavin; Abed, Jean; Kotler, Donald P

    2015-01-01

    The alcohol-attributable fraction (AAF) quantifies alcohol's disease burden. Alcoholic liver disease (ALD) is influenced by alcohol consumption per capita, duration, gender, ethnicity, and other comorbidities. In this study, we investigated the association between AAF/alcohol-related liver mortality and alcohol consumption per capita, while stratifying by per-capita gross domestic product (GDP). Data on AAF in liver disease, per-capita alcohol consumption (L/y), and per-capita GDP (USD/y), obtained for both genders from the World Health Organization and the World Bank, were used to conduct a cross-sectional study. Countries were classified as "high-income" or "very low income" if their respective per-capita GDP was greater than $30,000 or less than $1,000. Differences in total alcohol consumption per capita and AAF were assessed using a 2-sample t test. Scatterplots were generated to supplement the Pearson correlation coefficients, and an F test was conducted to assess for differences in the variance of ALD between high-income and very low income countries. Twenty-six and 27 countries met the criteria for high-income and very low income countries, respectively. Alcohol consumption per capita was higher in high-income countries. AAF and alcohol consumption per capita for both genders in high-income and very low income countries had a positive correlation. The F test yielded an F value of 1.44 with P = .357. No statistically significant correlation was found among alcohol types and AAF. Significantly higher mortality from ALD was found in very low income countries relative to high-income countries. Previous studies had noted a decreased AAF in low-income countries as compared to higher-income countries. This study, however, found no statistically significant difference between the AAF variances of low-income and high-income countries. A possible explanation is that both high-income and low-income populations consume enough alcohol, irrespective of type, to yield equivalent AAFs. No significant difference in AAF variance was found between high-income and very low income countries relating to sex-specific alcohol consumption per capita. Alcohol consumption per capita was greater in high-income countries. Type of preferred alcohol did not correlate with AAF. ALD-related mortality was lower in high-income countries as a result of better-developed healthcare systems. ALD remains a significant burden globally, requiring prevention from socioeconomic, medical, and political realms. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
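
    The comparisons named (two-sample t test, Pearson correlation, F test on variances) are easily reproduced on synthetic data; the numbers below are illustrative, not the study's.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    cons_hi = rng.normal(10.0, 2.0, 26)     # litres/year, "high-income" countries
    cons_lo = rng.normal(6.0, 2.0, 27)      # "very low income" countries
    aaf_hi = 0.02 * cons_hi + rng.normal(0, 0.01, 26)
    aaf_lo = 0.02 * cons_lo + rng.normal(0, 0.01, 27)

    t, p = stats.ttest_ind(cons_hi, cons_lo)            # consumption difference
    r_hi, p_r = stats.pearsonr(cons_hi, aaf_hi)         # consumption vs. AAF
    f = aaf_hi.var(ddof=1) / aaf_lo.var(ddof=1)         # variance ratio
    p_f = 2 * min(stats.f.sf(f, 25, 26), stats.f.cdf(f, 25, 26))   # two-sided F test
    print(f"t={t:.2f} (p={p:.3g})  r_high_income={r_hi:.2f}  F={f:.2f} (p={p_f:.3f})")
    ```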

  9. Clinic Blood Pressure Underestimates Ambulatory Blood Pressure in an Untreated Employer-Based US Population: Results From the Masked Hypertension Study.

    PubMed

    Schwartz, Joseph E; Burg, Matthew M; Shimbo, Daichi; Broderick, Joan E; Stone, Arthur A; Ishikawa, Joji; Sloan, Richard; Yurgel, Tyla; Grossman, Steven; Pickering, Thomas G

    2016-12-06

    Ambulatory blood pressure (ABP) is consistently superior to clinic blood pressure (CBP) as a predictor of cardiovascular morbidity and mortality risk. A common perception is that ABP is usually lower than CBP. The relationship of the CBP minus ABP difference to age has not been examined in the United States. Between 2005 and 2012, 888 healthy, employed, middle-aged (mean±SD age, 45±10.4 years) individuals (59% female, 7.4% black, 12% Hispanic) with screening BP <160/105 mm Hg and not taking antihypertensive medication completed 3 separate clinic BP assessments and a 24-hour ABP recording for the Masked Hypertension Study. The distributions of CBP, mean awake ABP (aABP), and the CBP-aABP difference in the full sample and by demographic characteristics were compared. Locally weighted scatterplot smoothing was used to model the relationship of the BP measures to age and body mass index. The prevalence of discrepancies between ABP- and CBP-defined hypertension status (white-coat hypertension and masked hypertension) was also examined. Average systolic/diastolic aABP (123.0/77.4±10.3/7.4 mm Hg) was significantly higher than the average of 9 CBP readings over 3 visits (116.0/75.4±11.6/7.7 mm Hg). aABP exceeded CBP by >10 mm Hg much more frequently than CBP exceeded aABP. The difference (aABP>CBP) was most pronounced in young adults and those with normal body mass index. The systolic difference progressively diminished, but did not disappear, at older ages and higher body mass indexes. The diastolic difference vanished around age 65 and reversed (CBP>aABP) for body mass index >32.5 kg/m2. Whereas 5.3% of participants were hypertensive by CBP, 19.2% were hypertensive by aABP; 15.7% of those with nonelevated CBP had masked hypertension. Contrary to a widely held belief, based primarily on cohort studies of patients with elevated CBP, ABP is not usually lower than CBP, at least not among healthy, employed individuals. Furthermore, a substantial proportion of otherwise healthy individuals with nonelevated CBP have masked hypertension. Demonstrated CBP-aABP gradients, if confirmed in representative samples (eg, NHANES [National Health and Nutrition Examination Survey]), could provide guidance for primary care physicians as to when, for a given CBP, 24-hour ABP would be useful to identify or rule out masked hypertension. © 2016 American Heart Association, Inc.
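
    The locally weighted scatterplot smoothing step can be sketched as follows, with simulated ages and differences standing in for the cohort; the assumed age trend simply mimics the shrinking systolic difference the abstract reports.

    ```python
    import numpy as np
    from statsmodels.nonparametric.smoothers_lowess import lowess

    rng = np.random.default_rng(4)
    age = rng.uniform(25, 70, 888)
    # assume the aABP-CBP difference shrinks with age, as the abstract reports
    diff = 8.0 - 0.1 * (age - 25) + rng.normal(0, 4, age.size)

    smooth = lowess(diff, age, frac=0.4)     # returns (age, smoothed diff), sorted
    for a in (30, 45, 60):
        i = np.argmin(np.abs(smooth[:, 0] - a))
        print(f"age {a}: smoothed aABP-CBP difference ~ {smooth[i, 1]:.1f} mm Hg")
    ```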

  10. The iron-responsive microsomal proteome of Aspergillus fumigatus.

    PubMed

    Moloney, Nicola M; Owens, Rebecca A; Meleady, Paula; Henry, Michael; Dolan, Stephen K; Mulvihill, Eoin; Clynes, Martin; Doyle, Sean

    2016-03-16

    Aspergillus fumigatus is an opportunistic fungal pathogen. Siderophore biosynthesis and iron acquisition are essential for virulence, yet limited data exist with respect to the adaptive nature of the fungal microsomal proteome under iron-limiting growth conditions, as encountered during host infection. Here, we demonstrate that under siderophore biosynthetic conditions (significantly elevated fusarinine C (FSC) and triacetylfusarinine C (TAFC) production; p<0.0001), extensive microsomal proteome remodelling occurs. Specifically, a four-fold enrichment of transmembrane-containing proteins was observed with respect to whole-cell lysates following ultracentrifugation-based microsomal extraction. Comparative label-free proteomic analysis of microsomal extracts, isolated following iron-replete and -deplete growth, identified 710 unique proteins. Scatterplot analysis (MaxQuant) demonstrated high correlation amongst biological replicates from each growth condition (Pearson correlation >0.96 within groups; biological replicates (n=4)). Quantitative and qualitative comparison revealed 231 proteins with a significant change in abundance between the iron-replete and iron-deplete conditions (p<0.05, fold change ≥ 2), with 96 proteins showing increased abundance and 135 showing decreased abundance following iron limitation, including predicted siderophore transporters. Fluorescently labelled FSC was only sequestered following A. fumigatus growth under iron-limiting conditions. Interestingly, human sera exhibited significantly increased reactivity (p<0.0001) against microsomal protein extracts obtained following iron-deplete growth. The opportunistic fungal pathogen Aspergillus fumigatus must acquire iron to facilitate growth and pathogenicity. Iron-chelating non-ribosomal peptides, termed siderophores, mediate iron uptake via membrane-localised transporter proteins. Here we demonstrate for the first time that growth of A. fumigatus under iron-deplete conditions, concomitant with siderophore biosynthesis, leads to an extensive remodelling of the microsomal proteome which includes significantly altered levels of 231 constituent proteins (96 increased and 135 decreased in abundance), many of which have not previously been localised to the microsome. We also demonstrate the first synthesis of a fluorescent version of fusarinine C, an extracellular A. fumigatus siderophore, and its uptake and localization under iron-restricted conditions. This suggests that an A. fumigatus siderophore could serve as a 'Trojan horse' to potentiate the efficacy of anti-fungal drugs. Finally, in addition to revealing Aspergillus-specific IgG reactivity in normal human sera against microsomal proteins, there appears to be significantly increased reactivity against microsomal proteins obtained following iron-restricted growth. We hypothesise that the iron-limiting environment in humans, which has evolved to nutritionally limit pathogen growth in vivo, may also alter the fungal microsomal proteome. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Rapid intensity and velocity variations in solar transition region lines

    NASA Astrophysics Data System (ADS)

    Hansteen, V. H.; Betta, R.; Carlsson, M.

    2000-08-01

    We have obtained short-exposure (3 s) time series of strong upper chromospheric and transition region emission lines from the quiet Sun with the SUMER instrument onboard SOHO during two 1-hour periods in 1996. With a Nyquist frequency of 167 mHz and relatively high count rates, the dataset is uniquely suited for searching for high-frequency variations in intensity and Doppler velocity. From Monte-Carlo experiments taking into account the photon-counting statistics, we estimate our detection limit to correspond to a wave-packet of four periods coherent over 3'' with a Doppler-shift amplitude of 2.5 km s-1 in the darkest internetwork areas observed in C III. In the network the detection limit is estimated to be 1.5 km s-1. Above 50 mHz we detect wave-packet amplitudes above 3 km s-1 less than 0.5% of the time. Between 20 and 50 mHz we detect some wave-packets with a typical duration of four periods and amplitudes up to 8 km s-1. At any given internetwork location these wave-packets are present 1% of the time. In the 10-20 mHz range we see amplitudes above 3 km s-1 12% of the time. At lower frequencies our dataset is consistent with other SUMER datasets reported in the literature. The chromospheric 3-7 mHz signal is discernible in the line emission. In the internetwork this is the dominant oscillation frequency, but higher frequencies (7-10 mHz) are often present and appear coherent in Doppler velocity over large spatial regions (≈ 40''). Wavelet analysis implies that these oscillations have typical durations of 1000 s. The network emission also shows a 5 mHz signal but is dominated by low-frequency variations (< 4 mHz) in both intensity and velocity. The oscillations show less power in intensity than in velocity. We find that while both redshifted and blueshifted emission is observed, the transition region lines are on average redshifted by 5-10 km s-1 in the network. A net redshift is also found in the internetwork emission, but it is smaller (< 4 km s-1). The line widths do not differ much between the internetwork and network; the non-thermal line widths increase with increasing temperature of line formation, from 30 km s-1 for the C II 1334 Å line to 45 km s-1 for the O VI 1032 Å line. By constructing scatterplots of velocity versus intensity we find that in the network a mean redshift is correlated with a high mean intensity. In the internetwork regions we do not find any correlation between the intensity and the Doppler velocity.

  12. Association between contact hip stress and RSA-measured wear rates in total hip arthroplasties of 31 patients.

    PubMed

    The, Bertram; Hosman, Anton; Kootstra, Johan; Kralj-Iglic, Veronika; Flivik, Gunnar; Verdonschot, Nico; Diercks, Ron

    2008-01-01

    The main long-term concern with total hip replacements is aseptic loosening of the prosthesis. Optimizing the biomechanics of the hip joint is necessary for long-term success. A widely implementable tool to predict the biomechanical consequences of preoperatively planned reconstructions still has to be developed. A potentially useful model for this purpose has been developed previously. The aim of this study is to quantify the association between the hip joint contact force estimated by this biomechanical model and RSA-measured wear rates in a clinical setting. Thirty-one patients with a total hip replacement were measured with RSA, the gold standard for clinical wear measurements. The reference examination was done within 1 week of the operation and the follow-up examinations were done at 1, 2 and 5 years. Conventional pelvic X-rays were taken on the same day. The contact stress distribution in the hip joint was determined with the computer program HIPSTRESS. The procedure for determining the hip joint contact stress distribution is based on the mathematical model of the resultant hip force in the one-legged stance and the mathematical model of the contact stress distribution. The model for the force requires as input several geometrical parameters of the hip and the body weight, while the model for stress requires as input the magnitude and direction of the resultant hip force. The stress distribution is summarized by the peak stress, p(max), the maximal value of stress on the weight-bearing area, and by the peak stress normalized to body weight, p(max)/W(B), which captures the effect of hip geometry. The relations between the values predicted by the model and the wear at different points in the follow-up were visualized using scatterplots. Correlations were expressed as Pearson r values. The predicted p(max) and wear were clearly correlated in the first year post-operatively (r = 0.58, p = 0.002), while this correlation was weaker after 2 years (r = 0.19, p = 0.337) and 5 years (r = 0.24, p = 0.235). The wear values at 1, 2 and 5 years post-operatively correlate with each other in the way that is expected considering the wear velocity curve of the whole group. The correlation between the predicted p(max) values of two observers who were blinded to each other's results was very good (r = 0.93, p < 0.001). We conclude that the biomechanical model used in this paper provides a scientific foundation for the development of a new way of constructing preoperative biomechanical plans for total hip replacements.

  13. Adjustment of Pesticide Concentrations for Temporal Changes in Analytical Recovery, 1992-2006

    USGS Publications Warehouse

    Martin, Jeffrey D.; Stone, Wesley W.; Wydoski, Duane S.; Sandstrom, Mark W.

    2009-01-01

    Recovery is the proportion of a target analyte that is quantified by an analytical method and is a primary indicator of the analytical bias of a measurement. Recovery is measured by analysis of quality-control (QC) water samples that have known amounts of target analytes added ('spiked' QC samples). For pesticides, recovery is the measured amount of pesticide in the spiked QC sample expressed as percentage of the amount spiked, ideally 100 percent. Temporal changes in recovery have the potential to adversely affect time-trend analysis of pesticide concentrations by introducing trends in environmental concentrations that are caused by trends in performance of the analytical method rather than by trends in pesticide use or other environmental conditions. This report examines temporal changes in the recovery of 44 pesticides and 8 pesticide degradates (hereafter referred to as 'pesticides') that were selected for a national analysis of time trends in pesticide concentrations in streams. Water samples were analyzed for these pesticides from 1992 to 2006 by gas chromatography/mass spectrometry. Recovery was measured by analysis of pesticide-spiked QC water samples. Temporal changes in pesticide recovery were investigated by calculating robust, locally weighted scatterplot smooths (lowess smooths) for the time series of pesticide recoveries in 5,132 laboratory reagent spikes; 1,234 stream-water matrix spikes; and 863 groundwater matrix spikes. A 10-percent smoothing window was selected to show broad, 6- to 12-month time scale changes in recovery for most of the 52 pesticides. Temporal patterns in recovery were similar (in phase) for laboratory reagent spikes and for matrix spikes for most pesticides. In-phase temporal changes among spike types support the hypothesis that temporal change in method performance is the primary cause of temporal change in recovery. Although temporal patterns of recovery were in phase for most pesticides, recovery in matrix spikes was greater than recovery in reagent spikes for nearly every pesticide. Models of recovery based on matrix spikes are deemed more appropriate for adjusting concentrations of pesticides measured in groundwater and stream-water samples than models based on laboratory reagent spikes because (1) matrix spikes are expected to more closely match the matrix of environmental water samples than are reagent spikes and (2) method performance is often matrix dependent, as was shown by higher recovery in matrix spikes for most of the pesticides. Models of recovery, based on lowess smooths of matrix spikes, were developed separately for groundwater and stream-water samples. The models of recovery can be used to adjust concentrations of pesticides measured in groundwater or stream-water samples to 100 percent recovery to compensate for temporal changes in the performance (bias) of the analytical method.
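
    The adjustment procedure amounts to smoothing recoveries over time and dividing each measured concentration by the smoothed recovery at its sample date. A minimal sketch with synthetic recoveries and the report's 10-percent smoothing window:

    ```python
    import numpy as np
    from statsmodels.nonparametric.smoothers_lowess import lowess

    rng = np.random.default_rng(5)
    t_spike = np.sort(rng.uniform(1992, 2006, 400))          # matrix-spike dates
    recovery = 85 + 10 * np.sin((t_spike - 1992) / 3) + rng.normal(0, 5, 400)  # %

    model = lowess(recovery, t_spike, frac=0.10)             # 10-percent window

    def adjust(conc, t_sample):
        """Scale a measured concentration to 100% recovery at its sample date."""
        rec = np.interp(t_sample, model[:, 0], model[:, 1])
        return conc * 100.0 / rec

    print(f"adjusted: {adjust(0.12, 1998.5):.4f} ug/L (measured 0.12)")
    ```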

  14. Susceptibility patterns for amoxicillin/clavulanate tests mimicking the licensed formulations and pharmacokinetic relationships: do the MIC obtained with 2:1 ratio testing accurately reflect activity against beta-lactamase-producing strains of Haemophilus influenzae and Moraxella catarrhalis?

    PubMed

    Pottumarthy, Sudha; Sader, Helio S; Fritsche, Thomas R; Jones, Ronald N

    2005-11-01

    Amoxicillin/clavulanate has recently undergone formulation changes (XR and ES-600) that represent 14:1 and 16:1 ratios of amoxicillin/clavulanate. These ratios greatly differ from the 2:1 ratio used in initial formulations and in vitro susceptibility testing. The objective of this study was to determine if the reference method using a 2:1 ratio accurately reflects the susceptibility to the various clinically used amoxicillin/clavulanate formulations and their respective serum concentration ratios. A collection of 330 Haemophilus influenzae strains (300 beta-lactamase-positive and 30 beta-lactamase-negative) and 40 Moraxella catarrhalis strains (30 beta-lactamase-positive and 10 beta-lactamase-negative) were tested by the broth microdilution method against eight amoxicillin/clavulanate combinations (4:1, 5:1, 7:1, 9:1, 14:1, and 16:1 ratios; 0.5 and 2 microg/mL fixed clavulanate concentrations), and the minimum inhibitory concentration (MIC) results were compared with those obtained with the reference 2:1 ratio testing. For the beta-lactamase-negative strains of both genera, there was no demonstrable change in the MIC values obtained across all ratios analyzed (2:1 to 16:1). For the beta-lactamase-positive strains of H. influenzae and M. catarrhalis, at ratios ≥4:1 there was a shift in the central tendency of the MIC scatterplot compared with the results of 2:1 ratio testing. As a result, there was a 2-fold dilution increase in the MIC(50) and MIC(90) values, most evident for H. influenzae and BRO-1-producing M. catarrhalis strains. For beta-lactamase-positive strains of H. influenzae, the shift resulted in a change in the interpretive result for 3 isolates (1.0%) from susceptible using the reference method (2:1 ratio) to resistant (8/4 microg/mL; very major error) at the 16:1 ratio. In addition, the number of isolates with MIC values at or 1 dilution lower than the breakpoint (4/2 microg/mL) increased from 5% at the 2:1 ratio to 32-33% at the 14:1 and 16:1 ratios. Our results indicate that, for the beta-lactamase-positive strains of H. influenzae and M. catarrhalis, the results of the amoxicillin/clavulanate reference 2:1 ratio testing do not accurately represent all the currently licensed formulations. Pharmacokinetic/pharmacodynamic (PK/PD) target attainment might be compromised when higher amoxicillin/clavulanate ratios are used clinically. With a better understanding of PK/PD parameters, reevaluation of the amoxicillin/clavulanate in vitro susceptibility testing should be considered by the standardizing authorities to reflect the licensed formulations and accurately predict clinical outcomes.

  15. Accounting for the relationship between per diem cost and LOS when estimating hospitalization costs.

    PubMed

    Ishak, K Jack; Stolar, Marilyn; Hu, Ming-yi; Alvarez, Piedad; Wang, Yamei; Getsios, Denis; Williams, Gregory C

    2012-12-01

    Hospitalization costs in clinical trials are typically derived by multiplying the length of stay (LOS) by an average per-diem (PD) cost from external sources. This assumes that PD costs are independent of LOS. Resource utilization in the early days of the stay is usually more intense, however, and thus the PD cost for a short hospitalization may be higher than for longer stays. The shape of this relationship is unlikely to be linear, as PD costs would be expected to gradually plateau. This paper describes how to model the relationship between PD cost and LOS using flexible statistical modelling techniques. An example based on a clinical study of clevidipine for the treatment of peri-operative hypertension during hospitalizations for cardiac surgery is used to illustrate how inferences about cost-savings associated with good blood pressure (BP) control during the stay can be affected by the approach used to derive hospitalization costs. Data on the cost and LOS of hospitalizations for coronary artery bypass grafting (CABG) from the Massachusetts Acute Hospital Case Mix Database (the MA Case Mix Database) were analyzed to link LOS to PD cost, factoring in complications that may have occurred during the hospitalization or post-discharge. The shape of the relationship between LOS and PD costs in the MA Case Mix was explored graphically in a regression framework. A series of statistical models, from those based on a simple logarithmic transformation of LOS to more flexible models using LOcally wEighted Scatterplot Smoothing (LOESS) techniques, were considered. A final model was selected using simplicity and parsimony as guiding principles in addition to traditional fit statistics (like Akaike's Information Criterion, or AIC). This mapping was applied in ECLIPSE to predict an LOS-specific PD cost, and then a total cost of hospitalization. These were then compared for patients who had good vs. poor peri-operative blood-pressure control. The MA Case Mix dataset included data from over 10,000 patients. Visual inspection of PD vs. LOS revealed a non-linear relationship. A logarithmic model and a series of LOESS and piecewise-linear models with varying connection points were tested. The logarithmic model was ultimately favoured for its fit and simplicity. Using this mapping in the ECLIPSE trials, we found that good peri-operative BP control was associated with a cost savings of $5,366 when costs were derived using the mapping, compared with savings of $7,666 obtained using the traditional approach of calculating the cost. PD costs vary systematically with LOS, with short stays being associated with high PD costs that drop gradually and level off. The shape of the relationship may differ in other settings. It is important to assess this and model the observed pattern, as this may have an impact on conclusions based on derived hospitalization costs.
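
    The modelling choice the paper describes, a logarithmic PD-cost model checked against a LOESS smooth, can be sketched with synthetic data; deriving a total cost then means multiplying LOS by the LOS-specific PD cost rather than by a flat average.

    ```python
    import numpy as np
    from statsmodels.nonparametric.smoothers_lowess import lowess

    rng = np.random.default_rng(6)
    los = rng.integers(1, 30, 2000).astype(float)                     # length of stay
    pd_cost = 6000 - 1500 * np.log(los) + rng.normal(0, 400, los.size)  # plateaus

    b, a = np.polyfit(np.log(los), pd_cost, 1)     # logarithmic model: PD = a + b*log(LOS)
    smooth = lowess(pd_cost, los, frac=0.3)        # LOESS check on the same data

    def total_cost(days):
        return days * (a + b * np.log(days))       # LOS-specific PD cost times LOS

    print(f"log model: PD = {a:.0f} {b:+.0f}*log(LOS)")
    print(f"total cost, 3-day stay:  ${total_cost(3):,.0f}")
    print(f"total cost, 10-day stay: ${total_cost(10):,.0f}")
    ```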

  16. A Comparison of Self-Reported and Objective Physical Activity Measures in Young Australian Women.

    PubMed

    Hartley, Stefanie; Garland, Suzanne; Young, Elisa; Bennell, Kim Louise; Tay, Ilona; Gorelik, Alexandra; Wark, John Dennis

    2015-01-01

    The evidence for beneficial effects of recommended levels of physical activity is overwhelming. However, 70% of Australians fail to meet these levels. In particular, physical activity participation by women falls sharply between ages 16 to 25 years. Further information about physical activity measures in young women is needed. Self-administered questionnaires are often used to measure physical activity given their ease of application, but known limitations, including recall bias, compromise the accuracy of the data. Alternatives such as objective measures are commonly used to overcome this problem, but are more costly and time consuming. The objectives were to compare the output of the Modified Active Australia Survey (MAAS), the International Physical Activity Questionnaire (IPAQ), and an objective physical activity measure, the SenseWear Armband (SWA); to evaluate the test-retest reliability of the MAAS; and to determine the acceptability of the SWA among young women. Young women from Victoria, Australia, aged 18 to 25 years, who had participated in previous studies, were recruited via Facebook advertising. Participants completed the two physical activity questionnaires online, immediately before and after wearing the armband for 7 consecutive days. Data from the SWA were blocked into 10-minute activity periods. Follow-up IPAQ, MAAS, and SWA data were analyzed by comparing the total continuous and categorical activity scores, while the concurrent validity of the IPAQ and MAAS was analyzed by comparing follow-up scores. Test-retest reliability of the MAAS was analyzed by comparing MAAS total physical activity scores at baseline and follow-up. Participants provided feedback in the follow-up questionnaire about their experience of wearing the armband, to determine the acceptability of the SWA. Data analyses included graphical (ie, Bland-Altman plot, scatterplot) and analytical (ie, canonical correlation, kappa statistic) methods to determine agreement between MAAS, IPAQ, and SWA data. A total of 58 participants returned complete data. Comparisons between the MAAS and IPAQ questionnaires (n=52) showed moderate agreement for both categorical (kappa=.48, P<.001) and continuous data (r=.69, P<.001). Overall, the IPAQ tended to give higher scores. No significant correlation was observed between SWA and IPAQ or MAAS continuous data, for both minute-by-minute and blocked SWA data. The SWA tended to record lower scores than the questionnaires, suggesting participants tended to overreport their amount of physical activity. The test-retest analysis of the MAAS showed moderate agreement for continuous outcomes (r=.44, P=.001); however, poor agreement was seen for categorical outcomes. The acceptability of the SWA to participants was high. Moderate agreement between the MAAS and IPAQ and moderate reliability of the MAAS indicate that the MAAS may be a suitable alternative to the IPAQ for assessing total physical activity in young women, due to its shorter length and consequently lower participant burden. The SWA, and likely other monitoring devices, have the advantage over questionnaires of avoiding overreporting of self-reported physical activity, while being highly acceptable to participants.
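
    Two of the agreement analyses named, Bland-Altman limits of agreement for continuous scores and a kappa statistic for activity categories, can be sketched with simulated data; the category cut-points are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    rng = np.random.default_rng(7)
    swa = rng.gamma(4, 200, 58)                       # armband activity score
    maas = swa * 1.3 + rng.normal(0, 150, 58)         # questionnaire over-reports

    diff = maas - swa
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)                     # Bland-Altman limits of agreement
    print(f"Bland-Altman: bias={bias:.0f}, LoA={bias - loa:.0f}..{bias + loa:.0f}")

    def category(x):
        return np.digitize(x, [400, 800])             # low / moderate / high activity

    print("kappa:", round(cohen_kappa_score(category(maas), category(swa)), 2))
    ```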

  17. GTest: a software tool for graphical assessment of empirical distributions' Gaussianity.

    PubMed

    Barca, E; Bruno, E; Bruno, D E; Passarella, G

    2016-03-01

    In the present paper the novel software GTest is introduced, designed for testing the normality of a user-specified empirical distribution. It has been implemented with two unusual characteristics: the first is the user option of selecting four different versions of the normality test, each suited to a specific dataset or goal, and the second is the inferential paradigm that informs the output of such tests, which is basically graphical and intrinsically self-explanatory. The concept of inference-by-eye is an emerging inferential approach which should find successful application in the near future, owing to the growing need to widen the audience of users of statistical methods to people with informal statistical skills. For instance, the latest European regulations concerning environmental issues introduced strict protocols for data handling (data quality assurance, outlier detection, etc.) and information exchange (areal statistics, trend detection, etc.) between regional and central environmental agencies. Therefore, more and more frequently, laboratory and field technicians will be asked to subject data coming from monitoring, surveying or laboratory activities to specific statistical analyses using complex software applications. Unfortunately, inferential statistics, which actually influence the decisional processes for the correct management of environmental resources, are often implemented in a way which expresses outcomes in numerical form with brief comments in strict statistical jargon (degrees of freedom, level of significance, accepted/rejected H0, etc.). The interpretation of such outcomes is therefore often difficult for people with limited statistical knowledge. In this framework, the paradigm of visual inference can help fill the gap, providing outcomes in self-explanatory graphical form with a brief comment in common language. The difficulties experienced by colleagues, and their requests for an effective tool to address them, motivated us to adopt the inference-by-eye paradigm and implement an easy-to-use, quick and reliable statistical tool. GTest visualizes its outcomes as a modified version of the Q-Q plot. The application has been developed in Visual Basic for Applications (VBA) within MS Excel 2010, which proved to have all the robustness and reliability needed. GTest provides true graphical normality tests which are as reliable as any quantitative statistical approach but much easier to understand. The Q-Q plots have been integrated with the outlining of an acceptance region around the representation of the theoretical distribution, defined in accordance with the alpha level of significance and the data sample size. The test decision rule is the following: if the empirical scatterplot falls completely within the acceptance region, then it can be concluded that the empirical distribution fits the theoretical one at the given alpha level. A comprehensive case study has been carried out with simulated and real-world data in order to check the robustness and reliability of the software.
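
    One standard way to build such an acceptance region (not necessarily GTest's exact construction) uses the fact that the i-th order statistic of n uniform variates follows a Beta(i, n-i+1) distribution, whose quantiles map through the normal quantile function onto the Q-Q plot:

    ```python
    import numpy as np
    from scipy import stats

    def points_outside_envelope(x, alpha=0.05):
        """Count sample quantiles falling outside a pointwise normal Q-Q band."""
        n = len(x)
        i = np.arange(1, n + 1)
        # the i-th order statistic of n uniforms is Beta(i, n-i+1); map its
        # alpha/2 and 1-alpha/2 quantiles through the normal quantile function
        lo = stats.norm.ppf(stats.beta.ppf(alpha / 2, i, n - i + 1))
        hi = stats.norm.ppf(stats.beta.ppf(1 - alpha / 2, i, n - i + 1))
        z = np.sort((x - x.mean()) / x.std(ddof=1))   # standardized, ordered sample
        return int(np.sum((z < lo) | (z > hi)))

    rng = np.random.default_rng(8)
    print("normal sample, points outside:", points_outside_envelope(rng.normal(5, 2, 200)))
    print("skewed sample, points outside:", points_outside_envelope(rng.exponential(1.0, 200)))
    ```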

  18. Accounting for the relationship between per diem cost and LOS when estimating hospitalization costs

    PubMed Central

    2012-01-01

    Background Hospitalization costs in clinical trials are typically derived by multiplying the length of stay (LOS) by an average per-diem (PD) cost from external sources. This assumes that PD costs are independent of LOS. Resource utilization in the early days of the stay is usually more intense, however, and thus the PD cost for a short hospitalization may be higher than for longer stays. The shape of this relationship is unlikely to be linear, as PD costs would be expected to gradually plateau. This paper describes how to model the relationship between PD cost and LOS using flexible statistical modelling techniques. Methods An example based on a clinical study of clevidipine for the treatment of peri-operative hypertension during hospitalizations for cardiac surgery is used to illustrate how inferences about cost-savings associated with good blood pressure (BP) control during the stay can be affected by the approach used to derive hospitalization costs. Data on the cost and LOS of hospitalizations for coronary artery bypass grafting (CABG) from the Massachusetts Acute Hospital Case Mix Database (the MA Case Mix Database) were analyzed to link LOS to PD cost, factoring in complications that may have occurred during the hospitalization or post-discharge. The shape of the relationship between LOS and PD costs in the MA Case Mix was explored graphically in a regression framework. A series of statistical models, from those based on a simple logarithmic transformation of LOS to more flexible models using LOcally wEighted Scatterplot Smoothing (LOESS) techniques, were considered. A final model was selected using simplicity and parsimony as guiding principles in addition to traditional fit statistics (like Akaike’s Information Criterion, or AIC). This mapping was applied in ECLIPSE to predict an LOS-specific PD cost, and then a total cost of hospitalization. These were then compared for patients who had good vs. poor peri-operative blood-pressure control. Results The MA Case Mix dataset included data from over 10,000 patients. Visual inspection of PD vs. LOS revealed a non-linear relationship. A logarithmic model and a series of LOESS and piecewise-linear models with varying connection points were tested. The logarithmic model was ultimately favoured for its fit and simplicity. Using this mapping in the ECLIPSE trials, we found that good peri-operative BP control was associated with a cost savings of $5,366 when costs were derived using the mapping, compared with savings of $7,666 obtained using the traditional approach of calculating the cost. Conclusions PD costs vary systematically with LOS, with short stays being associated with high PD costs that drop gradually and level off. The shape of the relationship may differ in other settings. It is important to assess this and model the observed pattern, as this may have an impact on conclusions based on derived hospitalization costs. PMID:23198908

  19. Estimated Probability of a Cervical Spine Injury During an ISS Mission

    NASA Technical Reports Server (NTRS)

    Brooker, John E.; Weaver, Aaron S.; Myers, Jerry G.

    2013-01-01

    Introduction: The Integrated Medical Model (IMM) utilizes historical data, cohort data, and external simulations as input factors to provide estimates of crew health, resource utilization, and mission outcomes. The Cervical Spine Injury Module (CSIM) is an external simulation designed to provide the IMM with parameter estimates for 1) a probability distribution function (PDF) of the incidence rate, 2) the mean incidence rate, and 3) the standard deviation associated with the mean, resulting from injury/trauma of the neck. Methods: An injury mechanism based on an idealized low-velocity blunt impact to the superior posterior thorax of an ISS crewmember was used as the simulated mission environment. As a result of this impact, the cervical spine is inertially loaded by the mass of the head, producing an extension-flexion motion that deforms the soft tissues of the neck. A multibody biomechanical model was developed to estimate the kinematic and dynamic response of the head-neck system from a prescribed acceleration profile. Logistic regression was performed on a dataset containing AIS1 soft tissue neck injuries from rear-end automobile collisions with published Neck Injury Criterion values, producing an injury transfer function (ITF). An injury event scenario (IES) was constructed in which crew member 1, moving through a primary or standard translation path while transferring large-volume equipment, impacts stationary crew member 2. The incidence rate for this IES was estimated from in-flight data and used to calculate the probability of occurrence. The uncertainties in the model input factors were estimated from representative datasets and expressed as probability distributions. A Monte Carlo method using simple random sampling was employed to propagate both aleatory and epistemic uncertainty. Scatterplots and partial correlation coefficients (PCC) were generated to determine input factor sensitivity. CSIM was developed in the SimMechanics/Simulink environment, with a Monte Carlo wrapper (MATLAB) used to integrate the components of the module. Results: The probability of generating an AIS1 soft tissue neck injury from the extension/flexion motion induced by a low-velocity blunt impact to the superior posterior thorax was fitted with a lognormal PDF with mean 0.26409, standard deviation 0.11353, standard error of the mean 0.00114, and 95% confidence interval [0.26186, 0.26631]. The combination of the probability of an AIS1 injury with the probability of IES occurrence was fitted with a Johnson SI PDF with mean 0.02772, standard deviation 0.02012, standard error of the mean 0.00020, and 95% confidence interval [0.02733, 0.02812]. The input factor sensitivity, in descending order, was IES incidence rate, ITF regression coefficient 1, impactor initial velocity, and ITF regression coefficient 2, with all others (equipment mass, crew 1 body mass, crew 2 body mass) insignificant. Verification and Validation (V&V): The IMM V&V process, based upon NASA STD 7009, was implemented, including an assessment of the datasets used to build CSIM. The documentation maintained includes source code comments and a technical report. The software code and documentation are under Subversion configuration management. Kinematic validation was performed by comparing the biomechanical model output to established corridors.
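
    A stripped-down analogue of CSIM's uncertainty propagation can be written in a few lines of Python. Every distribution and coefficient below is an invented placeholder, and a simple momentum proxy replaces the multibody NIC computation; the sketch only shows the pattern of simple random sampling, a logistic injury transfer function, a lognormal fit to the output, and a crude sensitivity ranking (rank correlations instead of the module's partial correlation coefficients).

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n_trials = 100_000

        # Hypothetical stand-ins for CSIM's uncertain input factors; the real
        # module draws them from distributions fitted to representative data.
        impact_velocity = rng.normal(1.5, 0.3, n_trials)    # m/s
        equipment_mass = rng.normal(20.0, 2.0, n_trials)    # kg
        b0 = rng.normal(-6.0, 0.5, n_trials)                # ITF coefficient 1
        b1 = rng.normal(2.5, 0.4, n_trials)                 # ITF coefficient 2

        # Simplified injury transfer function: logistic regression on a
        # NIC-like response, here just proportional to momentum.
        nic_proxy = impact_velocity * equipment_mass / 10.0
        p_injury = 1.0 / (1.0 + np.exp(-(b0 + b1 * nic_proxy)))

        # Fit a lognormal PDF to the Monte Carlo output, as the module does.
        shape, loc, scale = stats.lognorm.fit(p_injury, floc=0)
        print(f"mean {p_injury.mean():.5f}, sd {p_injury.std(ddof=1):.5f}")
        print(f"lognormal shape={shape:.3f}, scale={scale:.3f}")

        # Crude sensitivity ranking via rank correlation of inputs vs output.
        for name, v in [("velocity", impact_velocity),
                        ("mass", equipment_mass), ("b1", b1)]:
            rho, _ = stats.spearmanr(v, p_injury)
            print(f"{name:>8}: rho = {rho:+.2f}")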

  20. Regional Distribution of Metals and C and N Stable Isotopes in the Epiphytic Ball Moss (Tillandsia recurvata) at the Mezquital Valley, Hidalgo State

    NASA Astrophysics Data System (ADS)

    Zambrano-Garcia, A.; López-Veneroni, D.; Rojas, A.; Torres, A.; Sosa, G.

    2007-05-01

    As part of the MILAGRO Field Campaign 2006, the influence of anthropogenic sources on metal air pollution in the Mezquital Valley, Hidalgo State, was explored by biomonitoring techniques. This valley is a major industrial-agricultural area located in central Mexico. An oil refinery, an electrical power plant, several cement plants with open-pit mines, and intensive wastewater-based agricultural areas, all within a 50-km radius, are some of the most important local sources of particulate air pollution. The concentrations of 25 metals and elements were determined by ICP-AES (EPA 610C method) for triplicate composite samples of the "ball moss" (T. recurvata) collected at 50 sites. In addition, the ratios of two stable isotopes (13C/12C and 15N/14N) were determined by continuous-flow isotope-ratio mass spectrometry in order to assess their potential as tracers for industrial emissions. Preliminary results showed high to very high average contents of several metals in the biomonitor compared with values from similar studies in other world regions, indicating a high degree of local air pollution. In contrast, most samples had Ag, As, Be, Se, and Tl contents below detection levels (DL = 0.05 mg/kg of sample dry weight), indicating low levels of pollution by these metals. Metals such as Al, Ba, Ca, Fe, Li, Mo, Ni, Sr, Ti, V, and Zn were most concentrated in the southern portion of the valley, where the Tepeji-Tula-Apaxco industrial corridor is located. A transect parallel to the along-wind direction (N-S) showed higher concentrations of metals farther away from the sources than a cross-wind transect, which is consistent with the eolian transport of metal-enriched particles. Regional distribution maps of metals in the biomonitor showed that Al, Ba, Fe, Mo, Ni, Sr, Ti, and V had higher levels at the industrial sampling sites, whereas K, Na, and P were more abundant near agricultural areas. Vanadium, a common element in crude oil, best reflected the influence of the local oil refinery and the oil-fueled power plant. Two distinct Ni:V scatterplot trends suggest that there are two main petrogenic emission sources in the region. Calcium and, to some extent, Mg were higher near the mining areas and a calcium carbonate factory. Lead had a diffuse distribution, probably related to former gasoline vehicle exhaust emissions rather than to current emissions. Antimony was more abundant at sites far from agricultural and industrial areas, which suggests a natural origin (rocks or soils). The spatial distribution of the stable isotopes also showed distinct patterns near the industrial sources, with relatively 13C-depleted and 15N-enriched values near the oil refinery and the electrical power plant. Although it is not yet possible to provide quantitative estimates of emission contributions per source type, biomonitoring with T. recurvata provided for the first time a clear picture of the relative deposition patterns of several airborne metals in the Mezquital Valley.
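
    The suggestion that two Ni:V scatterplot trends reflect two petrogenic sources could be checked, for example, with a two-component mixture model. The sketch below, on entirely synthetic Ni and V contents, uses a Gaussian mixture in (V, Ni) space as a simple stand-in for a mixture-of-regressions model; it is not the analysis used in the study.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        # Hypothetical Ni and V contents (mg/kg) mimicking two petrogenic
        # sources with different Ni:V ratios.
        v1 = rng.uniform(5, 60, 80); ni1 = 0.15 * v1 + rng.normal(0, 0.6, 80)
        v2 = rng.uniform(5, 60, 80); ni2 = 0.45 * v2 + rng.normal(0, 1.2, 80)
        V, Ni = np.concatenate([v1, v2]), np.concatenate([ni1, ni2])

        # Two-component Gaussian mixture in (V, Ni) space separates the
        # samples into the two scatterplot trends; a line per cluster then
        # estimates each source's Ni:V ratio.
        labels = GaussianMixture(n_components=2, random_state=0).fit_predict(
            np.column_stack([V, Ni]))
        for k in range(2):
            slope, intercept = np.polyfit(V[labels == k], Ni[labels == k], 1)
            print(f"trend {k}: Ni ~ {slope:.2f} * V + {intercept:.2f}")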

  1. Usability of multiangular imaging spectroscopy data for analysis of vegetation canopy shadow fraction in boreal forest

    NASA Astrophysics Data System (ADS)

    Markiet, Vincent; Perheentupa, Viljami; Mõttus, Matti; Hernández-Clemente, Rocío

    2016-04-01

    Imaging spectroscopy is a remote sensing technology that records continuous spectral data at very high (better than 10 nm) spectral resolution. Such spectral images can be used to monitor, for example, the photosynthetic activity of vegetation. Photosynthetic activity depends on light conditions, which vary within the canopy. To measure this variation we need very high spatial resolution data, with resolution finer than the dominant canopy element size (e.g., the tree crown in a forest canopy). This is useful, e.g., for detecting photosynthetic downregulation and thus plant stress. Canopy illumination conditions are often quantified using the shadow fraction: the fraction of visible foliage which is not sunlit. Shadow fraction is known to depend on view angle (e.g., hot spot images have a very low shadow fraction). Hence, multiple observation angles potentially increase the range of shadow fraction present in high spatial resolution imaging spectroscopy data. To investigate the potential of multi-angle imaging spectroscopy for studying canopy processes which vary with shadow fraction, we obtained a unique multiangular airborne imaging spectroscopy dataset for the Hyytiälä forest research station located in Finland (61° 50'N, 24° 17'E) in July 2015. The main tree species are Norway spruce (Picea abies L. Karst.), Scots pine (Pinus sylvestris L.) and birch (Betula pubescens Ehrh., Betula pendula Roth). We used an airborne hyperspectral sensor, AISA Eagle II (Specim, Spectral Imaging Ltd., Finland), mounted on a tilting platform. The tilting platform allowed us to measure at nadir and approximately 35 degrees off-nadir. The sensor has a 37.5-degree field of view (FOV), 0.6 m pixel size, and 128 spectral bands with an average bandwidth of 4.6 nm, and is sensitive in the 400-1000 nm spectral region. The airborne data were radiometrically, atmospherically, and geometrically processed using the PARGE and ATCOR software (ReSe Applications Schläpfer, Switzerland). However, even after meticulous geolocation, the canopy elements (needles) seen from the three view angles were different: at each overpass, different parts of the same crowns were observed. To overcome this, we used a 200 m x 200 m test site covered with pure pine stands. We assumed that the sunlit, shaded, and understory spectral signatures are independent of viewing direction up to a constant BRDF factor. Thus, we compared the spectral signatures of sunlit and shaded canopy and understory obtained for each view direction. We visually selected six hundred of the brightest and darkest canopy pixels. Next, we performed a minimum noise fraction (MNF) transformation, created a pixel purity index (PPI), and used ENVI's n-D scatterplot to determine pure spectral signatures for the two classes. The pure endmembers for the different view angles were compared to determine the BRDF factor and to analyze its spectral invariance. We demonstrate the compatibility of multi-angle data with high spatial resolution data: in principle, both carry similar information on structured (non-flat) targets such as a vegetation canopy. Multiple view angles helped us to extend the range of shadow fraction in the images. However, correct separation of shaded crown and shaded understory pixels remains a challenge.
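
    Of the endmember-extraction chain described (MNF, PPI, n-D scatterplot), the pixel purity index is simple enough to sketch. The fragment below is a simplified PPI on synthetic spectra, omitting the MNF noise-whitening step that would normally precede it; it illustrates the classic "skewer" procedure rather than ENVI's exact implementation.

        import numpy as np

        def pixel_purity_index(spectra, n_proj=5000, seed=0):
            """Simplified PPI: project mean-centred pixel spectra onto random
            unit vectors ('skewers') and count how often each pixel lands at
            an extreme; high counts flag candidate pure endmembers."""
            rng = np.random.default_rng(seed)
            X = spectra - spectra.mean(axis=0)      # (n_pixels, n_bands)
            counts = np.zeros(X.shape[0], dtype=int)
            for _ in range(n_proj):
                u = rng.standard_normal(X.shape[1])
                u /= np.linalg.norm(u)
                proj = X @ u
                counts[np.argmax(proj)] += 1
                counts[np.argmin(proj)] += 1
            return counts

        # Hypothetical usage: 600 candidate sunlit/shaded pixels, 128 bands.
        spectra = np.random.default_rng(1).random((600, 128))
        ppi = pixel_purity_index(spectra)
        print("candidate endmember pixels:", np.argsort(ppi)[-5:])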

  2. Statistical Approaches to Interpretation of Local, Regional, and National Highway-Runoff and Urban-Stormwater Data

    USGS Publications Warehouse

    Tasker, Gary D.; Granato, Gregory E.

    2000-01-01

    Decision makers need viable methods for the interpretation of local, regional, and national highway-runoff and urban-stormwater data, including flows, concentrations, and loads of chemical constituents and sediment, potential effects on receiving waters, and the potential effectiveness of various best management practices (BMPs). Valid (useful for intended purposes), current, and technically defensible stormwater-runoff models are needed to interpret data collected in field studies, to support existing highway and urban-runoff planning processes, to meet National Pollutant Discharge Elimination System (NPDES) requirements, and to provide methods for computing Total Maximum Daily Loads (TMDLs) systematically and economically. Historically, conceptual, simulation, empirical, and statistical models of varying levels of detail, complexity, and uncertainty have been used to meet various data-quality objectives in the decision-making processes necessary for the planning, design, construction, and maintenance of highways and for other land-use applications. Water-quality simulation models attempt a detailed representation of the physical processes and mechanisms at a given site. Empirical and statistical regional water-quality assessment models provide a more general picture of water quality or changes in water quality over a region. All these modeling techniques share one common aspect: their predictive ability is poor without suitable site-specific data for calibration. To apply the correct model properly, one must understand the classification of variables, the unique characteristics of water-resources data, and the concept of population structure and analysis. Classifying the variables used to analyze data may determine which statistical methods are appropriate for data analysis. An understanding of the characteristics of water-resources data is necessary to evaluate the applicability of different statistical methods, to interpret the results of these techniques, and to use tools and techniques that account for the unique nature of water-resources data sets. Populations of data on stormwater-runoff quantity and quality are often best modeled after logarithmic transformation. These factors therefore need to be considered to form valid, current, and technically defensible stormwater-runoff models. Regression analysis is an accepted method for the interpretation of water-resources data and for the prediction of current or future conditions at sites that fit the input data model. Regression analysis is designed to provide an estimate of the average response of a system as it relates to variation in one or more known variables. To produce valid models, however, regression analysis should include visual analysis of scatterplots, an examination of the regression equation, evaluation of the method design assumptions, and regression diagnostics. A number of statistical techniques are described in the text and in the appendixes to provide the information necessary to interpret data by use of appropriate methods. Uncertainty is an important part of any decision-making process: to deal with uncertainty, the analyst needs to know the severity of the statistical uncertainty of the methods used to predict water quality. Statistical models need to be based on information that is meaningful, representative, complete, precise, accurate, and comparable to be deemed valid, current, and technically supportable. To assess uncertainty in the analytical tools, the modeling methods, and the underlying data set, all of these components need to be documented and communicated in an accessible format within project publications.
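
    As an illustration of the recommended workflow (log-transformed variables, a look at the regression equation, and regression diagnostics), a minimal sketch on synthetic flow/load data might look like the following; the variable names and coefficients are invented.

        import numpy as np
        import statsmodels.api as sm
        from scipy import stats

        rng = np.random.default_rng(0)
        # Hypothetical stormwater data: constituent load scales as a power of
        # flow, so both variables are log-transformed before regression.
        flow = rng.lognormal(2.0, 0.8, 200)                       # cfs
        load = 0.5 * flow ** 1.3 * rng.lognormal(0.0, 0.4, 200)   # kg/event

        X = sm.add_constant(np.log(flow))
        fit = sm.OLS(np.log(load), X).fit()
        print(fit.summary())          # coefficients, R^2, standard errors

        # Basic diagnostics the text calls for: residual normality and
        # influential observations (here via Cook's distance).
        print("residual normality p:", stats.normaltest(fit.resid).pvalue)
        influence = fit.get_influence()
        print("max Cook's distance:", influence.cooks_distance[0].max())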

  3. Comparison of spatiotemporal prediction models of daily exposure of individuals to ambient nitrogen dioxide and ozone in Montreal, Canada.

    PubMed

    Buteau, Stephane; Hatzopoulou, Marianne; Crouse, Dan L; Smargiassi, Audrey; Burnett, Richard T; Logan, Travis; Cavellin, Laure Deville; Goldberg, Mark S

    2017-07-01

    In previous studies investigating the short-term health effects of ambient air pollution, the exposure metric often used is the daily average across monitors, thus assuming that all individuals have the same daily exposure. Studies that incorporate the space-time exposures of individuals are essential to further our understanding of the short-term health effects of ambient air pollution. As part of a longitudinal cohort study of the acute effects of air pollution that incorporated subject-specific information and medical histories throughout the follow-up, the purpose of this study was to develop and compare different prediction models, using data from fixed-site monitors and other monitoring campaigns, to estimate daily, spatially resolved concentrations of ozone (O3) and nitrogen dioxide (NO2) at participants' residences in Montreal, 1991-2002. We used the following methods to predict spatially resolved daily concentrations of O3 and NO2 for each geographic region in Montreal (defined by three-character postal code areas): (1) assigning concentrations from the nearest monitor; (2) spatial interpolation using inverse-distance weighting; (3) back-extrapolation from a land-use regression model based on a dense monitoring survey; and (4) a combination of a land-use and Bayesian maximum entropy model. We used a variety of indices of agreement to compare the exposure estimates assigned by the different methods, notably scatterplots of pairwise predictions, the distribution of differences, and the absolute-agreement intraclass correlation coefficient (ICC). For each pairwise prediction, we also produced maps of the ICCs by these regions, indicating the spatial variability in the degree of agreement. We found some substantial differences in agreement across pairs of methods in daily mean predicted concentrations of O3 and NO2. On a given day and postal code area, the difference in the assigned concentration could be as high as 131 ppb for O3 and 108 ppb for NO2. For both pollutants, better agreement was found between predictions from the nearest monitor and the inverse-distance weighting interpolation methods, with ICCs of 0.89 (95% confidence interval (CI): 0.89, 0.89) for O3 and 0.81 (95% CI: 0.80, 0.81) for NO2. For this pair of methods, the maximum difference on a given day and postal code area was 36 ppb for O3 and 74 ppb for NO2. The back-extrapolation method showed a higher degree of disagreement with the nearest-monitor approach, inverse-distance weighting interpolation, and the Bayesian maximum entropy model, which were strongly constrained by the sparse monitoring network. The maps showed that the patterns of agreement differed across the postal code areas, and the variability depended on the pair of methods compared and on the pollutant. For O3, but not NO2, the postal areas showing greater disagreement were mostly located near the city centre and along highways, especially in maps involving the back-extrapolation method. In view of the substantial differences in daily concentrations of O3 and NO2 predicted by the different methods, we suggest that analyses of the health effects of air pollution make use of multiple exposure assessment methods. Although we cannot make recommendations as to which method is the most valid, models that make use of more highly spatially resolved data, such as from dense exposure surveys or high-spatial-resolution satellite data, likely provide the most valid estimates.
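
    Two of the building blocks used here, inverse-distance weighting and the absolute-agreement ICC, are short enough to sketch. The functions below are generic textbook implementations on synthetic data, not the study's code; the ICC follows the two-way ICC(2,1) absolute-agreement form.

        import numpy as np

        def idw(monitor_xy, monitor_vals, target_xy, power=2.0):
            """Inverse-distance-weighted concentration at a target point."""
            d = np.linalg.norm(monitor_xy - target_xy, axis=1)
            if np.any(d == 0):
                return monitor_vals[np.argmin(d)]   # target sits on a monitor
            w = 1.0 / d ** power
            return np.sum(w * monitor_vals) / np.sum(w)

        def icc_absolute_agreement(Y):
            """ICC(2,1) from an n-units x k-methods matrix Y."""
            n, k = Y.shape
            grand = Y.mean()
            msr = k * np.sum((Y.mean(axis=1) - grand) ** 2) / (n - 1)
            msc = n * np.sum((Y.mean(axis=0) - grand) ** 2) / (k - 1)
            sse = np.sum((Y - Y.mean(axis=1, keepdims=True)
                            - Y.mean(axis=0, keepdims=True) + grand) ** 2)
            mse = sse / ((n - 1) * (k - 1))
            return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

        # Hypothetical monitors (coordinates in km) with one day's NO2 (ppb).
        xy = np.array([[0.0, 0.0], [4.0, 1.0], [1.0, 5.0]])
        vals = np.array([21.0, 34.0, 27.0])
        print(f"IDW at (2, 2): {idw(xy, vals, np.array([2.0, 2.0])):.1f} ppb")

        # Agreement between two highly correlated daily exposure series.
        rng = np.random.default_rng(0)
        a = rng.normal(30, 8, 365)            # e.g. nearest-monitor O3
        b = a + rng.normal(0, 2, 365)         # e.g. IDW prediction
        print(f"ICC = {icc_absolute_agreement(np.column_stack([a, b])):.2f}")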

  4. Comparison of EML 105 and advantage analysers measuring capillary versus venous whole blood glucose in neonates.

    PubMed

    McNamara, P J; Sharief, N

    2001-09-01

    Near-patient blood glucose monitoring is an essential component of neonatal intensive care, but the analysers currently used are unreliable and inaccurate. The aim of this study was to compare a new glucose electrode-based analyser (EML 105) and a non-wipe reflectance photometry method (Advantage) against a recognized laboratory reference method (hexokinase). We also investigated the effect of sample route and haematocrit on the accuracy of the glucose readings obtained by each method of analysis. Whole blood glucose concentrations ranging from 0 to 3.5 mmol/l were carefully prepared in a laboratory setting, and blood samples from each solution were measured by the EML 105 and Advantage analysers. The results were then compared with the corresponding plasma glucose reading obtained by the hexokinase method, using linear regression analysis. An in vivo study was subsequently performed on 103 neonates over a 1-year period, using capillary and venous whole blood samples. Whole blood glucose concentration was estimated from each sample using both analysers and compared with the corresponding plasma glucose concentration estimated by the hexokinase method. Venous blood was centrifuged and haematocrit was estimated using standardized curves. The effect of haematocrit on the agreement between whole blood and plasma glucose was investigated by estimating the degree of correlation on a scatterplot of the results and by linear regression analysis. Both the EML 105 and hexokinase methods were highly accurate in vitro, with small proportional biases of 2% and 5%, respectively. In vivo, however, both study analysers overestimated neonatal plasma glucose, by at best 0.45 mmol/l (EML 105 venous) and at worst 0.69 mmol/l (EML 105 capillary). There was no significant difference in the agreement of capillary (GD = 0.12, 95% CI [-0.32, 0.08], p = 0.2) or venous samples (GD = 0.05, 95% CI [0.09, 0.19], p = 0.49) with plasma glucose when analysed by either study method (GD = glucose difference between study analyser and reference method). However, venous samples analysed by EML 105 estimated plasma glucose significantly better than capillary samples using the same method of analysis (GD = 0.24, 95% CI [0.09, 0.38], p < 0.01). The relationship between haematocrit and the resulting glucose differences was non-linear, with correlation coefficients of r = -0.057 (EML 105 capillary), r = 0.145 (EML 105 venous), r = -0.127 (Advantage capillary) and r = -0.275 (Advantage venous). There was no significant difference in the effect of haematocrit on the performance of EML 105 versus Advantage, regardless of the sample route. Both EML 105 and Advantage overestimated plasma glucose, with no significant difference in the performance of either analyser, regardless of the route of analysis. Agreement with plasma glucose was better for venous samples, but this was statistically significant only when EML 105 capillary and venous results were compared. Haematocrit is not a significant confounding factor for the performance of either EML 105 or Advantage in neonates, regardless of the sampling route. The margin of overestimation of blood glucose prohibits recommending either EML 105 or Advantage for routine neonatal glucose screening. The consequences include failure to diagnose hypoglycaemia accurately and delays in instigating therapeutic measures, both of which may result in adverse long-term neurodevelopmental outcomes.
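
    The agreement statistics reported above (a mean glucose difference, GD, with a 95% CI and a paired test) can be reproduced generically. The sketch below uses simulated readings with invented effect sizes purely for illustration; it is not the study's analysis code.

        import numpy as np
        from scipy import stats

        def agreement(analyser, reference):
            """Mean glucose difference (GD) with 95% CI and paired t-test."""
            d = np.asarray(analyser) - np.asarray(reference)
            gd = d.mean()
            se = d.std(ddof=1) / np.sqrt(d.size)
            lo, hi = stats.t.interval(0.95, d.size - 1, loc=gd, scale=se)
            _, p = stats.ttest_rel(analyser, reference)
            return gd, (lo, hi), p

        # Hypothetical readings (mmol/l): the analyser overestimates plasma
        # glucose by roughly the margin reported for EML 105 venous samples.
        rng = np.random.default_rng(0)
        plasma = rng.normal(3.0, 0.8, 103)
        eml_venous = plasma + rng.normal(0.45, 0.5, 103)
        gd, ci, p = agreement(eml_venous, plasma)
        print(f"GD = {gd:.2f} mmol/l, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}], p = {p:.3g}")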

  5. Felyx : A Free Open Software Solution for the Analysis of Large Earth Observation Datasets

    NASA Astrophysics Data System (ADS)

    Piolle, Jean-Francois; Shutler, Jamie; Poulter, David; Guidetti, Veronica; Donlon, Craig

    2014-05-01

    The GHRSST project, by assembling large collections of Earth observation data from various sources and agencies, has also raised the need to provide the user community with tools to inter-compare the datasets and to assess and monitor their quality. The ESA/Medspiration project, which implemented the first operating node of the GHRSST system for Europe, also paved the way successfully toward such generic analytics tools by developing the High Resolution Diagnostic Dataset System (HR-DDS) and satellite-to-in situ multi-sensor match-up databases. Building on this heritage, ESA is now funding the development by IFREMER, PML and Pelamis of felyx, a web tool merging the two capabilities into a single software solution. It will be a free, open-source solution, written in Python and JavaScript, whose aim is to provide Earth observation data producers and users with a flexible and reusable tool that allows the quality and performance of data streams (satellite, in situ and model) to be easily monitored and studied. The primary concept of felyx is to work as an extraction tool, subsetting source data over predefined target areas (which can be static or moving); these data subsets, and associated metrics, can then be accessed by users or client applications either as raw files, as automatic alerts and reports generated periodically, or through a flexible web interface enabling statistical analysis and visualization. Felyx presents itself as an open-source suite of tools, written in Python and JavaScript, enabling:

    * subsetting large local or remote collections of Earth observation data over predefined sites (geographical boxes) or moving targets (ship, buoy, hurricane), and storing the extracted data (referred to as miniProds) locally. These miniProds constitute a much smaller, representative subset of the original collection on which any kind of processing or assessment can be performed without having to cope with heavy volumes of data.

    * computing statistical metrics over these miniProds using, for instance, a set of usual statistical operators (mean, median, rms, ...), fully extensible and applicable to any variable of a dataset. These metrics are stored in a fast search engine, queryable by humans and automated applications.

    * reporting or alerting, based on user-defined inference rules, through various media (emails, twitter feeds, ...) and devices (phones, tablets).

    * analysing miniProds and metrics through a web interface that lets users dig into this base of information and extract useful knowledge through multidimensional interactive display functions (time series, scatterplots, histograms, maps).

    The services provided by felyx will be generic, deployable at users' own premises, and adaptable enough to integrate any kind of parameter. Users will be able to operate their own felyx instance at any location, on datasets and parameters of their own interest, and the various instances will be able to interact with each other, creating a web of felyx systems enabling aggregation and cross-comparison of miniProds and metrics from multiple sources. Initially, two instances will be operated simultaneously during a 6-month demonstration phase: at IFREMER, on sea surface temperature (for the GHRSST community) and ocean wave datasets, and at PML, on ocean colour. We will present results from the felyx project, demonstrate how the GHRSST community can exploit felyx, and demonstrate how the wider community can make use of GHRSST data within felyx.
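
    The miniProd workflow (subsetting a large collection over a fixed site, then computing per-site metrics) can be sketched in a few lines. The Python fragment below is only an illustration of the concept on a synthetic gridded dataset using xarray; it is not felyx's actual API.

        import numpy as np
        import xarray as xr

        # Hypothetical gridded SST collection standing in for a GHRSST stream.
        rng = np.random.default_rng(0)
        ds = xr.Dataset(
            {"sst": (("time", "lat", "lon"),
                     rng.normal(290, 2, (30, 90, 180)))},
            coords={"time": np.arange(30),
                    "lat": np.linspace(-89, 89, 90),
                    "lon": np.linspace(-179, 179, 180)},
        )

        # "miniProd": subset the stream over a predefined static site box.
        site = ds.sel(lat=slice(40, 50), lon=slice(-10, 5))

        # Per-timestep metrics over the site, akin to felyx's metric store.
        metrics = xr.Dataset({
            "mean": site.sst.mean(dim=("lat", "lon")),
            "median": site.sst.median(dim=("lat", "lon")),
            "rms": np.sqrt((site.sst ** 2).mean(dim=("lat", "lon"))),
        })
        print(metrics.to_dataframe().head())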

  6. Pitfalls in chronobiology: a suggested analysis using intrathecal bupivacaine analgesia as an example.

    PubMed

    Shafer, Steven L; Lemmer, Bjoern; Boselli, Emmanuel; Boiste, Fabienne; Bouvet, Lionel; Allaouchiche, Bernard; Chassard, Dominique

    2010-10-01

    The duration of analgesia from epidural administration of local anesthetics to parturients has been shown to follow a rhythmic pattern according to the time of drug administration. We studied whether there was a similar pattern after intrathecal administration of bupivacaine in parturients. In the course of the analysis, we came to believe that some data points coincident with provider shift changes were influenced by nonbiological, health-care system factors, thus incorrectly suggesting a periodic signal in the duration of labor analgesia. We developed graphical and analytical tools to help assess the influence of individual points on the chronobiological analysis. Women with singleton term pregnancies in vertex presentation, cervical dilation 3 to 5 cm, pain score >50 mm (of 100 mm), and requesting labor analgesia were enrolled in this study. Patients received 2.5 mg of intrathecal bupivacaine in 2 mL using a combined spinal-epidural technique. Analgesia duration was the time from intrathecal injection until the first request for additional analgesia. The duration of analgesia was analyzed by visual inspection of the data, application of smoothing functions (Supersmoother; LOWESS and LOESS [locally weighted scatterplot smoothing functions]), analysis of variance, Cosinor (Chronos-Fit), Excel, and NONMEM (nonlinear mixed-effect modeling). Confidence intervals (CIs) were determined by bootstrap analysis (1000 replications with replacement) using PLT Tools. Eighty-two women were included in the study. Examination of the raw data using 3 smoothing functions revealed a bimodal pattern, with a peak at approximately 0630 and a subsequent peak in the afternoon or evening, depending on the smoother. Analysis of variance did not identify any statistically significant difference between the duration of analgesia when intrathecal injection was given from midnight to 0600 and the duration of analgesia after intrathecal injection at other times. Chronos-Fit, Excel, and NONMEM produced identical results, with a mean duration of analgesia of 38.4 minutes (95% CI: 35.4-41.6 minutes), an 8-hour periodic waveform with an amplitude of 5.8 minutes (95% CI: 2.1-10.7 minutes), and a phase offset of 6.5 hours (95% CI: 5.4-8.0 hours) relative to midnight. The 8-hour periodic model did not reach statistical significance in 40% of bootstrap analyses, implying that its statistical significance depended on a subset of the data. Two data points before the change of shift at 0700 contributed most strongly to the statistical significance of the periodic waveform. Without these data points, there was no evidence of an 8-hour periodic waveform for intrathecal bupivacaine analgesia. Chronobiology includes the influence of external daily rhythms in the environment (e.g., nursing shifts) as well as human biological rhythms. We were able to distinguish the influence of an external rhythm by combining several novel analyses: (1) graphical presentation superimposing the raw data, external rhythms (e.g., nursing and anesthesia provider shifts), and smoothing functions; (2) graphical display of the contribution of each data point to the statistical significance; and (3) bootstrap analysis to identify whether the statistical significance was highly dependent on a data subset. These approaches suggested that 2 data points were likely artifacts of the change in nursing and anesthesia shifts. When these points were removed, there was no suggestion of a biological rhythm in the duration of intrathecal bupivacaine analgesia.
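
    The core cosinor fit and case-resampling bootstrap can be sketched as follows. The data are simulated to resemble the reported mesor, amplitude, and phase, and the code is a generic illustration rather than the Chronos-Fit or NONMEM analyses used in the study.

        import numpy as np
        from scipy.optimize import curve_fit

        def cosinor(t, mesor, amp, phase, period=8.0):
            """Single-component cosinor with a fixed 8-hour period."""
            return mesor + amp * np.cos(2 * np.pi * (t - phase) / period)

        rng = np.random.default_rng(0)
        # Hypothetical analgesia durations (min) vs time of injection (h),
        # with a weak 8-hour rhythm like the one reported.
        t = rng.uniform(0, 24, 82)
        y = 38.4 + 5.8 * np.cos(2 * np.pi * (t - 6.5) / 8) + rng.normal(0, 12, 82)

        popt, _ = curve_fit(cosinor, t, y, p0=[38, 5, 6])
        print(f"mesor {popt[0]:.1f} min, amplitude {popt[1]:.1f} min, "
              f"phase {popt[2]:.1f} h")

        # Bootstrap (resampling cases with replacement) for the amplitude CI;
        # an interval near zero in many replicates flags a fragile rhythm.
        amps = []
        for _ in range(1000):
            i = rng.integers(0, t.size, t.size)
            try:
                p, _ = curve_fit(cosinor, t[i], y[i], p0=[38, 5, 6])
                amps.append(abs(p[1]))
            except RuntimeError:      # occasional non-convergence
                pass
        lo, hi = np.percentile(amps, [2.5, 97.5])
        print(f"bootstrap 95% CI for amplitude: [{lo:.1f}, {hi:.1f}] min")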

  7. Trends in surface-water quality at selected National Stream Quality Accounting Network (NASQAN) stations, in Michigan

    USGS Publications Warehouse

    Syed, Atiq U.; Fogarty, Lisa R.

    2005-01-01

    To demonstrate the value of long-term water-quality monitoring, the Michigan Department of Environmental Quality (MDEQ), in cooperation with the U.S. Geological Survey (USGS), initiated a study to evaluate potential trends in water-quality constituents for selected National Stream Quality Accounting Network (NASQAN) stations in Michigan. The goal of this study is to assist the MDEQ in evaluating the effectiveness of water-pollution control efforts and in identifying water-quality concerns. The study included a total of nine NASQAN stations in Michigan. Approximately 28 constituents were analyzed for trends. Station selection was based on data availability, land-use characteristics, and station priority for the MDEQ Water Chemistry Monitoring Project. Trend analyses were completed using the uncensored Seasonal Kendall Test in the computer program Estimate Trend (ESTREND), a software program for the detection of trends in water-quality data. The parameters chosen for the trend test had (1) at least a 5-year period of record, (2) about 5 percent of the observations censored at a single reporting limit, and (3) 40 percent of the values within the beginning one-fifth and ending one-fifth of the selected period. In this study, a negative trend indicates a decrease in the concentration of a particular constituent, which generally means an improvement in water quality, whereas a positive trend means an increase in concentration and possible degradation of water quality. The results show an overall improvement in water quality at the Clinton River at Mount Clemens, the Manistee River at Manistee, and the Pigeon River near Caseville. The detected trends for these stations show decreases in the concentrations of various constituents such as nitrogen compounds, conductance, sulfate, fecal coliform bacteria, and fecal streptococci bacteria. These negative trends may indicate an overall improvement in agricultural practices, municipal and industrial wastewater-treatment processes, and effective regulations. Phosphorus data for most of the study stations could not be analyzed because of data limitations for trend tests. The only station with a significant negative trend in total phosphorus concentration is the Clinton River at Mount Clemens. However, scatterplot analyses of phosphorus data indicate decreasing concentrations with time at most of the study stations. Positive trends in the concentration of nitrogen compounds were detected at the Kalamazoo River near Saugatuck and the Muskegon River near Bridgeton. Positive trends in both fecal coliform and total fecal coliform were detected at the Tahquamenon River near Paradise. Various point and nonpoint sources could produce such positive trends, but increases in the concentrations of nitrogen compounds and fecal coliform bacteria are most commonly associated with agricultural practices and sewage-plant discharges. The constituent with the most numerous and geographically widespread significant trends is pH. The pH levels increased at six of the nine stations, on all the major rivers in Michigan, with no negative trend at any station. The cause of the pH increase is difficult to determine, as it could be related to a combination of anthropogenic activities and natural processes occurring simultaneously in the environment. Trends in the concentrations of major ions, such as calcium, sodium, magnesium, sulfate, fluoride, chloride, and potassium, were detected at eight of the nine stations. Negative trends were detected only in sulfate and fluoride concentrations; a positive trend was detected only in calcium concentration. The major ions with the most widespread significant trends are sodium and chloride: three positive and two negative trends were detected for sodium, and three negative and two positive trends were detected for chloride. The negative trends in chloride concentrations outnumbered the positive trends. This result indicates a slight improvement in surface-water quality, because chloride in natural water comes from point sources such as deicing salt, sewage effluents, industrial wastes, and oil fields. For other major ions, such as magnesium and potassium, both positive and negative trends were detected. These trends indicate changes in surface-water quality caused by a variety of point and nonpoint sources throughout Michigan, as well as natural changes in the environment.
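
    The Seasonal Kendall test used by ESTREND follows a simple recipe: compute Kendall's S within each season, so that seasonality cannot masquerade as trend, pool the seasonal statistics, and apply a normal approximation. A minimal Python sketch (no ties or serial-correlation corrections, synthetic monthly data) is:

        import numpy as np
        from scipy import stats

        def seasonal_kendall(values, seasons):
            """Seasonal Kendall trend test: sum Kendall's S and its variance
            across seasons, then apply the normal approximation with a
            continuity correction. Assumes serially independent, untied data
            supplied in chronological order."""
            values = np.asarray(values)
            seasons = np.asarray(seasons)
            S, var = 0.0, 0.0
            for s in np.unique(seasons):
                y = values[seasons == s]
                n = y.size
                if n < 2:
                    continue
                # Concordant minus discordant pairs, in time order.
                S += sum(np.sign(y[j] - y[i])
                         for i in range(n) for j in range(i + 1, n))
                var += n * (n - 1) * (2 * n + 5) / 18.0
            z = (S - np.sign(S)) / np.sqrt(var)
            p = 2 * stats.norm.sf(abs(z))
            return z, p

        # Hypothetical 10-year monthly chloride record with a slight
        # downward trend superimposed on a seasonal cycle.
        rng = np.random.default_rng(0)
        months = np.tile(np.arange(12), 10)
        conc = 25 - 0.05 * np.arange(120) + 3 * np.sin(months) + rng.normal(0, 1, 120)
        print("Z = %.2f, p = %.4f" % seasonal_kendall(conc, months))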

  8. Correlating tephras and cryptotephras using glass compositional analyses and numerical and statistical methods: Review and evaluation

    NASA Astrophysics Data System (ADS)

    Lowe, David J.; Pearce, Nicholas J. G.; Jorgensen, Murray A.; Kuehn, Stephen C.; Tryon, Christian A.; Hayward, Chris L.

    2017-11-01

    We define tephras and cryptotephras and their components (mainly ash-sized particles of glass ± crystals in distal deposits) and summarize the basis of tephrochronology as a chronostratigraphic correlational and dating tool for palaeoenvironmental, geological, and archaeological research. We then document and appraise recent advances in analytical methods used to determine the major, minor, and trace elements of individual glass shards from tephra or cryptotephra deposits to aid their correlation and application. Protocols developed recently for the electron probe microanalysis of major elements in individual glass shards help to improve data quality and standardize reporting procedures. A narrow electron beam (diameter ∼3-5 μm) can now be used to analyze smaller glass shards than previously attainable. Reliable analyses of 'microshards' (defined here as glass shards <32 μm in diameter) using narrow beams are useful for fine-grained samples from distal or ultra-distal geographic locations, and for vesicular or microlite-rich glass shards or small melt inclusions. Caveats apply, however, in the microprobe analysis of very small microshards (≤∼5 μm in diameter), where particle geometry becomes important, and of microlite-rich glass shards where the potential problem of secondary fluorescence across phase boundaries needs to be recognised. Trace element analyses of individual glass shards using laser ablation inductively coupled plasma-mass spectrometry (LA-ICP-MS), with crater diameters of 20 μm and 10 μm, are now effectively routine, giving detection limits well below 1 ppm. Smaller ablation craters (<10 μm) can be subject to significant element fractionation during analysis, but the systematic relationship of such fractionation with glass composition suggests that analyses for some elements at these resolutions may be quantifiable. In undertaking analyses, either by microprobe or LA-ICP-MS, reference material data acquired using the same procedure, and preferably from the same analytical session, should be presented alongside new analytical data. In part 2 of the review, we describe, critically assess, and recommend ways in which tephras or cryptotephras can be correlated (in conjunction with other information) using numerical or statistical analyses of compositional data. Statistical methods provide a less subjective means of dealing with analytical data pertaining to tephra components (usually glass or crystals/phenocrysts) than heuristic alternatives. They enable a better understanding of relationships among the data from multiple viewpoints to be developed and help quantify the degree of uncertainty in establishing correlations. In common with other scientific hypothesis testing, it is easier to infer using such analysis that two or more tephras are different rather than the same. Adding stratigraphic, chronological, spatial, or palaeoenvironmental data (i.e. multiple criteria) is usually necessary and allows for more robust correlations to be made. A two-stage approach is useful, the first focussed on differences in the mean composition of samples, or their range, which can be visualised graphically via scatterplot matrices or bivariate plots coupled with the use of statistical tools such as distance measures, similarity coefficients, hierarchical cluster analysis (informed by distance measures or similarity or cophenetic coefficients), and principal components analysis (PCA). 
Some statistical methods (cluster analysis, discriminant analysis) are referred to as 'machine learning' in the computing literature. The second stage examines sample variance and the degree of compositional similarity so that sample equivalence or otherwise can be established on a statistical basis. This stage may involve discriminant function analysis (DFA), support vector machines (SVMs), canonical variates analysis (CVA), and ANOVA or MANOVA (or its two-sample special case, the Hotelling two-sample T2 test). Randomization tests can be used where distributional assumptions such as multivariate normality underlying parametric tests are doubtful. Compositional data may be transformed and scaled before being subjected to multivariate statistical procedures including calculation of distance matrices, hierarchical cluster analysis, and PCA. Such transformations may make the assumption of multivariate normality more appropriate. A sequential procedure using Mahalanobis distance and the Hotelling two-sample T2 test is illustrated using glass major element data from trachytic to phonolitic Kenyan tephras. All these methods require a broad range of high-quality compositional data which can be used to compare 'unknowns' with reference (training) sets that are sufficiently complete to account for all possible correlatives, including tephras with heterogeneous glasses that contain multiple compositional groups. Currently, incomplete databases are tending to limit correlation efficacy. The development of an open, online global database to facilitate progress towards integrated, high-quality tephrostratigraphic frameworks for different regions is encouraged.
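
    The sequential Mahalanobis/Hotelling procedure mentioned above reduces, for two samples, to the standard two-sample Hotelling T2 test. The sketch below implements the textbook statistic on simulated glass compositions; it omits the log-ratio transformation the authors recommend for compositional data, and all numbers are invented.

        import numpy as np
        from scipy import stats

        def hotelling_t2(X, Y):
            """Two-sample Hotelling T2 test: the squared Mahalanobis distance
            between group means under a pooled covariance, rescaled to an F
            statistic (multivariate normality, equal covariances assumed)."""
            n1, n2, p = len(X), len(Y), X.shape[1]
            d = X.mean(axis=0) - Y.mean(axis=0)
            Sp = ((n1 - 1) * np.cov(X, rowvar=False)
                  + (n2 - 1) * np.cov(Y, rowvar=False)) / (n1 + n2 - 2)
            t2 = (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(Sp, d)
            f = t2 * (n1 + n2 - p - 1) / (p * (n1 + n2 - 2))
            pval = stats.f.sf(f, p, n1 + n2 - p - 1)
            return t2, pval

        # Hypothetical major-element data (e.g. SiO2, K2O, CaO wt%) for glass
        # shards from two layers suspected to be correlatives.
        rng = np.random.default_rng(0)
        tephra_a = rng.multivariate_normal([62.0, 5.1, 1.8],
                                           np.diag([1.0, 0.04, 0.02]), 30)
        tephra_b = rng.multivariate_normal([62.2, 5.0, 1.8],
                                           np.diag([1.0, 0.04, 0.02]), 25)
        t2, p = hotelling_t2(tephra_a, tephra_b)
        print(f"T2 = {t2:.2f}, p = {p:.3f}  (p > 0.05: layers not distinguishable)")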
