7 CFR 42.132 - Determining cumulative sum values.
Code of Federal Regulations, 2010 CFR
2010-01-01
... the previous subgroup. (2) Subtract the subgroup tolerance (“T”). (3) The CuSum value is reset in the... § 42.132 Determining cumulative sum values. (a) The parameters for the on-line cumulative sum sampling plans for AQL's...
7 CFR 42.132 - Determining cumulative sum values.
Code of Federal Regulations, 2011 CFR
2011-01-01
... STANDARDS FOR CONDITION OF FOOD CONTAINERS, On-Line Sampling and Inspection Procedures, § 42.132 Determining cumulative sum values. (a) The parameters for the on-line cumulative sum sampling plans for AQL's...
NASA Technical Reports Server (NTRS)
Manson, S. S.; Halford, G. R.
1980-01-01
Simple procedures are presented for treating cumulative fatigue damage under complex loading history using either the damage curve concept or the double linear damage rule. A single equation is provided for use with the damage curve approach; each loading event providing a fraction of damage until failure is presumed to occur when the damage sum becomes unity. For the double linear damage rule, analytical expressions are provided for determining the two phases of life. The procedure involves two steps, each similar to the conventional application of the commonly used linear damage rule. When the sum of cycle ratios based on phase 1 lives reaches unity, phase 1 is presumed complete, and further loadings are summed as cycle ratios on phase 2 lives. When the phase 2 sum reaches unity, failure is presumed to occur. No other physical properties or material constants than those normally used in a conventional linear damage rule analysis are required for application of either of the two cumulative damage methods described. Illustrations and comparisons of both methods are discussed.
NASA Technical Reports Server (NTRS)
Manson, S. S.; Halford, G. R.
1981-01-01
Simple procedures are given for treating cumulative fatigue damage under complex loading history using either the damage curve concept or the double linear damage rule. A single equation is given for use with the damage curve approach; each loading event providing a fraction of damage until failure is presumed to occur when the damage sum becomes unity. For the double linear damage rule, analytical expressions are given for determining the two phases of life. The procedure comprises two steps, each similar to the conventional application of the commonly used linear damage rule. Once the sum of cycle ratios based on Phase I lives reaches unity, Phase I is presumed complete, and further loadings are summed as cycle ratios based on Phase II lives. When the Phase II sum attains unity, failure is presumed to occur. It is noted that no physical properties or material constants other than those normally used in a conventional linear damage rule analysis are required for application of either of the two cumulative damage methods described. Illustrations and comparisons are discussed for both methods.
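The two-step bookkeeping described in these two abstracts (sum cycle ratios against Phase I lives until the sum reaches unity, then sum against Phase II lives until failure) can be sketched in a few lines. The loading tuples and phase lives below are hypothetical inputs; the reports' analytical expressions for computing the two phase lives are not reproduced here.

```python
def double_linear_damage(loadings):
    """Apply the double linear damage rule to a loading history.

    loadings: sequence of (n_cycles, N1, N2), where N1 and N2 are the
    Phase I and Phase II lives at that loading level (hypothetical
    inputs for illustration).
    Returns (phase1_sum, phase2_sum, failed).
    """
    d1 = 0.0  # Phase I cycle-ratio sum
    d2 = 0.0  # Phase II cycle-ratio sum
    for n, N1, N2 in loadings:
        if d1 < 1.0:
            # cycles still needed to complete Phase I at this level
            need = (1.0 - d1) * N1
            if n <= need:
                d1 += n / N1
                continue
            d1 = 1.0
            n -= need          # leftover cycles spill into Phase II
        d2 += n / N2
        if d2 >= 1.0:
            return d1, d2, True   # Phase II sum reached unity: failure
    return d1, d2, False
```

A loading event that straddles the end of Phase I is split, with the remaining cycles counted as a Phase II cycle ratio, mirroring the two sequential applications of the linear damage rule.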
Cumulative sum control charts for assessing performance in arterial surgery.
Beiles, C Barry; Morton, Anthony P
2004-03-01
The Melbourne Vascular Surgical Association (Melbourne, Australia) undertakes surveillance of mortality following aortic aneurysm surgery, patency at discharge following infrainguinal bypass, and stroke and death following carotid endarterectomy. A quality-improvement protocol employing the Deming cycle requires that the system for performing surgery first be analysed and optimized. Process and outcome data are then collected, and these data require careful analysis. There must be a mechanism for determining the causes of unsatisfactory outcomes, and a good feedback mechanism must exist so that good performance is acknowledged and unsatisfactory performance corrected. A simple method for analysing these data that detects changes in average outcome rates is available using cumulative sum statistical control charts. Data were analysed both retrospectively from 1999 to 2001 and prospectively during 2002 using cumulative sum control methods. A pathway to deal with control chart signals has been developed. The standard of arterial surgery in Victoria, Australia, is high. In one case a safe and satisfactory outcome was achieved by following the pathway developed by the audit committee. Cumulative sum control charts are a simple and effective tool for identifying variations in performance standards in arterial surgery. The establishment of a pathway to manage problem performance is a vital part of audit activity.
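A chart of the kind this audit describes can be sketched as a Bernoulli CUSUM on a stream of binary outcomes. The acceptable rate, unacceptable rate, and decision limit below are illustrative choices, not the values used by the Melbourne audit.

```python
import math

def bernoulli_cusum(outcomes, p0, p1, h):
    """Upper CUSUM for a binary adverse-outcome stream.

    p0: acceptable adverse-outcome rate; p1: unacceptable rate (p1 > p0);
    h: decision limit. Returns the chart values and the indices at which
    the chart signals (the chart resets to zero after each signal).
    """
    w1 = math.log(p1 / p0)                # weight for an adverse outcome
    w0 = math.log((1 - p1) / (1 - p0))    # weight for a good outcome
    s, chart, signals = 0.0, [], []
    for i, x in enumerate(outcomes):
        s = max(0.0, s + (w1 if x else w0))
        if s >= h:
            signals.append(i)
            s = 0.0
        chart.append(s)
    return chart, signals
```

Good outcomes pull the statistic down (it is floored at zero), adverse outcomes push it up, and a crossing of `h` flags a possible shift from `p0` toward `p1`.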
Novick, Richard J; Fox, Stephanie A; Stitt, Larry W; Forbes, Thomas L; Steiner, Stefan
2006-08-01
We previously applied non-risk-adjusted cumulative sum methods to analyze coronary bypass outcomes. The objective of this study was to assess the incremental advantage of risk-adjusted cumulative sum methods in this setting. Prospective data were collected in 793 consecutive patients who underwent coronary bypass grafting performed by a single surgeon during a period of 5 years. The composite occurrence of an "adverse outcome" included mortality or any of 10 major complications. An institutional logistic regression model for adverse outcome was developed by using 2608 contemporaneous patients undergoing coronary bypass. The predicted risk of adverse outcome in each of the surgeon's 793 patients was then calculated. A risk-adjusted cumulative sum curve was then generated after specifying control limits and odds ratio. This risk-adjusted curve was compared with the non-risk-adjusted cumulative sum curve, and the clinical significance of this difference was assessed. The surgeon's adverse outcome rate was 96 of 793 (12.1%) versus 270 of 1815 (14.9%) for all the other institution's surgeons combined (P = .06). The non-risk-adjusted curve reached below the lower control limit, signifying excellent outcomes between cases 164 and 313, 323 and 407, and 667 and 793, but transgressed the upper limit between cases 461 and 478. The risk-adjusted cumulative sum curve never transgressed the upper control limit, signifying that cases preceding and including 461 to 478 were at an increased predicted risk. Furthermore, if the risk-adjusted cumulative sum curve was reset to zero whenever a control limit was reached, it still signaled a decrease in adverse outcome at 166, 653, and 782 cases. Risk-adjusted cumulative sum techniques provide incremental advantages over non-risk-adjusted methods by not signaling a decrement in performance when preoperative patient risk is high.
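The risk adjustment this study uses can be sketched with Steiner-style likelihood-ratio weights, in which each case's contribution depends on its predicted risk: an adverse outcome in a low-risk patient moves the chart up more than one in a high-risk patient. The odds ratio and control limit below are illustrative defaults, not the study's specification.

```python
import math

def risk_adjusted_cusum(outcomes, risks, odds_ratio=2.0, h=4.5):
    """Risk-adjusted upper CUSUM (a sketch; the OR and limit here
    are illustrative, not the values tuned in the study).

    outcomes: 0/1 adverse outcome per case, in chronological order.
    risks: predicted probability of an adverse outcome per case.
    """
    s, chart = 0.0, []
    for y, p in zip(outcomes, risks):
        denom = 1.0 - p + odds_ratio * p   # normalising term
        w = math.log(odds_ratio / denom) if y else math.log(1.0 / denom)
        s = max(0.0, s + w)
        chart.append(s)
    signals = [i for i, v in enumerate(chart) if v >= h]
    return chart, signals
```

Because the weight shrinks as the predicted risk grows, a run of adverse outcomes in predictably high-risk patients need not trigger a signal, which is exactly the behaviour the abstract reports for cases 461 to 478.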
Rakitzis, Athanasios C; Castagliola, Philippe; Maravelakis, Petros E
2018-02-01
In this work, we study upper-sided cumulative sum control charts that are suitable for monitoring geometrically inflated Poisson processes. We assume that a process is properly described by a two-parameter extension of the zero-inflated Poisson distribution, which can be used for modeling count data with an excessive number of zero and non-zero values. Two different upper-sided cumulative sum-type schemes are considered, both suitable for the detection of increasing shifts in the average of the process. Aspects of their statistical design are discussed and their performance is compared under various out-of-control situations. Changes in both parameters of the process are considered. Finally, the monitoring of the monthly cases of poliomyelitis in the USA is given as an illustrative example.
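The paper's charts target a two-parameter zero-inflated Poisson extension, but the basic mechanics of an upper-sided CUSUM for counts can be shown with a plain-Poisson sketch; `mu0`, `mu1`, and `h` below are illustrative.

```python
import math

def poisson_cusum(counts, mu0, mu1, h):
    """Upper-sided CUSUM for count data, tuned to detect a shift in
    the mean from mu0 (in control) to mu1 (out of control).

    A plain-Poisson sketch; the paper's schemes instead target a
    geometrically inflated Poisson process.
    """
    k = (mu1 - mu0) / math.log(mu1 / mu0)   # reference value
    s, chart = 0.0, []
    for x in counts:
        s = max(0.0, s + x - k)
        chart.append(s)
    signals = [i for i, v in enumerate(chart) if v >= h]
    return chart, signals
```

Counts below the reference value `k` drain the statistic toward zero; a sustained run above it accumulates until the decision limit `h` is crossed.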
Ichikawa, Nobuki; Homma, Shigenori; Yoshida, Tadashi; Ohno, Yosuke; Kawamura, Hideki; Wakizaka, Kazuki; Nakanishi, Kazuaki; Kazui, Keizo; Iijima, Hiroaki; Shomura, Hiroki; Funakoshi, Tohru; Nakano, Shiro; Taketomi, Akinobu
2017-12-01
We retrospectively assessed the efficacy of our mentor tutoring system for teaching laparoscopic colorectal surgical skills in a general hospital. A series of 55 laparoscopic colectomies performed by one trainee was evaluated. Next, the learning curves for high anterior resection performed by the trainee (n=20) were compared with those of a self-trained surgeon (n=19). Cumulative sum analysis and multivariate regression analyses showed that 38 completed cases were needed to reduce the operative time. In high anterior resection, the mean operative times were significantly shorter from the seventh case onward for the tutored surgeon than for the self-trained surgeon. In cumulative sum charting, the curve reached a plateau by the seventh case for the tutored surgeon, but continued to increase for the self-trained surgeon. Mentor tutoring effectively teaches laparoscopic colorectal surgical skills in a general hospital setting.
On Connected Diagrams and Cumulants of Erdős-Rényi Matrix Models
NASA Astrophysics Data System (ADS)
Khorunzhiy, O.
2008-08-01
Regarding the adjacency matrices of n-vertex graphs and the related graph Laplacian, we introduce two families of discrete matrix models, both constructed with the help of the Erdős-Rényi ensemble of random graphs. The corresponding matrix sums represent the characteristic functions of the average number of walks and closed walks over the random graph. These sums can be considered discrete analogues of the matrix integrals of random matrix theory. We study the diagram structure of the cumulant expansions of the logarithms of these matrix sums and analyze the limiting expressions as n → ∞ in the cases of constant and vanishing edge probabilities.
2010-12-01
Francisella tularensis is one of these, and is the causal agent of the disease tularemia. Tularemia is used as the motivating problem to evaluate and compare the... Subject terms: biosurveillance, rare disease, tularemia, cumulative sum, CUSUM.
The Importance of Practice in the Development of Statistics.
1983-01-01
NRC Technical Summary Report #2471. ...component analysis, bioassay, limits for a ratio, quality control, sampling inspection, non-parametric tests, transformation theory, ARIMA time series models, sequential tests, cumulative sum charts, data analysis plotting techniques, and a resolution of the Bayes-frequentist controversy. It appears...
A Study on Predictive Analytics Application to Ship Machinery Maintenance
2013-09-01
Looking at the nature of the time series forecasting method, it would be better applied to offline analysis. The application for real-time online... other system attributes in future. Two techniques of statistical analysis, mainly time series models and cumulative sum control charts, are discussed in... statistical tool employed for the two techniques of statistical analysis. Both time series forecasting as well as CUSUM control charts are shown to be
Norisue, Yasuhiro; Tokuda, Yasuharu; Juarez, Mayrol; Uchimido, Ryo; Fujitani, Shigeki; Stoeckel, David A
2017-02-07
Cumulative sum (CUSUM) analysis can be used to continuously monitor the performance of an individual or process and detect deviations from a preset or standard level of achievement. However, no previous study has evaluated the utility of CUSUM analysis in facilitating timely environmental assessment and interventions to improve performance of linear-probe endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA). The aim of this study was to evaluate the usefulness of combined CUSUM and chronological environmental analysis as a tool to improve the learning environment for EBUS-TBNA trainees. This study was an observational chart review. To determine if performance was acceptable, CUSUM analysis was used to track procedural outcomes of trainees in EBUS-TBNA. To investigate chronological changes in the learning environment, multivariate logistic regression analysis was used to compare several indices before and after time points when significant changes occurred in proficiency. Presence of an additional attending bronchoscopist was inversely associated with nonproficiency (odds ratio, 0.117; 95% confidence interval, 0-0.749; P = 0.019). Other factors, including presence of an on-site cytopathologist and dose of sedatives used, were not significantly associated with duration of nonproficiency. Combined CUSUM and chronological environmental analysis may be useful in hastening interventions that improve performance of EBUS-TBNA.
ERIC Educational Resources Information Center
Meijer, Rob R.; van Krimpen-Stoop, Edith M. L. A.
In this study a cumulative-sum (CUSUM) procedure from the theory of Statistical Process Control was modified and applied in the context of person-fit analysis in a computerized adaptive testing (CAT) environment. Six person-fit statistics were proposed using the CUSUM procedure, and three of them could be used to investigate the CAT in online test…
Cumulative Poisson Distribution Program
NASA Technical Reports Server (NTRS)
Bowerman, Paul N.; Scheuer, Ernest M.; Nolty, Robert
1990-01-01
Overflow and underflow in sums prevented. Cumulative Poisson Distribution Program, CUMPOIS, one of two computer programs that make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), used independently of one another. CUMPOIS determines cumulative Poisson distribution, used to evaluate cumulative distribution function (cdf) for gamma distributions with integer shape parameters and cdf for χ² distributions with even degrees of freedom. Used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. Written in C.
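CUMPOIS itself is written in C and its source is not shown here, but the overflow/underflow concern the abstract mentions can be illustrated in a short sketch: accumulate the Poisson terms by a multiplicative recurrence in log space, so that neither a huge mean nor a tiny leading term breaks the sum.

```python
import math

def cumulative_poisson(n, lam):
    """P(X <= n) for X ~ Poisson(lam).

    Terms are generated by the recurrence P(k) = P(k-1) * lam / k and
    accumulated in log space (log-sum-exp), so large lam neither
    overflows nor underflows the running sum.
    """
    if n < 0:
        return 0.0
    log_term = -lam            # log P(X = 0)
    log_cdf = log_term
    for k in range(1, n + 1):
        log_term += math.log(lam / k)
        hi = max(log_cdf, log_term)
        log_cdf = hi + math.log(math.exp(log_cdf - hi) + math.exp(log_term - hi))
    return math.exp(log_cdf)
```

Direct summation of `exp(-lam) * lam**k / factorial(k)` would underflow at the first term for `lam` around 750; the log-space version handles such means without trouble.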
Castagna, Antonella; Csepregi, Kristóf; Neugart, Susanne; Zipoli, Gaetano; Večeřová, Kristýna; Jakab, Gábor; Jug, Tjaša; Llorens, Laura; Martínez-Abaigar, Javier; Martínez-Lüscher, Johann; Núñez-Olivera, Encarnación; Ranieri, Annamaria; Schoedl-Hummel, Katharina; Schreiner, Monika; Teszlák, Péter; Tittmann, Susanne; Urban, Otmar; Verdaguer, Dolors; Jansen, Marcel A K; Hideg, Éva
2017-11-01
A 2-year study explored metabolic and phenotypic plasticity of sun-acclimated Vitis vinifera cv. Pinot noir leaves collected from 12 locations across a 36.69-49.98°N latitudinal gradient. Leaf morphological and biochemical parameters were analysed in the context of meteorological parameters and the latitudinal gradient. We found that leaf fresh weight and area were negatively correlated with both global and ultraviolet (UV) radiation, cumulated global radiation being a stronger correlator. Cumulative UV radiation (sumUVR) was the strongest correlator with most leaf metabolites and pigments. Leaf UV-absorbing pigments, total antioxidant capacities, and phenolic compounds increased with increasing sumUVR, whereas total carotenoids and xanthophylls decreased. Despite this reallocation of metabolic resources from carotenoids to phenolics, an increase in xanthophyll-cycle pigments (the sum of the amounts of three xanthophylls: violaxanthin, antheraxanthin, and zeaxanthin) with increasing sumUVR indicates active, dynamic protection for the photosynthetic apparatus. In addition, increased amounts of flavonoids (quercetin glycosides) and constitutive β-carotene and α-tocopherol pools provide antioxidant protection against reactive oxygen species. However, rather than a continuum of plant acclimation responses, principal component analysis indicates clusters of metabolic states across the explored 1,500-km-long latitudinal gradient. This study emphasizes the physiological component of plant responses to latitudinal gradients and reveals the physiological plasticity that may act to complement genetic adaptations. © 2017 John Wiley & Sons Ltd.
40 CFR 90.708 - Cumulative Sum (CumSum) procedure.
Code of Federal Regulations, 2010 CFR
2010-07-01
... is 5.0×σ, and is a function of the standard deviation, σ. σ = the sample standard deviation and is... individual engine. FEL = Family Emission Limit (the standard if no FEL). F = 0.25×σ. (2) After each test pursuant...
ERIC Educational Resources Information Center
Corapci, Feyza
2008-01-01
This study examined the main and interactive effects of cumulative risk and child temperament on teacher ratings of social competence and observer ratings of peer play in a sample of Head Start preschoolers. A cumulative risk index (CRI) was computed by summing the total number of risk factors for each family. There was a difference in the…
The learning curves in living donor hemiliver graft procurement using small upper midline incision.
Ikegami, Toru; Harimoto, Norifumi; Shimokawa, Masahiro; Yoshizumi, Tomoharu; Uchiyama, Hideaki; Itoh, Shinji; Okabe, Norihisa; Sakata, Kazuhito; Nagatsu, Akihisa; Soejima, Yuji; Maehara, Yoshihiko
2016-12-01
The learning curve for performing living donor hemiliver procurement (LDHP) via small upper midline incision (UMI) has not been determined. Living donors (n=101) who underwent LDHP via UMI were included to investigate the learning curve using cumulative sum analysis. The cumulative sum analysis showed that nine cases for right lobe (case #23) and 19 cases for left lobe (case #32 in the whole series) are needed for stable and acceptable surgical outcomes in LDHP via UMI. The established phase (n=69, since case #33) had a significantly shorter operative time, a smaller incision size, and less blood loss than the previous learning phase (n=32, serial case number up to the last 19th left lobe case). Multivariate analysis showed that the learning phase, high body mass index ≥25 kg/m², and left lobe graft procurement are the factors associated with surgical events including operative blood loss ≥400 mL, operative time ≥300 minutes, or surgical complications of Clavien-Dindo grade ≥II. There is an obvious learning curve in performing LDHP via UMI, and 32 cases including both 19 cases for left lobe and nine cases for right lobe are needed for having stable and acceptable surgical outcomes. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Nikolaev, V P
2008-01-01
Theoretical analysis of the risk of decompression illness (DI) during extravehicular activity following the Russian and NASA decompression protocols (D-R and D-US, respectively) was performed. In contrast to the traditional approach of evaluating decompression stress by the factor of tissue supersaturation with nitrogen, our probabilistic theory of decompression safety provides a completely reasoned evaluation and comparison of the levels of hazard of these decompression protocols. According to this theory, the function of cumulative DI risk is equal to the sum of the functions of cumulative risk of lesion of all body tissues by gas bubbles and their supersaturation by solute gases. Based on modeling of the dynamics of these functions, growth of the cumulative DI risk in the course of D-R and D-US follows essentially similar trajectories within a time-frame of up to 330 minutes. However, further extension of D-US, but not D-R, raises the risk of DI drastically.
Syndromic surveillance system based on near real-time cattle mortality monitoring.
Torres, G; Ciaravino, V; Ascaso, S; Flores, V; Romero, L; Simón, F
2015-05-01
Early detection of an infectious disease incursion will minimize the impact of outbreaks in livestock. Syndromic surveillance based on the analysis of readily available data can enhance traditional surveillance systems and allow veterinary authorities to react in a timely manner. This study was based on monitoring the number of cattle carcasses sent for rendering in the veterinary unit of Talavera de la Reina (Spain). The aim was to develop a system to detect deviations from expected values which would signal unexpected health events. Historical weekly collected dead cattle (WCDC) time series stabilized by the Box-Cox transformation and adjusted by the minimum least squares method were used to build the univariate cycling regression model based on a Fourier transformation. Three different models, according to type of production system, were built to estimate the baseline expected number of WCDC. Two types of risk signals were generated: point risk signals when the observed value was greater than the upper 95% confidence interval of the expected baseline, and cumulative risk signals, generated by a modified cumulative sum algorithm, when the cumulative sums of reported deaths were above the cumulative sum of expected deaths. Data from 2011 were used to prospectively validate the model generating seven risk signals. None of them were correlated to infectious disease events but some coincided, in time, with very high climatic temperatures recorded in the region. The harvest effect was also observed during the first week of the study year. Establishing appropriate risk signal thresholds is a limiting factor of predictive models; it needs to be adjusted based on experience gained during the use of the models. To increase the sensitivity and specificity of the predictions epidemiological interpretation of non-specific risk signals should be complemented by other sources of information. 
The methodology developed in this study can enhance other existing early detection surveillance systems. Syndromic surveillance based on mortality monitoring can reduce the detection time for certain disease outbreaks associated with mild mortality only detected at regional level. The methodology can be adapted to monitor other parameters routinely collected at farm level which can be influenced by communicable diseases. Copyright © 2015 Elsevier B.V. All rights reserved.
Testing the Race Model Inequality in Redundant Stimuli with Variable Onset Asynchrony
ERIC Educational Resources Information Center
Gondan, Matthias
2009-01-01
In speeded response tasks with redundant signals, parallel processing of the signals is tested by the race model inequality. This inequality states that given a race of two signals, the cumulative distribution of response times for redundant stimuli never exceeds the sum of the cumulative distributions of response times for the single-modality…
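The inequality the abstract states, F_red(t) ≤ F_A(t) + F_B(t), can be checked directly on empirical CDFs. This is a bare-bones sketch on raw ECDFs; published tests of the race model inequality add corrections for sampling error and multiple testing, which are omitted here.

```python
def ecdf(sample, t):
    """Empirical CDF of a response-time sample at time t."""
    return sum(1 for x in sample if x <= t) / len(sample)

def race_model_violated(rt_redundant, rt_a, rt_b, times):
    """Return the time points at which the race model inequality
    F_red(t) <= F_A(t) + F_B(t) is violated.

    rt_redundant: RTs for redundant (bimodal) stimuli.
    rt_a, rt_b: RTs for the two single-modality conditions.
    """
    return [t for t in times
            if ecdf(rt_redundant, t) > ecdf(rt_a, t) + ecdf(rt_b, t)]
```

A violation at some `t` (redundant responses faster than the summed single-modality distributions allow) is evidence against a race of independent parallel channels, suggesting coactivation instead.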
Model-checking techniques based on cumulative residuals.
Lin, D Y; Wei, L J; Ying, Z
2002-03-01
Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
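The core construction can be sketched in minimal form: cumulate residuals over an ordered covariate, then approximate the null distribution of the process's supremum by perturbing each residual with an independent standard normal multiplier. This strips out the model-specific variance terms the paper derives, so it is only an illustration of the Gaussian-multiplier idea, not the paper's full procedure.

```python
import random

def cumulative_residual_process(covariate, residuals):
    """Observed cumulative-residual process W(x): the running sum of
    residuals taken in order of increasing covariate value."""
    order = sorted(range(len(covariate)), key=lambda i: covariate[i])
    w, out = 0.0, []
    for i in order:
        w += residuals[i]
        out.append(w)
    return out

def simulated_sup(covariate, residuals, n_sim=1000, seed=0):
    """Approximate the null distribution of sup|W| by multiplying each
    residual by an independent N(0, 1) variable and recomputing the
    process (the Gaussian-multiplier idea, in stripped-down form)."""
    rng = random.Random(seed)
    sups = []
    for _ in range(n_sim):
        perturbed = [r * rng.gauss(0.0, 1.0) for r in residuals]
        sups.append(max(abs(w) for w in
                        cumulative_residual_process(covariate, perturbed)))
    return sups
```

An approximate p-value is then the fraction of simulated suprema that exceed the observed `max(abs(w))`; a small value suggests the trend in the residual plot is misspecification rather than noise.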
Evaluation of the learning curve for external cephalic version using cumulative sum analysis.
Kim, So Yun; Han, Jung Yeol; Chang, Eun Hye; Kwak, Dong Wook; Ahn, Hyun Kyung; Ryu, Hyun Mi; Kim, Moon Young
2017-07-01
We evaluated the learning curve for external cephalic version (ECV) using learning curve-cumulative sum (LC-CUSUM) analysis. This was a retrospective study involving 290 consecutive cases between October 2013 and March 2017. We evaluated the learning curve for ECV in the nullipara and para ≥1 groups using LC-CUSUM analysis, on the assumption that 50% and 70% of ECV procedures succeeded, by describing a trend-line as a quadratic function with reliable R² values. The overall success rate for ECV was 64.8% (188/290), while the success rates for the nullipara and para ≥1 groups were 56.2% (100/178) and 78.6% (88/112), respectively. The 'H' value, indicating that the actual failure rate does not differ from the acceptable failure rate, was -3.27 and -1.635 when considering ECV success rates of 50% and 70%, respectively. Consequently, in order to obtain a consistent 50% success rate, we would require 57 nullipara cases, and in order to obtain a consistent 70% success rate, we would require 130 nullipara cases. In contrast, 8 to 10 para ≥1 cases would be required for expected success rates of 50% and 70% in the para ≥1 group. Even a relatively inexperienced physician can experience success with multipara cases, and after accumulating experience, they can manage nullipara cases. Further research involving several practitioners instead of a single practitioner is required for LC-CUSUM. This will lead to the gradual implementation of standard learning curve guidelines for ECV.
Kim, H-I; Park, M S; Song, K J; Woo, Y; Hyung, W J
2014-10-01
The learning curve of robotic gastrectomy has not yet been evaluated in comparison with the laparoscopic approach. We compared the learning curves of robotic gastrectomy and laparoscopic gastrectomy based on operation time and surgical success. We analyzed 172 robotic and 481 laparoscopic distal gastrectomies performed by a single surgeon from May 2003 to April 2009. The operation time was analyzed using a moving average and non-linear regression analysis. Surgical success was evaluated by a cumulative sum plot with a target failure rate of 10%. Surgical failure was defined as laparoscopic or open conversion, insufficient lymph node harvest for staging, resection margin involvement, postoperative morbidity, and mortality. Moving average and non-linear regression analyses indicated a stable state for operation time at 95 and 121 cases in robotic gastrectomy, and 270 and 262 cases in laparoscopic gastrectomy, respectively. The cumulative sum plot identified no cut-off point for surgical success in robotic gastrectomy and 80 cases in laparoscopic gastrectomy. Excluding the initial 148 laparoscopic gastrectomies that were performed before the first robotic gastrectomy, the two groups showed a similar number of cases to reach steady state in operation time, and showed no cut-off point in the analysis of surgical success. The experience of laparoscopic surgery could affect the learning process of robotic gastrectomy. An experienced laparoscopic surgeon requires fewer cases of robotic gastrectomy to reach steady state. Moreover, the surgical outcomes of robotic gastrectomy were satisfactory. Copyright © 2013 Elsevier Ltd. All rights reserved.
Cumulative sum analysis score and phacoemulsification competency learning curve.
Vedana, Gustavo; Cardoso, Filipe G; Marcon, Alexandre S; Araújo, Licio E K; Zanon, Matheus; Birriel, Daniella C; Watte, Guilherme; Jun, Albert S
2017-01-01
To use the cumulative sum analysis score (CUSUM) to construct objectively the learning curve of phacoemulsification competency. Three second-year residents and an experienced consultant were monitored for a series of 70 phacoemulsification cases each and had their series analysed by CUSUM regarding posterior capsule rupture (PCR) and best-corrected visual acuity. The acceptable rate for PCR was <5% (lower limit h) and the unacceptable rate was >10% (upper limit h). The acceptable rate for best-corrected visual acuity worse than 20/40 was <10% (lower limit h) and the unacceptable rate was >20% (upper limit h). The area between lower limit h and upper limit h is called the decision interval. There was no statistically significant difference in the mean age, sex or cataract grades between groups. The first trainee achieved PCR CUSUM competency at his 22nd case. His best-corrected visual acuity CUSUM was in the decision interval from his third case and stayed there until the end, never reaching competency. The second trainee achieved PCR CUSUM competency at his 39th case. He reached best-corrected visual acuity CUSUM competency at his 22nd case. The third trainee achieved PCR CUSUM competency at his 41st case. He reached best-corrected visual acuity CUSUM competency at his 14th case. The learning curve of competency in phacoemulsification is constructed by CUSUM, and on average it took 38 cases for each trainee to achieve competency.
Merchant-Borna, Kian; Asselin, Patrick; Narayan, Darren; Abar, Beau; Jones, Courtney M C; Bazarian, Jeffrey J
2016-12-01
One football season of sub-concussive head blows has been shown to be associated with subclinical white matter (WM) changes on diffusion tensor imaging (DTI). Prior research analyses of helmet-based impact metrics using mean and peak linear and rotational acceleration showed relatively weak correlations to these WM changes; however, these analyses failed to account for the emerging concept that neuronal vulnerability to successive hits is inversely related to the time between hits (TBH). To develop a novel method for quantifying the cumulative effects of sub-concussive head blows during a single season of collegiate football by weighting helmet-based impact measures for time between helmet impacts. We further aim to compare correlations to changes in DTI after one season of collegiate football using weighted cumulative helmet-based impact measures to correlations using non-weighted cumulative helmet-based impact measures and non-cumulative measures. We performed a secondary analysis of DTI and helmet impact data collected on ten Division III collegiate football players during the 2011 season. All subjects underwent diffusion MR imaging before the start of the football season and within 1 week of the end of the football season. Helmet impacts were recorded at each practice and game using helmet-mounted accelerometers, which computed five helmet-based impact measures for each hit: linear acceleration (LA), rotational acceleration (RA), Gadd Severity Index (GSI), Head Injury Criterion (HIC15), and Head Impact Technology severity profile (HITsp). All helmet-based impact measures were analyzed using five methods of summary: peak and mean (non-cumulative measures), season sum-totals (cumulative unweighted measures), and season sum-totals weighted for time between hits (TBH), the interval of time from hit to post-season DTI assessment (TUA), and both TBH and TUA combined.
Summarized helmet-based impact measures were correlated to statistically significant changes in fractional anisotropy (FA) using bivariate and multivariable correlation analyses. The resulting R² values were averaged in each of the five summary method groups and compared using one-way ANOVA followed by Tukey post hoc tests for multiple comparisons. Total head hits for the season ranged from 431 to 1850. None of the athletes suffered a clinically evident concussion during the study period. The mean R² value for the correlations using cumulative helmet-based impact measures weighted for both TUA and TBH combined (0.51 ± 0.03) was significantly greater than the mean R² value for correlations using non-cumulative HIMs (vs. 0.19 ± 0.04, p < 0.0001), unweighted cumulative helmet-based impact measures (vs. 0.27 ± 0.03, p < 0.0001), and cumulative helmet-based impact measures weighted for TBH alone (vs. 0.34 ± 0.02, p < 0.001). R² values for weighted cumulative helmet-based impact measures ranged from 0.32 to 0.77, with 60% of correlations being statistically significant. Cumulative GSI weighted for TBH and TUA explained 77% of the variance in the percent of white matter voxels with statistically significant (PWMVSS) increase in FA from pre-season to post-season, while both cumulative GSI and cumulative HIC15 weighted for TUA accounted for 75% of the variance in PWMVSS decrease in FA. A novel method for weighting cumulative helmet-based impact measures summed over the course of a football season resulted in a marked improvement in the correlation to brain WM changes observed after a single football season of sub-concussive head blows. Our results lend support to the emerging concept that sub-concussive head blows can result in sub-clinical brain injury, and this may be influenced by the time between hits.
If confirmed in an independent data set, our novel method for quantifying the cumulative effects of sub-concussive head blows could be used to develop threshold-based countermeasures to prevent the accumulation of WM changes with multiple seasons of play.
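The abstract above does not reproduce the study's actual weighting functions. Purely as an illustration of the idea (a cumulative sum of per-hit impact measures, each scaled by factors for time between hits and time until assessment), here is a sketch in which the inverse-time weight forms are hypothetical, not the study's method:

```python
def weighted_cumulative(measures, tbh, tua):
    """Cumulative sum of per-hit impact measures (e.g., GSI), each weighted
    by factors for time between hits (TBH) and time until post-season
    assessment (TUA). The 1/(1+t) weight forms are illustrative assumptions
    only, not the weighting used in the study."""
    total = 0.0
    for m, t_bh, t_ua in zip(measures, tbh, tua):
        w_bh = 1.0 / (1.0 + t_bh)  # shorter gap since previous hit -> larger weight
        w_ua = 1.0 / (1.0 + t_ua)  # hit closer to the DTI assessment -> larger weight
        total += m * w_bh * w_ua
    return total

# Illustrative: three hits with hypothetical GSI values, gaps, and lead times
gsi = [120.0, 95.0, 150.0]
tbh_hours = [48.0, 0.5, 24.0]  # time since previous hit (hours)
tua_days = [90.0, 60.0, 1.0]   # time until post-season DTI (days)
print(weighted_cumulative(gsi, tbh_hours, tua_days))
```

The closely spaced late-season hit dominates the sum under this scheme, which is the qualitative behavior the TBH/TUA weighting is meant to capture.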
40 CFR 91.508 - Cumulative Sum (CumSum) procedure.
Code of Federal Regulations, 2010 CFR
2010-07-01
... family may be determined to be in noncompliance for purposes of § 91.510. H = The Action Limit. It is 5.0 × σ, and is a function of the standard deviation, σ. σ = the sample standard deviation and is... Equation must be final deteriorated test results as defined in § 91.509(c). Ci = max[0 or (Ci-1 + Xi − (FEL...
A monitoring tool for performance improvement in plastic surgery at the individual level.
Maruthappu, Mahiben; Duclos, Antoine; Orgill, Dennis; Carty, Matthew J
2013-05-01
The assessment of performance in surgery is expanding significantly. Application of relevant frameworks to plastic surgery, however, has been limited. In this article, the authors present two robust graphic tools commonly used in other industries that may serve to monitor individual surgeon operative time while factoring in patient- and surgeon-specific elements. The authors reviewed performance data from all bilateral reduction mammaplasties performed at their institution by eight surgeons between 1995 and 2010. Operative time was used as a proxy for performance. Cumulative sum charts and exponentially weighted moving average charts were generated using a train-test analytic approach, and used to monitor surgical performance. Charts mapped crude, patient case-mix-adjusted, and case-mix and surgical-experience-adjusted performance. Operative time was found to decline from 182 minutes to 118 minutes with surgical experience (p < 0.001). Cumulative sum and exponentially weighted moving average charts were generated using 1995 to 2007 data (1053 procedures) and tested on 2008 to 2010 data (246 procedures). The sensitivity and accuracy of these charts were significantly improved by adjustment for case mix and surgeon experience. The consideration of patient- and surgeon-specific factors is essential for correct interpretation of performance in plastic surgery at the individual surgeon level. Cumulative sum and exponentially weighted moving average charts represent accurate methods of monitoring operative time to control and potentially improve surgeon performance over the course of a career.
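The abstract does not give the charts' exact parameters; a minimal sketch of the two monitoring statistics it names (a one-sided CUSUM of operative-time deviations and an exponentially weighted moving average), with illustrative target, slack, and smoothing values:

```python
def cusum(xs, target, slack=0.0):
    """One-sided upper CUSUM: C_i = max(0, C_{i-1} + x_i - target - slack).
    The chart accumulates deviations above target; sustained elevation
    (e.g., long operative times) drives the statistic upward."""
    c, out = 0.0, []
    for x in xs:
        c = max(0.0, c + x - target - slack)
        out.append(c)
    return out

def ewma(xs, lam=0.2, start=None):
    """Exponentially weighted moving average: z_i = lam*x_i + (1-lam)*z_{i-1}.
    Smaller lam smooths more, weighting the longer history."""
    z = xs[0] if start is None else start
    out = []
    for x in xs:
        z = lam * x + (1 - lam) * z
        out.append(z)
    return out

# Illustrative operative times (minutes) monitored against a 120-minute target
times = [130, 125, 118, 140, 150, 115]
print(cusum(times, target=120, slack=5))
print(ewma(times, lam=0.2, start=120))
```

In practice the target would be the case-mix- and experience-adjusted expected time for each procedure, which is what the adjusted charts in the study provide.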
Learning curve for robotic-assisted surgery for rectal cancer: use of the cumulative sum method.
Yamaguchi, Tomohiro; Kinugasa, Yusuke; Shiomi, Akio; Sato, Sumito; Yamakawa, Yushi; Kagawa, Hiroyasu; Tomioka, Hiroyuki; Mori, Keita
2015-07-01
Few data are available to assess the learning curve for robotic-assisted surgery for rectal cancer. The aim of the present study was to evaluate the learning curve for robotic-assisted surgery for rectal cancer by a surgeon at a single institute. From December 2011 to August 2013, a total of 80 consecutive patients who underwent robotic-assisted surgery for rectal cancer performed by the same surgeon were included in this study. The learning curve was analyzed using the cumulative sum method. This method was used for all 80 cases, taking into account operative time. Operative procedures included anterior resections in 6 patients, low anterior resections in 46 patients, intersphincteric resections in 22 patients, and abdominoperineal resections in 6 patients. Lateral lymph node dissection was performed in 28 patients. Median operative time was 280 min (range 135-683 min), and median blood loss was 17 mL (range 0-690 mL). No postoperative complications of Clavien-Dindo classification Grade III or IV were encountered. We arranged operative times and calculated cumulative sum values, allowing differentiation of three phases: phase I, Cases 1-25; phase II, Cases 26-50; and phase III, Cases 51-80. Our data suggested three phases of the learning curve in robotic-assisted surgery for rectal cancer. The first 25 cases formed the learning phase.
Evaluation of the learning curve for external cephalic version using cumulative sum analysis
Kim, So Yun; Chang, Eun Hye; Kwak, Dong Wook; Ahn, Hyun Kyung; Ryu, Hyun Mi; Kim, Moon Young
2017-01-01
Objective: We evaluated the learning curve for external cephalic version (ECV) using learning curve-cumulative sum (LC-CUSUM) analysis. Methods: This was a retrospective study involving 290 consecutive cases between October 2013 and March 2017. We evaluated the learning curve for ECV in the nullipara and over-para-1 groups using LC-CUSUM analysis, assuming ECV success rates of 50% and 70%, by describing a quadratic trend-line with reliable R² values. Results: The overall success rate for ECV was 64.8% (188/290), while the success rates for the nullipara and over-para-1 groups were 56.2% (100/178) and 78.6% (88/112), respectively. The 'H' value, the decision limit at which the actual failure rate is judged not to differ from the acceptable failure rate, was −3.27 and −1.635 for assumed ECV success rates of 50% and 70%, respectively. Consequently, obtaining a consistent 50% success rate would require 57 nullipara cases, and obtaining a consistent 70% success rate would require 130 nullipara cases. In contrast, 8 to 10 over-para-1 cases would be required for expected success rates of 50% and 70% in the over-para-1 group. Conclusion: Even a relatively inexperienced physician can succeed with multipara cases and, after accumulating experience, can progress to nullipara cases. Further research involving several practitioners rather than a single practitioner is required for LC-CUSUM; this would support the gradual development of standard learning curve guidelines for ECV. PMID:28791265
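This study used a quadratic trend-line variant, and its negative 'H' values follow a different sign convention; for orientation, here is a sketch of the standard LC-CUSUM score (the log-likelihood-ratio form), with illustrative p0, p1, and limit h:

```python
import math

def lc_cusum(outcomes, p0=0.5, p1=0.3, h=2.0):
    """Standard LC-CUSUM for learning curves (illustrative parameters).
    p0: unacceptable failure rate (not yet competent), p1: acceptable
    failure rate (competent), p1 < p0. outcomes: 1 = failure, 0 = success.
    Returns the running statistic and the first index at which it crosses
    h (evidence of competence), or None if never crossed."""
    s, path, signal = 0.0, [], None
    w_fail = math.log(p1 / p0)              # failure lowers the score
    w_succ = math.log((1 - p1) / (1 - p0))  # success raises it
    for i, fail in enumerate(outcomes):
        s = max(0.0, s + (w_fail if fail else w_succ))
        path.append(s)
        if signal is None and s >= h:
            signal = i
    return path, signal

# Illustrative: a run of successes crosses the competence limit
path, first_cross = lc_cusum([0, 0, 1, 0, 0, 0, 0, 0, 0])
print(first_cross)
```

With these illustrative parameters, competence is signalled only after enough successes accumulate to outweigh the failures, which is the behavior that lets LC-CUSUM date the end of the learning phase.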
Baptista Macaroff, W M; Castroman Espasandín, P
2007-01-01
The aim of this study was to assess the cumulative sum (cusum) method for evaluating the performance of our hospital's acute postoperative pain service. The period of analysis was 7 months. Analgesic failure was defined as a score of 3 points or more on a simple numerical scale. Acceptable failure (p0) was set at 20% of patients upon admission to the postanesthetic recovery unit and at 7% 24 hours after surgery. Unacceptable failure was set at double the p0 rate at each time (40% and 14%, respectively). The unit's patient records were used to generate a cusum graph for each evaluation. Nine hundred four records were included. The rate of failure was 31.6% upon admission to the unit and 12.1% at the 24-hour postoperative assessment. The curve rose rapidly to the value set for p0 at both evaluation times (n = 14 and n = 17, respectively), later leveled off, and began to fall after 721 and 521 cases, respectively. Our study shows the efficacy of the cusum method for monitoring a proposed quality standard. The graph also showed periods of suboptimal performance that would not have been evident from analyzing the data en bloc. Thus the cusum method would facilitate rapid detection of periods in which quality declines.
A longitudinal study of low back pain and daily vibration exposure in professional drivers.
Bovenzi, Massimo
2010-01-01
The aim of this study was to investigate the relation between low back pain (LBP) outcomes and measures of daily exposure to whole-body vibration (WBV) in professional drivers. In a study population of 202 male drivers, who were not affected with LBP at the initial survey, LBP in terms of duration, intensity, and disability was investigated over a two-year follow-up period. Vibration measurements were made on representative samples of machines and vehicles. The following measures of daily WBV exposure were obtained: (i) 8-h energy-equivalent frequency-weighted acceleration (highest axis), A(8)max in m/s² r.m.s.; (ii) A(8)sum (root-sum-of-squares) in m/s² r.m.s.; (iii) Vibration Dose Value (highest axis), VDVmax in m/s^1.75; (iv) VDVsum (root-sum-of-quads) in m/s^1.75. The cumulative incidence of LBP over the follow-up period was 38.6%. The incidence of high pain intensity and severe disability was 16.8 and 14.4%, respectively. After adjustment for several confounders, VDVmax or VDVsum gave better predictions of LBP outcomes over time than A(8)max or A(8)sum, respectively. Poor predictions were obtained with A(8)max, which is the currently preferred measure of daily WBV exposure in European countries. In multivariate data analysis, physical work load was a significant predictor of LBP outcomes over the follow-up period. Perceived psychosocial work environment was not associated with LBP.
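These exposure measures have standard definitions (per ISO 2631-1); a simplified sketch, omitting the per-axis multiplying factors of the standard:

```python
def a8(a_w, hours):
    """8-h energy-equivalent acceleration (m/s^2 r.m.s.):
    A(8) = a_w * sqrt(T / 8), for exposure duration T in hours."""
    return a_w * (hours / 8.0) ** 0.5

def vdv(samples, dt):
    """Vibration Dose Value (m/s^1.75): fourth root of the time integral
    of the fourth power of frequency-weighted acceleration. The fourth
    power makes VDV more sensitive to shocks and peaks than A(8)."""
    return (sum(a ** 4 for a in samples) * dt) ** 0.25

def root_sum(values, power):
    """Combine per-axis values: power=2 gives root-sum-of-squares
    (A(8)sum), power=4 gives root-sum-of-quads (VDVsum)."""
    return sum(v ** power for v in values) ** (1.0 / power)

# Illustrative per-axis values for x, y, z
print(root_sum([0.4, 0.3, 0.6], 2))   # A(8)sum, m/s^2 r.m.s.
print(root_sum([8.0, 6.0, 9.5], 4))   # VDVsum, m/s^1.75
```

The fourth-power weighting in VDV is one plausible reason the study found it a better predictor than A(8) for shock-rich driving exposures.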
The cumulative effect of consecutive winters' snow depth on moose and deer populations: a defence
McRoberts, R.E.; Mech, L.D.; Peterson, R.O.
1995-01-01
1. L. D. Mech et al. presented evidence that moose Alces alces and deer Odocoileus virginianus population parameters are influenced by a cumulative effect of three winters' snow depth. They postulated that snow depth affects adult ungulates cumulatively from winter to winter and results in measurable offspring effects after the third winter. 2. F. Messier challenged those findings and claimed that the population parameters studied were instead affected by ungulate density and wolf indexes. 3. This paper refutes Messier's claims by demonstrating that his results were an artifact of two methodological errors. The first was that, in his main analyses, Messier used only the first previous winter's snow depth rather than the sum of the previous three winters' snow depth, which was the primary point of Mech et al. Secondly, Messier smoothed the ungulate population data, which removed 22-51% of the variability from the raw data. 4. When we repeated Messier's analyses on the raw data and using the sum of the previous three winters' snow depth, his findings did not hold up.
Wani, Sachin; Hall, Matthew; Wang, Andrew Y; DiMaio, Christopher J; Muthusamy, V Raman; Keswani, Rajesh N; Brauer, Brian C; Easler, Jeffrey J; Yen, Roy D; El Hajj, Ihab; Fukami, Norio; Ghassemi, Kourosh F; Gonzalez, Susana; Hosford, Lindsay; Hollander, Thomas G; Wilson, Robert; Kushnir, Vladimir M; Ahmad, Jawad; Murad, Faris; Prabhu, Anoop; Watson, Rabindra R; Strand, Daniel S; Amateau, Stuart K; Attwell, Augustin; Shah, Raj J; Early, Dayna; Edmundowicz, Steven A; Mullady, Daniel
2016-04-01
There are limited data on learning curves and competence in ERCP. By using a standardized data collection tool, we aimed to prospectively define learning curves and measure competence among advanced endoscopy trainees (AETs) by using cumulative sum (CUSUM) analysis. AETs were evaluated by attending endoscopists starting with the 26th hands-on ERCP examination and then every ERCP examination during the 12-month training period. A standardized ERCP competency assessment tool (using a 4-point scoring system) was used to grade the examination. CUSUM analysis was applied to produce learning curves for individual technical and cognitive components of ERCP performance (success defined as a score of 1, with acceptable and unacceptable failure rates of 10% and 20%, respectively). Sensitivity analyses varying these failure rates and using a less-stringent definition of success were performed. Five AETs were included with a total of 1049 graded ERCPs (mean ± SD, 209.8 ± 91.6/AET). The majority of cases were performed for a biliary indication (80%). The overall and native papilla allowed cannulation times were 3.1 ± 3.6 and 5.7 ± 4 minutes, respectively. Overall learning curves demonstrated substantial variability for individual technical and cognitive endpoints. Although nearly all AETs achieved competence in overall cannulation, none achieved competence for cannulation in cases with a native papilla. Sensitivity analyses increased the proportion of AETs who achieved competence. This study demonstrates that there is substantial variability in ERCP learning curves among AETs. A specific case volume does not ensure competence, especially for native papilla cannulation. Copyright © 2016 American Society for Gastrointestinal Endoscopy. Published by Elsevier Inc. All rights reserved.
Variation in Aptitude of Trainees in Endoscopic Ultrasonography, Based on Cumulative Sum Analysis
Wani, Sachin; Hall, Matthew; Keswani, Rajesh N.; Aslanian, Harry R.; Casey, Brenna; Burbridge, Rebecca; Chak, Amitabh; Chen, Ann M.; Cote, Gregory; Edmundowicz, Steven A.; Faulx, Ashley L.; Hollander, Thomas G.; Lee, Linda S.; Mullady, Daniel; Murad, Faris; Muthusamy, Raman; Pfau, Patrick R.; Scheiman, James M.; Tokar, Jeffrey; Wagh, Mihir S.; Watson, Rabindra; Early, Dayna
2017-01-01
BACKGROUND & AIMS Studies have reported substantial variation in the competency of advanced endoscopy trainees, indicating a need for more supervised training in endoscopic ultrasound (EUS). We used a standardized, validated, data collection tool to evaluate learning curves and measure competency in EUS among trainees at multiple centers. METHODS In a prospective study performed at 15 centers, 17 trainees with no prior EUS experience were evaluated by experienced attending endosonographers at the 25th and then every 10th upper EUS examination, over a 12-month training period. A standardized data collection form was used (using a 5-point scoring system) to grade the EUS examination. Cumulative sum analysis was applied to produce a learning curve for each trainee; it tracked the overall performance based on median scores at different stations and also at each station. Competency was defined by a median score of 1, with acceptable and unacceptable failure rates of 10% and 20%, respectively. RESULTS Twelve trainees were included in the final analysis. Each of the trainees performed 265 to 540 EUS examinations (total, 4257 examinations). There was a large amount of variation in their learning curves: 2 trainees crossed the threshold for acceptable performance (at cases 225 and 245), 2 trainees had a trend toward acceptable performance (after 289 and 355 cases) but required continued observation, and 8 trainees needed additional training and observation. Similar results were observed at individual stations. CONCLUSIONS A specific case load does not ensure competency in EUS; 225 cases should be considered the minimum caseload for training because we found that no trainee achieved competency before this point. Ongoing training should be provided for trainees until competency is confirmed using objective measures. PMID:25460557
Variation in Aptitude of Trainees in Endoscopic Ultrasonography, Based on Cumulative Sum Analysis.
Wani, Sachin; Hall, Matthew; Keswani, Rajesh N; Aslanian, Harry R; Casey, Brenna; Burbridge, Rebecca; Chak, Amitabh; Chen, Ann M; Cote, Gregory; Edmundowicz, Steven A; Faulx, Ashley L; Hollander, Thomas G; Lee, Linda S; Mullady, Daniel; Murad, Faris; Muthusamy, V Raman; Pfau, Patrick R; Scheiman, James M; Tokar, Jeffrey; Wagh, Mihir S; Watson, Rabindra; Early, Dayna
2015-07-01
Studies have reported substantial variation in the competency of advanced endoscopy trainees, indicating a need for more supervised training in endoscopic ultrasound (EUS). We used a standardized, validated, data collection tool to evaluate learning curves and measure competency in EUS among trainees at multiple centers. In a prospective study performed at 15 centers, 17 trainees with no prior EUS experience were evaluated by experienced attending endosonographers at the 25th and then every 10th upper EUS examination, over a 12-month training period. A standardized data collection form was used (using a 5-point scoring system) to grade the EUS examination. Cumulative sum analysis was applied to produce a learning curve for each trainee; it tracked the overall performance based on median scores at different stations and also at each station. Competency was defined by a median score of 1, with acceptable and unacceptable failure rates of 10% and 20%, respectively. Twelve trainees were included in the final analysis. Each of the trainees performed 265 to 540 EUS examinations (total, 4257 examinations). There was a large amount of variation in their learning curves: 2 trainees crossed the threshold for acceptable performance (at cases 225 and 245), 2 trainees had a trend toward acceptable performance (after 289 and 355 cases) but required continued observation, and 8 trainees needed additional training and observation. Similar results were observed at individual stations. A specific case load does not ensure competency in EUS; 225 cases should be considered the minimum caseload for training because we found that no trainee achieved competency before this point. Ongoing training should be provided for trainees until competency is confirmed using objective measures. Copyright © 2015 AGA Institute. Published by Elsevier Inc. All rights reserved.
CUMPOIS- CUMULATIVE POISSON DISTRIBUTION PROGRAM
NASA Technical Reports Server (NTRS)
Bowerman, P. N.
1994-01-01
The Cumulative Poisson distribution program, CUMPOIS, is one of two programs which make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), can be used independently of one another. CUMPOIS determines the approximate cumulative binomial distribution, evaluates the cumulative distribution function (cdf) for gamma distributions with integer shape parameters, and evaluates the cdf for chi-square distributions with even degrees of freedom. It can be used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. CUMPOIS calculates the probability that n or fewer events (i.e., cumulative) will occur within any unit when the expected number of events is given as lambda. Normally, this probability is calculated by a direct summation, from i=0 to n, of terms involving the exponential function, lambda, and inverse factorials. This approach, however, eventually fails due to underflow for sufficiently large values of n. Additionally, when the exponential term is moved outside of the summation for simplification purposes, there is a risk that the terms remaining within the summation, and the summation itself, will overflow for certain values of i and lambda. CUMPOIS eliminates these possibilities by multiplying an additional exponential factor into the summation terms and the partial sum whenever overflow/underflow situations threaten. The reciprocal of this term is then multiplied into the completed sum giving the cumulative probability. The CUMPOIS program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly on most C compilers. The program format is interactive, accepting lambda and n as inputs. It has been implemented under DOS 3.2 and has a memory requirement of 26K.
CUMPOIS was developed in 1988.
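The C source itself is not reproduced here; the following is a log-space sketch of the same stabilization idea (factoring out an exponential so that neither the individual terms nor the partial sum overflow or underflow), assuming lambda > 0:

```python
import math

def cumpois(n, lam):
    """P(X <= n) for X ~ Poisson(lam), computed in log space.
    Each term e^{-lam} * lam^i / i! is represented by its logarithm
    (-lam + i*log(lam) - log(i!), using lgamma for log(i!)); the largest
    term is factored out before exponentiating, mirroring CUMPOIS's
    rescaling trick. Assumes lam > 0."""
    log_terms = [-lam + i * math.log(lam) - math.lgamma(i + 1)
                 for i in range(n + 1)]
    m = max(log_terms)  # factor out the dominant term
    return math.exp(m) * sum(math.exp(t - m) for t in log_terms)

# Direct summation of e^{-lam} * lam^i / i! would underflow for lam this
# large; the log-space form remains stable.
print(cumpois(1000, 1000.0))
```

The rescaled inner sum lies between 1 and n+1, so it can neither overflow nor vanish, which is the same guarantee the program's multiply-and-reciprocate factor provides.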
Sequential change detection and monitoring of temporal trends in random-effects meta-analysis.
Dogo, Samson Henry; Clark, Allan; Kulinskaya, Elena
2017-06-01
Temporal changes in magnitude of effect sizes reported in many areas of research are a threat to the credibility of the results and conclusions of meta-analysis. Numerous sequential methods for meta-analysis have been proposed to detect changes and monitor trends in effect sizes so that meta-analysis can be updated when necessary and interpreted based on the time it was conducted. The difficulties of sequential meta-analysis under the random-effects model are caused by dependencies in increments introduced by the estimation of the heterogeneity parameter τ². In this paper, we propose the use of a retrospective cumulative sum (CUSUM)-type test with bootstrap critical values. This method allows retrospective analysis of the past trajectory of cumulative effects in random-effects meta-analysis and its visualization on a chart similar to a CUSUM chart. Simulation results show that the new method demonstrates good control of Type I error regardless of the number or size of the studies and the amount of heterogeneity. Application of the new method is illustrated on two examples of medical meta-analyses. © 2016 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd.
Cho, Sung Yong; Choo, Min Soo; Jung, Jae Hyun; Jeong, Chang Wook; Oh, Sohee; Lee, Seung Bae; Son, Hwancheol; Jeong, Hyeon
2014-01-01
This study investigated the learning curve of single-session retrograde intrarenal surgery (RIRS) in patients with mid-sized stones. Competence and trainee proficiency for RIRS were assessed using cumulative sum (CUSUM) analysis. The study design and the use of patients' information stored in the hospital database were approved by the Institutional Review Board of our institution. A retrospective review was performed for 100 patients who underwent single-session RIRS. Patients were included if the main stone had a maximal diameter between 10 and 30 mm. The presence of a residual stone was checked on postoperative day 1 and at the one-month follow-up visit. Fragmentation efficacy was calculated as removed stone volume (mm³) divided by operative time (min). CUSUM analysis was used for monitoring change in fragmentation efficacy, and we tested whether or not acceptable surgical outcomes were achieved. The mean age was 54.7±14.8 years. Serum creatinine level did not change significantly. Estimated GFR and hemoglobin were within normal limits postoperatively. The CUSUM curve tended to be flat until the 25th case and showed a rising pattern but declined again until the 56th case. After that point, the fragmentation efficacy reached a plateau. The acceptable level of fragmentation efficacy was 25 ml/min. Multivariate logistic regression analyses showed that the stone-free rate was significantly lower for cases with multiple stones than for those with a single stone (OR = 0.147, CI 0.032-0.674, P value = 0.005) and for cases with a higher number of stone sites (OR = 0.676, CI 0.517-0.882, P value = 0.004). The statistical analysis of RIRS learning experience revealed that 56 cases were required to reach a plateau in the learning curve. The number of stones and the number of sites were significant predictors of stone-free status.
Non-parametric characterization of long-term rainfall time series
NASA Astrophysics Data System (ADS)
Tiwari, Harinarayan; Pandey, Brij Kishor
2018-03-01
The statistical study of rainfall time series is one of the approaches for efficient hydrological system design. Identifying and characterizing long-term rainfall time series could aid in improving hydrological forecasting. In the present study, a suite of statistical tests was applied to the long-term (1851-2006) rainfall time series of seven meteorological regions of India. Linear trend analysis was carried out using the Mann-Kendall test for the observed rainfall series. The trend observed using the above-mentioned approach was ascertained using the innovative trend analysis method. Innovative trend analysis has been found to be a strong tool to detect the general trend of rainfall time series. The sequential Mann-Kendall test has also been carried out to examine nonlinear trends in the series. The partial sum of cumulative deviation test is also found to be suitable for detecting nonlinear trends. Innovative trend analysis, the sequential Mann-Kendall test, and the partial cumulative deviation test have the potential to detect both general and nonlinear trends in rainfall time series. Annual rainfall analysis suggests that the maximum rise in mean rainfall is 11.53% for West Peninsular India, whereas the maximum fall in mean rainfall is 7.8% for the North Mountainous Indian region. The innovative trend analysis method is also capable of finding the number of change points present in the time series. Additionally, we have performed the von Neumann ratio test and the cumulative deviation test to estimate the departure from homogeneity. Singular spectrum analysis has been applied in this study to evaluate the order of departure from homogeneity in the rainfall time series. The monsoon season (JS) of the North Mountainous India and West Peninsular India zones has a higher departure from homogeneity, and singular spectrum analysis shows results in coherence with the same.
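The Mann-Kendall statistic used for the linear trend analysis can be sketched as follows (simplest form, without the tie correction; the study's implementation details are not given):

```python
import math

def mann_kendall(xs):
    """Mann-Kendall trend test (no tie correction): returns (S, Z).
    S sums sign(x_j - x_i) over all pairs i < j; S > 0 suggests an
    upward trend, and |Z| > 1.96 is significant at roughly the 5% level."""
    n = len(xs)
    s = sum(
        (xs[j] > xs[i]) - (xs[j] < xs[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18.0  # variance of S under no trend
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)        # continuity correction
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# Illustrative: a monotonically increasing series yields the maximum S
print(mann_kendall([1, 2, 3, 4, 5]))
```

Because the test depends only on pairwise orderings, it is non-parametric in the sense the title describes: robust to the skewed, non-Gaussian distributions typical of rainfall totals.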
Wade, Mark; Madigan, Sheri; Plamondon, Andre; Rodrigues, Michelle; Browne, Dillon; Jenkins, Jennifer M
2018-06-01
Previous studies have demonstrated that various psychosocial risks are associated with poor cognitive functioning in children, and these risks frequently cluster together. In the current longitudinal study, we tested a model in which it was hypothesized that cumulative psychosocial adversity of mothers would have deleterious effects on children's cognitive functioning by compromising socialization processes within families (i.e., parental competence). A prospective community birth cohort of 501 families was recruited when children were newborns. At this time, mothers reported on their current psychosocial circumstances (socioeconomic status, teen parenthood, depression, etc.), which were summed into a cumulative risk score. Families were followed up at 18 months and 3 years, at which point maternal reflective capacity and cognitive sensitivity were measured, respectively. Child cognition (executive functioning, theory of mind, and language ability) was assessed at age 4.5 using age-appropriate observational and standardized tasks. Analyses controlled for child age, gender, number of children in the home, number of years married, and mothers' history of adversity. The results revealed significant declines in child cognition as well as maternal reflective capacity and cognitive sensitivity as the number of psychosocial risks increased. Moreover, longitudinal path analysis showed significant indirect effects from cumulative risk to all three cognitive outcomes via reflective capacity and cognitive sensitivity. Findings suggest that cumulative risk of mothers may partially account for child cognitive difficulties in various domains by disrupting key parental socialization competencies. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Effects of bisphosphonate treatment on DNA methylation in osteonecrosis of the jaw.
Polidoro, Silvia; Broccoletti, Roberto; Campanella, Gianluca; Di Gaetano, Cornelia; Menegatti, Elisa; Scoletta, Matteo; Lerda, Ennio; Matullo, Giuseppe; Vineis, Paolo; Berardi, Daniela; Scully, Crispian; Arduino, Paolo G
2013-10-09
Bisphosphonates (BPs) are used in the treatment of hypercalcaemia, mainly in cancer and osteoporosis. Some patients experience adverse events, such as BP-related osteonecrosis of the jaw (BRONJ). DNA methylation plays a key role in gene regulation in many tissues, but its involvement in bone homeostasis is not well characterized, and no information is available regarding altered methylation in BRONJ. Using the Illumina Infinium HumanMethylation27 BeadChip assay, we performed an epigenome-wide association study in peripheral blood samples from 68 patients treated with nitrogenous BPs, including 35 with BRONJ. Analysis of the estimated cumulative BP exposure distribution indicated that the exposure of the case group to BPs was slightly higher than that of the control group; more severely affected cases (i.e., with BRONJ in both mandible and maxilla) were significantly more exposed to BPs than were those with BRONJ only in the mandible or maxilla (one-sided Wilcoxon rank sum test, p=0.002). Logistic regression analysis confirmed the positive association between cumulative bisphosphonate exposure and risk of BRONJ (OR 1.015 per mg of cumulative exposure, 95% CI 1.004-1.032, p=0.036). Although no statistically significant differences were observed between case and control groups, methylation levels of probes mapping to three genes, ERCC8, LEPREL1, and SDC2, were strongly associated with cumulative BP exposure levels (p<1.31E-007). Enrichment analysis, combining differentially methylated genes with genes involved in the mevalonate pathway, showed that BP treatment can affect the methylation pattern of genes involved in extracellular matrix organization and inflammatory responses, leading to more frequent adverse effects such as BRONJ. Differences in DNA methylation induced by BP treatment could be involved in the pathogenesis of the bone lesion. Copyright © 2013 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandborn, R.H.
1976-01-01
M0200, a computer simulation model, was used to investigate the safeguarding of plutonium dioxide. The computer program operating the model was constructed so that replicate runs could provide data for statistical analysis of the distributions of the randomized variables. The plant model was divided into material balance areas associated with definable unit processes. Indicators of plant operations studied were modified end-of-shift material balances, end-of-blend errors formed by closing material balances between blends, and cumulative sums of the differences between actual and expected performances. (auth)
Hardell, Elin; Carlberg, Michael; Nordström, Marie; van Bavel, Bert
2010-09-15
Persistent organic pollutants (POPs) are lipophilic chemicals that bioaccumulate. Most of them were restricted or banned in the 1970s and 1980s to protect human health and the environment. The main source for humans is dietary intake of dairy products, meat, and fish. Little data exist on changes in the concentrations of POPs in the Swedish population over time. We aimed to study whether the concentrations of polychlorinated biphenyls (PCBs), DDE, hexachlorobenzene (HCB), and chlordanes changed in the Swedish population during 1993-2007, and which factors may influence the concentrations. During 1993-2007, samples from 537 controls in different human cancer studies were collected and analysed. Background information such as body mass index, breast-feeding, and parity was assessed by questionnaires. The Wilcoxon rank-sum test was used to analyse the explanatory factors specimen (blood or adipose tissue), gender, BMI, total breast-feeding, and parity in relation to POPs. Time trends for POPs were analysed using linear regression analysis, adjusted for specimen, gender, BMI, and age. The concentration decreased for all POPs during 1993-2007. The annual change was statistically significant for the sum of PCBs (-7.2%), HCB (-8.8%), DDE (-13.5%), and the sum of chlordanes (-10.3%). BMI and age were determinants of the concentrations. Cumulative breast-feeding >8 months gave statistically significantly lower concentrations for the sum of PCBs, DDE, and the sum of chlordanes. Parity with >2 children yielded a statistically significantly lower sum of PCBs. All the studied POPs decreased during the time period, probably due to restrictions on their use. Copyright 2010 Elsevier B.V. All rights reserved.
Grochowiecki, T; Jakimowicz, T; Grabowska-Derlatka, L; Szmidt, J
2014-10-01
The high rate of complications after pancreas transplantation not only has an impact on recipient quality of life and survival but also has significant financial implications. Thus, monitoring transplant center performance is crucial to identifying changes in clinical practice that result in quality deterioration. Our aims were to retrospectively evaluate the quality of a single, small pancreatic transplant program and to establish prospective monitoring of the center using the risk-adjusted cumulative sum (CUSUM). From 1988 to 2014, 119 simultaneous pancreas and kidney transplantations (SPKTx) were performed. The program was divided into 3 eras, based on surgical technique and immunosuppression. Analyses of the 15 fatal outcomes due to complications from the pancreatic graft were performed. The risk model was developed using multivariable logistic regression analysis based on retrospective data of 112 SPKTx recipients. The risk-adjusted 1-sided CUSUM chart was plotted for retrospective and prospective events. The upper control limit was set to 2. There were 2 main causes of death: multiorgan failure (73.3%; 11/15) and septic hemorrhage (26.7%; 4/15). Quality analysis using the CUSUM chart revealed that the process was not homogeneous; however, no significant signal of program deterioration was obtained, and the performance of the whole program was within the settled control limit. For a single pancreatic transplant center, the risk-adjusted CUSUM chart was a useful tool for quality program assessment. It could support decision making during traditional surgical morbidity and mortality conferences. For small transplant centers, increasing the sensitivity of the CUSUM method by lowering the upper control limit should be considered. However, an individualized assessment approach for particular centers is recommended.
Safieddine, Doha; Chkeir, Aly; Herlem, Cyrille; Bera, Delphine; Collart, Michèle; Novella, Jean-Luc; Dramé, Moustapha; Hewson, David J; Duchêne, Jacques
2017-11-01
Falls are a major cause of death in older people. One method used to predict falls is analysis of Centre of Pressure (CoP) displacement, which provides a measure of balance quality. The Balance Quality Tester (BQT) is a device based on a commercial bathroom scale that calculates instantaneous values of the vertical ground reaction force (Fz) as well as the CoP in both anteroposterior (AP) and mediolateral (ML) directions. The entire testing process needs to take no longer than 12 s to ensure subject compliance, making it vital that balance parameters are calculated only for the period when the subject is static. In the present study, a method is presented to detect the stabilization period after a subject has stepped onto the BQT. Four different phases of the test are identified (stepping-on, stabilization, balancing, stepping-off), ensuring that subjects are static when parameters from the balancing phase are calculated. The method, based on a simplified cumulative sum (CUSUM) algorithm, could detect the change between unstable and stable stance. The time taken to stabilize significantly affected the static balance variables of surface area and trajectory velocity, and was also related to Timed-up-and-Go performance. Such a finding suggests that the time to stabilize could be a worthwhile parameter to explore as a potential indicator of balance problems and fall risk in older people. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
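The simplified CUSUM step-detection idea can be sketched as follows; the slack `k`, the cap on the statistic, and the quiet-period rule below are illustrative choices, not the BQT's actual parameters:

```python
def detect_stabilization(fz, target, k=5.0, cap=50.0, quiet_len=25):
    """Return the index where the vertical force fz is deemed stable.

    A one-sided CUSUM accumulates deviations of fz from the target
    (body weight) beyond the slack k; the cap keeps the stepping-on
    transient from saturating the statistic.  Stabilization is declared
    once the statistic has stayed at zero for quiet_len samples.
    """
    s, quiet = 0.0, 0
    for i, x in enumerate(fz):
        s = min(cap, max(0.0, s + abs(x - target) - k))
        quiet = quiet + 1 if s == 0.0 else 0
        if quiet >= quiet_len:
            return i - quiet_len + 1   # first sample of the quiet run
    return None  # never stabilized within the recording
```

With a noisy stepping-on transient followed by a steady plateau near the target weight, the function returns the start of the stable segment once the statistic has drained to zero.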
48 CFR 1552.232-70 - Submission of invoices.
Code of Federal Regulations, 2010 CFR
2010-10-01
... contract financing payment in accordance with the invoice preparation instructions identified as a separate... any supporting data for each work assignment as identified in the instructions. (2) The invoice or... preparation instructions. Cumulative charges represent the net sum of current charges by cost element for the...
Economic analysis of interventions to improve village chicken production in Myanmar.
Henning, J; Morton, J; Pym, R; Hla, T; Sunn, K; Meers, J
2013-07-01
A cost-benefit analysis using deterministic and stochastic modelling was conducted to identify the net benefits for households that adopt (1) vaccination of individual birds against Newcastle disease (ND) or (2) improved management of chick rearing, by providing coops for the protection of chicks from predation and chick starter feed inside a creep feeder to support chicks' nutrition, in village chicken flocks in Myanmar. Partial budgeting was used to assess the additional costs and benefits associated with each of the two interventions tested relative to neither strategy. In the deterministic model, over the first 3 years after the introduction of the interventions, the cumulative sum of the net differences from neither strategy was 13,189 Kyat for ND vaccination and 77,645 Kyat for improved chick management (effective exchange rate in 2005: 1000 Kyat = 1 US$). Both interventions were also profitable after discounting over a 10-year period; Net Present Values for ND vaccination and improved chick management were 30,791 and 167,825 Kyat, respectively. The Benefit-Cost Ratio for ND vaccination was very high (28.8). This was lower for improved chick management, due to greater costs of the intervention, but still favourable at 4.7. Using both interventions concurrently yielded a Net Present Value of 470,543 Kyat and a Benefit-Cost Ratio of 11.2 over the 10-year period in the deterministic model. Using the stochastic model, for the first 3 years following the introduction of the interventions, the mean cumulative sums of the net difference were similar to the values obtained from the deterministic model. Sensitivity analysis indicated that the cumulative net differences were strongly influenced by grower bird sale income, particularly under improved chick management. The effects of the strategies on the odds of households selling and consuming birds after 7 months, and the numbers of birds being sold or consumed after this period, also influenced profitability.
Cost variations for equipment used under improved chick management were not markedly associated with profitability. Net Present Values and Benefit-Cost Ratios discounted over a 10-year period were also similar to the deterministic model when mean values obtained through stochastic modelling were used. In summary, the study showed that ND vaccination and improved chick management can improve the viability and profitability of village chicken production in Myanmar. Copyright © 2013 Elsevier B.V. All rights reserved.
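The discounting arithmetic behind the Net Present Values and Benefit-Cost Ratios can be sketched as below; the cash-flow figures in the usage note are made up for illustration, not the study's data:

```python
def npv(cash_flows, rate):
    """Net present value of yearly cash flows: the year-1 flow is
    discounted once, the year-2 flow twice, and so on."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

def benefit_cost_ratio(benefits, costs, rate):
    """Ratio of discounted benefits to discounted costs."""
    return npv(benefits, rate) / npv(costs, rate)
```

For example, a single benefit of 110 Kyat one year out, discounted at 10%, has a present value of 100 Kyat; against a discounted cost of 55 Kyat that gives a Benefit-Cost Ratio of 2.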
Poaceae pollen in the air depending on the thermal conditions.
Myszkowska, Dorota
2014-07-01
The relationship between meteorological elements, especially thermal conditions, and the appearance of Poaceae pollen in the air was analysed as a basis for constructing a useful model predicting the grass season start. Poaceae pollen concentrations were monitored in 1991-2012 in Kraków using the volumetric method. Cumulative temperature and effective cumulative temperature significantly influenced the season start in this period. The strongest correlation was found for the sum of mean daily temperature amplitudes from April 1 to April 14, with mean daily temperature >15 °C and effective cumulative temperature >3 °C during that period. The proposed model, based on multiple regression, explained 57% of the variation in Poaceae season starts in 1991-2010. When the cumulative mean daily temperature increased by 10 °C, the season start was accelerated by 1 day. Adding the interaction between these two independent variables to the factor regression model increased the goodness of model fit. In 2011 the season started 5 days earlier than the predicted value, while in 2012 the season start was observed 2 days later than the predicted day. Depending on the value of mean daily temperature from March 18th to 31st and the sum of mean daily temperature amplitudes from April 1st to 14th, the grass pollen seasons were divided into five groups according to the time of season start, whereby early and moderate season starts were the most frequent in the studied period and were especially related to mean daily temperature in the second half of March.
A rapid local singularity analysis algorithm with applications
NASA Astrophysics Data System (ADS)
Chen, Zhijun; Cheng, Qiuming; Agterberg, Frits
2015-04-01
The local singularity model developed by Cheng is fast gaining popularity for characterizing mineralization and detecting anomalies in geochemical, geophysical and remote sensing data. However, the conventional algorithm, which involves computing moving-average values at different scales, is time-consuming, especially when analyzing a large dataset. The summed area table (SAT), also called an integral image, is a fast algorithm used within the Viola-Jones object detection framework in computer vision. Historically, the principle of the SAT is well known in the study of multi-dimensional probability distribution functions, namely in computing 2D (or ND) probabilities (the area under the probability distribution) from the respective cumulative distribution functions. In this study we introduce the SAT and its variant, the rotated summed area table, into isotropic, anisotropic and directional local singularity mapping. Once the SAT has been computed, any rectangular sum can be obtained at any scale or location in constant time: the sum over any rectangular region in the image requires only 4 array accesses, independently of the size of the region, effectively reducing the time complexity from O(n) to O(1). New programs in Python, Julia, MATLAB and C++ were implemented to suit different applications, especially big-data analysis. Several large geochemical and remote sensing datasets were tested. A wide variety of scale changes (linear or logarithmic spacing) for non-iterative and iterative approaches were adopted to calculate the singularity index values and compare the results. The results indicate that local singularity analysis with the SAT is more robust than, and superior to, the traditional approach in identifying anomalies.
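A minimal sketch of the SAT construction and the constant-time rectangle sum, using plain Python lists for clarity (the abstract's implementations target Python, Julia, MATLAB and C++):

```python
def summed_area_table(img):
    """Build the SAT: sat[i][j] = sum of img[0..i][0..j]."""
    h, w = len(img), len(img[0])
    sat = [[0.0] * w for _ in range(h)]
    for i in range(h):
        row = 0.0
        for j in range(w):
            row += img[i][j]
            sat[i][j] = row + (sat[i - 1][j] if i else 0.0)
    return sat

def rect_sum(sat, r0, c0, r1, c1):
    """Sum of img over rows r0..r1 and cols c0..c1 (inclusive) in O(1):
    at most four array accesses regardless of rectangle size."""
    total = sat[r1][c1]
    if r0:
        total -= sat[r0 - 1][c1]
    if c0:
        total -= sat[r1][c0 - 1]
    if r0 and c0:
        total += sat[r0 - 1][c0 - 1]
    return total
```

Building the table is a single O(n) pass; every moving-window sum afterwards costs the same four lookups whatever the window scale, which is what makes multi-scale singularity mapping cheap.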
An approach to quality and performance control in a computer-assisted clinical chemistry laboratory.
Undrill, P E; Frazer, S C
1979-01-01
A locally developed, computer-based clinical chemistry laboratory system has been in operation since 1970. This utilises a Digital Equipment Co Ltd PDP 12 and an interconnected PDP 8/F computer. Details are presented of the performance and quality control techniques incorporated into the system. Laboratory performance is assessed through analysis of results from fixed-level control sera as well as from cumulative sum methods. At a simple level the presentation may be considered purely indicative, while at a more sophisticated level statistical concepts have been introduced to aid the laboratory controller in decision-making processes. PMID:438340
Reconceptualizing synergism and antagonism among multiple stressors.
Piggott, Jeremy J; Townsend, Colin R; Matthaei, Christoph D
2015-04-01
The potential for complex synergistic or antagonistic interactions between multiple stressors presents one of the largest uncertainties when predicting ecological change but, despite common use of the terms in the scientific literature, a consensus on their operational definition is still lacking. The identification of synergism or antagonism is generally straightforward when stressors operate in the same direction, but if individual stressor effects oppose each other, the definition of synergism is paradoxical because what is synergistic to one stressor's effect direction is antagonistic to the others. In their highly cited meta-analysis, Crain et al. (Ecology Letters, 11, 2008: 1304) assumed in situations with opposing individual effects that synergy only occurs when the cumulative effect is more negative than the additive sum of the opposing individual effects. We argue against this and propose a new systematic classification based on an additive effects model that combines the magnitude and response direction of the cumulative effect and the interaction effect. A new class of "mitigating synergism" is identified, where cumulative effects are reversed and enhanced. We applied our directional classification to the dataset compiled by Crain et al. (Ecology Letters, 11, 2008: 1304) to determine the prevalence of synergistic, antagonistic, and additive interactions. Compared to their original analysis, we report differences in the representation of interaction classes by interaction type and we document examples of mitigating synergism, highlighting the importance of incorporating individual stressor effect directions in the determination of synergisms and antagonisms. This is particularly pertinent given a general bias in ecology toward investigating and reporting adverse multiple stressor effects (double negative). 
We emphasize the need for reconsideration by the ecological community of the interpretation of synergism and antagonism in situations where individual stressor effects oppose each other or where cumulative effects are reversed and enhanced.
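The additive-baseline logic can be illustrated with a deliberately crude sketch; the thresholding below is a simplification of the paper's full directional classification, not a reproduction of it:

```python
def classify_interaction(effect_a, effect_b, effect_ab, tol=1e-9):
    """Simplified directional classification of a two-stressor interaction.

    Effects are expressed relative to control; the additive expectation
    is effect_a + effect_b.  'mitigating synergism' flags a cumulative
    effect that is reversed in sign yet enhanced in magnitude relative
    to the additive expectation.
    """
    additive = effect_a + effect_b
    interaction = effect_ab - additive
    if abs(interaction) <= tol:
        return "additive"
    reversed_ = additive * effect_ab < 0        # cumulative effect flips sign
    enhanced = abs(effect_ab) > abs(additive)   # magnitude exceeds expectation
    if reversed_ and enhanced:
        return "mitigating synergism"
    if enhanced:
        return "synergism"
    return "antagonism"
```

The point of the sketch is only that both the direction and the magnitude of the cumulative effect enter the decision, which is what distinguishes this scheme from a purely magnitude-based one.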
Social environmental factors and preteen health-related behaviors.
Adelmann, Pamela K
2005-01-01
To examine associations among risk and protective factors with negative (substance use, delinquent behavior, sedentary recreation, empty calorie consumption, suicidal behavior) and positive behaviors (prosocial recreation, productive behavior, exercise, nutrition behavior, academic achievement, seatbelt use). Data were from sixth-grade public school students (n = 133,629) in 2001. Factor analysis produced five risks, five protectors, seven negative and six positive behaviors. Associations were tested among single and cumulative risks and protectors with behaviors (linear, logit regression) and combinations of high and low risks and protectors with behaviors (analysis of variance, Chi-square). Individual and cumulative risks were related to higher and protectors were related to lower negative behaviors. Protectors were associated with higher positive behaviors, with some exceptions. Risks and their sum were associated with lower academic achievement and seatbelt use, but were linked to higher, rather than lower productive behavior. Being bullied or victimized was correlated with higher levels of exercise, good nutrition, and prosocial recreation. The high risk/low protection combination had the highest negative behaviors and low risk/high protection the lowest, but for positive behaviors, high protectors with either high or low risks showed higher positive behaviors. These findings provide new information about how (a) risks and protectors are associated with negative behaviors besides substance use and delinquency, (b) cumulative protectors, as well as risks, are related to negative and positive behaviors, and (c) combinations of high and low risks and protectors are related to behaviors. The unusual findings for positive behaviors merit further exploration.
Urban, Jillian E.; Davenport, Elizabeth M.; Golman, Adam J.; Maldjian, Joseph A.; Whitlow, Christopher T.; Powers, Alexander K.; Stitzel, Joel D.
2015-01-01
Sports-related concussion is the most common athletic head injury, with football having the highest rate among high school athletes. Traditionally, research on the biomechanics of football-related head impact has been focused at the collegiate level. Less research has been performed at the high school level, despite the incidence of concussion among high school football players. The objective of this study is twofold: to quantify head impact exposure in high school football, and to develop a cumulative impact analysis method. Head impact exposure was measured by instrumenting the helmets of 40 high school football players with helmet-mounted accelerometer arrays to measure linear and rotational acceleration. A total of 16,502 head impacts were collected over the course of the season. Biomechanical data were analyzed by team and by player. The median impact for each player ranged from 15.2 to 27.0 g with an average value of 21.7 (±2.4) g. The 95th percentile impact for each player ranged from 38.8 to 72.9 g with an average value of 56.4 (±10.5) g. Next, an impact exposure metric utilizing concussion injury risk curves was created to quantify cumulative exposure for each participating player over the course of the season. Impacts were weighted according to the associated risk due to linear acceleration and rotational acceleration alone, as well as the combined probability (CP) of injury associated with both. These risks were summed over the course of a season to generate risk-weighted cumulative exposure. The impact frequency was found to be greater during games compared to practices, with an average number of impacts per session of 15.5 and 9.4, respectively. However, the median cumulative risk-weighted exposure based on combined probability was found to be greater for practices vs. games.
These data will provide a metric that may be used to better understand the cumulative effects of repetitive head impacts, injury mechanisms, and head impact exposure of athletes in football. PMID:23864337
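The risk-weighted exposure metric sums a per-impact injury probability over the season. The logistic form below follows published combined-probability models of this kind, but the coefficients should be treated as illustrative placeholders rather than the study's fitted values:

```python
import math

def combined_probability(lin_g, rot_rads2, beta=(-10.2, 0.0433, 0.000873, -9.2e-7)):
    """Probability of concussion from linear acceleration (g) and
    rotational acceleration (rad/s^2) via a bivariate logistic model.
    The beta values here are placeholders, not the study's coefficients."""
    b0, b1, b2, b3 = beta
    z = b0 + b1 * lin_g + b2 * rot_rads2 + b3 * lin_g * rot_rads2
    return 1.0 / (1.0 + math.exp(-z))

def risk_weighted_exposure(impacts):
    """Cumulative exposure metric: per-impact risks summed over a season.
    `impacts` is an iterable of (linear_g, rotational_rads2) pairs."""
    return sum(combined_probability(a, r) for a, r in impacts)
```

Because risks are summed rather than averaged, many low-severity practice impacts can out-weigh fewer but harder game impacts, which is consistent with the practice-vs-game finding above.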
Physical intelligence does matter to cumulative technological culture.
Osiurak, François; De Oliveira, Emmanuel; Navarro, Jordan; Lesourd, Mathieu; Claidière, Nicolas; Reynaud, Emanuelle
2016-08-01
Tool-based culture is not unique to humans, but cumulative technological culture is. The social intelligence hypothesis suggests that this phenomenon is fundamentally based on uniquely human sociocognitive skills (e.g., shared intentionality). An alternative hypothesis is that cumulative technological culture also crucially depends on physical intelligence, which may reflect fluid and crystallized aspects of intelligence and enables people to understand and improve the tools made by predecessors. By using a tool-making-based microsociety paradigm, we demonstrate that physical intelligence is a stronger predictor of cumulative technological performance than social intelligence. Moreover, learners' physical intelligence is critical not only in observational learning but also when learners interact verbally with teachers. Finally, we show that cumulative performance is only slightly influenced by teachers' physical and social intelligence. In sum, human technological culture needs "great engineers" to evolve regardless of the proportion of "great pedagogues." Social intelligence might play a more limited role than commonly assumed, perhaps in tool-use/making situations in which teachers and learners have to share symbolic representations. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Wittenberg, Philipp; Gan, Fah Fatt; Knoth, Sven
2018-04-17
The variable life-adjusted display (VLAD) is the first risk-adjusted graphical procedure proposed in the literature for monitoring the performance of a surgeon. It displays the cumulative sum of expected minus observed deaths. It has since become highly popular because the statistic plotted is easy to understand. But it is also easy to misinterpret a surgeon's performance by utilizing the VLAD, potentially leading to grave consequences. The problem of misinterpretation is essentially caused by the variance of the VLAD's statistic that increases with sample size. In order for the VLAD to be truly useful, a simple signaling rule is desperately needed. Various forms of signaling rules have been developed, but they are usually quite complicated. Without signaling rules, making inferences using the VLAD alone is difficult if not misleading. In this paper, we establish an equivalence between a VLAD with V-mask and a risk-adjusted cumulative sum (RA-CUSUM) chart based on the difference between the estimated probability of death and surgical outcome. Average run length analysis based on simulation shows that this particular RA-CUSUM chart has similar performance as compared to the established RA-CUSUM chart based on the log-likelihood ratio statistic obtained by testing the odds ratio of death. We provide a simple design procedure for determining the V-mask parameters based on a resampling approach. Resampling from a real data set ensures that these parameters can be estimated appropriately. Finally, we illustrate the monitoring of a real surgeon's performance using VLAD with V-mask. Copyright © 2018 John Wiley & Sons, Ltd.
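The VLAD statistic itself is simple to compute; a minimal sketch (the V-mask signaling rule developed in the paper is not reproduced here):

```python
def vlad(predicted_risks, outcomes):
    """Variable life-adjusted display: the cumulative sum of expected
    minus observed deaths, one point per operation (outcome 1 = death).
    An upward drift means fewer deaths than the risk-adjusted expectation."""
    total, path = 0.0, []
    for p, y in zip(predicted_risks, outcomes):
        total += p - y
        path.append(total)
    return path
```

The paper's caution applies directly to this sketch: the variance of the plotted statistic grows with the number of cases, so the raw path should not be interpreted without a signaling rule such as the V-mask.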
Model-Free CUSUM Methods for Person Fit
ERIC Educational Resources Information Center
Armstrong, Ronald D.; Shi, Min
2009-01-01
This article demonstrates the use of a new class of model-free cumulative sum (CUSUM) statistics to detect person fit given the responses to a linear test. The fundamental statistic being accumulated is the likelihood ratio of two probabilities. The detection performance of this CUSUM scheme is compared to other model-free person-fit statistics…
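The accumulation-and-reset mechanics of a likelihood-ratio CUSUM can be sketched for Bernoulli item responses; this generic version only illustrates the mechanism and is not the article's model-free statistic:

```python
import math

def lr_cusum(responses, p_fit, p_misfit):
    """One-sided CUSUM accumulating per-item log-likelihood ratios of a
    misfitting response model against a fitting one, resetting at zero.
    A large value signals aberrant responding."""
    s, path = 0.0, []
    for x, p0, p1 in zip(responses, p_fit, p_misfit):
        # Bernoulli log-likelihood ratio for item response x in {0, 1}
        lr = math.log(p1 / p0) if x == 1 else math.log((1 - p1) / (1 - p0))
        s = max(0.0, s + lr)
        path.append(s)
    return path
```

Responses consistent with the fitting model drive the statistic back to zero, while a run of responses better explained by the misfit model accumulates until it crosses a decision threshold.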
NASA Astrophysics Data System (ADS)
Musdalifah, N.; Handajani, S. S.; Zukhronah, E.
2017-06-01
Competition between homogeneous companies forces each company to maintain production quality. To address this, companies control production with statistical quality control using control charts. The Shewhart control chart is used for normally distributed data. Production data, however, are often non-normally distributed, and small process shifts occur. The grand median control chart is a control chart for non-normally distributed data, while the cumulative sum (cusum) control chart is sensitive to small process shifts. The purpose of this research is to compare grand median and cusum control charts on the shuttlecock weight variable at CV Marjoko Kompas dan Domas by generating data following the actual distribution. The generated data are used to simulate the standard deviation multiplier for the grand median and cusum control charts. The simulation is run to obtain an average run length (ARL) of 370. The grand median control chart detects ten out-of-control points, while the cusum control chart detects one out-of-control point. It can be concluded that the grand median control chart performs better than the cusum control chart.
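The ARL-370 design point can be checked by simulation; a sketch for a two-sided tabular CUSUM on normal data, where k = 0.5σ and h ≈ 4.77σ are textbook values giving an in-control ARL near 370:

```python
import random

def arl_in_control(mu, sigma, k, h, trials=2000, seed=1):
    """Estimate the in-control average run length of a two-sided tabular
    CUSUM with reference value k and decision interval h (data units)."""
    rng = random.Random(seed)
    runs = 0
    for _ in range(trials):
        hi = lo = 0.0
        n = 0
        while True:
            n += 1
            x = rng.gauss(mu, sigma)
            hi = max(0.0, hi + (x - mu) - k)   # upward-shift statistic
            lo = max(0.0, lo + (mu - x) - k)   # downward-shift statistic
            if hi > h or lo > h:
                break                          # (false) alarm ends the run
        runs += n
    return runs / trials
```

An in-control ARL of about 370 matches the false-alarm rate of a 3-sigma Shewhart chart, which is why it is the usual calibration target when comparing charts.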
E-commerce Review System to Detect False Reviews.
Kolhar, Manjur
2017-08-15
E-commerce sites have been doing profitable business since their induction on high-speed and secured networks. Moreover, they continue to influence consumers through various methods. One of the most effective methods is the e-commerce review rating system, in which consumers provide review ratings for the products used. However, almost all e-commerce review rating systems are unable to provide cumulative review ratings. Furthermore, review ratings are influenced by positive and negative malicious feedback ratings, collectively called false reviews. In this paper, we propose an e-commerce review system framework developed using the cumulative sum method to detect and remove malicious review ratings.
Cumulative Search-Evasion Games (CSEGs)
1989-03-01
Eagle, James N., and Washburn, Alan R.
Cumulative search-evasion games (CSEGs) are two-person zero-sum search-evasion games where play proceeds throughout some specified period. If X and Y are the positions of the two players at time t, then the game's payoff is the sum over t from 1 to T of A(X, Y, t). Additionally, all paths...
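The payoff described above can be computed directly; the payoff function A in the usage note is a hypothetical detection indicator, not one from the report:

```python
def cseg_payoff(path_x, path_y, A):
    """Cumulative search-evasion game payoff: the sum over t = 1..T of
    A(x_t, y_t, t) for the searcher's and evader's paths."""
    return sum(A(x, y, t)
               for t, (x, y) in enumerate(zip(path_x, path_y), start=1))
```

With A returning 1 whenever the players occupy the same cell, the payoff simply counts detections over the horizon.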
Statistical Process Control Charts for Measuring and Monitoring Temporal Consistency of Ratings
ERIC Educational Resources Information Center
Omar, M. Hafidz
2010-01-01
Methods of statistical process control were briefly investigated in the field of educational measurement as early as 1999. However, only the use of a cumulative sum chart was explored. In this article other methods of statistical quality control are introduced and explored. In particular, methods in the form of Shewhart mean and standard deviation…
NASA Astrophysics Data System (ADS)
Girard, Catherine; Dufour, Anne-Béatrice; Charruault, Anne-Lise; Renaud, Sabrina
2018-01-01
Benthic foraminifera have been used as proxies for various paleoenvironmental variables such as food availability, carbon flux from surface waters, microhabitats, and, indirectly, water depth. Estimating assemblage composition based on morphotypes, as opposed to genus- or species-level identification, potentially loses important ecological information but opens the way to the study of ancient time periods. However, the ability to accurately constrain benthic foraminiferal assemblages has been questioned when the most abundant foraminifera are fragile agglutinated forms, particularly prone to fragmentation. Here we test an alternative method for accurately estimating the composition of fragmented assemblages: the cumulated area per morphotype method, i.e., the sum of the areas of all tests or fragments of a given morphotype in a sample. The percentage of each morphotype is calculated as a portion of the total cumulated area. Percentages of different morphotypes based on the counting and cumulated area methods are compared one by one and analyzed using principal component analyses, a co-inertia analysis, and Shannon diversity indices. Morphotype percentages are further compared to an estimate of water depth based on microfacies description. Percentages of the morphotypes are not related to water depth. In all cases, the counting and cumulated area methods deliver highly similar results, suggesting that the less time-consuming traditional counting method may provide robust estimates of assemblages. The size of each morphotype may deliver paleobiological information, for instance regarding biomass, but should be considered carefully due to the pervasive issue of fragmentation.
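The cumulated-area bookkeeping is straightforward; a sketch with hypothetical morphotype names:

```python
from collections import defaultdict

def morphotype_percentages(specimens):
    """Percentage of each morphotype as its share of the total cumulated
    test/fragment area.  `specimens` is an iterable of
    (morphotype, area) pairs; fragments contribute their area too."""
    areas = defaultdict(float)
    for morph, area in specimens:
        areas[morph] += area
    total = sum(areas.values())
    return {m: 100.0 * a / total for m, a in areas.items()}
```

Because a broken test contributes the same total area as an intact one, the percentages are insensitive to fragmentation in a way that raw counts are not.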
Gupta, Neeraj; Puliyel, Ashish; Manchanda, Ayush; Puliyel, Jacob
2012-07-01
To apply the cumulative sum (CUSUM) method to monitor a drug trial of nebulized hypertonic saline in bronchiolitis, and to test whether monitoring with CUSUM control lines is practical and useful as a prompt to stop the drug trial early if the study drug performs significantly worse than the comparator drug. Prospective, open-label, controlled trial using standard therapy (epinephrine) and the study drug (hypertonic saline) sequentially in two groups of patients. Hospital offering tertiary-level pediatric care. Children, 2 months to 2 years, with a first episode of bronchiolitis, excluding those with cardiac disease, immunodeficiency and critical illness at presentation. Nebulized epinephrine in the first half of the bronchiolitis season (n = 35) and hypertonic saline subsequently (n = 29). Continuous monitoring of response to hypertonic saline using CUSUM control charts developed with epinephrine-response data. Clinical score, tachycardia and total duration of hospital stay. In the epinephrine group, the maximum CUSUM was +2.25 (SD 1.34) and the minimum CUSUM was -2.26 (SD 1.34). The CUSUM score in the hypertonic saline group stayed above the zero line throughout the study. There was no statistical difference between the treatment groups in the post-treatment clinical score at 24 hours {mean (SD) 3.516 (2.816) vs. 3.552 (2.686); 95% CI: -1.416 to 1.356}, heart rate {mean (SD) 136 (44) vs. 137 (12); 95% CI: -17.849 to 15.849} or duration of hospital stay {mean (SD) 96.029 (111.41) vs. 82.914 (65.940); 95% CI: -33.888 to 60.128}. The software we developed allows control lines to be drawn to monitor study drug performance. Hypertonic saline performed as well as or better than nebulized epinephrine in bronchiolitis.
León, Larry F; Cai, Tianxi
2012-04-01
In this paper we develop model checking techniques for assessing functional form specifications of covariates in censored linear regression models. These procedures are based on a censored-data analog to taking cumulative sums of "robust" residuals over the space of the covariate under investigation. These cumulative sums are formed by integrating certain Kaplan-Meier estimators and may be viewed as "robust" censored-data analogs to the processes considered by Lin, Wei & Ying (2002). The null distributions of these stochastic processes can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be generated by computer simulation. Each observed process can then be graphically compared with a few realizations from the Gaussian process. We also develop formal test statistics for numerical comparison. Such comparisons enable one to assess objectively whether an apparent trend seen in a residual plot reflects model misspecification or natural variation. We illustrate the methods with a well-known dataset. In addition, we examine the finite sample performance of the proposed test statistics in simulation experiments. In our simulation experiments, the proposed test statistics have good power of detecting misspecification while at the same time controlling the size of the test.
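The basic construction, shown here for uncensored data (the paper's processes additionally weight the residuals using Kaplan-Meier estimators, which is omitted in this sketch):

```python
def cumulative_residual_process(covariate, residuals):
    """Partial sums of residuals taken over increasing covariate values.
    Under a correctly specified functional form the path hovers around
    zero; a systematic excursion suggests misspecification."""
    order = sorted(range(len(covariate)), key=lambda i: covariate[i])
    path, s = [], 0.0
    for i in order:
        s += residuals[i]
        path.append(s)
    return path
```

In practice the observed path is compared against simulated zero-mean realizations to judge whether its excursions exceed natural variation.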
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stubbs, J.; Atkins, H.
1999-01-01
(117m)Sn(4+) DTPA is a new radiopharmaceutical for the palliation of pain associated with metastatic bone cancer. Recently, Phase 2 clinical trials involving 47 patients were completed. These patients received administered activities in the range 6.7-10.6 MBq/kg of body mass. Frequent collections of urine were acquired over the first several hours postadministration, and daily cumulative collections were obtained for the next 4-10 days. Anterior/posterior gamma camera images were obtained frequently over the initial 10 days. Radiation dose estimates were calculated for 8 of these patients. Each patient's biodistribution data were mathematically simulated using a multicompartmental model. The model consisted of the following compartments: central, bone, kidney, other tissues, and cumulative urine. The measured cumulative urine data were used as references for the cumulative urine excretion compartment. The total-body compartment (sum of the bone surfaces, central, kidney, and other tissues compartments) was referenced to all activity not excreted in the urine.
Constrained multiple indicator kriging using sequential quadratic programming
NASA Astrophysics Data System (ADS)
Soltani-Mohammadi, Saeed; Erhan Tercan, A.
2012-11-01
Multiple indicator kriging (MIK) is a nonparametric method used to estimate conditional cumulative distribution functions (CCDFs). Indicator estimates produced by MIK may not satisfy the order relations of a valid CCDF, which is ordered and bounded between 0 and 1. In this paper a new method is presented that guarantees the order relations of the cumulative distribution functions estimated by multiple indicator kriging. The method is based on minimizing the sum of kriging variances for each cutoff under unbiasedness and order-relation constraints, and on solving the constrained indicator kriging system by sequential quadratic programming. A computer code written in the MATLAB environment implements the developed algorithm, and the method is applied to thickness data.
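The paper enforces the constraints inside the kriging system via sequential quadratic programming; as a simpler, purely illustrative alternative, order-relation violations in raw estimates can be repaired afterwards by clipping to [0, 1] and an isotonic (pool-adjacent-violators) projection:

```python
def correct_order_relations(ccdf):
    """Project raw indicator-kriging estimates at increasing cutoffs onto
    a valid CCDF: clip to [0, 1], then enforce monotonicity with the
    pool-adjacent-violators algorithm (an isotonic least-squares fit)."""
    vals = [min(1.0, max(0.0, v)) for v in ccdf]
    merged = []  # blocks of [mean, weight]
    for v in vals:
        merged.append([v, 1])
        # merge adjacent blocks while their means decrease
        while len(merged) > 1 and merged[-2][0] > merged[-1][0]:
            m2, m1 = merged.pop(), merged.pop()
            w = m1[1] + m2[1]
            merged.append([(m1[0] * m1[1] + m2[0] * m2[1]) / w, w])
    out = []
    for mean, weight in merged:
        out.extend([mean] * weight)
    return out
```

Unlike the constrained-kriging formulation, this post-processing does not minimize the kriging variances; it only illustrates what a valid, ordered CCDF looks like.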
Distributed Immune Systems for Wireless Network Information Assurance
2010-04-26
ratio test (SPRT), where the goal is to optimize a hypothesis testing problem given a trade-off between the probability of errors and the...using cumulative sum (CUSUM) and Girshik-Rubin-Shiryaev (GRSh) statistics. In sequential versions of the problem the sequential probability ratio ...the more complicated problems, in particular those where no clear mean can be established. We developed algorithms based on the sequential probability
Beyond the job exposure matrix (JEM): the task exposure matrix (TEM).
Benke, G; Sim, M; Fritschi, L; Aldred, G
2000-09-01
The job exposure matrix (JEM) has been employed to assign cumulative exposure to workers in many epidemiological studies. In these studies, where quantitative data are available, all workers with the same job title and duration are usually assigned similar cumulative exposures, expressed in mg m(-3) × years. However, if the job is composed of multiple tasks, each with its own specific exposure profile, then assigning all workers within a job the same mean exposure can lead to misclassification of exposure. This variability of exposure within job titles is one of the major weaknesses of JEMs. A method is presented for reducing the variability in the JEM methodology, which has been called the task exposure matrix (TEM). By summing the cumulative exposures of a worker over all the tasks worked within a job title, it is possible to address the variability of exposure within the job title and reduce possible exposure misclassification. The construction of a TEM is outlined and its application in the context of a study in the primary aluminium industry is described. The TEM was found to assign significantly different cumulative exposures to the majority of workers in the study compared with the JEM, and the degree of difference in cumulative exposure between the JEM and the TEM varied greatly between contaminants.
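The TEM assignment can be sketched as a sum over tasks; the task names and exposure levels below are hypothetical:

```python
def tem_cumulative_exposure(task_history, task_exposure):
    """Cumulative exposure (mg m^-3 x years) summed over a worker's own
    task history rather than assigned from the job title alone.
    task_history: list of (task, years); task_exposure: task -> mg m^-3."""
    return sum(years * task_exposure[task] for task, years in task_history)
```

Two workers sharing a job title but with different task mixes thus receive different cumulative exposures, which is exactly the within-job variability a JEM collapses.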
Evidence for slowdown in stratospheric ozone loss: First stage of ozone recovery
NASA Technical Reports Server (NTRS)
Newchurch, M. J.; Yang, Eun-Su; Cunnold, D. M.; Reinsel, C.; Zawodny, J. M.; Russell, James M., III
2003-01-01
Global ozone trends derived from the Stratospheric Aerosol and Gas Experiment I and II (SAGE I/II), combined with the more recent Halogen Occultation Experiment (HALOE) observations, provide evidence of a slowdown in stratospheric ozone losses since 1997. This evidence is quantified by the cumulative sum of residual differences from the predicted linear trend. The cumulative residuals indicate that the rate of ozone loss at 35-45 km altitudes globally has diminished. These changes in loss rates are consistent with the slowdown of total stratospheric chlorine increases characterized by HALOE HCl measurements. These changes in the ozone loss rates in the upper stratosphere are significant and constitute the first stage of a recovery of the ozone layer.
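The cumulative sum of residual differences from a fitted linear trend is a minimal computation, sketched here:

```python
def cusum_of_residuals(series, predicted):
    """Cumulative sum of (observed - predicted) residuals against a
    fitted trend, one partial sum per observation."""
    total, path = 0.0, []
    for obs, fit in zip(series, predicted):
        total += obs - fit
        path.append(total)
    return path
```

A path that bends upward after some date indicates observations running persistently above the extrapolated downward trend, i.e., a slowdown in the loss rate.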
W. J. Massman
2004-01-01
Present air quality standards to protect vegetation from ozone are based on measured concentrations (i.e., exposure) rather than on plant uptake rates (or dose). Some familiar cumulative exposure-based indices include SUM06, AOT40, and W126. However, plant injury is more closely related to dose, or more appropriately to effective dose, than to exposure. This study...
Navy Fuel Composition and Screening Tool (FCAST) v2.8
2016-05-10
allowed us to develop partial least squares (PLS) models based on gas chromatography-mass spectrometry (GC-MS) data that predict fuel properties. The... Index terms: chemometric property modeling; partial least squares (PLS); compositional profiler; Naval Air Systems Command Air-4.4.5; Patuxent River Naval Air Station; Patuxent... Abbreviations: cumulative predicted residual error sum of squares; DiEGME, diethylene glycol monomethyl ether; FCAST, Fuel Composition and Screening Tool; FFP, Fit for
Enhanced Cumulative Sum Charts for Monitoring Process Dispersion
Abujiya, Mu’azu Ramat; Riaz, Muhammad; Lee, Muhammad Hisyam
2015-01-01
The cumulative sum (CUSUM) control chart is widely used in industry for the detection of small and moderate shifts in process location and dispersion. For efficient monitoring of process variability, we present several CUSUM control charts for monitoring changes in the standard deviation of a normal process. The newly developed control charts, based on well-structured sampling techniques - extreme ranked set sampling, extreme double ranked set sampling and double extreme ranked set sampling - have significantly enhanced the CUSUM chart's ability to detect a wide range of shifts in process variability. The relative performances of the proposed CUSUM scale charts are evaluated in terms of the average run length (ARL) and standard deviation of run length, for a point shift in variability. Moreover, for overall performance, we use the average ratio of ARL and the average extra quadratic loss. A comparison of the proposed CUSUM control charts with the classical CUSUM R chart, the classical CUSUM S chart, the fast initial response (FIR) CUSUM R chart, the FIR CUSUM S chart, the ranked set sampling (RSS) based CUSUM R chart and the RSS based CUSUM S chart, among others, is presented. An illustrative example using a real dataset is given to demonstrate the practical application of the proposed schemes. PMID:25901356
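A minimal one-sided tabular CUSUM on subgroup standard deviations conveys the core mechanism: accumulate exceedances over a reference value and signal when the sum crosses a decision interval. This is a simplified sketch, not the ranked-set-sampling charts proposed in the paper; the target, reference value k, decision interval h, and data are illustrative:

```python
def cusum_upper(stats, target, k, h):
    """One-sided upper tabular CUSUM: accumulate exceedances of the
    subgroup statistic over (target + k) and signal once the cumulative
    sum crosses the decision interval h.
    Returns (index of first signal or None, CUSUM path)."""
    c, path = 0.0, []
    for i, s in enumerate(stats):
        c = max(0.0, c + (s - target) - k)
        path.append(c)
        if c > h:
            return i, path
    return None, path

# In-control subgroup standard deviations near 1.0, then a shift to ~1.8
stats = [1.02, 0.98, 1.00, 1.05, 0.97] * 4 + [1.80, 1.85, 1.90, 1.80, 1.75]
idx, path = cusum_upper(stats, target=1.0, k=0.25, h=1.0)
# The chart stays at zero while in control and signals shortly after the
# shift in dispersion at subgroup 20.
```

The run-length performance studied in the paper (ARL) is simply the distribution of this signal index under repeated sampling.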
Individual Impact Magnitude vs. Cumulative Magnitude for Estimating Concussion Odds.
O'Connor, Kathryn L; Peeters, Thomas; Szymanski, Stefan; Broglio, Steven P
2017-08-01
Helmeted impact devices have allowed researchers to investigate the biomechanics of head impacts in vivo. While increased impact magnitude has been associated with greater concussion risk, a definitive concussive threshold has not been established. It is likely that concussion risk is not determined by a single impact itself, but by a host of predisposing factors. These factors may include genetics, fatigue, and/or prior head impact exposure. The objective of the current paper is to investigate the association between cumulative head impact magnitude and concussion risk. It is hypothesized that increased cumulative magnitudes will be associated with greater concussion risk. This retrospective analysis included participants who were recruited from regional high schools in Illinois and Michigan from 2007 to 2014 as part of an ongoing study on concussion biomechanics. Across seven seasons, 185 high school football athletes were instrumented with the Head Impact Telemetry system. Out of 185 athletes, 31 (17%) sustained a concussion, with two athletes sustaining two concussions over the study period, yielding 33 concussive events. The system recorded 78,204 impacts for all concussed players. Linear acceleration, rotational acceleration, and head impact telemetry severity profile (HITsp) magnitudes were summed within five timeframes: the day of injury, three days prior to injury, seven days prior to injury, 30 days prior to injury, and prior in-season exposure. Logistic regressions were modeled to explain concussive events based on the singular linear acceleration, rotational acceleration, and HITsp event along with the calculated summations over time. Linear acceleration, rotational acceleration, and HITsp all produced significant models estimating concussion (p < 0.05).
The strongest estimators of a concussive impact were the linear acceleration (OR = 1.040, p < 0.05), rotational acceleration (OR = 1.001, p < 0.05), and HITsp (OR = 1.003, p < 0.05) for the singular impact rather than any of the cumulative magnitude calculations. Moreover, no cumulative count measure was significant for linear or rotational acceleration. Results from this investigation support the growing literature indicating cumulative magnitude is not related to concussion likelihood. Cumulative magnitude is a simplistic measure of the total exposure sustained by a player over a given period. However, this measure is limited as it assumes the brain is a static structure unable to undergo self-repair. Future research should consider how biological recovery between impacts may influence concussion risk.
Contribution of stone size to chronic kidney disease in kidney stone formers.
Ahmadi, Farrokhlagha; Etemadi, Samira Motedayen; Lessan-Pezeshki, Mahbob; Mahdavi-Mazdeh, Mitra; Ayati, Mohsen; Mir, Alireza; Yazdi, Hadi Rokni
2015-01-01
To determine whether stone burden correlates with the degree of chronic kidney disease in kidney stone formers. A total of 97 extracorporeal shockwave lithotripsy candidates aged 18 years and older were included. Size, number and location of the kidney stones, along with cumulative stone size (defined as the sum of the diameters of all stones), were determined. Estimated glomerular filtration rate was determined using the Chronic Kidney Disease Epidemiology Collaboration cystatin C/creatinine equation, and chronic kidney disease was defined as estimated glomerular filtration rate <60 mL/min/1.73 m². In individuals with cumulative stone size <20 mm, estimated glomerular filtration rate significantly decreased when moving from the first (estimated glomerular filtration rate 75.5 ± 17.8 mL/min/1.73 m²) to the fourth (estimated glomerular filtration rate 56.4 ± 20.44 mL/min/1.73 m²) quartile (P = 0.004). When patients with a cumulative stone size ≥20 mm were included, the observed association was rendered non-significant. In individuals with a cumulative stone size <20 mm, each 1-mm increase in cumulative stone size was associated with a 20% increased risk of having chronic kidney disease. The relationship persisted even after adjustment for age, sex, body mass index, C-reactive protein, fasting plasma glucose, thyroid stimulating hormone, presence of microalbuminuria, history of renal calculi, history of extracorporeal shockwave lithotripsy, and number and location of the stones (odds ratio 1.24, 95% confidence interval 1.02-1.52). The same was not observed for individuals with a cumulative stone size ≥20 mm. In kidney stone formers with a cumulative stone size up to 20 mm, estimated glomerular filtration rate linearly declines with increasing cumulative stone size. Additionally, cumulative stone size is an independent predictor of chronic kidney disease in this group of patients. © 2014 The Japanese Urological Association.
Cumulative Social Risk and Obesity in Early Childhood
Duarte, Cristiane S.; Chambers, Earle C.; Boynton-Jarrett, Renée
2012-01-01
OBJECTIVES: The goal of this study was to examine the relationship between cumulative social adversity and childhood obesity among preschool-aged children (N = 1605) in the Fragile Families and Child Wellbeing Study. METHODS: Maternal reports of intimate partner violence, food insecurity, housing insecurity, maternal depressive symptoms, maternal substance use, and father’s incarceration were obtained when the child was 1 and 3 years of age. Two cumulative social risk scores were created by summing the 6 factors assessed at ages 1 and 3 years. Child height and weight were measured at 5 years of age. Logistic regression models stratified according to gender were used to estimate the association between cumulative social risk and obesity, adjusting for sociodemographic factors. RESULTS: Seventeen percent of children were obese at age 5 years, and 57% had at least 1 social risk factor. Adjusting for sociodemographic factors, girls experiencing high cumulative social risk (≥2 factors) at age 1 year only (odds ratio [OR]: 2.1 [95% confidence interval [CI]: 1.1–4.1]) or at 3 years only (OR: 2.2 [95% CI: 1.2–4.2]) were at increased odds of being obese compared with girls with no risk factors at either time point. Those experiencing high cumulative risk at age 1 and 3 years were not at statistically significant odds of being obese (OR: 1.9 [95% CI: 0.9–4.0]). No significant associations were noted among boys. CONCLUSIONS: There seems to be gender differences in the effects of cumulative social risk factors on the prevalence of obesity at 5 years of age. Understanding the social context of families could make for more effective preventive efforts to combat childhood obesity. PMID:22508921
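The cumulative social risk score used above is simply a count of risk factors present at a given age. A minimal sketch, with factor names taken from the abstract and an illustrative child record:

```python
def cumulative_risk_score(factors):
    """Sum binary risk indicators (True = factor present) into a
    cumulative social risk score, as in the study's summed 6-factor index."""
    return sum(1 for present in factors.values() if present)

# Hypothetical child with two of the six risk factors present
child = {
    "intimate_partner_violence": False,
    "food_insecurity": True,
    "housing_insecurity": True,
    "maternal_depressive_symptoms": False,
    "maternal_substance_use": False,
    "father_incarcerated": False,
}
score = cumulative_risk_score(child)
high_risk = score >= 2  # the ">=2 factors" high-risk cutoff used above
```

Counting factors rather than weighting them is the defining simplification of the cumulative risk approach: the number of adversities matters, not their identity.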
Nikulina, Valentina; Gelin, Melissa; Zwilling, Amanda
2017-12-01
Adverse childhood experiences (ACEs) have been shown to cumulatively predict a range of poor physical and mental health outcomes across adulthood. The cumulative effect of ACEs on intimate partner violence (IPV) in emerging adulthood has not been previously explored. The current study examined the individual and cumulative associations between nine ACEs (emotional abuse, physical abuse, sexual abuse, emotional neglect, physical neglect, witnessing domestic violence, living with a mentally ill, substance abusing, or incarcerated household member) and IPV in a diverse sample of college students ( N = 284; M age = 20.05 years old [ SD = 2.5], 32% male, 37% Caucasian, 30% Asian, 33% other, and 27% Hispanic) from an urban, public college in the Northeast of the United States. Participants reported ACEs (measured by the Adverse Childhood Experiences Survey) and IPV perpetration and victimization (measured with the Revised Conflict Tactics Scale-2) of physical and psychological aggression in an online study that took place from 2015 to 2016. Bivariate and multivariate associations between ACEs, cumulative ACEs (assessed by the sum of adverse experiences), and IPV outcomes were assessed, while controlling for demographics and socioeconomic status. No cumulative associations were observed between ACEs and any of the IPV subscales in multivariate regressions, while witnessing domestic violence was significantly associated with perpetration and victimization of physical aggression and injury, and household member incarceration and physical abuse were associated with physical aggression perpetration. Adverse childhood events do not seem to associate cumulatively with IPV in emerging adulthood and the contributions of individual childhood experiences appear to be more relevant for IPV outcomes. Clinical and research implications are discussed.
NASA Astrophysics Data System (ADS)
Dai, S.; Shulski, M.
2013-12-01
Climate warming, changes in rainfall patterns, and increases in extreme events are resulting in higher risks of crop failures. A greater sense of urgency has been induced to understand the impacts of past climate on crop production in the U.S. As one of the most predominant sources of feed grains, corn is also the main source of U.S. ethanol. In the U.S. Corn Belt, a region-scale evaluation of temperature and precipitation variability and extremes during the growing season is not yet well documented. This study is part of the USDA-funded project 'Useful to Usable: Transforming climate variability and change information for cereal crop producers'. The overall goal of our work is to study the characteristics of average growing season conditions and changes in growing season temperature- and precipitation-based indices that are closely correlated with corn grain yield in the U.S. Corn Belt. The research area is the twelve major Corn Belt states: IL, IN, IA, KS, MI, MN, MO, NE, OH, SD, ND, and WI. Climate data during 1981-2010 from 132 meteorological stations (elevation ranges from 122 m to 1,202 m) are used in this study, including daily minimum, maximum, and mean temperature, and daily precipitation. From 1981 to 2012, the beginning date (BD), ending date (ED), and growing season length (GSL) of the climatological corn growing season are studied. In particular, temperature- and precipitation-based indices during the agronomic corn growing season, April to October, are analyzed.
The temperature-based indices include: number of days with daily mean temperature below 10°C, number of days with daily mean temperature above 30°C, the sum of growing degree days (GDD) between 10°C and 30°C (GDD10,30, growth range for corn), the sum of growing degree days above 30°C (GDD30+, exposure to harmful warming for corn), the sum of growing degree days between 0°C and 44°C (GDD0,44, survival range limits for corn), the sum of growing degree days between 5°C and 35°C (GDD5,35, growth range limits for corn), and the sum of growing degree days between 20°C and 22°C (GDD20,22, optimal growth range for corn). The precipitation-based indices include: cumulative precipitation, consecutive dry days, and number of extreme precipitation events in June. For the decadal trend analysis of climatic factors, Sen's nonparametric estimator of slope and the nonparametric Mann-Kendall test are used. In the U.S. Corn Belt, annual mean Tavg ranges from 5.7°C to 14.7°C, and annual cumulative precipitation ranges from 396 mm to 1,203 mm. According to the decadal trends of annual mean Tavg and annual cumulative precipitation, 30 stations (45%) demonstrate a warm and dry trend, and 28 stations demonstrate a warm and wet trend. At the monthly scale, June mean Tmin presents the most significantly increasing trend, and no significant decreasing or zero trend is detected from 1981 to 2012. During the climatological corn growing season, BD ranges from 76 to 128 DOY, ED ranges from 276 to 316 DOY, and GSL ranges from 150 to 242 days. From 1981 to 2012, BD is significantly advanced at the rate of 1 to 8 DOY per decade, ED is significantly delayed at the rate of 1 to 5 DOY per decade, and GSL is significantly prolonged at the rate of 1 to 11 days per decade.
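The GDD indices above follow a standard clamped accumulation; the abstract does not spell out its exact formula, so the sketch below assumes the common convention of clamping each day's mean temperature to the [base, upper] range before summing. Temperatures are illustrative:

```python
def gdd_sum(tmean_daily, base, upper):
    """Accumulate growing degree days with each day's mean temperature
    clamped to [base, upper], e.g. GDD10,30 for the corn growth range."""
    return sum(max(base, min(t, upper)) - base for t in tmean_daily)

# Illustrative daily mean temperatures in degrees C
tmean = [8.0, 12.0, 25.0, 33.0]
# Contributions for GDD10,30: 0 + 2 + 15 + 20 = 37 degree-days
gdd_10_30 = gdd_sum(tmean, 10.0, 30.0)
```

The same function reproduces the other thresholds in the list (GDD0,44, GDD5,35, GDD20,22) by changing the base and upper limits.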
Hebron, Judith; Oldfield, Jeremy; Humphrey, Neil
2017-04-01
Students with autism are more likely to be bullied than their typically developing peers. However, several studies have shown that their likelihood of being bullied increases in the context of exposure to certain risk factors (e.g. behaviour difficulties and poor peer relationships). This study explores vulnerability to bullying from a cumulative risk perspective, where the number of risks rather than their nature is considered. A total of 722 teachers and 119 parents of young people with autism spectrum conditions participated in the study. Established risk factors were summed to form a cumulative risk score in teacher and parent models. There was evidence of a cumulative risk effect in both models, suggesting that as the number of risks increased, so did exposure to bullying. A quadratic effect was found in the teacher model, indicating that there was a disproportionate increase in the likelihood of being bullied in relation to the number of risk factors to which a young person was exposed. In light of these findings, it is proposed that more attention needs to be given to the number of risks to which children and young people with autism spectrum conditions are exposed when planning interventions and providing a suitable educational environment.
Oldfield, Jeremy; Humphrey, Neil; Hebron, Judith
2015-01-01
Research has identified multiple risk factors for the development of behaviour difficulties. What have been less explored are the cumulative effects of exposure to multiple risks on behavioural outcomes, with no study specifically investigating these effects within a population of young people with special educational needs and disabilities (SEND). Furthermore, it is unclear whether a threshold or linear risk model better fits the data for this population. The sample included 2660 children and 1628 adolescents with SEND. Risk factors associated with increases in behaviour difficulties over an 18-month period were summed to create a cumulative risk score, with this explanatory variable being added into a multi-level model. A quadratic term was then added to test the threshold model. There was evidence of a cumulative risk effect, suggesting that exposure to higher numbers of risk factors, regardless of their exact nature, resulted in increased behaviour difficulties. The relationship between risk and behaviour difficulties was non-linear, with exposure to increasing risk having a disproportionate and detrimental impact on behaviour difficulties in child and adolescent models. Interventions aimed at reducing behaviour difficulties need to consider the impact of multiple risk variables. Tailoring interventions towards those exposed to large numbers of risks would be advantageous. Copyright © 2015 Elsevier Ltd. All rights reserved.
Mazzaferro, Vincenzo; Sposito, Carlo; Zhou, Jian; Pinna, Antonio D; De Carlis, Luciano; Fan, Jia; Cescon, Matteo; Di Sandro, Stefano; Yi-Feng, He; Lauterio, Andrea; Bongini, Marco; Cucchetti, Alessandro
2018-01-01
Outcomes of liver transplantation for hepatocellular carcinoma (HCC) are determined by cancer-related and non-related events. Treatments for hepatitis C virus infection have reduced non-cancer events among patients receiving liver transplants, so reducing HCC-related death might be an actionable end point. We performed a competing-risk analysis to evaluate factors associated with survival of patients with HCC and developed a prognostic model based on features of HCC patients before liver transplantation. We performed multivariable competing-risk regression analysis to identify factors associated with HCC-specific death of patients who underwent liver transplantation. The training set comprised 1018 patients who underwent liver transplantation for HCC from January 2000 through December 2013 at 3 tertiary centers in Italy. The validation set comprised 341 consecutive patients who underwent liver transplantation for HCC during the same period at the Liver Cancer Institute in Shanghai, China. We collected pretransplantation data on etiology of liver disease, number and size of tumors, patient level of α-fetoprotein (AFP), model for end-stage liver disease score, tumor stage, numbers and types of treatment, response to treatments, tumor grade, microvascular invasion, dates, and causes of death. Death was defined as HCC-specific when related to HCC recurrence after transplantation, disseminated extra- and/or intrahepatic tumor relapse and worsened liver function in presence of tumor spread. The cumulative incidence of death was segregated for hepatitis C virus status. In the competing-risk regression, the sum of tumor number and size and of log 10 level of AFP were significantly associated with HCC-specific death (P < .001), returning an average c-statistic of 0.780 (95% confidence interval, 0.763-0.798). Five-year cumulative incidence of non-HCC-related death was 8.6% in HCV-negative patients and 18.1% in HCV-positive patients. 
For patients with HCC to have a 70% chance of HCC-specific survival 5 years after transplantation, their level of AFP should be <200 ng/mL and the sum of number and size of tumors (in centimeters) should not exceed 7; if the level of AFP was 200-400 ng/mL, the sum of the number and size of tumors should be ≤5; if their level of AFP was 400-1000 ng/mL, the sum of the number and size of tumors should be ≤4. In the validation set, the model identified patients who survived 5 years after liver transplantation with 0.721 accuracy (95% confidence interval, 0.648-0.793). Our model, based on patients' level of AFP and HCC number and size, outperformed the Milan, University of California San Francisco, Shanghai-Fudan, and Up-to-7 criteria (P < .001), as well as the AFP French model (P = .044), in predicting which patients will survive for 5 years after liver transplantation. We developed a model based on level of AFP, tumor size, and tumor number, to determine risk of death from HCC-related factors after liver transplantation. This model might be used to select end points and refine selection criteria for liver transplantation for patients with HCC. To predict 5-year survival and risk of HCC-related death using an online calculator, please see www.hcc-olt-metroticket.org/. ClinicalTrials.gov ID NCT02898415. Copyright © 2018 AGA Institute. Published by Elsevier Inc. All rights reserved.
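The AFP-stratified burden rule quoted above can be expressed directly as code. This is a sketch only: tumor burden is taken here as number of tumors plus diameter of the largest tumor in centimeters (following the up-to-seven convention, which the abstract's "sum of number and size" does not fully disambiguate), and AFP above 1000 ng/mL falls outside the ranges given, so it is treated as not meeting the criterion:

```python
def meets_70pct_survival_criterion(afp_ng_ml, n_tumors, largest_tumor_cm):
    """AFP-stratified rule for a ~70% chance of 5-year HCC-specific
    survival after transplantation, as quoted in the abstract.
    Burden = tumor count + largest tumor diameter (cm), an assumption;
    AFP > 1000 ng/mL is outside the stated ranges and returns False."""
    burden = n_tumors + largest_tumor_cm
    if afp_ng_ml < 200:
        return burden <= 7
    if afp_ng_ml <= 400:
        return burden <= 5
    if afp_ng_ml <= 1000:
        return burden <= 4
    return False

# Three tumors, largest 4 cm (burden 7): acceptable only at low AFP
ok_low_afp = meets_70pct_survival_criterion(100, 3, 4.0)
ok_mid_afp = meets_70pct_survival_criterion(300, 3, 4.0)
```

The stratification makes the clinical trade-off explicit: the higher the AFP level, the smaller the tumor burden the rule tolerates.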
[Early detection on the onset of scarlet fever epidemics in Beijing, using the Cumulative Sum].
Li, Jing; Yang, Peng; Wu, Shuang-sheng; Wang, Xiao-li; Liu, Shuang; Wang, Quan-yi
2013-05-01
Based on scarlet fever data collected from the Disease Surveillance Information Reporting System in Beijing from 2005 to 2011, this study explored the efficiency of the Cumulative Sum (CUSUM) method in detecting the onset of scarlet fever epidemics. The models C1-MILD (C1), C2-MEDIUM (C2) and C3-ULTRA (C3) were used. Youden's index and detection time were calculated as evaluation tools to optimize the parameters and select the optimal model. Data from 2011 scarlet fever surveillance were used to verify the efficacy of these models. C1 (k = 0.5, H = 2σ), C2 (k = 0.7, H = 2σ) and C3 (k = 1.1, H = 2σ) appeared to be the optimal parameters for these models. Youden's index of C1 was 83.0% with a detection time of 0.64 weeks; Youden's index of C2 was 85.4% with a detection time of 1.27 weeks; Youden's index of C3 was 85.1% with a detection time of 1.36 weeks. Among the three early warning detection models, C1 had the highest efficacy. All three models triggered signals within 4 weeks after the onset of scarlet fever epidemics. The CUSUM early warning detection model could be used to detect the onset of scarlet fever epidemics, with good efficacy.
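A simplified version of such a CUSUM early-warning scheme might look as follows. This is a sketch, not the exact C1-C3 models (whose baseline windows differ); the baseline mean and standard deviation, the reference value k, and the threshold h (here H = 2σ in standardized units) are illustrative:

```python
def cusum_alarm(counts, baseline_mean, baseline_sd, k=0.5, h=2.0):
    """One-sided CUSUM on standardized weekly case counts: accumulate
    (z - k) with resetting at zero and signal when the cumulative sum
    exceeds h. Returns the index of the first alarm week, or None."""
    c = 0.0
    for week, x in enumerate(counts):
        z = (x - baseline_mean) / baseline_sd
        c = max(0.0, c + z - k)
        if c > h:
            return week
    return None

# Baseline of ~20 cases/week (sd 4), then an outbreak doubling the counts
counts = [19, 21, 20, 22, 18, 20, 40, 44, 47]
alarm_week = cusum_alarm(counts, baseline_mean=20.0, baseline_sd=4.0)
```

The k and H parameters trade sensitivity against false alarms, which is exactly what the Youden's-index optimization in the study tunes.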
NASA Astrophysics Data System (ADS)
Hilliard, Antony
Energy Monitoring and Targeting (M&T) is a well-established business process that develops information about utility energy consumption in a business or institution. While M&T has persisted as a worthwhile energy conservation support activity, it has not been widely adopted. This dissertation explains M&T challenges in terms of diagnosing and controlling energy consumption, informed by a naturalistic field study of M&T work. A Cognitive Work Analysis of M&T identifies structures that diagnosis can search, information flows un-supported in canonical support tools, and opportunities to extend the most popular tool for M&T: Cumulative Sum of Residuals (CUSUM) charts. A design application outlines how CUSUM charts were augmented with a more contemporary statistical change detection strategy, Recursive Parameter Estimates, modified to better suit the M&T task using Representation Aiding principles. The design was experimentally evaluated in a controlled M&T synthetic task, and was shown to significantly improve diagnosis performance.
A Workflow for Global Sensitivity Analysis of PBPK Models
McNally, Kevin; Cotton, Richard; Loizou, George D.
2011-01-01
Physiologically based pharmacokinetic (PBPK) models have a potentially significant role in the development of a reliable predictive toxicity testing strategy. The structures of PBPK models are ideal frameworks into which disparate in vitro and in vivo data can be integrated and utilized to translate information generated using alternatives to animal measures of toxicity, together with human biological monitoring data, into plausible corresponding exposures. However, these models invariably include descriptions of well-known non-linear biological processes such as enzyme saturation, and interactions between parameters such as organ mass and body mass. Therefore, an appropriate sensitivity analysis (SA) technique is required which can quantify the influences associated with individual parameters, interactions between parameters, and any non-linear processes. In this report we have defined the elements of a workflow for SA of PBPK models that is computationally feasible, accounts for interactions between parameters, and can be displayed in the form of a bar chart and cumulative sum line (Lowry plot), which we believe is intuitive and appropriate for toxicologists, risk assessors, and regulators. PMID:21772819
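The two ingredients of a Lowry plot, ranked sensitivity bars and their cumulative sum line, can be computed with a short sketch. The parameter names and index values below are hypothetical, not results from the paper:

```python
def lowry_components(main_effects):
    """Rank main-effect sensitivity indices and compute their running
    cumulative sum: the bars and the line of a Lowry plot."""
    ordered = sorted(main_effects.items(), key=lambda kv: kv[1], reverse=True)
    names = [name for name, _ in ordered]
    bars = [value for _, value in ordered]
    running, cumulative = 0.0, []
    for value in bars:
        running += value
        cumulative.append(running)
    return names, bars, cumulative

# Hypothetical main-effect indices for a small PBPK model
effects = {"body_mass": 0.40, "liver_mass": 0.25, "Vmax": 0.20, "Km": 0.10}
names, bars, cumulative = lowry_components(effects)
```

Plotting `bars` as a bar chart with `cumulative` overlaid as a line reproduces the display the authors recommend; the gap between the cumulative line and 1.0 bounds the contribution of interactions.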
Cumulative Interference to Aircraft Radios from Multiple Portable Electronic Devices
NASA Technical Reports Server (NTRS)
Nguyen, Truong X.
2005-01-01
Cumulative interference effects from portable electronic devices (PEDs) located inside a passenger cabin are conservatively estimated for aircraft radio receivers. PEDs' emission powers in an aircraft radio frequency band are first scaled according to their locations' interference path loss (IPL) values, and the results are summed to determine the total interference power. The multiple-equipment factor (MEF) is determined by normalizing the result against the worst-case contribution from a single device. Conservative assumptions were made and MEF calculations were performed for the Boeing 737's Localizer, Glide-slope, Traffic Collision Avoidance System, and Very High Frequency Communication radio systems, where full-aircraft IPL data were available. The results show the MEF for these systems to vary between 10 and 14 dB. The same process was also applied to the more popular window/door IPL data, and the comparison shows that the multiple-equipment-factor results agree within one decibel (dB).
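The MEF calculation described above reduces to summing path-loss-scaled powers and normalizing by the worst single contributor. A sketch with hypothetical device powers (already scaled by IPL, in milliwatts at the victim receiver):

```python
import math

def multiple_equipment_factor_db(scaled_powers_mw):
    """MEF in dB: total interference power from all devices, normalized
    by the worst-case single device, after each device's emission has
    been scaled by its location's interference path loss."""
    total = sum(scaled_powers_mw)
    worst = max(scaled_powers_mw)
    return 10.0 * math.log10(total / worst)

# Ten identical devices: powers add tenfold over the worst one -> 10 dB
mef_uniform = multiple_equipment_factor_db([1.0] * 10)
```

With identical contributors the MEF is 10·log10(N), so the reported 10-14 dB range corresponds to an effective 10 to 25 equal worst-case devices.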
Effects of stressful life events in young black men with high blood pressure.
Han, Hae-Ra; Kim, Miyong T; Rose, Linda; Dennison, Cheryl; Bone, Lee; Hill, Martha N
2006-01-01
1) To describe stressful life events as experienced by a sample of young Black men with high blood pressure (HBP) living in inner-city Baltimore, Maryland; and 2) to examine the effect of cumulative stressful life events on substance use, depression, and quality of life. Data were obtained over 48 months by interview from 210 men in an HBP management study. Stressors repeatedly occurring over time included death of family member or close friend (65.2%), having a new family member (32.9%), change in residence (31.4%), difficulty finding a job (24.3%), and fired or laid off from work (17.6%). Involvement with crime or legal matters was reported at least twice during the 48 months by 33.3% of men. When a cumulative stressful life events score was calculated by summing the number of events experienced at 6-month points over 48 months and tested for its relationship with the health outcomes, the findings of multivariate analyses revealed significant associations between cumulative life stressors and depression and quality of life. No significant relationship was found between stressful life events and substance use. The results suggest that cumulative stressful life events have a negative effect on mental health and quality of life in young Black men with HBP. Future study should focus on developing interventions to assist individuals in managing distress related to stressful events with necessary community resources.
Grummon, Anna H; Vaughn, Amber; Jones, Deborah J; Ward, Dianne S
2017-08-01
Children exposed to multiple stressors are more likely to be overweight, but little is known about the mechanisms explaining this association. This cross-sectional study examined whether children exposed to multiple stressors had higher waist circumference, and whether this association was mediated through children's television time. Participants were 319 parent-child dyads. Children were 2-5 years old and had at least one overweight parent (BMI ≥ 25 kg/m 2 ). Data were collected at baseline of a larger childhood obesity prevention study and included information on psychosocial stressors (e.g., parenting stress), demographic stressors (e.g., low income), children's television time, and children's waist circumference. Two cumulative risk scores were created by summing stressors in each domain (demographic and psychosocial). Mediation and moderated mediation analyses were conducted. Indirect effects of both cumulative risk scores on waist circumference through television time were not significant; however, moderated mediation analyses found significant moderation by gender. The indirect effects of both risk scores on waist circumference through television time were significant and positive for girls, but near-zero for boys. Reducing television time should be explored as a strategy for buffering against the negative health effects of exposure to multiple stressors among girls. Longitudinal and intervention research is needed to confirm these results and to identify mediating factors between cumulative risk and body weight among boys.
Stevenson, Douglass E; Michels, Gerald J; Bible, John B; Jackman, John A; Harris, Marvin K
2008-10-01
Field observations at three locations in the Texas High Plains were used to develop and validate a degree-day phenology model to predict the onset and proportional emergence of Diabrotica virgifera virgifera LeConte (Coleoptera: Chrysomelidae) adults. Climatic data from the Texas High Plains Potential Evapotranspiration network were used with records of cumulative proportional adult emergence to determine the functional lower developmental temperature, optimum starting date, and the sum of degree-days for phenological events from onset to 99% adult emergence. The model base temperature, 10 degrees C (50 degrees F), corresponds closely to known physiological lower limits for development. The model uses a modified Gompertz equation, y = 96.5 × exp(-exp(6.0 - 0.00404 × (x - 4.0))), where x is cumulative heat (degree-days), to predict y, cumulative proportional emergence expressed as a percentage. The model starts degree-day accumulation on the date of corn, Zea mays L., emergence, and predictions correspond closely to corn phenological stages from tasseling to black layer development. Validation shows the model predicts cumulative proportional adult emergence within a satisfactory interval of 4.5 d. The model is flexible enough to accommodate early planting, late emergence, and the effects of drought and heat stress. The model provides corn producers ample lead time to anticipate and implement adult control practices.
Seidler, Andreas; Bolm-Audorff, Ulrich; Abolmaali, Nasreddin; Elsner, Gine
2008-01-01
Objectives To examine the dose-response relationship between cumulative exposure to kneeling and squatting, as well as to lifting and carrying of loads, and symptomatic knee osteoarthritis (OA) in a population-based case-control study. Methods In five orthopedic clinics and five practices we recruited 295 male patients aged 25 to 70 with radiographically confirmed knee osteoarthritis associated with chronic complaints. A total of 327 male control subjects were recruited. Data were gathered in a structured personal interview. To calculate cumulative exposure, the self-reported duration of kneeling and squatting, as well as the duration of lifting and carrying of loads, were summed over the entire working life. Results The results of our study support a dose-response relationship between kneeling/squatting and symptomatic knee osteoarthritis. For a cumulative exposure to kneeling and squatting > 10,800 hours, the risk of having radiographically confirmed knee osteoarthritis as measured by the odds ratio (adjusted for age, region, weight, jogging/athletics, and lifting or carrying of loads) is 2.4 (95% CI 1.1-5.0) compared to unexposed subjects. Lifting and carrying of loads is significantly associated with knee osteoarthritis independent of kneeling or similar activities. Conclusion As the knee osteoarthritis risk is strongly elevated in occupations that involve both kneeling/squatting and heavy lifting/carrying, preventive efforts should particularly focus on these "high-risk occupations". PMID:18625053
Knutson, Allen E; Muegge, Mark A
2010-06-01
Field observations from pecan, Carya illinoinensis (Wangenh.) Koch, orchards in Texas were used to develop and validate a degree-day model of cumulative proportional adult flight and oviposition and date of first observed nut entry by larvae of the first summer generation of the pecan nut casebearer, Acrobasis nuxvorella Neunzig (Lepidoptera: Pyralidae). The model was initiated on the date of first sustained capture of adults in pheromone traps. Mean daily maximum and minimum temperatures were used to determine the sum of degree-days from onset to 99% moth flight and oviposition and the date on which first summer generation larvae were first observed penetrating pecan nuts. Cumulative proportional oviposition (y) was described by a modified Gompertz equation, y = 106.05 * exp(-exp(3.11 - 0.00669 * (x - 1))), with x = cumulative degree-days at a base temperature of 3.33 degrees C. Cumulative proportional moth flight (y) was modeled as y = 102.62 * exp(-exp(1.49 - 0.00571 * (x - 1))). Model prediction error for dates of 10, 25, 50, 75, and 90% cumulative oviposition was 1.3 d, and 83% of the predicted dates were within +/- 2 d of the observed event. Prediction error for date of first observed nut entry was 2.2 d, and 77% of model predictions were within +/- 2 d of the observed event. The model provides ample lead time for producers to implement orchard scouting to assess pecan nut casebearer infestations and to apply an insecticide if needed to prevent economic loss.
A sequential test for assessing observed agreement between raters.
Bersimis, Sotiris; Sachlas, Athanasios; Chakraborti, Subha
2018-01-01
Assessing the agreement between two or more raters is an important topic in medical practice. Existing techniques, which deal with categorical data, are based on contingency tables. This is often an obstacle in practice, as we may have to wait a long time to collect the appropriate sample size of subjects to construct the contingency table. In this paper, we introduce a nonparametric sequential test for assessing agreement, which can be applied as data accrue and does not require a contingency table, facilitating a rapid assessment of the agreement. The proposed test is based on the cumulative sum of the number of disagreements between the two raters and a suitable statistic representing the waiting time until the cumulative sum exceeds a predefined threshold. We treat the cases of testing two raters' agreement with respect to one or more characteristics and using two or more classification categories, the case where the two raters disagree extremely, and finally the case of testing the agreement of more than two raters. The numerical investigation shows that the proposed test has excellent performance. Compared to existing methods, the proposed method appears to require a significantly smaller sample size with equivalent power. Moreover, the proposed method is easily generalizable and brings the problem of assessing the agreement between two or more raters on one or more characteristics under a unified framework, thus providing an easy-to-use tool for medical practitioners. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
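The core running-sum idea can be sketched in a few lines. The function below is a hypothetical illustration only: it accumulates disagreements as subjects accrue and reports when a threshold is first crossed, omitting the paper's waiting-time calibration of significance:

```python
def disagreement_cusum(ratings_a, ratings_b, threshold):
    """Minimal sketch: accumulate disagreements between two raters as
    subjects accrue and return the first 1-based index at which the
    cumulative count reaches `threshold`, or None if it never does.
    The paper's actual test also uses the distribution of this
    waiting time to calibrate significance, which is omitted here."""
    cusum = 0
    for i, (a, b) in enumerate(zip(ratings_a, ratings_b), start=1):
        cusum += (a != b)
        if cusum >= threshold:
            return i
    return None

# Example: the third disagreement occurs at subject 5.
assert disagreement_cusum([1, 1, 0, 1, 0, 1], [0, 1, 1, 1, 1, 1], 3) == 5
```

Because the statistic updates one subject at a time, the test can run online as data accrue rather than waiting for a full contingency table.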
Parker, R.S.; Litke, D.W.
1987-01-01
The cumulative effects of changes in dissolved solids from a number of coal mines need to be determined to evaluate effects on downstream water use. A model for determining cumulative effects on streamflow, dissolved-solids concentration, and dissolved-solids load was calibrated for the Yampa River and its tributaries in northwestern Colorado. The model uses accounting principles. It establishes nodes on the stream system and sums water quantity and quality from node to node in the downstream direction. The model operates on a monthly time step for the study period, which includes water years 1976 through 1981. Output is monthly mean streamflow, dissolved-solids concentration, and dissolved-solids load. Streamflow and dissolved-solids data from streamflow-gaging stations and other data-collection sites were used to define input data sets to initiate and to calibrate the model. The model was calibrated at four nodes and generally was within 10 percent of the observed values. The calibrated model can compute changes in dissolved-solids concentration or load resulting from the cumulative effects of new coal mines or the expansion of old coal mines in the Yampa River basin. (USGS)
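The accounting principle is straightforward to sketch. The example below routes illustrative tributary inputs (not Yampa River data) from node to node, summing flow and load and deriving concentration:

```python
def route_downstream(nodes):
    """Node-accounting sketch: sum streamflow (Q) and dissolved-solids
    load (L) from node to node in the downstream direction; the
    concentration at each node is load/flow. Inputs are illustrative
    tributary contributions per node (Q in m3/s, L in tonnes/day)."""
    q_tot, l_tot, out = 0.0, 0.0, []
    for q_in, l_in in nodes:
        q_tot += q_in          # accumulate flow downstream
        l_tot += l_in          # accumulate dissolved-solids load
        out.append((q_tot, l_tot, l_tot / q_tot))
    return out

results = route_downstream([(10.0, 50.0), (5.0, 40.0), (20.0, 30.0)])
q, load, conc = results[-1]
assert (q, load) == (35.0, 120.0)
```

Adding a new mine amounts to inserting an extra load term at its node and re-running the summation, which is how cumulative-effect scenarios can be compared.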
Web-Based Model Visualization Tools to Aid in Model Optimization and Uncertainty Analysis
NASA Astrophysics Data System (ADS)
Alder, J.; van Griensven, A.; Meixner, T.
2003-12-01
Individuals applying hydrologic models need quick, easy-to-use visualization tools that permit them to assess and understand model performance. We present here the Interactive Hydrologic Modeling (IHM) visualization toolbox. The IHM utilizes high-speed Internet access, the portability of the web, and the increasing power of modern computers to provide an online toolbox for quick and easy visualization of model results. This visualization interface allows for the interpretation and analysis of Monte-Carlo and batch model simulation results. Often a given project will generate several thousand or even hundreds of thousands of simulations. This large number of simulations creates a challenge for post-simulation analysis. IHM's goal is to solve this problem by loading all of the data into a database with a web interface that can dynamically generate graphs for the user according to their needs. IHM currently supports: a global sample statistics table (e.g. sum of squares error, sum of absolute differences, etc.), a top-ten simulations table and graphs, graphs of an individual simulation using time step data, objective-based dotty plots, threshold-based parameter cumulative distribution function graphs (as used in the regional sensitivity analysis of Spear and Hornberger), and 2D error surface graphs of the parameter space. IHM is suitable for everything from the simplest bucket model to the largest set of Monte-Carlo model simulations with a multi-dimensional parameter and model output space. By using a web interface, IHM offers the user complete flexibility in the sense that they can be anywhere in the world using any operating system. IHM can be a time- and money-saving alternative to spending time producing graphs, conducting analysis that may not be informative, or being forced to purchase expensive and proprietary software. IHM is a simple, free method of interpreting and analyzing batch model results, and is suitable for novice to expert hydrologic modelers.
Statistical properties of two sine waves in Gaussian noise.
NASA Technical Reports Server (NTRS)
Esposito, R.; Wilson, L. R.
1973-01-01
A detailed study is presented of some statistical properties of a stochastic process that consists of the sum of two sine waves of unknown relative phase and a normal process. Since none of the statistics investigated seems to yield a closed-form expression, all the derivations are cast in a form that is particularly suitable for machine computation. Specifically, results are presented for the probability density function (pdf) of the envelope and the instantaneous value, the moments of these distributions, and the corresponding cumulative distribution function (cdf).
NASA Technical Reports Server (NTRS)
Hinson, E. W.
1981-01-01
The preliminary analysis and data analysis system development for the shuttle upper atmosphere mass spectrometer (SUMS) experiment are discussed. The SUMS experiment is designed to provide free stream atmospheric density, pressure, temperature, and mean molecular weight for the high altitude, high Mach number region.
Leandro, G; Rolando, N; Gallus, G; Rolles, K; Burroughs, A
2005-01-01
Background: Monitoring clinical interventions is an increasing requirement in current clinical practice. Standard CUSUM (cumulative sum) charts are used for this purpose. However, they are difficult to use in terms of identifying the point at which outcomes begin to fall outside recommended limits. Objective: To assess the Bernoulli CUSUM chart, which permits not only a 100% inspection rate but also the setting of average expected outcomes, maximum deviations from these, and false positive rates for the alarm signal to trigger. Methods: As a working example this study used 674 consecutive first liver transplant recipients. The expected one year mortality was set at 24%, the European Liver Transplant Registry average. A standard CUSUM was compared with a Bernoulli CUSUM: the control value mortality was therefore 24%, the maximum accepted mortality 30%, and the average number of observations to signal was 500; that is, the likelihood of a false positive alarm was 1:500. Results: The standard CUSUM showed an initial descending curve (nadir at patient 215), then ascended progressively, indicating better performance. The Bernoulli CUSUM gave three alarm signals initially, with easily recognised breaks in the curve. There were no alarm signals after patient 143, indicating satisfactory performance within the criteria set. Conclusions: The Bernoulli CUSUM is more easily interpretable graphically and is more suitable for monitoring outcomes than the standard CUSUM chart. It requires only three parameters to be set to monitor any clinical intervention: the average expected outcome, the maximum deviation from this, and the rate of false positive alarm triggers. PMID:16210461
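A standard Bernoulli CUSUM recursion can be sketched as follows; the decision limit h below is illustrative, whereas the paper tunes it so that the in-control average number of observations to signal is 500:

```python
import math

def bernoulli_cusum(outcomes, p0=0.24, p1=0.30, h=3.0):
    """Sketch of a Bernoulli CUSUM over binary outcomes (1 = death).
    p0 is the acceptable mortality (24%) and p1 the maximum accepted
    mortality (30%), as in the abstract; h = 3.0 is an assumed
    decision limit, not the value implied by the paper's 1:500 false
    alarm rate. Each outcome adds its log-likelihood-ratio increment,
    the sum is floored at zero, and an alarm fires when it reaches h."""
    up = math.log(p1 / p0)                 # increment for a death
    down = math.log((1 - p1) / (1 - p0))   # increment for a survival
    s, alarms = 0.0, []
    for i, y in enumerate(outcomes, start=1):
        s = max(0.0, s + (up if y else down))
        if s >= h:
            alarms.append(i)
            s = 0.0                        # restart after an alarm
    return alarms

# A run of deaths drives the statistic over the limit; survivals do not.
assert bernoulli_cusum([1] * 20) != []
assert bernoulli_cusum([0] * 20) == []
```

The floor at zero is what makes breaks in the curve easy to recognise: satisfactory stretches reset the statistic instead of banking credit.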
Kollmann-Camaiora, A; Brogly, N; Alsina, E; Gilsanz, F
2017-10-01
Although ultrasound is a basic competence for anaesthesia residents (AR), there are few data available on the learning process. This prospective observational study aims to assess the learning process of ultrasound-guided continuous femoral nerve block and to determine the number of procedures that a resident would need to perform in order to reach proficiency, using the cumulative sum (CUSUM) method. We recruited 19 AR without previous experience. Learning curves were constructed using the CUSUM method for ultrasound-guided continuous femoral nerve block considering 2 success criteria: a decrease in pain score of >2 on a [0-10] scale after 15 minutes, and the time required to perform the block. We analysed data from 17 AR for a total of 237 ultrasound-guided continuous femoral nerve blocks. 8/17 AR became proficient for pain relief; however, all the AR who did more than 12 blocks (8/8) became proficient. As for time of performance, 5/17 AR achieved the objective of 12 minutes; however, all the AR who did more than 20 blocks (4/4) achieved it. The number of procedures needed to achieve proficiency seems to be 12; however, it takes more procedures to reduce performance time. The CUSUM methodology could be useful in training programs to allow early interventions in case of repeated failures and to develop competence-based curricula. Copyright © 2017 Sociedad Española de Anestesiología, Reanimación y Terapéutica del Dolor. Publicado por Elsevier España, S.L.U. All rights reserved.
Spengler, John D.; Harley, Amy E.; Stoddard, Anne; Yang, May; Alvarez-Reeves, Marty; Sorensen, Glorian
2014-01-01
Objectives. We explored prevalence and clustering of key environmental conditions in low-income housing and associations with self-reported health. Methods. The Health in Common Study, conducted between 2005 and 2009, recruited participants (n = 828) from 20 low-income housing developments in the Boston area. We interviewed 1 participant per household and conducted a brief inspection of the unit (apartment). We created binary indexes and a summed index for household exposures: mold, combustion by-products, secondhand smoke, chemicals, pests, and inadequate ventilation. We used multivariable logistic regression to examine the associations between each index and household characteristics and between each index and self-reported health. Results. Environmental problems were common; more than half of homes had 3 or more exposure-related problems (median summed index = 3). After adjustment for household-level demographics, we found clustering of problems in site (P < .01) for pests, combustion byproducts, mold, and ventilation. Higher summed index values were associated with higher adjusted odds of reporting fair–poor health (odds ratio = 2.7 for highest category; P < .008 for trend). Conclusions. We found evidence that indoor environmental conditions in multifamily housing cluster by site and that cumulative exposures may be associated with poor health. PMID:24028244
Schneiderman, Inna; Kanat-Maymon, Yaniv; Ebstein, Richard P.
2014-01-01
Empathic communication between couples plays an important role in relationship quality and individual well-being and research has pointed to the role of oxytocin in providing the neurobiological substrate for pair-bonding and empathy. Here, we examined links between genetic variability on the oxytocin receptor gene (OXTR) and empathic behaviour at the initiation of romantic love. Allelic variations on five OXTR single nucleotide polymorphisms (SNPs) previously associated with susceptibility to disorders of social functioning were genotyped in 120 new lovers: OXTR rs13316193, rs2254298, rs1042778, rs2268494 and rs2268490. Cumulative genetic risk was computed by summing risk alleles on each SNP. Couples were observed in support-giving interaction and behaviour was coded for empathic communication, including affective congruence, maintaining focus on partner, acknowledging partner's distress, reciprocal exchange and non-verbal empathy. Hierarchical linear modelling indicated that individuals with high OXTR risk exhibited difficulties in empathic communication. OXTR risk predicted empathic difficulties above and beyond the couple level, relationship duration, and anxiety and depressive symptoms. Findings underscore the involvement of oxytocin in empathic behaviour during the early stages of social affiliation, and suggest the utility of cumulative risk and plasticity indices on the OXTR as potential biomarkers for research on disorders of social dysfunction and the neurobiology of empathy. PMID:23974948
Nurius, Paula S.; Prince, Dana M.; Rocha, Anita
2015-01-01
Purpose The accumulation of disadvantage has been shown to increase psychosocial stressors that impact life course well-being. This study tests for significant differences, based on disadvantage exposure, in youths' emotional and physical health, as well as family supports, peer assets, and academic success, which hold potential for resilience and amelioration of negative health outcomes. Methods A 12-item cumulative disadvantage summed index, derived from surveys of a racially and socioeconomically diverse sample of urban high school seniors (n=9,658), was used to distinguish youth at low, moderate, and high levels. Results Findings supported hypothesized stepped patterns such that as multiple disadvantages accumulate, a concomitant decline is evident across the assessed outcome variables (except positive academic identity). Post-hoc tests indicated that the groups were significantly different from one another. Discussion Overall, results lend support to an additive stress load associated with stacked disadvantage, with implications for continuing trends into adulthood as well as for preventive interventions. PMID:26617431
The new economics of labour migration and the role of remittances in the migration process.
Taylor, J E
1999-01-01
This analysis considers international migration remittances and their impact on development in migrant-sending areas. The new economics of labor migration (NELM) posits that remittances lessen production and market constraints faced by households in poor developing countries. The article states that remittances may be a positive factor in economic development, which should be nurtured by economic policies. The impact of remittances and migration on development varies across locales and is influenced by migrants' remittance behavior and by economic contexts. Criteria for measuring development gains may include assessments of income growth, inequity, and poverty alleviation. It is hard to gauge the level of remittances, especially when remittances do not flow through formal banking systems. The International Monetary Fund distinguishes among worker remittances, sent home by migrants abroad for over 1 year; employee compensation, including the value of in-kind benefits, for stays under 1 year; and migrants' transfers, the net worth of migrants who move between countries. This sum amounted to under $2 billion in 1970 and $70 billion in 1995. The cumulative sum of remittances, employee compensation, and transfers during 1980-95 was almost $1 trillion, of which almost 66% was worker remittances, 25% was employee compensation, and almost 10% was transfers. Total world remittances surpass overseas development assistance. Remittances are unequally distributed across and between countries. Migration research does not adequately reveal the range and complexity of impacts. Push factors can limit options for use of remittances to stimulate development.
Environmental variability and chum salmon production in the northwestern Pacific Ocean
NASA Astrophysics Data System (ADS)
Kim, Suam; Kang, Sukyung; Kim, Ju Kyoung; Bang, Minkyoung
2017-12-01
Chum salmon, Oncorhynchus keta, are distributed widely in the North Pacific Ocean, and about 76% of chum salmon were caught from Russian, Japanese, and Korean waters of the northwestern Pacific Ocean during the last 20 years. Although it has been speculated that the recent increase in salmon production was aided not only by the enhancement program that targeted chum salmon but also by favorable ocean conditions since the early 1990s, the ecological processes determining the yield of salmon have not been clearly delineated. To investigate the relationship between yield and the controlling factors for ocean survival of chum salmon, time-series of climate indices, seawater temperature, and prey availability in the northwestern Pacific, including Korean waters, were analyzed using statistical tools. The results of cross-correlation function (CCF) analysis and cumulative sum (CuSum) of anomalies indicated that there were significant environmental changes in the North Pacific during the last century, and each regional stock of chum salmon responded to the Pacific Decadal Oscillation (PDO) differently: for the Russian stock, the correlations between the PDO index and catch were significantly negative with a time-lag of 0 and 1 years; for the Japanese stock, significantly positive with a time-lag of 0-2 years; and for the Korean stock, positive but with no significant correlation. The statistical analyses of Korean chum salmon also revealed a significant negative correlation between coastal seawater temperatures over 14°C and the return rate of spawning adults to the natal river.
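The CuSum-of-anomalies technique itself is simple to illustrate. The sketch below computes the cumulative sum of departures from the series mean, in which a regime shift appears as a change of slope (synthetic data, not the salmon series):

```python
def cusum_anomalies(series):
    """Cumulative sum of anomalies: the running sum of departures of
    each value from the series mean. A persistent change in the slope
    of this trajectory marks a regime shift. Minimal sketch of the
    technique only, not the authors' full CCF analysis."""
    mean = sum(series) / len(series)
    out, total = [], 0.0
    for v in series:
        total += v - mean
        out.append(total)
    return out

# A step change in the data shows up as a V-shaped CuSum trajectory.
cs = cusum_anomalies([1, 1, 1, 1, 3, 3, 3, 3])
assert abs(cs[-1]) < 1e-9   # the CuSum returns to ~0 at the series end
assert min(cs) == cs[3]     # turning point marks the regime shift
```

By construction the trajectory ends near zero, so the information is entirely in its shape: sustained below-mean and above-mean periods appear as descending and ascending limbs.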
Mduma, Estomih R; Ersdal, Hege; Kvaloy, Jan Terje; Svensen, Erling; Mdoe, Paschal; Perlman, Jeffrey; Kidanto, Hussein Lessio; Soreide, Eldar
2018-05-01
To trace and document smaller changes in perinatal survival over time. Prospective observational study, with retrospective analysis. Labor ward and operating theater at Haydom Lutheran Hospital in rural north-central Tanzania. All women giving birth and birth attendants. Helping Babies Breathe (HBB) simulation training on newborn care and resuscitation and some other efforts to improve perinatal outcome. Perinatal survival, including fresh stillbirths and early (24-h) newborn survival. The variable life-adjusted plot and cumulative sum chart revealed a steady improvement in survival over time, after the baseline period. There were some variations throughout the study period, and some of these could be linked to different interventions and events. To our knowledge, this is the first time statistical process control methods have been used to document changes in perinatal mortality over time in a rural Sub-Saharan hospital, showing a steady increase in survival. These methods can be utilized to continuously monitor and describe changes in patient outcomes.
Hubbert's Peak, The Coal Question, and Climate Change
NASA Astrophysics Data System (ADS)
Rutledge, D.
2008-12-01
The United Nations Intergovernmental Panel on Climate Change (IPCC) makes projections in terms of scenarios that include estimates of oil, gas, and coal production. These scenarios are defined in the Special Report on Emissions Scenarios or SRES (Nakicenovic et al., 2000). It is striking how different these scenarios are. For example, total oil production from 2005 to 2100 in the scenarios varies by 5:1 (Appendix SRES Version 1.1). Because production in some of the scenarios has not peaked by 2100, this ratio would be comparable to 10:1 if the years after 2100 were considered. The IPCC says "... the resultant 40 SRES scenarios together encompass the current range of uncertainties of future GHG [greenhouse gas] emissions arising from different characteristics of these models ..." (Nakicenovic et al., 2000, Summary for Policy Makers). This uncertainty is important for climate modeling, because it is larger than the likely range for the temperature sensitivity, which the IPCC gives as 2.3:1 (Gerard Meehl et al., 2007, the Fourth Assessment Report, Chapter 10, Global Climate Projections, p. 799). The uncertainty indicates that we could improve climate modeling if we could make a better estimate of future oil, gas, and coal production. We start by considering the two major fossil-fuel regions with substantial exhaustion, US oil and British coal. It turns out that simple normal and logistic curve fits to the cumulative production for these regions give quite stable projections for the ultimate production. By ultimate production, we mean total production, past and future. For US oil, the range for the fits for the ultimate is 1.15:1 (225- 258 billion barrels) for the period starting in 1956, when King Hubbert made his prediction of the peak year of US oil production. For UK coal, the range is 1.26:1 for the period starting in 1905, at the time of a Royal Commission on coal supplies. 
We extend this approach to find fits for world oil and gas production and, by a regional analysis, for world coal production. For world oil and gas production, the fit for the ultimate is 640 Gtoe (billion metric tons of oil equivalent). This is somewhat larger than the sum of cumulative production and reserves, 580 Gtoe. Because future discoveries are not included in the reserves, it is to be expected that our fit would be larger. On the other hand, there have been large increases in OPEC reserves that have not been subject to outside audit, so it is not clear how close the two numbers should be. For world coal, the sum of the fits for regional ultimate production is 660 Gt (billion metric tons). This is considerably less than the sum of cumulative production and reserves, 1,100 Gt, but it is consistent with the British experience, where until recently, reserves were a large multiple of future production. The projection is that we will have consumed half of the ultimate world oil, gas, and coal production by 2019. This means that the current intense development of alternative sources of energy can be justified independently of climate considerations. When these projections are converted to carbon equivalents, the projected future emissions from burning oil, gas, and coal from 2005 on are 520 GtC. The projected emissions for the 2005-2100 period are smaller than for any of the 40 SRES scenarios. This suggests that future scenarios should take exhaustion into account. These projections, if correct, are good news for climate change.
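The logistic-fit idea can be illustrated with the classic Hubbert linearization: for logistic growth, annual production divided by cumulative production falls linearly with cumulative production, and the line's intercept on the cumulative axis estimates the ultimate. A sketch with exact synthetic data (not the paper's regional fits):

```python
def hubbert_ultimate(production, cumulative):
    """Hubbert linearization sketch: fit a least-squares line to the
    points (Q, P/Q), where P is annual and Q cumulative production;
    for a logistic curve P/Q = k*(1 - Q/U), so the Q-intercept of the
    line is the ultimate production U. Synthetic inputs assumed."""
    xs = cumulative
    ys = [p / q for p, q in zip(production, cumulative)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return -intercept / slope   # Q at which P/Q reaches zero

# Exact logistic data with ultimate U = 240 recovers U from the fit.
U, k = 240.0, 0.05
Q = [60.0, 120.0, 180.0]
P = [k * q * (1 - q / U) for q in Q]
assert abs(hubbert_ultimate(P, Q) - U) < 1e-6
```

On real production series the points scatter, which is why the paper's observation that the fitted ultimate stabilizes over long periods is the substantive claim.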
County-level heat vulnerability of urban and rural residents in Tibet, China.
Bai, Li; Woodward, Alistair; Cirendunzhu; Liu, Qiyong
2016-01-12
Tibet is especially vulnerable to climate change due to the relatively rapid rise of temperature over past decades. The effects on mortality and morbidity of extreme heat in Tibet have been examined in previous studies; no heat adaptation initiatives have yet been implemented. We estimated heat vulnerability of urban and rural populations in 73 Tibetan counties and identified potential areas for public health intervention and further research. According to data availability and vulnerability factors identified previously in Tibet and elsewhere, we selected 10 variables related to advanced age, low income, illiteracy, physical and mental disability, small living spaces and living alone. We separately created and mapped county-level cumulative heat vulnerability indices for urban and rural residents by summing up factor scores produced by a principal components analysis (PCA). For both study populations, PCA yielded four factors with similar structure. The components for rural and urban residents explained 76.5 % and 77.7 % respectively of the variability in the original vulnerability variables. We found spatial variability of heat vulnerability across counties, with generally higher vulnerability in high-altitude counties. Although we observed similar median values and ranges of the cumulative heat vulnerability index values among urban and rural residents overall, the pattern varied strongly from one county to another. We have developed a measure of population vulnerability to high temperatures in Tibet. These are preliminary findings, but they may assist targeted adaptation plans in response to future rapid warming in Tibet.
Oshima, Taku; Furukawa, Yutaka; Kobayashi, Michihiko; Sato, Yumi; Nihei, Aya; Oda, Shigeto
2015-03-01
We sought to investigate the energy requirements of patients under therapeutic hypothermia and the relationship of energy fulfillment to patient outcome. Adult patients admitted to our ICU after successful resuscitation from cardiac arrest for post-resuscitation therapeutic hypothermia from April 2012 to March 2014 were enrolled. Body temperature was managed using a surface cooling device (Arctic Sun(®), IMI). The calorimeter module on the ventilator (Engström Carestation(®), GE) was used for indirect calorimetry. Energy expenditure (EE) and respiratory quotient (RQ) were recorded continuously as the average of the preceding 2 h. Measurements were started in the hypothermic phase and continued until rewarming was completed. Cumulative energy deficit was calculated as the sum of the differences between EE and daily energy provision over the 4 days of hypothermia therapy. Seven patients were eligible for analysis. Median EE in the hypothermic phase (day 1) was 1557.0 kcal d(-1). EE rose with body temperature, reaching 2375 kcal d(-1) in the normothermic phase. There was a significant association between cumulative energy deficit and the length of ICU stay among patients with good neurologic recovery (cerebral performance category (CPC) 1-3). The EE of patients under therapeutic hypothermia was higher than expected. Meeting the energy demand may improve patient outcome, as observed in the length of ICU stay in the present study. A larger, prospective study is awaited to validate the results of our study. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
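The cumulative energy deficit follows directly from its definition as a sum of daily shortfalls; the daily values below are illustrative, not patient data:

```python
def energy_deficit(ee_per_day, provided_per_day):
    """Cumulative energy deficit: the sum over the observation window
    of (measured energy expenditure - energy actually provided), as
    defined in the abstract's 4-day calculation. Values in kcal/day;
    the numbers used below are illustrative only."""
    return sum(e - p for e, p in zip(ee_per_day, provided_per_day))

# EE rises with rewarming while provision stays fixed at 1200 kcal/d.
deficit = energy_deficit([1557, 1700, 2000, 2375], [1200] * 4)
assert deficit == (1557 + 1700 + 2000 + 2375) - 4 * 1200
```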
The sumLINK statistic for genetic linkage analysis in the presence of heterogeneity.
Christensen, G B; Knight, S; Camp, N J
2009-11-01
We present the "sumLINK" statistic--the sum of multipoint LOD scores for the subset of pedigrees with nominally significant linkage evidence at a given locus--as an alternative to common methods to identify susceptibility loci in the presence of heterogeneity. We also suggest the "sumLOD" statistic (the sum of positive multipoint LOD scores) as a companion to the sumLINK. sumLINK analysis identifies genetic regions of extreme consistency across pedigrees without regard to negative evidence from unlinked or uninformative pedigrees. Significance is determined by an innovative permutation procedure based on genome shuffling that randomizes linkage information across pedigrees. This procedure for generating the empirical null distribution may be useful for other linkage-based statistics as well. Using 500 genome-wide analyses of simulated null data, we show that the genome shuffling procedure results in the correct type 1 error rates for both the sumLINK and sumLOD. The power of the statistics was tested using 100 sets of simulated genome-wide data from the alternative hypothesis from GAW13. Finally, we illustrate the statistics in an analysis of 190 aggressive prostate cancer pedigrees from the International Consortium for Prostate Cancer Genetics, where we identified a new susceptibility locus. We propose that the sumLINK and sumLOD are ideal for collaborative projects and meta-analyses, as they do not require any sharing of identifiable data between contributing institutions. Further, loci identified with the sumLINK have good potential for gene localization via statistical recombinant mapping, as, by definition, several linked pedigrees contribute to each peak.
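The two statistics are simple sums over per-pedigree LOD scores and can be sketched as follows; the LOD cutoff of 0.588 (nominal p of about 0.05) is an assumed illustration of "nominally significant", and the paper's genome-shuffling null distribution is omitted:

```python
def sum_stats(lods, link_cutoff=0.588):
    """Sketch of sumLINK and sumLOD at a single locus. sumLOD sums
    all positive per-pedigree LOD scores; sumLINK sums only scores
    from pedigrees with nominally significant linkage evidence (the
    0.588 cutoff here is an assumed threshold for illustration).
    Significance assessment via genome shuffling is not shown."""
    sumlod = sum(l for l in lods if l > 0)
    sumlink = sum(l for l in lods if l >= link_cutoff)
    return sumlink, sumlod

link, lod = sum_stats([1.2, -0.4, 0.3, 0.9, -1.1])
assert abs(lod - 2.4) < 1e-9    # 1.2 + 0.3 + 0.9
assert abs(link - 2.1) < 1e-9   # 1.2 + 0.9 only
```

Note how the two unlinked pedigrees (negative LODs) contribute nothing to either statistic, which is the point: consistency across linked pedigrees is rewarded without penalty from uninformative ones.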
1988-03-01
framework for acquisition management to analyzing the Identification Friend, Foe or Neutral (IFFN) Joint Testbed to evaluating C2 components of the...measure. The results on the worksheet were columns consisting of ones and zeroes. Every summed measure (e.g., FAIR, XMOTi, and XCSTi) received a cumulative...were networked by the gateway and through TASS to one another. c. Structural Components The value of the structural measure remained at zero
Griffiths, Robert I; Gleeson, Michelle L; Danese, Mark D; O'Hagan, Anthony
2012-01-01
To assess the accuracy and precision of inverse probability weighted (IPW) least squares regression analysis for censored cost data. By using Surveillance, Epidemiology, and End Results-Medicare, we identified 1500 breast cancer patients who died and had complete cost information within the database. Patients were followed for up to 48 months (partitions) after diagnosis, and their actual total cost was calculated in each partition. We then simulated patterns of administrative and dropout censoring and also added censoring to patients receiving chemotherapy to simulate comparing a newer to older intervention. For each censoring simulation, we performed 1000 IPW regression analyses (bootstrap, sampling with replacement), calculated the average value of each coefficient in each partition, and summed the coefficients for each regression parameter to obtain the cumulative values from 1 to 48 months. The cumulative, 48-month, average cost was $67,796 (95% confidence interval [CI] $58,454-$78,291) with no censoring, $66,313 (95% CI $54,975-$80,074) with administrative censoring, and $66,765 (95% CI $54,510-$81,843) with administrative plus dropout censoring. In multivariate analysis, chemotherapy was associated with increased cost of $25,325 (95% CI $17,549-$32,827) compared with $28,937 (95% CI $20,510-$37,088) with administrative censoring and $29,593 ($20,564-$39,399) with administrative plus dropout censoring. Adding censoring to the chemotherapy group resulted in less accurate IPW estimates. This was ameliorated, however, by applying IPW within treatment groups. IPW is a consistent estimator of population mean costs if the weight is correctly specified. If the censoring distribution depends on some covariates, a model that accommodates this dependency must be correctly specified in IPW to obtain accurate estimates. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
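The basic IPW estimator underlying the analysis can be sketched in a few lines; the probabilities of being uncensored are supplied directly here, whereas in practice they are estimated (e.g. from a Kaplan-Meier curve of the censoring distribution):

```python
def ipw_mean_cost(costs, censored, p_uncensored):
    """Inverse-probability-weighting sketch: censored subjects get
    weight 0, while subjects with complete cost data are up-weighted
    by 1/P(uncensored). The probabilities are supplied directly here
    for illustration; the paper embeds this weighting in partitioned
    least squares regression, which this sketch omits."""
    total = sum(c / p for c, cen, p in zip(costs, censored, p_uncensored)
                if not cen)
    return total / len(costs)

# Two of four subjects censored, each with P(uncensored) = 0.5, so the
# two complete observations each count double.
est = ipw_mean_cost([100.0, 200.0, 300.0, 400.0],
                    [False, True, False, True],
                    [0.5, 0.5, 0.5, 0.5])
assert est == 200.0
```

The estimator is unbiased only when the weights reflect the true censoring mechanism, which is exactly the abstract's conclusion about correctly specifying covariate-dependent censoring.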
Oshikiri, Taro; Yasuda, Takashi; Yamamoto, Masashi; Kanaji, Shingo; Yamashita, Kimihiro; Matsuda, Takeru; Sumi, Yasuo; Nakamura, Tetsu; Fujino, Yasuhiro; Tominaga, Masahiro; Suzuki, Satoshi; Kakeji, Yoshihiro
2016-09-01
Minimally invasive esophagectomy (MIE) has less morbidity than the open approach. In particular, thoracoscopic esophagectomy in the prone position (TEP) has been performed worldwide. Using the cumulative sum control chart (CUSUM) method, this study aimed to confirm whether a trainee surgeon who learned established standards would become skilled in TEP with a shorter learning curve than that of the mentoring surgeon. Surgeon A performed TEP in 100 patients; the first 22 patients comprised period 1. His learning curve, defined based on the operation time (OT) of the thoracic procedure, was evaluated using the CUSUM method, and short-term outcomes were assessed. Another 22 patients underwent TEP performed by surgeon B, with outcomes compared to those of surgeon A's period 1. On the CUSUM chart, the peak point of the thoracic procedure OT occurred at the 44th case in surgeon A's experience of 100 cases. Within surgeon A's first 22 cases (period 1), the peak point of the thoracic procedure OT could not be confirmed, and the CUSUM curve was still rising. The CUSUM chart of surgeon B's experience of 22 cases clearly indicated that the peak point of the thoracic procedure OT occurred at the 17th case. The rate of recurrent laryngeal nerve palsy for surgeon B (9 %) was significantly lower than for surgeon A in period 1 (36 %) (p = 0.0266). There is some possibility for a trainee surgeon to attain the required basic skills to perform TEP in a relatively short period of time using a standardized procedure developed by a mentoring surgeon. The CUSUM method should be useful in evaluating trainee competence during an initial series of procedures, by assessing the learning curve defined by OT.
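An OT-based CUSUM learning curve can be sketched as a running sum of deviations from a target time: the curve climbs while the trainee is slower than target and peaks when performance crosses it. The times and target below are assumed for illustration, not the study's data:

```python
def ot_cusum(times, target):
    """CUSUM learning curve over operation times: the running sum of
    (OT - target) per case. The curve rises while cases take longer
    than the target and turns down once they become faster, so the
    peak marks the end of the learning phase. Illustrative sketch
    with assumed times; not the paper's fitted curves."""
    out, s = [], 0.0
    for t in times:
        s += t - target
        out.append(s)
    return out

# Times improve from 300 to 200 min against an assumed 240-min target:
curve = ot_cusum([300, 290, 280, 260, 230, 210, 200], 240)
assert max(curve) == curve[3]   # peak at the 4th case
```

Reading the peak off the chart is how a case number like "the 44th case" or "the 17th case" is identified in this kind of analysis.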
Variable order fractional Fokker-Planck equations derived from Continuous Time Random Walks
NASA Astrophysics Data System (ADS)
Straka, Peter
2018-08-01
Continuous Time Random Walk models (CTRW) of anomalous diffusion are studied, where the anomalous exponent β(x) ∈ (0, 1) varies in space. This type of situation occurs, e.g., in biophysics, where the density of the intracellular matrix varies throughout a cell. Scaling limits of CTRWs are known to have probability distributions which solve fractional Fokker-Planck type equations (FFPE). This correspondence between stochastic processes and FFPE solutions has many useful extensions, e.g. to nonlinear particle interactions and reactions, but has not yet been sufficiently developed for FFPEs of the "variable order" type with non-constant β(x). In this article, variable order FFPEs (VOFFPE) are derived from scaling limits of CTRWs. The key mathematical tool is the one-to-one correspondence of a CTRW scaling limit to a bivariate Langevin process, which tracks the cumulative sum of jumps in one component and the cumulative sum of waiting times in the other. The spatially varying anomalous exponent is modelled by spatially varying β(x)-stable Lévy noise in the waiting time component. The VOFFPE displays a spatially heterogeneous temporal scaling behaviour, with generalized diffusivity and drift coefficients whose units are length²/time^β(x) and length/time^β(x), respectively. A global change of the time scale results in a spatially varying change in diffusivity and drift. A consequence of the mathematical derivation of a VOFFPE from CTRW limits in this article is that a solution of a VOFFPE can be approximated via Monte Carlo simulations. Based on such simulations, we are able to confirm that the VOFFPE is consistent under a change of the global time scale.
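A minimal Monte Carlo sketch of such a variable-order CTRW, under simplifying assumptions: waiting times are drawn from a Pareto law with local tail index β(x) (a heavy-tailed law in the domain of attraction of the one-sided β(x)-stable law used in the paper), and jumps are small Gaussian steps. All names and scale parameters are illustrative, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def ctrw_path(x0, t_max, beta, dx=0.1, t_scale=0.01):
    """One CTRW trajectory with spatially varying anomalous exponent beta(x).

    Waiting times ~ Pareto with tail index beta(x) (domain of attraction of
    the one-sided beta(x)-stable law); jumps are symmetric Gaussian steps.
    Tracks the cumulative sum of waiting times (ts) and of jumps (xs)."""
    x, t = x0, 0.0
    xs, ts = [x], [t]
    while t < t_max:
        b = beta(x)                                  # local exponent, in (0, 1)
        t += t_scale * rng.random() ** (-1.0 / b)    # Pareto(b) waiting time
        x += dx * rng.standard_normal()              # Gaussian jump
        xs.append(x)
        ts.append(t)
    return np.array(ts), np.array(xs)
```

Averaging many such trajectories gives a histogram approximation to the VOFFPE solution; near positions where β(x) is small, waiting times are heavier-tailed and the walker is trapped longer, reproducing the spatially heterogeneous temporal scaling.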
Stephenson, Richard; Caron, Aimee M; Famina, Svetlana
2016-12-01
Sleep-wake behavior exhibits diurnal rhythmicity, rebound responses to acute total sleep deprivation (TSD), and attenuated rebounds following chronic sleep restriction (CSR). We investigated how these long-term patterns of behavior emerge from stochastic short-term dynamics of state transition. Male Sprague-Dawley rats were subjected to TSD (1day×24h, N=9), or CSR (10days×18h TSD, N=7) using a rodent walking-wheel apparatus. One baseline day and one recovery day following TSD and CSR were analyzed. The implications of the zero sum principle were evaluated using a Markov model of sleep-wake state transition. Wake bout duration (a combined function of the probability of wake maintenance and proportional representations of brief and long wake) was a key variable mediating the baseline diurnal rhythms and post-TSD responses of all three states, and the attenuation of the post-CSR rebounds. Post-NREM state transition trajectory was an important factor in REM rebounds. The zero sum constraint ensures that a change in any transition probability always affects bout frequency and cumulative time of at least two, and usually all three, of wakefulness, NREM and REM. Neural mechanisms controlling wake maintenance may play a pivotal role in regulation and dysregulation of all three states. Copyright © 2016 Elsevier Inc. All rights reserved.
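The zero-sum constraint of such a Markov model can be illustrated with a three-state (Wake, NREM, REM) transition matrix whose rows each sum to 1, so that raising any transition probability necessarily lowers another in the same row. The probabilities below are hypothetical, purely for illustration; simulating the chain yields bout structure and cumulative state times.

```python
import numpy as np

STATES = ["Wake", "NREM", "REM"]
# Hypothetical per-epoch transition probabilities. Each row sums to 1
# (the zero-sum constraint): a change in any entry must be offset
# by changes in at least one other entry of the same row.
P = np.array([[0.95, 0.05, 0.00],    # from Wake
              [0.04, 0.90, 0.06],    # from NREM
              [0.10, 0.05, 0.85]])   # from REM

def simulate(P, n_epochs, start=0, seed=1):
    """Simulate a state sequence from the transition matrix P."""
    rng = np.random.default_rng(seed)
    s, path = start, [start]
    for _ in range(n_epochs - 1):
        s = rng.choice(3, p=P[s])
        path.append(s)
    return np.array(path)

path = simulate(P, 10_000)
cumulative_time = np.bincount(path, minlength=3) / len(path)  # fraction per state
```

Because state fractions also sum to 1, any manipulation (e.g. sleep deprivation) that increases cumulative time in one state must redistribute time among the others, which is the structural point the abstract makes.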
Singh, Jitendra P; Singh, Ak; Bajpai, Anju; Ahmad, Iffat Zareen
2014-01-01
The Indian black berry (Syzygium cumini Skeels) has great nutraceutical and medicinal properties. As in other fruit crops, fruit characteristics are important attributes for differentiation, and these were determined for different accessions of S. cumini. The fruit weight, length, breadth, length:breadth ratio, pulp weight, pulp content, seed weight and pulp:seed ratio varied significantly among accessions. Molecular characterization was carried out using the PCR-based RAPD technique. Of 80 RAPD primers, only 18 produced stable polymorphisms that were used to examine the phylogenetic relationship. A total of 207 loci were generated, of which 201 were polymorphic. The average genetic dissimilarity among jamun accessions was 97 per cent. The phylogenetic relationship was also examined by principal coordinates analysis (PCoA), which explained 46.95 per cent cumulative variance. The two-dimensional PCoA showed grouping of the different accessions into four sub-plots, representing clusters of accessions. The UPGMA (r = 0.967) and NJ (r = 0.987) dendrograms constructed from the dissimilarity matrix revealed a good degree of fit with the cophenetic correlation value. The dendrogram grouped the accessions into three main clusters according to their eco-geographical regions, giving useful insight into their phylogenetic relationships.
Schneiderman, Inna; Kanat-Maymon, Yaniv; Ebstein, Richard P; Feldman, Ruth
2014-10-01
Empathic communication between couples plays an important role in relationship quality and individual well-being and research has pointed to the role of oxytocin in providing the neurobiological substrate for pair-bonding and empathy. Here, we examined links between genetic variability on the oxytocin receptor gene (OXTR) and empathic behaviour at the initiation of romantic love. Allelic variations on five OXTR single nucleotide polymorphisms (SNPs) previously associated with susceptibility to disorders of social functioning were genotyped in 120 new lovers: OXTRrs13316193, rs2254298, rs1042778, rs2268494 and rs2268490. Cumulative genetic risk was computed by summing risk alleles on each SNP. Couples were observed in support-giving interaction and behaviour was coded for empathic communication, including affective congruence, maintaining focus on partner, acknowledging partner's distress, reciprocal exchange and non-verbal empathy. Hierarchical linear modelling indicated that individuals with high OXTR risk exhibited difficulties in empathic communication. OXTR risk predicted empathic difficulties above and beyond the couple level, relationship duration, and anxiety and depressive symptoms. Findings underscore the involvement of oxytocin in empathic behaviour during the early stages of social affiliation, and suggest the utility of cumulative risk and plasticity indices on the OXTR as potential biomarkers for research on disorders of social dysfunction and the neurobiology of empathy. © The Author (2013). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albano Farias, L.; Stephany, J.
2010-12-15
We analyze the statistics of observables in continuous-variable (CV) quantum teleportation in the formalism of the characteristic function. We derive expressions for average values of output-state observables, in particular cumulants, which are additive in terms of the input state and the resource of teleportation. Working with a general class of teleportation resources, the squeezed-bell-like states, which may be optimized in a free parameter for better teleportation performance, we discuss the relation between resources optimal for fidelity and those optimal for different observable averages. We obtain the values of the free parameter of the squeezed-bell-like states which optimize the central momenta and cumulants up to fourth order. For the cumulants, the distortion between input and output states due to teleportation depends only on the resource. We obtain optimal parameters Δ_(2)^opt and Δ_(4)^opt for the second- and fourth-order cumulants, which do not depend on the squeezing of the resource. The second-order central momenta, which are equal to the second-order cumulants, and the photon number average are also optimized by the resource with Δ_(2)^opt. We show that the optimal fidelity resource, which has been found previously to depend on the characteristics of the input, approaches for high squeezing the resource that optimizes the second-order momenta. A similar behavior is obtained for the resource that optimizes the photon statistics, which is treated here using the sum of the squared differences in photon probabilities of input versus output states as the distortion measure. This is interpreted naturally to mean that the distortions associated with second-order momenta dominate the behavior of the output state for large squeezing of the resource. Optimal fidelity resources and optimal photon statistics resources are compared, and it is shown that for mixtures of Fock states both resources are equivalent.
Vogt, Lars; Grobe, Peter; Quast, Björn; Bartolomaeus, Thomas
2012-01-01
Background The Basic Formal Ontology (BFO) is a top-level formal foundational ontology for the biomedical domain. It has been developed with the purpose to serve as an ontologically consistent template for top-level categories of application oriented and domain reference ontologies within the Open Biological and Biomedical Ontologies Foundry (OBO). BFO is important for enabling OBO ontologies to facilitate in reliably communicating and managing data and metadata within and across biomedical databases. Following its intended single inheritance policy, BFO's three top-level categories of material entity (i.e. ‘object’, ‘fiat object part’, ‘object aggregate’) must be exhaustive and mutually disjoint. We have shown elsewhere that for accommodating all types of constitutively organized material entities, BFO must be extended by additional categories of material entity. Methodology/Principal Findings Unfortunately, most biomedical material entities are cumulative-constitutively organized. We show that even the extended BFO does not exhaustively cover cumulative-constitutively organized material entities. We provide examples from biology and everyday life that demonstrate the necessity for ‘portion of matter’ as another material building block. This implies the necessity for further extending BFO by ‘portion of matter’ as well as three additional categories that possess portions of matter as aggregate components. These extensions are necessary if the basic assumption that all parts that share the same granularity level exhaustively sum to the whole should also apply to cumulative-constitutively organized material entities. By suggesting a notion of granular representation we provide a way to maintain the single inheritance principle when dealing with cumulative-constitutively organized material entities. Conclusions/Significance We suggest to extend BFO to incorporate additional categories of material entity and to rearrange its top-level material entity taxonomy. 
With these additions and the notion of granular representation, BFO would exhaustively cover all top-level types of material entities that application oriented ontologies may use as templates, while still maintaining the single inheritance principle. PMID:22253856
Fine Structure Analysis of the 4702 Å Band of the CoCl Molecule
NASA Astrophysics Data System (ADS)
Sureshkumar, M. B.; Srikant, S. R.
1998-01-01
The emission spectrum of the cobalt monochloride (CoCl) molecule was excited in a high-frequency discharge tube source, and the (0,0) band of the H system at 4702 Å was photographed at an inverse dispersion of 0.973 Å/mm in the 5th order of a two-metre plane grating spectrograph (Carl Zeiss). The fine structure analysis of the band has been carried out and the molecular constants are reported for the first time. The rotational isotopic shift due to ³⁷Cl supports the analysis. The electronic transition involved is of the type 0⁻–0⁻ of case (c), which is the equivalent of ³Σ⁺–³Σ⁺ or ⁵Σ⁺–⁵Σ⁺.
A Time-Dependent Quantum Dynamics Study of the H2 + CH3 yields H + CH4 Reaction
NASA Technical Reports Server (NTRS)
Wang, Dunyou; Kwak, Dochan (Technical Monitor)
2002-01-01
We present a time-dependent wave-packet propagation calculation for the H2 + CH3 → H + CH4 reaction in six degrees of freedom and for zero total angular momentum. Initial-state-selected reaction probabilities for different rotational-vibrational states are presented. The cumulative reaction probability (CRP) is obtained by summing over the initial-state-selected reaction probabilities. The energy-shift approximation is employed to account for the contribution of the degrees of freedom missing from the 6D calculation and thereby obtain an approximate full-dimensional CRP. The thermal rate constant is compared with different experimental results.
Oil, gas field growth projections: Wishful thinking or reality?
Attanasi, E.D.; Mast, R.F.; Root, D.H.
1999-01-01
The observed `field growth' for the period from 1992 through 1996 is compared with the US Geological Survey's (USGS) predicted field growth for the same period. Known recovery of a field is defined as the sum of past cumulative field production and the field's proved reserves. Proved reserves are estimated quantities of hydrocarbons which geologic and engineering data demonstrate with reasonable certainty to be recoverable from known fields under existing economic and operating conditions. Proved reserve estimates calculated with this definition are typically conservative. The modeling approach used by the USGS to characterize the `field growth' phenomenon is statistical rather than geologic in nature.
Analysis of LDPE-ZnO-clay nanocomposites using novel cumulative rheological parameters
NASA Astrophysics Data System (ADS)
Kracalik, Milan
2017-05-01
Polymer nanocomposites exhibit complex rheological behaviour due to physical, and possibly also chemical, interactions between the individual phases. Up to now, the rheology of dispersive polymer systems has usually been described by evaluating the viscosity curve (shear-thinning phenomenon), the storage modulus curve (formation of a secondary plateau), or information about damping behaviour (e.g. the Van Gurp-Palmen plot, comparison of the loss factor tan δ). In contrast to evaluating damping behaviour, values of cot δ were calculated and termed the "storage factor", by analogy with the loss factor. Values of the storage factor were then integrated over a specific frequency range and termed the "cumulative storage factor". In this contribution, LDPE-ZnO-clay nanocomposites with different dispersion grades (physical networks) have been prepared and characterized by both the conventional and the novel analysis approach. In addition to the cumulative storage factor, further cumulative rheological parameters such as the cumulative complex viscosity, cumulative complex modulus and cumulative storage modulus are introduced.
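A minimal sketch of the cumulative storage factor, assuming storage and loss moduli measured on a common frequency grid: cot δ = G′/G″ is computed pointwise and integrated over the frequency range (trapezoidal rule). Variable names and the test data are illustrative.

```python
import numpy as np

def cumulative_storage_factor(omega, G_storage, G_loss):
    """Integrate the 'storage factor' cot(delta) = G'/G'' over the
    measured frequency range (trapezoidal rule), giving a single
    cumulative rheological parameter for the sample."""
    omega = np.asarray(omega, dtype=float)
    cot_delta = np.asarray(G_storage, dtype=float) / np.asarray(G_loss, dtype=float)
    # trapezoidal integration of cot(delta) over omega
    return float(np.sum((cot_delta[1:] + cot_delta[:-1]) / 2.0 * np.diff(omega)))
```

The same pattern (integrate a frequency-dependent quantity over the measurement range) yields the other cumulative parameters mentioned, e.g. the cumulative complex viscosity from |η*(ω)|.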
NASA Astrophysics Data System (ADS)
Bonin, J. A.; Chambers, D. P.
2015-09-01
Mass change over Greenland can be caused by either changes in the glacial dynamic mass balance (DMB) or the surface mass balance (SMB). The GRACE satellite gravity mission cannot directly separate the two physical causes because it measures the sum of the entire mass column with limited spatial resolution. We demonstrate one theoretical way to indirectly separate cumulative SMB from DMB with GRACE, using a least squares inversion technique with knowledge of the location of the glaciers. However, we find that the limited 60 × 60 spherical harmonic representation of current GRACE data does not provide sufficient resolution to adequately accomplish the task. We determine that at a maximum degree/order of 90 × 90 or above, a noise-free gravity measurement could theoretically separate the SMB from DMB signals. However, current GRACE satellite errors are too large at present to separate the signals. A noise reduction of a factor of 10 at a resolution of 90 × 90 would provide the accuracy needed for the interannual cumulative SMB and DMB to be accurately separated.
Zineldin, Mosad; Camgöz-Akdağ, Hatice; Vasicheva, Valiantsina
2011-01-01
This paper aims to empirically examine the major factors affecting patient satisfaction, treated as a cumulative summation, and to address the question whether patients in Kazakhstan evaluate healthcare similarly or differently from patients in Egypt and Jordan. A questionnaire, adapted from previous research, was distributed to Kazakhstan inpatients. The questionnaire contained 39 attributes about five newly-developed quality dimensions (5Qs), which were identified to be the most relevant attributes for hospitals. The questionnaire was translated into Russian to increase the response rate and improve data quality. Almost 200 usable questionnaires were returned. Frequency distribution, factor analysis and reliability checks were used to analyze the data. The three biggest concerns for Kazakhstan patients are: infrastructure; atmosphere; and interaction. Hospital staff's concern for patients' needs, parking facilities for visitors, waiting time and food temperature were all common specific attributes, which were perceived as concerns. These were shortcomings in all three countries. Improving health service quality by applying total relationship management and the 5Qs model together with a customer-orientation strategy is recommended. Results can be used by hospital staff to reengineer and redesign creatively their quality management processes and help move towards more effective healthcare quality strategies. Patients in three countries have similar concerns and quality perceptions. The paper describes a new instrument and method. The study assures relevance, validity and reliability, while being explicitly change-oriented. The authors argue that patient satisfaction is a cumulative construct, summing satisfaction as five different qualities (5Qs): object; processes; infrastructure; interaction and atmosphere.
Use of antiepileptic drugs and risk of fractures: case-control study among patients with epilepsy.
Souverein, P C; Webb, D J; Weil, J G; Van Staa, T P; Egberts, A C G
2006-05-09
To study the association between use of antiepileptic drugs (AEDs) and risk of fractures. The authors obtained data from the General Practice Research Database (GPRD). A case-control study was nested within a cohort of patients with active epilepsy. Cases were patients with a first fracture after cohort entry. Up to four controls were matched to each case by practice, sex, year of birth, timing of first epilepsy diagnosis, index date, and duration of GPRD history. Cumulative exposure to AEDs was assessed by summing the duration of all AED prescriptions. A distinction was made between AEDs that induce the hepatic cytochrome P-450 enzyme system and AEDs that do not. Medical conditions and drugs known to be associated with bone metabolism or falls were evaluated as potential confounders. Conditional logistic regression analysis was used to calculate odds ratios (ORs) and 95% CIs. The study population comprised 1,018 cases and 1,842 matched controls. The risk of fractures increased with cumulative duration of exposure (p for trend < 0.001), with the strongest association for greater than 12 years of use: adjusted OR 4.15 (95% CI 2.71 to 6.34). Risk estimates were higher in women than in men. There was no difference between users of AEDs that induce and AEDs that do not induce the hepatic cytochrome P-450 system. Long-term use of AEDs was associated with an increased risk of fractures, especially in women. More research on mechanisms of AED-induced bone breakdown and female vulnerability to the effects of AEDs on bone health is warranted.
White-nose syndrome pathology grading in Nearctic and Palearctic bats
Pikula, Jiri; Amelon, Sybill K.; Bandouchova, Hana; Bartonička, Tomáš; Berkova, Hana; Brichta, Jiri; Hooper, Sarah; Kokurewicz, Tomasz; Kolarik, Miroslav; Köllner, Bernd; Kovacova, Veronika; Linhart, Petr; Piacek, Vladimir; Turner, Gregory G.; Zukal, Jan; Martínková, Natália
2017-01-01
While white-nose syndrome (WNS) has decimated hibernating bat populations in the Nearctic, species from the Palearctic appear to cope better with the fungal skin infection causing WNS. This has encouraged multiple hypotheses on the mechanisms leading to differential survival of species exposed to the same pathogen. To facilitate intercontinental comparisons, we proposed a novel pathogenesis-based grading scheme consistent with WNS diagnosis histopathology criteria. UV light-guided collection was used to obtain single biopsies from Nearctic and Palearctic bat wing membranes non-lethally. The proposed scheme scores eleven grades associated with WNS on histopathology. Given weights reflective of grade severity, the sum of findings from an individual yields a weighted cumulative WNS pathology score. The probability of finding fungal skin colonisation and single, multiple or confluent cupping erosions increased with increasing Pseudogymnoascus destructans load. Increasing fungal load mimicked progression of skin infection from epidermal surface colonisation to deep dermal invasion. Similarly, the number of UV-fluorescent lesions increased with increasing weighted cumulative WNS pathology score, demonstrating congruence between WNS-associated tissue damage and extent of UV fluorescence. In a case report, we demonstrated that UV-fluorescence disappears within two weeks of euthermy. Change in fluorescence was coupled with a reduction in weighted cumulative WNS pathology score, whereby both methods lost diagnostic utility. While weighted cumulative WNS pathology scores were greater in the Nearctic than Palearctic, values for Nearctic bats were within the range of those for Palearctic species. Accumulation of wing damage probably influences mortality in affected bats, as demonstrated by a fatal case of Myotis daubentonii with natural WNS infection and healing in Myotis myotis.
The proposed semi-quantitative pathology score provided good agreement between experienced raters, showing it to be a powerful and widely applicable tool for defining WNS severity. PMID:28767673
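The weighted cumulative pathology score described above amounts to summing severity weights over the grades found in a biopsy. The sketch below uses purely hypothetical grade names and weights; the actual eleven grades and their weights are defined in the paper's grading scheme.

```python
# Hypothetical severity weights for illustration only; the real grading
# scheme defines eleven histopathology grades with their own weights.
WEIGHTS = {
    "surface_colonisation": 1,
    "single_cupping_erosion": 2,
    "multiple_cupping_erosions": 3,
    "confluent_cupping_erosions": 4,
    "dermal_invasion": 5,
}

def weighted_cumulative_score(findings):
    """Sum severity weights over the grades found in one individual's biopsy."""
    return sum(WEIGHTS[f] for f in findings)
```

A bat with both surface colonisation and deep dermal invasion would thus score higher than one with surface colonisation alone, so the score increases with progression of the infection.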
A case study of IMRT planning (Plan B) subsequent to a previously treated IMRT plan (Plan A)
NASA Astrophysics Data System (ADS)
Cao, F.; Leong, C.; Schroeder, J.; Lee, B.
2014-03-01
Background and purpose: Treatment of the contralateral neck after previous ipsilateral intensity modulated radiation therapy (IMRT) for head and neck cancer is a challenging problem. We have developed a technique that limits the cumulative dose to the spinal cord and brainstem while maximizing coverage of a planning target volume (PTV) in the contralateral neck. Our case involves a patient with right tonsil carcinoma who was given ipsilateral IMRT with 70Gy in 35 fractions (Plan A). A left neck recurrence was detected 14 months later. The patient underwent a neck dissection followed by postoperative left neck radiation to a dose of 66 Gy in 33 fractions (Plan B). Materials and Methods: The spinal cord-brainstem margin (SCBM) was defined as the spinal cord and brainstem with a 1.0 cm margin. Plan A was recalculated on the postoperative CT scan but the fluence outside of SCBM was deleted. A further modification of Plan A resulted in a base plan that was summed with Plan B to evaluate the cumulative dose received by the spinal cord and brainstem. Plan B alone was used to evaluate for coverage of the contralateral neck PTV. Results: The maximum cumulative doses to the spinal cord with 0.5cm margin and brainstem with 0.5cm margin were 51.96 Gy and 45.60 Gy respectively. For Plan B, 100% of the prescribed dose covered 95% of PTVb1. Conclusion: The use of a modified ipsilateral IMRT plan as a base plan is an effective way to limit the cumulative dose to the spinal cord and brainstem while enabling coverage of a PTV in the contralateral neck.
Caleyachetty, Rishi; Echouffo-Tcheugui, Justin B; Muennig, Peter; Zhu, Wenyi; Muntner, Paul; Shimbo, Daichi
2015-07-15
The American Heart Association developed the Life's Simple 7 metric for defining cardiovascular health. Little is known about the association of co-occurring social risk factors with ideal cardiovascular health. Using data on 11,467 adults aged ≥25 years from the National Health and Nutrition Examination Survey 1999-2006, we examined the association between cumulative social risk and ideal cardiovascular health in US adults. A cumulative risk score (range 0 to 3 or 4) was created by summing four social risk factors (low family income, low education level, minority race, and single-living status). Ideal levels for each component in Life's Simple 7 (blood pressure, cholesterol, glucose, BMI, smoking, physical activity, and diet) were used to create an ideal Life's Simple 7 score [0-1 (low), 2, 3, 4, and 5-7 (high)]. Adults with low income (odds ratio [OR]=0.30 [95% CI 0.23-0.39]), low education [0.22 (0.16-0.28)], who are non-white (0.44 [0.36-0.54]) and single-living [0.79 (0.67-0.95)] were less likely to have 5-7 versus 0 ideal Life's Simple 7 scores after adjustment for age and sex. Adults were less likely to attain 5-7 versus 0 ideal Life's Simple 7 scores as exposure to the number of social risk factors increased [OR (95% CI) of 0.58 (0.49-0.68); 0.27 (0.21-0.35); and 0.19 (0.14-0.27) for cumulative social risk scores of 1, 2, and 3 or 4, respectively, each versus 0]. US adults with an increasing number of social risk factors were progressively less likely to attain ideal levels of cardiovascular health factors. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
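Both the cumulative social risk score and the ideal Life's Simple 7 score described above are simple sums of binary indicators; a minimal sketch (indicator names are illustrative):

```python
def cumulative_social_risk(low_income, low_education, minority_race, single_living):
    """Cumulative social risk: count of risk factors present (0-4).
    The study groups the top category as '3 or 4'."""
    return sum([low_income, low_education, minority_race, single_living])

def simple7_score(ideal_flags):
    """Number of Life's Simple 7 components (blood pressure, cholesterol,
    glucose, BMI, smoking, physical activity, diet) at the ideal level, 0-7."""
    assert len(ideal_flags) == 7
    return sum(ideal_flags)
```

The study's odds ratios then compare the probability of a high (5-7) Simple 7 score across levels of the cumulative social risk score, relative to a score of 0.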
Chapter 19. Cumulative watershed effects and watershed analysis
Leslie M. Reid
1998-01-01
Cumulative watershed effects are environmental changes that are affected by more than one land-use activity and that are influenced by processes involving the generation or transport of water. Almost all environmental changes are cumulative effects, and almost all land-use activities contribute to cumulative effects.
MILS in a general surgery unit: learning curve, indications, and limitations.
Patriti, Alberto; Marano, Luigi; Casciola, Luciano
2015-06-01
Minimally invasive liver surgery (MILS) is becoming widely adopted, even in general surgery units. Organization, the learning-curve effect, and the environment are crucial issues to evaluate before starting a program of minimally invasive liver resections. Analysis of a consecutive series of 70 patients was used to define the advantages and limits of starting a MILS program in a general surgery unit. Using the cumulative sum method, 17 MILS cases were calculated as the number needed to complete the learning curve. Operative times [270 (60-480) vs. 180 (15-550) min; p = 0.01] and the rate of conversion (6/17 vs. 5/53; p = 0.018) decreased after this number of cases. More complex cases can be managed after proper optimization of all steps of liver resection. Once the medical and nursing staff have reached a high confidence with MILS, economic and strategic issues should be evaluated in order to establish a multidisciplinary hepatobiliary unit, independent from the general surgery unit, to manage more complex cases.
The Maximum Cumulative Ratio (MCR) quantifies the degree to which a single chemical drives the cumulative risk of an individual exposed to multiple chemicals. Phthalates are a class of chemicals with ubiquitous exposures in the general population that have the potential to cause ...
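As commonly defined (an assumption here, since the abstract is truncated), the MCR is the hazard index, i.e. the sum of the per-chemical hazard quotients, divided by the largest single hazard quotient. An MCR near 1 means one chemical drives the cumulative risk; larger values mean the risk is spread across several chemicals.

```python
def maximum_cumulative_ratio(hazard_quotients):
    """MCR = hazard index (sum of per-chemical hazard quotients)
    divided by the largest single hazard quotient.
    MCR ~ 1: one chemical dominates; MCR -> n: risk evenly shared."""
    hi = sum(hazard_quotients)
    return hi / max(hazard_quotients)
```

For an individual exposed to n chemicals the MCR ranges from 1 (single dominant chemical) to n (all chemicals contribute equally).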
Adaptive strategies for cumulative cultural learning.
Ehn, Micael; Laland, Kevin
2012-05-21
The demographic and ecological success of our species is frequently attributed to our capacity for cumulative culture. However, it is not yet known how humans combine social and asocial learning to generate effective strategies for learning in a cumulative cultural context. Here we explore how cumulative culture influences the relative merits of various pure and conditional learning strategies, including pure asocial and social learning, critical social learning, conditional social learning and individual refiner strategies. We replicate the Rogers' paradox in the cumulative setting. However, our analysis suggests that strategies that resolved Rogers' paradox in a non-cumulative setting may not necessarily evolve in a cumulative setting, thus different strategies will optimize cumulative and non-cumulative cultural learning. Copyright © 2012 Elsevier Ltd. All rights reserved.
Deposition Fluxes of Terpenes over Grassland
Bamberger, I.; Hörtnagl, L.; Ruuskanen, T. M.; Schnitzhofer, R.; Müller, M.; Graus, M.; Karl, T.; Wohlfahrt, G.; Hansel, A.
2013-01-01
Eddy covariance flux measurements were carried out for two consecutive vegetation periods above a temperate mountain grassland in an alpine valley using a proton-transfer-reaction mass spectrometer (PTR-MS) and a PTR-time-of-flight mass spectrometer (PTR-TOF). In 2008 and during the first half of the vegetation period 2009 the volume mixing ratios (VMRs) for the sum of monoterpenes (MTs) were typically well below 1 ppbv and neither MT emission nor deposition was observed. After a hailstorm in July 2009 an order of magnitude higher amount of terpenes was transported to the site from nearby coniferous forests, causing elevated VMRs. As a consequence, deposition fluxes of terpenes to the grassland, which continued over a time period of several weeks without significant re-emission, were observed. For days without precipitation the deposition occurred at velocities close to the aerodynamic limit. In addition to monoterpene uptake, deposition fluxes of the sum of sesquiterpenes (SQTs) and the sum of oxygenated terpenes (OTs) were detected. Considering an entire growing season for the grassland (i.e., 1st of April to 1st of November), the cumulative carbon deposition of monoterpenes reached 276 mg C m⁻². This is comparable to the net carbon emission of methanol (329 mg C m⁻²), which is the dominant non-methane volatile organic compound (VOC) emitted from this site, during the same time period. It is suggested that deposition of monoterpenes to terrestrial ecosystems could play a more significant role in the reactive carbon budget than previously assumed. PMID:24383048
Variability of rainfall over Lake Kariba catchment area in the Zambezi river basin, Zimbabwe
NASA Astrophysics Data System (ADS)
Muchuru, Shepherd; Botai, Joel O.; Botai, Christina M.; Landman, Willem A.; Adeola, Abiodun M.
2016-04-01
In this study, average monthly and annual rainfall totals recorded for the period 1970 to 2010 from a network of 13 stations across the Lake Kariba catchment area of the Zambezi river basin were analyzed in order to characterize the spatial-temporal variability of rainfall across the catchment area. In the analysis, the data were subjected to intervention and homogeneity analysis using the Cumulative Summation (CUSUM) technique and step change analysis using rank-sum test. Furthermore, rainfall variability was characterized by trend analysis using the non-parametric Mann-Kendall statistic. Additionally, the rainfall series were decomposed and the spectral characteristics derived using Cross Wavelet Transform (CWT) and Wavelet Coherence (WC) analysis. The advantage of using the wavelet-based parameters is that they vary in time and can therefore be used to quantitatively detect time-scale-dependent correlations and phase shifts between rainfall time series at various localized time-frequency scales. The annual and seasonal rainfall series were homogeneous and demonstrated no apparent significant shifts. According to the inhomogeneity classification, the rainfall series recorded across the Lake Kariba catchment area belonged to category A (useful) and B (doubtful), i.e., there were zero to one and two absolute tests rejecting the null hypothesis (at 5 % significance level), respectively. Lastly, the long-term variability of the rainfall series across the Lake Kariba catchment area exhibited non-significant positive and negative trends with coherent oscillatory modes that are constantly locked in phase in the Morlet wavelet space.
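The CUSUM homogeneity check applied to the rainfall series above can be sketched as follows. This is a minimal illustration of the general cumulative-summation idea, not the exact intervention test used in the study; the rainfall totals are invented:

```python
def cusum(series):
    """Cumulative sums of deviations from the series mean.

    A homogeneous series keeps the CUSUM track close to zero;
    a sustained shift in the mean shows up as a pronounced
    peak or trough at the most likely change point.
    """
    mean = sum(series) / len(series)
    track, s = [], 0.0
    for x in series:
        s += x - mean
        track.append(s)
    return track

# Invented annual rainfall totals with a step change after year 5.
rain = [800, 820, 790, 810, 805, 900, 910, 890, 905, 895]
track = cusum(rain)
# The extreme |CUSUM| value flags the most likely shift point.
shift_at = max(range(len(track)), key=lambda i: abs(track[i]))
```

By construction the track returns to zero at the end of the series; the extreme value here falls at the last pre-shift year.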
NASA Astrophysics Data System (ADS)
Deo, Ravinesh C.; Byun, Hi-Ryong; Adamowski, Jan F.; Begum, Khaleda
2017-04-01
Drought indices (DIs) that quantify drought events by their onset, termination, and subsequent properties such as severity, duration, and peak intensity are practical stratagems for monitoring and evaluating the impacts of drought. In this study, the effective drought index (EDI), calculated over daily timescales, was utilized to quantify short-term (dry spells) and ongoing drought events using drought monitoring data in Australia. The EDI is an intensive DI that considers daily water accumulation, with a weighting function applied to daily rainfall data with the passage of time. A statistical analysis of the distribution of the water deficit period relative to the base period was performed, where a run-sum method was adopted to identify drought onset for any day i with EDI_i < 0 (rainfall below normal). Drought properties were enumerated in terms of (1) severity (AEDI ≡ accumulated sum of EDI_i < 0), (2) duration (DS ≡ cumulative number of days with EDI_i < 0), (3) peak intensity (EDI_min ≡ minimum EDI of a drought event), (4) annual drought severity (YAEDI ≡ yearly accumulated negative EDI), and (5) accumulated severity of ongoing drought using event-accumulated EDI (EAEDI). The analysis of the EDI signal enabled the detection and quantification of a number of drought events in Australia: the Federation Drought (1897-1903), the 1911-1916 Drought, the 1925-1929 Drought, the World War II Drought (1937-1945), and the Millennium Drought (2002-2010). In comparison with the other droughts, the Millennium Drought was an unprecedented dry period, especially in Victoria (EAEDI ≈ -4243, DS = 1946 days, EDI_min = -4.05, and YAEDI = -4903). For the weather station tested in the Northern Territory, the worst drought was recorded during the 1925-1929 period.
The results justified the suitability of effective drought index as a useful scientific tool for monitoring of drought progression, onset and termination, and ranking of drought based on severity, duration, and peak intensity, which allows an assessment of accumulated stress caused by short- and long-term (protracted) dry events.
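The drought properties enumerated above reduce to simple reductions over the daily EDI series. A schematic sketch following those definitions, with an invented EDI trace (not the study's data):

```python
def drought_properties(edi):
    """Severity, duration, and peak intensity of a drought period.

    Follows the definitions in the text: severity (AEDI) is the
    accumulated sum of negative EDI values, duration (DS) the count
    of days with EDI < 0, and peak intensity the minimum EDI.
    """
    dry = [e for e in edi if e < 0]
    severity = sum(dry)   # AEDI
    duration = len(dry)   # DS, in days
    peak = min(edi)       # EDI_min
    return severity, duration, peak

# Invented daily EDI trace spanning one short dry spell.
edi = [0.2, -0.1, -0.5, -0.8, -0.3, 0.1]
aedi, ds, edi_min = drought_properties(edi)
```

Annual severity (YAEDI) and the event-accumulated EAEDI follow the same pattern, restricted to a calendar year or to a single ongoing event, respectively.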
The Maximum Cumulative Ratio (MCR) quantifies the degree to which a single component of a chemical mixture drives the cumulative risk of a receptor. This study used the MCR, the Hazard Index (HI) and Hazard Quotient (HQ) to evaluate co-exposures to six phthalates using biomonito...
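The MCR is commonly defined as the Hazard Index (the sum of the component Hazard Quotients) divided by the largest single Hazard Quotient, so values near 1 indicate that one component dominates the cumulative risk. A minimal sketch under that definition, with invented hazard quotients:

```python
def mcr(hazard_quotients):
    """Maximum Cumulative Ratio: HI divided by the largest HQ.

    HI is the sum of the hazard quotients. An MCR near 1 means a
    single component drives the cumulative risk; values approaching
    the number of components mean risk is evenly spread.
    """
    hi = sum(hazard_quotients)      # Hazard Index
    hq_max = max(hazard_quotients)  # dominant component
    return hi / hq_max

# Invented hazard quotients for six mixture components.
hqs = [0.40, 0.05, 0.03, 0.02, 0.02, 0.02]
ratio = mcr(hqs)  # close to 1: one component dominates
```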
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bai, Shirong; Davis, Michael J.; Skodje, Rex T.
2015-11-12
The sensitivity of kinetic observables is analyzed using a newly developed sum-over-histories representation of chemical kinetics. In the sum-over-histories representation, the concentrations of the chemical species are decomposed into sums of probabilities for chemical pathways that follow molecules from reactants to products or intermediates. Unlike static flux methods for reaction path analysis, the sum-over-histories approach includes the explicit time dependence of the pathway probabilities. Using this representation, the sensitivity of an observable with respect to a kinetic parameter, such as a rate coefficient, is then analyzed in terms of how that parameter affects the chemical pathway probabilities. The method is illustrated for species concentration target functions in H2 combustion, where the rate coefficients are allowed to vary over their associated uncertainty ranges. It is found that large sensitivities are often associated with rate-limiting steps along important chemical pathways or with reactions that control the branching of reactive flux.
Nitrous oxide fluxes from upland soils in central Hokkaido, Japan.
Mu, Zhijian; Kimura, Sonoko D; Toma, Yo; Hatano, Ryusuke
2008-01-01
Nitrous oxide (N2O) fluxes from soils were measured using the closed chamber method during the snow-free seasons (mid-April to early November), for three years, in a total of 11 upland crop fields in central Hokkaido, Japan. The annual mean N2O fluxes ranged from 2.95 to 164.17 μg N/(m²·h), with the lowest observed in a grassland and the highest in an onion field. The instantaneous N2O fluxes showed a large temporal variation, with peak emissions generally occurring following fertilization and heavy rainfall events around harvesting in autumn. No clear common factor regulating instantaneous N2O fluxes was found at any of the study sites. Instead, instantaneous N2O fluxes at different sites were affected by different soil variables. The cumulative N2O emissions during the study period within each year varied from 0.15 to 7.05 kg N/hm² across sites, which accounted for 0.33% to 5.09% of the applied fertilizer N. No obvious relationship was observed between cumulative N2O emission and applied fertilizer N rate (P > 0.4). However, the cumulative N2O emission was significantly correlated with gross mineralized N, estimated as the CO2 emission from bare soil divided by the C/N ratio of each soil, and with the soil mineral N pool (i.e., the sum of gross mineralized N and fertilizer N) (P < 0.001).
van der Pas, Stéphanie L; Nelissen, Rob G H H; Fiocco, Marta
2017-08-02
In arthroplasty data, patients with staged bilateral total joint arthroplasty (TJA) pose a problem in statistical analysis. Subgroup analysis, in which patients with unilateral and bilateral TJA are studied separately, is sometimes considered an appropriate solution to the problem; we aim to show that this is not true because of immortal time bias. We reviewed patients who underwent staged (at any time) bilateral TJA. The logical fallacy leading to immortal time bias is explained through a simple artificial data example. The cumulative incidences of revision and death are computed by subgroup analysis and by landmark analysis based on hip replacement data from the Dutch Arthroplasty Register and on simulated data sets. For patients who underwent unilateral TJA, subgroup analysis can lead to an overestimate of the cumulative incidence of death and an underestimate of the cumulative incidence of revision. The reverse conclusion holds for patients who underwent staged bilateral TJA. Analysis of these patients can lead to an underestimate of the cumulative incidence of death and an overestimate of the cumulative incidence of revision. Immortal time bias can be prevented by using landmark analysis. When examining arthroplasty registry data, patients who underwent staged bilateral TJA should be analyzed with caution. An appropriate statistical method to address the research question should be selected.
Success in Developing Regions: World Records Evolution through a Geopolitical Prism
Guillaume, Marion; Helou, Nour El; Nassif, Hala; Berthelot, Geoffroy; Len, Stéphane; Thibault, Valérie; Tafflet, Muriel; Quinquis, Laurent; Desgorces, François; Hermine, Olivier; Toussaint, Jean-François
2009-01-01
A previous analysis of World Records (WR) has revealed the potential limits of human physiology through athletes' personal commitment. The impact of political factors on sports has only been studied through Olympic medals and results. Here we studied 2876 WR from 63 nations in four summer disciplines. We propose three new indicators and show the impact of historical, geographical and economical factors on the regional WR evolution. The south-eastward path of the weighted annual barycenter (i.e., the average of country coordinates weighted by the number of WR) shows the emergence of East Africa and China in WR archives. Home WR ratio decreased from 79.9% before the Second World War to 23.3% in 2008, underlining the globalization of sports. Annual Cumulative Proportions (ACP, i.e., the cumulative sum of the WR annual rate) highlight the regional rates of progression. For all regions, the mean slope of ACP during the Olympic era is 0.0101, with a maximum between 1950 and 1989 (0.0156). For European countries, this indicator reflects major historical events (slowdown for western countries after 1945, slowdown for eastern countries after 1990). Mean North-American ACP slope is 0.0029 over the century, with an acceleration between 1950 and 1989 at 0.0046. Russia takes off in 1935 and slows down in 1988 (0.0038). For Eastern Europe, maximal progression is seen between 1970 and 1989 (0.0045). China starts in 1979 with a maximum between 1990 and 2008 (0.0021), while other regions have largely declined (mean ACP slope for all other countries = 0.0011). A similar trend is observed for the evolution of the 10 best performers. The national analysis of WR reveals a precise and quantifiable link between the sport performances of a country, its historical or geopolitical context, and its steps of development. PMID:19862324
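The ACP indicator defined above, the cumulative sum of the annual WR rate, can be sketched directly from its definition; the record counts below are invented for illustration:

```python
def annual_cumulative_proportion(records_per_year, total_records):
    """Cumulative share of world records accrued year by year.

    The ACP for year t is the cumulative sum of the annual WR rate
    (records set that year divided by the all-time total), so it
    rises from 0 to 1, and its local slope tracks a region's rate
    of progression.
    """
    acp, running = [], 0
    for n in records_per_year:
        running += n
        acp.append(running / total_records)
    return acp

# Invented record counts for a five-year window.
counts = [2, 5, 3, 0, 10]
acp = annual_cumulative_proportion(counts, sum(counts))
```

A steep segment of the ACP curve (e.g., the jump in the final year here) corresponds to a period of rapid record-setting, which is what the regional slope comparisons in the abstract quantify.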
Investigating the change of causality in emerging property markets during the financial tsunami
NASA Astrophysics Data System (ADS)
Hui, Eddie C. M.; Chen, Jia
2012-08-01
In this paper, we employ the multivariate CUSUM (cumulative sum) test for covariance structure as well as the renormalized partial directed coherence (PDC) method to capture the structural causality change of real estate stock indices of five emerging Asian countries and regions (i.e., Thailand, Malaysia, South Korea, PR China, and Taiwan). Meanwhile, we develop a method to make the comparison of renormalized PDC more intuitive and a set of criteria to measure the result. One of our findings indicates that the regional influence of the Chinese real estate stock market on the causality structure of the five markets has arisen under the effect of the financial tsunami.
Chen, Yeung-Jen; Chiang, Chao-Ching; Huang, Peng-Ju; Huang, Jason; Karcher, Keith; Li, Honglan
2015-11-01
To evaluate the efficacy and safety of tapentadol immediate-release (IR) for treating acute pain following orthopedic bunionectomy surgery in a Taiwanese population. This was a phase 3, randomized, double-blind, placebo-controlled, parallel-group bridging study in which Taiwanese patients (N = 60) with moderate-to-severe pain following bunionectomy were randomized (1:1:1) to receive tapentadol IR 50 or 75 mg or placebo orally every 4-6 hours over a 72 hour period. The primary endpoint was the sum of pain intensity difference over 48 hours (SPID48), analyzed using analysis of variance. Of the 60 patients randomized (mainly women [96.7%]; median age 44 years), 41 (68.3%) completed the treatment. Mean SPID48 values were significantly higher for tapentadol IR (p ≤ 0.006: 50 mg, p ≤ 0.004: 75 mg) compared with placebo. Between-group differences in LS means of SPID48 (vs. placebo) were tapentadol IR 50 mg: 105.6 (95% CI: 32.0; 179.2); tapentadol IR 75 mg: 126.6 (95% CI: 49.5; 203.7). Secondary endpoints, including SPID at 12, 24, and 72 hours, time to first use of rescue medication, cumulative distribution of responder rates, total pain relief, sum of total pain relief and pain intensity difference at 12, 24, 48, and 72 hours, and patient global impression of change, showed numerically better results supporting that tapentadol IR (50 and 75 mg) was more efficacious than placebo in relieving acute pain. The most frequent treatment-emergent adverse events reported in ≥ 10% of patients in either group were dizziness, nausea, and vomiting. A limitation of this study is the closely controlled patient monitoring through the 4-6 hour dosing intervals, which reflects optimal conditions and thus may not approximate real-world clinical practice. However, since this was a randomized, double-blind study, all treatment groups would have been equally affected by any such monitoring bias.
Tapentadol IR treatment significantly relieved acute postoperative pain and was well tolerated in a Taiwanese population. ClinicalTrials.gov identifier: NCT01813890.
Mass spectrometry of peptides and proteins from human blood.
Zhu, Peihong; Bowden, Peter; Zhang, Du; Marshall, John G
2011-01-01
It is difficult to convey the accelerating rate and growing importance of mass spectrometry applications to human blood proteins and peptides. Mass spectrometry can rapidly detect and identify the ionizable peptides from the proteins in a simple mixture and reveal many of their post-translational modifications. However, blood is a complex mixture that may contain many proteins first expressed in cells and tissues. The complete analysis of blood proteins is a daunting task that will rely on a wide range of disciplines from physics, chemistry, biochemistry, genetics, electromagnetic instrumentation, mathematics and computation. Therefore the comprehensive discovery and analysis of blood proteins will rank among the great technical challenges and will require the cumulative sum of many of mankind's scientific achievements. A variety of methods have been used to fractionate, analyze and identify proteins from blood, each yielding a small piece of the whole and throwing the great size of the task into sharp relief. The approaches attempted to date clearly indicate that enumerating the proteins and peptides of blood can be accomplished. There is no doubt that the mass spectrometry of blood will be crucial to the discovery and analysis of proteins, enzyme activities, and post-translational processes that underlie the mechanisms of disease. At present both discovery and quantification of proteins from blood commonly reach sensitivities of ∼1 ng/mL. Copyright © 2010 Wiley Periodicals, Inc.
Dan Neary; Brenda R. Baillie
2016-01-01
Herbicide use varies both spatially and temporally within managed forests. While information exists on the effects of herbicide use on water quality at the site and small catchment scale, little is known about the cumulative effects of herbicide use at the landscape scale. A cumulative effects analysis was conducted in the upper Rangitaiki catchment (118,345...
Predicting worsening asthma control following the common cold
Walter, Michael J.; Castro, Mario; Kunselman, Susan J.; Chinchilli, Vernon M; Reno, Melissa; Ramkumar, Thiruvamoor P.; Avila, Pedro C.; Boushey, Homer A.; Ameredes, Bill T.; Bleecker, Eugene R.; Calhoun, William J.; Cherniack, Reuben M.; Craig, Timothy J.; Denlinger, Loren C.; Israel, Elliot; Fahy, John V.; Jarjour, Nizar N.; Kraft, Monica; Lazarus, Stephen C.; Lemanske, Robert F.; Martin, Richard J.; Peters, Stephen P.; Ramsdell, Joe W.; Sorkness, Christine A.; Rand Sutherland, E.; Szefler, Stanley J.; Wasserman, Stephen I.; Wechsler, Michael E.
2008-01-01
The asthmatic response to the common cold is highly variable, and early characteristics that predict worsening of asthma control following a cold have not been identified. In this prospective multi-center cohort study of 413 adult subjects with asthma, we used the mini-Asthma Control Questionnaire (mini-ACQ) to quantify changes in asthma control and the Wisconsin Upper Respiratory Symptom Survey-21 (WURSS-21) to measure cold severity. Univariate and multivariable models examined demographic, physiologic, serologic, and cold-related characteristics for their relationship to changes in asthma control following a cold. We observed a clinically significant worsening of asthma control following a cold (increase in mini-ACQ score of 0.69 ± 0.93). Univariate analysis demonstrated that season, center location, cold length, and cold severity measurements were all associated with a change in asthma control. Multivariable analysis of the covariates available within the first 2 days of cold onset revealed that the day 2 WURSS-21 score and the cumulative sum of the day 1 and 2 WURSS-21 scores were significant predictors of the subsequent changes in asthma control. In asthmatic subjects, cold severity measured within the first 2 days can be used to predict subsequent changes in asthma control. This information may help clinicians prevent deterioration in asthma control following a cold. PMID:18768579
Schnedeker, Amy H; Cole, Lynette K; Lorch, Gwendolen; Diaz, Sandra F; Bonagura, John; Daniels, Joshua B
2017-10-01
Staphylococcus pseudintermedius is the most common cause of bacterial skin infections in dogs. Meticillin-resistant infections have become more common and are challenging to treat. Blue light phototherapy may be an option for treating these infections. The objective of this study was to measure the in vitro bactericidal activity of 465 nm blue light on meticillin-susceptible Staphylococcus pseudintermedius (MSSP) and meticillin-resistant Staphylococcus pseudintermedius (MRSP). We hypothesized that irradiation with blue light would kill MSSP and MRSP in a dose-dependent fashion in vitro, as previously reported for meticillin-resistant Staphylococcus aureus (MRSA). In six replicate experiments, each strain [MSSP, n = 1; MRSP ST-71 (KM1381), n = 1; and MRSA (BAA-1680), n = 1] was cultivated on semisolid media, irradiated using a 465 nm blue light phototherapeutic device at cumulative doses of 56.25, 112.5 and 225 J/cm2, and incubated overnight at 35°C. Controls were not irradiated. Colony counts (CC) were performed manually. Descriptive statistics were performed and treatment effects assessed using the Wilcoxon-Mann-Whitney rank-sum test. Bonferroni-corrected rank-sum tests were performed for post hoc analysis when significant differences were identified. There was a significant decrease in CC with blue light irradiation at all doses for MRSA (P = 0.0006) but not for MSSP (P = 0.131) or MRSP (P = 0.589). Blue light phototherapy significantly reduced CC of MRSA, but not of MSSP or MRSP. The mechanism for the relative photosensitivity of the MRSA isolate is unknown, but is hypothesized to be due to an increased concentration of porphyrin in S. aureus relative to S. pseudintermedius, which would modulate blue light absorption. © 2017 ESVD and ACVD.
1980-01-01
[OCR-garbled program listing: the recoverable fragments indicate that spectral-window sampling should be performed when the window count exceeds 1, and show a Fortran routine that initializes variables, accumulates running sums (SUM1-SUM5), and normalizes them by a running average (RAVRG) to form the next estimates.]
NASA Technical Reports Server (NTRS)
Wallace, G. R.; Weathers, G. D.; Graf, E. R.
1973-01-01
The statistics of filtered pseudorandom digital sequences called hybrid-sum sequences, formed from the modulo-two sum of several maximum-length sequences, are analyzed. The results indicate that a relation exists between the statistics of the filtered sequence and the characteristic polynomials of the component maximum length sequences. An analysis procedure is developed for identifying a large group of sequences with good statistical properties for applications requiring the generation of analog pseudorandom noise. By use of the analysis approach, the filtering process is approximated by the convolution of the sequence with a sum of unit step functions. A parameter reflecting the overall statistical properties of filtered pseudorandom sequences is derived. This parameter is called the statistical quality factor. A computer algorithm to calculate the statistical quality factor for the filtered sequences is presented, and the results for two examples of sequence combinations are included. The analysis reveals that the statistics of the signals generated with the hybrid-sum generator are potentially superior to the statistics of signals generated with maximum-length generators. Furthermore, fewer calculations are required to evaluate the statistics of a large group of hybrid-sum generators than are required to evaluate the statistics of the same size group of approximately equivalent maximum-length sequences.
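The hybrid-sum construction described above, the modulo-two (XOR) sum of several maximum-length sequences, can be sketched with small linear-feedback shift registers. The register sizes and tap positions below are illustrative choices for short primitive polynomials, not the component sequences analyzed in the report, and the subsequent filtering step is omitted:

```python
def lfsr(taps, state, length):
    """Generate `length` bits from a Fibonacci LFSR.

    `taps` are 0-based state positions XORed to form the feedback
    bit; with a primitive feedback polynomial this yields a
    maximum-length sequence of period 2**len(state) - 1.
    """
    state = list(state)
    out = []
    for _ in range(length):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return out

n = 105  # lcm of the two component periods, 7 and 15
a = lfsr((0, 2), [1, 0, 0], n)     # degree-3 m-sequence, period 7
b = lfsr((0, 3), [1, 0, 0, 0], n)  # degree-4 m-sequence, period 15
hybrid = [x ^ y for x, y in zip(a, b)]  # modulo-two (hybrid) sum
```

Because the component periods are coprime, the hybrid sequence repeats only after their product, which is why combining short generators can yield long pseudorandom sequences with comparatively little hardware.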
NASA Astrophysics Data System (ADS)
Chu, Huaqiang; Liu, Fengshan; Consalvi, Jean-Louis
2014-08-01
The relationship between the spectral line based weighted-sum-of-gray-gases (SLW) model and the full-spectrum k-distribution (FSK) model in isothermal and homogeneous media is investigated in this paper. The SLW transfer equation can be derived from the FSK transfer equation expressed in the k-distribution function without approximation. It confirms that the SLW model is equivalent to the FSK model in the k-distribution function form. The numerical implementation of the SLW relies on a somewhat arbitrary discretization of the absorption cross section whereas the FSK model finds the spectrally integrated intensity by integration over the smoothly varying cumulative-k distribution function using a Gaussian quadrature scheme. The latter is therefore in general more efficient as a fewer number of gray gases is required to achieve a prescribed accuracy. Sample numerical calculations were conducted to demonstrate the different efficiency of these two methods. The FSK model is found more accurate than the SLW model in radiation transfer in H2O; however, the SLW model is more accurate in media containing CO2 as the only radiating gas due to its explicit treatment of ‘clear gas.’
Relationship of deer and moose populations to previous winters' snow
Mech, L.D.; McRoberts, R.E.; Peterson, R.O.; Page, R.E.
1987-01-01
(1) Linear regression was used to relate snow accumulation during single and consecutive winters with white-tailed deer (Odocoileus virginianus) fawn:doe ratios, moose (Alces alces) twinning rates and calf:cow ratios, and annual changes in deer and moose populations. Significant relationships were found between snow accumulation during individual winters and these dependent variables during the following year. However, the strongest relationships were between the dependent variables and the sums of the snow accumulations over the previous three winters. The percentage of the variability explained was 36 to 51%. (2) Significant relationships were also found between winter vulnerability of moose calves and the sum of the snow accumulations in the current, and up to seven previous, winters, with about 49% of the variability explained. (3) No relationship was found between wolf numbers and the above dependent variables. (4) These relationships imply that winter influences on maternal nutrition can accumulate for several years and that this cumulative effect strongly determines fecundity and/or calf and fawn survivability. Although wolf (Canis lupus L.) predation is the main direct mortality agent on fawns and calves, wolf density itself appears to be secondary to winter weather in influencing the deer and moose populations.
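The core computation above, regressing a demographic variable on the sum of snow accumulations over the previous three winters, can be sketched as follows. The snow totals and fawn:doe ratios are invented, constructed so that deeper snow accompanies lower ratios, as in the study's findings:

```python
def three_winter_sums(snow):
    """Sum of snow accumulation over each run of three consecutive winters."""
    return [sum(snow[i - 2:i + 1]) for i in range(2, len(snow))]

def linreg(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return slope, my - slope * mx

# Invented winter snow totals (cm) and fawn:doe ratios; each ratio
# is aligned with the sum of the three preceding winters' snow.
snow = [120, 90, 150, 200, 110, 95, 130]
ratios = [0.45, 0.30, 0.40, 0.55, 0.50]
x = three_winter_sums(snow)
slope, intercept = linreg(x, ratios)
```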
De Gori, Marco; Adamczewski, Benjamin; Jenny, Jean-Yves
2017-06-01
The purpose of the study was to use the cumulative summation (CUSUM) test to assess the learning curve during the introduction of a new surgical technique (patient-specific instrumentation) in total knee arthroplasty (TKA) in an academic department. The first 50 TKAs operated on at an academic department using patient-specific templates (PSTs) were scheduled to enter the study. All patients had a preoperative computed tomography scan evaluation to plan bone resections. The PSTs were positioned intraoperatively according to the best-fit technique and their three-dimensional orientation was recorded by a navigation system. The position of the femur and tibia PST was compared to the planned position for four items for each component: coronal and sagittal orientation, and medial and lateral height of resection. Items were summarized to obtain knee, femur, and tibia PST scores, respectively. These scores were plotted in chronological order and included in a CUSUM analysis. The tested hypothesis was that the PST process for TKA was immediately under control after its introduction. The CUSUM test showed that positioning of the PST differed significantly from the target throughout the study. There was a significant difference between all scores and the maximal score. No case obtained the maximal score of eight points. The study was interrupted after 20 cases because of this negative evaluation. The CUSUM test is effective in monitoring the learning curve when introducing a new surgical procedure. Introducing PST for TKA in an academic department may be associated with a long-lasting learning curve. The study was registered on ClinicalTrials.gov (Identifier NCT02429245). Copyright © 2017 Elsevier B.V. All rights reserved.
Surface slip during large Owens Valley earthquakes
NASA Astrophysics Data System (ADS)
Haddon, E. K.; Amos, C. B.; Zielke, O.; Jayko, A. S.; Bürgmann, R.
2016-06-01
The 1872 Owens Valley earthquake is the third largest known historical earthquake in California. Relatively sparse field data and a complex rupture trace, however, inhibited attempts to fully resolve the slip distribution and reconcile the total moment release. We present a new, comprehensive record of surface slip based on lidar and field investigation, documenting 162 new measurements of laterally and vertically displaced landforms for 1872 and prehistoric Owens Valley earthquakes. Our lidar analysis uses a newly developed analytical tool to measure fault slip based on cross-correlation of sublinear topographic features and to produce a uniquely shaped probability density function (PDF) for each measurement. Stacking PDFs along strike to form cumulative offset probability distribution plots (COPDs) highlights common values corresponding to single and multiple-event displacements. Lateral offsets for 1872 vary systematically from ˜1.0 to 6.0 m and average 3.3 ± 1.1 m (2σ). Vertical offsets are predominantly east-down between ˜0.1 and 2.4 m, with a mean of 0.8 ± 0.5 m. The average lateral-to-vertical ratio compiled at specific sites is ˜6:1. Summing displacements across subparallel, overlapping rupture traces implies a maximum of 7-11 m and net average of 4.4 ± 1.5 m, corresponding to a geologic Mw ˜7.5 for the 1872 event. We attribute progressively higher-offset lateral COPD peaks at 7.1 ± 2.0 m, 12.8 ± 1.5 m, and 16.6 ± 1.4 m to three earlier large surface ruptures. Evaluating cumulative displacements in context with previously dated landforms in Owens Valley suggests relatively modest rates of fault slip, averaging between ˜0.6 and 1.6 mm/yr (1σ) over the late Quaternary.
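Stacking per-measurement PDFs along strike into a cumulative offset probability distribution (COPD), as described above, amounts to summing the individual densities on a common offset axis and reading off the peaks. A schematic sketch, using Gaussian PDFs as stand-ins for the study's uniquely shaped measurement PDFs and invented offset values:

```python
import math

def copd(measurements, grid):
    """Sum per-measurement Gaussian offset PDFs on a shared grid.

    Each measurement is (mean, sigma) in meters; peaks in the
    stacked curve mark offset values shared by many displaced
    landforms, i.e., likely per-event displacements.
    """
    def gauss(x, mu, sd):
        return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
    return [sum(gauss(x, mu, sd) for mu, sd in measurements) for x in grid]

# Invented lateral offsets (m): a cluster near 3.3 m (single-event)
# and one outlier near 7 m (multi-event).
meas = [(3.1, 0.4), (3.3, 0.5), (3.5, 0.3), (7.0, 0.6)]
grid = [i * 0.1 for i in range(0, 101)]  # 0 to 10 m
curve = copd(meas, grid)
peak_offset = grid[max(range(len(curve)), key=curve.__getitem__)]
```

With these invented inputs the dominant peak falls within the 3-4 m cluster, analogous to the ~3.3 m single-event lateral average reported above.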
Halonen, Jaana I; Kivimäki, Mika; Pentti, Jaana; Kawachi, Ichiro; Virtanen, Marianna; Martikainen, Pekka; Subramanian, S V; Vahtera, Jussi
2012-01-01
The extent to which neighbourhood characteristics explain accumulation of health behaviours is poorly understood. We examined whether neighbourhood disadvantage was associated with co-occurrence of behaviour-related risk factors, and how much of the neighbourhood differences in the co-occurrence can be explained by individual and neighbourhood level covariates. The study population consisted of 60 694 Finnish Public Sector Study participants in 2004 and 2008. Neighbourhood disadvantage was determined using small-area level information on household income, education attainment, and unemployment rate, and linked with individual data using Global Positioning System-coordinates. Associations between neighbourhood disadvantage and co-occurrence of three behaviour-related risk factors (smoking, heavy alcohol use, and physical inactivity), and the extent to which individual and neighbourhood level covariates explain neighbourhood differences in co-occurrence of risk factors were determined with multilevel cumulative logistic regression. After adjusting for age, sex, marital status, and population density we found a dose-response relationship between neighbourhood disadvantage and co-occurrence of risk factors within each level of individual socioeconomic status. The cumulative odds ratios for the sum of health risks comparing the most to the least disadvantaged neighbourhoods ranged between 1.13 (95% confidence interval (CI): 1.03-1.24) and 1.75 (95% CI, 1.54-1.98). Individual socioeconomic characteristics explained 35%, and neighbourhood disadvantage and population density 17% of the neighbourhood differences in the co-occurrence of risk factors. Co-occurrence of poor health behaviours associated with neighbourhood disadvantage over and above individual's own socioeconomic status. Neighbourhood differences cannot be captured using individual socioeconomic factors alone, but neighbourhood level characteristics should also be considered.
Wu, Nancy S; Schairer, Laura C; Dellor, Elinam; Grella, Christine
2010-01-01
This study describes the prevalence of childhood traumatic events (CTEs) among adults with comorbid substance use disorders (SUDs) and mental health problems (MHPs) and assesses the relation between cumulative CTEs and adult health outcomes. Adults with SUDs/MHPs (N=402) were recruited from residential treatment programs and interviewed at treatment admission. Exposures to 9 types of adverse childhood experiences were summed and categorized into 6 ordinal levels of exposure. Descriptive analyses were conducted to assess the prevalence and range of exposure to CTEs in comparison with a sample from primary health care. Logistic regression analyses were conducted to examine the association between the cumulative exposure to CTEs and adverse health outcomes. Most of the sample reported exposure to CTEs, with higher exposure rates among the study sample compared with the primary health care sample. Greater exposure to CTEs significantly increased the odds of several adverse adult outcomes, including PTSD, alcohol dependence, injection drug use, tobacco use, sex work, medical problems, and poor quality of life. Study findings support the importance of early prevention and intervention and provision of trauma treatment for individuals with SUDs/MHPs.
Harvey, Marc
2014-09-01
This paper proposes a model of human uniqueness based on an unusual distinction between two contrasted kinds of political competition and political status: (1) antagonistic competition, in quest of dominance (antagonistic status), a zero-sum, self-limiting game whose stake--who takes what, when, how--summarizes a classical definition of politics (Lasswell 1936), and (2) synergistic competition, in quest of merit (synergistic status), a positive-sum, self-reinforcing game whose stake becomes "who brings what to a team's common good." In this view, Rawls's (1971) famous virtual "veil of ignorance" mainly conceals politics' antagonistic stakes so as to devise the principles of a just, egalitarian society, yet without providing any means to enforce these ideals (Sen 2009). Instead, this paper proposes that human uniqueness flourished under a real "adapted veil of ignorance" concealing the steady inflation of synergistic politics which resulted from early humans' sturdy egalitarianism. This proposition divides into four parts: (1) early humans first stumbled on a purely cultural means to enforce a unique kind of within-team antagonistic equality--dyadic balanced deterrence thanks to handheld weapons (Chapais 2008); (2) this cultural innovation is thus closely tied to humans' darkest side, but it also launched the cumulative evolution of humans' brightest qualities--egalitarian team synergy and solidarity, together with the associated synergistic intelligence, culture, and communications; (3) runaway synergistic competition for differential merit among antagonistically equal obligate teammates is the single politically selective mechanism behind the cumulative evolution of all these brighter qualities, but numerous factors to be clarified here conceal this mighty evolutionary driver; (4) this veil of ignorance persists today, which explains why humans' unique prosocial capacities are still not clearly understood by science. 
The purpose of this paper is to start lifting this now-ill-adapted veil of ignorance, thus uncovering the tight functional relations between egalitarian team solidarity and the evolution of human uniqueness.
Park, Yoonah; Yong, Yuen Geng; Yun, Seong Hyeon; Jung, Kyung Uk; Huh, Jung Wook; Cho, Yong Beom; Kim, Hee Cheol; Lee, Woo Yong; Chun, Ho-Kyung
2015-05-01
This study aimed to compare the learning curves and early postoperative outcomes of conventional laparoscopic (CL) and single incision laparoscopic (SIL) right hemicolectomy (RHC). This retrospective study included the initial 35 cases in each group. Learning curves were evaluated by the moving average of operative time, the mean operative time of every five consecutive cases, and cumulative sum (CUSUM) analysis. The learning phase was considered overcome when the moving average of operative times reached a plateau, and when the mean operative time of every five consecutive cases reached a low point and subsequently did not vary by more than 30 minutes. Six patients with missing data in the CL RHC group were excluded from the analyses. According to the mean operative time of every five consecutive cases, the learning phase of SIL and CL RHC was completed between 26 and 30 cases, and between 16 and 20 cases, respectively. Moving average analysis revealed that approximately 31 (SIL) and 25 (CL) cases were needed to complete the learning phase. CUSUM analysis demonstrated that 10 (SIL) and two (CL) cases were required to reach a steady state of complication-free performance. The postoperative complication rate was higher in the SIL group than in the CL group (17.1% vs. 3.4%), but the difference was not statistically significant. The learning phase of SIL RHC is longer than that of CL RHC. Early oncological outcomes of both techniques were comparable. However, SIL RHC had a higher, though statistically insignificant, complication rate than CL RHC during the learning phase.
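The CUSUM analysis described above can be sketched as a running sum of deviations of operative time from a target value; the peak of the curve is a common marker for the end of the learning phase. A minimal sketch (the operative times and the choice of the series mean as target are illustrative assumptions, not the study's data):

```python
def cusum(times, target):
    """Cumulative sum of deviations of operative time from a target.

    A rising segment indicates cases slower than the target; the peak of
    the curve is often read as the approximate end of the learning phase.
    """
    total, series = 0.0, []
    for t in times:
        total += t - target
        series.append(total)
    return series

# Hypothetical operative times (minutes) for consecutive cases:
times = [210, 200, 195, 180, 170, 160, 150, 145]
target = sum(times) / len(times)  # series mean as the target (an assumption)
curve = cusum(times, target)
peak_case = curve.index(max(curve)) + 1  # case number at the CUSUM peak
```

When the target is the series mean, the curve ends at zero by construction; only its shape (where it peaks and turns down) carries the learning-phase information.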
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Gary E.; Diefenderfer, Heida L.; Borde, Amy B.
The goal of this multi-year study (2004-2010) is to develop a methodology to evaluate the cumulative effects of multiple habitat restoration projects intended to benefit ecosystems supporting juvenile salmonids in the lower Columbia River and estuary. Literature review in 2004 revealed no existing methods for such an evaluation and suggested that cumulative effects could be additive or synergistic. Field research in 2005, 2006, and 2007 involved intensive, comparative studies paired by habitat type (tidal swamp vs. marsh), trajectory (restoration vs. reference site), and restoration action (tide gate vs. culvert vs. dike breach). The field work established two kinds of monitoring indicators for eventual cumulative effects analysis: core and higher-order indicators. Management implications of limitations and applications of site-specific effectiveness monitoring and cumulative effects analysis were identified.
Standardized unfold mapping: a technique to permit left atrial regional data display and analysis.
Williams, Steven E; Tobon-Gomez, Catalina; Zuluaga, Maria A; Chubb, Henry; Butakoff, Constantine; Karim, Rashed; Ahmed, Elena; Camara, Oscar; Rhode, Kawal S
2017-10-01
Left atrial arrhythmia substrate assessment can involve multiple imaging and electrical modalities, but visual analysis of data on 3D surfaces is time-consuming and suffers from limited reproducibility. Unfold maps (e.g., the left ventricular bull's eye plot) allow 2D visualization, facilitate multimodal data representation, and provide a common reference space for inter-subject comparison. The aim of this work is to develop a method for automatic representation of multimodal information on a left atrial standardized unfold map (LA-SUM). The LA-SUM technique was developed and validated using 18 electroanatomic mapping (EAM) LA geometries before being applied to ten cardiac magnetic resonance/EAM paired geometries. The LA-SUM was defined as an unfold template of an average LA mesh, and registration of clinical data to this mesh facilitated creation of new LA-SUMs by surface parameterization. The LA-SUM represents 24 LA regions on a flattened surface. Intra-observer variability of LA-SUMs for both EAM and CMR datasets was minimal; root-mean square difference of 0.008 ± 0.010 and 0.007 ± 0.005 ms (local activation time maps), 0.068 ± 0.063 gs (force-time integral maps), and 0.031 ± 0.026 (CMR LGE signal intensity maps). Following validation, LA-SUMs were used for automatic quantification of post-ablation scar formation using CMR imaging, demonstrating a weak but significant relationship between ablation force-time integral and scar coverage (R² = 0.18, P < 0.0001). The proposed LA-SUM displays an integrated unfold map for multimodal information. The method is applicable to any LA surface, including those derived from imaging and EAM systems. The LA-SUM would facilitate standardization of future research studies involving segmental analysis of the LA.
Aggarwal, Neil R; Brower, Roy G; Hager, David N; Thompson, B Taylor; Netzer, Giora; Shanholtz, Carl; Lagakos, Adrian; Checkley, William
2018-04-01
High fractions of inspired oxygen may augment lung damage to exacerbate lung injury in patients with acute respiratory distress syndrome. Participants enrolled in Acute Respiratory Distress Syndrome Network trials had a goal partial pressure of oxygen in arterial blood range of 55-80 mm Hg, yet the effect of oxygen exposure above this arterial oxygen tension range on clinical outcomes is unknown. We sought to determine if oxygen exposure that resulted in a partial pressure of oxygen in arterial blood above goal (> 80 mm Hg) was associated with worse outcomes in patients with acute respiratory distress syndrome. Longitudinal analysis of data collected in these trials. Ten clinical trials conducted at Acute Respiratory Distress Syndrome Network hospitals between 1996 and 2013. Critically ill patients with acute respiratory distress syndrome. None. We defined above goal oxygen exposure as the difference between the fraction of inspired oxygen and 0.5 whenever the fraction of inspired oxygen was above 0.5 and when the partial pressure of oxygen in arterial blood was above 80 mm Hg. We then summed above goal oxygen exposures in the first five days to calculate a cumulative above goal oxygen exposure. We determined the effect of a cumulative 5-day above goal oxygen exposure on mortality prior to discharge home at 90 days. Among 2,994 participants (mean age, 51.3 yr; 54% male) with a study-entry partial pressure of oxygen in arterial blood/fraction of inspired oxygen that met acute respiratory distress syndrome criteria, average cumulative above goal oxygen exposure was 0.24 fraction of inspired oxygen-days (interquartile range, 0-0.38). 
Participants with above goal oxygen exposure were more likely to die (adjusted interquartile range odds ratio, 1.20; 95% CI, 1.11-1.31) and have lower ventilator-free days (adjusted interquartile range mean difference of -0.83; 95% CI, -1.18 to -0.48) and lower hospital-free days (adjusted interquartile range mean difference of -1.38; 95% CI, -2.09 to -0.68). We observed a dose-response relationship between the cumulative above goal oxygen exposure and worsened clinical outcomes for participants with mild, moderate, or severe acute respiratory distress syndrome, suggesting that the observed relationship is not primarily influenced by severity of illness. Oxygen exposure resulting in arterial oxygen tensions above the protocol goal occurred frequently and was associated with worse clinical outcomes at all levels of acute respiratory distress syndrome severity.
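The above-goal exposure defined in the abstract reduces to a conditional daily sum over the first five days; a minimal sketch with hypothetical (FiO2, PaO2) values:

```python
def above_goal_exposure(days):
    """Cumulative above-goal oxygen exposure (FiO2-days).

    Per the definition above, each day contributes (FiO2 - 0.5), but only
    when FiO2 > 0.5 AND PaO2 > 80 mm Hg; exposures are summed over the
    first five days.
    """
    total = 0.0
    for fio2, pao2 in days[:5]:
        if fio2 > 0.5 and pao2 > 80:
            total += fio2 - 0.5
    return total

# Hypothetical (FiO2, PaO2 mm Hg) pairs for five days:
days = [(0.8, 95), (0.6, 85), (0.5, 90), (0.7, 75), (0.4, 88)]
exposure = above_goal_exposure(days)  # only days 1 and 2 qualify: 0.3 + 0.1
```

Days 3-5 contribute nothing: day 3 has FiO2 at exactly 0.5, day 4 has PaO2 at goal, and day 5 has FiO2 below 0.5.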
NASA Astrophysics Data System (ADS)
Domino, Krzysztof
2017-02-01
Cumulant analysis plays an important role in the analysis of non-Gaussian distributed data, of which share price returns are a good example. The purpose of this research is to develop a cumulant-based algorithm and use it to determine eigenvectors that represent investment portfolios with low variability. The algorithm is based on the Alternating Least Squares method and involves the simultaneous minimisation of the 2nd-6th cumulants of a multidimensional random variable (the percentage returns of many companies' shares). The algorithm was then tested during the recent crash on the Warsaw Stock Exchange. To detect the incoming crash and provide entry and exit signals for the investment strategy, the Hurst exponent was calculated using local DFA. It was shown that the introduced algorithm is on average better than the benchmark and other portfolio determination methods, but only within the examination window determined by low values of the Hurst exponent. Note that the algorithm is based on cumulant tensors up to the 6th order calculated for a multidimensional random variable, which is a novel idea. The algorithm can be expected to be useful in financial data analysis on a worldwide scale, as well as in the analysis of other types of non-Gaussian distributed data.
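The low-order univariate cases of the cumulants referred to above can be computed directly from central moments; a minimal sketch (the sample values are illustrative, and the full algorithm works with multidimensional cumulant tensors up to 6th order, which are not shown here):

```python
def cumulants(xs):
    """First four cumulants of a sample, from central moments.

    c1 = mean, c2 = m2, c3 = m3, c4 = m4 - 3*m2**2 (population-style,
    no bias correction). For a Gaussian sample, c3 and c4 are near zero,
    which is why nonzero higher cumulants signal non-Gaussianity.
    """
    n = len(xs)
    mean = sum(xs) / n
    moment = lambda k: sum((x - mean) ** k for x in xs) / n
    m2, m3, m4 = moment(2), moment(3), moment(4)
    return mean, m2, m3, m4 - 3 * m2 ** 2

# A symmetric sample has zero odd cumulants; its fourth cumulant is
# negative here because the distribution is flatter than a Gaussian.
c1, c2, c3, c4 = cumulants([-2.0, -1.0, 0.0, 1.0, 2.0])
```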
To Sum or Not to Sum: Taxometric Analysis with Ordered Categorical Assessment Items
ERIC Educational Resources Information Center
Walters, Glenn D.; Ruscio, John
2009-01-01
Meehl's taxometric method has been shown to differentiate between categorical and dimensional data, but there are many ways to implement taxometric procedures. When analyzing the ordered categorical data typically provided by assessment instruments, summing items to form input indicators has been a popular practice for more than 20 years. A Monte…
A Multimethodological Analysis of Cumulative Risk and Allostatic Load among Rural Children.
ERIC Educational Resources Information Center
Evans, Gary W.
2003-01-01
This study modeled physical and psychosocial aspects of home environment and personal characteristics in a cumulative risk heuristic. Found that elevated cumulative risk was associated with heightened cardiovascular and neuroendocrine parameters, increased deposition of body fat, and higher summary index of total allostatic load. Replicated…
Lee, Kwang-Il; Lee, Jung-Soo; Jung, Hong-Hee; Lee, Hwa-Yong; Moon, Seong-Hwan; Kang, Kyoung-Tak; Shim, Young-Bock; Jang, Ju-Woong
2012-01-01
Xenografts, unlike other grafting products, cannot be commercialized unless they conform to stringent safety regulations. Particularly with bovine-derived materials, it is essential to remove viruses and inactivate infectious factors because of the possibility that raw materials are contaminated with infectious viruses. The removal of infectious viruses from bovine bone grafting materials must be demonstrated, and the inactivation process should satisfy the management provisions of the Food and Drug Administration (FDA). To date, most virus inactivation studies have been performed on human allograft tissues, and there have been almost none on bovine bone. To evaluate the efficacy of virus inactivation after treatment of bovine bone with 70% ethanol, 4% sodium hydroxide, and gamma irradiation, we selected a variety of experimental model viruses that are known to be associated with bone pathogenesis, including bovine parvovirus (BPV), bovine herpes virus (BHV), bovine viral diarrhea virus (BVDV), and bovine parainfluenza-3 virus (BPIV-3). The cumulative virus log clearance factor or cumulative virus log reduction factor for the manufacturing process was obtained by calculating the sum of the individual virus log clearance factors or log reduction factors determined for individual process steps with different physicochemical methods. The cumulative log clearance factors achieved by the three virus inactivation processes were as follows: BPV ≥ 17.73, BHV ≥ 20.53, BVDV ≥ 19.00, and BPIV-3 ≥ 16.27. The cumulative log reduction factors achieved were as follows: BPV ≥ 16.95, BHV ≥ 20.22, BVDV ≥ 19.27, and BPIV-3 ≥ 15.58. Treatment with 70% ethanol, 4% sodium hydroxide, or gamma irradiation was found to be very effective in virus inactivation, since all viruses were reduced to undetectable levels during each process.
We have no doubt that application of this established process to bovine bone graft manufacture will be effective and essential. © 2012 John Wiley & Sons A/S.
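The cumulative factor described above is simply the sum of the per-step log10 reduction factors, which is equivalent to multiplying the per-step titer reductions. A minimal sketch with hypothetical per-step values (not the study's measured factors):

```python
def cumulative_log_reduction(step_factors):
    """Sum per-step log10 reduction factors into a cumulative factor.

    Because each step's reduction is expressed in log10 units, summing
    the logs is the same as multiplying the per-step fold reductions.
    """
    return sum(step_factors)

# Hypothetical log10 reductions for one virus across three process
# steps (e.g. ethanol, NaOH, gamma irradiation):
steps = [6.0, 5.0, 5.0]
cumulative = cumulative_log_reduction(steps)  # 16 logs
fold_reduction = 10 ** cumulative             # i.e. a 10**16-fold reduction
```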
A new universal dynamic model to describe eating rate and cumulative intake curves
Paynter, Jonathan; Peterson, Courtney M; Heymsfield, Steven B
2017-01-01
Background: Attempts to model cumulative intake curves with quadratic functions have not simultaneously taken gustatory stimulation, satiation, and maximal food intake into account. Objective: Our aim was to develop a dynamic model for cumulative intake curves that captures gustatory stimulation, satiation, and maximal food intake. Design: We developed a first-principles model of cumulative intake that universally describes gustatory stimulation, satiation, and maximal food intake using 3 key parameters: 1) the initial eating rate, 2) the effective duration of eating, and 3) the maximal food intake. These model parameters were estimated in a study (n = 49) where eating rates were deliberately changed. Baseline data were used to compare the quality of the model's fit with that of the quadratic model. The 3 parameters were also calculated in a second study consisting of restrained and unrestrained eaters. Finally, we calculated when the gustatory stimulation phase is short or absent. Results: The mean sum squared error for the first-principles model was 337.1 ± 240.4 compared with 581.6 ± 563.5 for the quadratic model, a 43% improvement in fit. Individual comparison demonstrated lower errors for 94% of the subjects. Both sex (P = 0.002) and eating duration (P = 0.002) were associated with the initial eating rate (adjusted R2 = 0.23). Sex was also associated (P = 0.03 and P = 0.012) with the effective eating duration and maximum food intake (adjusted R2 = 0.06 and 0.11). In participants directed to eat as much as they could compared with as much as they felt comfortable with, the maximal intake parameter was approximately double the amount. The model found that certain parameter regions resulted in both stimulation and satiation phases, whereas others only produced a satiation phase.
Conclusions: The first-principles model better quantifies interindividual differences in food intake, shows how aspects of food intake differ across subpopulations, and can be applied to determine how eating behavior factors influence total food intake. PMID:28077377
CUMULATIVE RISK ANALYSIS FOR ORGANOPHOSPHORUS PESTICIDES
Cumulative Risk Analysis for Organophosphorus Pesticides
R. Woodrow Setzer, Jr. NHEERL MD-74, USEPA, RTP, NC 27711
The US EPA has recently completed a risk assessment of the effects of exposure to 33 organophosphorous pesticides (OPs) through the diet, water, and resi...
DeStefano, Frank; Price, Cristofer S; Weintraub, Eric S
2013-08-01
To evaluate the association between autism and the level of immunologic stimulation received from vaccines administered during the first 2 years of life. We analyzed data from a case-control study conducted in 3 managed care organizations (MCOs) of 256 children with autism spectrum disorder (ASD) and 752 control children matched on birth year, sex, and MCO. In addition to the broader category of ASD, we also evaluated autistic disorder and ASD with regression. ASD diagnoses were validated through standardized in-person evaluations. Exposure to total antibody-stimulating proteins and polysaccharides from vaccines was determined by summing the antigen content of each vaccine received, as obtained from immunization registries and medical records. Potential confounding factors were ascertained from parent interviews and medical charts. Conditional logistic regression was used to assess associations between ASD outcomes and exposure to antigens in selected time periods. The aOR (95% CI) of ASD associated with each 25-unit increase in total antigen exposure was 0.999 (0.994-1.003) for cumulative exposure to age 3 months, 0.999 (0.997-1.001) for cumulative exposure to age 7 months, and 0.999 (0.998-1.001) for cumulative exposure to age 2 years. Similarly, no increased risk was found for autistic disorder or ASD with regression. In this study of MCO members, increasing exposure to antibody-stimulating proteins and polysaccharides in vaccines during the first 2 years of life was not related to the risk of developing an ASD. Copyright © 2013 Mosby, Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Vijayan, Sarath; Shankar, Alok; Rudin, Stephen; Bednarek, Daniel R.
2016-03-01
The skin dose tracking system (DTS) that we developed provides a color-coded mapping of the cumulative skin dose distribution on a 3D graphic of the patient during fluoroscopic procedures in real time. The DTS has now been modified to also calculate the kerma area product (KAP) and cumulative air kerma (CAK) for fluoroscopic interventions using data obtained in real-time from the digital bus on a Toshiba Infinix system. KAP is the integral of air kerma over the beam area and is typically measured with a large-area transmission ionization chamber incorporated into the collimator assembly. In this software, KAP is automatically determined for each x-ray pulse as the product of the air kerma/ mAs from a calibration file for the given kVp and beam filtration times the mAs per pulse times the length and width of the beam times a field nonuniformity correction factor. Field nonuniformity is primarily the result of the heel effect and the correction factor was determined from the beam profile measured using radio-chromic film. Dividing the KAP by the beam area at the interventional reference point provides the area averaged CAK. The KAP and CAK per x-ray pulse are summed after each pulse to obtain the total procedure values in real-time. The calculated KAP and CAK were compared to the values displayed by the fluoroscopy machine with excellent agreement. The DTS now is able to automatically calculate both KAP and CAK without the need for measurement by an add-on transmission ionization chamber.
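The per-pulse KAP calculation described above is a product of calibration and geometry terms; a minimal sketch with hypothetical calibration values (the numbers are illustrative, not the Toshiba Infinix calibration data):

```python
def kap_per_pulse(kerma_per_mAs, mAs, length_cm, width_cm, nonuniformity=1.0):
    """Kerma-area product contribution of one x-ray pulse (Gy*cm^2).

    KAP = (air kerma per mAs for the given kVp/filtration) * mAs per
    pulse * beam length * beam width * field-nonuniformity correction,
    mirroring the product described in the abstract.
    """
    return kerma_per_mAs * mAs * length_cm * width_cm * nonuniformity

# Hypothetical calibration and pulse values:
kap = kap_per_pulse(kerma_per_mAs=0.002, mAs=5.0,
                    length_cm=10.0, width_cm=10.0, nonuniformity=0.95)
area = 10.0 * 10.0          # beam area at the reference point (cm^2)
cak = kap / area            # area-averaged cumulative air kerma contribution
```

Summing these per-pulse values over a procedure yields the running KAP and CAK totals reported in real time.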
Hyperinsulinemia in polycystic ovary disease.
Arthur, L S; Selvakumar, R; Seshadri, M S; Seshadri, L
1999-09-01
To evaluate the prevalence of hyperinsulinemia and insulin resistance in women with polycystic ovary disease (PCOD). Forty women with clinical and biochemical evidence of PCOD and 20 with regular menstrual cycles were studied prospectively. All women underwent a three-hour oral glucose tolerance test following a 100-g glucose load. Plasma sugar and insulin levels were measured. The one-, two- and three-hour insulin values were significantly higher in women with PCOD. The sum insulin, cumulative insulin, peak insulin and area under the insulin response curve were similarly higher in women with PCOD than in the controls. The presence of hirsutism was more often associated with hyperinsulinemia and insulin resistance, but body mass index and menstrual irregularity were not. Hyperinsulinemia and insulin resistance seem to be commonly associated with PCOD.
NASA Astrophysics Data System (ADS)
Aslam, S. M.; Suleymanov, M. K.; Wazir, Z.; Gilani, A. R.
2018-06-01
In this paper, the behavior of the cumulative number distributions of protons and π+- and π−-mesons produced in d+12C interactions at 4.2 A GeV/c has been studied, including particles with maximum values of the cumulative number. The experimental data have been compared with predictions from the Dubna version of the cascade model. In the analysis we observed four different regions in the cumulative number distributions for all charged particles and for protons, with the last region corresponding to cumulative number values greater than 1; for π±-mesons the number of regions decreased to two, and the cumulative region is absent for both mesons. The cascade model cannot satisfactorily describe the distributions of cumulative protons and cumulative π±-mesons, and it underestimates the number of all produced particles. For particles with maximum values of the cumulative number, the cascade model describes the behavior of the cumulative number distribution well. There exist some events with two cumulative particles that cannot be described by cascade dynamics; collective nucleon effects may be the reason for observing these two-cumulative-particle events.
Analysis of modeling cumulative noise from simultaneous flights volume 2: supplemental analysis
DOT National Transportation Integrated Search
2012-12-31
This is the second of two volumes of the report on modeling cumulative noise from simultaneous flights. This volume examines the effect of several modeling input cases on Percent Time Audible results calculated by the Integrated Noise Model. The case...
Steps and Pips in the History of the Cumulative Recorder
ERIC Educational Resources Information Center
Lattal, Kennon A.
2004-01-01
From its inception in the 1930s until very recent times, the cumulative recorder was the most widely used measurement instrument in the experimental analysis of behavior. It was an essential instrument in the discovery and analysis of schedules of reinforcement, providing the first real-time analysis of operant response rates and patterns. This…
FIA BioSum: a tool to evaluate financial costs, opportunities and effectiveness of fuel treatments.
Jeremy Fried; Glenn Christensen
2004-01-01
FIA BioSum, a tool developed by the USDA Forest Service's Forest Inventory and Analysis (FIA) Program, generates reliable cost estimates, identifies opportunities and evaluates the effectiveness of fuel treatments in forested landscapes. BioSum is an analytic framework that integrates a suite of widely used computer models with a foundation of attribute-rich,...
Jeremy S. Fried; Larry D. Potts; Sara M. Loreno; Glenn A. Christensen; R. Jamie Barbour
2017-01-01
The Forest Inventory and Analysis (FIA)-based BioSum (Bioregional Inventory Originated Simulation Under Management) is a free policy analysis framework and workflow management software solution. It addresses complex management questions concerning forest health and vulnerability for large, multimillion acre, multiowner landscapes using FIA plot data as the initial...
Briere, John; Dias, Colin P; Semple, Randye J; Scott, Catherine; Bigras, Noémie; Godbout, Natacha
2017-08-01
The relationship between type of trauma exposure, cumulative trauma, peritraumatic distress, and subsequent acute stress disorder (ASD) symptoms was examined prospectively in 96 individuals presenting with acute medical injuries to a Level 1 emergency/trauma department. Common precipitating traumas included motor vehicle-related events, stabbings, shootings, and physical assaults. At 2 to 3 weeks follow-up, 22.9% of participants had developed ASD. Univariate analysis revealed no relationship between type of precipitating trauma and ASD symptoms, whereas robust path analysis indicated direct effects of gender, lifetime cumulative trauma exposure, and peritraumatic distress. Peritraumatic distress did not mediate the association between cumulative trauma and symptoms, but did mediate the association between gender and symptomatology. These results, which account for 23.1% of the variance in ASD symptoms, suggest that ASD may be more due to cumulative trauma exposure than the nature of the precipitating trauma, but that cumulative trauma does not exert its primary effect by increasing peritraumatic distress to the most recent trauma. Copyright © 2017 International Society for Traumatic Stress Studies.
Re-analysis of Alaskan benchmark glacier mass-balance data using the index method
Van Beusekom, Ashley E.; O'Neel, Shad R.; March, Rod S.; Sass, Louis C.; Cox, Leif H.
2010-01-01
At Gulkana and Wolverine Glaciers, designated the Alaskan benchmark glaciers, we re-analyzed and re-computed the mass balance time series from 1966 to 2009 to accomplish our goal of making more robust time series. Each glacier's data record was analyzed with the same methods. For surface processes, we estimated missing information with an improved degree-day model. Degree-day models predict ablation from the sum of daily mean temperatures and an empirical degree-day factor. We modernized the traditional degree-day model and derived new degree-day factors in an effort to match the balance time series more closely. We estimated missing yearly-site data with a new balance gradient method. These efforts showed that an additional step needed to be taken at Wolverine Glacier to adjust for non-representative index sites. As with the previously calculated mass balances, the re-analyzed balances showed a continuing trend of mass loss. We noted that the time series, and thus our estimate of the cumulative mass loss over the period of record, was very sensitive to the data input, and suggest the need to add data-collection sites and modernize our weather stations.
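The degree-day model described above predicts ablation from the sum of positive daily mean temperatures scaled by an empirical factor; a minimal sketch (the temperatures and degree-day factor below are illustrative assumptions, as actual factors are derived per site):

```python
def degree_day_ablation(daily_mean_temps, ddf):
    """Ablation from a simple degree-day model.

    Ablation = DDF * positive-degree-day sum, where the sum counts only
    days with mean temperature above 0 degrees C. The DDF (here in
    m w.e. per degree-day) is an empirical, site-specific factor.
    """
    pdd = sum(t for t in daily_mean_temps if t > 0)
    return ddf * pdd

# Hypothetical week of daily mean temperatures (deg C) and a DDF:
temps = [3.0, 5.0, -1.0, 0.0, 2.0, 6.0, 4.0]
ablation = degree_day_ablation(temps, ddf=0.005)  # m water equivalent
```

Only the five above-freezing days contribute to the positive-degree-day sum (20 degree-days in this example).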
NASA Astrophysics Data System (ADS)
Wahid, Sharifah Norhuda Syed; Ujang, Suriyati
2015-02-01
Daily concentrations of particulate matter with aerodynamic diameter less than 10 μm (PM10) can be very harmful to human health, contributing to respiratory and cardiovascular diseases. The purpose of this paper is to describe the behavior of air pollutants in the state of Pahang, Malaysia during the first quarter of 2014. Data were gathered from the available automatic air quality monitoring station at Balok Baru, Pahang with the assistance of the Department of Environment. The cumulative sum technique shows that a change occurred on March 8th at 88 μg/m³, a moderate air quality level. This change point indicated that the PM10 level started to have the potential to reach moderate or worse levels. In addition, time series regression analysis shows that the daily concentrations at the Balok Baru station followed an upward trend, with the PM10 level increasing by 0.1117 μg/m³ per additional day. It is hoped that this study will make a significant contribution for future researchers studying the risk posed by PM10 and other air pollutants to air quality and human health.
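The cumulative sum change-point technique used above can be sketched as locating the extremum of the CUSUM curve of deviations from the series mean; the PM10 values below are illustrative, not the Balok Baru data:

```python
def cusum_change_point(series):
    """Estimate a single change point from the CUSUM curve.

    S_i = sum_{j<=i} (x_j - mean); the index where |S_i| peaks is the
    classic CUSUM estimate of where the level of the series shifted.
    """
    mean = sum(series) / len(series)
    s, curve = 0.0, []
    for x in series:
        s += x - mean
        curve.append(s)
    return max(range(len(curve)), key=lambda i: abs(curve[i]))

# Hypothetical daily PM10 concentrations (ug/m3) with a level shift:
pm10 = [40, 42, 41, 39, 43, 88, 90, 87, 91, 89]
shift_index = cusum_change_point(pm10)  # index of the last low-level day
```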
Closed loop statistical performance analysis of N-K knock controllers
NASA Astrophysics Data System (ADS)
Peyton Jones, James C.; Shayestehmanesh, Saeed; Frey, Jesse
2017-09-01
The closed loop performance of engine knock controllers cannot be rigorously assessed from single experiments or simulations because knock behaves as a random process, so the closed loop response itself belongs to a random distribution. In this work a new method is proposed for computing the distributions and expected values of the closed loop response, both in steady state and in response to disturbances. The method takes as its input the control law and the knock propensity characteristic of the engine, which is mapped from open loop steady state tests. The method is applicable to the 'n-k' class of knock controllers, in which the control action is a function only of the number of cycles n since the last control move and the number k of knock events that have occurred in this time. A Cumulative Summation (CumSum) based controller falls within this category, and the method is used to investigate the performance of this controller in a deeper and more rigorous way than has previously been possible. The results are validated using computationally intensive Monte Carlo simulations, which confirm both the validity of the method and its high computational efficiency.
Response Surface Analysis of Experiments with Random Blocks
1988-09-01
partitioned into a lack-of-fit sum of squares, SS_LOF, and a pure-error sum of squares, SS_PE. The latter is obtained by pooling the pure-error sums of squares...from the blocks. Tests concerning the polynomial effects can then proceed using SS_PE as the error term in the denominators of the F test statistics. 3.2...the center point in each of the three blocks is equal to SS_PE = 2.0127 with 5 degrees of freedom. Hence, the lack-of-fit sum of squares is SS_LOF
CUMULATIVE RISK ASSESSMENT: GETTING FROM TOXICOLOGY TO QUANTITATIVE ANALYSIS
INTRODUCTION: GETTING FROM TOXICOLOGY TO QUANTITATIVE ANALYSIS FOR CUMULATIVE RISK
Hugh A. Barton1 and Carey N. Pope2
1US EPA, Office of Research and Development, National Health and Environmental Effects Research Laboratory, Research Triangle Park, NC
2Department of...
DOT National Transportation Integrated Search
2012-12-31
This is the first of two volumes of the report on modeling cumulative noise from simultaneous flights. This volume includes: an overview of the time compression algorithms used to model simultaneous aircraft; revised summary of a preliminary study (w...
Decision analysis with cumulative prospect theory.
Bayoumi, A M; Redelmeier, D A
2000-01-01
Individuals sometimes express preferences that do not follow expected utility theory. Cumulative prospect theory adjusts for some phenomena by using decision weights rather than probabilities when analyzing a decision tree. The authors examined how probability transformations from cumulative prospect theory might alter a decision analysis of a prophylactic therapy in AIDS, eliciting utilities from patients with HIV infection (n = 75) and calculating expected outcomes using an established Markov model. They next focused on transformations of three sets of probabilities: 1) the probabilities used in calculating standard-gamble utility scores; 2) the probabilities of being in discrete Markov states; 3) the probabilities of transitioning between Markov states. The same prophylaxis strategy yielded the highest quality-adjusted survival under all transformations. For the average patient, prophylaxis appeared relatively less advantageous when standard-gamble utilities were transformed. Prophylaxis appeared relatively more advantageous when state probabilities were transformed and relatively less advantageous when transition probabilities were transformed. Transforming standard-gamble and transition probabilities simultaneously decreased the gain from prophylaxis by almost half. Sensitivity analysis indicated that even near-linear probability weighting transformations could substantially alter quality-adjusted survival estimates. The magnitude of benefit estimated in a decision-analytic model can change significantly after using cumulative prospect theory. Incorporating cumulative prospect theory into decision analysis can provide a form of sensitivity analysis and may help describe when people deviate from expected utility theory.
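The abstract does not specify the probability weighting function the authors used; one common parametric form from cumulative prospect theory is the Tversky-Kahneman (1992) weighting, sketched here for illustration:

```python
def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman probability weighting function.

    w(p) = p**g / (p**g + (1-p)**g)**(1/g). It overweights small
    probabilities and underweights large ones, which is how decision
    weights come to differ from the raw probabilities in a decision
    tree. The gamma value is a commonly cited estimate, not the
    study's parameter.
    """
    num = p ** gamma
    return num / (num + (1 - p) ** gamma) ** (1 / gamma)

# Endpoints are preserved; interior probabilities are distorted:
low, high = tk_weight(0.01), tk_weight(0.99)
```

Applying such a transformation to standard-gamble, state, or transition probabilities (as the authors did with their chosen weighting) changes the expected quality-adjusted survival computed from the Markov model.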
NASA Astrophysics Data System (ADS)
Kitazawa, Masakiyo; Asakawa, Masayuki; Ono, Hirosato
2014-01-01
We investigate the time evolution of higher order cumulants of conserved charges in a volume with the diffusion master equation. Applying the result to the diffusion of non-Gaussian fluctuations in the hadronic stage of relativistic heavy ion collisions, we show that the fourth-order cumulant of net-electric charge at LHC energy is suppressed compared with the recently observed second-order cumulant at ALICE, if the higher order cumulants at hadronization are suppressed compared with their values in the hadron phase in equilibrium. The significance of the experimental information on the rapidity window dependence of various cumulants in investigating the history of the dynamical evolution of the hot medium created in relativistic heavy ion collisions is emphasized.
Tropospheric temperature climatology and trends observed over the Middle East
NASA Astrophysics Data System (ADS)
Basha, Ghouse; Marpu, P. R.; Ouarda, T. B. M. J.
2015-10-01
In this study, we report for the first time the upper-air temperature climatology and trends over the Middle East, which seem to be significantly affected by changes associated with hot summers and low precipitation. Long-term (1985-2012) radiosonde data from 12 stations are used to derive the mean temperature climatology and vertical trends. The study was performed by analyzing the data at different latitudes. The vertical profiles of air temperature show distinct behavior in terms of vertical and seasonal variability at different latitudes. The seasonal cycle of temperature at 100 hPa, however, shows an opposite pattern compared to the 200 hPa level: the temperature at 100 hPa shows a maximum during winter and a minimum in summer. Spectral analysis shows that the annual cycle is dominant in comparison with the semiannual cycle. The time series of temperature data was analyzed using Bayesian change point analysis and the cumulative sum method to investigate the changes in temperature trends. Temperature shows a clear change point during the year 1999 at all stations. Further, the modified Mann-Kendall test was applied to study the vertical trends; the analysis shows statistically significant lower-tropospheric warming and upper-tropospheric cooling after 1999. In general, the magnitude of the trend decreases with altitude in the troposphere. Based on 28 years of temperature observations over the Middle East, significant warming is observed in the lower troposphere in all latitude bands, whereas cooling is noticed at higher altitudes.
Modeling of cumulative ash curve in hard red spring wheat
USDA-ARS?s Scientific Manuscript database
Analysis of cumulative ash curves (CAC) is very important for evaluation of milling quality of wheat and blending different millstreams for specific applications. The aim of this research was to improve analysis of CAC. Five hard red spring wheat genotype composites from two regions were milled on...
Hamada, Tsuyoshi; Nakai, Yousuke; Isayama, Hiroyuki; Togawa, Osamu; Kogure, Hirofumi; Kawakubo, Kazumichi; Tsujino, Takeshi; Sasahira, Naoki; Hirano, Kenji; Yamamoto, Natsuyo; Ito, Yukiko; Sasaki, Takashi; Mizuno, Suguru; Toda, Nobuo; Tada, Minoru; Koike, Kazuhiko
2014-03-01
Self-expandable metallic stent (SEMS) placement is widely carried out for distal malignant biliary obstruction, and survival analysis is used to evaluate the cumulative incidences of SEMS dysfunction (e.g. the Kaplan-Meier [KM] method and the log-rank test). However, these statistical methods might be inappropriate in the presence of 'competing risks' (here, death without SEMS dysfunction), which affects the probability of experiencing the event of interest (SEMS dysfunction); that is, SEMS dysfunction can no longer be observed after death. A competing risk analysis has rarely been done in studies on SEMS. We introduced the concept of a competing risk analysis and illustrated its impact on the evaluation of SEMS outcomes using hypothetical and actual data. Our illustrative study included 476 consecutive patients who underwent SEMS placement for unresectable distal malignant biliary obstruction. A significant difference between cumulative incidences of SEMS dysfunction in male and female patients via the KM method (P = 0.044 by the log-rank test) disappeared after applying a competing risk analysis (P = 0.115 by Gray's test). In contrast, although cumulative incidences of SEMS dysfunction via the KM method were similar with and without chemotherapy (P = 0.647 by the log-rank test), the cumulative incidence of SEMS dysfunction in the non-chemotherapy group was shown to be significantly lower (P = 0.031 by Gray's test) in a competing risk analysis. Death as a competing risk event needs to be appropriately considered in estimating a cumulative incidence of SEMS dysfunction; otherwise analytical results may be biased. © 2013 The Authors. Digestive Endoscopy © 2013 Japan Gastroenterological Endoscopy Society.
An evaluation paradigm for cumulative impact analysis
NASA Astrophysics Data System (ADS)
Stakhiv, Eugene Z.
1988-09-01
Cumulative impact analysis is examined from a conceptual decision-making perspective, focusing on its implicit and explicit purposes as suggested within the policy and procedures for environmental impact analysis of the National Environmental Policy Act of 1969 (NEPA) and its implementing regulations. In this article it is also linked to different evaluation and decision-making conventions, contrasting a regulatory context with a comprehensive planning framework. The specific problems that make the application of cumulative impact analysis a virtually intractable evaluation requirement are discussed in connection with the federal regulation of wetlands uses. The relatively familiar US Army Corps of Engineers' (the Corps) permit program, in conjunction with the Environmental Protection Agency's (EPA) responsibilities in managing its share of the Section 404 regulatory program requirements, is used throughout as the realistic context for highlighting certain pragmatic evaluation aspects of cumulative impact assessment. To understand the purposes of cumulative impact analysis (CIA), a key distinction must be made between the implied comprehensive and multiobjective evaluation purposes of CIA, promoted through the principles and policies contained in NEPA, and the more commonly conducted and limited assessment of cumulative effects (ACE), which focuses largely on the ecological effects of human actions. Based on current evaluation practices within the Corps' and EPA's permit programs, it is shown that the commonly used screening approach to regulating wetlands uses is not compatible with the purposes of CIA, nor is the environmental impact statement (EIS) an appropriate vehicle for evaluating the variety of objectives and trade-offs needed as part of CIA. 
A heuristic model that incorporates the basic elements of CIA is developed, including the idea of trade-offs among social, economic, and environmental protection goals carried out within the context of environmental carrying capacity.
Turning stumbling blocks into stepping stones in the analysis of cumulative impacts
Leslie M. Reid
2004-01-01
Federal and state legislation, such as the National Environmental Policy Act and the California Environmental Quality Act, require that responsible agency staff consider the cumulative impacts of proposed activities before permits are issued for certain kinds of public or private projects. The Council on Environmental Quality (CEQ 1997) defined a cumulative impact as...
2013-01-01
Background The role of environmental factors in lumbar intervertebral disc degeneration (DD) in young adults is largely unknown. Therefore, we investigated whether body mass index (BMI), smoking, and physical activity are associated with lumbar DD among young adults. Methods The Oulu Back Study (OBS) is a subpopulation of the 1986 Northern Finland Birth Cohort (NFBC 1986) and it originally included 2,969 children. The OBS subjects received a postal questionnaire, and those who responded (N = 1,987) were invited to a physical examination. The participants (N = 874) were invited to a lumbar MRI study. A total of 558 young adults (325 females and 233 males) underwent MRI on a 1.5-T scanner at a mean age of 21. Each lumbar intervertebral disc was graded as normal (0), mildly (1), moderately (2), or severely (3) degenerated. We calculated a sum score of lumbar DD, and analyzed the associations between environmental risk factors (smoking, physical activity and weight-related factors assessed at 16 and 19 years) and DD using ordinal logistic regression, the results being expressed as cumulative odds ratios (COR). All analyses were stratified by gender. Results Of the 558 subjects, 256 (46%) had no DD, 117 (21%) had a sum score of one, 93 (17%) a sum score of two, and 92 (17%) a sum score of three or higher. In the multivariate ordinal logistic regression model, BMI at 16 years (highest vs. lowest quartile) was associated with the DD sum score among males (COR 2.35; 95% CI 1.19-4.65) but not among females (COR 1.29; 95% CI 0.72-2.32). Smoking of at least four pack-years was associated with DD among males, but not among females (COR 2.41; 95% CI 0.99-5.86 and 1.59; 95% CI 0.67-3.76, respectively). Self-reported physical activity was not associated with DD. Conclusions High BMI at 16 years was associated with lumbar DD at 21 years among young males but not among females. 
High pack-years of smoking showed a comparable association in males, while physical activity had no association with DD in either gender. These results suggest that environmental factors are associated with DD among young males. PMID:23497297
Single photon counting linear mode avalanche photodiode technologies
NASA Astrophysics Data System (ADS)
Williams, George M.; Huntington, Andrew S.
2011-10-01
The false count rate of a single-photon-sensitive photoreceiver consisting of a high-gain, low-excess-noise linear-mode InGaAs avalanche photodiode (APD) and a high-bandwidth transimpedance amplifier (TIA) is fit to a statistical model. The peak height distribution of the APD's multiplied dark current is approximated by the weighted sum of McIntyre distributions, each characterizing dark current generated at a different location within the APD's junction. The peak height distribution approximated in this way is convolved with a Gaussian distribution representing the input-referred noise of the TIA to generate the statistical distribution of the uncorrelated sum. The cumulative distribution function (CDF) representing count probability as a function of detection threshold is computed, and the CDF model fit to empirical false count data. It is found that only k=0 McIntyre distributions fit the empirically measured CDF at high detection threshold, and that false count rate drops faster than photon count rate as detection threshold is raised. Once fit to empirical false count data, the model predicts the improvement of the false count rate to be expected from reductions in TIA noise and APD dark current. Improvement by at least three orders of magnitude is thought feasible with further manufacturing development and a capacitive-feedback TIA (CTIA).
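The convolution-then-tail-probability recipe in this abstract can be sketched numerically. In the toy model below an exponential tail stands in for the k=0 McIntyre peak-height distribution (the real distribution is considerably more involved), a Gaussian represents the TIA noise, and Riemann sums do the convolution and tail integral; all parameters and the grid are invented.

```python
import math

def exp_pdf(x, mean=1.0):
    """Illustrative stand-in for the dark-pulse peak-height density."""
    return math.exp(-x / mean) / mean if x >= 0 else 0.0

def gauss_pdf(x, sigma=0.2):
    """Input-referred amplifier (TIA) noise density."""
    return math.exp(-x * x / (2.0 * sigma * sigma)) / (sigma * math.sqrt(2.0 * math.pi))

DX = 0.05
GRID = [i * DX for i in range(-100, 301)]   # amplitude axis from -5 to 15

def mixed_pdf(x):
    """Density of (pulse height + noise): a numerical convolution."""
    return sum(exp_pdf(u) * gauss_pdf(x - u) for u in GRID) * DX

def count_prob(threshold):
    """Count probability = tail mass of the convolved density above threshold."""
    return sum(mixed_pdf(x) for x in GRID if x >= threshold) * DX

for th in (0.5, 1.0, 2.0):
    print(th, round(count_prob(th), 3))   # probability falls as the threshold rises
```

Raising the threshold trades detection probability against false counts; with a heavier-tailed signal distribution than the noise, the false-count tail drops faster, which is the effect the abstract reports.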
Early snowmelt significantly enhances boreal springtime carbon uptake
Pulliainen, Jouni; Aurela, Mika; Laurila, Tuomas; Aalto, Tuula; Takala, Matias; Salminen, Miia; Kulmala, Markku; Barr, Alan; Heimann, Martin; Lindroth, Anders; Laaksonen, Ari; Derksen, Chris; Mäkelä, Annikki; Markkanen, Tiina; Lemmetyinen, Juha; Susiluoto, Jouni; Dengel, Sigrid; Mammarella, Ivan; Tuovinen, Juha-Pekka; Vesala, Timo
2017-01-01
We determine the annual timing of spring recovery from space-borne microwave radiometer observations across northern hemisphere boreal evergreen forests for 1979–2014. We find a trend of advanced spring recovery of carbon uptake for this period, with a total average shift of 8.1 d (2.3 d/decade). We use this trend to estimate the corresponding changes in gross primary production (GPP) by applying in situ carbon flux observations. Micrometeorological CO2 measurements at four sites in northern Europe and North America indicate that such an advance in spring recovery would have increased the January–June GPP sum by 29 g⋅C⋅m−2 [8.4 g⋅C⋅m−2 (3.7%)/decade]. We find this sensitivity of the measured springtime GPP to the spring recovery to be in accordance with the corresponding sensitivity derived from simulations with a land ecosystem model coupled to a global circulation model. The model-predicted increase in springtime cumulative GPP was 0.035 Pg/decade [15.5 g⋅C⋅m−2 (6.8%)/decade] for Eurasian forests and 0.017 Pg/decade for forests in North America [9.8 g⋅C⋅m−2 (4.4%)/decade]. This change in the springtime sum of GPP related to the timing of spring snowmelt is quantified here for boreal evergreen forests. PMID:28973918
Early learning effect of residents for laparoscopic sigmoid resection.
Bosker, Robbert; Groen, Henk; Hoff, Christiaan; Totte, Eric; Ploeg, Rutger; Pierie, Jean-Pierre
2013-01-01
To evaluate the effect of learning the laparoscopic sigmoid resection procedure on resident surgeons; establish a minimum number of cases before a resident surgeon could be expected to achieve proficiency with the procedure; and examine whether an analysis could be used to measure and support the clinical evaluation of the surgeon's competence with the procedure. Retrospective analysis of data that were prospectively entered in the database. From 2003 to 2007, all patients who underwent a laparoscopic sigmoid resection carried out by senior residents, who completed the procedure as the primary surgeon proctored by an experienced surgeon, were included in the study. A cumulative sum control chart (CUSUM) analysis was used to evaluate performance. The procedure was defined as a failure if major intra-operative complications occurred, such as intra-abdominal organ injury, bleeding, or anastomotic leakage; if an inadequate number of lymph nodes (<12 nodes) were removed; or if conversion to an open surgical procedure was required. Thirteen residents performed 169 laparoscopic sigmoid resections in the period evaluated. A significant majority of the resident surgeons were able to consistently perform the procedure without failure after 11 cases and were determined to be competent. One resident was not determined to be competent, and the CUSUM score supported these findings. We concluded that at least 11 cases are required for most residents to obtain the necessary competence with the laparoscopic sigmoid resection procedure. Evaluation with CUSUM analysis can be used to measure and support the clinical evaluation of the resident surgeon's competence with the procedure. Copyright © 2013 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
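A CUSUM learning-curve chart of the kind this abstract describes can be sketched for a binary success/failure sequence. The acceptable failure rate p0 and the outcome sequence below are invented, and the simple running-sum form is one common variant, not necessarily the study's exact formulation.

```python
def cusum_curve(outcomes, p0=0.1):
    """S_i = sum_{j<=i} (x_j - p0): rises with failures (x=1),
    drifts down with successes (x=0) when performance beats the target."""
    s, curve = 0.0, []
    for x in outcomes:
        s += x - p0
        curve.append(s)
    return curve

# A few early failures, then a run of successes once proficiency is reached.
outcomes = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
curve = cusum_curve(outcomes)

# A sustained downward trend signals performance better than the target rate.
print(all(curve[i] > curve[i + 1] for i in range(6, len(curve) - 1)))  # True
```

On a real chart, crossing a pre-set decision boundary (derived from acceptable and unacceptable failure rates) is what declares competence; the steady decline after the early cases is the visual signature.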
Fallout Deposition in the Marshall Islands from Bikini and Enewetak Nuclear Weapons Tests
Beck, Harold L.; Bouville, André; Moroz, Brian E.; Simon, Steven L.
2009-01-01
Deposition densities (Bq m-2) of all important dose-contributing radionuclides occurring in nuclear weapons testing fallout from tests conducted at Bikini and Enewetak Atolls (1946-1958) have been estimated on a test-specific basis for all the 31 atolls and separate reef islands of the Marshall Islands. A complete review of various historical and contemporary data, as well as meteorological analysis, was used to make judgments regarding which tests deposited fallout in the Marshall Islands and to estimate fallout deposition density. Our analysis suggested that only 20 of the 66 nuclear tests conducted in or near the Marshall Islands resulted in substantial fallout deposition on any of the 25 inhabited atolls. This analysis was confirmed by the fact that the sum of our estimates of 137Cs deposition from these 20 tests at each atoll is in good agreement with the total 137Cs deposited as estimated from contemporary soil sample analyses. The monitoring data and meteorological analyses were used to quantitatively estimate the deposition density of 63 activation and fission products for each nuclear test, plus the cumulative deposition of 239+240Pu at each atoll. Estimates of the degree of fractionation of fallout from each test at each atoll, as well as of the fallout transit times from the test sites to the atolls were used in this analysis. The estimates of radionuclide deposition density, fractionation, and transit times reported here are the most complete available anywhere and are suitable for estimations of both external and internal dose to representative persons as described in companion papers. PMID:20622548
Fallout deposition in the Marshall Islands from Bikini and Enewetak nuclear weapons tests.
Beck, Harold L; Bouville, André; Moroz, Brian E; Simon, Steven L
2010-08-01
Deposition densities (Bq m(-2)) of all important dose-contributing radionuclides occurring in nuclear weapons testing fallout from tests conducted at Bikini and Enewetak Atolls (1946-1958) have been estimated on a test-specific basis for 32 atolls and separate reef islands of the Marshall Islands. A complete review of various historical and contemporary data, as well as meteorological analysis, was used to make judgments regarding which tests deposited fallout in the Marshall Islands and to estimate fallout deposition density. Our analysis suggested that only 20 of the 66 nuclear tests conducted in or near the Marshall Islands resulted in substantial fallout deposition on any of the 23 inhabited atolls. This analysis was confirmed by the fact that the sum of our estimates of 137Cs deposition from these 20 tests at each atoll is in good agreement with the total 137Cs deposited as estimated from contemporary soil sample analyses. The monitoring data and meteorological analyses were used to quantitatively estimate the deposition density of 63 activation and fission products for each nuclear test, plus the cumulative deposition of 239+240Pu at each atoll. Estimates of the degree of fractionation of fallout from each test at each atoll, as well as of the fallout transit times from the test sites to the atolls were used in this analysis. The estimates of radionuclide deposition density, fractionation, and transit times reported here are the most complete available anywhere and are suitable for estimations of both external and internal dose to representative persons as described in companion papers.
Park, Yoonah; Yong, Yuen Geng; Jung, Kyung Uk; Huh, Jung Wook; Cho, Yong Beom; Kim, Hee Cheol; Lee, Woo Yong; Chun, Ho-Kyung
2015-01-01
Purpose This study aimed to compare the learning curves and early postoperative outcomes for conventional laparoscopic (CL) and single incision laparoscopic (SIL) right hemicolectomy (RHC). Methods This retrospective study included the initial 35 cases in each group. Learning curves were evaluated by the moving average of operative time, the mean operative time of every five consecutive cases, and cumulative sum (CUSUM) analysis. The learning phase was considered overcome when the moving average of operative times reached a plateau, and when the mean operative time of every five consecutive cases reached a low point and subsequently did not vary by more than 30 minutes. Results Six patients with missing data in the CL RHC group were excluded from the analyses. According to the mean operative time of every five consecutive cases, the learning phase of SIL and CL RHC was completed between 26 and 30 cases, and 16 and 20 cases, respectively. Moving average analysis revealed that approximately 31 (SIL) and 25 (CL) cases were needed to complete the learning phase, respectively. CUSUM analysis demonstrated that 10 (SIL) and two (CL) cases were required to reach a steady state of complication-free performance, respectively. The postoperative complication rate was higher in the SIL than in the CL group, but the difference was not statistically significant (17.1% vs. 3.4%). Conclusion The learning phase of SIL RHC is longer than that of CL RHC. Early oncological outcomes of both techniques were comparable. However, SIL RHC had a statistically insignificant higher complication rate than CL RHC during the learning phase. PMID:25960990
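The two descriptive learning-curve summaries used above, a moving average of operative time and the mean of every five consecutive cases, are straightforward to compute. The operative times (minutes) below are invented to show the plateau pattern the study looks for.

```python
def moving_average(xs, window=5):
    """Trailing moving average over the given window."""
    return [sum(xs[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(xs))]

def block_means(xs, size=5):
    """Mean of every `size` consecutive cases."""
    return [sum(xs[i:i + size]) / size for i in range(0, len(xs), size)]

# Invented operative times that fall and then plateau near the end.
times = [210, 200, 195, 190, 185, 170, 165, 160, 150, 150,
         145, 140, 140, 138, 137]

print(block_means(times))         # [196.0, 159.0, 140.0]
print(moving_average(times)[-1])  # 140.0 -- the plateau level
```

The learning phase would be read off as the case index where the moving average flattens and successive block means stop varying by more than the chosen tolerance (30 minutes in the study).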
A comprehensive revisit of the ρ meson with improved Monte-Carlo based QCD sum rules
NASA Astrophysics Data System (ADS)
Wang, Qi-Nan; Zhang, Zhu-Feng; Steele, T. G.; Jin, Hong-Ying; Huang, Zhuo-Ran
2017-07-01
We improve the Monte-Carlo based QCD sum rules by introducing the rigorous Hölder-inequality-determined sum rule window and a Breit-Wigner type parametrization for the phenomenological spectral function. In this improved sum rule analysis methodology, the sum rule analysis window can be determined without any assumptions on OPE convergence or the QCD continuum. Therefore, an unbiased prediction can be obtained for the phenomenological parameters (the hadronic mass and width etc.). We test the new approach in the ρ meson channel with re-examination and inclusion of αs corrections to dimension-4 condensates in the OPE. We obtain results highly consistent with experimental values. We also discuss the possible extension of this method to some other channels. Supported by NSFC (11175153, 11205093, 11347020), the Open Foundation of the Most Important Subjects of Zhejiang Province, and the K. C. Wong Magna Fund in Ningbo University. T. G. Steele is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC). Z. F. Zhang and Z. R. Huang are grateful to the University of Saskatchewan for its warm hospitality.
NASA Technical Reports Server (NTRS)
Dickson, J.; Drury, H.; Van Essen, D. C.
2001-01-01
Surface reconstructions of the cerebral cortex are increasingly widely used in the analysis and visualization of cortical structure, function and connectivity. From a neuroinformatics perspective, dealing with surface-related data poses a number of challenges. These include the multiplicity of configurations in which surfaces are routinely viewed (e.g. inflated maps, spheres and flat maps), plus the diversity of experimental data that can be represented on any given surface. To address these challenges, we have developed a surface management system (SuMS) that allows automated storage and retrieval of complex surface-related datasets. SuMS provides a systematic framework for the classification, storage and retrieval of many types of surface-related data and associated volume data. Within this classification framework, it serves as a version-control system capable of handling large numbers of surface and volume datasets. With built-in database management system support, SuMS provides rapid search and retrieval capabilities across all the datasets, while also incorporating multiple security levels to regulate access. SuMS is implemented in Java and can be accessed via a Web interface (WebSuMS) or using downloaded client software. Thus, SuMS is well positioned to act as a multiplatform, multi-user 'surface request broker' for the neuroscience community.
Alan A. Ager; Caty Clifton
2005-01-01
The use of cumulative watershed effects models is mandated as part of interagency consultation over projects that might affect habitat for salmonids federally listed as threatened or endangered. Cumulative effects analysis is also required by a number of national forest plans in the Pacific Northwest Region (Region 6). Cumulative watershed effects in many cases are...
January, Stacy-Ann A; Mason, W Alex; Savolainen, Jukka; Solomon, Starr; Chmelka, Mary B; Miettunen, Jouko; Veijola, Juha; Moilanen, Irma; Taanila, Anja; Järvelin, Marjo-Riitta
2017-01-01
Children and adolescents exposed to multiple contextual risks are more likely to have academic difficulties and externalizing behavior problems than those who experience fewer risks. This study used data from the Northern Finland Birth Cohort 1986 (a population-based study; N = 6961; 51 % female) to investigate (a) the impact of cumulative contextual risk at birth on adolescents' academic performance and misbehavior in school, (b) learning difficulties and/or externalizing behavior problems in childhood as intervening mechanisms in the association of cumulative contextual risk with functioning in adolescence, and (c) potential gender differences in the predictive associations of cumulative contextual risk at birth with functioning in childhood or adolescence. The results of the structural equation modeling analysis suggested that exposure to cumulative contextual risk at birth had negative associations with functioning 16 years later, and academic difficulties and externalizing behavior problems in childhood mediated some of the predictive relations. Gender, however, did not moderate any of the associations. Therefore, the findings of this study have implications for the prevention of learning and conduct problems in youth and future research on the impact of cumulative risk exposure.
January, Stacy-Ann A.; Mason, W. Alex; Savolainen, Jukka; Solomon, Starr; Chmelka, Mary B.; Miettunen, Jouko; Veijola, Juha; Moilanen, Irma; Taanila, Anja; Järvelin, Marjo-Riitta
2016-01-01
Children and adolescents exposed to multiple contextual risks are more likely to have academic difficulties and externalizing behavior problems than those who experience fewer risks. This study used data from the Northern Finland Birth Cohort 1986 (a population-based study; N = 6,961; 51% female) to investigate (a) the impact of cumulative contextual risk at birth on adolescents’ academic performance and misbehavior in school, (b) learning difficulties and/or externalizing behavior problems in childhood as intervening mechanisms in the association of cumulative contextual risk with functioning in adolescence, and (c) potential gender differences in the predictive associations of cumulative contextual risk at birth with functioning in childhood or adolescence. The results of the structural equation modeling analysis suggested that exposure to cumulative contextual risk at birth had negative associations with functioning 16 years later, and academic difficulties and externalizing behavior problems in childhood mediated some of the predictive relations. Gender, however, did not moderate any of the associations. Therefore, the findings of this study have implications for the prevention of learning and conduct problems in youth and future research on the impact of cumulative risk exposure. PMID:27665276
Analysis of Memory Codes and Cumulative Rehearsal in Observational Learning
ERIC Educational Resources Information Center
Bandura, Albert; And Others
1974-01-01
The present study examined the influence of memory codes varying in meaningfulness and retrievability and cumulative rehearsal on retention of observationally learned responses over increasing temporal intervals. (Editor)
Third Generation Wireless Phone Threat Assessment for Aircraft Communication and Navigation Radios
NASA Technical Reports Server (NTRS)
Nguyen, Truong X.; Koppen, Sandra V.; Smith, Laura J.; Williams, Reuben A.; Salud, Maria Theresa P.
2005-01-01
Radiated emissions in aircraft communication and navigation bands are measured from third generation (3G) wireless mobile phones. The two wireless technologies considered are the latest available to general consumers in the US. The measurements are conducted using reverberation chambers. The results are compared against baseline emissions from laptop computers and personal digital assistant devices that are currently allowed to operate on aircraft. Using existing interference path loss data and receiver interference thresholds, a risk assessment is performed for several aircraft communication and navigation radio systems. In addition, cumulative interference effects of multiple similar devices are conservatively estimated or bounded. The effects are computed by summing the interference power from individual devices, scaled according to the interference path loss at each device's location.
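Summing interference power across devices, as described in the last sentence, must be done in linear units rather than in dB. The sketch below shows the conversion round-trip; the emission level and path-loss figures are invented, not measurements from the report.

```python
import math

def dbm_to_mw(dbm):
    """Convert a power level in dBm to milliwatts."""
    return 10 ** (dbm / 10)

def mw_to_dbm(mw):
    """Convert milliwatts back to dBm."""
    return 10 * math.log10(mw)

emission_dbm = -40.0               # per-device radiated emission (invented)
path_loss_db = [70.0, 75.0, 80.0]  # one path-loss value per device location (invented)

# Cumulative interference: scale each device by its path loss, sum in mW.
received_mw = sum(dbm_to_mw(emission_dbm - pl) for pl in path_loss_db)
print(round(mw_to_dbm(received_mw), 1))  # -108.5
```

Note the cumulative level (-108.5 dBm) is about 1.5 dB above the strongest single contributor (-110 dBm): adding dB values directly would be meaningless, while summing in milliwatts bounds the multi-device effect.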
On designing a new cumulative sum Wilcoxon signed rank chart for monitoring process location
Nazir, Hafiz Zafar; Tahir, Muhammad; Riaz, Muhammad
2018-01-01
In this paper, ranked set sampling is used to develop a non-parametric location chart based on the Wilcoxon signed rank statistic. The average run length and some other characteristics of run length are used as the measures to assess the performance of the proposed scheme. Some selective distributions, including the Laplace (or double exponential), logistic, normal, contaminated normal and Student's t-distributions, are considered to examine the performance of the proposed Wilcoxon signed rank control chart. It has been observed that the proposed scheme shows superior shift detection ability compared with some of the competing counterpart schemes covered in this study. Moreover, the proposed control chart is also implemented and illustrated with a real data set. PMID:29664919
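The building block of such a chart is the Wilcoxon signed rank statistic computed per subgroup against a target value. The sketch below is a simplified version: ties are broken by sort order rather than mid-ranks, there is no ranked-set-sampling step, and the data are invented.

```python
def signed_rank(sample, target):
    """Sum of signed ranks of |x - target|; near 0 when the sample is
    centred on the target, strongly positive/negative under a shift.
    Simplification: exact-target values dropped, ties broken by sort order."""
    diffs = [x - target for x in sample if x != target]
    ranked = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    sr = 0
    for rank, i in enumerate(ranked, start=1):
        sr += rank if diffs[i] > 0 else -rank
    return sr

# An in-control subgroup (centred on the target) and one shifted upward.
print(signed_rank([9, 11, 8, 12, 10], 10))      # 2  -- near zero, in control
print(signed_rank([11, 13, 12, 14, 15], 10))    # 15 -- all ranks positive
```

A CUSUM chart would then accumulate these per-subgroup statistics (suitably standardized) and signal when the running sum crosses a decision interval.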
Geologic constraints on the upper limits of reserve growth
Stanley, Richard G.
2001-01-01
For many oil and gas fields, estimates of ultimate recovery (the sum of cumulative production plus estimated reserves) tend to increase from one year to the next, and the gain is called reserve growth. Forecasts of reserve growth by the U.S. Geological Survey rely on statistical analyses of historical records of oil and gas production and estimated reserves. The preproposal in this Open-File Report suggests that this traditional petroleum-engineering approach to reserve growth might be supplemented, or at least better understood, by using geological data from individual oil and gas fields, 3-D modeling software, and standard volumetric techniques to estimate in-place volumes of oil and gas. Such estimates, in turn, can be used to constrain the upper limits of reserve growth and ultimate recovery from those fields.
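The definition in the first sentence is just an accounting identity, shown here with invented figures: ultimate recovery is cumulative production plus remaining reserves, and reserve growth is its year-over-year gain.

```python
# Invented field history (million barrels) -- purely illustrative numbers.
years          = [1990, 1995, 2000]
cum_production = [10.0, 25.0, 45.0]   # produced to date
reserves       = [40.0, 35.0, 30.0]   # estimated remaining reserves

# Ultimate recovery = cumulative production + estimated reserves.
ultimate = [p + r for p, r in zip(cum_production, reserves)]
# Reserve growth = period-over-period change in ultimate recovery.
growth = [b - a for a, b in zip(ultimate, ultimate[1:])]

print(ultimate)  # [50.0, 60.0, 75.0]
print(growth)    # [10.0, 15.0] -- the estimate grew each period
```

A geologic in-place volume estimate for the field would cap the `ultimate` series, which is the constraint on reserve growth the report proposes.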
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lea, C.S.; Selvin, S.; Buffler, P.A.
This pilot study uses a unique method to calculate cumulative lifetime exposure to ultraviolet radiation-B (UVB) to determine if this refined method would indicate differences in lifetime cumulative UVB exposure between age- and sex-matched cases and controls. Forty-four age- and sex-matched cases and controls demonstrated no significant difference in mean cumulative lifetime UVB exposure based on the duration and location of residence. This pilot study suggests that further analysis of the dataset should be conducted to determine if the cumulative lifetime exposure hypothesis is of primary importance regarding the association between UVB exposure and the development of cutaneous malignant melanoma.
Predicting Family Burden Following Childhood Traumatic Brain Injury: A Cumulative Risk Approach
Josie, Katherine Leigh; Peterson, Catherine Cant; Burant, Christopher; Drotar, Dennis; Stancin, Terry; Wade, Shari L.; Yeates, Keith; Taylor, H. Gerry
2015-01-01
Objective To examine the utility of a cumulative risk index (CRI) in predicting the family burden of injury (FBI) over time in families of children with traumatic brain injury (TBI). Participants One hundred eight children with severe or moderate TBI and their families participated in the study. Measures The measures used in the study include the Socioeconomic Composite Index, Life Stressors and Social Resources Inventory—Adult Form, Vineland Adaptive Behavior Scales, Child Behavior Checklist, Children’s Depression Inventory, McMaster Family Assessment Device, Brief Symptom Inventory, and Family Burden of Injury Interview. In addition, information on injury-related risk was obtained via medical charts. Methods Participants were assessed immediately, 6, and 12 months postinjury and at a 4-year extended follow-up. Results Risk variables were dichotomized (ie, high- or low-risk) and summed to create a CRI for each child. The CRI predicted the FBI at all assessments, even after accounting for autocorrelations across repeated assessments. Path coefficients between the outcome measures at each time point were significant, as were all path coefficients from the CRI to family burden at each time point. In addition, all fit indices were above the recommended guidelines, and the χ2 statistic indicated a good fit to the data. Conclusions The current study provides initial support for the utility of a CRI (ie, an index of accumulated risk factors) in predicting family outcomes over time for children with TBI. The time period immediately after injury best predicts the future levels of FBI; however, cumulative risk continues to influence the change across successive postinjury assessments. These results suggest that clinical interventions could be proactive or preventive by intervening with identified “at-risk” subgroups immediately following injury. PMID:19033828
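The dichotomize-and-sum construction of the cumulative risk index described in the Results can be sketched directly. The variable names and cutoffs below are invented stand-ins, not the study's actual measures or thresholds.

```python
# Hypothetical risk variables and high-risk cutoffs (invented for illustration).
RISK_CUTOFFS = {
    "injury_severity":   3,
    "life_stressors":    10,
    "behavior_problems": 60,
    "parental_distress": 55,
}

def cumulative_risk_index(child):
    """Count how many risk variables exceed their high-risk cutoff
    (each dichotomized to 0/1, then summed)."""
    return sum(1 for var, cutoff in RISK_CUTOFFS.items()
               if child.get(var, 0) > cutoff)

child = {"injury_severity": 4, "life_stressors": 8,
         "behavior_problems": 72, "parental_distress": 61}
print(cumulative_risk_index(child))  # 3 -- three of four risks flagged
```

The appeal of such an index is that the single count, rather than each predictor separately, is entered into the longitudinal model, which is what lets accumulated risk predict family burden at every assessment.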
Van Cauwenberg, Jelle; Clarys, Peter; De Bourdeaudhuij, Ilse; Van Holle, Veerle; Verté, Dominique; De Witte, Nico; De Donder, Liesbeth; Buffel, Tine; Dury, Sarah; Deforche, Benedicte
2013-08-14
The physical environment may play a crucial role in promoting older adults' walking for transportation. However, previous studies on relationships between the physical environment and older adults' physical activity behaviors have reported inconsistent findings. A possible explanation for these inconsistencies is the focus upon studying environmental factors separately rather than simultaneously. The current study aimed to investigate the cumulative influence of perceived favorable environmental factors on older adults' walking for transportation. Additionally, the moderating effect of perceived distance to destinations on this relationship was studied. The sample comprised 50,685 non-institutionalized older adults residing in Flanders (Belgium). Cross-sectional data on demographics, environmental perceptions and frequency of walking for transportation were collected by self-administered questionnaires in the period 2004-2010. Perceived distance to destinations was categorized into short, medium, and large distance to destinations. An environmental index (a sum of favorable environmental factors, ranging from 0 to 7) was constructed to investigate the cumulative influence of favorable environmental factors. Multilevel logistic regression analyses were applied to predict probabilities of daily walking for transportation. For short distance to destinations, the probability of daily walking for transportation was significantly higher when seven favorable environmental factors were present compared to three, four or five. For medium distance to destinations, probabilities significantly increased for an increase from zero to four favorable environmental factors. For large distance to destinations, no relationship between the environmental index and walking for transportation was observed. Our findings suggest that the presence of multiple favorable environmental factors can motivate older adults to walk medium distances to facilities. 
Future research should focus upon the relationship between older adults' physical activity and multiple environmental factors simultaneously instead of separately.
2013-01-01
Background The physical environment may play a crucial role in promoting older adults’ walking for transportation. However, previous studies on relationships between the physical environment and older adults’ physical activity behaviors have reported inconsistent findings. A possible explanation for these inconsistencies is the focus upon studying environmental factors separately rather than simultaneously. The current study aimed to investigate the cumulative influence of perceived favorable environmental factors on older adults’ walking for transportation. Additionally, the moderating effect of perceived distance to destinations on this relationship was studied. Methods The sample was comprised of 50,685 non-institutionalized older adults residing in Flanders (Belgium). Cross-sectional data on demographics, environmental perceptions and frequency of walking for transportation were collected by self-administered questionnaires in the period 2004-2010. Perceived distance to destinations was categorized into short, medium, and large distance to destinations. An environmental index (=a sum of favorable environmental factors, ranging from 0 to 7) was constructed to investigate the cumulative influence of favorable environmental factors. Multilevel logistic regression analyses were applied to predict probabilities of daily walking for transportation. Results For short distance to destinations, probability of daily walking for transportation was significantly higher when seven compared to three, four or five favorable environmental factors were present. For medium distance to destinations, probabilities significantly increased for an increase from zero to four favorable environmental factors. For large distance to destinations, no relationship between the environmental index and walking for transportation was observed. 
Conclusions Our findings suggest that the presence of multiple favorable environmental factors can motivate older adults to walk medium distances to facilities. Future research should focus upon the relationship between older adults’ physical activity and multiple environmental factors simultaneously instead of separately. PMID:23945285
Posner-Schlossman syndrome in Wenzhou, China: a retrospective review study.
Jiang, Jun Hong; Zhang, Shao Dan; Dai, Ma Li; Yang, Juan Yuan; Xie, Yan Qian; Hu, Cheng; Mao, Guang Yun; Lu, Fan; Liang, Yuan Bo
2017-12-01
To describe the incidence of Posner-Schlossman syndrome (PSS) in Lucheng District, Wenzhou, China, over a 10-year period. We retrospectively reviewed the medical records of all inpatients and outpatients diagnosed with PSS during the years 2005-2014 in the Eye Hospital of Wenzhou Medical University. The keywords 'glaucomatocyclitic crisis', 'Posner-Schlossman syndrome' and 'PSS' were used for the retrieval. Only patients with a registered residential address in Lucheng District, where the hospital is located, were selected. The cumulative incidence and annual incidence of PSS were calculated based on the sum of the household registered population and the temporary resident population in Lucheng District. A total of 576 patients with PSS (339 men and 237 women) met the retrieval criteria. The mean age of these subjects at the first clinic visit was 40±15 years. Intraocular pressure (IOP) at the initial record was 31.91±15.37 mm Hg. The 10-year cumulative incidence of PSS in Lucheng District was 39.53 per 100 000 population, whereas the mean annual incidence in this area was 3.91 per 100 000 population. The majority of these patients were aged 20-59 years (83.9%). Men showed a significantly higher cumulative incidence of PSS than women (p=0.010). A higher rate of new-onset cases was found in spring (31%) than in other seasons (p=0.006). Our results suggest a relatively high incidence of PSS in Wenzhou, a southeastern city in China. Young male adults are particularly prone to be affected, especially in spring. However, the aetiology and other risk factors remain to be clarified. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
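The incidence figures above can be sanity-checked by back-calculation; note that the population denominator below is inferred from the reported rates, not taken from the study.

```python
# Back-of-the-envelope check of the reported PSS incidence figures.
# The at-risk population is an assumption inferred from the reported rates.
cases = 576
population = 1_457_000  # assumed combined registered + temporary population

cumulative_incidence = cases / population * 100_000  # per 100,000 over 10 years
mean_annual_incidence = cumulative_incidence / 10    # per 100,000 per year

print(round(cumulative_incidence, 2))   # close to the reported 39.53
print(round(mean_annual_incidence, 2))  # close to the reported 3.91
```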
Zhang, Shunqi; Yin, Tao; Ma, Ren; Liu, Zhipeng
2015-08-01
Functional imaging method of biological electrical characteristics based on magneto-acoustic effect gives valuable information of tissue in early tumor diagnosis, therein time and frequency characteristics analysis of magneto-acoustic signal is important in image reconstruction. This paper proposes wave summing method based on Green function solution for acoustic source of magneto-acoustic effect. Simulations and analysis under quasi 1D transmission condition are carried out to time and frequency characteristics of magneto-acoustic signal of models with different thickness. Simulation results of magneto-acoustic signal were verified through experiments. Results of the simulation with different thickness showed that time-frequency characteristics of magneto-acoustic signal reflected thickness of sample. Thin sample, which is less than one wavelength of pulse, and thick sample, which is larger than one wavelength, showed different summed waveform and frequency characteristics, due to difference of summing thickness. Experimental results verified theoretical analysis and simulation results. This research has laid a foundation for acoustic source and conductivity reconstruction to the medium with different thickness in magneto-acoustic imaging.
2009-07-16
R^2 = SSR/SSTO = 1 - SSE/SSTO
SSR = Σ(Ŷi - Ȳ)^2 : regression sum of squares, where Ȳ : mean value, Ŷi : value from the fitted line
SSE = Σ(Yi - Ŷi)^2 : error sum of squares
SSTO = Σ(Yi - Ȳ)^2 : total sum of squares, with SSTO = SSE + SSR
Statistical analysis: Coefficient of correlation
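The sum-of-squares decomposition can be sketched numerically; the data and fit below are illustrative only.

```python
# Sketch of the regression sum-of-squares decomposition: SSTO = SSR + SSE,
# and R^2 = SSR/SSTO = 1 - SSE/SSTO. Data are hypothetical.
def linear_fit(x, y):
    """Ordinary least squares for y = a + b*x."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    b = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
    return ybar - b * xbar, b

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.0, 9.8]
a, b = linear_fit(x, y)
yhat = [a + b * xi for xi in x]
ybar = sum(y) / len(y)

SSR = sum((yh - ybar) ** 2 for yh in yhat)             # regression sum of squares
SSE = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))   # error sum of squares
SSTO = sum((yi - ybar) ** 2 for yi in y)               # total sum of squares
R2 = SSR / SSTO

assert abs(SSTO - (SSR + SSE)) < 1e-9  # the decomposition holds
```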
Zhang, Mi; Wen, Xue Fa; Zhang, Lei Ming; Wang, Hui Min; Guo, Yi Wen; Yu, Gui Rui
2018-02-01
Extreme high temperature is one of the important extreme weather events that impact the forest ecosystem carbon cycle. In this study, applying CO2 flux and routine meteorological data measured during 2003-2012, we examined the impacts of extreme high temperature and extreme high temperature events on net carbon uptake of a subtropical coniferous plantation in Qianyanzhou. Combining with wavelet analysis, we analyzed environmental controls on net carbon uptake at different temporal scales when extreme high temperature and extreme high temperature events occurred. The results showed that mean daily cumulative NEE decreased by 51% in days with daily maximum air temperature between 35 ℃ and 40 ℃, compared with days in the range 30 ℃ to 34 ℃. The effects of extreme high temperature and extreme high temperature events on monthly and annual NEE were related to the strength and duration of the event. In 2003, when a strong extreme high temperature event occurred, the sum of monthly cumulative NEE in July and August was only -11.64 g C·m^-2·(2 month)^-1, a decrease of 90% compared with the multi-year average; at the same time, the relative variation of annual NEE reached -6.7%. In July and August, when extreme high temperature and extreme high temperature events occurred, air temperature (Ta) and vapor pressure deficit (VPD) were the dominant controls on the daily variation of NEE; the coherency of NEE with Ta and with VPD was 0.97 and 0.95, respectively. At 8-, 16-, and 32-day periods, Ta, VPD, soil water content at 5 cm depth (SWC), and precipitation (P) controlled NEE; the coherency of NEE with SWC and with P was higher than 0.8 at the monthly scale. The results indicate that atmospheric water deficit impacted NEE at short temporal scales when extreme high temperature and extreme high temperature events occurred, while both atmospheric water deficit and soil drought stress impacted NEE at long temporal scales in this ecosystem.
Stable and unstable chromosomal aberrations among Finnish nuclear power plant workers.
Lindholm, C
2001-01-01
Twenty nuclear power plant workers with relatively high recorded cumulative doses were studied using FISH chromosome painting and dicentric analysis after solid Giemsa staining. The results indicated that chronic exposure to ionising radiation can be detected on the group level using translocation analysis after chromosome painting, although the mean cumulative dose was approximately 100 mSv. A significant association between translocation frequency and cumulative dose was observed. Variability in the translocation yields among workers with similar recorded doses was large, resulting in a poor correlation between translocation frequencies and documented doses on the individual level. The yields of dicentric and acentric chromosomes were not correlated with the cumulative dose, indicating the inability of unstable aberrations to monitor long-term exposures. It was also shown that the unstable aberrations were not correlated with the most recent annual dose.
Wang, F; Yeung, K L; Chan, W C; Kwok, C C H; Leung, S L; Wu, C; Chan, E Y Y; Yu, I T S; Yang, X R; Tse, L A
2013-11-01
This study aimed to conduct a systematic review to sum up evidence on the associations between different aspects of night shift work and female breast cancer using a dose-response meta-analysis approach. We systematically searched all cohort and case-control studies published in English in MEDLINE, Embase, PsycINFO, ACP Journal Club and Global Health, from January 1971 to May 2013. We extracted effect measures (relative risk, RR; odds ratio, OR; or hazard ratio, HR) from individual studies to generate pooled results using meta-analysis approaches. A log-linear dose-response regression model was used to evaluate the relationship between various indicators of exposure to night shift work and breast cancer risk. The Downs and Black scale was applied to assess the methodological quality of included studies. Ten studies were included in the meta-analysis. The pooled adjusted relative risk for the association between 'ever exposed to night shift work' and breast cancer was 1.19 [95% confidence interval (CI) 1.05-1.35]. Further meta-analyses of the dose-response relationship showed that every 5-year increase in exposure to night shift work would increase a woman's risk of breast cancer by 3% (pooled RR = 1.03, 95% CI 1.01-1.05; Pheterogeneity < 0.001). Our meta-analysis also suggested that an increase of 500 night shifts would result in a 13% (RR = 1.13, 95% CI 1.07-1.21; Pheterogeneity = 0.06) increase in breast cancer risk. This systematic review updated the evidence that a positive dose-response relationship is likely present between breast cancer and increasing years of employment and cumulative shifts worked.
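The pooling step can be sketched with DerSimonian-Laird random effects on log relative risks; the per-study values below are hypothetical, not the studies in this review.

```python
# Minimal DerSimonian-Laird random-effects pooling of log relative risks.
# All study effect sizes and standard errors are made up for illustration.
import math

def dersimonian_laird(log_rr, se):
    """Return (pooled log RR, SE of pooled log RR)."""
    w = [1 / s**2 for s in se]                       # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, log_rr)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, log_rr))  # Cochran's Q
    df = len(log_rr) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-study variance
    w_re = [1 / (s**2 + tau2) for s in se]           # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, log_rr)) / sum(w_re)
    return pooled, math.sqrt(1 / sum(w_re))

log_rr = [math.log(1.10), math.log(1.25), math.log(0.95), math.log(1.30)]
se = [0.10, 0.15, 0.12, 0.20]
pooled, se_p = dersimonian_laird(log_rr, se)
rr = math.exp(pooled)
lo, hi = math.exp(pooled - 1.96 * se_p), math.exp(pooled + 1.96 * se_p)
```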
About the cumulants of periodic signals
NASA Astrophysics Data System (ADS)
Barrau, Axel; El Badaoui, Mohammed
2018-01-01
This note studies the cumulants of time series. As these functions, which originate in probability theory, are commonly used as features of deterministic signals, their classical properties are examined in this modified framework. We show that the additivity of cumulants, which is ensured for independent random variables, requires a different hypothesis here. Practical applications are proposed, in particular an analysis of the failure of the JADE algorithm to separate certain specific periodic signals.
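The first four cumulants of a sampled signal can be estimated from its raw moments; a minimal sketch follows, with an arbitrary test signal (the paper's specific signals are not reproduced here).

```python
# Estimate the first four cumulants of a sampled signal from raw moments:
# k1 = m1, k2 = m2 - m1^2, k3 = m3 - 3 m2 m1 + 2 m1^3,
# k4 = m4 - 4 m3 m1 - 3 m2^2 + 12 m2 m1^2 - 6 m1^4.
import math

def cumulants(x):
    """First four cumulants estimated from raw sample moments."""
    n = len(x)
    m = [sum(xi**k for xi in x) / n for k in range(1, 5)]  # m1..m4
    k1 = m[0]
    k2 = m[1] - m[0] ** 2
    k3 = m[2] - 3 * m[1] * m[0] + 2 * m[0] ** 3
    k4 = (m[3] - 4 * m[2] * m[0] - 3 * m[1] ** 2
          + 12 * m[1] * m[0] ** 2 - 6 * m[0] ** 4)
    return k1, k2, k3, k4

# One full period of a unit sinusoid: k1 = k3 = 0, k2 = 1/2, k4 = -3/8.
N = 1000
x = [math.sin(2 * math.pi * k / N) for k in range(N)]
k1, k2, k3, k4 = cumulants(x)
```

The negative fourth cumulant (sub-Gaussian signal) is exactly the kind of feature blind source separation methods such as JADE rely on.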
Pesticide Cumulative Risk Assessment: Framework for Screening Analysis
This document provides guidance on how to screen groups of pesticides for cumulative evaluation using a two-step approach: begin with evaluation of available toxicological information and, if necessary, follow up with a risk-based screening approach.
Sexton, Ken
2012-01-01
Systematic evaluation of cumulative health risks from the combined effects of multiple environmental stressors is becoming a vital component of risk-based decisions aimed at protecting human populations and communities. This article briefly examines the historical development of cumulative risk assessment as an analytical tool, and discusses current approaches for evaluating cumulative health effects from exposure to both chemical mixtures and combinations of chemical and nonchemical stressors. A comparison of stressor-based and effects-based assessment methods is presented, and the potential value of focusing on viable risk management options to limit the scope of cumulative evaluations is discussed. The ultimate goal of cumulative risk assessment is to provide answers to decision-relevant questions based on organized scientific analysis; even if the answers, at least for the time being, are inexact and uncertain. PMID:22470298
Lacny, Sarah; Wilson, Todd; Clement, Fiona; Roberts, Derek J; Faris, Peter; Ghali, William A; Marshall, Deborah A
2018-01-01
Kaplan-Meier survival analysis overestimates cumulative incidence in competing risks (CRs) settings. The extent of overestimation (or its clinical significance) has been questioned, and CRs methods are infrequently used. This meta-analysis compares the Kaplan-Meier method to the cumulative incidence function (CIF), a CRs method. We searched MEDLINE, EMBASE, BIOSIS Previews, Web of Science (1992-2016), and article bibliographies for studies estimating cumulative incidence using the Kaplan-Meier method and CIF. For studies with sufficient data, we calculated pooled risk ratios (RRs) comparing Kaplan-Meier and CIF estimates using DerSimonian and Laird random effects models. We performed stratified meta-analyses by clinical area, rate of CRs (CRs/events of interest), and follow-up time. Of 2,192 identified abstracts, we included 77 studies in the systematic review and meta-analyzed 55. The pooled RR demonstrated the Kaplan-Meier estimate was 1.41 [95% confidence interval (CI): 1.36, 1.47] times higher than the CIF. Overestimation was highest among studies with high rates of CRs [RR = 2.36 (95% CI: 1.79, 3.12)], studies related to hepatology [RR = 2.60 (95% CI: 2.12, 3.19)], and obstetrics and gynecology [RR = 1.84 (95% CI: 1.52, 2.23)]. The Kaplan-Meier method overestimated the cumulative incidence across 10 clinical areas. Using CRs methods will ensure accurate results inform clinical and policy decisions. Copyright © 2017 Elsevier Inc. All rights reserved.
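The direction of the bias is easy to demonstrate on toy data: 1 - KM treats competing events as censoring, while the Aalen-Johansen cumulative incidence function does not. The data below are hypothetical.

```python
# Toy comparison of 1 - Kaplan-Meier vs the cumulative incidence function
# under competing risks. Event codes: 1 = event of interest,
# 2 = competing event, 0 = censored. Data are hypothetical.

def km_complement(times, events, horizon):
    """1 - KM for the event of interest; competing events treated as censored."""
    surv = 1.0
    for t in sorted(set(times)):
        if t > horizon:
            break
        at_risk = sum(1 for ti in times if ti >= t)
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        if at_risk:
            surv *= 1 - d / at_risk
    return 1 - surv

def cif(times, events, horizon):
    """Aalen-Johansen CIF: hazard of interest weighted by overall survival."""
    surv_all, inc = 1.0, 0.0   # survival from *any* event, accumulated incidence
    for t in sorted(set(times)):
        if t > horizon:
            break
        at_risk = sum(1 for ti in times if ti >= t)
        d1 = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        d_any = sum(1 for ti, ei in zip(times, events) if ti == t and ei != 0)
        if at_risk:
            inc += surv_all * d1 / at_risk
            surv_all *= 1 - d_any / at_risk
    return inc

times  = [1, 2, 2, 3, 4, 5, 6, 7, 8, 9]
events = [1, 2, 1, 0, 1, 2, 1, 0, 1, 2]
km = km_complement(times, events, 10)
aj = cif(times, events, 10)
assert km >= aj  # KM overestimates cumulative incidence under competing risks
```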
Leimu, Roosa; Koricheva, Julia
2004-01-01
Temporal changes in the magnitude of research findings have recently been recognized as a general phenomenon in ecology, and have been attributed to the delayed publication of non-significant results and disconfirming evidence. Here we introduce a method of cumulative meta-analysis which allows detection of both temporal trends and publication bias in the ecological literature. To illustrate the application of the method, we used two datasets from recently conducted meta-analyses of studies testing two plant defence theories. Our results revealed three phases in the evolution of the treatment effects. Early studies strongly supported the hypothesis tested, but the magnitude of the effect decreased considerably in later studies. In the latest studies, a trend towards an increase in effect size was observed. In one of the datasets, a cumulative meta-analysis revealed publication bias against studies reporting disconfirming evidence; such studies were published in journals with a lower impact factor compared to studies with results supporting the hypothesis tested. Correlation analysis revealed neither temporal trends nor evidence of publication bias in the datasets analysed. We thus suggest that cumulative meta-analysis should be used as a visual aid to detect temporal trends and publication bias in research findings in ecology in addition to the correlative approach. PMID:15347521
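The core of cumulative meta-analysis is simply re-pooling after each study in publication order; a fixed-effect, inverse-variance sketch with made-up effect sizes:

```python
# Cumulative meta-analysis sketch: pool studies one at a time in publication
# order and watch the trajectory of the pooled effect. Fixed-effect,
# inverse-variance weighting; all study values are hypothetical.
def cumulative_meta(effects, se):
    """Fixed-effect pooled estimate after each successive study."""
    pooled = []
    for k in range(1, len(effects) + 1):
        w = [1 / s**2 for s in se[:k]]
        pooled.append(sum(wi * yi for wi, yi in zip(w, effects[:k])) / sum(w))
    return pooled

# Hypothetical effect sizes (e.g. log response ratios) by publication year:
effects = [0.80, 0.70, 0.45, 0.30, 0.20, 0.15]
se = [0.20, 0.25, 0.15, 0.15, 0.10, 0.10]
trajectory = cumulative_meta(effects, se)
# A declining trajectory is the temporal-trend signature discussed above.
assert trajectory[0] > trajectory[-1]
```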
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fillion, O; Gingras, L; Archambault, L
2015-06-15
Purpose: The knowledge of dose accumulation in patient tissues in radiotherapy helps in determining treatment outcomes. This project aims at providing a workflow to map cumulative doses that takes into account interfraction organ motion without the need for manual re-contouring. Methods: Five prostate cancer patients were studied. Each patient had a planning CT (pCT) and 5 to 13 CBCT scans. On each series, a physician contoured the prostate, rectum, bladder, seminal vesicles and intestine. First, a deformable image registration (DIR) of the pCTs onto the daily CBCTs yielded registered CTs (rCTs). Each rCT combined the accurate CT numbers of the pCT with the daily anatomy of the CBCT. Second, the original plans (220 cGy per fraction for 25 fractions) were copied on the rCT for dose re-calculation. Third, the DIR software Elastix was used to find the inverse transform from the rCT to the pCT. This transformation was then applied to the rCT dose grid to map the dose voxels back to their pCT location. Finally, the sum of these deformed dose grids for each patient was applied on the pCT to calculate the actual dose delivered to organs. Results: The discrepancies between the planned D98 and D2 and these indices re-calculated on the rCT are, on average, −1 ± 1 cGy and 1 ± 2 cGy per fraction, respectively. For fractions with large anatomical motion, the D98 discrepancy on the re-calculated dose grid mapped onto the pCT can rise to −17 ± 4 cGy. The obtained cumulative dose distributions illustrate the same behavior. Conclusion: This approach allowed the evaluation of cumulative doses to organs with the help of uncontoured daily CBCT scans. With this workflow, the easy evaluation of doses delivered for EBRT treatments could ultimately lead to better follow-up of prostate cancer patients.
Cao, Xia; Zhou, Jiansong; Yuan, Hong; Chen, Zhiheng
2015-12-21
The American Heart Association developed the Life's Simple 7 metric for defining cardiovascular health. Little is known, however, about whether co-occurring reproductive factors, which affect endogenous oestrogen levels during a woman's life, also influence ideal cardiovascular health in postmenopausal women. Using data from a cross-sectional study with a convenience sample of 1,625 postmenopausal women (median age, 60.0 years) in a medical health checkup program at a general hospital in central south China in 2013-2014, we examined the association between cumulative reproductive risk and ideal cardiovascular health in postmenopausal women. A cumulative risk score (range 0 to 4) was created by summing four reproductive risk factors (age at menarche, age at menopause, number of children, and pregnancy losses) present in each individual from binary variables in which 0 stands for a favorable and 1 for a less-than-favorable level. Ideal levels for each component in Life's Simple 7 (blood pressure, cholesterol, glucose, BMI, smoking, physical activity, and diet) were used to create an ideal Life's Simple 7 score [0-1 (low), 2, 3, 4, 5 and 6-7 (high)]. Participants with earlier age at menarche (odds ratio [OR] = 0.42 [95% CI 0.26-0.48]), earlier age at menopause [0.46 (0.32-0.58)], more than three children (0.42 [0.38-0.56]) and a history of pregnancy losses [0.76 (0.66-0.92)] were more likely to attain a low (0-1) ideal Life's Simple 7 score after adjustment for age. Participants were more likely to attain a low (0-1) score as the number of reproductive risk factors increased [OR (95% CI) of 0.52 (0.42-0.66), 0.22 (0.16-0.26), and 0.16 (0.12-0.22) for cumulative reproductive risk scores of 1, 2, and 3 or 4, respectively, each versus 0]. Postmenopausal Chinese women with an increasing number of reproductive risk factors were progressively less likely to attain ideal levels of cardiovascular health factors.
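The score construction is a simple sum of binary indicators; a sketch follows, with cut-offs chosen for illustration only (the study's exact thresholds are not stated in this abstract).

```python
# Sketch of the cumulative reproductive risk score (0-4): one point per
# less-than-favorable binary factor. The cut-offs below are assumptions
# for illustration, not the study's published thresholds.
def risk_score(age_menarche, age_menopause, n_children, pregnancy_losses):
    score = 0
    score += age_menarche < 12    # earlier menarche (assumed cut-off)
    score += age_menopause < 45   # earlier menopause (assumed cut-off)
    score += n_children > 3       # more than three children
    score += pregnancy_losses > 0 # any history of pregnancy loss
    return int(score)

assert risk_score(11, 44, 4, 1) == 4  # all four factors present
assert risk_score(14, 52, 2, 0) == 0  # none present
```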
Kim, Hyung-Don; Shim, Ju Hyun; Kim, Gi-Ae; Shin, Yong Moon; Yu, Eunsil; Lee, Sung-Gyu; Lee, Danbi; Kim, Kang Mo; Lim, Young-Suk; Lee, Han Chu; Chung, Young-Hwa; Lee, Yung Sang
2015-05-01
We investigated the optimal radiologic method for measuring hepatocellular carcinoma (HCC) treated by transarterial chemoembolization (TACE) in order to assess suitability for liver transplantation (LT). A total of 271 HCC patients undergoing TACE prior to LT were classified according to both the Milan and up-to-seven criteria after TACE by using the enhancement or size method on computed tomography images. Cumulative incidence function curves with competing risks regression were used in post-LT time-to-recurrence analysis. Predictive accuracy for recurrence was compared using estimates of the area under the time-dependent receiver operating characteristic curve (AUC). Of the 271 patients, 246 (90.8%) and 164 (60.5%) fell within the Milan criteria and 252 (93.0%) and 210 (77.5%) fell within the up-to-seven criteria, when assessed by the enhancement and size methods, respectively. Competing risks regression analyses adjusting for covariates indicated that meeting the criteria by the enhancement and by the size methods was independently related to post-LT time-to-recurrence in the Milan or up-to-seven model. Higher AUC values were observed with the size method only in the up-to-seven model (p<0.05). Mean differences in the sum of tumor diameters and the number of tumors between pathologic and radiologic findings were significantly smaller with the enhancement method (p<0.05). Cumulative incidence curves showed similar recurrence results between patients with and without prior TACE within the criteria based on either method, except for within up-to-seven by the enhancement method (p=0.017). The enhancement method is a reliable tool for assessing the control or downstaging of HCC within the Milan criteria after TACE, although the size method may be preferable when applying the up-to-seven criterion. Copyright © 2014 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.
Surface slip during large Owens Valley earthquakes
Haddon, E.K.; Amos, C.B.; Zielke, O.; Jayko, Angela S.; Burgmann, R.
2016-01-01
The 1872 Owens Valley earthquake is the third largest known historical earthquake in California. Relatively sparse field data and a complex rupture trace, however, inhibited attempts to fully resolve the slip distribution and reconcile the total moment release. We present a new, comprehensive record of surface slip based on lidar and field investigation, documenting 162 new measurements of laterally and vertically displaced landforms for 1872 and prehistoric Owens Valley earthquakes. Our lidar analysis uses a newly developed analytical tool to measure fault slip based on cross-correlation of sublinear topographic features and to produce a uniquely shaped probability density function (PDF) for each measurement. Stacking PDFs along strike to form cumulative offset probability distribution plots (COPDs) highlights common values corresponding to single and multiple-event displacements. Lateral offsets for 1872 vary systematically from ∼1.0 to 6.0 m and average 3.3 ± 1.1 m (2σ). Vertical offsets are predominantly east-down between ∼0.1 and 2.4 m, with a mean of 0.8 ± 0.5 m. The average lateral-to-vertical ratio compiled at specific sites is ∼6:1. Summing displacements across subparallel, overlapping rupture traces implies a maximum of 7–11 m and net average of 4.4 ± 1.5 m, corresponding to a geologic Mw ∼7.5 for the 1872 event. We attribute progressively higher-offset lateral COPD peaks at 7.1 ± 2.0 m, 12.8 ± 1.5 m, and 16.6 ± 1.4 m to three earlier large surface ruptures. Evaluating cumulative displacements in context with previously dated landforms in Owens Valley suggests relatively modest rates of fault slip, averaging between ∼0.6 and 1.6 mm/yr (1σ) over the late Quaternary.
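The COPD stacking idea can be sketched by summing one unit-area PDF per offset measurement along strike; the measurements below are hypothetical, and the real per-measurement PDFs are irregularly shaped rather than Gaussian.

```python
# Sketch of a cumulative offset probability distribution (COPD): sum one
# unit-area Gaussian PDF per displaced-landform measurement and read off
# the peaks. Offsets and uncertainties are hypothetical.
import math

def copd(offsets, sigmas, grid):
    """Sum of unit-area Gaussian PDFs, one per offset measurement."""
    out = []
    for x in grid:
        total = 0.0
        for mu, s in zip(offsets, sigmas):
            total += math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))
        out.append(total)
    return out

# Hypothetical lateral offsets (m): a single-event cluster near 3.3 m
# and a smaller multi-event cluster near 7 m.
offsets = [3.0, 3.2, 3.4, 3.5, 6.8, 7.1]
sigmas = [0.4, 0.3, 0.5, 0.4, 0.6, 0.5]
grid = [i * 0.1 for i in range(0, 101)]  # 0 to 10 m
density = copd(offsets, sigmas, grid)
peak = grid[density.index(max(density))]  # highest COPD peak location
```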
On the Hardness of Subset Sum Problem from Different Intervals
NASA Astrophysics Data System (ADS)
Kogure, Jun; Kunihiro, Noboru; Yamamoto, Hirosuke
The subset sum problem, often called the knapsack problem, is known to be NP-hard, and there are several cryptosystems based on it. Assuming an oracle for the lattice shortest vector problem, the low-density attack algorithm of Lagarias and Odlyzko and its variants solve the subset sum problem efficiently when the “density” of the given problem is smaller than some threshold. When the density is defined in the context of knapsack-type cryptosystems, weights are usually assumed to be chosen uniformly at random from the same interval. In this paper, we focus on general subset sum problems, where this assumption may not hold. We assume that weights are chosen from different intervals, and analyze the effect on the success probability of the above algorithms both theoretically and experimentally. A possible application of our result in the context of knapsack cryptosystems is the security analysis when the data size of public keys is reduced.
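For concreteness, here is the conventional density definition together with a meet-in-the-middle solver for small instances; the weights below are fixed by hand to mimic two different intervals and are purely illustrative.

```python
# Conventional knapsack density n / log2(max weight), plus a
# meet-in-the-middle subset sum solver (fine for small n; exponential
# in general). Weights mimic two different sampling intervals.
import math
from itertools import combinations

def density(weights):
    """Conventional density of a subset sum instance."""
    return len(weights) / math.log2(max(weights))

def subset_sum(weights, target):
    """Return sorted indices of a subset summing to target, or None."""
    half = len(weights) // 2
    left, right = weights[:half], weights[half:]
    sums_left = {}
    for r in range(len(left) + 1):
        for combo in combinations(range(len(left)), r):
            sums_left[sum(left[i] for i in combo)] = combo
    for r in range(len(right) + 1):
        for combo in combinations(range(len(right)), r):
            rest = target - sum(right[i] for i in combo)
            if rest in sums_left:
                picked = set(sums_left[rest]) | {half + i for i in combo}
                return sorted(picked)
    return None

# Four small weights and four large ones, as if drawn from two intervals:
weights = [103, 217, 344, 481, 55023, 61741, 70877, 80551]
target = 217 + 481 + 61741 + 80551
sol = subset_sum(weights, target)
assert sol is not None and sum(weights[i] for i in sol) == target
```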
Joint graph cut and relative fuzzy connectedness image segmentation algorithm.
Ciesielski, Krzysztof Chris; Miranda, Paulo A V; Falcão, Alexandre X; Udupa, Jayaram K
2013-12-01
We introduce an image segmentation algorithm, called GC(sum)(max), which combines, in a novel manner, the strengths of two popular algorithms: Relative Fuzzy Connectedness (RFC) and (standard) Graph Cut (GC). We show, both theoretically and experimentally, that GC(sum)(max) preserves the robustness of RFC with respect to the seed choice (thus avoiding the "shrinking problem" of GC), while keeping GC's stronger control over the problem of "leaking through poorly defined boundary segments." The analysis of GC(sum)(max) is greatly facilitated by our recent theoretical results showing that RFC can be described within the framework of Generalized GC (GGC) segmentation algorithms. In our implementation of GC(sum)(max) we use, as a subroutine, a version of the RFC algorithm (based on the Image Foresting Transform) that runs (provably) in linear time with respect to the image size. This results in GC(sum)(max) running in time close to linear. Experimental comparison of GC(sum)(max) to GC, an iterative version of RFC (IRFC), and power watershed (PW), based on a variety of medical and non-medical images, indicates superior accuracy of GC(sum)(max) over these other methods, resulting in a rank ordering of GC(sum)(max)>PW∼IRFC>GC. Copyright © 2013 Elsevier B.V. All rights reserved.
Dong, Huiru; Robison, Leslie L; Leisenring, Wendy M; Martin, Leah J; Armstrong, Gregory T; Yasui, Yutaka
2015-04-01
Cumulative incidence has been widely used to estimate the cumulative probability of developing an event of interest by a given time, in the presence of competing risks. When it is of interest to measure the total burden of recurrent events in a population, however, the cumulative incidence method is not appropriate because it considers only the first occurrence of the event of interest for each individual in the analysis: Subsequent occurrences are not included. Here, we discuss a straightforward and intuitive method termed "mean cumulative count," which reflects a summarization of all events that occur in the population by a given time, not just the first event for each subject. We explore the mathematical relationship between mean cumulative count and cumulative incidence. Detailed calculation of mean cumulative count is described by using a simple hypothetical example, and the computation code with an illustrative example is provided. Using follow-up data from January 1975 to August 2009 collected in the Childhood Cancer Survivor Study, we show applications of mean cumulative count and cumulative incidence for the outcome of subsequent neoplasms to demonstrate different but complementary information obtained from the 2 approaches and the specific utility of the former. © The Author 2015. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
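The distinction is easy to see in the simplest setting of complete follow-up with no censoring or competing events, where the mean cumulative count reduces to total events divided by cohort size (the published estimator additionally applies survival-type weighting, which this sketch omits):

```python
# Mean cumulative count (MCC) vs cumulative incidence on toy data with
# complete follow-up: MCC counts every event; cumulative incidence counts
# only the first event per subject. Data are hypothetical.

def mean_cumulative_count(cohort, t, n):
    """Average number of events per subject by time t (all events count)."""
    total = sum(1 for times in cohort for ti in times if ti <= t)
    return total / n

def cumulative_incidence(cohort, t, n):
    """Proportion of subjects with at least one event by time t."""
    first = sum(1 for times in cohort if times and min(times) <= t)
    return first / n

# Five subjects; each list gives the times of (possibly recurrent) events:
cohort = [[2, 5, 9], [4], [], [3, 7], []]
n = len(cohort)
mcc = mean_cumulative_count(cohort, 10, n)  # 6 events / 5 subjects
ci = cumulative_incidence(cohort, 10, n)    # 3 of 5 subjects
assert mcc >= ci  # MCC captures the full recurrent-event burden
```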
Bjorken unpolarized and polarized sum rules: comparative analysis of large- NF expansions
NASA Astrophysics Data System (ADS)
Broadhurst, D. J.; Kataev, A. L.
2002-09-01
Analytical all-orders results are presented for the one-renormalon-chain contributions to the Bjorken unpolarized sum rule for the F1 structure function of νN deep-inelastic scattering in the large-NF limit. The feasibility of estimating higher order perturbative QCD corrections, by the process of naive nonabelianization (NNA), is studied, in anticipation of measurement of this sum rule at a Neutrino Factory. A comparison is made with similar estimates obtained for the Bjorken polarized sum rule. Application of the NNA procedure to correlators of quark vector and scalar currents, in the euclidean region, is compared with recent analytical results for the O(αs4NF2) terms.
NASA Astrophysics Data System (ADS)
Sneath, P. H. A.
A BASIC program is presented for significance tests to determine whether a dendrogram is derived from clustering of points that belong to a single multivariate normal distribution. The significance tests are based on statistics of the Kolmogorov-Smirnov type, obtained by comparing the observed cumulative graph of branch levels with a graph for the hypothesis of multivariate normality. The program also permits testing whether the dendrogram could be from a cluster of lower dimensionality due to character correlations. The program makes provision for three similarity coefficients: (1) Euclidean distances, (2) squared Euclidean distances, and (3) Simple Matching Coefficients; and for five cluster methods: (1) WPGMA, (2) UPGMA, (3) Single Linkage (or Minimum Spanning Trees), (4) Complete Linkage, and (5) Ward's Increase in Sums of Squares. The program is entitled DENBRAN.
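The Kolmogorov-Smirnov-type comparison can be sketched as the maximum gap between the empirical cumulative graph of branch levels and a hypothesized curve; the branch levels and reference CDF below are illustrative stand-ins for the curves DENBRAN tabulates.

```python
# KS-type distance between the empirical cumulative graph of dendrogram
# branch levels and a hypothesized cumulative curve. Branch levels and the
# reference CDF are illustrative only.

def ks_distance(levels, ref_cdf):
    """Max gap between the empirical CDF of `levels` and `ref_cdf`."""
    xs = sorted(levels)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = ref_cdf(x)
        # compare against the ECDF just before and just after the jump at x
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return d

# Hypothetical branch (fusion) levels from a clustering run:
levels = [0.21, 0.34, 0.38, 0.45, 0.52, 0.58, 0.63, 0.71, 0.80, 0.93]

def uniform_cdf(x):
    """Stand-in null hypothesis: branch levels uniform on [0, 1]."""
    return min(1.0, max(0.0, x))

D = ks_distance(levels, uniform_cdf)  # compare to a critical value in practice
```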
Detection of small surface defects using DCT based enhancement approach in machine vision systems
NASA Astrophysics Data System (ADS)
He, Fuqiang; Wang, Wen; Chen, Zichen
2005-12-01
Utilizing a DCT-based enhancement approach, an improved small-defect detection algorithm for real-time leather surface inspection was developed. A two-stage decomposition procedure was proposed to extract an odd-odd frequency matrix after the digital image has been transformed to the DCT domain. Then, a reverse cumulative sum algorithm was proposed to detect the transition points of the gentle curves plotted from the odd-odd frequency matrix. The best radius of the cutting sector was computed in terms of the transition points, and the high-pass filtering operation was implemented. The filtered image was then inverse-transformed back to the spatial domain. Finally, the restored image was segmented by an entropy method and some defect features were calculated. Experimental results show the proposed method achieves a small-defect detection rate of 94%.
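One plausible reading of the reverse cumulative sum step is a tail-first cusum change-point detector: accumulate deviations from the mean starting at the end of the curve and take the extremum as the transition point. The curve below is synthetic, and the paper's exact formulation may differ.

```python
# Sketch of transition-point detection on a slowly varying curve via a
# reverse cumulative sum: accumulate deviations from the mean from the tail;
# the extremum of the cusum marks the change point. Synthetic curve only.

def reverse_cusum_changepoint(curve):
    mean = sum(curve) / len(curve)
    cusum, total = [], 0.0
    for v in reversed(curve):        # accumulate from the tail
        total += v - mean
        cusum.append(total)
    cusum.reverse()                   # re-index to match `curve`
    return max(range(len(cusum)), key=lambda i: abs(cusum[i]))

# Elevated head (indices 0-5), flat tail from index 6 onward:
curve = [9.0, 9.2, 8.9, 9.1, 9.0, 8.8, 2.1, 2.0, 1.9, 2.2, 2.0, 2.1]
t = reverse_cusum_changepoint(curve)  # index of the transition
```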
Milne, R K; Yeo, G F; Edeson, R O; Madsen, B W
1988-04-22
Stochastic models of ion channels have been based largely on Markov theory where individual states and transition rates must be specified, and sojourn-time densities for each state are constrained to be exponential. This study presents an approach based on random-sum methods and alternating-renewal theory, allowing individual states to be grouped into classes provided the successive sojourn times in a given class are independent and identically distributed. Under these conditions Markov models form a special case. The utility of the approach is illustrated by considering the effects of limited time resolution (modelled by using a discrete detection limit, xi) on the properties of observable events, with emphasis on the observed open-time (xi-open-time). The cumulants and Laplace transform for a xi-open-time are derived for a range of Markov and non-Markov models; several useful approximations to the xi-open-time density function are presented. Numerical studies show that the effects of limited time resolution can be extreme, and also highlight the relative importance of the various model parameters. The theory could form a basis for future inferential studies in which parameter estimation takes account of limited time resolution in single channel records. Appendixes include relevant results concerning random sums and a discussion of the role of exponential distributions in Markov models.
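The effect of a discrete detection limit can be illustrated by simulation: in an alternating open/closed renewal process, closed gaps shorter than xi go undetected, so the flanking open sojourns merge into one observed xi-open-time. The sketch below uses exponential sojourns with arbitrary rates; it illustrates the phenomenon, not the paper's cumulant derivations.

```python
# Illustrative simulation of limited time resolution in a single-channel
# record: closed gaps shorter than the detection limit xi are missed, so
# neighbouring open times concatenate into one observed "xi-open-time".
# All rates and xi are arbitrary choices.
import random

def xi_open_times(mean_open, mean_closed, xi, n_sojourns, rng):
    opens = [rng.expovariate(1 / mean_open) for _ in range(n_sojourns)]
    gaps = [rng.expovariate(1 / mean_closed) for _ in range(n_sojourns - 1)]
    observed, current = [], opens[0]
    for o, g in zip(opens[1:], gaps):
        if g < xi:              # gap undetected: openings merge
            current += g + o
        else:
            observed.append(current)
            current = o
    observed.append(current)
    return observed

rng = random.Random(42)
obs = xi_open_times(mean_open=1.0, mean_closed=0.5, xi=0.2,
                    n_sojourns=20000, rng=rng)
obs_mean = sum(obs) / len(obs)
# Observed open times are longer on average than the true mean of 1.0,
# and fewer of them are seen than the 20000 true openings.
```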
On the fluctuations of sums of independent random variables.
Feller, W
1969-07-01
If X(1), X(2), ... are independent random variables with zero expectation and finite variances, the cumulative sums S(n) are, on the average, of the order of magnitude s(n), where s(n)^2 = E(S(n)^2). The occasional maxima of the ratios S(n)/s(n) are surprisingly large, and the problem is to estimate the extent of their probable fluctuations. Specifically, let S(n)* = (S(n) - b(n))/a(n), where {a(n)} and {b(n)} are two numerical sequences. For any interval I, denote by p(I) the probability that the event S(n)* ∈ I occurs for infinitely many n. Under mild conditions on {a(n)} and {b(n)}, it is shown that p(I) equals 0 or 1 according as a certain series converges or diverges. To obtain the upper limit of S(n)/a(n), one has to set b(n) = ±ε a(n), but finer results are obtained with smaller b(n). No assumptions concerning the underlying distributions are made; the criteria explain structurally which features of {X(n)} affect the fluctuations, but for concrete results something about P{S(n) > a(n)} must be known. For example, a complete solution is possible when the X(n) are normal, replacing the classical law of the iterated logarithm. Further concrete estimates may be obtained by combining the new criteria with some recently developed limit theorems.
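Feller's criteria refine the classical law of the iterated logarithm, which in the normal case states that lim sup S(n)/sqrt(2 n log log n) = 1 almost surely. A quick numerical illustration of this normalization (ours, not from the paper; a single simulated path of length 10^5):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)   # zero mean, unit variance increments
s = np.cumsum(x)                   # cumulative sums S(n)
n = np.arange(1, len(x) + 1)

# Law-of-the-iterated-logarithm normalization, defined for n >= 3
mask = n >= 3
ratio = s[mask] / np.sqrt(2 * n[mask] * np.log(np.log(n[mask])))
print(ratio.max())  # the lim sup of this ratio tends to 1 as n grows
```

A single finite path typically peaks somewhat below 1; the almost-sure limit emerges only asymptotically.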
Probability and Statistics in Sensor Performance Modeling
2010-12-01
The software program is called Environmental Awareness for Sensor and Emitter Employment (EASEE). The report covers important numerical issues in the implementation and statistical analysis for measuring sensor performance; the remainder of this record is a fragmentary acronym list (cdf, cumulative distribution function; DST, decision-support tool).
NASA Astrophysics Data System (ADS)
Shaochuan, Lu; Vere-Jones, David
2011-10-01
The paper studies the statistical properties of deep earthquakes around North Island, New Zealand. We first evaluate the catalogue coverage and completeness of deep events according to cusum (cumulative sum) statistics and earlier literature. The epicentral, depth, and magnitude distributions of deep earthquakes are then discussed. It is worth noting that strong grouping effects are observed in the epicentral distribution of these deep earthquakes. Also, although the spatial distribution of deep earthquakes does not change, their occurrence frequencies vary from time to time, active in one period, relatively quiescent in another. The depth distribution of deep earthquakes also hardly changes except for events with focal depth less than 100 km. On the basis of spatial concentration we partition deep earthquakes into several groups—the Taupo-Bay of Plenty group, the Taranaki group, and the Cook Strait group. Second-order moment analysis via the two-point correlation function reveals only very small-scale clustering of deep earthquakes, presumably limited to some hot spots only. We also suggest that some models usually used for shallow earthquakes fit deep earthquakes unsatisfactorily. Instead, we propose a switching Poisson model for the occurrence patterns of deep earthquakes. The goodness-of-fit test suggests that the time-varying activity is well characterized by a switching Poisson model. Furthermore, detailed analysis carried out on each deep group by use of switching Poisson models reveals similar time-varying behavior in occurrence frequencies in each group.
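The cusum statistics used here for catalogue completeness checks are, in their simplest form, running sums of deviations of event counts from the overall mean: a change in reporting or occurrence rate shows up as a kink, with the change point at the peak of |CuSum|. A hedged sketch (not the authors' exact procedure; data are illustrative):

```python
def cusum_change_point(counts):
    """Return the index at which |CuSum| of (count - overall mean)
    peaks, i.e. the last point of the first apparent rate regime."""
    mean = sum(counts) / len(counts)
    cusum, best_i, best_v = 0.0, 0, 0.0
    for i, c in enumerate(counts):
        cusum += c - mean
        if abs(cusum) > best_v:
            best_i, best_v = i, abs(cusum)
    return best_i

# Yearly event counts doubling mid-series: change after index 3
print(cusum_change_point([2, 2, 2, 2, 8, 8, 8, 8]))  # -> 3
```

Applied to annual deep-event counts, such a kink would suggest the date before which the catalogue is incomplete or the occurrence rate differed.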
Kubat, Eric; Hansen, Nathan; Nguyen, Huy; Wren, Sherry M; Eisenberg, Dan
2016-03-01
The use of robotic single-site cholecystectomy has increased exponentially. There are few reports describing the safety, efficacy, and operative learning curve of robotic single-site cholecystectomy either in the community setting or with nonelective surgery. We performed a retrospective review of a prospective database of our initial experience with robotic single-site cholecystectomy. Demographics and perioperative outcomes were evaluated for both urgent and elective cholecystectomy. Cumulative sum analysis was performed to determine the surgeon's learning curve. One hundred fifty patients underwent robotic single-site cholecystectomy. Seventy-four (49.3%) patients underwent urgent robotic single-site cholecystectomy, and 76 (50.7%) underwent elective robotic single-site cholecystectomy. Mean total operative time for robotic single-site cholecystectomy was 83.3 ± 2.7 minutes. Mean operative time for the urgent cohort was significantly longer than for the elective cohort (95.0 ± 4.4 versus 71.9 ± 2.6 minutes; P < .001). There was one conversion in the urgent cohort and none in the elective cohort. There was one bile duct injury (0.7%) in the urgent cohort. Perioperative complications occurred in 8.7% of patients, and most consisted of superficial surgical-site infections. There were no incisional hernias detected. The surgeon's learning curve, inclusive of urgent and elective cases, was 48 operations. Robotic single-site cholecystectomy can be performed safely and effectively in both elective and urgent cholecystectomy with a reasonable learning curve and acceptable perioperative outcomes.
Learning curve for intracranial angioplasty and stenting in single center.
Cai, Qiankun; Li, Yongkun; Xu, Gelin; Sun, Wen; Xiong, Yunyun; Sun, Wenshan; Bao, Yuanfei; Huang, Xianjun; Zhang, Yao; Zhou, Lulu; Zhu, Wusheng; Liu, Xinfeng
2014-01-01
To identify the specific caseload needed to overcome the learning curve effect, based on data from consecutive patients treated with Intracranial Angioplasty and Stenting (IAS) in our center. The Stenting and Aggressive Medical Management for Preventing Recurrent Stroke and Intracranial Stenosis trial was prematurely terminated owing to the high rate of periprocedural complications in the endovascular arm. To date, there are no data available for determining the essential caseload sufficient to overcome the learning effect and perform IAS with an acceptable level of complications. Between March 2004 and May 2012, 188 consecutive patients with 194 lesions who underwent IAS were analyzed retrospectively. The outcome variables used to assess the learning curve were periprocedural complications (including transient ischemic attack, ischemic stroke, vessel rupture, cerebral hyperperfusion syndrome, and vessel perforation). Multivariable logistic regression analysis was employed to illustrate the existence of a learning curve effect in IAS. A risk-adjusted cumulative sum chart was constructed to identify the specific caseload needed to overcome the learning curve effect. The overall rate of 30-day periprocedural complications was 12.4% (24/194). After adjusting for case-mix, multivariate logistic regression analysis showed that operator experience was an independent predictor for periprocedural complications. The learning curve of IAS to overcome complications in a risk-adjusted manner was 21 cases. Operator's level of experience significantly affected the outcome of IAS. Moreover, we observed that the amount of experience sufficient for performing IAS in our center was 21 cases. Copyright © 2013 Wiley Periodicals, Inc.
Lv, Houning; Zhao, Ningning; Zheng, Zhongqing; Wang, Tao; Yang, Fang; Jiang, Xihui; Lin, Lin; Sun, Chao; Wang, Bangmao
2017-05-01
Peroral endoscopic myotomy (POEM) has emerged as an advanced technique for the treatment of achalasia, and defining the learning curve is mandatory. From August 2011 to June 2014, two operators in our institution (A&B) carried out POEM on 35 and 33 consecutive patients, respectively. Moving average and cumulative sum (CUSUM) methods were used to analyze the POEM learning curve for corrected operative time (cOT), referring to duration of per centimeter myotomy. Additionally, perioperative outcomes were compared among distinct learning curve phases. Using the moving average method, cOT reached a plateau at the 29th case and at the 24th case for operators A and B, respectively. CUSUM analysis identified three phases: initial learning period (Phase 1), efficiency period (Phase 2) and mastery period (Phase 3). The relatively smooth state in the CUSUM graph occurred at the 26th case and at the 24th case for operators A and B, respectively. Mean cOT of distinct phases for operator A were 8.32, 5.20 and 3.97 min, whereas they were 5.99, 3.06 and 3.75 min for operator B, respectively. Eckardt score and lower esophageal sphincter pressure significantly decreased during the 1-year follow-up period. Data were comparable regarding patient characteristics and perioperative outcomes. This single-center study demonstrated that expert endoscopists with experience in esophageal endoscopic submucosal dissection reached a plateau in learning of POEM after approximately 25 cases. © 2016 Japan Gastroenterological Endoscopy Society.
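The CUSUM learning-curve method used in these surgical studies accumulates, case by case, the difference between each observed operative time and a target time: the curve rises while the operator is slower than target and flattens or falls once proficiency is reached, so its peak marks the end of the learning phase. A minimal sketch (the target value and times are illustrative, not data from the study):

```python
def cusum_learning_curve(times, target):
    """Return the CUSUM series of (time - target) and the index of its
    peak, a common estimate of when the learning phase ends."""
    series, total = [], 0.0
    for t in times:
        total += t - target
        series.append(total)
    peak = max(range(len(series)), key=lambda i: series[i])
    return series, peak

# Per-centimeter myotomy times (min) improving with experience; target 6 min
times = [10, 9, 9, 8, 7, 5, 5, 5, 5, 5]
series, peak = cusum_learning_curve(times, target=6.0)
print(peak + 1)  # proficiency reached around case 5 in this toy data
```

Plotting the series gives the familiar CUSUM graph whose "relatively smooth state" the authors report at roughly the 25th case.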
Gerges, Peter R A; Moore, Lynne; Léger, Caroline; Lauzier, François; Shemilt, Michèle; Zarychanski, Ryan; Scales, Damon C; Burns, Karen E A; Bernard, Francis; Zygun, David; Neveu, Xavier; Turgeon, Alexis F
2018-06-14
The intensity of care provided to critically ill patients has been shown to be associated with mortality. In patients with traumatic brain injury (TBI), specialized neurocritical care is often required, but whether it affects clinically significant outcomes is unknown. We aimed to determine the association of the intensity of care on mortality and the incidence of withdrawal of life-sustaining therapies in critically ill patients with severe TBI. We conducted a post hoc analysis of a multicentre retrospective cohort study of critically ill adult patients with severe TBI. We defined the intensity of care as a daily cumulative sum of interventions during the intensive care unit stay. Our outcome measures were all-cause hospital mortality and the incidence of withdrawal of life-sustaining therapies. Seven hundred sixteen severe TBI patients were included in our study. Most were male (77%) with a mean (standard deviation) age of 42 (20.5) yr and a median [interquartile range] Glasgow Coma Scale score of 3 [3-6]. Our results showed an association between the intensity of care and mortality (hazard ratio [HR], 0.69; 95% confidence interval [CI], 0.63 to 0.74) and the incidence of withdrawal of life-sustaining therapy (HR, 0.73; 95% CI, 0.67 to 0.79). In general, more intense care was associated with fewer deaths and a lower incidence of withdrawal of life-sustaining therapies in critically ill patients with severe TBI.
Assessing cumulative impacts to elk and mule deer in the Salmon River Basin, Idaho
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Neil, T.A.; Witmer, G.W.
1988-01-01
In this paper, we illustrate the method, using the potential for cumulative impacts to elk and mule deer from multiple hydroelectric development in the Salmon River Basin of Idaho. We attempted to incorporate knowledge of elk and mule deer habitat needs into a paradigm to assess cumulative impacts and aid in the regulatory decision making process. Undoubtedly, other methods could be developed based on different needs or constraints, but we offer this technique as a means to further refine cumulative impact assessment. Our approach is divided into three phases: analysis, evaluation, and documentation. 36 refs., 2 figs., 3 tabs.
Hayıroğlu, Mert İlker; Keskin, Muhammed; Uzun, Ahmet Okan; Türkkan, Ceyhan; Tekkeşin, Ahmet İlker; Kozan, Ömer
Electrical phenomena and remote myocardial ischemia are the main factors of ST segment depression in inferior leads in acute anterior myocardial infarction (AAMI). We investigated the prognostic value of the sum of ST segment depression amplitudes in inferior leads in patients with first AAMI treated with primary percutaneous coronary intervention (PPCI). In this prospective analysis, we evaluated the in-hospital prognostic impact of the sum of ST segment depression in inferior leads on 206 patients with first AAMI. Patients were stratified by tertiles of the sum of admission ST segment depression in inferior leads. Clinical outcomes were compared between those tertiles. Univariate analysis revealed a higher rate of in-hospital death for patients with ST segment depression in inferior leads in tertile 3, as compared to patients in tertile 1 (OR 9.8, 95% CI 1.5-78.2, p<0.001). After adjustment for baseline variables, ST segment depression in inferior leads in tertile 3 was associated with a 5.7-fold hazard of in-hospital death (OR: 5.7, 95% CI 1.2-35.1, p<0.001). Spearman rank correlation test revealed correlation between the sum of ST segment depression amplitude in inferior leads and the sum of ST segment elevation amplitude in V1-6, L1 and aVL. Multivessel disease and additional RCA stenosis were also detected more often in tertile 3. The sum of ST segment depression amplitude in inferior leads of the admission ECG in patients with first AAMI treated with PPCI provides an independent prognostic marker of in-hospital outcomes. Our data suggest the sum of ST segment depression amplitude to be a simple, feasible and clinically applicable tool for rapid risk stratification in patients with first AAMI. Copyright © 2017 Elsevier Inc. All rights reserved.
Hasegawa, Kunihiro; Takata, Ryo; Nishikawa, Hiroki; Enomoto, Hirayuki; Ishii, Akio; Iwata, Yoshinori; Miyamoto, Yuho; Ishii, Noriko; Yuri, Yukihisa; Nakano, Chikage; Nishimura, Takashi; Yoh, Kazunori; Aizawa, Nobuhiro; Sakai, Yoshiyuki; Ikeda, Naoto; Takashima, Tomoyuki; Iijima, Hiroko; Nishiguchi, Shuhei
2016-09-12
We aimed to examine the effect of Wisteria floribunda agglutinin-positive Mac-2-binding protein (WFA⁺-M2BP) level on survival comparing with other laboratory liver fibrosis markers in hepatitis C virus (HCV)-related compensated liver cirrhosis (LC) (n = 165). For assessing prognostic performance of continuous fibrosis markers, we adapted time-dependent receiver operating characteristics (ROC) curves for clinical outcome. In time-dependent ROC analysis, annual area under the ROCs (AUROCs) were plotted. We also calculated the total sum of AUROCs in all time-points (TAAT score) in each fibrosis marker. WFA⁺-M2BP value ranged from 0.66 cutoff index (COI) to 19.95 COI (median value, 5.29 COI). Using ROC analysis for survival, the optimal cutoff point for WFA⁺-M2BP was 6.15 COI (AUROC = 0.79348, sensitivity = 80.0%, specificity = 74.78%). The cumulative five-year survival rate in patients with WFA⁺-M2BP ≥ 6.15 COI (n = 69) was 43.99%, while that in patients with WFA⁺-M2BP < 6.15 COI (n = 96) was 88.40% (p < 0.0001). In the multivariate analysis, absence of hepatocellular carcinoma (p = 0.0008), WFA⁺-M2BP < 6.15 COI (p = 0.0132), achievement of sustained virological response (p < 0.0001) and des-γ-carboxy prothrombin < 41 mAU/mL (p = 0.0018) were significant favorable predictors linked to survival. In time-dependent ROC analysis in all cases, WFA⁺-M2BP had the highest TAAT score among liver fibrosis markers. In conclusion, WFA⁺-M2BP can be a useful predictor in HCV-related compensated LC.
Step-stress analysis for predicting dental ceramic reliability
Borba, Márcia; Cesar, Paulo F.; Griggs, Jason A.; Bona, Álvaro Della
2013-01-01
Objective To test the hypothesis that step-stress analysis is effective to predict the reliability of an alumina-based dental ceramic (VITA In-Ceram AL blocks) subjected to a mechanical aging test. Methods Bar-shaped ceramic specimens were fabricated, polished to 1µm finish and divided into 3 groups (n=10): (1) step-stress accelerating test; (2) flexural strength-control; (3) flexural strength-mechanical aging. Specimens from group 1 were tested in an electromagnetic actuator (MTS Evolution) using a three-point flexure fixture (frequency: 2Hz; R=0.1) in 37°C water bath. Each specimen was subjected to an individual stress profile, and the number of cycles to failure was recorded. A cumulative damage model with an inverse power law lifetime-stress relation and Weibull lifetime distribution were used to fit the fatigue data. The data were used to predict the stress level and number of cycles for mechanical aging (group 3). Groups 2 and 3 were tested for three-point flexural strength (σ) in a universal testing machine with a 1.0 MPa/s stress rate in 37°C water. Data were statistically analyzed using Mann-Whitney Rank Sum test. Results Step-stress data analysis showed that the profile most likely to weaken the specimens without causing fracture during aging (95% CI: 0–14% failures) was: 80 MPa stress amplitude and 10^5 cycles. The median σ values (MPa) for groups 2 (493±54) and 3 (423±103) were statistically different (p=0.009). Significance The aging profile determined by step-stress analysis was effective to reduce alumina ceramic strength as predicted by the reliability estimate, confirming the study hypothesis. PMID:23827018
Step-stress analysis for predicting dental ceramic reliability.
Borba, Márcia; Cesar, Paulo F; Griggs, Jason A; Della Bona, Alvaro
2013-08-01
To test the hypothesis that step-stress analysis is effective to predict the reliability of an alumina-based dental ceramic (VITA In-Ceram AL blocks) subjected to a mechanical aging test. Bar-shaped ceramic specimens were fabricated, polished to 1μm finish and divided into 3 groups (n=10): (1) step-stress accelerating test; (2) flexural strength-control; (3) flexural strength-mechanical aging. Specimens from group 1 were tested in an electromagnetic actuator (MTS Evolution) using a three-point flexure fixture (frequency: 2Hz; R=0.1) in 37°C water bath. Each specimen was subjected to an individual stress profile, and the number of cycles to failure was recorded. A cumulative damage model with an inverse power law lifetime-stress relation and Weibull lifetime distribution were used to fit the fatigue data. The data were used to predict the stress level and number of cycles for mechanical aging (group 3). Groups 2 and 3 were tested for three-point flexural strength (σ) in a universal testing machine with a 1.0 MPa/s stress rate in 37°C water. Data were statistically analyzed using Mann-Whitney Rank Sum test. Step-stress data analysis showed that the profile most likely to weaken the specimens without causing fracture during aging (95% CI: 0-14% failures) was: 80 MPa stress amplitude and 10^5 cycles. The median σ values (MPa) for groups 2 (493±54) and 3 (423±103) were statistically different (p=0.009). The aging profile determined by step-stress analysis was effective to reduce alumina ceramic strength as predicted by the reliability estimate, confirming the study hypothesis. Copyright © 2013 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
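The cumulative damage model pairs an inverse power law lifetime-stress relation, N(S) = A·S^(-m), with a linear damage sum that is presumed to reach unity at failure. A sketch of that bookkeeping (the constants A and m below are illustrative placeholders, not fitted values from the paper):

```python
def cycles_to_failure(stress, A, m):
    """Inverse power law lifetime-stress relation N(S) = A * S**(-m)."""
    return A * stress ** (-m)

def miner_sum(blocks, A, m):
    """Linear damage accumulation: sum n_i / N(S_i) over loading blocks
    (stress, cycles); failure is presumed when the sum reaches unity."""
    return sum(n / cycles_to_failure(S, A, m) for S, n in blocks)

# Illustrative constants: N(100 MPa) = 1e12 * 100**-3 = 1e6 cycles.
# Two blocks of 6e5 cycles at 100 MPa accumulate damage 0.6 + 0.6:
d = miner_sum([(100.0, 6e5), (100.0, 6e5)], A=1e12, m=3)
print(d)  # -> 1.2, i.e. failure predicted partway through the second block
```

Fitting A, m, and a Weibull scatter distribution to step-stress data, as the authors do, then lets one choose an aging profile with a prescribed (low) failure probability.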
Mating system and ploidy influence levels of inbreeding depression in Clarkia (Onagraceae).
Barringer, Brian C; Geber, Monica A
2008-05-01
Inbreeding depression is the reduction in offspring fitness associated with inbreeding and is thought to be one of the primary forces selecting against the evolution of self-fertilization. Studies suggest that most inbreeding depression is caused by the expression of recessive deleterious alleles in homozygotes whose frequency increases as a result of self-fertilization or mating among relatives. This process leads to the selective elimination of deleterious alleles such that highly selfing species may show remarkably little inbreeding depression. Genome duplication (polyploidy) has also been hypothesized to influence levels of inbreeding depression, with polyploids expected to exhibit less inbreeding depression than diploids. We studied levels of inbreeding depression in allotetraploid and diploid species of Clarkia (Onagraceae) that vary in mating system (each cytotype was represented by an outcrossing and a selfing species). The outcrossing species exhibited more inbreeding depression than the selfing species for most fitness components and for two different measures of cumulative fitness. In contrast, though inbreeding depression was generally lower for the polyploid species than for the diploid species, the difference was statistically significant only for flower number and one of the two measures of cumulative fitness. Further, we detected no significant interaction between mating system and ploidy in determining inbreeding depression. In sum, our results suggest that a taxon's current mating system is more important than ploidy in influencing levels of inbreeding depression in natural populations of these annual plants.
Eckel, Sandrah P.; Louis, Thomas A.; Chaves, Paulo H. M.; Fried, Linda P.; Margolis, Helene G.
2012-01-01
The susceptibility of older adults to the health effects of air pollution is well-recognized. Advanced age may act as a partial surrogate for conditions associated with aging. The authors investigated whether gerontologic frailty (a clinical health status metric) modified the association between ambient level of ozone or particulate matter with an aerodynamic diameter less than 10 µm and lung function in 3,382 older adults using 7 years of follow-up data (1990–1997) from the Cardiovascular Health Study and its Environmental Factors Ancillary Study. Monthly average pollution and annual frailty assessments were related to up to 3 repeated measurements of lung function using cumulative summaries of pollution and frailty histories that accounted for duration as well as concentration. Frailty history was found to modify long-term associations of pollutants with forced vital capacity. For example, the decrease in forced vital capacity associated with a 70-ppb/month greater cumulative sum of monthly average ozone exposure was 12.3 mL (95% confidence interval: 10.4, 14.2) for a woman who had spent the prior 7 years prefrail or frail as compared with 4.7 mL (95% confidence interval: 3.8, 5.6) for a similar woman who was robust during all 7 years (interaction P < 0.001). PMID:22811494
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghadjar, Pirus; Zelefsky, Michael J.; Spratt, Daniel E.
2014-02-01
Purpose: To determine the potential association between genitourinary (GU) toxicity and planning dose–volume parameters for GU pelvic structures after high-dose intensity modulated radiation therapy in localized prostate cancer patients. Methods and Materials: A total of 268 patients who underwent intensity modulated radiation therapy to a prescribed dose of 86.4 Gy in 48 fractions during June 2004-December 2008 were evaluated with the International Prostate Symptom Score (IPSS) questionnaire. Dose–volume histograms of the whole bladder, bladder wall, urethra, and bladder trigone were analyzed. The primary endpoint for GU toxicity was an IPSS sum increase ≥10 points over baseline. Univariate and multivariate analyses were done by the Kaplan-Meier method and Cox proportional hazard models, respectively. Results: Median follow-up was 5 years (range, 3-7.7 years). Thirty-nine patients experienced an IPSS sum increase ≥10 during follow-up; 84% remained event free at 5 years. After univariate analysis, lower baseline IPSS sum (P=.006), the V90 of the trigone (P=.006), and the maximal dose to the trigone (P=.003) were significantly associated with an IPSS sum increase ≥10. After multivariate analysis, lower baseline IPSS sum (P=.009) and increased maximal dose to the trigone (P=.005) remained significantly associated. Seventy-two patients had both a lower baseline IPSS sum and a higher maximal dose to the trigone and were defined as high risk, and 68 patients had both a higher baseline IPSS sum and a lower maximal dose to the trigone and were defined as low risk for development of an IPSS sum increase ≥10. Twenty-one of 72 high-risk patients (29%) and 5 of 68 low-risk patients (7%) experienced an IPSS sum increase ≥10 (P=.001; odds ratio 5.19). Conclusions: The application of hot spots to the bladder trigone was significantly associated with relevant changes in IPSS during follow-up. Reduction of radiation dose to the lower bladder and specifically the bladder trigone seems to be associated with a reduction in late GU toxicity.
Zaremba, Dario; Enneking, Verena; Meinert, Susanne; Förster, Katharina; Bürger, Christian; Dohm, Katharina; Grotegerd, Dominik; Redlich, Ronny; Dietsche, Bruno; Krug, Axel; Kircher, Tilo; Kugel, Harald; Heindel, Walter; Baune, Bernhard T; Arolt, Volker; Dannlowski, Udo
2018-02-08
Patients with major depression show reduced hippocampal volume compared to healthy controls. However, the contribution of patients' cumulative illness severity to hippocampal volume has rarely been investigated. It was the aim of our study to find a composite score of cumulative illness severity that is associated with hippocampal volume in depression. We estimated hippocampal gray matter volume using 3-tesla brain magnetic resonance imaging in 213 inpatients with acute major depression according to DSM-IV criteria (employing the SCID interview) and 213 healthy controls. Patients' cumulative illness severity was ascertained by six clinical variables via structured clinical interviews. A principal component analysis was conducted to identify components reflecting cumulative illness severity. Regression analyses and a voxel-based morphometry approach were used to investigate the influence of patients' individual component scores on hippocampal volume. Principal component analysis yielded two main components of cumulative illness severity: Hospitalization and Duration of Illness. While the component Hospitalization incorporated information from the intensity of inpatient treatment, the component Duration of Illness was based on the duration and frequency of illness episodes. We could demonstrate a significant inverse association of patients' Hospitalization component scores with bilateral hippocampal gray matter volume. This relationship was not found for Duration of Illness component scores. Variables associated with patients' history of psychiatric hospitalization seem to be accurate predictors of hippocampal volume in major depression and reliable estimators of patients' cumulative illness severity. Future studies should pay attention to these measures when investigating hippocampal volume changes in major depression.
Li, Chuanfu; Yang, Jun; Park, Kyungmo; Wu, Hongli; Hu, Sheng; Zhang, Wei; Bu, Junjie; Xu, Chunsheng; Qiu, Bensheng; Zhang, Xiaochu
2014-01-01
Most previous studies of brain responses to acupuncture were designed to investigate the instant effect of acupuncture, while the cumulative effect, which should be more important in clinical practice, has seldom been discussed. In this study, the neural basis of the acupuncture cumulative effect was analyzed. Forty healthy volunteers were recruited, and more than 40 minutes of repeated acupuncture stimulation was applied at acupoint Zhusanli (ST36). Three runs of acupuncture fMRI datasets were acquired, with each run consisting of two blocks of acupuncture stimulation. Besides general linear model (GLM) analysis, the cumulative effects of acupuncture were analyzed with analysis of covariance (ANCOVA) to find the association between the brain response and the cumulative duration of acupuncture stimulation in each stimulation block. The experimental results showed that the brain response in the initial stage was the strongest, although the brain response to acupuncture was time-variant. In particular, the brain areas that were activated in the first block and the brain areas that demonstrated cumulative effects in the course of repeated acupuncture stimulation overlapped in the pain-related areas, including the bilateral middle cingulate cortex, the bilateral paracentral lobule, the SII, and the right thalamus. Furthermore, the cumulative effects demonstrated bimodal characteristics, i.e. the brain response was positive at the beginning, and became negative at the end. It was suggested that the cumulative effect of repeated acupuncture stimulation was consistent with the characteristic of habituation effects. This finding may explain the neurophysiologic mechanism underlying acupuncture analgesia. PMID:24821143
Time course of ozone-induced changes in breathing pattern in healthy exercising humans.
Schelegle, Edward S; Walby, William F; Adams, William C
2007-02-01
We examined the time course of O3-induced changes in breathing pattern in 97 healthy human subjects (70 men and 27 women). One- to five-minute averages of breathing frequency (f(B)) and minute ventilation (Ve) were used to generate plots of cumulative breaths and cumulative exposure volume vs. time and cumulative exposure volume vs. cumulative breaths. Analysis revealed a three-phase response: delay, in which no response was detected; onset, in which f(B) began to increase; and response, in which f(B) stabilized. Regression analysis was used to identify four parameters: time to onset, number of breaths at onset, cumulative inhaled dose of ozone at onset of O3-induced tachypnea, and the percent change in f(B). The effects of altering O3 concentration, Ve, atropine treatment, and indomethacin treatment were examined. We found that the lower the O3 concentration, the greater the number of breaths at onset of tachypnea at a fixed ventilation, whereas the number of breaths at onset of tachypnea remains unchanged when Ve is altered and O3 concentration is fixed. The cumulative inhaled dose of O3 at onset of tachypnea remained constant and showed no relationship with the magnitude of percent change in f(B). Atropine did not affect any of the derived parameters, whereas indomethacin did not affect time to onset, number of breaths at onset, or cumulative inhaled dose of O3 at onset of tachypnea but did attenuate percent change in f(B). The results are discussed in the context of dose response and intrinsic mechanisms of action.
Cumulative environmental impacts and integrated coastal management: the case of Xiamen, China.
Xue, Xiongzhi; Hong, Huasheng; Charles, Anthony T
2004-07-01
This paper examines the assessment of cumulative environmental impacts and the implementation of integrated coastal management within the harbour of Xiamen, China, an urban region in which the coastal zone is under increasing pressure as a result of very rapid economic growth. The first stage of analysis incorporates components of a cumulative effects assessment, including (a) identification of sources of environmental impacts, notably industrial expansion, port development, shipping, waste disposal, aquaculture and coastal construction, (b) selection of a set of valued ecosystem components, focusing on circulation and siltation, water quality, sediment, the benthic community, and mangrove forests, and (c) use of a set of key indicators to examine cumulative impacts arising from the aggregate of human activities. In the second stage of analysis, the paper describes and assesses the development of an institutional framework for integrated coastal management in Xiamen, one that combines policy and planning (including legislative and enforcement mechanisms) with scientific and monitoring mechanisms (including an innovative 'marine functional zoning' system). The paper concludes that the integrated coastal management framework in Xiamen has met all relevant requirements for 'integration' as laid out in the literature, and has explicitly incorporated consideration of cumulative impacts within its management and monitoring processes.
Watershed Planning within a Quantitative Scenario Analysis Framework.
Merriam, Eric R; Petty, J Todd; Strager, Michael P
2016-07-24
There is a critical need for tools and methodologies capable of managing aquatic systems within heavily impacted watersheds. Current efforts often fall short as a result of an inability to quantify and predict complex cumulative effects of current and future land use scenarios at relevant spatial scales. The goal of this manuscript is to provide methods for conducting a targeted watershed assessment that enables resource managers to produce landscape-based cumulative effects models for use within a scenario analysis management framework. Sites are first selected for inclusion within the watershed assessment by identifying sites that fall along independent gradients and combinations of known stressors. Field and laboratory techniques are then used to obtain data on the physical, chemical, and biological effects of multiple land use activities. Multiple linear regression analysis is then used to produce landscape-based cumulative effects models for predicting aquatic conditions. Lastly, methods for incorporating cumulative effects models within a scenario analysis framework for guiding management and regulatory decisions (e.g., permitting and mitigation) within actively developing watersheds are discussed and demonstrated for 2 sub-watersheds within the mountaintop mining region of central Appalachia. The watershed assessment and management approach provided herein enables resource managers to facilitate economic and development activity while protecting aquatic resources and producing opportunity for net ecological benefits through targeted remediation.
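The landscape-based cumulative effects models described here are fit by multiple linear regression of an aquatic-condition metric on land-use stressor gradients, then reused to predict conditions under future scenarios. A minimal numpy sketch (variable names, stressors, and data values are illustrative, not from the assessment):

```python
import numpy as np

# Rows: sampled sites; columns: land-use stressor gradients
# (e.g. fraction mined area, fraction developed area) -- illustrative.
stressors = np.array([[0.0, 0.0],
                      [1.0, 1.0],
                      [2.0, 0.0],
                      [3.0, 1.0]])
condition = np.array([3.0, 4.0, 7.0, 8.0])  # biological condition score

# Design matrix with an intercept column; ordinary least squares fit
X = np.column_stack([np.ones(len(condition)), stressors])
coef, *_ = np.linalg.lstsq(X, condition, rcond=None)
print(coef)  # intercept and per-stressor slopes

# Predict condition under a hypothetical future land-use scenario
scenario = np.array([1.0, 2.0, 1.0])  # intercept term, stressor levels
print(scenario @ coef)
```

Within a scenario analysis framework, the same fitted coefficients are applied to alternative stressor combinations to compare permitting or mitigation options before development occurs.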
Spectral sum rules for confining large-N theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cherman, Aleksey; McGady, David A.; Yamazaki, Masahito
2016-06-17
We consider asymptotically-free four-dimensional large-$N$ gauge theories with massive fermionic and bosonic adjoint matter fields, compactified on squashed three-spheres, and examine their regularized large-$N$ confined-phase spectral sums. The analysis is done in the limit of vanishing 't Hooft coupling, which is justified by taking the size of the compactification manifold to be small compared to the inverse strong scale $\Lambda^{-1}$. Our results motivate us to conjecture some universal spectral sum rules for these large-$N$ gauge theories.
Cumulative (Dis)Advantage and the Matthew Effect in Life-Course Analysis
Bask, Miia; Bask, Mikael
2015-01-01
To foster a deeper understanding of the mechanisms behind inequality in society, it is crucial to work with well-defined concepts associated with such mechanisms. The aim of this paper is to define cumulative (dis)advantage and the Matthew effect. We argue that cumulative (dis)advantage is an intra-individual micro-level phenomenon, that the Matthew effect is an inter-individual macro-level phenomenon and that an appropriate measure of the Matthew effect focuses on the mechanism or dynamic process that generates inequality. The Matthew mechanism is, therefore, a better name for the phenomenon, where we provide a novel measure of the mechanism, including a proof-of-principle analysis using disposable personal income data. Finally, because socio-economic theory should be able to explain cumulative (dis)advantage and the Matthew mechanism when they are detected in data, we discuss the types of models that may explain the phenomena. We argue that interactions-based models in the literature traditions of analytical sociology and statistical mechanics serve this purpose. PMID:26606386
Wang, Hongxin; Friedrich, Stephan; Li, Lei; Mao, Ziliang; Ge, Pinghua; Balasubramanian, Mahalingam; Patil, Daulat S
2018-03-28
According to L-edge sum rules, the number of 3d vacancies at a transition metal site is directly proportional to the integrated intensity of the L-edge X-ray absorption spectrum (XAS) for the corresponding metal complex. In this study, the numbers of 3d holes are characterized quantitatively or semi-quantitatively for a series of manganese (Mn) and nickel (Ni) complexes, covering electron configurations from 3d⁰ to 3d¹⁰. In addition, extremely dilute (<0.1% wt/wt) Ni enzymes were examined by two different approaches: (1) by using a high resolution superconducting tunnel junction X-ray detector to obtain XAS spectra with a very high signal-to-noise ratio, especially in the non-variant edge jump region; and (2) by adding an inert tracer to the sample that provides a prominent spectral feature to replace the weak edge jump for intensity normalization. In this publication, we present for the first time: (1) L-edge sum rule analysis for a series of Mn and Ni complexes that include electron configurations from an open shell 3d⁰ to a closed shell 3d¹⁰; (2) a systematic analysis of the uncertainties, especially that from the edge jump, which was missing in all previous reports; (3) a clearly-resolved edge jump between the pre-L3 and post-L2 regions from an extremely dilute sample; (4) an evaluation of an alternative normalization standard for L-edge sum rule analysis. XAS from two copper (Cu) proteins measured using a conventional semiconductor X-ray detector are also repeated as bridges between Ni complexes and dilute Ni enzymes. The differences between measuring 1% Cu enzymes and measuring <0.1% Ni enzymes are compared and discussed. This study extends L-edge sum rule analysis to virtually any 3d metal complex and any dilute biological sample that contains 3d metals.
An analysis of spectral envelope-reduction via quadratic assignment problems
NASA Technical Reports Server (NTRS)
George, Alan; Pothen, Alex
1994-01-01
A new spectral algorithm for reordering a sparse symmetric matrix to reduce its envelope size is described. The ordering is computed by associating a Laplacian matrix with the given matrix and then sorting the components of a specified eigenvector of the Laplacian. In this paper, we provide an analysis of the spectral envelope reduction algorithm. We describe the related 1- and 2-sum problems; the former is related to the envelope size, while the latter is related to an upper bound on the work involved in an envelope Cholesky factorization scheme. We formulate these two problems as quadratic assignment problems and then study the 2-sum problem in more detail. We obtain lower bounds on the 2-sum by considering a projected quadratic assignment problem, and then show that finding a permutation matrix closest to an orthogonal matrix attaining one of the lower bounds justifies the spectral envelope reduction algorithm. The lower bound on the 2-sum is seen to be tight for reasonably 'uniform' finite element meshes. We also obtain asymptotically tight lower bounds for the envelope size for certain classes of meshes.
Kaplan-Meier Survival Analysis Overestimates the Risk of Revision Arthroplasty: A Meta-analysis.
Lacny, Sarah; Wilson, Todd; Clement, Fiona; Roberts, Derek J; Faris, Peter D; Ghali, William A; Marshall, Deborah A
2015-11-01
Although Kaplan-Meier survival analysis is commonly used to estimate the cumulative incidence of revision after joint arthroplasty, it theoretically overestimates the risk of revision in the presence of competing risks (such as death). Because the magnitude of overestimation is not well documented, the potential associated impact on clinical and policy decision-making remains unknown. We performed a meta-analysis to answer the following questions: (1) To what extent does the Kaplan-Meier method overestimate the cumulative incidence of revision after joint replacement compared with alternative competing-risks methods? (2) Is the extent of overestimation influenced by followup time or rate of competing risks? We searched Ovid MEDLINE, EMBASE, BIOSIS Previews, and Web of Science (1946, 1980, 1980, and 1899, respectively, to October 26, 2013) and included article bibliographies for studies comparing estimated cumulative incidence of revision after hip or knee arthroplasty obtained using both Kaplan-Meier and competing-risks methods. We excluded conference abstracts, unpublished studies, or studies using simulated data sets. Two reviewers independently extracted data and evaluated the quality of reporting of the included studies. Among 1160 abstracts identified, six studies were included in our meta-analysis. The principal reason for the steep attrition (1160 to six) was that the initial search was for studies in any clinical area that compared the cumulative incidence estimated using the Kaplan-Meier versus competing-risks methods for any event (not just the cumulative incidence of hip or knee revision); we did this to minimize the likelihood of missing any relevant studies. We calculated risk ratios (RRs) comparing the cumulative incidence estimated using the Kaplan-Meier method with the competing-risks method for each study and used DerSimonian and Laird random effects models to pool these RRs. 
Heterogeneity was explored using stratified meta-analyses and metaregression. The pooled cumulative incidence of revision after hip or knee arthroplasty obtained using the Kaplan-Meier method was 1.55 times higher (95% confidence interval, 1.43-1.68; p < 0.001) than that obtained using the competing-risks method. Longer followup times and higher proportions of competing risks were not associated with increases in the amount of overestimation of revision risk by the Kaplan-Meier method (all p > 0.10). This may be due to the small number of studies that met the inclusion criteria and conservative variance approximation. The Kaplan-Meier method overestimates risk of revision after hip or knee arthroplasty in populations where competing risks (such as death) might preclude the occurrence of the event of interest (revision). Competing-risks methods should be used to more accurately estimate the cumulative incidence of revision when the goal is to plan healthcare services and resource allocation for revisions.
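As a minimal sketch of why the two estimators diverge, the following compares 1 − KM (treating deaths as censored) with the nonparametric cumulative incidence function on a small hypothetical data set; the event times and codes are illustrative, not taken from the meta-analysis, and ties are assumed absent:

```python
def km_vs_cif(times, events):
    """Compare 1 - Kaplan-Meier (competing events censored) with the
    nonparametric cumulative incidence treating them as competing risks.

    events: 1 = revision, 2 = death (competing risk), 0 = censored.
    Assumes distinct event times (no ties) for simplicity.
    """
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv_all, km_surv, cif = 1.0, 1.0, 0.0
    for t, e in data:
        if e == 1:
            cif += surv_all * (1 / at_risk)   # uses overall survival just before t
            km_surv *= 1 - 1 / at_risk        # KM treats deaths as censoring
        if e in (1, 2):
            surv_all *= 1 - 1 / at_risk       # overall event-free survival
        at_risk -= 1
    return 1 - km_surv, cif

# Hypothetical cohort: revisions, deaths, and one censored subject.
times = [1, 2, 3, 4, 5, 6]
events = [1, 2, 1, 2, 1, 0]
km_est, cif_est = km_vs_cif(times, events)
print(km_est, cif_est)   # 1-KM exceeds the cumulative incidence
```

With deaths present, 1 − KM (0.6875 here) always sits above the competing-risks estimate (0.5 here), which is the overestimation the meta-analysis quantifies.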
Humans experience chronic cumulative trace-level exposure to mixtures of volatile, semi-volatile, and non-volatile polycyclic aromatic hydrocarbons (PAHs) present in the environment as by-products of combustion processes. Certain PAHs are known or suspected human carcinogens and ...
High throughput reconfigurable data analysis system
NASA Technical Reports Server (NTRS)
Bearman, Greg (Inventor); Pelletier, Michael J. (Inventor); Seshadri, Suresh (Inventor); Pain, Bedabrata (Inventor)
2008-01-01
The present invention relates to a system and method for performing rapid and programmable analysis of data. The present invention relates to a reconfigurable detector comprising at least one array of a plurality of pixels, where each of the plurality of pixels can be selected to receive and read-out an input. The pixel array is divided into at least one pixel group for conducting a common predefined analysis. Each of the pixels has a programmable circuitry programmed with a dynamically configurable user-defined function to modify the input. The present detector also comprises a summing circuit designed to sum the modified input.
Horsetail matching: a flexible approach to optimization under uncertainty
NASA Astrophysics Data System (ADS)
Cook, L. W.; Jarrett, J. P.
2018-04-01
It is important to design engineering systems to be robust with respect to uncertainties in the design process. Often, this is done by considering statistical moments, but over-reliance on statistical moments when formulating a robust optimization can produce designs that are stochastically dominated by other feasible designs. This article instead proposes a formulation for optimization under uncertainty that minimizes the difference between a design's cumulative distribution function and a target. A standard target is proposed that produces stochastically non-dominated designs, but the formulation also offers enough flexibility to recover existing approaches for robust optimization. A numerical implementation is developed that employs kernels to give a differentiable objective function. The method is applied to algebraic test problems and a robust transonic airfoil design problem where it is compared to multi-objective, weighted-sum and density matching approaches to robust optimization; several advantages over these existing methods are demonstrated.
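A crude sketch of the CDF-matching idea (not the authors' kernel-smoothed implementation): score each design by the summed squared gap between its sample quantiles and a target distribution, here an idealized deterministic target at zero. All sample values are hypothetical:

```python
def horsetail_metric(samples, target_quantile):
    """Sum of squared differences between sample quantiles and target
    quantiles evaluated at probabilities (i + 0.5) / n."""
    xs = sorted(samples)
    n = len(xs)
    return sum((x - target_quantile((i + 0.5) / n)) ** 2
               for i, x in enumerate(xs))

# Target: all probability mass at q = 0 (ideal deterministic minimum).
target = lambda p: 0.0

design_a = [0.2, 0.4, 0.5, 0.7, 0.9]   # concentrated near the target
design_b = [0.2, 0.6, 1.1, 1.4, 2.0]   # heavy tail far from the target
print(horsetail_metric(design_a, target), horsetail_metric(design_b, target))
```

The design whose distribution of outcomes sits closer to the target scores lower, so minimizing this metric prefers designs that are not stochastically dominated.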
[Theoretical model study about the application risk of high risk medical equipment].
Shang, Changhao; Yang, Fenghui
2014-11-01
This study establishes a theoretical model for monitoring the application risk of high risk medical equipment at the applying site. The applying site is regarded as a system composed of sub-systems, each consisting of several risk-estimating indicators. After each indicator is quantized, the quantized values are multiplied by their corresponding weights and the products are accumulated, yielding the risk-estimating value of each sub-system. In the same manner, the risk-estimating values of the sub-systems are multiplied by their corresponding weights and accumulated; this cumulative sum is the status indicator of the high risk medical equipment at the applying site, reflecting its application risk. The resulting model can dynamically and specifically monitor the application risk of high risk medical equipment at the applying site.
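The two-level weighted aggregation described above can be sketched as follows; the subsystem names, weights, and indicator values are hypothetical:

```python
def weighted_score(values, weights):
    """Aggregate quantized indicator values into a weighted sum."""
    assert len(values) == len(weights)
    return sum(v * w for v, w in zip(values, weights))

def site_risk(subsystems):
    """Two-level aggregation: indicators -> subsystem scores -> site status.

    subsystems: list of (subsystem_weight, indicator_values, indicator_weights).
    """
    return weighted_score(
        [weighted_score(vals, ws) for _, vals, ws in subsystems],
        [w for w, _, _ in subsystems],
    )

# Two hypothetical subsystems with indicators quantized to a 0-1 scale.
subs = [
    (0.6, [0.8, 0.5], [0.5, 0.5]),   # e.g. a maintenance subsystem
    (0.4, [0.2, 0.9], [0.7, 0.3]),   # e.g. an operator-training subsystem
]
print(round(site_risk(subs), 3))   # status indicator: 0.554
```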
An omnibus test for the global null hypothesis.
Futschik, Andreas; Taus, Thomas; Zehetmayer, Sonja
2018-01-01
Global hypothesis tests are a useful tool in the context of clinical trials, genetic studies, or meta-analyses, when researchers are not interested in testing individual hypotheses, but in testing whether none of the hypotheses is false. There are several ways to test the global null hypothesis when the individual null hypotheses are independent. If it is assumed that many of the individual null hypotheses are false, combination tests have been recommended to maximize power. If, however, it is assumed that only one or a few null hypotheses are false, global tests based on individual test statistics are more powerful (e.g. the Bonferroni or Simes test). However, there is usually no a priori knowledge of the number of false individual null hypotheses. We therefore propose an omnibus test based on cumulative sums of the transformed p-values. We show that this test yields an impressive overall performance. The proposed method is implemented in an R package called omnibus.
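A simplified illustration of the idea (not the omnibus R package's exact statistic): take the maximum standardized cumulative sum of sorted z-transformed p-values, and calibrate it by Monte Carlo simulation under the global null of uniform p-values:

```python
import random
from statistics import NormalDist

def cumsum_stat(pvals):
    """Max over k of the standardized cumulative sum of the k largest
    z-transformed p-values (large when any prefix carries signal)."""
    z = sorted((NormalDist().inv_cdf(1 - p) for p in pvals), reverse=True)
    s, best = 0.0, float("-inf")
    for k, zk in enumerate(z, 1):
        s += zk
        best = max(best, s / k ** 0.5)
    return best

def omnibus_pvalue(pvals, n_sim=2000, seed=0):
    """Monte Carlo p-value for the global null hypothesis."""
    rng = random.Random(seed)
    obs = cumsum_stat(pvals)
    hits = sum(
        cumsum_stat([rng.uniform(1e-9, 1 - 1e-9) for _ in pvals]) >= obs
        for _ in range(n_sim)
    )
    return (hits + 1) / (n_sim + 1)

# One strong signal among mostly null p-values gives a small global p-value.
print(omnibus_pvalue([0.001, 0.4, 0.7, 0.2, 0.9, 0.5]))
```

Because the statistic scans all prefix sums, it retains power both when one p-value is extreme and when many are moderately small, which is the adaptivity the abstract emphasizes.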
Fluctuations of Wigner-type random matrices associated with symmetric spaces of class DIII and CI
NASA Astrophysics Data System (ADS)
Stolz, Michael
2018-02-01
Wigner-type randomizations of the tangent spaces of classical symmetric spaces can be thought of as ordinary Wigner matrices on which additional symmetries have been imposed. In particular, they fall within the scope of a framework, due to Schenker and Schulz-Baldes, for the study of fluctuations of Wigner matrices with additional dependencies among their entries. In this contribution, we complement the results of these authors by explicit calculations of the asymptotic covariances for symmetry classes DIII and CI and thus obtain explicit CLTs for these classes. On the technical level, the present work is an exercise in controlling the cumulative effect of systematically occurring sign factors in an involved sum of products by setting up a suitable combinatorial model for the summands. This aspect may be of independent interest. Research supported by Deutsche Forschungsgemeinschaft (DFG) via SFB 878.
An empirical model for global earthquake fatality estimation
Jaiswal, Kishor; Wald, David
2010-01-01
We analyzed mortality rates of earthquakes worldwide and developed a country/region-specific empirical model for earthquake fatality estimation within the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is defined as total killed divided by total population exposed at specific shaking intensity level. The total fatalities for a given earthquake are estimated by multiplying the number of people exposed at each shaking intensity level by the fatality rates for that level and then summing them at all relevant shaking intensities. The fatality rate is expressed in terms of a two-parameter lognormal cumulative distribution function of shaking intensity. The parameters are obtained for each country or a region by minimizing the residual error in hindcasting the total shaking-related deaths from earthquakes recorded between 1973 and 2007. A new global regionalization scheme is used to combine the fatality data across different countries with similar vulnerability traits.
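The estimation formula — population exposed at each shaking intensity times a lognormal-CDF fatality rate, summed over intensities — can be sketched as below; the exposure counts and the parameters theta and beta are hypothetical, not PAGER's fitted values:

```python
from math import erf, log, sqrt

def fatality_rate(intensity, theta, beta):
    """Lognormal CDF of shaking intensity: the fraction of exposed
    population killed. theta (median) and beta (spread) are the
    country-specific parameters fit by minimizing hindcast error."""
    return 0.5 * (1 + erf(log(intensity / theta) / (beta * sqrt(2))))

def estimated_fatalities(exposure_by_intensity, theta, beta):
    """Sum of (people exposed x fatality rate) over intensity levels."""
    return sum(pop * fatality_rate(mmi, theta, beta)
               for mmi, pop in exposure_by_intensity.items())

# Hypothetical exposure (people per MMI shaking level) and parameters.
exposure = {6: 500_000, 7: 120_000, 8: 30_000, 9: 5_000}
print(round(estimated_fatalities(exposure, theta=12.0, beta=0.4)))
```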
Meyer, N; McMenamin, J; Robertson, C; Donaghy, M; Allardice, G; Cooper, D
2008-07-01
In 18 weeks, Health Protection Scotland (HPS) deployed a syndromic surveillance system for early detection of natural or intentional disease outbreaks during the G8 Summit 2005 at Gleneagles, Scotland. The system integrated clinical and non-clinical datasets. Clinical datasets included Accident & Emergency (A&E) syndromes, and General Practice (GP) codes grouped into syndromes. Non-clinical data included telephone calls to a nurse helpline, laboratory test orders, and hotel staff absenteeism. A cumulative sum-based detection algorithm and a log-linear regression model identified signals in the data. The system had a fax-based track for real-time identification of unusual presentations. Ninety-five signals were triggered by the detection algorithms and four forms were faxed to HPS. Thirteen signals were investigated. The system successfully complemented a traditional surveillance system in identifying a small cluster of gastroenteritis among the police force and triggered interventions to prevent further cases.
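A one-sided tabular CUSUM of the kind commonly used in syndromic surveillance (the exact HPS algorithm is not specified in the abstract) can be sketched as:

```python
def cusum_signals(counts, mean, k=0.5, h=4.0):
    """One-sided tabular CUSUM: accumulate excesses over (mean + k) and
    flag any day where S_t = max(0, S_{t-1} + x_t - mean - k) exceeds h."""
    s, flags = 0.0, []
    for x in counts:
        s = max(0.0, s + (x - mean - k))
        flags.append(s > h)
    return flags

# Hypothetical daily syndrome counts with a step increase on day 7.
daily = [3, 2, 4, 3, 2, 3, 8, 9, 10, 9]
print(cusum_signals(daily, mean=3.0))
# -> [False, False, False, False, False, False, True, True, True, True]
```

The reference value k absorbs normal day-to-day noise, so isolated blips reset to zero while a sustained shift accumulates quickly past the threshold h.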
The dynamics of mergers and acquisitions: ancestry as the seminal determinant
Viegas, Eduardo; Cockburn, Stuart P.; Jensen, Henrik J.; West, Geoffrey B.
2014-01-01
Understanding the fundamental mechanisms behind the complex landscape of corporate mergers and acquisitions is of crucial importance to economies across the world. Adapting ideas from the fields of complexity and evolutionary dynamics to analyse business ecosystems, we show here that ancestry, i.e. the cumulative sum of historical mergers across all ancestors, is the key characteristic to company mergers and acquisitions. We verify this by comparing an agent-based model to an extensive range of business data, covering the period from the 1830s to the present day and a range of industries and geographies. This seemingly universal mechanism leads to imbalanced business ecosystems, with the emergence of a few very large, but sluggish ‘too big to fail’ entities, and very small, niche entities, thereby creating a paradigm where a configuration akin to effective oligopoly or monopoly is a likely outcome for free market systems. PMID:25383025
Cloern, James E.; Jassby, Alan D.; Carstensen, Jacob; Bennett, William A.; Kimmerer, Wim; Mac Nally, Ralph; Schoellhamer, David H.; Winder, Monika
2012-01-01
We comment on a nonstandard statistical treatment of time-series data first published by Breton et al. (2006) in Limnology and Oceanography and, more recently, used by Glibert (2010) in Reviews in Fisheries Science. In both papers, the authors make strong inferences about the underlying causes of population variability based on correlations between cumulative sum (CUSUM) transformations of organism abundances and environmental variables. Breton et al. (2006) reported correlations between CUSUM-transformed values of diatom biomass in Belgian coastal waters and the North Atlantic Oscillation, and between meteorological and hydrological variables. Each correlation of CUSUM-transformed variables was judged to be statistically significant. On the basis of these correlations, Breton et al. (2006) developed "the first evidence of synergy between climate and human-induced river-based nitrate inputs with respect to their effects on the magnitude of spring Phaeocystis colony blooms and their dominance over diatoms."
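The CUSUM transform is simply the running sum of deviations from the series mean. The statistical pitfall the comment targets is that cumulating induces strong autocorrelation, so even unrelated series can appear highly correlated after transformation. A minimal sketch:

```python
import random

def cusum(series):
    """Cumulative sum of deviations from the series mean."""
    mu = sum(series) / len(series)
    out, s = [], 0.0
    for x in series:
        s += x - mu
        out.append(s)
    return out

def corr(a, b):
    """Pearson correlation coefficient."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

# Two independent white-noise series: the raw correlation is near zero,
# but the CUSUM transforms are smooth, autocorrelated random walks whose
# correlation can be large purely by chance.
rng = random.Random(42)
x = [rng.gauss(0, 1) for _ in range(200)]
y = [rng.gauss(0, 1) for _ in range(200)]
print(abs(corr(x, y)), abs(corr(cusum(x), cusum(y))))
```

Standard significance tests assume independent observations, so p-values computed on CUSUM-transformed series are not valid without correcting for this induced autocorrelation.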
78 FR 25440 - Request for Information and Citations on Methods for Cumulative Risk Assessment
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-01
... Citations on Methods for Cumulative Risk Assessment AGENCY: Office of the Science Advisor, Environmental... requesting information and citations on approaches and methods for the planning, analysis, assessment, and... approaches to understanding risks to human health and the environment. For example, in Science & Decisions...
Journalism Abstracts; Cumulative Index, Volumes 1 to 15; 1963-1977.
ERIC Educational Resources Information Center
Popovich, Mark N., Ed.
Arranged by subject categories and authors, the more than 4,400 abstracts in this cumulative index provide information on doctoral dissertations and master's theses in the field of journalism. The 28 subject areas are as follows: advertising; audience analysis; communication and national development; communication theory, process, and effects;…
HESI EXPOSURE FACTORS DATABASE FOR AGGREGATE AND CUMULATIVE RISK ASSESSMENT
In recent years, the risk analysis community has broadened its use of complex aggregate and cumulative residential exposure models (e.g., to meet the requirements of the 1996 Food Quality Protection Act). The value of these models is their ability to incorporate a range of inp...
Adaptive Dynamic Programming for Discrete-Time Zero-Sum Games.
Wei, Qinglai; Liu, Derong; Lin, Qiao; Song, Ruizhuo
2018-04-01
In this paper, a novel adaptive dynamic programming (ADP) algorithm, called "iterative zero-sum ADP algorithm," is developed to solve infinite-horizon discrete-time two-player zero-sum games of nonlinear systems. The present iterative zero-sum ADP algorithm permits arbitrary positive semidefinite functions to initialize the upper and lower iterations. A novel convergence analysis is developed to guarantee the upper and lower iterative value functions to converge to the upper and lower optimums, respectively. When the saddle-point equilibrium exists, it is emphasized that both the upper and lower iterative value functions are proved to converge to the optimal solution of the zero-sum game, where the existence criteria of the saddle-point equilibrium are not required. If the saddle-point equilibrium does not exist, the upper and lower optimal performance index functions are obtained, respectively, where the upper and lower performance index functions are proved to be not equivalent. Finally, simulation results and comparisons are shown to illustrate the performance of the present method.
Zhai, Pei; Williams, Eric D
2010-10-15
This paper advances the life cycle assessment (LCA) of photovoltaic systems by expanding the boundary of the included processes using hybrid LCA and accounting for the technology-driven dynamics of embodied energy and carbon emissions. Hybrid LCA is an extended method that combines bottom-up process-sum and top-down economic input-output (EIO) methods. In 2007, the embodied energy was 4354 MJ/m² and the energy payback time (EPBT) was 2.2 years for a multicrystalline silicon PV system under 1700 kWh/m²/yr of solar radiation. These results are higher than those of process-sum LCA by approximately 60%, indicating that processes excluded in process-sum LCA, such as transportation, are significant. Even though PV is a low-carbon technology, the difference between hybrid and process-sum results for 10% penetration of PV in the U.S. electrical grid is 0.13% of total current grid emissions. Extending LCA from the process-sum to hybrid analysis makes a significant difference. Dynamics are characterized through a retrospective analysis and future outlook for PV manufacturing from 2001 to 2011. During this decade, the embodied carbon fell substantially, from 60 g CO₂/kWh in 2001 to 21 g/kWh in 2011, indicating that technological progress is realizing reductions in embodied environmental impacts as well as lower module price.
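An EPBT back-of-envelope consistent with the abstract's figures, assuming a module efficiency of about 13.5%, a performance ratio of 0.8, and a grid primary-energy conversion factor of ~0.35 (all three are assumptions for illustration, not values stated in the paper):

```python
def energy_payback_time(embodied_mj_per_m2, insolation_kwh_per_m2_yr,
                        efficiency, performance_ratio=0.8):
    """Years for a PV module to generate its embodied (primary) energy.

    Converts annual electrical output to primary energy with an assumed
    grid conversion factor of 0.35, a common (but variable) convention.
    """
    annual_kwh = insolation_kwh_per_m2_yr * efficiency * performance_ratio
    annual_primary_mj = annual_kwh * 3.6 / 0.35   # kWh -> MJ, then to primary
    return embodied_mj_per_m2 / annual_primary_mj

# Abstract figures: 4354 MJ/m2 embodied, 1700 kWh/m2/yr insolation.
print(round(energy_payback_time(4354, 1700, efficiency=0.135), 1))
```

With these assumed values the sketch lands near the paper's reported 2.2 years, but the result is sensitive to the efficiency and conversion-factor assumptions.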
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Y. S.; Cai, F.; Xu, W. M.
2011-09-28
The ship motion equation with a cosine wave excitation force describes ship rolling in regular waves. A new wave excitation force model, expressed as a sum of cosine functions, was proposed to describe ship rolling in irregular waves. Ship rolling time series were obtained by solving the ship motion equation with the fourth-order Runge-Kutta method. These rolling time series were analyzed with phase-space tracks, power spectra, principal component analysis, and the largest Lyapunov exponent. Simulation results show that ship rolling exhibits chaotic characteristics when the wave excitation force is applied as a sum of cosine functions. The result explains the mechanism of chaotic ship rolling and is useful for ship hydrodynamic study.
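The simulation setup — a single-degree-of-freedom roll equation forced by a sum of cosines with random phases, integrated by fourth-order Runge-Kutta — can be sketched as follows; the damping, restoring, and component amplitudes are hypothetical:

```python
import math
import random

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Irregular-wave excitation: a sum of cosines with random phases.
rng = random.Random(1)
components = [(0.3, w / 10, rng.uniform(0, 2 * math.pi))
              for w in range(5, 15)]          # (amplitude, frequency, phase)

def excitation(t):
    return sum(a * math.cos(w * t + eps) for a, w, eps in components)

def rolling(t, y, damping=0.1, restoring=1.0):
    """Linear roll equation: phi'' + 2*d*phi' + c*phi = M(t)."""
    phi, dphi = y
    return [dphi, excitation(t) - 2 * damping * dphi - restoring * phi]

t, y, h = 0.0, [0.0, 0.0], 0.05
series = []
for _ in range(2000):
    y = rk4_step(rolling, t, y, h)
    t += h
    series.append(y[0])
print(max(abs(p) for p in series))
```

This linear sketch stays bounded; the chaotic behavior the paper reports requires a nonlinear restoring term, which could be added by replacing `restoring * phi` with, e.g., a cubic polynomial in phi.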
Large-Nc masses of light mesons from QCD sum rules for nonlinear radial Regge trajectories
NASA Astrophysics Data System (ADS)
Afonin, S. S.; Solomko, T. D.
2018-04-01
The large-Nc masses of light vector, axial, scalar and pseudoscalar mesons are calculated from QCD spectral sum rules for a particular ansatz interpolating the radial Regge trajectories. The ansatz includes a linear part plus exponentially decreasing corrections to the meson masses and residues. The form of the corrections was proposed some time ago for consistency with the analytical structure of the Operator Product Expansion of the two-point correlation functions. We revised the original analysis and found a second solution to the proposed sum rules. This solution better describes the spectrum of vector and axial mesons.
Zintzaras, Elias; Doxani, Chrysoula; Rodopoulou, Paraskevi; Bakalos, Georgios; Ziogas, Dimitris C; Ziakas, Panayiotis; Voulgarelis, Michael
2012-04-01
Acute lymphoblastic leukemia (ALL) is a complex disease with genetic background. The genetic association studies (GAS) that investigated the association between ALL and the MTHFR C677T and A1298C gene variants have produced contradictory or inconclusive results. In order to decrease the uncertainty of estimated genetic risk effects, a meticulous meta-analysis of published GAS relating variants in the MTHFR gene to susceptibility to ALL was conducted. The risk effects were estimated based on the odds ratio (OR) of the allele contrast and the generalized odds ratio (OR(G)). Cumulative and recursive cumulative meta-analyses were also performed. The analysis showed marginal significant association for the C677T variant, overall [OR=0.91 (0.82-1.00) and OR(G)=0.89 (0.79-1.01)], and in Whites [OR=0.88 (0.77-0.99) and OR(G)=0.85 (0.73-0.99)]. The A1298C variant produced non-significant results. For both variants, the cumulative meta-analysis did not show a trend of association as evidence accumulates, and the recursive cumulative meta-analysis indicated lack of sufficient evidence for denying or claiming an association. The current evidence is not sufficient to draw definite conclusions regarding the association of MTHFR variants and development of ALL. Copyright © 2011 Elsevier Ltd. All rights reserved.
Belchansky, G.I.; Douglas, David C.; Platonov, Nikita G.
2008-01-01
Sea ice thickness (SIT) is a key parameter of scientific interest because understanding the natural spatiotemporal variability of ice thickness is critical for improving global climate models. In this paper, changes in Arctic SIT during 1982-2003 are examined using a neural network (NN) algorithm trained with in situ submarine ice draft and surface drilling data. For each month of the study period, the NN individually estimated SIT of each ice-covered pixel (25-km resolution) based on seven geophysical parameters (four shortwave and longwave radiative fluxes, surface air temperature, ice drift velocity, and ice divergence/convergence) that were cumulatively summed at each monthly position along the pixel's previous 3-yr drift track (or less if the ice was <3 yr old). Average January SIT increased during 1982-88 in most regions of the Arctic (+7.6 ± 0.9 cm yr⁻¹), decreased through 1996 Arctic-wide (-6.1 ± 1.2 cm yr⁻¹), then modestly increased through 2003 mostly in the central Arctic (+2.1 ± 0.6 cm yr⁻¹). Net ice volume change in the Arctic Ocean from 1982 to 2003 was negligible, indicating that cumulative ice growth had largely replaced the estimated 45 000 km³ of ice lost by cumulative export. Above 65°N, total annual ice volume and interannual volume changes were correlated with the Arctic Oscillation (AO) at decadal and annual time scales, respectively. Late-summer ice thickness and total volume varied proportionally until the mid-1990s, but volume did not increase commensurate with the thickening during 1996-2002. The authors speculate that decoupling of the ice thickness-volume relationship resulted from two opposing mechanisms with different latitudinal expressions: a recent quasi-decadal shift in atmospheric circulation patterns associated with the AO's neutral state facilitated ice thickening at high latitudes while anomalously warm thermal forcing thinned and melted the ice cap at its periphery. © 2008 American Meteorological Society.
Adverse experiences in childhood, adulthood neighbourhood disadvantage and health behaviours.
Halonen, Jaana I; Vahtera, Jussi; Kivimäki, Mika; Pentti, Jaana; Kawachi, Ichiro; Subramanian, S V
2014-08-01
Early life adversities may play a role in the associations observed between neighbourhood contextual factors and health behaviours. We examined whether self-reported adverse experiences in childhood (parental divorce, long-term financial difficulties, serious conflicts, serious/chronic illness or alcohol problem in the family, and frequent fear of a family member) explain the association between adulthood neighbourhood disadvantage and co-occurrence of behavioural risk factors (smoking, moderate/heavy alcohol use, physical inactivity). Study population consisted of 31 271 public sector employees from Finland. The cross-sectional associations were analysed using two-level cumulative logistic regression models. Childhood adverse experiences were associated with the sum of risk factors (cumulative OR 1.32 (95% CI 1.25 to 1.40) among those reporting 3-6 vs 0 adversities). Adverse experiences did not attenuate the association between neighbourhood disadvantage and risk factors; this cumulative OR was 1.52 (95% CI 1.43 to 1.62) in the highest versus lowest quartile of neighbourhood disadvantage when not including adversities, and 1.50 (95% CI 1.40 to 1.60) when adjusted for childhood adversities. In adversity-stratified analyses those reporting 3-6 adversities had 1.60-fold (95% CI 1.42 to 1.80) likelihood of risk factors if living in the neighbourhood of the highest disadvantage, while in those with fewer adversities this likelihood was 1.09-1.34-fold (95% CI 0.98 to 1.53) (p interaction 0.07). Childhood adverse experiences and adulthood neighbourhood disadvantage were associated with behavioural risk factors. Childhood experiences did not explain associations between neighbourhood disadvantage and the risk factors. However, those with more adverse experiences may be susceptible for the socioeconomic conditions of neighbourhoods. Published by the BMJ Publishing Group Limited. 
SEU System Analysis: Not Just the Sum of All Parts
NASA Technical Reports Server (NTRS)
Berg, Melanie D.; Label, Kenneth
2014-01-01
Single event upset (SEU) analysis of complex systems is challenging. Currently, system SEU analysis is performed by component level partitioning and then either: the most dominant SEU cross-sections (SEUs) are used in system error rate calculations; or the partition SEUs are summed to eventually obtain a system error rate. In many cases, system error rates are overestimated because these methods generally overlook system level derating factors. The problem with overestimating is that it can cause overdesign and consequently negatively affect the following: cost, schedule, functionality, and validation/verification. The scope of this presentation is to discuss the risks involved with our current scheme of SEU analysis for complex systems; and to provide alternative methods for improvement.
A Chronic Fatigue Syndrome (CFS) severity score based on case designation criteria.
Baraniuk, James N; Adewuyi, Oluwatoyin; Merck, Samantha Jean; Ali, Mushtaq; Ravindran, Murugan K; Timbol, Christian R; Rayhan, Rakib; Zheng, Yin; Le, Uyenphuong; Esteitie, Rania; Petrie, Kristina N
2013-01-01
Chronic Fatigue Syndrome case designation criteria are scored as physicians' subjective, nominal interpretations of patient fatigue, pain (headaches, myalgia, arthralgia, sore throat and lymph nodes), cognitive dysfunction, sleep and exertional exhaustion. Subjects self-reported symptoms using an anchored ordinal scale of 0 (no symptom), 1 (trivial complaints), 2 (mild), 3 (moderate), and 4 (severe). Fatigue of 3 or 4 distinguished "Fatigued" from "Not Fatigued" subjects. The sum of the 8(Sum8) ancillary criteria was tested as a proxy for fatigue. All subjects had history and physical examinations to exclude medical fatigue, and ensure categorization as healthy or CFS subjects. Fatigued subjects were divided into CFS with ≥4 symptoms or Chronic Idiopathic Fatigue (CIF) with ≤3 symptoms. ROC of Sum8 for CFS and Not Fatigued subjects generated a threshold of 14 (specificity=0.934; sensitivity=0.928). CFS (n=256) and CIF (n=55) criteria were refined to include Sum8≥14 and ≤13, respectively. Not Fatigued subjects had highly skewed Sum8 responses. Healthy Controls (HC; n=269) were defined by fatigue≤2 and Sum8≤13. Those with Sum8≥14 were defined as CFS-Like With Insufficient Fatigue Syndrome (CFSLWIFS; n=20). Sum8 and Fatigue were highly correlated (R(2)=0.977; Cronbach's alpha=0.924) indicating an intimate relationship between symptom constructs. Cluster analysis suggested 4 clades each in CFS and HC. Translational utility was inferred from the clustering of proteomics from cerebrospinal fluid. Plotting Fatigue severity versus Sum8 produced an internally consistent classifying system. This is a necessary step for translating symptom profiles into fatigue phenotypes and their pathophysiological mechanisms.
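The ROC-based choice of a Sum8 cutoff can be sketched as a simple threshold scan. The scores below are made up for illustration; the study's actual data yielded a threshold of 14 with specificity 0.934 and sensitivity 0.928.

```python
# Hedged illustration of choosing a Sum8 threshold by ROC scan
# (hypothetical scores, not the study's data).

def roc_best_threshold(cases, controls):
    """Scan integer thresholds; return (threshold, sensitivity, specificity)
    maximizing Youden's J = sensitivity + specificity - 1."""
    best = None
    for t in range(0, 33):  # Sum8 ranges 0..32 (8 items x max score 4)
        sens = sum(x >= t for x in cases) / len(cases)
        spec = sum(x < t for x in controls) / len(controls)
        j = sens + spec - 1
        if best is None or j > best[3]:
            best = (t, sens, spec, j)
    return best[:3]

cfs = [14, 18, 22, 15, 27, 19, 16, 24]    # hypothetical Sum8 scores, CFS
healthy = [3, 6, 9, 2, 11, 13, 5, 8]      # hypothetical Sum8, Not Fatigued

t, sens, spec = roc_best_threshold(cfs, healthy)
print(t, sens, spec)   # 14 1.0 1.0 on this toy, cleanly separated data
```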
A new color vision test to differentiate congenital and acquired color vision defects.
Shin, Young Joo; Park, Kyu Hyung; Hwang, Jeong-Min; Wee, Won Ryang; Lee, Jin Hak
2007-07-01
To investigate the efficacy of a novel computer-controlled color test for the differentiation of congenital and acquired color vision deficiency. Observational cross-sectional study. Thirty-one patients with congenital color vision deficiency and 134 patients with acquired color vision deficiency with a Snellen visual acuity better than 20/30 underwent an ophthalmologic examination including the Ishihara color test, Hardy-Rand-Rittler test, Nagel anomaloscopy, and the Seohan computerized hue test between June 2003 and January 2004. To investigate the type of color vision defect, a graph of the Seohan computerized hue test was divided into 4 quadrants and the error scores in each quadrant were summed. The ratio between the sums of error scores of quadrants I and III (Q1+Q3) and those of quadrants II and IV (Q2+Q4) was calculated. Outcome measures were the error scores and the ratio in the quadrant analysis of the Seohan computerized hue test. The Seohan computerized hue test showed that the sum Q2+Q4 was significantly higher than the sum Q1+Q3 in congenital color vision deficiency (P<0.01, paired t test) and significantly lower than the sum Q1+Q3 in acquired color vision deficiency (P<0.01, paired t test). In discriminating congenital from acquired color vision deficiency, the quadrant-analysis ratio had 93.3% sensitivity and 98.5% specificity with a reference value of 1.5 (95% confidence interval). The quadrant analysis and the ratio (Q2+Q4)/(Q1+Q3) from the Seohan computerized hue test effectively differentiated congenital and acquired color vision deficiency.
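The decision rule above reduces to one ratio and one cutoff. A minimal sketch, with invented quadrant error scores but the study's reference value of 1.5:

```python
# Illustrative classifier for the quadrant-ratio rule; the quadrant error
# scores are hypothetical, the 1.5 cutoff is the study's reference value.

def classify_color_defect(q1, q2, q3, q4, cutoff=1.5):
    """Congenital defects showed (Q2+Q4) > (Q1+Q3); acquired the reverse.
    Classify by the ratio (Q2+Q4)/(Q1+Q3) against the reference value."""
    ratio = (q2 + q4) / (q1 + q3)
    return ("congenital" if ratio >= cutoff else "acquired"), ratio

label, ratio = classify_color_defect(q1=10, q2=40, q3=12, q4=35)
print(label, round(ratio, 2))   # congenital 3.41

label2, ratio2 = classify_color_defect(q1=30, q2=10, q3=25, q4=8)
print(label2, round(ratio2, 2))  # acquired 0.33
```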
Bayesian Statistics in Educational Research: A Look at the Current State of Affairs
ERIC Educational Resources Information Center
König, Christoph; van de Schoot, Rens
2018-01-01
The ability of a scientific discipline to build cumulative knowledge depends on its predominant method of data analysis. A steady accumulation of knowledge requires approaches which allow researchers to consider results from comparable prior research. Bayesian statistics is especially relevant for establishing a cumulative scientific discipline,…
Cumulating the Intellectual Gold of Case Study Research.
ERIC Educational Resources Information Center
Rodgers, Robert; Jensen, Jason L.
2001-01-01
Looks at criticisms of public administration research: (1) knowledge in the field is not being accumulated, and (2) the research has low quality. Proposes meta-analysis as a solution to the first problem. Suggests that quality judgments should be based on knowledge cumulation, which acknowledges the value of all research methods. (Contains 48…
The US EPA’s N-Methyl Carbamate (NMC) Cumulative Risk assessment was based on the effect on acetylcholine esterase (AChE) activity of exposure to 10 NMC pesticides through dietary, drinking water, and residential exposures, assuming the effects of joint exposure to NMCs is dose-...
In recent years, the risk analysis community has broadened its use of complex aggregate and cumulative residential exposure models (e.g., to meet the requirements of the 1996 Food Quality Protection Act). The value of these models is their ability to incorporate a range of input...
Cann, A P; Connolly, M; Ruuska, R; MacNeil, M; Birmingham, T B; Vandervoort, A A; Callaghan, J P
2008-04-01
Despite the ongoing health problem of repetitive strain injuries, there are few tools currently available for ergonomic applications evaluating cumulative loading that have well-documented evidence of reliability and validity. The purpose of this study was to determine the inter-rater reliability of a posture matching based analysis tool (3DMatch, University of Waterloo) for predicting cumulative and peak spinal loads. A total of 30 food service workers were each videotaped for a 1-h period while performing typical work activities and a single work task was randomly selected from each for analysis by two raters. Inter-rater reliability was determined using intraclass correlation coefficients (ICC) model 2,1 and standard errors of measurement for cumulative and peak spinal and shoulder loading variables across all subjects. Overall, 85.5% of variables had moderate to excellent inter-rater reliability, with ICCs ranging from 0.30-0.99 for all cumulative and peak loading variables. 3DMatch was found to be a reliable ergonomic tool when more than one rater is involved.
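The reliability statistic used above, ICC model (2,1) (two-way random effects, single rater), can be sketched from the Shrout & Fleiss formula. The example matrix is the widely reproduced illustration from their 1979 paper (6 targets rated by 4 judges), not this study's data.

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    x is an (n subjects, k raters) matrix; two-way ANOVA without replication."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # raters
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Classic Shrout & Fleiss (1979) example: 6 targets, 4 judges.
ratings = np.array([[9, 2, 5, 8],
                    [6, 1, 3, 2],
                    [8, 4, 6, 8],
                    [7, 1, 2, 6],
                    [10, 5, 6, 9],
                    [6, 2, 4, 7]])
print(round(icc_2_1(ratings), 2))   # 0.29, matching the published value
```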
Xu, Ke; Zhang, Xinyu; Wang, Zuoheng; Hu, Ying; Sinha, Rajita
2018-01-01
Chronic stress has a significant impact on obesity. However, how stress influences obesity remains unclear. We conducted an epigenome-wide DNA methylation association analysis of obesity (N=510) and examined whether cumulative stress moderated the effect of DNA methylation on body weight. We identified 20 CpG sites associated with body mass index at the false discovery rate q<0.05, including a novel site, cg18181703, in the suppressor of cytokine signaling 3 (SOCS3) gene (coefficient β=-0.0022, FDR q=4.94×10^-5). The interaction between cg18181703 and cumulative adverse life stress contributed to variations in body weight (p=0.002). Individuals with at least five major life events and lower methylation of cg18181703 showed a 1.38-fold higher risk of being obese (95%CI: 1.17-1.76). Our findings suggest that aberrant DNA methylation is associated with body weight and that methylation of SOCS3 moderates the effect of cumulative stress on obesity. Copyright © 2016 Elsevier B.V. All rights reserved.
Coherence analysis of a class of weighted networks
NASA Astrophysics Data System (ADS)
Dai, Meifeng; He, Jiaojiao; Zong, Yue; Ju, Tingting; Sun, Yu; Su, Weiyi
2018-04-01
This paper investigates consensus dynamics in a dynamical system with additive stochastic disturbances, characterized as network coherence by using the Laplacian spectrum. We introduce a class of weighted networks based on a complete graph and investigate the first- and second-order network coherence, quantified as the sum and the square sum of the reciprocals of all nonzero Laplacian eigenvalues. First, the recursive relationship of the Laplacian eigenvalues at two successive generations is deduced. Then, we compute the sum and the square sum of the reciprocals of all nonzero Laplacian eigenvalues. The obtained results show that the scalings of first- and second-order coherence with network size obey four and five laws, respectively, depending on the range of the weight factor. Finally, the results indicate that the scalings of our studied networks are smaller than those of other studied networks when 1/√{d }
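The two spectral sums used above can be computed directly from a graph Laplacian. A minimal numpy sketch, shown for an unweighted complete graph rather than the paper's weighted recursive family:

```python
# Sum and square sum of reciprocals of nonzero Laplacian eigenvalues,
# the quantities behind first- and second-order network coherence.
import numpy as np

def coherence_sums(adjacency):
    L = np.diag(adjacency.sum(axis=1)) - adjacency   # graph Laplacian
    eig = np.linalg.eigvalsh(L)
    nonzero = eig[eig > 1e-9]                        # drop the zero eigenvalue
    return (1.0 / nonzero).sum(), (1.0 / nonzero ** 2).sum()

n = 4
K4 = np.ones((n, n)) - np.eye(n)   # complete graph on 4 nodes
s1, s2 = coherence_sums(K4)
# K_n has nonzero Laplacian eigenvalues n with multiplicity n-1,
# so the sums are (n-1)/n = 0.75 and (n-1)/n^2 = 0.1875 for n = 4.
print(s1, s2)
```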
Time-resolved x-ray scattering instrumentation
Borso, C.S.
1985-11-21
An apparatus and method for increased speed and efficiency of data compilation and analysis in real time is presented in this disclosure. Data is sensed and grouped in combinations in accordance with predetermined logic. The combinations are grouped so that a simplified reduced signal results, such as pairwise summing of data values having offsetting algebraic signs, thereby reducing the magnitude of the net pair sum. Bit storage requirements are reduced and speed of data compilation and analysis is increased by manipulation of shorter bit length data values, making real time evaluation possible.
ERIC Educational Resources Information Center
Kwok, Oi-man; West, Stephen G.; Green, Samuel B.
2007-01-01
This Monte Carlo study examined the impact of misspecifying the Σ matrix in longitudinal data analysis under both the multilevel model and mixed model frameworks. Under the multilevel model approach, under-specification and general-misspecification of the Σ matrix usually resulted in overestimation of the variances of the random…
Hierarchical Discriminant Analysis.
Lu, Di; Ding, Chuntao; Xu, Jinliang; Wang, Shangguang
2018-01-18
The Internet of Things (IoT) generates large amounts of high-dimensional sensor data. Processing high-dimensional data (e.g., data visualization and data classification) is difficult, so it requires effective subspace learning algorithms that learn a latent subspace preserving the intrinsic structure of the high-dimensional data while discarding the least useful information for subsequent processing. In this context, many subspace learning algorithms have been presented. However, when transforming high-dimensional data into a low-dimensional space, the huge difference between the sum of inter-class distances and the sum of intra-class distances for distinct data may cause a bias problem: the influence of the intra-class distance is overwhelmed. To address this problem, we propose a novel algorithm called Hierarchical Discriminant Analysis (HDA). It minimizes the sum of intra-class distances first, and then maximizes the sum of inter-class distances. This proposed method balances the bias from the inter-class and that from the intra-class to achieve better performance. Extensive experiments are conducted on several benchmark face datasets. The results reveal that HDA obtains better performance than other dimensionality reduction algorithms.
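The intra-class and inter-class quantities that HDA balances are the standard discriminant-analysis scatter matrices. A sketch of their computation on toy data follows; the two-stage HDA optimization itself is not reproduced, and the data are invented.

```python
# Within-class (Sw) and between-class (Sb) scatter matrices, the
# building blocks of discriminant analysis; toy two-class data.
import numpy as np

def scatter_matrices(X, y):
    """Return within-class scatter Sw and between-class scatter Sb."""
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)               # intra-class spread
        diff = (mc - mean_all).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)             # inter-class spread
    return Sw, Sb

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 3)), rng.normal(3, 1, (20, 3))])
y = np.array([0] * 20 + [1] * 20)
Sw, Sb = scatter_matrices(X, y)

# Sanity check: total scatter decomposes exactly as Sw + Sb.
St = (X - X.mean(axis=0)).T @ (X - X.mean(axis=0))
print(np.allclose(Sw + Sb, St))   # True
```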
Health Equity and the Fallacy of Treating Causes of Population Health as if They Sum to 100.
Krieger, Nancy
2017-04-01
Numerous examples exist in population health of work that erroneously forces the causes of health to sum to 100%. This is surprising. Clear refutations of this error extend back 80 years. Because public health analysis, action, and allocation of resources are ill served by faulty methods, I consider why this error persists. I first review several high-profile examples, including Doll and Peto's 1981 opus on the causes of cancer and its current interpretations; a 2015 high-publicity article in Science claiming that two thirds of cancer is attributable to chance; and the influential Web site "County Health Rankings & Roadmaps: Building a Culture of Health, County by County," whose model sums causes of health to equal 100%: physical environment (10%), social and economic factors (40%), clinical care (20%), and health behaviors (30%). Critical analysis of these works and earlier historical debates reveals that underlying the error of forcing causes of health to sum to 100% is the still dominant but deeply flawed view that causation can be parsed as nature versus nurture. Better approaches exist for tallying risk and monitoring efforts to reach health equity.
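The arithmetic behind the fallacy is easy to demonstrate. In the made-up cohort below, disease occurs only when exposures A and B are both present, so removing either one eliminates all cases and each population attributable fraction is 100%; the "causes" then sum to 200%, not 100%.

```python
# Toy illustration: attributable fractions for jointly necessary causes
# need not sum to 100%. All counts are hypothetical.

def paf(cases_total, cases_if_removed):
    """Population attributable fraction for one exposure."""
    return (cases_total - cases_if_removed) / cases_total

cases = 50             # all cases occur among people with BOTH A and B
paf_a = paf(cases, 0)  # eliminate A: no one has both A and B -> 0 cases
paf_b = paf(cases, 0)  # eliminate B: likewise 0 cases
print(paf_a + paf_b)   # 2.0 -- the two "causes" account for 200% of cases
```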
Cui, Kai; Shen, Fuhai; Han, Bing; Yuan, Juxiang; Suo, Xia; Qin, Tianbang; Liu, Hongbo; Chen, Jie
2015-01-01
The purpose of this study was to identify differences in the incidence characteristics of coal workers’ pneumoconiosis (CWP) based on data from four large state-owned colliery groups of China, by comparing the cumulative incidence rates of CWP. We investigated 87,904 coal workers from the Datong, Kailuan, Fuxin, and Tiefa Colliery Groups, who were exposed to dust for at least 1 year. The cumulative incidence rate of CWP was calculated with the life-table method and stratified analysis among coal workers with different occupational categories during different years of first dust exposure. Our results showed the cumulative incidence rate of Datong was higher than that of any other colliery group among workers with different occupational categories during different years of first dust exposure. For Datong workers who started their dust exposure in the 1970s, the cumulative incidence rates of CWP among tunneling, mining, combining, and helping workers were 34.77%, 10.20%, 34.59%, and 4.91% during the observed time of 34 years, respectively. For those in the 1980s, the cumulative incidence rates were 32.29%, 13.51%, 2.98%, and 0.47%, respectively. The cumulative incidence rates of Fuxin and Tiefa were the lowest. In conclusion, the Datong colliery has the highest cumulative incidence rate of CWP among the four studied collieries, followed by Kailuan. The cumulative incidence rates of Fuxin and Tiefa were the lowest. Additional dust-proofing measures for decreasing dust concentrations are still necessary. PMID:26133134
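The actuarial life-table calculation behind cumulative incidence rates like those above can be sketched as follows; the intervals and counts are hypothetical, not the colliery data.

```python
# Hedged sketch of the actuarial (life-table) cumulative incidence method.

def cumulative_incidence(intervals):
    """intervals: list of (n_at_risk, new_cases, withdrawals) per period.
    The effective denominator assumes withdrawals are at risk for half the
    interval; cumulative incidence = 1 - product of survival probabilities."""
    surv = 1.0
    for n, d, w in intervals:
        q = d / (n - w / 2.0)   # conditional probability of disease
        surv *= 1.0 - q
    return 1.0 - surv

periods = [(1000, 10, 50), (940, 15, 40), (885, 20, 30)]  # hypothetical
ci = cumulative_incidence(periods)
print(round(ci, 4))   # about 4.9% cumulative incidence over three periods
```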
Cumulative iron dose and resistance to erythropoietin.
Rosati, A; Tetta, C; Merello, J I; Palomares, I; Perez-Garcia, R; Maduell, F; Canaud, B; Aljama Garcia, P
2015-10-01
Optimizing anemia treatment in hemodialysis (HD) patients remains a priority worldwide as it has significant health and financial implications. Our aim was to evaluate, in a large cohort of chronic HD patients in Fresenius Medical Care centers in Spain, the value of cumulative iron (Fe) dose monitoring for the management of iron therapy in erythropoiesis-stimulating agent (ESA)-treated patients, and the relationship between cumulative iron dose and risk of hospitalization. Demographic, clinical and laboratory parameters from EuCliD® (European Clinical Dialysis Database) on 3,591 patients were recorded, including ESA dose (IU/kg/week), erythropoietin resistance index (ERI) [IU/week/kg/g hemoglobin (Hb)] and hospitalizations. Moreover, the cumulative Fe dose (mg/kg of bodyweight) administered over the last 2 years was calculated. Univariate and multivariate analyses were performed to identify the main predictors of ESA resistance and risk of hospitalization. Patients belonging to the 4th quartile of ERI were defined as hypo-responders. The 2-year iron cumulative dose was significantly higher in the 4th quartile of ERI. In hypo-responders, 2-year cumulative iron dose was the only iron marker associated with ESA resistance. At case-mix adjusted multivariate analysis, 2-year iron cumulative dose was an independent predictor of hospitalization risk. In ESA-treated patients, cumulative Fe dose could be a useful tool to monitor the appropriateness of Fe therapy and to prevent iron overload. To establish whether the associations between cumulative iron dose, ERI and hospitalization risk are causal or attributable to selection bias by indication, clinical trials are necessary.
NASA Astrophysics Data System (ADS)
Banerji, Anirban; Magarkar, Aniket
2012-09-01
We feel happy when web browsing operations provide us with the necessary information; otherwise, we feel bitter. How can this happiness (or bitterness) be measured? How does the profile of happiness grow and decay during the course of web browsing? We propose a probabilistic framework that models the evolution of user satisfaction on top of his/her continuous frustration at not finding the required information. It is found that the cumulative satisfaction profile of a web-searching individual can be modeled effectively as the sum of a random number of random terms, where each term is a mutually independent random variable originating from a 'memoryless' Poisson flow. Evolution of satisfaction over the entire time interval of a user's browsing was modeled using auto-correlation analysis. A utilitarian marker, whose magnitude above unity describes happy web-searching operations, and an empirical limit that connects the user's satisfaction with his frustration level are proposed as well. The presence of pertinent information in the very first page of a website and the magnitude of the decay parameter of user satisfaction (frustration, irritation, etc.) are found to be the two key aspects that dominate the web user's psychology. The proposed model employed different combinations of the decay parameter, searching time and number of helpful websites. The obtained results are found to match the results from three real-life case studies.
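The "sum of a random number of random terms" driven by a Poisson flow is a compound Poisson process, which is easy to simulate. The event rate and the reward distribution below are assumptions for illustration, not the paper's fitted parameters.

```python
# Simulation of cumulative satisfaction as a compound Poisson sum:
# helpful pages arrive as a Poisson process; each adds a random gain.
import random

def cumulative_satisfaction(rate, t, mean_gain, rng):
    """Sum exponential 'satisfaction jumps' over Poisson arrivals in (0, t]."""
    total, elapsed = 0.0, 0.0
    while True:
        elapsed += rng.expovariate(rate)            # next inter-arrival time
        if elapsed > t:
            return total
        total += rng.expovariate(1.0 / mean_gain)   # one satisfaction jump

rng = random.Random(42)
samples = [cumulative_satisfaction(rate=2.0, t=10.0, mean_gain=1.5, rng=rng)
           for _ in range(20000)]
mean = sum(samples) / len(samples)
# Theoretical expectation of a compound Poisson sum: rate * t * mean_gain = 30.
print(round(mean, 1))
```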
Saint Onge, Jarron M; Cepeda, Alice; Lee King, Patricia A; Valdez, Avelardo
2013-12-01
We used an intersectional minority stress perspective to examine the association between family/cultural stress and mental health among substance-using Mexican-Americans. Employing a unique longitudinal sample of 239 socioeconomically disadvantaged, non-injecting heroin-using Mexican-Americans from San Antonio, Texas, we examined how culturally relevant stressors are related to depression and suicidal ideation. First, we identified depression and suicidal ideation prevalence rates for this disadvantaged sample. Second, we determined how cultural stress is experienced over time using stress trajectories. Third, we evaluated how family/cultural stressors and stress trajectories are related to depression and suicidal ideation outcomes. Results showed high rates of baseline depression (24 %) and suicidal ideation (30 %). We used latent class growth analysis to identify three primary stress trajectories (stable, high but decreasing, and increasing) over three time points during 1 year. We found that the increasing stressors trajectory was associated with higher rates of depression and suicidal ideation, and that stress trajectories had unique relationships with mental illness. We also showed that baseline stressors, sum stressors, and high but decreasing stressors maintained positive associations with mental illness after controlling for baseline depression. Our results highlight the importance of focusing on within-group, culturally specific stressors and addressing both operant and cumulative stressors in the study of mental health for marginalized populations and suggest the importance of early intervention in minimizing stressors.
Metrological activity determination of 133Ba by sum-peak absolute method
NASA Astrophysics Data System (ADS)
da Silva, R. L.; de Almeida, M. C. M.; Delgado, J. U.; Poledna, R.; Santos, A.; de Veras, E. V.; Rangel, J.; Trindade, O. L.
2016-07-01
The National Laboratory for Metrology of Ionizing Radiation provides gamma-emitting radionuclide sources standardized in activity with reduced uncertainties. Relative methods require standards to determine the sample activity, while absolute methods, such as sum-peak, do not: the activity is obtained directly with good accuracy and low uncertainties. 133Ba is used in research laboratories and in the calibration of detectors for analyses in different fields. Classical absolute methods cannot standardize 133Ba because of its complex decay scheme. The sum-peak method, using gamma spectrometry with a germanium detector, was used to standardize 133Ba samples. Uncertainties lower than 1% in the activity results were obtained.
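As background, the classical two-photon sum-peak relation (attributed to Brinkman and co-workers) gives the activity directly from count rates, with detector efficiencies cancelling: A = T + N1·N2/N12, where T is the total spectrum count rate, N1 and N2 the photopeak rates and N12 the sum-peak rate. 133Ba's complex decay scheme requires additional corrections not shown here, and the count rates below are invented for illustration.

```python
# Sketch of the classical two-photon sum-peak activity relation;
# hypothetical count rates, no 133Ba cascade corrections included.

def sum_peak_activity(total_rate, n1, n2, n12):
    """Activity A = T + N1*N2/N12; detector efficiencies cancel."""
    return total_rate + n1 * n2 / n12

a = sum_peak_activity(total_rate=500.0, n1=120.0, n2=90.0, n12=27.0)
print(a)   # 500 + 120*90/27 = 900.0 decays/s
```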
Budäus, Lars; Graefen, Markus; Salomon, Georg; Isbarn, Hendrik; Lughezzani, Giovanni; Sun, Maxine; Chun, Felix K H; Schlomm, Thorsten; Steuber, Thomas; Haese, Alexander; Koellermann, Jens; Sauter, Guido; Fisch, Margit; Heinzer, Hans; Huland, Hartwig; Karakiewicz, Pierre I
2010-10-01
To examine the rate of Gleason sum upgrading (GSU) from a sum of 6 to a Gleason sum of ≥7 in patients undergoing radical prostatectomy (RP), who fulfilled the recommendations for low dose rate brachytherapy (Gleason sum 6, prostate-specific antigen ≤10 ng/mL, clinical stage ≤T2a and prostate volume ≤50 mL), and to test the performance of an existing nomogram for prediction of GSU in this specific cohort of patients. The analysis focused on 414 patients, who fulfilled the European Society for Therapeutic Radiation and Oncology and American Brachytherapy Society criteria for low dose rate brachytherapy (LD-BT) and underwent a 10-core prostate biopsy followed by RP. The rate of GSU was tabulated and the ability of available clinical and pathological parameters for predicting GSU was tested. Finally, the performance of an existing GSU nomogram was explored. The overall rate of GSU was 35.5%. When applied to LD-BT candidates, the existing nomogram was 65.8% accurate versus 70.8% for the new nomogram. In decision curve analysis tests, the new nomogram fared substantially better than the assumption that no patient is upgraded and better than the existing nomogram. GSU represents an important issue in LD-BT candidates. The new nomogram might improve patient selection for LD-BT and cancer control outcome by excluding patients with an elevated probability of GSU. © 2010 The Japanese Urological Association.
Wang, Hongxin; Friedrich, Stephan; Li, Lei; ...
2018-02-13
According to L-edge sum rules, the number of 3d vacancies at a transition metal site is directly proportional to the integrated intensity of the L-edge X-ray absorption spectrum (XAS) for the corresponding metal complex. In this study, the numbers of 3d holes are characterized quantitatively or semi-quantitatively for a series of manganese (Mn) and nickel (Ni) complexes, including the electron configurations 3d10 → 3d0. In addition, extremely dilute (<0.1% wt/wt) Ni enzymes were examined by two different approaches: (1) by using a high resolution superconducting tunnel junction X-ray detector to obtain XAS spectra with a very high signal-to-noise ratio, especially in the non-variant edge jump region; and (2) by adding an inert tracer to the sample that provides a prominent spectral feature to replace the weak edge jump for intensity normalization. In this publication, we present for the first time: (1) L-edge sum rule analysis for a series of Mn and Ni complexes that include electron configurations from an open-shell 3d0 to a closed-shell 3d10; (2) a systematic analysis of the uncertainties, especially that from the edge jump, which was missing in all previous reports; (3) a clearly resolved edge jump between pre-L3 and post-L2 regions from an extremely dilute sample; (4) an evaluation of an alternative normalization standard for L-edge sum rule analysis. XAS from two copper (Cu) proteins measured using a conventional semiconductor X-ray detector are also repeated as bridges between Ni complexes and dilute Ni enzymes. The differences between measuring 1% Cu enzymes and measuring <0.1% Ni enzymes are compared and discussed. As a result, this study extends L-edge sum rule analysis to virtually any 3d metal complex and any dilute biological samples that contain 3d metals.
Armstrong, April W; Feldman, Steven R; Korman, Neil J; Meng, Xiangyi; Guana, Adriana; Nyirady, Judit; Herrera, Vivian; Zhao, Yang
2017-05-01
Conventional measurements for assessing psoriasis treatment effects capture improvements at fixed, pre-specified timepoints, failing to account for cumulative clinical benefit over time. Explore the innovative concept of "cumulative clinical benefit" by examining the effect of secukinumab over 52 weeks in moderate-to-severe psoriasis patients. Cumulative clinical benefit was determined as the area-under-the-curve of the percentage of responders over 52 weeks (AUC 0-52 wks ), using pooled data from two phase III trials for patients receiving secukinumab (300 or 150 mg) or etanercept. Normalized cumulative benefit with secukinumab 300 mg, secukinumab 150 mg, and etanercept was 74.2%, 63.2%, and 50.5%, respectively, for PASI 75; 58.0%, 42.5%, and 29.5%, respectively, for PASI 90; 32.3%, 18.8%, and 8.7%, respectively, for PASI 100; and 58.3%, 47.9%, and 38.3%, respectively, for DLQI 0/1. 52-week PASI 75 clinical benefit ratios for secukinumab 300 and 150 mg versus etanercept were 1.47 and 1.25, respectively; the ratio of the two secukinumab doses was 1.17, favoring 300 mg. Post hoc analysis. Cumulative clinical benefit estimated by AUC 0-52 wks is a novel measure for comparing psoriasis treatments. Secukinumab 300 mg provides greater cumulative clinical benefit than secukinumab 150 mg; both provide greater cumulative benefit than etanercept.
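The AUC(0-52wk) cumulative-benefit measure can be sketched as a trapezoidal area under the percent-responder curve, normalized by the maximum possible area (100% response at every visit). The visit times and response rates below are hypothetical, not the trial data.

```python
# Hedged sketch of normalized cumulative clinical benefit: trapezoidal
# AUC of responder % over time, divided by the maximal achievable area.

def normalized_auc(weeks, pct_responders):
    """Trapezoidal AUC of responder % versus week, as a fraction of
    100 * (last week - first week)."""
    auc = sum((pct_responders[i] + pct_responders[i + 1]) / 2.0
              * (weeks[i + 1] - weeks[i])
              for i in range(len(weeks) - 1))
    return auc / (100.0 * (weeks[-1] - weeks[0]))

weeks = [0, 4, 12, 24, 52]
pasi75 = [0.0, 50.0, 80.0, 85.0, 82.0]   # hypothetical % PASI 75 responders
val = normalized_auc(weeks, pasi75)
print(round(val, 3))   # fraction of the maximal possible benefit
```

Comparing two treatments' normalized AUCs gives a clinical benefit ratio of the kind reported above.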
ERIC Educational Resources Information Center
Maschi, Tina
2006-01-01
This study examined how the cumulative (additive) versus differential (individual) effects of trauma influenced male delinquency. Using a comprehensive measure of trauma, a secondary data analysis was conducted on a nationally representative sample of male youths between the ages of 12 and 17. Logistic regression analyses revealed that all three…
Investigation of the external flow analysis for density measurements at high altitude
NASA Technical Reports Server (NTRS)
Bienkowski, G. K.
1984-01-01
The results of analysis performed on the external flow around the shuttle orbiter nose region at the Shuttle Upper Atmosphere Mass Spectrometer (SUMS) inlet orifice are presented. The purpose of the analysis is to quantitatively characterize the flow conditions to facilitate SUMS flight data reduction and subsequent determination of orbiter aerodynamic force coefficients in the hypersonic rarefied flow regime. Experimental determination of aerodynamic force coefficients requires accurate simultaneous measurement of forces (or acceleration) and dynamic pressure along with independent knowledge of density and velocity. The SUMS provides independent measurement of dynamic pressure; however, it does so indirectly and requires knowledge of the relationship between measured orifice conditions and the dynamic pressure, which can only be determined on the basis of molecular simulation or theory for a winged configuration. Monte Carlo direct simulation computer codes were developed for both the external flow-field solution at the orifice and the internal orifice flow. These codes were used to study issues associated with geometric modeling of the orbiter nose geometry and the modeling of intermolecular collisions, including rotational energy exchange and a preliminary analysis of vibrational excitation and dissociation effects. Data obtained from preliminary simulation runs are presented.
SEDIMENT-HOSTED PRECIOUS METAL DEPOSITS.
Bagby, W.C.; Pickthorn, W.J.; Goldfarb, R.; Hill, R.A.
1984-01-01
The Dee mine is a sediment-hosted, disseminated gold deposit in the Roberts Mountains allochthon of north central Nevada. Soil samples were collected from the C-horizon in undisturbed areas over the deposit in order to investigate the usefulness of soil geochemistry in identifying this type of deposit. Each sample was sieved to minus 80 mesh and analyzed quantitatively for Au, Ag, As, Sb, Hg, and Tl, with semi-quantitative data obtained for an additional 31 elements. Rank sum analysis is successful for the Au, Ag, As, Sb, Hg, Tl suite, even though bedrock geology is disregarded. This method involves data transformation into a total element signature by ranking the data in ascending order and summing the element ranks for each sample. The rank sums are then divided into percentile groups and plotted. The rank sum plot for the Dee soils unequivocally identifies three of four known ore zones.
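The rank-sum transformation described above can be sketched as follows. The soil analyses are synthetic random values, not the Dee mine data; the invariant structure (ranking each element, then summing ranks per sample) is what the method relies on.

```python
import numpy as np

# A minimal sketch of the rank-sum method described above, applied to
# made-up soil analyses for the pathfinder suite Au, Ag, As, Sb, Hg, Tl
# (20 samples x 6 elements; values are synthetic, not the Dee mine data).
rng = np.random.default_rng(42)
concentrations = rng.lognormal(mean=0.0, sigma=1.0, size=(20, 6))

# Rank each element's values in ascending order (1 = lowest), then sum
# the ranks across the six elements to get a total-signature score.
ranks = concentrations.argsort(axis=0).argsort(axis=0) + 1
rank_sums = ranks.sum(axis=1)

# Split the rank sums into percentile groups for plotting; the top group
# flags samples that are anomalous across the whole element suite at once.
groups = np.digitize(rank_sums, np.percentile(rank_sums, [50, 75, 90]))
print(rank_sums)
print(groups)
```

Because ranks rather than raw concentrations are summed, no single element dominates the signature, which is why the method can work even when bedrock geology is disregarded.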
Comparison of logging residue from lump sum and log scale timber sales.
James O Howard; Donald J. DeMars
1985-01-01
Data from 1973 and 1980 logging residue studies were used to compare the volume of residue from lump sum and log scale timber sales. Covariance analysis was used to adjust the mean volume for each data set for potential variation resulting from differences in stand conditions. Mean residue volumes from the two sale types were significantly different at the 5-percent...
Jeremy S. Fried; Theresa B. Jain; Sara Loreno; Robert F. Keefe; Conor K. Bell
2017-01-01
The BioSum modeling framework summarizes current and prospective future forest conditions under alternative management regimes along with their costs, revenues and product yields. BioSum translates Forest Inventory and Analysis (FIA) data for input to the Forest Vegetation Simulator (FVS), summarizes FVS outputs for input to the treatment operations cost model (OpCost...
The Convergence Problems of Eigenfunction Expansions of Elliptic Differential Operators
NASA Astrophysics Data System (ADS)
Ahmedov, Anvarjon
2018-03-01
In the present research we investigate problems concerning the almost everywhere convergence of multiple Fourier series summed over the elliptic levels in the classes of Liouville. Sufficient conditions for almost everywhere convergence, among the most difficult problems in harmonic analysis, are obtained. The methods of approximation by multiple Fourier series summed over elliptic levels are applied to obtain suitable estimations for the maximal operator of the spectral decompositions. Obtaining such estimations involves very complicated calculations which depend on the functional structure of the classes of functions. The main idea in proving the almost everywhere convergence of the eigenfunction expansions in the interpolation spaces is to estimate the maximal operator of the partial sums in the boundary classes and to apply the interpolation theorem for families of linear operators. In the present work the maximal operator of the elliptic partial sums is estimated in the interpolation classes of Liouville, and the almost everywhere convergence of multiple Fourier series by elliptic summation methods is established. Considering multiple Fourier series as eigenfunction expansions of differential operators helps to translate the functional properties (for example, smoothness) of the Liouville classes into the Fourier coefficients of the expanded functions. Sufficient conditions for convergence of the multiple Fourier series of functions from Liouville classes are obtained in terms of smoothness and dimension. Such results are highly effective in solving boundary problems with periodic boundary conditions occurring in the spectral theory of differential operators.
The investigation of multiple Fourier series by modern methods of harmonic analysis incorporates the wide use of techniques from functional analysis, mathematical physics, modern operator theory, and spectral decomposition. A new method for the best approximation of square-integrable functions by multiple Fourier series summed over the elliptic levels is established. Using the best approximation, the Lebesgue constant corresponding to the elliptic partial sums is estimated. The latter is applied to obtain an estimation for the maximal operator in the classes of Liouville.
Projected lifetime risks and hospital care expenditure for traumatic injury.
Chang, David C; Anderson, Jamie E; Kobayashi, Leslie; Coimbra, Raul; Bickler, Stephen W
2012-08-01
The lifetime risk and expected cost of trauma care would be valuable for health policy planners, but this information is currently unavailable. The cumulative incidence rates methodology, based on a cross-sectional population analysis, offers an alternative approach to prohibitively costly prospective cohort studies. Retrospective analysis of the California Office of Statewide Health Planning and Development (OSHPD) database was performed for 2008. Trauma admissions were identified by ICD-9 primary diagnosis codes 800-959, with certain exclusions. Cumulative incidence rates were calculated as the cumulative summation of incidence risks sequentially across age groups. A total of 2.2 million admissions were identified, with mean age of 63.8 y, 49.6% men, 82.8% Whites, 5.7% Blacks, 11.3% Hispanics, and 3.1% Asians. The cumulative incidence rate for patients older than age 85 y was 1119 per 10,000 people, with the majority of risk in the elderly, compared with 24,325 per 10,000 people for all-cause hospitalizations. The rates were 946 for men, 1079 for women, 999 for non-Hispanic Whites, 568 for Blacks, 577 for Hispanics, and 395 for Asians, per 10,000 population. The cumulative expected hospital charge was $6538, compared with $81,257 for all-cause hospitalizations. The cumulative lifetime risk of trauma/injury requiring hospitalization for a person living to age 85 y in California is 11.2%, accounting for 4.6% of expected lifetime hospitalizations, but accounting for 8.0% of expected lifetime hospital expenditures. Risk of trauma is significant in the elderly. The total expenditure for all trauma hospitalizations in California was $7.62 billion in 2008. Copyright © 2012 Elsevier Inc. All rights reserved.
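The cumulative incidence rates methodology above amounts to a running sum of age-specific risks. The per-band rates below are hypothetical, chosen only so that the total matches the reported 1119 per 10,000; the mechanics of the sequential summation are the point.

```python
# Sketch of the cumulative incidence rates methodology described above:
# age-specific incidence risks are summed sequentially across age groups.
# The per-band rates are hypothetical, chosen only so that the total
# matches the reported 1119 per 10,000 population.
age_bands = ["0-14", "15-44", "45-64", "65-84", "85+"]
risk_per_10k = [60, 120, 180, 350, 409]  # illustrative age-specific risks

cumulative = []
running_total = 0
for band, risk in zip(age_bands, risk_per_10k):
    running_total += risk
    cumulative.append((band, running_total))

# Lifetime risk to the oldest band is the final cumulative value:
lifetime_risk_per_10k = cumulative[-1][1]
print(lifetime_risk_per_10k / 10000)  # lifetime risk as a proportion
```

The appeal of the approach is that a single cross-sectional year of admissions data yields a lifetime-risk estimate, avoiding a decades-long prospective cohort.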
Sakr, Yasser; Rubatto Birri, Paolo Nahuel; Kotfis, Katarzyna; Nanchal, Rahul; Shah, Bhagyesh; Kluge, Stefan; Schroeder, Mary E; Marshall, John C; Vincent, Jean-Louis
2017-03-01
Excessive fluid therapy in patients with sepsis may be associated with risks that outweigh any benefit. We investigated the possible influence of early fluid balance on outcome in a large international database of ICU patients with sepsis. Observational cohort study. Seven hundred and thirty ICUs in 84 countries. All adult patients admitted between May 8 and May 18, 2012, except admissions for routine postoperative surveillance. For this analysis, we included only the 1,808 patients with an admission diagnosis of sepsis. Patients were stratified according to quartiles of cumulative fluid balance 24 hours and 3 days after ICU admission. ICU and hospital mortality rates were 27.6% and 37.3%, respectively. The cumulative fluid balance increased from 1,217 mL (-90 to 2,783 mL) in the first 24 hours after ICU admission to 1,794 mL (-951 to 5,108 mL) on day 3 and decreased thereafter. The cumulative fluid intake was similar in survivors and nonsurvivors, but fluid balance was less positive in survivors because of higher fluid output in these patients. Fluid balances became negative after the third ICU day in survivors but remained positive in nonsurvivors. After adjustment for possible confounders in multivariable analysis, the 24-hour cumulative fluid balance was not associated with an increased hazard of 28-day in-hospital death. However, there was a stepwise increase in the hazard of death with higher quartiles of 3-day cumulative fluid balance in the whole population and after stratification according to the presence of septic shock. In this large cohort of patients with sepsis, higher cumulative fluid balance at day 3 but not in the first 24 hours after ICU admission was independently associated with an increase in the hazard of death.
Olsen, Geary W; Andres, Kara L; Johnson, Rebecca A; Buehrer, Betsy D; Holen, Brian M; Morey, Sandy Z; Logan, Perry W; Hewett, Paul
2012-01-01
The mortality of 2650 employees (93.4% males) in the mine and mill production of roofing granules at four plants was examined between 1945 and 2004. Hypotheses focused on diseases associated with exposure to silica: nonmalignant respiratory disease, lung cancer, and nonmalignant renal disease. Study eligibility required ≥ 1 year of employment by 2000. Work history and vital status were followed through 2004 with < 1% lost to follow-up. Industrial hygiene sampling data (1871 sampling measurements over a 32-year period) and professional judgment were used to construct 15 respirable crystalline silica exposure categories. A category was assigned to all plant-, department-, and time-dependent standard job titles. Cumulative respirable crystalline silica exposure (mg/m(3)-years) was calculated as the sum of the product of time spent and the average exposure for each plant-, department-, job-, and calendar-year combination. The cohort geometric mean was 0.17 mg/m(3)-years (geometric standard deviation 4.01) and differed by plant. Expected deaths were calculated using U.S. (entire cohort) and regional (each plant) mortality rates. Poisson regression was used for internal comparisons. For the entire cohort, 772 deaths (97.4% males) were identified (standardized mortality ratio 0.95, 95% CI 0.88-1.02). There were 50 deaths from nonmalignant respiratory diseases (1.14, 95% CI 0.85-1.51). Lagging exposure 15 years among the male cohort, the relative risks for nonmalignant respiratory disease were 1.00 (reference), 0.80, 1.94, and 2.03 (p value trend = 0.03) when cumulative exposure was categorized < 0.1, 0.1- < 0.5, 0.5- < 1.0, and ≥ 1.0 mg/m(3)-years, respectively. There was a total of 77 lung cancer deaths (1.11, 95% CI 0.88-1.39). Lagging exposure 15 years, the relative risks for males were 1.00 (reference), 1.83, 1.83, and 1.05 (p value trend = 0.9). There were 16 deaths from nonmalignant renal disease (1.76, 95% CI 1.01-2.86). 
This exposure-response trend was suggestive but imprecise. The study results are consistent with other cohorts with similar levels of exposure to respirable crystalline silica.
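The cumulative exposure metric in the study above is a sum of time-weighted concentrations. The work history below is hypothetical, but the arithmetic mirrors the stated definition: the sum over each plant/department/job/calendar-year combination of time spent multiplied by the average exposure assigned to that combination.

```python
# Sketch of the cumulative respirable crystalline silica metric used
# above: cumulative exposure (mg/m^3-years) is the sum of years spent in
# each assignment multiplied by the average exposure assigned to it.
# The work history below is hypothetical.
work_history = [
    # (years in assignment, average exposure in mg/m^3)
    (5.0, 0.02),   # e.g., a low-exposure office assignment
    (10.0, 0.08),  # e.g., a crushing-department assignment
    (3.0, 0.30),   # e.g., a bagging assignment
]

cumulative_exposure = sum(years * level for years, level in work_history)
print(f"{cumulative_exposure:.2f} mg/m^3-years")
```

For the lagged analyses reported above, assignments in the most recent 15 years would simply be excluded from the sum before categorizing the total.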
ERIC Educational Resources Information Center
Brody, Gene H.; Yu, Tianyi; Chen, Yi-Fu; Kogan, Steven M.; Evans, Gary W.; Beach, Steven R. H.; Windle, Michael; Simons, Ronald L.; Gerrard, Meg; Gibbons, Frederick X.; Philibert, Robert A.
2013-01-01
The health disparities literature has identified a common pattern among middle-aged African Americans that includes high rates of chronic disease along with low rates of psychiatric disorders despite exposure to high levels of cumulative socioeconomic status (SES) risk. The current study was designed to test hypotheses about the developmental…
Krimmel, R.M.
1999-01-01
Net mass balance has been measured since 1958 at South Cascade Glacier using the 'direct method,' i.e., area averages of snow gain and firn and ice loss at stakes. Analysis of cartographic vertical photography has allowed measurement of mass balance using the 'geodetic method' in 1970, 1975, 1977, 1979-80, and 1985-97. Water equivalent change as measured by these nearly independent methods should give similar results. During 1970-97, the direct method shows a cumulative balance of about -15 m, and the geodetic method shows a cumulative balance of about -22 m. The deviation between the two methods is fairly consistent, suggesting no gross errors in either, but rather a cumulative systematic error. It is suspected that the cumulative error is in the direct method because the geodetic method is based on a non-changing reference, the bedrock control, whereas the direct method is measured with reference to only the previous year's summer surface. Possible sources of mass loss that are missing from the direct method are basal melt, internal melt, and ablation on crevasse walls. Possible systematic measurement errors include under-estimation of the density of lost material, sinking stakes, or poorly represented areas.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Xin -Hu; Ye, Yun -Xiu; Chen, Jian -Ping
2015-07-17
The radiation and ionization energy losses are presented for a single-arm Monte Carlo simulation for the GDH sum rule experiment in Hall A at Jefferson Lab. Radiation and ionization energy loss are discussed for the ¹²C elastic scattering simulation. The relative momentum ratio Δp/p and the ¹²C elastic cross section are compared with and without radiation energy loss, and a reasonable shape is obtained by the simulation. The total energy loss distribution is obtained, showing a Landau shape for ¹²C elastic scattering. This simulation work will give good support for the radiation correction analysis of the GDH sum rule experiment.
Ansell, Emily B; Rando, Kenneth; Tuit, Keri; Guarnaccia, Joseph; Sinha, Rajita
2012-07-01
Cumulative adversity and stress are associated with risk of psychiatric disorders. While basic science studies show repeated and chronic stress effects on prefrontal and limbic neurons, human studies examining cumulative stress and effects on brain morphology are rare. Thus, we assessed whether cumulative adversity is associated with differences in gray matter volume, particularly in regions regulating emotion, self-control, and top-down processing in a community sample. One hundred three healthy community participants, aged 18 to 48 and 68% male, completed interview assessment of cumulative adversity and a structural magnetic resonance imaging protocol. Whole-brain voxel-based-morphometry analysis was performed adjusting for age, gender, and total intracranial volume. Cumulative adversity was associated with smaller volume in medial prefrontal cortex (PFC), insular cortex, and subgenual anterior cingulate regions (familywise error corrected, p < .001). Recent stressful life events were associated with smaller volume in two clusters: the medial PFC and the right insula. Life trauma was associated with smaller volume in the medial PFC, anterior cingulate, and subgenual regions. The interaction of greater subjective chronic stress and greater cumulative life events was associated with smaller volume in the orbitofrontal cortex, insula, and anterior and subgenual cingulate regions. Current results demonstrate that increasing cumulative exposure to adverse life events is associated with smaller gray matter volume in key prefrontal and limbic regions involved in stress, emotion and reward regulation, and impulse control. These differences found in community participants may serve to mediate vulnerability to depression, addiction, and other stress-related psychopathology. Copyright © 2012 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
Robust Sensitivity Analysis for Multi-Attribute Deterministic Hierarchical Value Models
2002-03-01
such as the weighted sum method, weighted product method, and the Analytic Hierarchy Process (AHP). This research focuses on only weighted sum...different groups. They can be termed as deterministic, stochastic, or fuzzy multi-objective decision methods if they are classified according to the...weighted product model (WPM), and analytic hierarchy process (AHP). His method attempts to identify the most important criteria weight and the most
The frequency-difference and frequency-sum acoustic-field autoproducts.
Worthmann, Brian M; Dowling, David R
2017-06-01
The frequency-difference and frequency-sum autoproducts are quadratic products of solutions of the Helmholtz equation at two different frequencies (ω+ and ω−), and may be constructed from the Fourier transform of any time-domain acoustic field. Interestingly, the autoproducts may carry wave-field information at the difference (ω+ − ω−) and sum (ω+ + ω−) frequencies even though these frequencies may not be present in the original acoustic field. This paper provides analytical and simulation results that justify and illustrate this possibility, and indicate its limitations. The analysis is based on the inhomogeneous Helmholtz equation and its solutions while the simulations are for a point source in a homogeneous half-space bounded by a perfectly reflecting surface. The analysis suggests that the autoproducts have a spatial phase structure similar to that of a true acoustic field at the difference and sum frequencies if the in-band acoustic field is a plane or spherical wave. For multi-ray-path environments, this phase structure similarity persists in portions of the autoproduct fields that are not suppressed by bandwidth averaging. Discrepancies between the bandwidth-averaged autoproducts and true out-of-band acoustic fields (with potentially modified boundary conditions) scale inversely with the product of the bandwidth and ray-path arrival time differences.
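The spherical-wave case mentioned above can be illustrated directly. This is a sketch, not the paper's derivation: for a spherical wave p(r, ω) = exp(iωr/c)/r, the quadratic product p(ω+) conj(p(ω−)) carries the spatial phase (ω+ − ω−)r/c of a wave at the difference frequency, even though that frequency need not be present in the original field. The sound speed, ranges, and frequencies are arbitrary choices.

```python
import numpy as np

# Spherical wave p(r, w) = exp(i w r / c) / r evaluated at two in-band
# frequencies; their quadratic product is the frequency-difference
# autoproduct. All numerical values here are illustrative assumptions.
c = 1500.0                        # sound speed (m/s)
r = np.linspace(10.0, 100.0, 5)   # receiver ranges (m)
w_hi = 2 * np.pi * 1200.0         # in-band angular frequencies (rad/s)
w_lo = 2 * np.pi * 1000.0

def p(w):
    return np.exp(1j * w * r / c) / r

autoproduct = p(w_hi) * np.conj(p(w_lo))             # frequency-difference autoproduct
diff_field = np.exp(1j * (w_hi - w_lo) * r / c) / r  # true 200 Hz spherical wave

# The spatial phases agree (amplitudes differ: 1/r^2 versus 1/r):
print(np.allclose(np.angle(autoproduct), np.angle(diff_field)))
```

The amplitude mismatch (1/r² versus 1/r) is one reason the autoproduct mimics only the phase structure of a true out-of-band field, consistent with the limitations discussed above.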
Giovannoni, Gavin; Kappos, Ludwig; Gold, Ralf; Khatri, Bhupendra O; Selmaj, Krzysztof; Umans, Kimberly; Greenberg, Steven J; Sweetser, Marianne; Elkins, Jacob; McCroskery, Peter
2016-09-01
Daclizumab has been evaluated in multicentre, randomised, double-blind studies for the treatment of patients with relapsing-remitting multiple sclerosis (RRMS). Safety and tolerability are key considerations in MS treatment selection, as they influence adherence to medication. The objective was to evaluate the safety of daclizumab in patients with RRMS from an integrated analysis of six clinical studies. Patients treated with at least one dose of subcutaneous daclizumab 150 mg or 300 mg monthly in three completed and three ongoing clinical studies were included in this integrated analysis. Cumulative incidence of treatment-emergent adverse events (AEs) was the primary endpoint. This analysis included 2236 patients with 5214 patient-years of exposure to daclizumab. The cumulative incidence of any AE was 84% and of any serious AE excluding MS relapse was 16%. The incidences of AEs when evaluated by 6-month intervals remained stable over the 6.5 years of maximum follow-up. Most AEs were mild or moderate in severity. An important safety concern associated with daclizumab therapy involved hepatic AEs (16%) and serum transaminase elevations at least three times the upper limit of normal (10%), most of which were asymptomatic, self-limiting, and non-recurring. Cumulative incidences of cutaneous, infectious, and gastrointestinal AEs were 33%, 59%, and 25%, respectively; most events either resolved spontaneously or were treated successfully with standard medical interventions and did not result in discontinuation of treatment. This integrated analysis demonstrates that treatment of RRMS with daclizumab for periods of up to 6.5 years is associated with an acceptable safety profile with no evidence of cumulative toxicity over time. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
A probabilistic analysis of cumulative carbon emissions and long-term planetary warming
Fyke, Jeremy Garmeson; Matthews, H. Damon
2015-11-16
Efforts to mitigate and adapt to long-term climate change could benefit greatly from probabilistic estimates of cumulative carbon emissions due to fossil fuel burning and resulting CO2-induced planetary warming. Here we demonstrate the use of a reduced-form model to project these variables. We performed simulations using a large-ensemble framework with parametric uncertainty sampled to produce distributions of future cumulative emissions and consequent planetary warming. A hind-cast ensemble of simulations captured 1980-2012 historical CO2 emissions trends and an ensemble of future projection simulations generated a distribution of emission scenarios that qualitatively resembled the suite of Representative and Extended Concentration Pathways. The resulting cumulative carbon emission and temperature change distributions are characterized by 5-95th percentile ranges of 0.96-4.9 teratonnes C (Tt C) and 1.4 °C-8.5 °C, respectively, with 50th percentiles at 3.1 Tt C and 4.7 °C. Within the wide range of policy-related parameter combinations that produced these distributions, we found that low-emission simulations were characterized by both high carbon prices and low costs of non-fossil fuel energy sources, suggesting the importance of these two policy levers in particular for avoiding dangerous levels of climate warming. With this analysis we demonstrate a probabilistic approach to the challenge of identifying strategies for limiting cumulative carbon emissions and assessing likelihoods of surpassing dangerous temperature thresholds.
Rasmuson, James O; Roggli, Victor L; Boelter, Fred W; Rasmuson, Eric J; Redinger, Charles F
2014-01-01
A detailed evaluation of the correlation and linearity of industrial hygiene retrospective exposure assessment (REA) for cumulative asbestos exposure with asbestos lung burden analysis (LBA) has not been previously performed, but both methods are utilized for case-control and cohort studies and other applications such as setting occupational exposure limits. (a) To correlate REA with asbestos LBA for a large number of cases from varied industries and exposure scenarios; (b) to evaluate the linearity, precision, and applicability of both industrial hygiene exposure reconstruction and LBA; and (c) to demonstrate validation methods for REA. A panel of four experienced industrial hygiene raters independently estimated the cumulative asbestos exposure for 363 cases with limited exposure details in which asbestos LBA had been independently determined. LBA for asbestos bodies was performed by a pathologist by both light microscopy and scanning electron microscopy (SEM) and free asbestos fibers by SEM. Precision, reliability, correlation and linearity were evaluated via intraclass correlation, regression analysis and analysis of covariance. Plaintiff's answers to interrogatories, work history sheets, work summaries or plaintiff's discovery depositions that were obtained in court cases involving asbestos were utilized by the pathologist to provide a summarized brief asbestos exposure and work history for each of the 363 cases. Linear relationships between REA and LBA were found when adjustment was made for asbestos fiber-type exposure differences. Significant correlation between REA and LBA was found with amphibole asbestos lung burden and mixed fiber-types, but not with chrysotile. The intraclass correlation coefficients (ICC) for the precision of the industrial hygiene rater cumulative asbestos exposure estimates and the precision of repeated laboratory analysis were found to be in the excellent range. 
The ICC estimates were performed independent of specific asbestos fiber-type. Both REA and pathology assessment are reliable and complementary predictive methods to characterize asbestos exposures. Correlation analysis between the two methods effectively validates both REA methodology and LBA procedures within the determined precision, particularly for cumulative amphibole asbestos exposures since chrysotile fibers, for the most part, are not retained in the lung for an extended period of time.
Bonofiglio, Federico; Beyersmann, Jan; Schumacher, Martin; Koller, Michael; Schwarzer, Guido
2016-09-01
Meta-analysis of a survival endpoint is typically based on the pooling of hazard ratios (HRs). If competing risks occur, the HRs may lose translation into changes of survival probability. The cumulative incidence functions (CIFs), the expected proportion of cause-specific events over time, re-connect the cause-specific hazards (CSHs) to the probability of each event type. We use CIF ratios to measure treatment effect on each event type. To retrieve information on aggregated, typically poorly reported, competing risks data, we assume constant CSHs. Next, we develop methods to pool CIF ratios across studies. The procedure computes pooled HRs alongside and checks the influence of follow-up time on the analysis. We apply the method to a medical example, showing that follow-up duration is relevant both for pooled cause-specific HRs and CIF ratios. Moreover, if all-cause hazard and follow-up time are large enough, CIF ratios may reveal additional information about the effect of treatment on the cumulative probability of each event type. Finally, to improve the usefulness of such analysis, better reporting of competing risks data is needed. Copyright © 2015 John Wiley & Sons, Ltd.
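Under the constant cause-specific hazards assumption used above, the CIF has a closed form, which makes the follow-up dependence of the CIF ratio easy to see. The hazard values below are hypothetical; with CSHs h1 and h2 and all-cause hazard h = h1 + h2, CIF_k(t) = (h_k/h)(1 − exp(−h t)).

```python
import math

# Cumulative incidence function for event type k under constant
# cause-specific hazards: CIF_k(t) = (h_k / h) * (1 - exp(-h * t)),
# where h is the all-cause hazard. Hazard values are hypothetical.
def cif(h_k, h_all, t):
    return (h_k / h_all) * (1.0 - math.exp(-h_all * t))

h1_trt, h2_trt = 0.05, 0.10   # treatment arm CSHs (per year)
h1_ctl, h2_ctl = 0.10, 0.10   # control arm CSHs (per year)

t = 3.0  # follow-up (years)
cif_ratio = cif(h1_trt, h1_trt + h2_trt, t) / cif(h1_ctl, h1_ctl + h2_ctl, t)
csh_ratio = h1_trt / h1_ctl   # cause-specific hazard ratio for event 1

# Unlike the HR, the CIF ratio depends on follow-up duration:
print(csh_ratio, cif_ratio)
```

Because the competing hazard h2 enters the all-cause hazard, the CIF ratio drifts away from the cause-specific HR as t grows, illustrating why the paper checks the influence of follow-up time on the pooled analysis.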
Implications of Network Topology on Stability
Kinkhabwala, Ali
2015-01-01
In analogy to chemical reaction networks, I demonstrate the utility of expressing the governing equations of an arbitrary dynamical system (interaction network) as sums of real functions (generalized reactions) multiplied by real scalars (generalized stoichiometries) for analysis of its stability. The reaction stoichiometries and first derivatives define the network’s “influence topology”, a signed directed bipartite graph. Parameter reduction of the influence topology permits simplified expression of the principal minors (sums of products of non-overlapping bipartite cycles) and Hurwitz determinants (sums of products of the principal minors or the bipartite cycles directly) for assessing the network’s steady state stability. Visualization of the Hurwitz determinants over the reduced parameters defines the network’s stability phase space, delimiting the range of its dynamics (specifically, the possible numbers of unstable roots at each steady state solution). Any further explicit algebraic specification of the network will project onto this stability phase space. Stability analysis via this hierarchical approach is demonstrated on classical networks from multiple fields. PMID:25826219
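The simplest instance of the stability machinery above can be sketched for a two-variable network. This is a minimal illustration, not the paper's bipartite-cycle formalism: for a 2x2 Jacobian J at a steady state, the characteristic polynomial is s² + a1·s + a2, where a1 is the negated sum of the 1x1 principal minors (the trace) and a2 is the 2x2 principal minor (the determinant), and the Routh-Hurwitz condition for stability is a1 > 0 and a2 > 0. The Jacobian entries are illustrative.

```python
import numpy as np

# Illustrative Jacobian of a 2-variable interaction network at a steady
# state (entries are assumptions, not from any particular model).
J = np.array([[-1.0,  2.0],
              [-0.5, -0.3]])

a1 = -np.trace(J)        # negated sum of 1x1 principal minors
a2 = np.linalg.det(J)    # the single 2x2 principal minor
# Routh-Hurwitz for a quadratic: both coefficients positive => stable.
stable = bool(a1 > 0 and a2 > 0)
print(a1, a2, stable)
```

In higher dimensions the principal minors and Hurwitz determinants multiply in number, which is where the paper's parameter reduction over the influence topology pays off.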
NASA Technical Reports Server (NTRS)
Talbot, R.; Dibb, J.; Scheuer, E.; Seid, G.; Russo, R.; Sandholm, S.; Tan, D.; Blake, D.; Blake, N.; Singh, H.
2003-01-01
We present here results for reactive nitrogen species measured aboard the NASA DC-8 aircraft during the Transport and Chemical Evolution over the Pacific (TRACE-P) mission. The large-scale distributions of total reactive nitrogen (NOy,sum = NO + NO2 + HNO3 + PAN + C1-C5 alkyl nitrates), O3, and CO were better defined in the boundary layer, with significant degradation of the relationships as altitude increased. Typically, NOy,sum was enhanced over background levels of approx. 260 pptv by 20- to 30-fold. The ratio C2H2/CO had values of 1-4 at altitudes up to 10 km and as far eastward as 150°E, implying significant vertical mixing of air parcels followed by rapid advection across the Pacific. Analysis of air parcels originating from five principal Asian source regions showed that HNO3 and PAN dominated NOy,sum. Correlations of NOy,sum with C2Cl4 (urban tracer) were not well defined in any of the source regions, and they were only slightly better with CH3Cl (biomass tracer). Air parcels over the western Pacific contained a complex mixture of emission sources that are not easily resolvable, as shown by analysis of the Shanghai mega-city plume. It contained an intricate mixture of pollution emissions and exhibited the highest mixing ratios of NOy,sum species observed during TRACE-P. Comparison of tropospheric chemistry between the earlier PEM-West B mission and the recent TRACE-P data showed that significant increases in the mixing ratios of NOy,sum species have occurred in the boundary layer, but the middle and upper troposphere seems to have been affected minimally by increasing emissions on the Asian continent over the last 7 years.
Calculating the nutrient composition of recipes with computers.
Powers, P M; Hoover, L W
1989-02-01
The objective of this research project was to compare the nutrient values computed by four commonly used computerized recipe calculation methods. The four methods compared were the yield factor, retention factor, summing, and simplified retention factor methods. Two versions of the summing method were modeled. Four pork entrée recipes were selected for analysis: roast pork, pork and noodle casserole, pan-broiled pork chops, and pork chops with vegetables. Assumptions were made about changes expected to occur in the ingredients during preparation and cooking. Models were designed to simulate the algorithms of the calculation methods using a microcomputer spreadsheet software package. Identical results were generated in the yield factor, retention factor, and summing-cooked models for roast pork. The retention factor and summing-cooked models also produced identical results for the recipe for pan-broiled pork chops. The summing-raw model gave the highest value for water in all four recipes and the lowest values for most of the other nutrients. A superior method or methods was not identified. However, on the basis of the capabilities provided with the yield factor and retention factor methods, more serious consideration of these two methods is recommended.
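Two of the calculation methods compared above can be contrasted with a toy example. The ingredient values and retention factors below are made up, not the study's recipe data: the summing-raw method totals nutrients of raw ingredients, while the retention-factor method multiplies each nutrient total by a cooking retention factor to account for losses during preparation.

```python
# Hypothetical raw-basis ingredient data for a pork-and-noodle recipe.
ingredients = [
    # (name, water in g, thiamin in mg) per recipe amount, raw basis
    ("pork loin", 210.0, 2.4),
    ("noodles",    20.0, 0.5),
]
retention_factors = {"water": 0.55, "thiamin": 0.70}  # hypothetical values

# Summing (raw): simply total the raw ingredients' nutrients.
summing_raw = {
    "water":   sum(w for _, w, _ in ingredients),
    "thiamin": sum(t for _, _, t in ingredients),
}
# Retention factor method: apply per-nutrient cooking retention.
retention_method = {k: v * retention_factors[k]
                    for k, v in summing_raw.items()}

print(summing_raw, retention_method)
```

The toy numbers reproduce the study's qualitative finding: summing on a raw basis reports the highest water value, since cooking losses are never subtracted.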
A Groupwise Association Test for Rare Mutations Using a Weighted Sum Statistic
Madsen, Bo Eskerod; Browning, Sharon R.
2009-01-01
Resequencing is an emerging tool for identification of rare disease-associated mutations. Rare mutations are difficult to tag with SNP genotyping, as genotyping studies are designed to detect common variants. However, studies have shown that genetic heterogeneity is a probable scenario for common diseases, in which multiple rare mutations together explain a large proportion of the genetic basis for the disease. Thus, we propose a weighted-sum method to jointly analyse a group of mutations in order to test for groupwise association with disease status. For example, such a group of mutations may result from resequencing a gene. We compare the proposed weighted-sum method to alternative methods and show that it is powerful for identifying disease-associated genes, on both simulated and ENCODE data. Using the weighted-sum method, a resequencing study can identify a disease-associated gene with an overall population attributable risk (PAR) of 2%, even when each individual mutation has much lower PAR, using 1,000 to 7,000 affected and unaffected individuals, depending on the underlying genetic model. This study thus demonstrates that resequencing studies can identify important genetic associations, provided that specialised analysis methods, such as the weighted-sum method, are used. PMID:19214210
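A weighted-sum test in the spirit described above can be sketched as follows. The genotypes, sample sizes, and pseudo-count are all simulated assumptions, not the paper's data: each individual's genetic score sums minor-allele counts weighted inversely by the variant's estimated standard deviation among unaffected individuals, and the test statistic is the rank sum of scores in the affected group.

```python
import numpy as np

# Simulated 0/1/2 minor-allele counts for 200 individuals x 10 rare
# variants; the first 100 rows are affected (cases).
rng = np.random.default_rng(7)
n_aff, n_unaff, n_var = 100, 100, 10
geno = rng.binomial(2, 0.01, size=(n_aff + n_unaff, n_var))

# Mutation frequency estimated from the unaffected, with a pseudo-count
# so that never-observed variants still get a finite weight:
m_u = geno[n_aff:].sum(axis=0)
q = (m_u + 1) / (2 * n_unaff + 2)
w = np.sqrt((n_aff + n_unaff) * q * (1 - q))

scores = (geno / w).sum(axis=1)          # weighted genetic score per person
ranks = scores.argsort().argsort() + 1   # 1 = lowest score
rank_sum_affected = ranks[:n_aff].sum()  # test statistic; significance is
print(rank_sum_affected)                 # typically assessed by permutation
```

Down-weighting by the standard deviation lets very rare variants contribute more per carrier, which is what gives the groupwise test power when many rare mutations each carry a small attributable risk.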
Hydrologic response to modeled snowmelt input in alpine catchments in the Southwestern United States
NASA Astrophysics Data System (ADS)
Driscoll, J. M.; Molotch, N. P.; Jepsen, S. M.; Meixner, T.; Williams, M. W.; Sickman, J. O.
2012-12-01
Snowmelt from high elevation catchments is the primary source of water resources in the Southwestern United States. Timing and duration of snowmelt and the resulting catchment response can show the physical and chemical importance of storage at the catchment scale. Storage of waters in subsurface materials provides a physical and chemical buffer to hydrologic input variability. We expect the hydrochemistry of catchments with less storage capacity to more closely reflect input waters than that of catchments with more storage and therefore more geochemical evolution of waters. Two headwater catchments were compared for this study: Emerald Lake Watershed (ELW) in the southern Sierra Nevada and Green Lake 4 (GL4) in the Colorado Front Range. These sites have geochemically similar granitic terrane, and negligible evaporation and transpiration due to their high-elevation setting. Eleven years of data (1996-2006) from spatially-distributed snowmelt models were spatially and temporally aggregated to generate daily values of snowmelt volume for each catchment area. Daily storage flux was calculated as the difference between snowmelt input and catchment outflow at a daily timestep, normalized to the catchment area. Daily snowmelt values in GL4 are more consistent (the annual standard deviation ranged from 0.19 to 0.76 cm) than those in ELW (0.60 to 1.04 cm). Outflow follows the same trend, with an even narrower range of standard deviations in GL4 (0.27 to 0.54 cm) than in ELW (0.38 to 0.98 cm). The dampening of input variability could be due to storage in the catchment; a larger dampening effect would imply a larger storage capacity. Calculations of storage flux (input snowmelt minus output catchment discharge) show that the annual sum of water into storage in ELW ranges from -0.9200 to 1.1124 meters; in GL4 the range is narrower, from -0.655 to 0.0992 meters.
Cumulative storage for each year can be negative (more water leaving the system than entering; storage loss) or positive (more water entering the system than leaving; storage gain). The cumulative storage for all years in GL4 shows a similar positive trend from day of year 60 through 150, followed by a decrease to the end of the snowmelt season. Only two years (1997 and 2005) in GL4 were calculated to cumulatively gain storage water; the other nine years lost stored water to outflow. The cumulative storage annual data in ELW do not show as strong a trend across years. ELW also shows a different distribution of cumulative storage values, with four years showing a cumulative loss and seven years showing a gain in stored water. This could indicate a depletion of stored water, an underestimate of snowmelt, or a connection to deeper flowpaths. Mass-balance inverse geochemical models will be used to determine the hydrochemical connectivity, or lack thereof, of snowmelt to outflow relative to the physical calculations. Initial hydrochemical results show generally higher concentrations of solutes in GL4 outflow, which may indicate a greater contribution from stored waters.
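The storage bookkeeping described above reduces to a running sum (a minimal sketch with made-up daily values; real inputs come from the distributed snowmelt model and gauged outflow):

```python
# Daily storage flux is snowmelt input minus catchment outflow (both in cm,
# normalized to catchment area); cumulative storage is its running sum.
from itertools import accumulate

def cumulative_storage(snowmelt_cm, outflow_cm):
    daily_flux = [m - q for m, q in zip(snowmelt_cm, outflow_cm)]
    return list(accumulate(daily_flux))

melt = [0.0, 1.2, 2.0, 1.5, 0.4]   # illustrative daily snowmelt (cm)
out  = [0.1, 0.5, 1.0, 1.8, 1.2]   # illustrative daily outflow (cm)
storage = cumulative_storage(melt, out)
# A positive final value indicates a net storage gain for the period,
# a negative final value a net storage loss.
```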
Evaluation of marginal failures of dental composite restorations by acoustic emission analysis.
Gu, Ja-Uk; Choi, Nak-Sam
2013-01-01
In this study, a nondestructive method based on acoustic emission (AE) analysis was developed to evaluate the marginal failure states of dental composite restorations. Three types of ring-shaped substrates, modeled after a Class I cavity, were prepared from polymethyl methacrylate, stainless steel, and human molar teeth. A bonding agent and a composite resin were applied to the ring-shaped substrates and cured by light exposure. At each time-interval measurement, the tooth substrate presented a higher number of AE hits than the polymethyl methacrylate and steel substrates. Marginal disintegration estimates derived from cumulative AE hits and cumulative AE energy parameters showed that a significant portion of marginal gap formation had already occurred within 1 min of the initial light-curing stage. Estimation based on cumulative AE energy gave a higher level of marginal failure than that based on AE hits. It was concluded that the AE analysis method developed in this study was a viable approach for efficiently predicting the clinical survival of dental composite restorations within a short test period.
SU-E-I-98: PET/CT's Most-Cited 50 Articles since 2000: A Bibliometric Analysis of Classics.
Sayed, G
2012-06-01
Despite its relatively recent introduction to clinical practice, PET/CT has gained wide acceptance in both diagnostic and therapeutic applications. Scientific publication in PET/CT has also developed significantly since its introduction. Bibliometric analyses allow an understanding of how this publication trend has developed at an aggregated level. Citation analysis is one of the most widely used bibliometric tools of scientometrics. Analysis of classics, defined as articles with 100 or more citations, is common in the biomedical sciences as it reflects an article's influence in its professional and scientific community. Our objective was to identify the 50 most frequently cited classic articles in PET/CT in the past 10 years. The 50 most-cited PET/CT articles were identified by searching ISI's Web of Knowledge and PubMed databases for all related publications from 2000 through 2010. Articles were evaluated for several characteristics such as author(s), institution, country of origin, publication year, type, and number of citations. An unadjusted categorical analysis was performed to compare all articles published in the search period. The search yielded a cumulative total of 22,554 entries for the publication period, of which 15,943 were original research articles. The 50 most-cited articles were identified from the latter sum and selected out of 73 classics. The number of citations for the top 50 classics ranged from 114 to 700. PET/CT classics appeared in three general and 12 core journals. The majority of the classics were in oncologic applications of PET/CT (62%). Articles on diagnostic topics accounted for 6%; the rest focused on physics and instrumentation (24%) and other basic sciences (16%). Despite its relatively short history, PET/CT accumulated 73 classic articles in a decade. Such information is of importance to researchers and those who wish to study the scientific development of the field.
© 2012 American Association of Physicists in Medicine.
Kennedy, Angie C; Adams, Adrienne E
2016-04-01
Using a cluster analysis approach with a sample of 205 young mothers recruited from community sites in an urban Midwestern setting, we examined the effects of cumulative violence exposure (community violence exposure, witnessing intimate partner violence, physical abuse by a caregiver, and sexual victimization, all with onset prior to age 13) on school participation, as mediated by attention and behavior problems in school. We identified five clusters of cumulative exposure, and found that the HiAll cluster (high levels of exposure to all four types) consistently fared the worst, with significantly higher attention and behavior problems, and lower school participation, in comparison with the LoAll cluster (low levels of exposure to all types). Behavior problems were a significant mediator of the effects of cumulative violence exposure on school participation, but attention problems were not. © The Author(s) 2014.
Review of: Methods to complete watershed analysis on Pacific Lumber lands in Northern California
L. M. Reid
1999-01-01
The three questions of primary concern for this review are: 1) are the WDNR modules adequately and validly modified to suit local conditions, as required by the HCP/SYP? 2) is there an adequate "distinct cumulative effects assessment" method, as required by the HCP/SYP? 3) will the cumulative effects assessment method and the modified WDNR modules be...
Alvard, Michael; Carlson, David; McGaffey, Ethan
2015-01-01
Foragers must often travel from a central place to exploit aggregations of prey. These patches can be identified behaviorally when a forager shifts from travel to area restricted search, identified by a decrease in speed and an increase in sinuosity of movement. Faster, more directed movement is associated with travel. Differentiating foraging behavior at patches from travel to patches is important for a variety of research questions and has now been made easier by the advent of small, GPS devices that can track forager movement with high resolution. In the summer and fall of 2012, movement data were collected from GPS devices placed on foraging trips originating in the artisanal fishing village of Desa Ikan (pseudonym), on the east coast of the Caribbean island nation of the Commonwealth Dominica. Moored FADs are human-made structures anchored to the ocean floor with fish attraction material on or near the surface designed to effectively create a resource patch. The ultimate goal of the research is to understand how property rights are emerging after the introduction of fish aggregating device (FAD) technology at the site in 1999. This paper reports on research to identify area-restricted search foraging behavior at FAD patches. For 22 foraging trips simultaneous behavioral observations were made to ground-truth the GPS movement data. Using a cumulative sum method, area restricted search was identified as negative deviations from the mean travel speed and the method was able to correctly identify FAD patches in every case.
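The detection idea can be sketched as a one-sided cumulative sum on speed (a hedged simplification; the study's exact statistic, thresholds, and any use of sinuosity are not reproduced here):

```python
# Accumulate negative deviations of speed from the trip mean: sustained slow
# movement (area-restricted search at a patch) drives the statistic strongly
# negative, while fast directed travel keeps it near zero.

def cusum_low_speed(speeds, threshold):
    mean_v = sum(speeds) / len(speeds)
    s, flags = 0.0, []
    for v in speeds:
        s = min(0.0, s + (v - mean_v))   # keep only downward excursions
        flags.append(s < -threshold)     # True -> likely searching, not travelling
    return flags
```

Applied to a GPS track of per-fix speeds, runs of True flags mark candidate patch visits (e.g. time spent at a FAD), while False spans correspond to travel.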
The total position-spread tensor: Spin partition
DOE Office of Scientific and Technical Information (OSTI.GOV)
El Khatib, Muammar, E-mail: elkhatib@irsamc.ups-tlse.fr; Evangelisti, Stefano, E-mail: stefano@irsamc.ups-tlse.fr; Leininger, Thierry, E-mail: Thierry.Leininger@irsamc.ups-tlse.fr
2015-03-07
The Total Position Spread (TPS) tensor, defined as the second moment cumulant of the position operator, is a key quantity for describing the mobility of electrons in a molecule or an extended system. In the present investigation, the partition of the TPS tensor according to spin variables is derived and discussed. It is shown that, while the spin-summed TPS gives information on charge mobility, the spin-partitioned TPS tensor becomes a powerful tool that provides information about spin fluctuations. The case of the hydrogen molecule is treated both analytically, by using a 1s Slater-type orbital, and numerically, at Full Configuration Interaction (FCI) level with a V6Z basis set. It is found that, for very large inter-nuclear distances, the partitioned tensor grows quadratically with the distance in some of the low-lying electronic states. This fact is related to the presence of entanglement in the wave function. Non-dimerized open chains described by a model Hubbard Hamiltonian and linear hydrogen chains H{sub n} (n ≥ 2), composed of equally spaced atoms, are also studied at FCI level. The hydrogen systems show marked maxima of the spin-summed TPS (corresponding to a high charge mobility) when the inter-nuclear distance is about 2 bohr. This fact can be associated with a Mott transition occurring in this region. The spin-partitioned TPS tensor, on the other hand, grows quadratically at long distances, a fact that corresponds to the high spin mobility in a magnetic system.
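In the notation suggested by the abstract, the TPS tensor is the second moment cumulant of the total position operator, and the spin partition follows from splitting that operator into spin-up and spin-down parts (a sketch of the definitions, not the paper's full derivation):

```latex
\hat{R} = \sum_i \hat{r}_i = \hat{R}_\uparrow + \hat{R}_\downarrow, \qquad
\Lambda = \langle\Psi|\hat{R}\otimes\hat{R}|\Psi\rangle
        - \langle\Psi|\hat{R}|\Psi\rangle \otimes \langle\Psi|\hat{R}|\Psi\rangle,
```

```latex
\Lambda_{\sigma\sigma'} = \langle\Psi|\hat{R}_\sigma\otimes\hat{R}_{\sigma'}|\Psi\rangle
        - \langle\Psi|\hat{R}_\sigma|\Psi\rangle \otimes \langle\Psi|\hat{R}_{\sigma'}|\Psi\rangle,
\qquad \sigma,\sigma' \in \{\uparrow,\downarrow\},
```

so the spin-summed tensor is recovered as the sum of the four spin blocks.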
ERβ Expression and Breast Cancer Risk Prediction for Women with Atypias
Hieken, Tina J; Carter, Jodi M; Hawse, John R; Hoskin, Tanya L; Bois, Melanie; Frost, Marlene; Hartmann, Lynn C; Radisky, Derek C; Visscher, Daniel W; Degnim, Amy C
2015-01-01
Estrogen receptor beta (ERβ) is highly expressed in normal breast epithelium and is a putative tumor suppressor. Atypical hyperplasia substantially increases breast cancer risk, but identification of biomarkers to further improve risk stratification is needed. We evaluated ERβ expression in breast tissues from women with atypical hyperplasia and its association with subsequent breast cancer risk. ERβ expression was examined by immunohistochemistry in a well-characterized cohort of 171 women with atypical hyperplasia diagnosed 1967-1991. Nuclear ERβ percent and intensity were scored in the atypia and adjacent normal lobules. An ERβ sum score (percent + intensity) was calculated and grouped as low, moderate, or high. Competing risks regression was used to assess associations of ERβ expression with breast cancer risk. After 15 years median follow-up, 36 women developed breast cancer. ERβ expression was lower in atypia lobules than in normal lobules, by both percent staining and intensity (both p<0.001). Higher ERβ expression in the atypia or normal lobules, evaluated by percent staining, intensity, or sum score, decreased the risk of subsequent breast cancer by 2-fold (p=0.04) and 2.5-fold (p=0.006). High normal lobule ERβ expression conferred the strongest protective effect in pre-menopausal women: the 20-year cumulative incidence of breast cancer was 0% for women
Improvement of gross theory of beta-decay for application to nuclear data
NASA Astrophysics Data System (ADS)
Koura, Hiroyuki; Yoshida, Tadashi; Tachibana, Takahiro; Chiba, Satoshi
2017-09-01
A theoretical study of β decay and delayed neutrons has been carried out with a global β-decay model, the gross theory. The gross theory is based on a consideration of the sum rule of the β-strength function, and gives reasonable results for β-decay rates and delayed neutrons across the entire nuclear mass region. In a fissioning nucleus, neutrons are produced by β decay of neutron-rich fission fragments from actinides; these are known as delayed neutrons. The average number of delayed neutrons is estimated as the sum of the β-delayed neutron-emission probabilities multiplied by the cumulative fission yield for each nucleus. Such behavior is important for operating nuclear reactors, and if new high-burn-up reactors are adopted, the properties of minor actinides will play an important role in the system, but the relevant data have been insufficient. We re-analyze and improve the gross theory. For example, we consider the parity of neutrons and protons at the Fermi surface, and treat a suppression of the allowed transitions within the framework of the gross theory. With the improved gross theory, underestimated half-lives in the neutron-rich indium isotopes and the neighboring region increase, and consequently follow the experimental trend. The ability to reproduce (and also predict) β-decay rates and delayed-neutron emission probabilities is discussed. We also describe the development of a program code implementing the gross theory of β decay, including the improvements; once preparation is complete, this code can be released to the nuclear data community.
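The averaging described above reduces to a weighted sum over fission fragments (a minimal sketch with made-up numbers; real evaluations use measured or calculated P_n values and fission yields):

```python
# Average number of delayed neutrons per fission: sum over fragments of the
# beta-delayed neutron-emission probability P_n times the cumulative fission
# yield. (The numbers below are illustrative placeholders, not evaluated data.)

def avg_delayed_neutrons(pn, cumulative_yield):
    return sum(p * y for p, y in zip(pn, cumulative_yield))

nu_d = avg_delayed_neutrons([0.02, 0.05, 0.10], [0.03, 0.01, 0.005])
```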
Spectral Analysis: From Additive Perspective to Multiplicative Perspective
NASA Astrophysics Data System (ADS)
Wu, Z.
2017-12-01
The early usage of trigonometric functions can be traced back to at least the 17th century BC. It was Bhaskara II, in the 12th century CE, who first proved the mathematical equivalence between the sum of two trigonometric functions of any given angles and the product of two trigonometric functions of related angles, an identity taught today in middle-school classrooms. The additive perspective on trigonometric functions led to the development of the Fourier transform, which expresses any function as the sum of a set of trigonometric functions and opened a new mathematical field called harmonic analysis. Unfortunately, Fourier's sum cannot directly express nonlinear interactions between trigonometric components of different periods, and thereby lacks the capability to quantify nonlinear interactions in dynamical systems. In this talk, the speaker will introduce the Huang transform and the Holo-spectrum, pioneered by Norden Huang, which emphasize the multiplicative perspective on trigonometric functions in expressing any function. The Holo-spectrum is a multi-dimensional spectral expression of a time series that explicitly identifies the interactions among different scales and quantifies nonlinear interactions hidden in a time series. Along with this introduction, the developing concepts of physical, rather than mathematical, analysis of data will be explained. Various enlightening applications of Holo-spectrum analysis in atmospheric and climate studies will also be presented.
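The equivalence attributed to Bhaskara II is, in modern notation, the product-to-sum family of identities, for example:

```latex
\sin A \,\sin B = \tfrac{1}{2}\left[\cos(A-B) - \cos(A+B)\right], \qquad
\cos A \,\cos B = \tfrac{1}{2}\left[\cos(A-B) + \cos(A+B)\right].
```

Read left to right, a product of oscillations becomes a sum (the additive, Fourier perspective); read right to left, a sum becomes a product (the multiplicative perspective the talk emphasizes).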
Aad, G.
2014-11-26
ATLAS measurements of the azimuthal anisotropy in lead-lead collisions at √s_NN = 2.76 TeV are shown using a dataset of approximately 7 μb^-1 collected at the LHC in 2010. The measurements are performed for charged particles with transverse momenta 0.5 < p_T < 20 GeV and in the pseudorapidity range |η| < 2.5. The anisotropy is characterized by the Fourier coefficients, v_n, of the charged-particle azimuthal angle distribution for n = 2-4. The Fourier coefficients are evaluated using multi-particle cumulants calculated with the generating function method. Results on the transverse momentum, pseudorapidity, and centrality dependence of the v_n coefficients are presented. The elliptic flow, v_2, is obtained from the two-, four-, six- and eight-particle cumulants, while higher-order coefficients, v_3 and v_4, are determined with two- and four-particle cumulants. Flow harmonics v_n measured with four-particle cumulants are significantly reduced compared to the measurement involving two-particle cumulants. A comparison to v_n measurements obtained using different analysis methods and previously reported by the LHC experiments is also shown. Results of measurements of flow fluctuations evaluated with multi-particle cumulants are shown as a function of transverse momentum and collision centrality. Models of the initial spatial geometry and its fluctuations fail to describe the measured flow fluctuations.
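For reference, the standard two- and four-particle cumulant relations (as commonly defined in the flow literature; the ATLAS generating-function implementation differs in detail) are:

```latex
c_n\{2\} = \langle\langle 2 \rangle\rangle, \qquad
v_n\{2\} = \sqrt{c_n\{2\}}, \qquad
c_n\{4\} = \langle\langle 4 \rangle\rangle - 2\,\langle\langle 2 \rangle\rangle^{2}, \qquad
v_n\{4\} = \left(-\,c_n\{4\}\right)^{1/4},
```

where ⟨⟨2⟩⟩ and ⟨⟨4⟩⟩ are event-averaged two- and four-particle azimuthal correlations. Flow fluctuations enter the two- and four-particle estimates with opposite sign, which is why v_n{4} is systematically below v_n{2}.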
OpCost: an open-source system for estimating costs of stand-level forest operations
Conor K. Bell; Robert F. Keefe; Jeremy S. Fried
2017-01-01
This report describes and documents the OpCost forest operations cost model, a key component of the BioSum analysis framework. OpCost is available in two editions: as a callable module for use with BioSum, and in a stand-alone edition that can be run directly from R. OpCost model logic and assumptions for this open-source tool are explained, references to the...
Determination of total dissolved solids in water analysis
Howard, C.S.
1933-01-01
The figure for total dissolved solids, based on the weight of the residue on evaporation after heating for 1 hour at 180??C., is reasonably close to the sum of the determined constituents for most natural waters. Waters of the carbonate type that are high in magnesium may give residues that weigh less than the sum. Natural waters of the sulfate type usually give residues that are too high on account of incomplete drying.
A Quantitative Analysis of the Benefits of Prototyping Fixed-Wing Aircraft
2012-06-14
in then-year dollars. The RDT&E costs through FSD were provided in then-year dollars as a lump sum. Additionally, the cost of full capability development was available in then-year dollars as a lump sum. Full capability development was the RDT&E that continued after the completion of the FSD contract, which ended in July 1984. In [31], the authors stated that full capability development occurred through approximately 1990
Is walkability associated with a lower cardiometabolic risk?
Coffee, Neil T; Howard, Natasha; Paquet, Catherine; Hugo, Graeme; Daniel, Mark
2013-05-01
Walkability of residential environments has been associated with more walking. Given the health benefits of walking, people living in locations with higher measured walkability are expected to have a lower risk of cardiometabolic diseases. This study tested the hypothesis that higher walkability was associated with lower cardiometabolic risk (CMR) for two administrative spatial units and three road buffers. Data were from the first wave of the North West Adelaide Health Study, collected between 2000 and 2003. CMR was expressed as a cumulative sum of six clinical risk markers, selected to reflect components of the metabolic syndrome. Walkability was based on an established methodology and operationalised as dwelling density, intersection density, land-use mix, and retail footprint. Walkability was associated with lower CMR for the three road-buffer representations of the built environment but not for the two administrative spatial units. This may indicate a limitation in the use of administrative spatial units for analyses of walkability and health outcomes. Copyright © 2013 Elsevier Ltd. All rights reserved.
A risk-adjusted O-E CUSUM with monitoring bands for monitoring medical outcomes.
Sun, Rena Jie; Kalbfleisch, John D
2013-03-01
In order to monitor a medical center's survival outcomes using simple plots, we introduce a risk-adjusted Observed-Expected (O-E) Cumulative SUM (CUSUM) along with monitoring bands as a decision criterion. The proposed monitoring bands can be used in place of a more traditional but complicated V-shaped mask or the simultaneous use of two one-sided CUSUMs. The resulting plot is designed to simultaneously monitor for failure-time outcomes that are "worse than expected" or "better than expected." The slopes of the O-E CUSUM provide direct estimates of the relative risk (as compared to a standard or expected failure rate) for the data being monitored. Appropriate rejection regions are obtained by controlling the false-alarm rate (type I error) over a period of given length. Simulation studies are conducted to illustrate the performance of the proposed method. A case study is carried out for 58 liver transplant centers. The use of CUSUM methods for quality improvement is stressed. Copyright © 2013, The International Biometric Society.
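The O-E path itself is simple to compute (an illustrative sketch only; the paper's risk adjustment, monitoring bands, and rejection regions are not reproduced here):

```python
# At each monitoring step, add observed events minus the expected count under
# the standard rate; a persistent upward slope suggests worse-than-expected
# outcomes, a persistent downward slope better-than-expected.
from itertools import accumulate

def oe_cusum(observed, expected):
    return list(accumulate(o - e for o, e in zip(observed, expected)))

# Made-up monthly counts for one center:
path = oe_cusum([1, 0, 2, 0, 1], [0.8, 0.9, 0.7, 1.1, 0.6])
```

In the full method, this path is plotted against monitoring bands calibrated to control the false-alarm rate over a period of given length.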
Keefe, Matthew J; Loda, Justin B; Elhabashy, Ahmad E; Woodall, William H
2017-06-01
The traditional implementation of the risk-adjusted Bernoulli cumulative sum (CUSUM) chart for monitoring surgical outcome quality requires waiting a pre-specified period of time after surgery before incorporating patient outcome information. We propose a simple but powerful implementation of the risk-adjusted Bernoulli CUSUM chart that incorporates outcome information as soon as it is available, rather than waiting a pre-specified period of time after surgery. A simulation study is presented that compares the performance of the traditional implementation of the risk-adjusted Bernoulli CUSUM chart to our improved implementation. We show that incorporating patient outcome information as soon as it is available leads to quicker detection of process deterioration. Deterioration of surgical performance could be detected much sooner using our proposed implementation, which could lead to the earlier identification of problems. © The Author 2017. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
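The chart's per-patient update can be sketched as follows (a hedged illustration using the widely cited Steiner-style log-likelihood-ratio score; the specific risk model, control limits, and the authors' continuous-updating refinement are not reproduced):

```python
import math

def risk_adjusted_bernoulli_cusum(outcomes, probs, odds_ratio):
    """Risk-adjusted Bernoulli CUSUM path (sketch).

    outcomes: 0/1 failure indicators per patient, in monitoring order.
    probs: in-control failure probability for each patient from a risk model.
    odds_ratio: out-of-control alternative (odds multiplied by this factor).
    The statistic accumulates log-likelihood-ratio scores and is reset at
    zero; a signal is raised when it crosses a control limit chosen for the
    desired false-alarm rate.
    """
    s, path = 0.0, []
    for y, p in zip(outcomes, probs):
        # Log-likelihood-ratio weight for outcome y at risk p.
        w = y * math.log(odds_ratio) - math.log(1 - p + odds_ratio * p)
        s = max(0.0, s + w)
        path.append(s)
    return path
```

The proposal in this paper amounts to feeding each patient's outcome into this update as soon as it is known, rather than waiting out a fixed post-surgery follow-up window.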
Austin, Peter C
2018-01-01
The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest.
A concept for routine emergency-care data-based syndromic surveillance in Europe.
Ziemann, A; Rosenkötter, N; Garcia-Castrillo Riesgo, L; Schrell, S; Kauhl, B; Vergeiner, G; Fischer, M; Lippert, F K; Krämer, A; Brand, H; Krafft, T
2014-11-01
We developed a syndromic surveillance (SyS) concept using emergency dispatch, ambulance and emergency-department data from different European countries. Based on an inventory of sub-national emergency data availability in 12 countries, we propose framework definitions for specific syndromes and a SyS system design. We tested the concept by retrospectively applying cumulative sum and spatio-temporal cluster analyses for the detection of local gastrointestinal outbreaks in four countries and comparing the results with notifiable disease reporting. Routine emergency data was available daily and electronically in 11 regions, following a common structure. We identified two gastrointestinal outbreaks in two countries; one was confirmed as a norovirus outbreak. We detected 1/147 notified outbreaks. Emergency-care data-based SyS can supplement local surveillance with near real-time information on gastrointestinal patients, especially in special circumstances, e.g. foreign tourists. It most likely cannot detect the majority of local gastrointestinal outbreaks with few, mild or dispersed cases.
Mechanical properties of the gastro-esophageal junction in health, achalasia, and scleroderma.
Mearin, F; Fonollosa, V; Vilardell, M; Malagelada, J R
2000-07-01
Manometric assessment of the gastro-esophageal junction (GEJ) is deceptive in that it ignores key dynamic properties of the junction, such as resistance to flow and compliance. Our aim was to investigate the mechanical properties of the GEJ, comprising intraluminal pressure (measured by manometry) and resistance to flow and compliance (measured by resistometry). We studied 8 healthy subjects, 11 patients with achalasia, and 11 patients with scleroderma. We used a pneumatic resistometer, previously developed and validated in our laboratory. The resistometer consists of a flaccid polyurethane 5-cm cylinder connected to an electronically regulated nitrogen-injection system; the instrument records nitrogen flow through the cylinder while maintaining a constant pressure gradient between its proximal and distal ends. By placing the cylinder successively in the proximal stomach and along the GEJ, we measured the GEJ-gastric resistance gradient (GEJ resistance minus gastric resistance) and were able to calculate the cumulative resistance (sum of resistance exerted at each pressure level), peak resistance (at any injection pressure), nil resistance point (injection pressure in mmHg at which GEJ resistance equals gastric resistance), and compliance slope (flow/pressure relationship). We found that GEJ resistance to flow (cumulative resistance, peak resistance, and nil resistance point) is significantly increased in achalasia and decreased in scleroderma (P < 0.05 versus health), while GEJ compliance is diminished in achalasia (P < 0.05 versus health) and normal in scleroderma. Achalasia is a disease characterized by increased GEJ resistance and rigidity. By contrast, although scleroderma is characterized by decreased GEJ resistance, GEJ compliance may be normal.
75 FR 2938 - National Ambient Air Quality Standards for Ozone
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-19
...Based on its reconsideration of the primary and secondary national ambient air quality standards (NAAQS) for ozone (O3) set in March 2008, EPA proposes to set different primary and secondary standards than those set in 2008 to provide requisite protection of public health and welfare, respectively. With regard to the primary standard for O3, EPA proposes that the level of the 8-hour primary standard, which was set at 0.075 ppm in the 2008 final rule, should instead be set at a lower level within the range of 0.060 to 0.070 parts per million (ppm), to provide increased protection for children and other ``at risk'' populations against an array of O3-related adverse health effects that range from decreased lung function and increased respiratory symptoms to serious indicators of respiratory morbidity including emergency department visits and hospital admissions for respiratory causes, and possibly cardiovascular-related morbidity as well as total non-accidental and cardiopulmonary mortality. With regard to the secondary standard for O3, EPA proposes that the secondary O3 standard, which was set identical to the revised primary standard in the 2008 final rule, should instead be a new cumulative, seasonal standard expressed as an annual index of the sum of weighted hourly concentrations, cumulated over 12 hours per day (8 am to 8 pm) during the consecutive 3-month period within the O3 season with the maximum index value, set at a level within the range of 7 to 15 ppm- hours, to provide increased protection against O3-related adverse impacts on vegetation and forested ecosystems.
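The cumulative, seasonal form described here is the W126 index: each daytime (8 a.m.-8 p.m.) hourly O3 concentration receives a sigmoidal weight so that low concentrations count little and high concentrations count nearly in full, and the weighted values are summed in ppm-hours (a sketch of the weighting; the 3-month windowing over the O3 season is omitted):

```python
import math

# W126 weighted sum of hourly ozone concentrations (C in ppm). The sigmoidal
# weight 1 / (1 + 4403 * exp(-126 * C)) is the standard W126 form; verify
# constants against EPA documentation before any regulatory use.
def w126(hourly_ppm):
    return sum(c / (1 + 4403 * math.exp(-126 * c)) for c in hourly_ppm)
```

Under the proposal, this index, computed for the consecutive 3-month period with the maximum value, would be compared against a level in the range of 7 to 15 ppm-hours.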
Andreasen, Nancy C; Pressler, Marcus; Nopoulos, Peg; Miller, Del; Ho, Beng-Choon
2010-02-01
A standardized quantitative method for comparing dosages of different drugs is a useful tool for designing clinical trials and for examining the effects of long-term medication side effects such as tardive dyskinesia. Such a method requires establishing dose equivalents. An expert consensus group has published charts of equivalent doses for various antipsychotic medications for first- and second-generation medications. These charts were used in this study. Regression was used to compare each drug in the experts' charts to chlorpromazine and haloperidol and to create formulas for each relationship. The formulas were solved for chlorpromazine 100 mg and haloperidol 2 mg to derive new chlorpromazine and haloperidol equivalents. The formulas were incorporated into our definition of dose-years such that 100 mg/day of chlorpromazine equivalent or 2 mg/day of haloperidol equivalent taken for 1 year is equal to one dose-year. All comparisons to chlorpromazine and haloperidol were highly linear with R² values greater than 0.9. A power transformation further improved linearity. By deriving a unique formula that converts doses to chlorpromazine or haloperidol equivalents, we can compare otherwise dissimilar drugs. These equivalents can be multiplied by the time an individual has been on a given dose to derive a cumulative value measured in dose-years in the form of (chlorpromazine equivalent in mg) × (time on dose measured in years). After each dose has been converted to dose-years, the results can be summed to provide a cumulative quantitative measure of lifetime exposure. Copyright 2010 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
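The dose-year bookkeeping described above reduces to a few lines. The sketch below hardwires only the two anchor equivalents stated in the abstract (100 mg/day chlorpromazine or 2 mg/day haloperidol for one year equals one dose-year); the drug history is hypothetical, and real conversions for other drugs would come from the fitted regression formulas, not this table.

```python
# Milligrams per day that count as one "unit" (i.e., one dose-year per year).
# Only the two anchors from the abstract; other drugs are out of scope here.
CPZ_EQUIV_MG = {"chlorpromazine": 100.0, "haloperidol": 2.0}

def dose_years(drug, mg_per_day, years_on_dose):
    """One dose-year = 100 mg/day chlorpromazine-equivalent taken for 1 year."""
    return (mg_per_day / CPZ_EQUIV_MG[drug]) * years_on_dose

# Lifetime exposure is the sum over all dose epochs (hypothetical history):
history = [("chlorpromazine", 200.0, 0.5), ("haloperidol", 4.0, 2.0)]
total = sum(dose_years(d, mg, yr) for d, mg, yr in history)  # 1.0 + 4.0 = 5.0
```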
Adverse Childhood Experiences, the Medical Home, and Child Well-Being.
Balistreri, Kelly Stamper
2015-11-01
To examine the relationship between adverse childhood experiences (ACE), access to a medical home and a global measure of well-being among children ages 6-17 using the 2011-2012 National Survey of Children's Health. Multivariate linear regressions assessed the associations between each adverse experience and an index of child well-being with and without the impact of other events. The number of ACE was summed for each respondent and the analyses were repeated with the cumulative score as a continuous variable. The cumulative model was repeated with the addition of an interaction term between ACE score and medical home access. All analyses were conducted separately for children ages 6-11 and adolescents 12-17. Over half (53 %) of US children ages 6-17 have experienced some adverse experience during childhood. Over a quarter (28 %) have experienced at least two adverse experiences, while 15 % have experienced three or more hardships. Results suggest that the accumulation of ACE reduces well-being in children. The associations remained significant after controlling for gender, race/ethnicity, age, parental education, special health condition, and medical home access. Medical home access was consistently associated with higher levels of child well-being and was a significant moderator of the relationship between the total ACE and child well-being among children ages 6-11. Children with ACE exposure and access to a medical home have higher levels of well-being than comparable children without access to a medical home. Children exposed to adverse experiences have measurably lower levels of well-being, although younger children with access to a medical home are protected at increasing exposure.
Freyburger, Geneviève; Labrouche, Sylvie; Hubert, Christophe; Bauduer, Frédéric
2015-01-01
The Genetic Markers for Thrombosis (GMT) study compared the relative influence of ethnicity and thrombotic phenotype regarding the distribution of SNPs implicated in haemostasis pathophysiology ("haemostaseome"). We assessed 384 SNPs in three groups, each of 480 subjects: 1) general population of the Aquitaine region (Southwestern France) used as control; 2) patients with venous thromboembolism from the same area; and 3) autochthonous Basques, a genetic isolate, who demonstrate unusual characteristics regarding the coagulation system. This study sought to evaluate i) the value of looking for a large number of genes in order to identify new genetic markers of thrombosis, ii) the value of investigating low risk factors and potential preferential associations, iii) the impact of ethnicity on the characterisation of markers for thrombosis. We did not detect any previously unrecognised SNP significantly associated with thrombosis risk or any preferential associations of low-risk factors in patients with thrombosis. The sum of χ² values for our 110 significant SNPs demonstrated a smaller genetic distance between patients and controls (321 cumulated χ² value) than between Basques and controls (1,570 cumulated χ² value). Hence, our study confirms the genetic particularity of Basques, especially regarding a significantly lower expression of the non-O blood group (p < 0.0004). This is mitigated by a higher prevalence of factor II Leiden (p < 0.02), while factor V Leiden prevalence does not differ. Numerous other differences covering a wide range of proteins of the haemostaseome may result in an overall different genetic risk for venous thromboembolism.
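The "cumulated χ² value" used above as a crude genetic distance can be made concrete with a minimal sketch: a Pearson χ² statistic per SNP from a 2×2 allele-count table, summed over the panel. The allele counts below are hypothetical illustrations, not data from the study.

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square for a 2x2 table [[a, b], [c, d]]
    (no continuity correction): n*(ad - bc)^2 / row/column products."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Cumulated chi-square over a panel of SNPs as a group-distance measure;
# each tuple holds hypothetical allele counts (group1 alt/ref, group2 alt/ref).
tables = [(30, 70, 50, 50), (45, 55, 40, 60)]
distance = sum(chi2_2x2(*t) for t in tables)
```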
A new approach to the problem of bulk-mediated surface diffusion.
Berezhkovskii, Alexander M; Dagdug, Leonardo; Bezrukov, Sergey M
2015-08-28
This paper is devoted to bulk-mediated surface diffusion of a particle which can diffuse both on a flat surface and in the bulk layer above the surface. It is assumed that the particle is on the surface both initially (at t = 0) and at the observation time t, while in between it may escape from the surface and return any number of times. We propose a new approach to the problem, which reduces its solution to that of a two-state problem of the particle transitions between the surface and the bulk layer, focusing on the cumulative residence times spent by the particle in the two states. These times are random variables, the sum of which is equal to the total observation time t. The advantage of the proposed approach is that it allows for a simple exact analytical solution for the double Laplace transform of the conditional probability density of the cumulative residence time spent on the surface by the particle observed for time t. This solution is used to find the Laplace transform of the particle mean square displacement and to analyze the peculiarities of its time behavior over the entire range of time. We also establish a relation between the double Laplace transform of the conditional probability density and the Fourier-Laplace transform of the particle propagator over the surface. The proposed approach treats the cases of both finite and infinite bulk layer thicknesses (where bulk-mediated surface diffusion is normal and anomalous at asymptotically long times, respectively) on an equal footing.
A linear programming approach to max-sum problem: a review.
Werner, Tomás
2007-07-01
The max-sum labeling problem, defined as maximizing a sum of binary (i.e., pairwise) functions of discrete variables, is a general NP-hard optimization problem with many applications, such as computing the MAP configuration of a Markov random field. We review a not widely known approach to the problem, developed by Ukrainian researchers Schlesinger et al. in 1976, and show how it contributes to recent results, most importantly, those on the convex combination of trees and tree-reweighted max-product. In particular, we review Schlesinger et al.'s upper bound on the max-sum criterion, its minimization by equivalent transformations, its relation to the constraint satisfaction problem, the fact that this minimization is dual to a linear programming relaxation of the original problem, and the three kinds of consistency necessary for optimality of the upper bound. We revisit problems with Boolean variables and supermodular problems. We describe two algorithms for decreasing the upper bound. We present an example application for structural image analysis.
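As a concrete anchor for the max-sum objective, note that the problem is exactly solvable by dynamic programming when the graph is a chain (or tree), which is also the case where the LP relaxation reviewed above is tight. A minimal Viterbi-style sketch follows; the score tables are hypothetical and the function name is ours, not Schlesinger et al.'s notation.

```python
def max_sum_chain(unary, pairwise):
    """Exact max-sum labeling on a chain by dynamic programming.

    unary    -- unary[i][x]: score of label x at node i
    pairwise -- pairwise[i][x][y]: score of labels (x, y) on edge (i, i+1)
    Returns (best total score, best labeling).
    """
    n, k = len(unary), len(unary[0])
    score = list(unary[0])  # best score of any labeling ending at node 0
    back = []               # backpointers for recovering the argmax
    for i in range(1, n):
        new, ptr = [], []
        for y in range(k):
            best_x = max(range(k), key=lambda x: score[x] + pairwise[i - 1][x][y])
            new.append(score[best_x] + pairwise[i - 1][best_x][y] + unary[i][y])
            ptr.append(best_x)
        back.append(ptr)
        score = new
    y = max(range(k), key=lambda j: score[j])
    labels = [y]
    for ptr in reversed(back):
        labels.append(ptr[labels[-1]])
    return max(score), labels[::-1]
```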
A fast summation method for oscillatory lattice sums
NASA Astrophysics Data System (ADS)
Denlinger, Ryan; Gimbutas, Zydrunas; Greengard, Leslie; Rokhlin, Vladimir
2017-02-01
We present a fast summation method for lattice sums of the type which arise when solving wave scattering problems with periodic boundary conditions. While there are a variety of effective algorithms in the literature for such calculations, the approach presented here is new and leads to a rigorous analysis of Wood's anomalies. These arise when illuminating a grating at specific combinations of the angle of incidence and the frequency of the wave, for which the lattice sums diverge. They were discovered by Wood in 1902 as singularities in the spectral response. The primary tools in our approach are the Euler-Maclaurin formula and a steepest descent argument. The resulting algorithm has super-algebraic convergence and requires only milliseconds of CPU time.
Ramanujan sums for signal processing of low-frequency noise.
Planat, Michel; Rosu, Haret; Perrine, Serge
2002-11-01
An aperiodic (low-frequency) spectrum may originate from the error term in the mean value of an arithmetical function such as the Möbius function or the Mangoldt function, which are coding sequences for prime numbers. In the discrete Fourier transform the analyzing wave is periodic and not well suited to represent the low-frequency regime. In its place we introduce a different signal processing tool based on the Ramanujan sums c_q(n), well adapted to the analysis of arithmetical sequences with many resonances p/q. The sums are quasiperiodic versus the time n and aperiodic versus the order q of the resonance. Different results arise from the use of this Ramanujan-Fourier transform in the context of arithmetical and experimental signals.
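The Ramanujan sums can be computed directly from their definition, c_q(n) = Σ exp(2πikn/q) over k in [1, q] coprime to q; since c_q(n) is always an integer (the imaginary parts cancel in conjugate pairs), a real cosine sum with rounding suffices. A small sketch:

```python
from math import cos, gcd, pi

def ramanujan_sum(q, n):
    """c_q(n): sum of exp(2*pi*i*k*n/q) over 1 <= k <= q with gcd(k, q) = 1.
    The imaginary parts cancel, so the real cosine sum gives the exact
    (integer) value after rounding off floating-point error."""
    return round(sum(cos(2 * pi * k * n / q)
                     for k in range(1, q + 1) if gcd(k, q) == 1))
```

Useful sanity checks: c_q(0) equals Euler's totient φ(q), and c_q(1) equals the Möbius function μ(q).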
Ramanujan sums for signal processing of low-frequency noise
NASA Astrophysics Data System (ADS)
Planat, Michel; Rosu, Haret; Perrine, Serge
2002-11-01
An aperiodic (low-frequency) spectrum may originate from the error term in the mean value of an arithmetical function such as the Möbius function or the Mangoldt function, which are coding sequences for prime numbers. In the discrete Fourier transform the analyzing wave is periodic and not well suited to represent the low-frequency regime. In its place we introduce a different signal processing tool based on the Ramanujan sums c_q(n), well adapted to the analysis of arithmetical sequences with many resonances p/q. The sums are quasiperiodic versus the time n and aperiodic versus the order q of the resonance. Different results arise from the use of this Ramanujan-Fourier transform in the context of arithmetical and experimental signals.
Competing approaches to analysis of failure times with competing risks.
Farley, T M; Ali, M M; Slaymaker, E
2001-12-15
For the analysis of time to event data in contraceptive studies when individuals are subject to competing causes for discontinuation, some authors have recently advocated the use of the cumulative incidence rate as a more appropriate measure to summarize data than the complement of the Kaplan-Meier estimate of discontinuation. The former method estimates the rate of discontinuation in the presence of competing causes, while the latter is a hypothetical rate that would be observed if discontinuations for the other reasons could not occur. The difference between the two methods of analysis is the continuous time equivalent of a debate that took place in the contraceptive literature in the 1960s, when several authors advocated the use of net rates (adjusted or single decrement life table rates) in preference to crude rates (multiple decrement life table rates). A small simulation study illustrates the interpretation of the two types of estimate: the complement of the Kaplan-Meier estimate corresponds to a hypothetical rate where discontinuations for other reasons did not occur, while the cumulative incidence gives systematically lower estimates. The Kaplan-Meier estimates are more appropriate when estimating the effectiveness of a contraceptive method, but the cumulative incidence estimates are more appropriate when making programmatic decisions regarding contraceptive methods. Other areas of application, such as cancer studies, may prefer to use the cumulative incidence estimates, but their use should be determined according to the application. Copyright 2001 John Wiley & Sons, Ltd.
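The contrast between the two estimators can be made concrete on toy data. The sketch below computes both in discrete time: the complement of the Kaplan-Meier estimate with competing causes treated as censoring, and the cumulative incidence (an Aalen-Johansen-style estimate); the records and function name are hypothetical illustrations.

```python
def km_complement_and_cuminc(records, horizon):
    """records: (time, cause) pairs with cause 1 = discontinuation of
    interest, cause 2 = competing cause, 0 = censored. Returns, at
    `horizon`: (1 - KM with competing causes censored, cumulative incidence).
    """
    times = sorted({t for t, c in records if c in (1, 2) and t <= horizon})
    km_all = 1.0     # all-cause event-free survival
    km_cause1 = 1.0  # "net" survival: competing causes treated as censoring
    cuminc = 0.0
    for t in times:
        at_risk = sum(1 for u, _ in records if u >= t)
        d1 = sum(1 for u, c in records if u == t and c == 1)
        d2 = sum(1 for u, c in records if u == t and c == 2)
        cuminc += km_all * d1 / at_risk  # increment uses all-cause S(t-)
        km_all *= 1 - (d1 + d2) / at_risk
        km_cause1 *= 1 - d1 / at_risk
    return 1 - km_cause1, cuminc
```

On any data the cumulative incidence is no larger than the Kaplan-Meier complement, which matches the "systematically lower" observation above.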
A Screening Method for Assessing Cumulative Impacts
Alexeeff, George V.; Faust, John B.; August, Laura Meehan; Milanes, Carmen; Randles, Karen; Zeise, Lauren; Denton, Joan
2012-01-01
The California Environmental Protection Agency (Cal/EPA) Environmental Justice Action Plan calls for guidelines for evaluating “cumulative impacts.” As a first step toward such guidelines, a screening methodology for assessing cumulative impacts in communities was developed. The method, presented here, is based on the working definition of cumulative impacts adopted by Cal/EPA [1]: “Cumulative impacts means exposures, public health or environmental effects from the combined emissions and discharges in a geographic area, including environmental pollution from all sources, whether single or multi-media, routinely, accidentally, or otherwise released. Impacts will take into account sensitive populations and socio-economic factors, where applicable and to the extent data are available.” The screening methodology is built on this definition as well as current scientific understanding of environmental pollution and its adverse impacts on health, including the influence of both intrinsic, biological factors and non-intrinsic socioeconomic factors in mediating the effects of pollutant exposures. It addresses disparities in the distribution of pollution and health outcomes. The methodology provides a science-based tool to screen places for relative cumulative impacts, incorporating both the pollution burden on a community (including exposures to pollutants and their public health and environmental effects) and community characteristics, specifically sensitivity and socioeconomic factors. The screening methodology provides relative rankings to distinguish more highly impacted communities from less impacted ones. It may also help identify which factors are the greatest contributors to a community’s cumulative impact. It is not designed to provide quantitative estimates of community-level health impacts. A pilot screening analysis is presented here to illustrate the application of this methodology.
Once guidelines are adopted, the methodology can serve as a screening tool to help Cal/EPA programs prioritize their activities and target those communities with the greatest cumulative impacts. PMID:22470315
Evaluation of factors that affect hip moment impulse during gait: A systematic review.
Inai, Takuma; Takabayashi, Tomoya; Edama, Mutsuaki; Kubo, Masayoshi
2018-03-01
Decreasing the daily cumulative hip moments in the frontal and sagittal planes may lower the risk of hip osteoarthritis. It may therefore be important to evaluate the factors that affect hip moment impulse during gait, which remain unclear. This systematic review aimed to evaluate different factors that affect hip moment impulse during gait in healthy adults and patients with hip osteoarthritis. Four databases (Scopus, ScienceDirect, PubMed, and PEDro) were searched up to August 2017 to identify studies that examined hip moment impulse during gait. Data extracted for analysis included the sample size, age, height, body mass, type of intervention, and main findings. After screening, 10 of the 975 studies identified were included in our analysis. Several factors, including a rocker bottom shoe, FitFlop™ sandals, ankle push-off, posture, stride length, body-weight unloading, a rollator, walking poles, and a knee brace, were reviewed. The main findings were as follows: increasing ankle push-off decreased both the hip flexion and extension moment impulses; body-weight unloading decreased both the hip extension and adduction moment impulses; the FitFlop™ sandal increased the sum of the hip flexion and extension moment impulses; long strides increased the hip extension moment impulse; and the use of a knee brace increased the hip flexion moment impulse. Of note, none of the eligible studies included patients with hip osteoarthritis. The hip moment impulses can be modified by person-specific factors (ankle push-off and long strides) and external factors (body-weight unloading and use of the FitFlop™ sandals and a knee brace). Effects on the progression of hip osteoarthritis remain to be evaluated. Copyright © 2018 Elsevier B.V. All rights reserved.
Resting Heart Rate Variability Among Professional Baseball Starting Pitchers.
Cornell, David J; Paxson, Jeffrey L; Caplinger, Roger A; Seligman, Joshua R; Davis, Nicholas A; Ebersole, Kyle T
2017-03-01
Cornell, DJ, Paxson, JL, Caplinger, RA, Seligman, JR, Davis, NA, and Ebersole, KT. Resting heart rate variability among professional baseball starting pitchers. J Strength Cond Res 31(3): 575-581, 2017-The purpose of this study was to examine the changes in resting heart rate variability (HRV) across a 5-day pitching rotation schedule among professional baseball starting pitchers. The HRV data were collected daily among 8 Single-A level professional baseball starting pitchers (mean ± SD, age = 21.9 ± 1.3 years; height = 185.4 ± 3.6 cm; weight = 85.2 ± 7.5 kg) throughout the entire baseball season with the participant quietly lying supine for 10 minutes. The HRV was quantified by calculating the natural log of the square root of the mean sum of the squared differences (lnRMSSD) during the middle 5 minutes of each R-R series data file. A split-plot repeated-measures analysis of variance was used to examine the influence of pitching rotation day on resting lnRMSSD. A statistically significant main effect of rotation day was identified (F4,706 = 3.139, p = 0.029). Follow-up pairwise analyses indicated that resting lnRMSSD on day 2 was significantly (p ≤ 0.05) lower than all other rotation days. In addition, a statistically significant main effect of pitcher was also identified (F7,706 = 83.388, p < 0.001). These results suggest that professional baseball starting pitchers display altered autonomic nervous system function 1 day after completing a normally scheduled start, as day 2 resting HRV was significantly lower than all other rotation days. In addition, the season average resting lnRMSSD varied among participants, implying that single-subject analysis of resting measures of HRV may be more appropriate when monitoring cumulative workload among this cohort population of athletes.
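The lnRMSSD metric described above reduces to a short computation over successive R-R interval differences. A sketch (the R-R series below is hypothetical; the study used the middle 5 minutes of each 10-minute supine recording):

```python
import math

def ln_rmssd(rr_ms):
    """Natural log of the root mean square of successive R-R differences,
    the HRV index used above; rr_ms is a series of R-R intervals in ms."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return math.log(rmssd)
```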
Morelli, Luca; Guadagni, Simone; Lorenzoni, Valentina; Di Franco, Gregorio; Cobuccio, Luigi; Palmeri, Matteo; Caprili, Giovanni; D'Isidoro, Cristiano; Moglia, Andrea; Ferrari, Vincenzo; Di Candio, Giulio; Mosca, Franco; Turchetti, Giuseppe
2016-09-01
The aim of this study is to compare surgical parameters and the costs of robotic surgery with those of laparoscopic approach in rectal cancer based on a single surgeon's early robotic experience. Data from 25 laparoscopic (LapTME) and the first 50 robotic (RobTME) rectal resections performed at our institution by an experienced laparoscopic surgeon (>100 procedures) between 2009 and 2014 were retrospectively analyzed and compared. Patient demographic, procedure, and outcome data were gathered. Costs of the two procedures were collected, differentiated into fixed and variable costs, and analyzed against the robotic learning curve according to the cumulative sum (CUSUM) method. Based on CUSUM analysis, RobTME group was divided into three phases (Rob1: 1-19; Rob2: 20-40; Rob3: 41-50). Overall median operative time (OT) was significantly lower in LapTME than in RobTME (270 vs 312.5 min, p = 0.006). A statistically significant change in OT by phase of robotic experience was detected in the RobTME group (p = 0.010). Overall mean costs associated with LapTME procedures were significantly lower than with RobTME (p < 0.001). Statistically significant reductions in variable and overall costs were found between robotic phases (p < 0.009 for both). With fixed costs excluded, the difference between laparoscopic and Rob3 was no longer statistically significant. Our results suggest a significant optimization of robotic rectal surgery's costs with experience. Efforts to reduce the dominant fixed cost are recommended to maintain the sustainability of the system and benefit from the technical advantages offered by the robot.
A decade of U.S. Air Force bat strikes
Peurach, Suzanne C.; Dove, Carla J.; Stepko, Laura
2009-01-01
From 1997 through 2007, 821 bat strikes were reported to the U.S. Air Force (USAF) Safety Center by aircraft personnel or ground crew and sent to the National Museum of Natural History, Smithsonian Institution, for identification. Many samples were identified by macroscopic and/or microscopic comparisons with bat specimens housed in the museum, augmented during the last 2 years by DNA analysis. Bat remains from USAF strikes during this period were received at the museum from 40 states in the United States and from 20 countries. We confirmed that 46% of the strikes were caused by bats, but we did not identify them further; we identified 5% only to the family or genus level, and 49% to the species level. Fifty-five of the 101 bat-strike samples submitted for DNA analysis have been identified to the species level. Twenty-five bat species have been recorded striking USAF planes worldwide. The Brazilian free-tailed bat (Tadarida brasiliensis; n = 173) is the species most commonly identified in USAF strike impacts, followed by the red bat (Lasiurus borealis; n = 83). Bat strikes peak during the spring and fall, with >57% occurring from August through October; 82% of the reports that included time of strike were recorded between 2100 and 0900 hours. More than 12% of the bat strikes were reported at >300 m above ground level (AGL). Although <1% of the bat-strike reports indicated damage to USAF aircraft, cumulative damage for 1997 through 2007 totaled >$825,000, and >50% of this sum was attributable to 5 bat-strike incidents. Only 5 bats from the 10 most damaging bat strikes were identified to the species level, either because we did not receive remains with the reports or because the sample was insufficient for identification.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, M; Fan, T; Duan, J
2015-06-15
Purpose: To prospectively assess the potential utility of texture analysis for differentiation of central lung cancer from atelectasis. Methods: Consecutive central lung cancer patients who were referred for CT imaging and PET-CT were enrolled. A radiotherapy physician delineated the tumor and atelectasis on fused imaging based on the CT and PET-CT images. Texture parameters (such as energy, correlation, sum average, difference average, and difference entropy) were obtained to quantitatively discriminate tumor from atelectasis, based on the gray level co-occurrence matrix (GLCM). Results: The texture analysis showed that the parameters correlation and sum average differed with statistical significance (P < 0.05). Conclusion: The results of this study indicate that texture analysis may be useful for the differentiation of central lung cancer and atelectasis.
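The GLCM-derived parameters named above (energy, correlation, sum average, difference average, difference entropy) can be sketched from first principles. The implementation below is a generic illustration of these standard Haralick-style descriptors, not the software used in the study; the offset and gray-level quantization are assumptions.

```python
import numpy as np

def glcm_features(img, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for offset (dy, dx) and a
    few Haralick-style features; img is a 2-D array of integer gray levels."""
    P = np.zeros((levels, levels))
    a = img[:img.shape[0] - dy, :img.shape[1] - dx]   # reference pixels
    b = img[dy:, dx:]                                 # neighbor pixels
    for u, v in zip(a.ravel(), b.ravel()):
        P[u, v] += 1
    P /= P.sum()
    i, j = np.indices(P.shape)
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * P).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * P).sum())
    p_sum = np.bincount((i + j).ravel(), weights=P.ravel())      # p_{x+y}
    p_diff = np.bincount(abs(i - j).ravel(), weights=P.ravel())  # p_{|x-y|}
    nz = p_diff[p_diff > 0]
    return {
        "energy": (P ** 2).sum(),
        "correlation": ((i - mu_i) * (j - mu_j) * P).sum() / (sd_i * sd_j),
        "sum average": (np.arange(p_sum.size) * p_sum).sum(),
        "difference average": (np.arange(p_diff.size) * p_diff).sum(),
        "difference entropy": -(nz * np.log2(nz)).sum(),
    }
```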
Cumulative doses analysis in young trauma patients: a single-centre experience.
Salerno, Sergio; Marrale, Maurizio; Geraci, Claudia; Caruso, Giuseppe; Lo Re, Giuseppe; Lo Casto, Antonio; Midiri, Massimo
2016-02-01
Multidetector computed tomography (MDCT) represents the main source of radiation exposure in trauma patients. The radiation exposure of young patients is a matter of considerable medical concern due to possible long-term effects. Multiple MDCT studies have been observed in the young trauma population with an increase in radiation exposure. We identified 249 young adult patients (178 men and 71 women; age range 14-40 years) who had received more than one MDCT study between June 2010 and June 2014. According to the International Commission on Radiological Protection publication, we calculated cumulative organ doses, applying tissue-weighting factors, using CT-EXPO® software. We observed a mean cumulative dose of about 27 mSv (range 3 to 297 mSv). The distribution analysis is characterised by low effective doses, below 20 mSv, in the majority of the patients. However, in 29 patients the effective dose was found to be higher than 20 mSv. Dose distribution for the various organs analysed (breasts, ovaries, testicles, heart and eye lenses) shows an intense peak at lower doses, but in some cases high doses were recorded. Even though the long-term effects of cumulative doses are still under debate, high doses are observed in this specific group of young patients.
Muroi, Carl; Hugelshofer, Michael; Seule, Martin; Keller, Emanuela
2014-04-01
The degree of inflammatory response with cytokine release is associated with poor outcomes after aneurysmal subarachnoid hemorrhage (SAH). Previously, we reported on an association between systemic IL-6 levels and clinical outcome in patients with aneurysmal SAH. The intention was to assess the impact of nonsteroidal anti-inflammatory drugs (NSAIDs) and acetaminophen on the inflammatory response after SAH. Our method involved exploratory analysis of data and samples collected within a previous study. In 138 patients with SAH, systemic interleukin (IL-6) and c-reactive protein (CRP) were measured daily up to day 14 after SAH. The correlations among the cumulatively applied amount of NSAIDs, inflammatory parameters, and clinical outcome were calculated. An inverse correlation between cumulatively applied NSAIDs and both IL-6 and CRP levels was found (r = -0.437, p < 0.001 and r = -0.369, p < 0.001 respectively). Multivariable linear regression analysis showed a cumulative amount of NSAIDs to be independently predictive for systemic IL-6 and CRP levels. The cumulative amount of NSAIDs reduced the odds for unfavorable outcome, defined as Glasgow outcome scale 1-3. The results indicate a potential beneficial effect of NSAIDs in patients with SAH in terms of ameliorating inflammatory response, which might have an impact on outcome.
Hauff, Nancy J; Fry-McComish, Judith; Chiodo, Lisa M
2017-08-01
To describe relationships between cumulative trauma, partner conflict and post-traumatic stress in African-American postpartum women. Cumulative trauma exposure estimates for women in the USA range from 51-69%. During pregnancy, most trauma research has focused on physical injury to the mother. Post-traumatic stress disorder (PTSD) is associated with trauma and is more prevalent in African-American women than in women of other groups. Knowledge about both the rate and impact of cumulative trauma on pregnancy may contribute to our understanding of women seeking prenatal care, and of disparities in infant morbidity and mortality. This retrospective, correlational, cross-sectional study took place on postpartum units of two Detroit hospitals. Participants were 150 African-American women aged 18-45 who had given birth. Mothers completed the Cumulative Trauma Scale, Conflict Tactics Scale, Clinician Administered Post-traumatic Stress Scale, Edinburgh Postnatal Depression Scale and a Demographic Data form. Descriptive statistics, correlations and multiple regressions were used for data analysis. All participants reported at least one traumatic event in their lifetime. Cumulative trauma and partner conflict predicted PTSD, with the trauma of a life-threatening event for a loved one reported by 60% of the sample. Nearly one-fourth of the women screened were at risk for PTSD. Increased cumulative trauma, increased partner conflict and a lower level of education were related to higher rates of PTSD symptoms. Both cumulative trauma and partner conflict in the past year predict PTSD. Reasoning was used most often for partner conflict resolution. The results of this study offer additional knowledge regarding relationships between cumulative trauma, partner conflict and PTSD in African-American women. Healthcare providers need to be sensitive to patient life-threatening events, personal failures, abuse and other types of trauma.
Current evidence supports the need to assess for post-traumatic stress symptoms during pregnancy. © 2016 John Wiley & Sons Ltd.
Nucifora, Gaetano; Miani, Daniela; Di Chiara, Antonio; Piccoli, Gianluca; Artico, Jessica; Puppato, Michela; Slavich, Gianaugusto; De Biasio, Marzia; Gasparini, Daniele; Proclemer, Alessandro
2013-03-01
Acute myocarditis (AM) may occasionally have an infarct-like presentation. The aim of the present study was to investigate the relation between electrocardiographic (ECG) findings in this group of patients and myocardial damage assessed by cardiac magnetic resonance imaging (MRI) with the late gadolinium enhancement (LGE) technique. Myocardial damage may be associated with ECG changes in infarct-like AM. Forty-one consecutive patients (36 males; mean age, 36 ± 12 years) with diagnosis of AM according to cardiac MRI Lake Louise criteria and infarct-like presentation were included. The relation between site of ST-segment elevation (STE), sum of STE (sumSTE), time to normalization of STE, and development of negative T wave with the extent of LGE (expressed as % of left ventricular mass [%LV LGE]), was evaluated. Most (80%) patients presented with inferolateral STE; mean sumSTE was 5 ± 3 mm. Normalization of STE occurred within 24 hours in 20 (49%) patients. Development of negative T wave occurred in 28 (68%) patients. Cardiac MRI showed LGE in all patients; mean %LV LGE was 9.6 ± 7.2%. Topographic agreement between site of STE and LGE was 68%. At multivariate analysis, sumSTE (β = 0.42, P < 0.001), normalization of STE >24 hours (β = 0.39, P < 0.001), and development of negative T wave (β = 0.49, P < 0.001) were independently related to %LV LGE. Analysis of the site of STE underestimates the extent of myocardial injury among patients with infarct-like myocarditis. However, some ECG features (ie, sumSTE, normalization of STE >24 hours, and development of negative T wave) may help to identify patients with larger areas of myocardial damage. © 2012 Wiley Periodicals, Inc.
Texture functions in image analysis: A computationally efficient solution
NASA Technical Reports Server (NTRS)
Cox, S. C.; Rose, J. F.
1983-01-01
A computationally efficient means for calculating texture measurements from digital images by use of the co-occurrence technique is presented. The calculation of the statistical descriptors of image texture and a solution that circumvents the need for calculating and storing a co-occurrence matrix are discussed. The results show that existing efficient algorithms for calculating sums, sums of squares, and cross products can be used to compute complex co-occurrence relationships directly from the digital image input.
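The idea of computing co-occurrence statistics without forming the matrix can be illustrated for the correlation descriptor: running sums, sums of squares, and cross products over the pixel pairs yield exactly the value the stored matrix would give. The sketch below is a generic illustration of that approach, not the original algorithm:

```python
import numpy as np

def cooccurrence_correlation(img, dx=1, dy=0):
    """Correlation of the co-occurrence distribution for offset (dy, dx),
    computed directly from sums, sums of squares, and cross products of
    pixel pairs; no co-occurrence matrix is ever built or stored."""
    a = img[:img.shape[0] - dy, :img.shape[1] - dx].ravel().astype(float)
    b = img[dy:, dx:].ravel().astype(float)
    n = a.size
    s_a, s_b = a.sum(), b.sum()
    ss_a, ss_b, s_ab = (a * a).sum(), (b * b).sum(), (a * b).sum()
    cov = s_ab / n - (s_a / n) * (s_b / n)
    var_a = ss_a / n - (s_a / n) ** 2
    var_b = ss_b / n - (s_b / n) ** 2
    return cov / np.sqrt(var_a * var_b)
```

Because the sums are accumulated in a single pass over the image, memory use is independent of the number of gray levels, which is the point of the matrix-free formulation.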
Chung, Hweemin; Youn, Kanwoo; Kim, Kyuyeon; Park, Kyunggeun
2017-01-01
In Korea, carbon disulfide (CS2) toxicity was an important social problem from the late 1980s to the early 1990s, but there have been few large-scale studies examining the prevalence of diseases after discontinuance of CS2 exposure. We therefore investigated the characteristics of past occupational CS2 exposure among surviving ex-workers of a rayon manufacturing plant, including a cumulative CS2 exposure index, and studied the recent prevalence of their chronic diseases many years later. We interviewed 633 ex-workers identified as having CS2 poisoning-related occupational diseases to determine demographic and occupational characteristics, and reviewed their medical records. The work environment measurement data from 1992 were used as a reference. Based on the interviews and foreign measurement documents, weights were assigned to the reference concentrations, followed by calculation of an individual exposure index: the sum of the portion of each time period multiplied by the concentration of CS2 during that period. The cumulative exposure index was 128.2 ppm on average. Workers from the spinning, electrical equipment repair, and motor repair departments were exposed to high concentrations of ≥10 ppm. Workers from the maintenance of the ejector, manufacturing of CS2, post-process, refining, maintenance and manufacturing of viscose departments were exposed to low concentrations below 10 ppm. The prevalences of hypertension, coronary artery disease, cerebrovascular disease, diabetes, arrhythmia, psychoneurotic disorder, and disorders of the nervous system and sensory organs were 69.2%, 13.9%, 24.8%, 24.5%, 1.3%, 65.7%, and 72.4%, respectively. We estimated individual cumulative CS2 exposure based on interviews, foreign measurement documents, and work environment measurement data; these estimates were similar to the work environment measurement data from 1992.
After identified as CS 2 poisoning, there are subjects over 70 years of average age with disorders of the nervous system and sensory organs, hypertension, psychoneurotic disorder, cerebrovascular disease, diabetes, coronary artery disease, and arrhythmia. Because among ex-workers of the rayon manufacturing plant, only 633 survivors recognized as CS 2 poisoning were studied, the others not identified as CS 2 poisoning should also be investigated in the future.
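The duration-weighted exposure index described above can be sketched as follows. The function name, tuple format, and example work history are illustrative only; the study's exact weighting and normalization (it reports a mean index of 128.2 ppm) may differ:

```python
def cumulative_exposure_index(periods):
    """Cumulative exposure index as a duration-weighted sum:
    each work period contributes (time in period) * (assigned CS2
    concentration, ppm). `periods` is a list of (years, ppm) tuples."""
    return sum(years * ppm for years, ppm in periods)

# Hypothetical work history: 5 years in spinning at 12 ppm,
# then 8 years in refining at 6 ppm.
print(cumulative_exposure_index([(5, 12.0), (8, 6.0)]))  # 108.0
```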
Relative residential property value as a socio-economic status indicator for health research
2013-01-01
Background Residential property is reported to be the most valuable asset people will own and therefore has the potential to be used as a socio-economic status (SES) measure. Location is generally recognised as the most important determinant of residential property value. Extending the well-established relationship between poor health and socio-economic disadvantage, and the role of residential property in the overall wealth of individuals, this study tested the relationship between the Relative Location Factor (RLF), an SES measure designed to reflect the contribution of location to residential property value, and six cardiometabolic disease risk factors: central obesity, hypertriglyceridemia, reduced high density lipoprotein (HDL), hypertension, impaired fasting glucose, and high low density lipoprotein (LDL). These risk factors were also summed and expressed as a cumulative cardiometabolic risk (CMR) score. Methods RLF was calculated using a global hedonic regression model fitted to residential property sales transaction data; the model used several residential property characteristics, but was deliberately blind to location, to predict the selling price of each property. The predicted selling price was divided by the actual selling price, and the results were interpolated across the study area and classified as tertiles. The measures used to calculate CMR were collected via clinic visits in a population-based cohort study. Models with the individual risk factors and the cumulative cardiometabolic risk (CMR) score as dependent variables were tested using log binomial and Poisson generalised linear models, respectively. Results A statistically significant relationship was found between RLF, the cumulative CMR score, and all but one of the risk factors. In all cases, participants in the most advantaged and intermediate groups had a lower risk for cardiometabolic diseases.
For the CMR score the RR for the most advantaged was 19% lower (RR = 0.81; CI 0.76-0.86; p <0.0001) and the middle group was 9% lower (RR = 0.91; CI 0.86-0.95; p <0.0001) than the least advantaged group. Conclusions This paper advances the understanding of the nexus between place, health and SES by providing an objective spatially informed SES measure for testing health outcomes and reported a robust association between RLF and several health measures. PMID:23587373
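The cumulative CMR score described above is simply a count of the six binary risk factors present. A minimal sketch (the dictionary keys are our labels for the six factors named in the abstract):

```python
RISK_FACTORS = [
    "central_obesity", "hypertriglyceridemia", "low_hdl",
    "hypertension", "impaired_fasting_glucose", "high_ldl",
]

def cmr_score(flags):
    """Cumulative cardiometabolic risk score: the number of the six
    risk factors present for a participant (0 to 6)."""
    return sum(int(flags[f]) for f in RISK_FACTORS)

# a participant with central obesity, low HDL, and hypertension
flags = {"central_obesity": True, "hypertriglyceridemia": False,
         "low_hdl": True, "hypertension": True,
         "impaired_fasting_glucose": False, "high_ldl": False}
print(cmr_score(flags))  # 3
```

A count outcome of this kind is why the study models the CMR score with a Poisson generalised linear model while the individual binary factors use log binomial models.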
NASA Astrophysics Data System (ADS)
Sumargo, E.; Cayan, D. R.; Iacobellis, S.
2014-12-01
Obtaining accurate solar radiation input to snowmelt runoff models remains a fundamental challenge for water supply forecasters in the mountainous western U.S. The variability of cloud cover is a primary source of uncertainty in estimating surface radiation, especially given that ground-based radiometer networks in mountain terrain are sparse. Thus, remotely sensed cloud properties provide a way to extend in situ observations and, more importantly, to understand cloud variability in montane environments. We utilize 17 years of the NASA/NOAA GOES visible albedo product, with 4 km spatial and half-hour temporal resolution, to investigate daytime cloud variability in the western U.S. at elevations above 800 m. REOF/PC analysis finds that the 5 leading modes account for about two-thirds of the total daily cloud albedo variability during both the whole year (ALL) and the snowmelt season (AMJJ). The AMJJ PCs are significantly correlated with de-seasonalized snowmelt, derived from CDWR CDEC and NRCS SNOTEL SWE data, and with USGS stream discharge across the western conterminous states. The sum of R² over lags from 7 days prior through the day of snowmelt/discharge accounts for as much as ~52% of snowmelt and ~44% of discharge variation. Spatially, the correlation patterns have broad footprints, with the strongest signals in regions of highest REOF weightings. That the response of snowmelt and streamflow to cloud variation is spread across several days indicates the cumulative effect of cloud variation on the energy budget in mountain catchments.
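The lagged-correlation summary above (summing R² over lags of 0 to 7 days) can be sketched as follows. Function names and the toy series are ours; the study's de-seasonalizing and significance testing are omitted, and the series are assumed non-constant:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def lagged_r2_sum(pc, response, max_lag=7):
    """Sum of squared correlations between a cloud-albedo PC and a
    response (snowmelt or discharge) at lags 0..max_lag days."""
    total = 0.0
    for lag in range(max_lag + 1):
        x = pc[:len(pc) - lag] if lag else pc
        y = response[lag:]
        total += pearson_r(x, y) ** 2
    return total
```

For perfectly correlated toy series every lag contributes R² = 1, so eight lags sum to 8; real series would give the fractional totals (~0.52, ~0.44) quoted in the abstract.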
Spatial analysis of the annual and seasonal aridity trends in Extremadura, southwestern Spain
NASA Astrophysics Data System (ADS)
Moral, Francisco J.; Paniagua, Luis L.; Rebollo, Francisco J.; García-Martín, Abelardo
2017-11-01
Knowledge of drought (or wetness) conditions is necessary not only for rational use of water resources but also for explaining landscape and ecological characteristics. An increase in aridity in many areas of the world is expected because of climate change (global warming). With the aim of analysing annual and seasonal aridity trends in Extremadura, southwestern Spain, climate data from 81 locations within the 1951-2010 period were used. After computing the De Martonne aridity index at each location, a geographic information system (GIS) and multivariate geostatistics (regression kriging) were used to map this index throughout the region. Temporal trends were then analysed using the Mann-Kendall test, and Sen's estimator was used to estimate the magnitude of the trends. Maps of aridity trends were generated by the ordinary kriging algorithm, providing a visualisation of the detected annual and seasonal tendencies. An increase in aridity, seen as a decrease in the De Martonne aridity index, was apparent during the study period, mainly at the more humid locations in the north of the region. A decreasing trend in the seasonal De Martonne aridity index was also found, but it was only statistically significant at some locations in spring and summer, with the highest decreasing rate in the north of Extremadura. Change-year detection was carried out using cumulative sum graphs, showing that the change point occurred first in spring, in the mid-1970s, then in the annual series in the late 1970s, and finally in summer at the end of the 1980s.
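Change-point detection from cumulative sum graphs, as used above, can be sketched in its simplest form: plot the running sum of deviations from the overall mean, and take the point where that sum is farthest from zero. This is a simplified version for illustration; the study's exact procedure may differ:

```python
def cusum_change_point(series):
    """Estimate a single change point as the position where the
    cumulative sum of deviations from the overall mean is farthest
    from zero. Returns the 1-based count of observations in the
    first regime."""
    mean = sum(series) / len(series)
    s, cusum = 0.0, []
    for x in series:
        s += x - mean
        cusum.append(s)
    k = max(range(len(cusum)), key=lambda i: abs(cusum[i]))
    return k + 1

# toy aridity index: level drops after the 25th year
series = [30.0] * 25 + [20.0] * 25
print(cusum_change_point(series))  # 25
```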
González-Parrado, Zulima; Valencia-Barrera, Rosa Ma; Vega-Maray, Ana Ma; Fuertes-Rodríguez, Carmen Reyes; Fernández-González, Delia
2014-09-01
Plantago L. species are very common in nitrified areas such as roadsides and their pollen is a major cause of pollinosis in temperate regions. In this study, we sampled airborne pollen grains in the city of León (NW, Spain) from January 1995 to December 2011, by using a Burkard® 7-day-recording trap. The percentage of Plantago pollen compared to the total pollen count ranged from 11% (1997) to 3% (2006) in the period under study. Peak pollen concentrations were recorded in May and June. Our 17-year analysis failed to disclose significant changes in the seasonal trend of plantain pollen concentration. In addition, there were no important changes in the start dates of pollen release and the meteorological parameters analyzed did not show significant variations in their usual trends. We analyzed the influence of several meteorological parameters on Plantago pollen concentration to explain the differences in pollen concentration trends during the study. Our results show that temperature, sun hours, evaporation, and relative humidity are the meteorological parameters best correlated to the behavior of Plantago pollen grains. In general, the years with low pollen concentrations correspond to the years with less precipitation or higher temperatures. We calculated the approximate Plantago flowering dates using the cumulative sum of daily maximum temperatures and compared them with the real bloom dates. The differences obtained were 4 days in 2009, 3 days in 2010, and 1 day in 2011 considering the complete period of pollination.
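The thermal-sum prediction described above can be sketched as follows. The threshold value and constant temperatures in the example are illustrative; the study's calibrated threshold is not given in the abstract:

```python
def predicted_bloom_day(daily_tmax, threshold):
    """Day (1-based) on which the cumulative sum of daily maximum
    temperatures first reaches `threshold`; None if never reached."""
    total = 0.0
    for day, tmax in enumerate(daily_tmax, start=1):
        total += tmax
        if total >= threshold:
            return day
    return None

# constant 15-degree maxima: a 300-degree threshold is reached on day 20
print(predicted_bloom_day([15.0] * 60, 300.0))  # 20
```

Comparing the day returned here with the observed bloom date gives the kind of 1- to 4-day differences the abstract reports.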
Prognostic value of saturated prostate cryoablation for localized prostate cancer.
Chen, Chung-Hsin; Tai, Yi-Sheng; Pu, Yeong-Shiau
2015-10-01
To evaluate the oncological outcomes and complications of patients undergoing saturated prostate cryoablation. A cohort of 208 patients treated between June 2008 and December 2012 qualified for study inclusion, each undergoing total-gland cryoablation for prostate cancer. The degree of saturation of prostate cryoablation was defined as the average prostate volume per cryoprobe (APVC) and divided into four groups (groups 1-4: <3 ml, 3 to <4 ml, 4 to <5 ml, and ≥5 ml, respectively). Post-ablative complications were measured prospectively at weeks 1, 2, 4, 8, 12, and 24 using the Common Terminology Criteria for Adverse Events. Biochemical failure was gauged by the Phoenix criterion. The Kruskal-Wallis rank sum test and Chi-square test were used to compare clinical characteristics of therapeutic subsets. The Cox proportional hazards model was applied for comparison of recurrence risk between groups. APVC group 1 had the highest pre-operative PSA value and smallest prostate size among the groups. Multivariate analysis of the risk of biochemical failure revealed that the larger the APVC, the higher the hazard (p for trend = 0.01). Compared to the group 1 patients, the hazard ratios of biochemical failure in groups 2-4 were 4.4 (confidence interval (CI): 0.5-37), 8.8 (CI 1.1-73), and 9.4 (CI 1.1-78), respectively. Nevertheless, the complication rate of APVC group 1 patients was similar to that of the other three groups. Saturating prostate cryoablation by reducing the APVC would be beneficial for cancer control without compromising patient safety.
Okamura, Akihiko; Watanabe, Masayuki; Fukudome, Ian; Yamashita, Kotaro; Yuda, Masami; Hayami, Masaru; Imamura, Yu; Mine, Shinji
2018-04-01
Minimally invasive esophagectomy (MIE) is being performed increasingly often; however, it is still associated with high morbidity and mortality. The relationship between surgical team proficiency and case load remains unclear. This study evaluates surgical outcomes during the first 3-year period after the establishment of a new surgical team. A new surgical team was established in September 2013 by two expert surgeons, each with experience of performing more than 100 MIEs. We assessed 237 consecutive patients who underwent MIE for esophageal cancer and evaluated the impact of surgical team proficiency on postoperative outcomes, as well as the team learning curve. In the cumulative sum analysis, a point of downward inflection for operative time and blood loss was observed at case 175. After 175 cases, both operative time and blood loss significantly decreased (P < 0.001 for both), and the postoperative incidence of pneumonia significantly decreased from 18.9 to 6.5% (P = 0.024). Median postoperative hospital stay also decreased from 20 to 18 days (P = 0.022). Additionally, serum CRP levels on postoperative day 1 showed a significant but weak inverse association with the number of cases (P = 0.024). In summary, after 175 cases both operative time and blood loss decreased significantly, as did the incidence of pneumonia, and surgical team proficiency may decrease serum CRP levels immediately after MIE. Surgical team proficiency based on team experience had beneficial effects on patients undergoing MIE.
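A CUSUM learning-curve analysis of the kind described above tracks the running sum of (observed value - target) across consecutive cases; the point where the curve turns downward marks where the team starts consistently beating the target. A hedged sketch (the target and toy operative times are ours, not the study's):

```python
def cusum_learning_curve(values, target):
    """CUSUM of (value - target) across consecutive cases. A rising
    curve means cases run over target; a falling curve means the team
    is now beating the target."""
    s, curve = 0.0, []
    for v in values:
        s += v - target
        curve.append(s)
    return curve

# toy series: 5 cases run 30 min over a 300-min target, then 5 run 20 under
curve = cusum_learning_curve([330] * 5 + [280] * 5, 300)
peak = curve.index(max(curve)) + 1
print(peak)  # 5: the downward inflection occurs after case 5
```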
The Impact of Climatological Variables on Kelp Canopy Area in the Santa Barbara Channel
NASA Astrophysics Data System (ADS)
Zigner, K.; Bausell, J.; Kudela, R. M.
2015-12-01
Kelp canopy area (KCA), a proxy for kelp forest health, has important implications for small- and large-scale processes pertaining to fisheries, nearshore currents, and marine ecosystems. As part of the NASA Airborne Science Research Program (SARP), this study examines the impact of ocean chemistry and climatological variables on KCA in the Santa Barbara Channel through time series analysis. El Niño Southern Oscillation (ENSO), North Pacific Gyre Oscillation (NPGO), North Pacific Oscillation (NPO), and upwelling indices, as well as sea surface temperature (SST), salinity, nitrate, and chlorophyll-a concentrations taken within the Santa Barbara Channel (1990-2014), were acquired from the Climate Prediction Center (CPC), the California Cooperative Oceanic Fisheries Investigation (CalCOFI), and Di Lorenzo's NPGO websites. These data were then averaged for winter (November-January) and summer (May-August) seasons and compared to KCA measurements derived from Landsat images via unsupervised classification. Regression, cumulative sum tests, and cross-correlation coefficients revealed a two-year lag between KCA and the NPGO, indicating the presence of an additional factor driving both variables. Further analysis suggests that the NPO may be this driving factor, as indicated by its correlation with KCA at lag 0. Comparing relationships between kelp and other variables over various time periods supports an acceleration of the NPGO and other variables in more recent years. Exploring relationships between KCA, NPGO, and NPO may provide insight into potential impacts of climate change on coastal marine ecosystems.
Langevin, Scott M; Ioannidis, John P A; Vineis, Paolo; Taioli, Emanuela
2010-10-01
There is an overwhelming abundance of genetic association studies available in the literature, which can often be collectively difficult to interpret. To address this issue, the Venice interim guidelines were established for determining the credibility of cumulative evidence. The objective of this report is to evaluate the literature on the association of common glutathione S-transferase (GST) variants (GSTM1 null, GSTT1 null, and the GSTP1 Ile105Val polymorphism) with lung cancer, and to assess the credibility of the associations using the newly proposed cumulative evidence guidelines. Information from the literature was enriched with an updated meta-analysis and a pooled analysis using data from the Genetic Susceptibility to Environmental Carcinogens database. There was a significant association between GSTM1 null and lung cancer for the meta-analysis (meta odds ratio = 1.17, 95% confidence interval: 1.10-1.25) and pooled analysis (adjusted odds ratio = 1.10, 95% confidence interval: 1.04-1.16), although substantial heterogeneity was present. No overall association between lung cancer and GSTT1 null or GSTP1 Ile105Val was found. When the Venice criteria were applied, the cumulative evidence for all associations was considered 'weak', with the exception of East Asian carriers of the G allele of GSTP1 Ile105Val, which was graded as 'moderate' evidence. Despite the large number of studies, and several statistically significant summary estimates produced by meta-analyses, application of the Venice criteria suggests extensive heterogeneity and susceptibility to bias for studies on associations of common genetic polymorphisms, such as GST variants, with lung cancer.
Gislason, Maya K; Andersen, Holly K
2016-01-01
We consider the case of intensive resource extractive projects in the Blueberry River First Nations in Northern British Columbia, Canada, as a case study. Drawing on the parallels between concepts of cumulative environmental and cumulative health impacts, we highlight three axes along which to gauge the effects of intensive extraction projects. These are environmental, health, and social justice axes. Using an intersectional analysis highlights the way in which using individual indicators to measure impact, rather than considering cumulative effects, hides the full extent by which the affected First Nations communities are impacted by intensive extraction projects. We use the case study to contemplate several mechanisms at the intersection of these axes whereby the negative effects of each not only add but also amplify through their interactions. For example, direct impact along the environmental axis indirectly amplifies other health and social justice impacts separately from the direct impacts on those axes. We conclude there is significant work still to be done to use cumulative indicators to study the impacts of extractive industry projects—like liquefied natural gas—on peoples, environments, and health. PMID:27763548
Cumulative trauma, hyperarousal, and suicidality in the general population: a path analysis.
Briere, John; Godbout, Natacha; Dias, Colin
2015-01-01
Although trauma exposure and posttraumatic stress disorder (PTSD) both have been linked to suicidal thoughts and behavior, the underlying basis for this relationship is not clear. In a sample of 357 trauma-exposed individuals from the general population, younger participant age, cumulative trauma exposure, and all three Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, PTSD clusters (reexperiencing, avoidance, and hyperarousal) were correlated with clinical levels of suicidality. However, logistic regression analysis indicated that when all PTSD clusters were considered simultaneously, only hyperarousal continued to be predictive. A path analysis confirmed that posttraumatic hyperarousal (but not other components of PTSD) fully mediated the relationship between extent of trauma exposure and degree of suicidal thoughts and behaviors.
An Extension of SIC Predictions to the Wiener Coactive Model
Houpt, Joseph W.; Townsend, James T.
2011-01-01
The survivor interaction contrast (SIC) is a powerful measure for distinguishing among candidate models of human information processing. One class of models to which SIC analysis can be applied is the coactive, or channel summation, class of models of human information processing. In general, parametric forms of coactive models assume that responses are made based on the first passage time across a fixed threshold of a sum of stochastic processes. Previous work has shown that the SIC for a coactive model based on the sum of Poisson processes has a distinctive down-up-down form, with an early negative region that is smaller than the later positive region. In this note, we demonstrate that a coactive process based on the sum of two Wiener processes has the same SIC form. PMID:21822333
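For readers wanting to compute the SIC itself: it is the double difference of the four survivor functions obtained by factorially slowing or speeding each of the two channels, SIC(t) = [S_LL(t) - S_LH(t)] - [S_HL(t) - S_HH(t)], where L/H denote low/high salience in the first and second channel. A minimal sketch over sampled time points:

```python
def sic(s_ll, s_lh, s_hl, s_hh):
    """Survivor interaction contrast at each sampled time point:
    SIC(t) = [S_LL(t) - S_LH(t)] - [S_HL(t) - S_HH(t)],
    given the four survivor functions as equal-length sequences."""
    return [(ll - lh) - (hl - hh)
            for ll, lh, hl, hh in zip(s_ll, s_lh, s_hl, s_hh)]

# one time point: survivor probabilities 0.9, 0.8, 0.7, 0.5
out = sic([0.9], [0.8], [0.7], [0.5])  # [(0.1) - (0.2)] = about -0.1
```

For the coactive models discussed above, plotting this contrast over t would trace the down-up-down signature, with the early negative region smaller than the later positive one.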
1984-12-01
... total sum of squares at the center points minus the correction factor for the mean at the center points (SS_PE = Y′Y − n₁Ȳ²), where n₁ is the number of ... (SS_LOF = SS_res − SS_PE). The sum of squares due to pure error estimates σ², and the sum of squares due to lack of fit estimates σ² plus a bias term if ... Response surface methodology ANOVA (Source, d.f., SS, MS): Regression: n, b′X′Y, b′X′Y/n; Residual: m − n, Y′Y − b′X′Y, (Y′Y − b′X′Y)/(m − n); Pure error: n₁ − 1, Y′Y − n₁Ȳ², SS_PE/(n₁ − ...
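The pure-error / lack-of-fit split of the residual sum of squares referred to above can be sketched numerically. Replicate groupings and the residual SS are supplied by the caller; function and variable names are ours:

```python
def pure_error_and_lack_of_fit(groups, ss_residual):
    """Split a residual sum of squares into pure error and lack of fit.
    `groups` is a list of replicate response lists, one per replicated
    design point. SS_PE sums squared deviations from each group mean;
    SS_LOF = SS_residual - SS_PE."""
    ss_pe = sum((y - sum(g) / len(g)) ** 2 for g in groups for y in g)
    return ss_pe, ss_residual - ss_pe

# two replicated design points with responses (1, 3) and (5, 7),
# and a residual SS of 10 from the fitted model
print(pure_error_and_lack_of_fit([[1.0, 3.0], [5.0, 7.0]], 10.0))  # (4.0, 6.0)
```

Dividing each component by its degrees of freedom and taking the ratio gives the usual F test for lack of fit.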
A Poisson process approximation for generalized K-S confidence regions
NASA Technical Reports Server (NTRS)
Arsham, H.; Miller, D. R.
1982-01-01
One-sided confidence regions for continuous cumulative distribution functions are constructed using empirical cumulative distribution functions and the generalized Kolmogorov-Smirnov distance. The band width of such regions becomes narrower in the right or left tail of the distribution. To avoid tedious computation of confidence levels and critical values, an approximation based on the Poisson process is introduced. This approximation provides a conservative confidence region; moreover, the approximation error decreases monotonically to 0 as sample size increases. Critical values necessary for implementation are given. Applications are made to the areas of risk analysis, investment modeling, reliability assessment, and analysis of fault-tolerant systems.
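As a baseline for the construction above: the classical one-sided Kolmogorov-Smirnov band has constant width below the empirical CDF, and it is this constant-width band that the generalized distance refines by letting the width shrink in a tail. A simplified sketch of the classical version (not the paper's generalized construction):

```python
import bisect

def ecdf(sample):
    """Empirical CDF: F_n(x) = (number of observations <= x) / n."""
    xs = sorted(sample)
    n = len(xs)
    return lambda x: bisect.bisect_right(xs, x) / n

def lower_confidence_band(f_n, d):
    """One-sided lower confidence bound on the true CDF:
    G(x) = max(F_n(x) - d, 0), where d is a KS critical value
    chosen for the desired confidence level."""
    return lambda x: max(f_n(x) - d, 0.0)

F = ecdf([3.1, 1.4, 2.7, 4.2])
G = lower_confidence_band(F, 0.25)
print(F(2.7), G(2.7))  # 0.5 0.25
```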
Automated nystagmus analysis. [on-line computer technique for eye data processing
NASA Technical Reports Server (NTRS)
Oman, C. M.; Allum, J. H. J.; Tole, J. R.; Young, L. R.
1973-01-01
Several methods have recently been used for on-line analysis of nystagmus: A digital computer program has been developed to accept sampled records of eye position, detect fast phase components, and output cumulative slow phase position, continuous slow phase velocity, instantaneous fast phase frequency, and other parameters. The slow phase velocity is obtained by differentiation of the calculated cumulative position rather than the original eye movement record. Also, a prototype analog device has been devised which calculates the velocity of the slow phase component during caloric testing. Examples of clinical and research eye movement records analyzed with these devices are shown.
An Analysis of Escort Formations
1992-03-01
... "error sum of squares" is denoted by SS_PE and calculated by SS_PE = Σ_j Σ_i (y_ij − ȳ_j)² (5.13), where j denotes unique design points and i denotes the ... observations. The difference between SS_E and SS_PE represents the deviation between the observations and the model due to inadequacies in the model. This ... difference is called the sum of squares due to lack of fit and is denoted by SS_LOF. The ratio of SS_LOF to SS_PE, each divided by its respective degrees of freedom ...
Arnetz, Bengt B.; Broadbridge, Carissa L.; Jamil, Hikmet; Lumley, Mark A.; Pole, Nnamdi; Barkho, Evone; Fakhouri, Monty; Talia, Yousif Rofa; Arnetz, Judith E.
2014-01-01
Background Trauma exposure contributes to poor mental health among refugees, and exposure often is measured using a cumulative index of items from the Harvard Trauma Questionnaire (HTQ). Few studies, however, have asked whether trauma subtypes derived from the HTQ could be superior to this cumulative index in predicting mental health outcomes. Methods A community sample of recently arrived Iraqi refugees (N = 298) completed the HTQ and measures of posttraumatic stress disorder (PTSD) and depression symptoms. Results Principal components analysis of HTQ items revealed a 5-component subtype model of trauma that accounted for more item variance than a 1-component solution. These trauma subtypes also accounted for more variance in PTSD and depression symptoms (12% and 10%, respectively) than did the cumulative trauma index (7% and 3%, respectively). Discussion Trauma subtypes provided more information than cumulative trauma in the prediction of negative mental health outcomes. Therefore, use of these subtypes may enhance the utility of the HTQ when assessing at-risk populations. PMID:24549491
Hanson, Jamie L.; Chung, Moo K.; Avants, Brian B.; Rudolph, Karen D.; Shirtcliff, Elizabeth A.; Gee, James C.; Davidson, Richard J.; Pollak, Seth D.
2012-01-01
A large corpus of research indicates exposure to stress impairs cognitive abilities, specifically executive functioning dependent on the prefrontal cortex (PFC). We collected structural MRI scans (n=61), well-validated assessments of executive functioning, and detailed interviews assessing stress exposure in humans, to examine whether cumulative life stress affected brain morphometry and one type of executive functioning, spatial working memory, during adolescence—a critical time of brain development and reorganization. Analysis of variations in brain structure revealed that cumulative life stress and spatial working memory were related to smaller volumes in the PFC, specifically prefrontal gray and white matter between the anterior cingulate and the frontal poles. Mediation analyses revealed that individual differences in prefrontal volumes accounted for the association between cumulative life stress and spatial working memory. These results suggest that structural changes in the PFC may serve as a mediating mechanism through which greater cumulative life stress engenders decrements in cognitive functioning. PMID:22674267
Radiation exposure assessment for portsmouth naval shipyard health studies.
Daniels, R D; Taulbee, T D; Chen, P
2004-01-01
Occupational radiation exposures of 13,475 civilian nuclear shipyard workers were investigated as part of a retrospective mortality study. Estimates of annual, cumulative and collective doses were tabulated for future dose-response analysis. Record sets were assembled and amended through range checks, examination of distributions and inspection. Methods were developed to adjust for administrative overestimates and dose from previous employment. Uncertainties from doses below the recording threshold were estimated. Low-dose protracted radiation exposures from submarine overhaul and repair predominated. Cumulative doses are best approximated by a hybrid log-normal distribution with arithmetic mean and median values of 20.59 and 3.24 mSv, respectively. The distribution is highly skewed with more than half the workers having cumulative doses <10 mSv and >95% having doses <100 mSv. The maximum cumulative dose is estimated at 649.39 mSv from 15 person-years of exposure. The collective dose was 277.42 person-Sv with 96.8% attributed to employment at Portsmouth Naval Shipyard.
NASA Astrophysics Data System (ADS)
Farrow, Scott; Scott, Michael
2013-05-01
Floods are risky events ranging from small to catastrophic. Although expected flood damages are frequently used for economic policy analysis, alternative measures such as option price (OP) and cumulative prospect value exist. The empirical magnitudes of these measures, whose theoretical ranking is ambiguous, are investigated using case study data from Baltimore City. The base case OP measure increases mean willingness to pay over the expected damage value by about 3%, a value which is increased by greater risk aversion, reduced by increased wealth, and only slightly altered by higher limits of integration. The base measure based on cumulative prospect theory is about 46% less than expected damages, with estimates declining when alternative parameters are used. The method of aggregation is shown to be important in the cumulative prospect case, which can lead to an estimate up to 41% larger than expected damages. Expected damages remain a plausible, and the most easily computed, measure for analysts.
Arnetz, Bengt B; Broadbridge, Carissa L; Jamil, Hikmet; Lumley, Mark A; Pole, Nnamdi; Barkho, Evone; Fakhouri, Monty; Talia, Yousif Rofa; Arnetz, Judith E
2014-12-01
Trauma exposure contributes to poor mental health among refugees, and exposure often is measured using a cumulative index of items from the Harvard Trauma Questionnaire (HTQ). Few studies, however, have asked whether trauma subtypes derived from the HTQ could be superior to this cumulative index in predicting mental health outcomes. A community sample of recently arrived Iraqi refugees (N = 298) completed the HTQ and measures of posttraumatic stress disorder (PTSD) and depression symptoms. Principal components analysis of HTQ items revealed a 5-component subtype model of trauma that accounted for more item variance than a 1-component solution. These trauma subtypes also accounted for more variance in PTSD and depression symptoms (12 and 10%, respectively) than did the cumulative trauma index (7 and 3%, respectively). Trauma subtypes provided more information than cumulative trauma in the prediction of negative mental health outcomes. Therefore, use of these subtypes may enhance the utility of the HTQ when assessing at-risk populations.
NASA Technical Reports Server (NTRS)
Alexander, W. M.; Goad, S.; Mcdonald, R. A.; Tanner, W. G., Jr.; Pollock, J. J.
1989-01-01
The Dust Impact Detection System (DIDSY) aboard the Giotto spacecraft provided information on the dust flux, mass spectrum, and cumulative mass distribution flux in the coma of Comet Halley. Analysis of discrete pulse height data of cometary particles in the mass range between 4.0 × 10⁻¹⁰ g and 6.0 × 10⁻⁶ g registered by the Giotto DIDSY detectors 2, 3, and 4 has been completed, and a cumulative flux has been determined for this size range of particles. Inside the cometopause, anomalous peaks have been identified as deviations from a 1/R² curve in both pre- and post-encounter measurements.
Self-learning Monte Carlo method and cumulative update in fermion systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Junwei; Shen, Huitao; Qi, Yang
2017-06-07
In this study, we develop the self-learning Monte Carlo (SLMC) method, a general-purpose numerical method recently introduced to simulate many-body systems, for studying interacting fermion systems. Our method uses a highly efficient update algorithm, which we design and dub "cumulative update", to generate new candidate configurations in the Markov chain based on a self-learned bosonic effective model. From a general analysis and a numerical study of the double exchange model as an example, we find that the SLMC with cumulative update drastically reduces the computational cost of the simulation, while remaining statistically exact. Remarkably, its computational complexity is far less than that of the conventional algorithm with local updates.
NASA Astrophysics Data System (ADS)
Wang, Xiang-qiu; Zhang, Huojun; Xie, Wen-xi
2017-08-01
Based on a similar-material model test of a full tunnel, the theory of elastic wave propagation and intelligent ultrasonic testing technology were used to study the dynamic cumulative damage characteristics of a tunnel's lining structure under the dynamic loads of high-speed trains. Furthermore, the dynamic damage variable of the lining structure of a high-speed railway tunnel was obtained. The results show that the dynamic cumulative damage of the lining structure increases nonlinearly with the cumulative number of vibration cycles, and that the part weakest with respect to dynamic cumulative damage is the arch foot of the tunnel. Much more attention should be paid to the design and operation management of high-speed railway tunnels.
Külahci, Fatih; Sen, Zekâi
2009-09-15
The classical solid/liquid distribution coefficient, K(d), for radionuclides in water-sediment systems depends on many regional parameters such as flow, geology, pH, acidity, alkalinity, total hardness, and radioactivity concentration. Considering all of these effects requires a regional analysis with an effective methodology, which in this paper is based on the concept of the cumulative semivariogram. Classical K(d) calculations are point-wise and cannot represent regional patterns; here, a regional calculation methodology is suggested through the use of the Absolute Point Cumulative SemiVariogram (APCSV) technique. The application of the methodology is presented for (137)Cs and (90)Sr measurements at a set of points in the Keban Dam reservoir, Turkey.
Downie, Laura E; Naranjo Golborne, Cecilia; Chen, Merry; Ho, Ngoc; Hoac, Cam; Liyanapathirana, Dasun; Luo, Carol; Wu, Ruo Bing; Chinnery, Holly R
2018-06-01
Our aim was to compare regeneration of the sub-basal nerve plexus (SBNP) and superficial nerve terminals (SNT) following corneal epithelial injury. We also sought to compare agreement when quantifying nerve parameters using different image analysis techniques. Anesthetized, female C57BL/6 mice received central 1-mm corneal epithelial abrasions. Four weeks post-injury, eyes were enucleated and processed for PGP9.5 to visualize the corneal nerves using wholemount immunofluorescence staining and confocal microscopy. The percentage area of the SBNP and SNT was quantified using ImageJ automated thresholds, ImageJ manual thresholds, and manual tracings in NeuronJ. Nerve sum length was quantified using NeuronJ and Imaris. Agreement between methods was assessed with Bland-Altman analyses. Four weeks post-injury, the sum length of nerve fibers in the SBNP, but not the SNT, was reduced compared with naïve eyes. In the periphery, but not the central cornea, of both naïve and injured eyes, nerve fiber lengths in the SBNP and SNT were strongly correlated. For quantifying SBNP nerve axon area, all image analysis methods were highly correlated. In the SNT, there was poor correlation between manual methods and auto-thresholding, with a trend toward underestimating nerve fiber area using auto-thresholding when higher proportions of nerve fibers were present. In conclusion, four weeks after superficial corneal injury, there is differential recovery of epithelial nerve axons: SBNP sum length is reduced, whereas the sum length of SNTs is similar to that in naïve eyes. Care should be taken when selecting image analysis methods to compare nerve parameters at different depths of the corneal epithelium due to differences in background autofluorescence. Copyright © 2018 Elsevier Ltd. All rights reserved.
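The Bland-Altman analysis used above to compare image-analysis methods has a simple computational core: summarize the differences between paired measurements by their mean (the bias) and the 95% limits of agreement. A minimal numpy sketch, with synthetic data standing in for the manual and auto-threshold measurements (all numbers are illustrative, not from the study):

```python
import numpy as np

def bland_altman(m1, m2):
    """Bland-Altman agreement between two measurement methods applied to
    the same samples: returns the bias (mean difference) and the 95%
    limits of agreement (bias +/- 1.96 SD of the differences)."""
    diff = m1 - m2
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

rng = np.random.default_rng(3)
truth = rng.uniform(10, 40, 200)                        # hypothetical true nerve area (%)
manual = truth + rng.normal(0, 1.0, 200)                # manual tracing: unbiased
auto = truth - 0.05 * truth + rng.normal(0, 1.0, 200)   # auto-threshold: underestimates

bias, (lo, hi) = bland_altman(manual, auto)
# A positive bias here reflects the systematic underestimation by "auto".
```

A scatter of the pairwise means against the differences, with horizontal lines at `bias`, `lo`, and `hi`, gives the usual Bland-Altman plot.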
NASA Astrophysics Data System (ADS)
Hasanian, Mostafa; Lissenden, Cliff J.
2017-08-01
The extraordinary sensitivity of nonlinear ultrasonic waves to the early stages of material degradation makes them excellent candidates for nondestructive material characterization. However, distinguishing weak material nonlinearity from instrumentation nonlinearity remains problematic for second harmonic generation approaches. A solution to this problem is to mix waves having different frequencies and to let their mutual interaction generate sum and difference harmonics at frequencies far from those of the instrumentation. Mixing of bulk waves and surface waves has been researched for some time, but mixing of guided waves has not yet been investigated in depth. A unique aspect of guided waves is their dispersive nature, which means we must ensure that a wave can propagate at the sum or difference frequency. A wave vector analysis is conducted that enables selection of primary waves traveling in any direction that generate phase-matched secondary waves. We have tabulated many sets of primary waves and phase-matched sum and difference harmonics. An example wave mode triplet of two counter-propagating collinear shear horizontal waves that interact to generate a symmetric Lamb wave at the sum frequency is simulated using finite element analysis, and then laboratory experiments are conducted. The finite element simulation eliminates issues associated with instrumentation nonlinearities and signal-to-noise ratio. A straightforward subtraction method is used in the experiments to identify the mutual interaction induced by material nonlinearity and to show that the generated Lamb wave propagates on its own and is large enough to measure. Since the Lamb wave has a different polarity from the shear horizontal waves, the material nonlinearity is clearly identifiable. Thus, the mutual interactions of shear horizontal waves in plates could enable volumetric characterization of material in remote regions from transducers mounted on just one side of the plate.
Zenic, Natasa; Peric, Mia; Zubcevic, Nada Grcic; Ostojic, Zdenko; Ostojic, Ljerka
2010-06-01
There have been few studies comparing substance use and misuse (SU&M) in different performing arts forms. Herein, we identified and compared SU&M in women studying an art (ballet, n = 21), a non-Olympic sport (dance sport, n = 25), and an Olympic sport (synchronized swimming, n = 23). The sample of variables comprised general, educational, and sport factors, as well as SU&M data, including consumption of opiates, cigarettes, alcohol, nutritional supplements, doping behaviors, and beliefs. Using the Kruskal-Wallis test, we found no significant differences between study groups in potential doping behaviors. Most of the examinees reported that they did not rely on physicians' and/or coaches' opinions regarding doping. Only sport dancers recognized their consumption of cannabis as a violation of anti-doping rules. Those more convinced that doping habits are present in their sport (or art) have a certain tendency toward doping usage. In conclusion, a strong anti-doping campaign within the studied arts is suggested, focusing on the health-related problems of SU&M.
Ling, Hangjian; Katz, Joseph
2014-09-20
This paper deals with two issues affecting the application of digital holographic microscopy (DHM) for measuring the spatial distribution of particles in a dense suspension, namely discriminating between real and virtual images and accurate detection of the particle center. Previous methods to separate real and virtual fields have involved applications of multiple phase-shifted holograms, combining reconstructed fields of multiple axially displaced holograms, and analysis of intensity distributions of weakly scattering objects. Here, we introduce a simple approach based on simultaneously recording two in-line holograms, whose planes are separated by a short distance from each other. This distance is chosen to be longer than the elongated trace of the particle. During reconstruction, the real images overlap, whereas the virtual images are displaced by twice the distance between hologram planes. Data analysis is based on correlating the spatial intensity distributions of the two reconstructed fields to measure displacement between traces. This method has been implemented for both synthetic particles and a dense suspension of 2 μm particles. The correlation analysis readily discriminates between real and virtual images of a sample containing more than 1300 particles. Consequently, we can now implement DHM for three-dimensional tracking of particles when the hologram plane is located inside the sample volume. Spatial correlations within the same reconstructed field are also used to improve the detection of the axial location of the particle center, extending previously introduced procedures to suspensions of microscopic particles. For each cross section within a particle trace, we sum the correlations among intensity distributions in all planes located symmetrically on both sides of the section. This cumulative correlation has a sharp peak at the particle center. Using both synthetic and recorded particle fields, we show that the uncertainty in localizing the axial location of the center is reduced to about one particle's diameter.
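The cumulative-correlation idea for locating a particle's axial center can be sketched numerically: for each candidate plane, sum the correlations between intensity distributions in planes placed symmetrically on either side of it; the sum peaks where the reconstructed trace is most symmetric, i.e., at the center. The toy numpy version below uses a blob whose width grows away from best focus as a stand-in for a defocused particle image; it is not the authors' reconstruction code, and all shapes and parameters are invented:

```python
import numpy as np

def cumulative_symmetric_correlation(stack, dmin=5):
    """Score each candidate center plane k by the average normalized
    correlation between plane pairs (k-d, k+d) symmetric about k.
    Planes too close to the stack edges (fewer than dmin pairs) are skipped."""
    nz = stack.shape[0]
    score = np.full(nz, -np.inf)
    for k in range(nz):
        dmax = min(k, nz - 1 - k)
        if dmax < dmin:
            continue
        total = 0.0
        for d in range(1, dmax + 1):
            a = stack[k - d].ravel() - stack[k - d].mean()
            b = stack[k + d].ravel() - stack[k + d].mean()
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            total += float(a @ b) / denom if denom > 0 else 0.0
        score[k] = total / dmax
    return score

# Toy "reconstructed trace": a blob whose width grows away from the true
# center plane, mimicking defocus on either side of best focus.
nz, true_center = 41, 23
yy, xx = np.meshgrid(np.arange(32), np.arange(32), indexing="ij")
r2 = (yy - 16.0) ** 2 + (xx - 16.0) ** 2
rng = np.random.default_rng(0)
stack = np.empty((nz, 32, 32))
for z in range(nz):
    sigma = 2.0 + 0.3 * abs(z - true_center)
    stack[z] = np.exp(-0.5 * r2 / sigma**2) + 0.002 * rng.standard_normal((32, 32))

score = cumulative_symmetric_correlation(stack)
est_center = int(np.argmax(score))
```

Only at the true center are the paired planes equally defocused, so each pair correlates near 1; off-center candidates pair a sharper plane with a blurrier one and score lower.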
Construct validity of the abbreviated mental test in older medical inpatients.
Antonelli Incalzi, R; Cesari, M; Pedone, C; Carosella, L; Carbonin, P U
2003-01-01
To evaluate validity and internal structure of the Abbreviated Mental Test (AMT), and to assess the dependence of the internal structure upon the characteristics of the patients examined. Cross-sectional examination using data from the Italian Group of Pharmacoepidemiology in the Elderly (GIFA) database. Twenty-four acute care wards of Geriatrics or General Medicine. Two thousand eight hundred and eight patients consecutively admitted over a 4-month period. Demographic characteristics, functional status, medical conditions and performance on AMT were collected at discharge. Sensitivity, specificity and predictive values of the AMT <7 versus a diagnosis of dementia made according to DSM-III-R criteria were computed. The internal structure of AMT was assessed by principal component analysis. The analysis was performed on the whole population and stratified for age (<65, 65-80 and >80 years), gender, education (<6 or >5 years) and presence of congestive heart failure (CHF). AMT achieved high sensitivity (81%), specificity (84%) and negative predictive value (99%), but a low positive predictive value of 25%. The principal component analysis isolated two components: the former represents orientation to time and space and explains 45% of AMT variance; the latter is linked to memory and attention and explains 13% of variance. Comparable results were obtained after stratification by age, gender or education. In patients with CHF, only 48.3% of the cumulative variance was explained; the factor accounting for most (34.6%) of the variance explained was mainly related to the three items assessing memory. AMT >6 rules out dementia very reliably, whereas AMT <7 requires a second-level cognitive assessment to confirm dementia. AMT is bidimensional and maintains the same internal structure across classes defined by selected social and demographic characteristics, but not in CHF patients. It is likely that its internal structure depends on the type of patients. The use of a sum-score could conceal some part of the information provided by the AMT. Copyright 2003 S. Karger AG, Basel
Musaeva, T S; Karipidi, M K; Zabolotskikh, I B
2016-11-01
To perform a comprehensive assessment of water balance on the basis of daily balance, cumulative balance and 10% body weight gain, and of their role in the development of early complications after major abdominal surgery. A retrospective study of the perioperative period in 150 patients who underwent major abdominal surgery was performed. The physical condition of the patients corresponded to ASA class 3. The average age was 46 (38-62) years. The stages of the research were: analysis of daily balance and cumulative balance in the complicated and uncomplicated groups and their role in the development of complications; the timing of development of complications and their possible relationship with fluid overload; and changes in the level of albumin within 10 days of the postoperative period. The analysis of complications did not show significant differences between the complicated and uncomplicated groups according to water balance during surgery and by the end of the first day. Construction of the area under the ROC curve (AUROC) showed that balance in the intraoperative period and on the first day, and balance on the second day, had low resolution for predicting complications. Significant differences according to cumulative balance were observed from the third day of the postoperative period. Also, from the third day of the postoperative period there was good resolution for prediction of postoperative complications according to cumulative balance, with a cut-off point of >50.7 ml/kg. Excessive infusion therapy is a predictor of adverse outcome in patients after major abdominal surgery. Therefore, after 3 days of the postoperative period it is important to maintain mechanisms for the excretion of excess fluid or to limit infusion therapy.
Bongers, Suzan; Slottje, Pauline; Kromhout, Hans
2018-07-01
To assess the association between long-term exposure to static magnetic fields (SMF) in a magnetic resonance imaging (MRI)-manufacturing environment and hypertension. In an occupational cohort of male workers (n = 538) of an MRI-manufacturing facility, the first and last available blood pressure measurements from the facility's medical surveillance scheme were associated with modeled cumulative exposure to SMF. Exposure modeling was based on linkage of individual job histories from the facility's personnel records with a facility-specific historical job exposure matrix. Hypertension was defined as a systolic pressure above 140 mm Hg and/or a diastolic blood pressure above 90 mm Hg. Logistic regression models were used to associate cumulative SMF exposure with hypertension while adjusting for age, body mass index and blood pressure at the time of first blood pressure measurement. Stratified analysis by exposure duration was performed similarly. High cumulative exposure to SMF (≥7.4 kTesla·minutes) was positively associated with development of hypertension (Odds Ratio [OR] 2.32, 95% confidence interval [CI] 1.27 - 4.25, P = 0.006). Stratified analysis showed a stronger association for those with high cumulative SMF exposure within a period up to 10 years (OR 3.96, 95% CI 1.62 - 9.69, P = 0.003), but no significant association was found for (high) cumulative exposure accumulated over a period of 10 or more years. Our findings suggest SMF exposure intensity to be more important than exposure duration for the risk of developing hypertension. Our data revealed that exposure to high levels of MRI-related SMF during MRI-manufacturing might be associated with developing hypertension. Copyright © 2018 Elsevier Inc. All rights reserved.
Costa, Amine Farias; Hoek, Gerard; Brunekreef, Bert; Ponce de Leon, Antônio C M
2017-03-01
Evaluation of short-term mortality displacement is essential to accurately estimate the impact of short-term air pollution exposure on public health. We quantified mortality displacement by estimating single-day lag effects and cumulative effects of air pollutants on mortality using distributed lag models. We performed a daily time-series analysis of nonaccidental and cause-specific mortality among elderly residents of São Paulo, Brazil, between 2000 and 2011. Effects of particulate matter smaller than 10 μm (PM10), nitrogen dioxide (NO2) and carbon monoxide (CO) were estimated in Poisson generalized additive models. Single-day lag effects of air pollutant exposure were estimated for 0-, 1- and 2-day lags. Distributed lag models with lags of 0-10, 0-20 and 0-30 days were used to assess mortality displacement and potential cumulative exposure effects. PM10, NO2 and CO were significantly associated with nonaccidental and cause-specific deaths in both single-day lag and cumulative lag models. Cumulative effect estimates for 0-10 days were larger than estimates for single-day lags. Cumulative effect estimates for 0-30 days were essentially zero for nonaccidental and circulatory deaths but remained elevated for respiratory and cancer deaths. We found evidence of mortality displacement within 30 days for nonaccidental and circulatory deaths in elderly residents of São Paulo. We did not find evidence of mortality displacement within 30 days for respiratory or cancer deaths. Citation: Costa AF, Hoek G, Brunekreef B, Ponce de Leon AC. 2017. Air pollution and deaths among elderly residents of São Paulo, Brazil: an analysis of mortality displacement. Environ Health Perspect 125:349-354; http://dx.doi.org/10.1289/EHP98.
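The distinction drawn above between single-day and cumulative lag effects can be illustrated numerically: fit a coefficient for each lag 0..L, then sum them to obtain the cumulative effect, which shrinks toward zero when later lags turn negative (mortality displacement). The sketch below uses an ordinary linear model rather than the Poisson GAMs of the study, and every coefficient is invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n_days, max_lag = 3000, 10
x = rng.gamma(shape=4.0, scale=10.0, size=n_days)  # daily PM10-like series

# Invented lag effects: harm at short lags, small negative effects later,
# so the 0-10 day cumulative effect is smaller than the lag-0 effect alone.
true_beta = np.array([0.5, 0.3, 0.2, 0.1, 0.0,
                      -0.05, -0.05, -0.05, -0.05, -0.05, -0.05])

# Lagged design matrix: row i corresponds to day t = max_lag + i,
# and X[i, l] = x[t - l] for lags l = 0..max_lag.
X = np.column_stack([x[max_lag - l: n_days - l] for l in range(max_lag + 1)])
y = X @ true_beta + rng.normal(0.0, 5.0, size=X.shape[0])

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
cumulative = np.cumsum(beta_hat)  # cumulative effect over lags 0..l
```

Here `cumulative[-1]` (the 0-10 day cumulative effect) is smaller than the running total over the first few lags, mimicking the partial displacement pattern described in the abstract.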
Manipulations of Cartesian Graphs: A First Introduction to Analysis.
ERIC Educational Resources Information Center
Lowenthal, Francis; Vandeputte, Christiane
1989-01-01
Introduces an introductory module for analysis. Describes stock of basic functions and their graphs as part one and three methods as part two: transformations of simple graphs, the sum of stock functions, and upper and lower bounds. (YP)
Factor structure of the functional movement screen in marine officer candidates.
Kazman, Josh B; Galecki, Jeffrey M; Lisman, Peter; Deuster, Patricia A; OʼConnor, Francis G
2014-03-01
Functional movement screening (FMS) is a musculoskeletal assessment that is intended to fill a gap between preparticipation examinations and performance tests. Functional movement screening consists of 7 standardized movements involving multiple muscle groups that are rated 0-3 during performance; scores are combined into a final score, which is intended to predict injury risk. Use of a sum-score in this manner assumes that the items are unidimensional and the scores are internally consistent, which are measures of internal reliability. Despite research into the FMS' predictive value and interrater reliability, research has not assessed its psychometric properties. The present study is a standard psychometric analysis and the first to assess the internal consistency and factor structure of the FMS, using Cronbach's alpha and exploratory factor analysis (EFA). Using a cohort of 877 male and 57 female Marine officer candidates who performed the FMS, EFA of polychoric correlations with varimax rotation was conducted to explore the structure of the FMS. Tests were repeated on the original scores, which integrated feelings of pain during movement (0-3), and then on scores discounting the pain instruction and based only on the performance (1-3), to determine whether pain ratings affected the factor structure. The average FMS score was 16.7 ± 1.8. Cronbach's alpha was 0.39. Exploratory factor analysis yielded 2 components, accounting for 21% and 17% of the variance and consisting of separate individual movements (shoulder mobility and deep squat, respectively). Analysis of scores discounting pain showed similar results. The factor structures were not interpretable, and the low Cronbach's alpha suggests a lack of internal consistency in FMS sum scores. Results do not offer support for validity of the FMS sum score as a unidimensional construct. In the absence of additional psychometric research, caution is warranted when using the FMS sum score.
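Cronbach's alpha, the internal-consistency statistic reported above, is computed directly from item variances and the variance of the sum score: alpha = k/(k-1) * (1 - sum of item variances / variance of sum score). A short numpy sketch on synthetic data; the one-factor structure and all numbers are illustrative, not the FMS data:

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: (n_subjects, k_items) array.
    alpha = k/(k-1) * (1 - sum of item variances / variance of the sum score)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
n, k = 500, 7
trait = rng.standard_normal(n)
# Items driven by one shared trait -> internally consistent, high alpha.
consistent = trait[:, None] + 0.5 * rng.standard_normal((n, k))
# Items with nothing in common -> alpha near zero.
unrelated = rng.standard_normal((n, k))

a_hi = cronbach_alpha(consistent)
a_lo = cronbach_alpha(unrelated)
```

An alpha like the 0.39 reported for the FMS sits between these extremes, consistent with items that share only a weak common factor.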
NASA Astrophysics Data System (ADS)
Voynova, Y. G.; Petersen, W.; Brix, H.
2016-02-01
Due to a number of well documented and unusual atmospheric conditions, the late-spring of 2013 in Central and Eastern Europe was colder and wetter than usual, with saturated soils and higher than average cumulative precipitation. Additional precipitation at the end of May, and beginning of June 2013, caused widespread floods within the Danube and Elbe Rivers, and billions of euros in damages. Within the Elbe watershed, the discharge generated under these conditions was the largest among all summer floods and the second largest on record over the last 140 years (based on daily discharges). The high-frequency monitoring network of the Coastal Observing System for Northern and Arctic Seas (COSYNA) captured the influence of this major freshwater influx on the German Bight. Data from an Elbe Estuary (Cuxhaven) monitoring station, and from a FerryBox aboard a ferry travelling between Büsum and Helgoland, documented the salinity changes in the German Bight, which persisted for a month after the peak river discharge. The flood generated a large influx of dissolved and particulate organic carbon, associated with the freshwater plume, while surface dissolved oxygen between Büsum and Helgoland became undersaturated (not typical for the summer). The Federal Maritime and Hydrographic Agency (BSH) also reported unusually high nutrient concentrations in the German Bight caused by the flood. These conditions subsequently generated a month-long chlorophyll bloom, prolonged dissolved oxygen supersaturation, and higher than usual surface water pH within the German Bight. In the context of predicted increase in frequency of extreme discharge events due to climate change, the June 2013 flood-related biogeochemical changes could become more ubiquitous in the future, and should be considered in management and modeling efforts.
Cendagorta, Joseph R; Bačić, Zlatko; Tuckerman, Mark E
2018-03-14
We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.
The Cumulative Effect of Health Adversities on Children's Later Academic Achievement.
Quach, Jon; Nguyen, Cattram; O'Connor, Meredith; Wake, Melissa
We aimed to determine whether the accumulation of physical, psychosocial, and combined health adversities measured at age 8 to 9 years predicts worsening of academic scores cross-sectionally at 8 to 9 and longitudinally at 10 to 11 years. Design: Longitudinal data from Waves 3 and 4 in the Longitudinal Study of Australian Children (83% of 4983 retained). Exposures (8-9 years): Physical health adversities (yes/no; summed range, 0-5): overweight, special health care needs, chronic illness, PedsQL Physical, and global health. Psychosocial health adversities (yes/no; summed range, 0-4): parent- and teacher-reported behavior, PedsQL Psychosocial, sleep problems. Combined health adversities (range, 0-9). Outcomes (8-9 and 10-11 years): National academic standardized test scores. Analysis: Generalized estimating equations, accounting for multiple academic domains in each year and for socioeconomic position and cognition. At 8 to 9 years, 23.9%, 9.9%, and 5.3% had 1, 2, or ≥3 physical health adversities, respectively, while 27.2%, 9.5%, and 4.9% had 1, 2, or ≥3 psychosocial health adversities. For each additional health adversity at 8 to 9 years, academic scores fell incrementally in year 3 and year 5 (both P < .001), with reductions of at least 0.4 SD for ≥3 health adversities. Number was more important than type (physical, psychosocial) of adversity. The accumulation of health adversities predicts poorer academic achievement up to 2 years later. Interventions might need to address multiple domains to improve child academic outcomes and be delivered across the health-education interface. Copyright © 2017 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.
Cost calculation and prediction in adult intensive care: a ground-up utilization study.
Moran, J L; Peisach, A R; Solomon, P J; Martin, J
2004-12-01
The ability of various proxy cost measures, including therapeutic activity scores (TISS and Omega) and cumulative daily severity of illness scores, to predict individual ICU patient costs was assessed in a prospective "ground-up" utilization costing study over a six-month period in 1991. Daily activity (TISS and Omega scores) and utilization in consecutive admissions to three adult university-associated ICUs were recorded by dedicated data collectors. Cost prediction used linear regression with determination (80%) and validation (20%) data sets. The cohort, 1333 patients, had a mean (SD) age of 57.5 (19.4) years (41% female) and an admission APACHE III score of 58 (27). ICU length of stay and mortality were 3.9 (6.1) days and 17.6%, respectively. Mean total TISS and Omega scores were 117 (157) and 72 (113), respectively. Mean patient costs per ICU episode (1991 $AUS) were $6801 ($10,311), with median costs of $2534, range $106 to $95,602. Dominant cost fractions were nursing 43.3% and overheads 16.9%. Inflation-adjusted year 2002 (mean) costs were $9343 ($AUS). Total costs in survivors were predicted by Omega score, summed APACHE III score and ICU length of stay; determination R2, 0.91; validation 0.88. Omega was the preferred activity score. Without the Omega score, predictors were age, summed APACHE III score and ICU length of stay; determination R2, 0.73; validation 0.73. In non-survivors, predictors were age and ICU length of stay (plus interaction), and Omega score (determination R2, 0.97; validation 0.91). Patient costs may be predicted by a combination of ICU activity indices and severity scores.
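The determination/validation procedure described above amounts to fitting a regression on roughly 80% of patients and checking R² on the held-out 20%. A hedged numpy sketch with simulated data; the predictors mirror the study's (Omega, APACHE III, length of stay), but the distributions and coefficients are invented, not the actual cost data:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000
omega = rng.gamma(2.0, 36.0, n)     # stand-in for summed Omega activity score
apache = rng.normal(58.0, 27.0, n)  # stand-in for summed APACHE III score
los = rng.gamma(1.5, 2.6, n)        # stand-in for ICU length of stay (days)
cost = 500 + 40 * omega + 15 * apache + 600 * los + rng.normal(0, 1500, n)

X = np.column_stack([np.ones(n), omega, apache, los])
idx = rng.permutation(n)
train, valid = idx[:800], idx[800:]  # 80% determination, 20% validation

beta, *_ = np.linalg.lstsq(X[train], cost[train], rcond=None)

def r_squared(y, y_hat):
    return 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

r2_determination = r_squared(cost[train], X[train] @ beta)
r2_validation = r_squared(cost[valid], X[valid] @ beta)
```

A validation R² close to the determination R², as reported in the study, indicates the fitted cost model generalizes rather than overfitting the determination set.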
NASA Astrophysics Data System (ADS)
Fayache, M. S.; Sharma, S. Shelley; Zamick, L.
1996-10-01
Shell model calculations are performed for magnetic dipole excitations in 8Be and 10Be, first with a quadrupole-quadrupole interaction (Q·Q) and then with a realistic interaction. The calculations are performed both in a 0p space and in a large space which includes all 2ℏω excitations. In the 0p space with Q·Q we have an analytic expression for the energies of all states. In this limit we find that in 10Be the L=1, S=0 scissors mode with isospin T=1 is degenerate with that of T=2. By projection from an intrinsic state we can obtain simple expressions for B(M1) to the scissors modes in 8Be and 10Be. We plot cumulative sums for energy-weighted isovector orbital transitions from J=0+ ground states to the 1+ excited states. These have the structure of a low-energy plateau and a steep rise to a high-energy plateau. The relative magnitudes of these plateaux are discussed. By comparing 8Be and 10Be we find that, contrary to the behaviour in heavy deformed nuclei, B(M1)orbital is not proportional to B(E2). On the other hand, a sum rule which relates B(M1) to the difference (B(E2)isoscalar - B(E2)isovector) succeeds in describing the difference in behaviours in the two nuclei. The results for Q·Q and the realistic interactions are compared, as are the results in the 0p space and the large (0p + 2ℏω) space. The Wigner supermultiplet scheme is a very useful guide in analyzing the shell model results.
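The cumulative sums described above are running totals of the energy-weighted strengths E·B(M1), ordered by excitation energy; strength clustered at low and high energy produces the plateau-then-rise shape. A minimal numpy illustration with invented strengths (not the shell-model values):

```python
import numpy as np

# Hypothetical discrete M1 strength distribution: a low-energy cluster and
# a high-energy cluster, mimicking the two-plateau structure of the plots.
energies = np.array([2.0, 2.5, 3.0, 9.0, 9.5, 10.0, 10.5])       # MeV
b_m1 = np.array([0.30, 0.25, 0.15, 0.80, 0.70, 0.60, 0.50])      # mu_N^2

order = np.argsort(energies)
ew = energies[order] * b_m1[order]  # energy-weighted strengths E * B(M1)
cum = np.cumsum(ew)                 # cumulative sum vs. excitation energy

# Low-energy plateau: the running total after the first cluster;
# high-energy plateau: the total energy-weighted sum.
low_plateau = cum[2]
high_plateau = cum[-1]
```

Plotting `cum` against the sorted energies gives the characteristic staircase: flat between clusters, steep wherever strength is concentrated.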
"Compacted" procedures for adults' simple addition: A review and critique of the evidence.
Chen, Yalin; Campbell, Jamie I D
2018-04-01
We review recent empirical findings and arguments proffered as evidence that educated adults solve elementary addition problems (3 + 2, 4 + 1) using so-called compacted procedures (e.g., unconscious, automatic counting); a conclusion that could have significant pedagogical implications. We begin with the large-sample experiment reported by Uittenhove, Thevenot and Barrouillet (2016, Cognition, 146, 289-303), which tested 90 adults on the 81 single-digit addition problems from 1 + 1 to 9 + 9. They identified the 12 very-small addition problems with different operands both ≤ 4 (e.g., 4 + 3) as a distinct subgroup of problems solved by unconscious, automatic counting: these items yielded a near-perfectly linear increase in answer response time (RT) yoked to the sum of the operands. Using the data reported in the article, however, we show that there are clear violations of the sum-counting model's predictions among the very-small addition problems, and that there is no real RT boundary associated with addends ≤ 4. Furthermore, we show that a well-known associative retrieval model of addition facts, the network interference theory (Campbell, 1995), predicts the results observed for these problems with high precision. We also review the other types of evidence adduced for the compacted-procedure theory of simple addition and conclude that these findings are unconvincing in their own right and only distantly consistent with automatic counting. We conclude that the cumulative evidence for fast compacted procedures for adults' simple addition does not justify revision of the long-standing assumption that direct memory retrieval is ultimately the most efficient process of simple addition for nonzero problems, let alone sufficient to recommend significant changes to basic addition pedagogy.
Transient hyperprolactinemia in infertile women with luteal phase deficiency.
Huang, K E; Bonfiglio, T A; Muechler, E K
1991-10-01
This study was conducted to evaluate the prevalence of transient hyperprolactinemia in infertile women with luteal phase deficiency. One hundred fifty-one luteal phase deficiency patients and 11 controls had serum prolactin (PRL) measured daily for 3-4 days near ovulation. Thirty-three subjects (21.9%) had transient hyperprolactinemia, with PRL above 20 ng/mL for 1 or 2 days, and were studied further. The blood samples of these 33 subjects and of the controls were also analyzed for LH and FSH. Plasma progesterone was measured on the fourth, seventh, and tenth days after ovulation in both groups. The mean (+/- SD) of the mid-cycle integrated LH surge (125.0 +/- 23.0 mIU/mL; N = 26) and the sum of three plasma progesterone levels (23.8 +/- 4.5 ng/mL; N = 21) in the luteal phase deficiency women were significantly (P < .001) lower than those of the controls (LH 158.7 +/- 13.8 mIU/mL; progesterone 33.8 +/- 6.5 ng/mL). All 33 luteal phase deficiency subjects with transient hyperprolactinemia were treated with bromocriptine at a dose ranging from 1.25-5 mg/day to maintain mid-cycle PRL levels between 5-15 ng/mL. Both the integrated LH surge and the sum of three progesterone levels increased significantly (P < .05) during bromocriptine treatment, to 142.6 +/- 22.4 mIU/mL (N = 20) and 28.2 +/- 6.2 ng/mL (N = 18), respectively. Fourteen of the 33 patients conceived. The cumulative probability of conception was 31% for six cycles and 45% for 12 cycles of treatment. (ABSTRACT TRUNCATED AT 250 WORDS)
10 CFR 51.45 - Environmental report.
Code of Federal Regulations, 2010 CFR
2010-01-01
... an analysis of the cumulative impacts of the activities to be authorized by the limited work... the proposed action should it be implemented. (c) Analysis. The environmental report must include an analysis that considers and balances the environmental effects of the proposed action, the environmental...
Cumulative PM2.5 exposure and telomere length in workers exposed to welding fumes
Wong, Jason Y. Y.; De Vivo, Immaculata; Lin, Xihong; Christiani, David C.
2014-01-01
Telomeres are genomic structures that reflect both mitotic history and biochemical trauma to the genome. Metals inherent in fine particulate matter (PM2.5) have been shown to be genotoxic via oxidative damage. However, few studies have investigated the induction time of cumulative PM2.5 exposure effects on telomere length in a longitudinal setting. Therefore, the purpose of this study was to assess the association between occupational PM2.5 exposure in various time windows and telomere length. The study population consisted of 48 boilermakers, and the follow-up period was 8 yr. The main exposures were cumulative occupational PM2.5 in the month, year, and career prior to each blood draw, assessed via work history questionnaires and area air measures. Repeated telomere length measurements from leukocytes were assessed via real-time quantitative polymerase chain reaction (qPCR). Analysis was performed using linear mixed models controlling for confounders and white blood cell differentials. Cumulative PM2.5 exposure was treated continuously and categorized into quartiles, in separate analyses. At any follow-up time, for each milligram per cubic meter per hour increase in cumulative PM2.5 exposure in the prior month, there was a statistically significant decrease in relative telomere length of 0.04 units. When the exposure was categorized into quartiles, there was a significant negative association between telomere length and the highest quartile of cumulative PM2.5 exposure in the prior month (−0.16). These findings suggest that genomic trauma to leukocyte telomeres was more consistent with recent occupational PM2.5 exposure, as opposed to cumulative exposure extending into the distant past. PMID:24627998
Prognostic Factors of Uterine Serous Carcinoma-A Multicenter Study.
Zhong, Xiaozhu; Wang, Jianliu; Kaku, Tengen; Wang, Zhiqi; Li, Xiaoping; Wei, Lihui
2018-04-04
The prognostic factors of uterine serous carcinoma (USC) vary among studies, and there is no report of Chinese USC patients. The aim of this study was to investigate the clinicopathological characteristics and prognostic factors in Chinese patients with USC. Patients with USC from 13 authoritative university hospitals in China who were treated between 2004 and 2014 were retrospectively reviewed. Three-year disease-free survival rate (DFSR), cumulative recurrence, and cumulative mortality were estimated by Kaplan-Meier analyses and log-rank tests. Multivariate Cox regression analysis was used to model the association of potential prognostic factors with clinical outcomes. Data of a total of 241 patients were reviewed. The median follow-up was 26 months (range, 1-128 months). Median age was 60 years (range, 39-84 years), and 58.0% had stage I-II disease. The 3-year DFSR and cumulative recurrence were 46.8% and 27.7%, respectively. Advanced stage (III and IV) (P = 0.004), myometrial invasion (P = 0.001), adnexal involvement (P < 0.001), lymph node metastasis (P = 0.025), and positive peritoneal cytology (P = 0.007) were independently associated with 3-year DFSR. Advanced stage (P = 0.017), myometrial invasion (P = 0.008), adnexal involvement (odds ratio, 2.987; P = 0.001), lymph node metastasis (P = 0.031), and positive peritoneal cytology (P = 0.001) were independently associated with cumulative recurrence. Myometrial invasion (P = 0.004) and positive peritoneal cytology (P = 0.025) were independently associated with 3-year cumulative mortality. Peritoneal cytology and myometrial invasion could be independent prognostic factors for 3-year DFSR, cumulative recurrence, and cumulative mortality in patients with USC. Prospective studies are needed to confirm these results.
Lifetime cumulative number of menstrual cycles and serum sex hormone levels in postmenopausal women.
Chavez-MacGregor, Mariana; van Gils, Carla H; van der Schouw, Yvonne T; Monninkhof, Evelyn; van Noord, Paulus A H; Peeters, Petra H M
2008-03-01
Lifetime cumulative number of menstrual cycles is related to breast cancer risk. The aim of this study was to investigate the relation between this index and serum sex hormone levels in postmenopausal women. This cross-sectional study included 860 naturally postmenopausal Dutch participants of the European Prospective Investigation into Cancer and Nutrition. Lifetime cumulative number of menstrual cycles was computed using questionnaire data on ages at menarche and menopause, number of pregnancies, breastfeeding, oral contraceptive (OC) use and regularity pattern. Measured hormones included estrone (E1), estradiol (E2), androstenedione, testosterone, sex hormone-binding globulin (SHBG) and dehydroepiandrosterone sulfate (DHEAS). The relation between the lifetime cumulative number of menstrual cycles and hormone levels was assessed using analysis of covariance. Relations between reproductive characteristics and hormone levels were also studied. Adjustments were made for characteristics at blood collection, including age, years since menopause, BMI, hormone replacement therapy use, OC use, smoking habits, alcohol intake and physical activity. Lifetime cumulative number of cycles was related to SHBG; participants in the lowest category had higher SHBG levels. For the separate characteristics, DHEAS and androstenedione increased significantly with increasing age at menarche, while androstenedione and testosterone decreased with increasing age at menopause. For the parity characteristics, SHBG levels increased with the number of live births. Lifetime cumulative number of menstrual cycles was related only to SHBG. Therefore, free levels of estrogens or androgens may be related to this estimate of the number of menstrual cycles, reflecting lifetime exposure to ovarian hormones.
Zoufaly, Alexander; Stellbrink, Hans-Jürgen; Heiden, Matthias An der; Kollan, Christian; Hoffmann, Christian; van Lunzen, Jan; Hamouda, Osamah
2009-07-01
AIDS-related lymphoma contributes to significant morbidity and mortality among human immunodeficiency virus (HIV)-infected patients receiving highly active antiretroviral therapy (HAART). We assessed the predictive role of cumulative HIV viremia and other risk factors in the development of AIDS-related non-Hodgkin lymphoma. Data from the Clinical Surveillance of HIV Disease (ClinSurv) study, an ongoing, observational, open cohort study of HIV-infected patients from different urban areas in Germany, were analyzed using a Cox proportional hazards model. In the Cox model, which comprised 6022 patients and 27,812 patient-years of follow-up while patients were receiving HAART from 1999 through 2006, cumulative HIV viremia was found to be independently associated with the risk of lymphoma (hazard ratio [HR], 1.67 [95% confidence interval {CI}, 1.27-2.20]; P < .001). This association differed markedly between lymphoma subtypes. Although the association was more pronounced for Burkitt-type lymphoma (HR, 3.45 [95% CI, 1.52-7.85]; P = .003), there was no association between cumulative HIV viremia and the incidence of primary central nervous system lymphoma (HR, 1.00 [95% CI, 0.39-2.57]; P = .997). Other risk factors associated with an increased risk in a multivariable analysis included the latest CD4 T cell count as well as age per 10-year increment. Cumulative HIV viremia is an independent and strong predictor of AIDS-related lymphoma among patients receiving HAART. The influence of cumulative HIV viremia may differ between lymphoma subtypes.
NASA Astrophysics Data System (ADS)
Darin, M. H.; Dorsey, R. J.
2012-12-01
Development of a consistent and balanced tectonic reconstruction for the late Cenozoic San Andreas fault (SAF) in southern California has been hindered for decades by incompatible estimates of total dextral offset based on different geologic cross-fault markers. The older estimate of 240-270 km is based on offset fluvial conglomerates of the middle Miocene Mint Canyon and Caliente Formations west of the SAF from their presumed source area in the northern Chocolate Mountains NE of the SAF (Ehlig et al., 1975; Ehlert, 2003). The second widely cited offset marker is a distinctive Triassic megaporphyritic monzogranite that has been offset 160 ± 10 km between Liebre Mountain west of the SAF and the San Bernardino Mountains (Matti and Morton, 1993). In this analysis we use existing paleocurrent data and late Miocene clockwise rotation in the eastern Transverse Ranges (ETR) to re-assess the orientation of the piercing line used in the 240-km correlation, and present a palinspastic reconstruction that satisfies all existing geologic constraints. Our reconstruction of the Mint Canyon piercing line reduces the original estimate of 240-270 km to 195 ± 15 km of cumulative right-lateral slip on the southern SAF (sensu stricto), which is consistent with other published estimates of 185 ± 20 km based on correlative basement terranes in the Salton Trough region. Our estimate of ~195 km is consistent with the lower estimate of ~160 km on the Mojave segment because transform-parallel extension along the southwestern boundary of the ETR during transrotation produces ~25-40 km of displacement that does not affect offset markers of the Liebre/San Bernardino correlation located northwest of the ETR rotating domain. Reconciliation of these disparate estimates places an important new constraint on the total plate boundary shear that is likely accommodated in the adjacent northern Gulf of California.
Global plate circuit models require ~650 km of cumulative Pacific-North America (PAC-NAM) relative plate motion since ~12 Ma (Atwater and Stock, 1998). We propose that the continental component of PAC-NAM shear is accommodated by: (1) 195 ± 15 km on the southern SAF (this study); (2) 12 ± 2 km on the Whittier-Elsinore fault; (3) 75 ± 20 km of cumulative shear across the central Mojave in the eastern California shear zone; (4) 30 ± 4 km of post-13 Ma slip on the Stateline fault; and (5) 47 ± 18 km of NW-directed translation produced by north-south shortening. Together, these components sum to 359 ± 31 km of net dextral displacement on the SAF system (sensu lato) in southern California since ca. 12 Ma, or ~300 km less than what is required by the global plate circuit. This suggests that the continental component of post-12 Ma PAC-NAM transform motion can be no more than ~390 km in the adjacent northern Gulf of California, substantially less than the 450 km of shear proposed in some models. We suggest that the remaining ~270-300 km of NW-directed relative plate motion is accommodated by a small component of late Miocene extension and roughly 225 km of slip on the offshore borderland fault system west of Baja California.
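The slip budget quoted above can be checked directly. The total of 359 ± 31 km follows from the five components if the individual uncertainties are combined in quadrature, i.e. treated as independent (an assumption consistent with the quoted figures, since a straight linear sum of the uncertainties would give ±59 km):

```python
import math

# Dextral-slip components (value, uncertainty) in km, from the abstract.
components = [
    (195, 15),  # southern San Andreas fault (this study)
    (12, 2),    # Whittier-Elsinore fault
    (75, 20),   # eastern California shear zone, central Mojave
    (30, 4),    # Stateline fault (post-13 Ma)
    (47, 18),   # NW-directed translation from N-S shortening
]

total = sum(value for value, _ in components)
# Combine independent uncertainties in quadrature.
uncertainty = math.sqrt(sum(u ** 2 for _, u in components))

print(f"{total} +/- {uncertainty:.0f} km")  # 359 +/- 31 km
```

Subtracting this 359 km from the ~650 km of required Pacific-North America motion then leaves the ~300 km the authors assign to the offshore borderland system and late Miocene extension.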
Meta-analysis of expression and function of neprilysin in Alzheimer's disease.
Zhang, Huifeng; Liu, Dan; Wang, Yixing; Huang, Huanhuan; Zhao, Yujia; Zhou, Hui
2017-09-14
Neprilysin (NEP) is one of the most important Aβ-degrading enzymes, and its expression and activity in the Alzheimer's brain have been widely reported, but the results remain debatable. Thus, this meta-analysis was performed to elucidate the role of NEP in Alzheimer's disease (AD). Relevant case-control or cohort studies were retrieved according to our inclusion/exclusion criteria. Six studies with 123 controls and 141 AD cases, seven studies with 102 controls and 90 AD cases, and four studies with 93 controls and 132 AD cases were included in the meta-analyses of NEP protein, mRNA, and enzyme activity, respectively. We conducted meta-regression to detect the sources of heterogeneity and further performed cumulative meta-analysis or subgroup analysis. Our meta-analysis revealed a significantly lower level of NEP mRNA (SMD = -0.44, 95% CI: -0.87, -0.00, p = 0.049) in AD cases than in non-AD cases, and this pattern was not altered over time in the cumulative meta-analysis. However, the decreases in NEP protein (SMD = -0.18, 95% CI: -0.62, 0.25) and enzyme activity (SMD = -0.35, 95% CI: -1.03, 0.32) in AD cases did not reach statistical significance, while the cumulative meta-analysis by average age showed that the pooled effect became insignificant as studies with younger subjects were added, indicating that the protein expression and enzyme activity of NEP in the cortex are affected by age. Therefore, the present meta-analysis suggests the need for further investigation of the role of NEP in AD pathogenesis and treatment.
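Cumulative meta-analysis, as used in the study above, re-pools the effect size each time another study is added in a chosen order (by publication date, or here by average subject age). A minimal inverse-variance fixed-effect sketch with hypothetical (SMD, standard error) pairs, not the actual NEP data:

```python
import math

def pooled_smd(studies):
    """Fixed-effect inverse-variance pooled SMD with a 95% CI.
    Each study is a (smd, standard_error) pair."""
    weights = [1.0 / se ** 2 for _, se in studies]
    est = sum(w * smd for (smd, _), w in zip(studies, weights)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return est, est - 1.96 * se, est + 1.96 * se

# Hypothetical studies, ordered by increasing average subject age.
studies = [(-0.65, 0.30), (-0.40, 0.25), (-0.10, 0.20), (0.05, 0.35)]

# Cumulative meta-analysis: re-pool after each successive study is added,
# showing how the pooled estimate (and its significance) drifts.
for k in range(1, len(studies) + 1):
    est, lo, hi = pooled_smd(studies[:k])
    print(f"first {k} studies: SMD = {est:+.2f} (95% CI {lo:+.2f} to {hi:+.2f})")
```

A random-effects pooling (e.g. DerSimonian-Laird) would be used instead when heterogeneity is substantial, as the abstract reports; the cumulative logic of re-pooling over growing prefixes is the same.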
Why do we need three levels to understand the molecular optical response?
NASA Astrophysics Data System (ADS)
Perez-Moreno, Javier; Clays, Koen; Kuzyk, Mark G.
2011-10-01
Traditionally, the nonlinear optical response at the molecular level has been modeled using the two-level approximation, under the assumption that the behavior of the exact sum-over-states (SOS) expressions for the molecular polarizabilities is well represented by the contribution of only two levels. We show how a rigorous application of the Thomas-Kuhn sum rules to the SOS expression for the diagonal component of the first hyperpolarizability proves that the two-level approximation is unphysical. In addition, we indicate how the contributions of a potentially infinite number of states to the SOS expressions for the first hyperpolarizability are well represented by the contributions of a generic three-level system. This explains why the analysis of the three-level model in conjunction with the sum rules has led to successful paradigms for the optimization of organic chromophores.
Multimorbidity and the risk of restless legs syndrome in 2 prospective cohort studies.
Szentkirályi, András; Völzke, Henry; Hoffmann, Wolfgang; Trenkwalder, Claudia; Berger, Klaus
2014-06-03
Our aim was to evaluate the association between the cumulative effect of comorbidity and the risk of restless legs syndrome (RLS) in 2 population-based German cohort studies. The Dortmund Health Study (DHS) (n = 1,312; median follow-up time: 2.1 years) and the Study of Health in Pomerania (SHIP) (n = 4,308; median follow-up time: 5.0 years) were used for the analyses. RLS was assessed at baseline and follow-up according to the RLS minimal criteria. A comorbidity index was calculated as a sum of the following conditions: diabetes, hypertension, myocardial infarction, obesity, stroke, cancer, renal disease, anemia, depression, thyroid disease, and migraine. The relationship between comorbidity and incident RLS was analyzed with multivariate logistic regression models. An increase in the number of comorbid conditions at baseline predicted prevalent RLS (DHS: trend odds ratio [OR] = 1.24, 95% confidence interval [CI] 0.99-1.56; SHIP: trend OR = 1.34, 95% CI 1.18-1.52) and incident RLS (DHS: trend OR = 1.32, 95% CI 1.04-1.68; SHIP: trend OR = 1.59, 95% CI 1.37-1.85) after adjustment for several covariates. The ORs for incident RLS associated with 3 or more comorbid diseases (DHS: OR = 2.51, 95% CI 1.18-5.34; SHIP: OR = 4.30, 95% CI 2.60-7.11) were higher than the ORs for any single disease. Multimorbidity was a strong risk factor for RLS in these 2 population-based cohort studies. The results support the hypothesis that cumulative disease burden is more important than the presence of a specific single disease in the pathophysiology of RLS. © 2014 American Academy of Neurology.
Clustering "N" Objects into "K" Groups under Optimal Scaling of Variables.
ERIC Educational Resources Information Center
van Buuren, Stef; Heiser, Willem J.
1989-01-01
A method based on homogeneity analysis (multiple correspondence analysis or multiple scaling) is proposed to reduce many categorical variables to one variable with "k" categories. The method is a generalization of the sum of squared distances cluster analysis problem to the case of mixed measurement level variables. (SLD)
Analysis of Quadratic Diophantine Equations with Fibonacci Number Solutions
ERIC Educational Resources Information Center
Leyendekkers, J. V.; Shannon, A. G.
2004-01-01
An analysis is made of the role of Fibonacci numbers in some quadratic Diophantine equations. A general solution is obtained for finding factors in sums of Fibonacci numbers. Interpretation of the results is facilitated by the use of a modular ring which also permits extension of the analysis.
Bécares, Laia; Zhang, Nan
2018-01-01
Experiencing discrimination is associated with poor mental health, but how cumulative experiences of perceived interpersonal discrimination across attributes, domains, and time are associated with mental disorders is still unknown. Using data from the Study of Women’s Health Across the Nation (1996–2008), we applied latent class analysis and generalized linear models to estimate the association between cumulative exposure to perceived interpersonal discrimination and older women’s mental health. We found 4 classes of perceived interpersonal discrimination, ranging from cumulative exposure to discrimination over attributes, domains, and time to none or minimal reports of discrimination. Women who experienced cumulative perceived interpersonal discrimination over time and across attributes and domains had the highest risk of depression (Center for Epidemiologic Studies Depression Scale score ≥16) compared with women in all other classes. This was true for all women regardless of race/ethnicity, although the type and severity of perceived discrimination differed across racial/ethnic groups. Cumulative exposure to perceived interpersonal discrimination across attributes, domains, and time has an incremental negative long-term association with mental health. Studies that examine exposure to perceived discrimination due to a single attribute in 1 domain or at 1 point in time underestimate the magnitude and complexity of discrimination and its association with health. PMID:29036550
Zenic, Natasa; Ostojic, Ljerka; Sisic, Nedim; Pojskic, Haris; Peric, Mia; Uljevic, Ognjen; Sekulic, Damir
2015-01-01
Objective The community of residence (ie, urban vs rural) is one of the known factors that influence substance use and misuse (SUM). The aim of this study was to explore the community-specific prevalence of SUM and the associations of scholastic, familial, sports and sociodemographic factors with SUM in adolescents from Bosnia and Herzegovina. Methods In this cross-sectional study, completed between November and December 2014, the participants were 957 adolescents (aged 17 to 18 years) from Bosnia and Herzegovina (485; 50.6% females). The independent variables were sociodemographic, academic, sport and familial factors. The dependent variables consisted of questions on cigarette smoking and alcohol consumption. We calculated differences between groups of participants (gender, community), while logistic regressions were applied to define associations between the independent and dependent variables. Results In the urban community, cigarette smoking is more prevalent in girls (OR=2.05; 95% CI 1.27 to 3.35), while harmful drinking is more prevalent in boys (OR=2.07; 95% CI 1.59 to 2.73). When data are weighted by gender and community, harmful drinking is more prevalent in urban boys (OR=1.97; 95% CI 1.31 to 2.95), cigarette smoking is more frequent in rural boys (OR=1.61; 95% CI 1.04 to 2.39), and urban girls misuse substances to a greater extent than rural girls (OR=1.70; 95% CI 1.16 to 2.51; OR=2.85; 95% CI 1.88 to 4.31; OR=2.78; 95% CI 1.67 to 4.61 for cigarette smoking, harmful drinking and simultaneous smoking-drinking, respectively). Academic failure is strongly associated with a higher likelihood of SUM. The associations between parental factors and SUM are more evident in urban youth. Sports factors are specifically correlated with SUM for urban girls. Conclusions Living in an urban environment should be considered a higher risk factor for SUM in girls. Parental variables are more strongly associated with SUM among urban youth, most probably because of the higher parental involvement in children's personal lives in urban communities (eg, college plans). Specific indicators should be monitored in the prevention of SUM. PMID:26546145
Pulmonary function of U.S. coal miners related to dust exposure estimates.
Attfield, M D; Hodous, T K
1992-03-01
This study of 7,139 U.S. coal miners used linear regression analysis to relate estimates of cumulative dust exposure to several pulmonary function variables measured during medical examinations undertaken between 1969 and 1971. The exposure data included newly derived cumulative dust exposure estimates for the period up to time of examination based on large data bases of underground airborne dust sampling measurements. Negative associations were found between measures of cumulative exposure and FEV1, FVC, and the FEV1/FVC ratio (p < 0.001). In general, the relationships were similar to those reported for British coal miners. Overall, the results demonstrate an adverse effect of coal mine dust exposure on pulmonary function that occurs even in the absence of radiographically detected pneumoconiosis.
NASA Astrophysics Data System (ADS)
Struzik, Zbigniew R.; van Wijngaarden, Willem J.
We introduce a special-purpose cumulative indicator, capturing in real time the cumulative deviation from a reference level of the exponent h (local roughness, Hölder exponent) of the fetal heartbeat during labour. We verify that the indicator, applied to the variability component of the heartbeat, coincides with the fetal outcome as determined by blood samples. The variability component is obtained from a running real-time decomposition of the fetal heartbeat into independent components using an adaptation of an oversampled Haar wavelet transform. The particular filters used and resolutions applied are motivated by obstetrical insight and practice. The methodology described has the potential for real-time monitoring of the fetus during labour and for the prediction of the fetal outcome, alerting the attending staff in the case of (threatening) hypoxia.
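The abstract does not give the exact form of the indicator, so the sketch below is an illustrative assumption only: a CuSum-style running sum of the deviations of the Hölder exponent h from a reference level, which goes progressively negative when roughness stays below the reference.

```python
def cumulative_deviation(h_values, h_ref):
    """Running cumulative deviation of the local roughness (Hölder)
    exponent from a reference level, CuSum-style. Illustrative only;
    the authors' exact indicator is not specified in the abstract."""
    running, out = 0.0, []
    for h in h_values:
        running += h - h_ref  # accumulate deviation from the reference
        out.append(running)
    return out

# Exponents drifting below the reference level drive the indicator
# progressively negative, the kind of trend a real-time monitor
# could alert the attending staff on.
trace = cumulative_deviation([0.1, 0.05, -0.05, -0.1, -0.2], h_ref=0.0)
print(trace)
```

In a real-time setting the same accumulation would run sample by sample on the wavelet-derived variability component rather than on a precomputed list.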
Cumulative impacts of oil fields on northern Alaskan landscapes
Walker, D.A.; Webber, P.J.; Binnian, Emily F.; Everett, K.R.; Lederer, N.D.; Nordstrand, E.A.; Walker, M.D.
1987-01-01
Proposed further developments on Alaska's Arctic Coastal Plain raise questions about cumulative effects on arctic tundra ecosystems of development of multiple large oil fields. Maps of historical changes to the Prudhoe Bay Oil Field show indirect impacts can lag behind planned developments by many years and the total area eventually disturbed can greatly exceed the planned area of construction. For example, in the wettest parts of the oil field (flat thaw-lake plains), flooding and thermokarst covered more than twice the area directly affected by roads and other construction activities. Protecting critical wildlife habitat is the central issue for cumulative impact analysis in northern Alaska. Comprehensive landscape planning with the use of geographic information system technology and detailed geobotanical maps can help identify and protect areas of high wildlife use.
Miyoshi, S; Sakajiri, M; Ifukube, T; Matsushima, J
1997-01-01
We have proposed the Tripolar Electrode Stimulation Method (TESM), which may enable us to narrow the stimulation region and to continuously move the stimulation site for cochlear implants. We evaluated whether or not TESM works according to theory, based on numerical analysis using an auditory nerve fiber model. In this simulation, the sum of the excited model fibers was compared with the compound action potentials obtained from animal experiments. As a result, this experiment showed that TESM could narrow a stimulation region by controlling the sum of the currents emitted from the electrodes on both sides, and continuously move a stimulation site by changing the ratio of the currents emitted from the electrodes on both sides.
Kiesler, James L.
2002-01-01
An analysis of the application indicates that the selected data layers to be combined should be at the greatest spatial resolution possible; however, not all data layers have to be at the same spatial resolution. The spatial variation of the data layers should be adequately defined. The size of each grid cell should be small enough to maintain the spatial definition of smaller features within the data layers. The most accurate results are shown to occur when the values for the grid cells representing the individual data layers are summed and the mean of the summed grid-cell values is used to describe the watershed of interest.
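The sum-then-mean overlay procedure described above can be sketched in a few lines; the layers and the 2x2 watershed grid below are hypothetical illustrations, not data from the report:

```python
# Three hypothetical data layers resampled onto a common 2x2 grid of
# cells covering the watershed of interest (values illustrative only).
layers = [
    [[1.0, 2.0], [3.0, 4.0]],
    [[0.0, 1.0], [1.0, 0.0]],
    [[2.0, 2.0], [2.0, 2.0]],
]

rows, cols = 2, 2

# Sum the grid-cell values of the individual data layers...
combined = [
    [sum(layer[r][c] for layer in layers) for c in range(cols)]
    for r in range(rows)
]

# ...and describe the watershed by the mean of the summed grid-cell values.
cells = [combined[r][c] for r in range(rows) for c in range(cols)]
watershed_mean = sum(cells) / len(cells)

print(combined)        # [[3.0, 5.0], [6.0, 6.0]]
print(watershed_mean)  # 5.0
```

In a GIS the same operation is a raster map-algebra sum followed by zonal statistics over the watershed polygon; keeping the cell size small, as the analysis recommends, preserves the definition of small features through the summation.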
Comprehensive assessment of exposures to elongate mineral particles in the taconite mining industry.
Hwang, Jooyeon; Ramachandran, Gurumurthy; Raynor, Peter C; Alexander, Bruce H; Mandel, Jeffrey H
2013-10-01
Since the 1970s, concerns have been raised about elevated rates of mesothelioma in the vicinity of the taconite mines in the Mesabi Iron Range. However, insufficient quantitative exposure data have hampered investigations of the relationship between cumulative exposures to elongate mineral particles (EMP) in taconite dust and adverse health effects. Specifically, no research on exposure to taconite dust, which includes EMP, has been conducted since 1990. This article describes a comprehensive assessment of present-day exposures to total and amphibole EMP in the taconite mining industry. Similar exposure groups (SEGs) were established to assess present-day exposure levels and buttress the sparse historical data. Personal samples were collected to assess the present-day levels of worker exposures to EMP at six mines in the Mesabi Iron Range. The samples were analyzed using National Institute for Occupational Safety and Health (NIOSH) methods 7400 and 7402. For many SEGs in several mines, the exposure levels of total EMP were higher than the NIOSH Recommended Exposure Limit (REL). However, the total EMP classification includes not only the asbestiform EMP and their non-asbestiform mineral analogs but also other minerals, because NIOSH method 7400 cannot differentiate between these. The concentrations of amphibole EMP were well controlled across all mines and were much lower than the concentrations of total EMP, indicating that amphibole EMP are not major components of taconite EMP. The levels are also well below the NIOSH REL of 0.1 EMP cc⁻¹. Two different approaches were used to evaluate the variability of exposure between SEGs, between workers, and within workers. The related constructs of contrast and homogeneity were calculated to characterize the SEGs. Contrast, which is a ratio of between-SEG variability to the sum of between-SEG and between-worker variability, provides an overall measure of whether there are distinctions between the SEGs.
Homogeneity, which is the ratio of the within-worker variance component to the sum of the between-worker and within-worker variance components, provides an overall measure of how similar exposures are for workers within an SEG. Using these constructs, it was determined that the SEGs are formed well enough when grouped by mine for both total and amphibole EMP to be used for epidemiological analysis.
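The two constructs are ratios of variance components. Assuming the between-SEG, between-worker, and within-worker components have already been estimated (in practice from a random- or mixed-effects model of log-transformed exposures), the definitions quoted above reduce to:

```python
def contrast(var_between_seg, var_between_worker):
    """Between-SEG variability over the sum of between-SEG and
    between-worker variability; values near 1 indicate the SEGs
    are well separated from one another."""
    return var_between_seg / (var_between_seg + var_between_worker)

def homogeneity(var_within_worker, var_between_worker):
    """Within-worker variance component over the sum of between-worker
    and within-worker components; values near 1 indicate workers
    within an SEG experience similar exposures."""
    return var_within_worker / (var_between_worker + var_within_worker)

# Illustrative variance components (not values from the study).
v_bg, v_bw, v_ww = 0.8, 0.2, 0.6

print(contrast(v_bg, v_bw))      # high contrast: SEGs well separated
print(homogeneity(v_ww, v_bw))   # fairly homogeneous within SEGs
```

Both ratios lie in [0, 1], so they can be compared directly across grouping schemes when deciding whether SEGs formed by mine are adequate for epidemiological analysis.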
An Analysis of Airline Costs. Lecture Notes for MIT Courses. 16.73 Airline Management and Marketing
NASA Technical Reports Server (NTRS)
Simpson, R. W.
1972-01-01
The cost analyst must understand the operations of the airline and how the activities of the airline are measured, as well as how the costs are incurred and recorded. The data source is usually a cost accounting process. This provides data on the cumulated expenses in various categories over a time period such as a quarter or a year, and these must be correlated by the analyst with cumulated measures of airline activity which seem to be causing the expense.
Correspondence between quantization schemes for two-player nonzero-sum games and CNOT complexity
NASA Astrophysics Data System (ADS)
Vijayakrishnan, V.; Balakrishnan, S.
2018-05-01
The well-known quantization schemes for two-player nonzero-sum games are the Eisert-Wilkens-Lewenstein scheme and the Marinatto-Weber scheme. In this work, we establish the connection between the two schemes from the perspective of quantum circuits. Further, we provide the correspondence between any game quantization scheme and the CNOT complexity, where CNOT complexity is defined up to local unitary operations. While CNOT complexity is known to be useful in the analysis of universal quantum circuits, in this work we find that it is also applicable to quantum game theory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ritboon, Atirach, E-mail: atirach.3.14@gmail.com; Department of Physics, Faculty of Science, Prince of Songkla University, Hat Yai 90112; Daengngam, Chalongrat, E-mail: chalongrat.d@psu.ac.th
2016-08-15
Bialynicki-Birula introduced a photon wave function, similar to the matter wave function, that satisfies the Schrödinger equation. Its second-quantization form can be applied to investigate nonlinear optics at a nearly full quantum level. In this paper, we applied the photon wave function formalism to analyze both linear optical processes in the well-known Mach–Zehnder interferometer and nonlinear optical processes for sum-frequency generation in a dispersive and lossless medium. Results from the photon wave function formalism agree with the well-established Maxwell treatments and existing experimental verifications.
Stability of the sum of two solitary waves for (gDNLS) in the energy space
NASA Astrophysics Data System (ADS)
Tang, Xingdong; Xu, Guixiang
2018-03-01
In this paper, we continue the study in [18]. We use the perturbation argument, modulational analysis and the energy argument in [15,16] to show the stability of the sum of two solitary waves with weak interactions for the generalized derivative Schrödinger equation (gDNLS) in the energy space. Here (gDNLS) does not have the Galilean transformation invariance, the pseudo-conformal invariance or the gauge transformation invariance, and the case σ > 1 that we consider corresponds to the L2-supercritical case.
Langevin, Scott M.; Ioannidis, John P.A.; Vineis, Paolo; Taioli, Emanuela
2010-01-01
There is an overwhelming abundance of genetic association studies available in the literature, which can often be collectively difficult to interpret. To address this issue, the Venice interim guidelines were established for determining the credibility of cumulative evidence. The objective of this report is to evaluate the literature on the association of common GST variants (GSTM1 null, GSTT1 null and the GSTP1 Ile105Val polymorphism) with lung cancer, and to assess the credibility of the associations using the newly proposed cumulative evidence guidelines. Information from the literature was enriched with an updated meta-analysis and a pooled analysis using data from the Genetic Susceptibility to Environmental Carcinogens (GSEC) database. There was a significant association between GSTM1 null and lung cancer for the meta- (meta OR = 1.17, 95% CI: 1.10–1.25) and pooled analysis (adjusted OR = 1.10, 95% CI: 1.04–1.16), although substantial heterogeneity was present. No overall association between lung cancer and GSTT1 null or GSTP1 Ile105Val was found. When the Venice criteria were applied, the cumulative evidence for all associations was considered “weak”, with the exception of East Asian carriers of the G allele of GSTP1 Ile105Val, which was graded as “moderate” evidence. In spite of the large number of studies, and several statistically significant summary estimates produced by meta-analyses, the application of the Venice criteria suggests extensive heterogeneity and susceptibility to bias for studies on associations of common genetic polymorphisms, such as GST variants, with lung cancer. PMID:20729793
Wu, Xiujuan; Li, Chunrong; Zhang, Bing; Shen, Donghui; Li, Ting; Liu, Kangding; Zhang, Hong-Liang
2015-09-02
Guillain-Barré syndrome (GBS) is an immune-mediated disorder of the peripheral nervous system. Respiratory failure requiring mechanical ventilation (MV) is a serious complication of GBS. Identification of modifiable risk factors for MV and poor short-term prognosis in mechanically ventilated patients with GBS may contribute to individualized management and help improve patient outcomes. We retrospectively analyzed the clinical data of 541 patients who were diagnosed with GBS from 2003 to 2014. Independent predictors for MV and short-term prognosis in mechanically ventilated patients were identified via multivariate logistic regression analysis. The mean age was 41.6 years with a male predilection (61.2%). Eighty patients (14.8%) required MV. Multivariate analysis revealed that shorter interval from onset to admission (p < 0.05), facial nerve palsy (p < 0.01), glossopharyngeal and vagal nerve deficits (p < 0.01) and lower Medical Research Council (MRC) sum score at nadir (p < 0.01) were risk factors for MV; disease occurrence in summer (p < 0.01) was a protective factor. Regarding prognostic factors, absence of antecedent infections (p < 0.01) and lower MRC sum score at nadir (p < 0.01) were predictors of poor short-term prognosis in mechanically ventilated patients regardless of treatment modality. We further investigated the predictors of poor short-term prognosis in patients requiring MV with different nadir MRC sum scores. Combined use of intravenous corticosteroids with intravenous immunoglobulin (odds ratio 10.200, 95% confidence interval 1.068-97.407, p < 0.05) was an independent predictor of poor short-term prognosis in mechanically ventilated patients with a nadir MRC sum score from 0 to 12 points, regardless of existence of antecedent infection. Clinical predictors of MV and poor short-term prognosis in mechanically ventilated GBS patients were distinct.
Add-on use of intravenous corticosteroids was a risk factor for poor short-term prognosis in mechanically ventilated patients with a nadir MRC sum score from 0 to 12 points.
Schubert, C D; Leitsch, S; Haertnagl, F; Haas, E M; Giunta, R E
2015-08-01
Despite its recognition as an independent specialty, at German university hospitals the field of plastic surgery is still underrepresented in terms of independent departments with a dedicated research focus. The aim of this study was to analyse the publication performance within the German academic plastic surgery environment and to compare independent departments with dependent, subordinate organisational structures regarding their publication performance. Organisational structures and the number of attending doctors in German university hospitals were examined via a website analysis. A PubMed analysis was applied to assess the publication performance (number of publications, cumulative impact factor, impact factor/publication, number of publications/MD, number of publications/unit) between 2009 and 2013. In a journal analysis the distribution of the cumulative impact factor and number of publications across different journals as well as the development of the impact factor in the top journals were analysed. Of the 35 university hospitals, 12 have independent departments for plastic surgery and 8 have subordinate organisational structures. In 15 university hospitals there were no designated plastic surgery units. The number of attending doctors differed considerably between independent departments (3.6 attending doctors/unit) and subordinate organisational structures (1.1 attending doctors/unit). The majority of publications (89.0%) and of the cumulative impact factor (91.2%) as well as most of the publications/MD (54 publications/year) and publications/unit (61 publications/year) were created within the independent departments. Top publications with an impact factor > 5 were published only in independent departments. In general a negative trend regarding the number of publications (-13.4%) and cumulative impact factor (-28.9%) was observed. 58.4% of all publications were distributed over the top 10 journals.
Within the latter the majority of articles were published in English journals (60% of publications, 79.9% of the cumulative impact factor). The average impact factor of the top 10 journals increased by 13.5% from 2009 to 2013. In contrast to subordinate and dependent organisational structures, independent departments of plastic surgery are the key performers within German academic plastic surgery which, however, suffers from a general declining publication performance. Hence, the type of organisational structure has a crucial influence on the research performance. © Georg Thieme Verlag KG Stuttgart · New York.
Maignien, Chloé; Gayet, Vanessa; Pocate-Cheriet, Khaled; Marcellin, Louis; Chapron, Charles
2018-01-01
Background Controlled ovarian stimulation in assisted reproduction technology (ART) may alter endometrial receptivity through an advancement of endometrial development. Recently, technical improvements in vitrification have made deferred frozen-thawed embryo transfer (Def-ET) a feasible alternative to fresh embryo transfer (ET). In endometriosis-related infertility the eutopic endometrium is abnormal, and its functional alterations are likely to impair endometrial receptivity. One question in the ART management of endometriosis is whether Def-ET could restore optimal receptivity in endometriosis-affected women, leading to an increase in pregnancy rates. Objective To compare cumulative ART outcomes between fresh ET and Def-ET in endometriosis-affected infertile women. Materials and methods This matched cohort study compared the Def-ET strategy to the fresh ET strategy between 01/10/2012 and 31/12/2014. One hundred and thirty-five endometriosis-affected women with a scheduled Def-ET cycle and 424 endometriosis-affected women with a scheduled fresh ET cycle were eligible for matching. Matching criteria were: age, number of prior ART cycles, and endometriosis phenotype. Statistical analyses were conducted using univariable and multivariable logistic regression models. Results 135 women in the fresh ET group and 135 in the Def-ET group were included in the analysis. The cumulative clinical pregnancy rate was significantly higher in the Def-ET group than in the fresh ET group [58 (43%) vs. 40 (29.6%), p = 0.047]. The cumulative ongoing pregnancy rate was 34.8% (n = 47) and 17.8% (n = 24) in the Def-ET and fresh ET groups, respectively (p = 0.005). After multivariable conditional logistic regression analysis, Def-ET was associated with a significant increase in the cumulative ongoing pregnancy rate compared with fresh ET (OR = 1.76, 95% CI 1.06–2.92, p = 0.028).
Conclusion Def-ET in endometriosis-affected women was associated with significantly higher cumulative ongoing pregnancy rates. Our preliminary results suggest that Def-ET is an attractive option for endometriosis-affected women that could increase their ART success rates. Future studies, with a randomized design, should be conducted to further confirm these results. PMID:29630610
HISTORICAL ANALYSIS OF ECOLOGICAL EFFECTS: A USEFUL EDUCATIONAL TOOL
An historical analysis that presents the ecological consequences of development can be a valuable educational tool for citizens, students, and environmental managers. In highly impacted areas, the cumulative impacts of multiple stressors can result in complex environmental condit...
Preference, resistance to change, and the cumulative decision model.
Grace, Randolph C
2018-01-01
According to behavioral momentum theory (Nevin & Grace, 2000a), preference in concurrent chains and resistance to change in multiple schedules are independent measures of a common construct representing reinforcement history. Here I review the original studies on preference and resistance to change in which reinforcement variables were manipulated parametrically, conducted by Nevin, Grace and colleagues between 1997 and 2002, as well as more recent research. The cumulative decision model proposed by Grace and colleagues for concurrent chains is shown to provide a good account of both preference and resistance to change, and is able to predict the increased sensitivity to reinforcer rate and magnitude observed with constant-duration components. Residuals from fits of the cumulative decision model to preference and resistance to change data were positively correlated, supporting the prediction of behavioral momentum theory. Although some questions remain, the learning process assumed by the cumulative decision model, in which outcomes are compared against a criterion that represents the average outcome value in the current context, may provide a plausible model for the acquisition of differential resistance to change. © 2018 Society for the Experimental Analysis of Behavior.
Ostaszewski, Krzysztof; Zimmerman, Marc A
2006-12-01
Resiliency theory provides a conceptual framework for studying why some youth exposed to risk factors do not develop the negative behaviors they predict. The purpose of this study was to test compensatory and protective models of resiliency in a longitudinal sample of urban adolescents (80% African American). The data were from Years 1 (9th grade) and 4 (12th grade). The study examined effects of cumulative risk and promotive factors on adolescent polydrug use including alcohol, tobacco and marijuana. Cumulative measures of risk/promotive factors represented individual characteristics, peer influence, and parental/familial influences. After controlling for demographics, results of multiple regression of polydrug use support the compensatory model of resiliency both cross-sectionally and longitudinally. Promotive factors were also found to have compensatory effects on change in adolescent polydrug use. The protective model of resiliency evidenced cross-sectionally was not supported in longitudinal analysis. The findings support resiliency theory and the use of cumulative risk/promotive measures in resiliency research. Implications focused on utilizing multiple assets and resources in prevention programming are discussed.
Kim, Hyungjin; Choi, Seung Hong; Kim, Ji-Hoon; Ryoo, Inseon; Kim, Soo Chin; Yeom, Jeong A.; Shin, Hwaseon; Jung, Seung Chai; Lee, A. Leum; Yun, Tae Jin; Park, Chul-Kee; Sohn, Chul-Ho; Park, Sung-Hye
2013-01-01
Background Glioma grading assumes significant importance in that low- and high-grade gliomas display different prognoses and are treated with dissimilar therapeutic strategies. The objective of our study was to retrospectively assess the usefulness of a cumulative normalized cerebral blood volume (nCBV) histogram for glioma grading based on 3 T MRI. Methods From February 2010 to April 2012, 63 patients with astrocytic tumors underwent 3 T MRI with dynamic susceptibility contrast perfusion-weighted imaging. Regions of interest containing the entire tumor volume were drawn on every section of the co-registered relative CBV (rCBV) maps and T2-weighted images. The percentile values from the cumulative nCBV histograms and the other histogram parameters were correlated with tumor grades. Cochran’s Q test and the McNemar test were used to compare the diagnostic accuracies of the histogram parameters after the receiver operating characteristic curve analysis. Using the parameter offering the highest diagnostic accuracy, a validation process was performed with an independent test set of nine patients. Results The 99th percentile of the cumulative nCBV histogram (nCBV C99), mean and peak height differed significantly between low- and high-grade gliomas (P < 0.001, P = 0.014 and P < 0.001, respectively) and between grade III and IV gliomas (P < 0.001, P = 0.001 and P < 0.001, respectively). The diagnostic accuracy of nCBV C99 was significantly higher than that of the mean nCBV (P = 0.016) in distinguishing high- from low-grade gliomas and was comparable to that of the peak height (P = 1.000). Validation using the two cutoff values of nCBV C99 achieved a diagnostic accuracy of 66.7% (6/9) for the separation of all three glioma grades. Conclusion Cumulative histogram analysis of nCBV using 3 T MRI can be a useful method for preoperative glioma grading. The nCBV C99 value is helpful in distinguishing high- from low-grade gliomas and grade IV from grade III gliomas. PMID:23704910
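The nCBV C99 parameter is simply the 99th percentile read off the cumulative histogram of voxel values; a toy sketch of that read-off (hypothetical voxel values, not the study's normalization or binning pipeline) is:

```python
def cumulative_percentile(values, pct):
    """Read the pct-th percentile off the empirical cumulative distribution
    of voxel values, as one might extract nCBV C99 from a normalized CBV
    map. Sketch only; real pipelines normalize and bin the rCBV map first."""
    s = sorted(values)
    # index of the first value whose cumulative fraction reaches pct/100
    k = max(0, min(len(s) - 1, int(round(pct / 100 * len(s))) - 1))
    return s[k]

# Hypothetical nCBV voxel values for one tumor ROI
voxels = [0.5, 0.8, 1.1, 1.4, 2.0, 2.6, 3.1, 3.9, 5.2, 7.8]
print(cumulative_percentile(voxels, 99))
```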
A possible early sign of hydroxychloroquine macular toxicity.
Brandao, Livia M; Palmowski-Wolfe, Anja M
2016-02-01
Hydroxychloroquine (HCQ) has a low risk of retinal toxicity which increases dramatically with a cumulative dose of >1000 g. Here we report a case of HCQ macular toxicity presentation in a young patient with a cumulative dose of 438 g. A 15-year-old female started attending annual consultations for retinal toxicity screening in our clinic after 3 years of HCQ treatment for juvenile idiopathic dermatomyositis. She had been diagnosed at age 12 and had been on hydroxychloroquine 200 mg/day, cyclosporin 150 mg/day and vitamin D3 since. Screening consultations included: complete ophthalmologic examination, automated perimetry (AP, M Standard, Octopus 101, Haag-Streit), multifocal electroretinogram (VERIS 6.06™, FMSIII), optical coherence tomography (OCT, fast macular protocol, Cirrus SD-OCT, Carl Zeiss), fundus autofluorescence imaging (Spectralis OCT, Heidelberg Engineering Inc.) and color testing (Farnsworth-Panel-D-15). After 5 years of treatment, AP demonstrated reduced sensitivity in only one extra-foveal point in each eye (p < 0.2). Even though other exams showed no alteration and the cumulative dose was only around 353 g, consultations were increased to every 6 months. After a 2-year follow-up, that is, 7 years of HCQ treatment, a bilateral paracentral macular thinning was evident on OCT, suggestive of bull's eye maculopathy. However, the retinal pigmented epithelium appeared intact and AP was completely normal in both eyes. Further evaluation with ganglion cell analysis (GCA = ganglion cell + inner plexiform layer, Cirrus SD-OCT, Carl Zeiss) showed a concentric thinning of this layer in the same area. Although daily and cumulative doses were still under the high-toxicity-risk parameters, HCQ was suspended. At a follow-up 1 year later, visual acuity was 20/16 without any further changes in OCT or on any other exam.
This may be the first case report of insidious bull's eye maculopathy exclusively identified using OCT thickness analysis, in a patient in whom both cumulative and daily dosages were under the high-risk parameters for screening and the averages reported in studies. As ganglion cell analysis has only recently become available, further studies are needed to understand toxicity mechanisms and maybe review screening recommendations.
Steroid therapy and the risk of osteonecrosis in SARS patients: a dose-response meta-analysis.
Zhao, R; Wang, H; Wang, X; Feng, F
2017-03-01
This meta-analysis synthesized current evidence from 10 trials to evaluate the association between steroid therapy and osteonecrosis incidence in patients with severe acute respiratory syndrome (SARS). Our results suggest that higher cumulative doses and longer treatment durations of steroids are more likely to lead to the development of osteonecrosis in SARS patients. The link between steroid treatment and the risk of osteonecrosis in SARS patients remains unknown. The present meta-analysis aimed to examine the dose-response association between steroid therapy and osteonecrosis incidence in SARS patients. The sex differences in the development of steroid-induced osteonecrosis were also examined. We searched PubMed, Web of Science, CNKI, and WANFANG for studies that involved steroid therapy and reported osteonecrosis data in SARS patients. Two authors independently extracted the data from the individual studies, and the rate ratio (RR) of osteonecrosis was calculated using random-effect models. Ten studies with 1137 recovered SARS patients met the inclusion criteria. Close relationships between osteonecrosis incidence and both the cumulative dose and treatment duration of steroids were observed. The summary RR of osteonecrosis was 1.57 (95% confidence interval (CI) 1.30-1.89, p < 0.001) per 5.0 g increase in the cumulative dose of steroids and was 1.29 (95% CI 1.09-1.53, p = 0.003) for each 10-day increment of increase in treatment duration. The relationship was non-linear in both cases (p for non-linearity < 0.001 and 0.022, respectively). There were no significant differences in the risk of developing osteonecrosis between the male and female patients (RR 0.01, 95% CI -0.03 to 0.06, p = 0.582). SARS patients who received higher cumulative doses and longer treatment durations of steroids were more likely to develop osteonecrosis, and there were no sex differences in this dose-dependent side effect.
Our findings suggest that it is important to reduce osteonecrosis risk by modifying the cumulative dose and the treatment duration of steroids in SARS patients.
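Under a log-linear dose-response assumption, a summary RR reported per 5 g increment can be rescaled to other dose increments; a sketch of that arithmetic (the linear part only, since the abstract reports a non-linear relationship) is:

```python
import math

def rr_at_dose(rr_per_unit, unit, dose):
    """Log-linear dose-response: rescale a summary RR reported per `unit`
    of exposure to an arbitrary dose increment. Illustrative only; the
    meta-analysis above found non-linearity, so this captures only the
    log-linear component of the trend."""
    return math.exp(math.log(rr_per_unit) * dose / unit)

# RR 1.57 per 5.0 g cumulative steroid dose (from the abstract),
# rescaled to a 10 g increment
print(rr_at_dose(1.57, 5.0, 10.0))
```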
Jung, Jiwon; Moon, Song Mi; Jang, Hee-Chang; Kang, Cheol-In; Jun, Jae-Bum; Cho, Yong Kyun; Kang, Seung-Ji; Seo, Bo-Jeong; Kim, Young-Joo; Park, Seong-Beom; Lee, Juneyoung; Yu, Chang Sik; Kim, Sung-Han
2018-01-01
The aim of this study was to investigate the incidence and risk factors of postoperative pneumonia (POP) within 1 year after cancer surgery in patients with the five most common cancers (gastric, colorectal, lung, breast cancer, and hepatocellular carcinoma [HCC]) in South Korea. This was a multicenter, retrospective cohort study performed at five nationwide cancer centers. The number of cancer patients in each center was allocated by the proportion of cancer surgery. Adult patients were randomly selected according to the allocated number, among those who underwent cancer surgery from January to December 2014 within 6 months after diagnosis of cancer. One-year cumulative incidence of POP was estimated using Kaplan-Meier analysis. A univariable Cox proportional hazards regression analysis was performed to identify risk factors for POP development. In the multivariable analysis, confounders were adjusted for using a multiple Cox proportional hazards regression model. Among the total 2000 patients, the numbers of patients with gastric cancer, colorectal cancer, lung cancer, breast cancer, and HCC were 497 (25%), 525 (26%), 277 (14%), 552 (28%), and 149 (7%), respectively. Overall, the 1-year cumulative incidence of POP was 2.0% (95% CI, 1.4-2.6). The 1-year cumulative incidences in each cancer were as follows: lung 8.0%, gastric 1.8%, colorectal 1.0%, HCC 0.7%, and breast 0.4%. In multivariable analysis, older age, higher Charlson comorbidity index (CCI) score, ulcer disease, history of pneumonia, and smoking were related with POP development. In conclusion, the 1-year cumulative incidence of POP in the five most common cancers was 2%. Older age, higher CCI scores, smoking, ulcer disease, and previous pneumonia history increased the risk of POP development in cancer patients. © 2017 The Authors. Cancer Medicine published by John Wiley & Sons Ltd.
Variation in Lake Michigan alewife (Alosa pseudoharengus) thiaminase and fatty acids composition
Honeyfield, D.C.; Tillitt, D.E.; Fitzsimons, J.D.; Brown, S.B.
2010-01-01
Thiaminase activity of alewife (Alosa pseudoharengus) is variable across Lake Michigan, yet factors that contribute to the variability in alewife thiaminase activity are unknown. The fatty acid content of Lake Michigan alewife has not been previously reported. Analysis of 53 Lake Michigan alewives found a positive correlation between thiaminase activity and the following fatty acids: C22:1n9, the sum of omega-6 fatty acids (Sw6), and the sum of the polyunsaturated fatty acids. Thiaminase activity was negatively correlated with C15:0, C16:0, C17:0, C18:0, C20:0, C22:0, C24:0, C18:1n9t, C20:3n3, C22:2, and the sum of all saturated fatty acids (SAFA). Multivariate regression analysis resulted in three variables (C18:1n9t, Sw6, SAFA) that explained 71% (R2=0.71, P<0.0001) of the variation in thiaminase activity. Because the fatty acid content of an organism is related to its food source, diet may be an important factor modulating alewife thiaminase activity. These data suggest there is an association between fatty acids and thiaminase activity in Lake Michigan alewife.
Fincel, Mark J.; James, Daniel A.; Chipps, Steven R.; Davis, Blake A.
2014-01-01
Diet studies have traditionally been used to determine prey use and food web dynamics, while stable isotope analysis provides for a time-integrated approach to evaluate food web dynamics and characterize energy flow in aquatic systems. Direct comparison of the two techniques is rare and difficult to conduct in large, species rich systems. We compared changes in walleye Sander vitreus trophic position (TP) derived from paired diet content and stable isotope analysis. Individual diet-derived TP estimates were dissimilar to stable isotope-derived TP estimates. However, cumulative diet-derived TP estimates integrated from May 2001 to May 2002 corresponded to May 2002 isotope-derived estimates of TP. Average walleye TP estimates from the spring season appear representative of feeding throughout the entire previous year.
Xu, Wenti; Chen, Tianmu; Dong, Xiaochun; Kong, Mei; Lv, Xiuzhi; Li, Lin
2017-01-01
School-based influenza-like-illness (ILI) syndromic surveillance can be an important part of influenza community surveillance by providing early warnings for outbreaks and leading to a fast response. From September 2012 to December 2014, syndromic surveillance of ILI was carried out in 4 county-level schools. The cumulative sum method (CUSUM) was used to detect abnormal signals. A susceptible-exposed-infectious/asymptomatic-recovered (SEIAR) model was fit to the influenza outbreak without control measures and compared with the actual influenza outbreak to evaluate the effectiveness of early control efforts. The ILI incidence rate in 2014 (14.51%) was higher than the rates in 2013 (5.27%) and 2012 (3.59%). Ten school influenza outbreaks were detected by CUSUM. Each outbreak showed high transmissibility, with a median uncontrolled reproduction number (Runc) of 4.62. The interventions in each outbreak were highly effective, and all controlled reproduction numbers (Rcon) were 0. Early intervention was highly effective within the school-based ILI syndromic surveillance. Syndromic surveillance within schools can play an important role in controlling influenza outbreaks.
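A one-sided upper CUSUM of the kind used for outbreak detection can be sketched as follows; the target level, allowance k and threshold h below are hypothetical, not the values used in this surveillance system:

```python
def cusum_alarms(counts, target, k, h):
    """One-sided upper CUSUM for daily ILI counts: accumulate the excess
    of each count over target + k, and raise an alarm when the running
    sum crosses h. Parameters here are illustrative only."""
    s, alarms = 0.0, []
    for day, x in enumerate(counts):
        s = max(0.0, s + (x - target - k))
        if s > h:
            alarms.append(day)
            s = 0.0  # reset after signalling
    return alarms

# Hypothetical daily ILI counts with an outbreak on days 4-6
counts = [2, 3, 2, 4, 9, 11, 12, 3, 2]
print(cusum_alarms(counts, target=3.0, k=0.5, h=5.0))
```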
A comparison of interteaching and lecture in the college classroom.
Saville, Bryan K; Zinn, Tracy E; Neef, Nancy A; Van Norman, Renee; Ferreri, Summer J
2006-01-01
Interteaching is a new method of classroom instruction that is based on behavioral principles but offers more flexibility than other behaviorally based methods. We examined the effectiveness of interteaching relative to a traditional form of classroom instruction: the lecture. In Study 1, participants in a graduate course in special education took short quizzes after alternating conditions of interteaching and lecture. Quiz scores following interteaching were higher than quiz scores following lecture, although both methods improved performance relative to pretest measures. In Study 2, we also alternated interteaching and lecture but counterbalanced the conditions across two sections of an undergraduate research methods class. After each unit of information, participants from both sections took the same test. Again, test scores following interteaching were higher than test scores following lecture. In addition, students correctly answered more interteaching-based questions than lecture-based questions on a cumulative final test. In both studies, the majority of students reported a preference for interteaching relative to traditional lecture. In sum, the results suggest that interteaching may be an effective alternative to traditional lecture-based methods of instruction.
Launch pad lightning protection effectiveness
NASA Technical Reports Server (NTRS)
Stahmann, James R.
1991-01-01
According to the striking distance theory, a lightning leader strikes the nearest grounded point on its last jump to earth, a jump whose length corresponds to the striking distance; on this basis, the probability of striking a given point on a structure in the presence of other points can be estimated. The lightning strokes are divided into deciles, each having an average peak current and striking distance. The striking distances are used as radii from the points to generate windows of approach through which the leader must pass to reach a designated point. The projections of the windows on a horizontal plane, as they are rotated through all possible angles of approach, define an area that can be multiplied by the decile stroke density to arrive at the probability of strokes having that decile's average striking distance. The sum of all decile probabilities gives the cumulative probability for all strokes. The techniques can be applied to NASA-Kennedy launch pad structures to estimate the lightning protection effectiveness for the crane, gaseous oxygen vent arm, and other points. Streamers from sharp points on the structure provide protection for surfaces having large radii of curvature. The effects of nearby structures can also be estimated.
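The decile summation described above reduces to a dot product of per-decile exposure areas and stroke densities; a sketch (all numbers hypothetical, for illustration only) is:

```python
def strike_probability(window_areas, decile_density):
    """Cumulative strike probability for a point: each decile's projected
    window area (m^2) times that decile's stroke density (strokes/m^2/yr),
    summed over all ten deciles. Inputs below are hypothetical."""
    return sum(a * d for a, d in zip(window_areas, decile_density))

# Hypothetical projected window areas per current decile (m^2) ...
areas = [120.0, 150.0, 180.0, 210.0, 240.0,
         270.0, 300.0, 330.0, 360.0, 390.0]
# ... and a uniform stroke density per decile (strokes/m^2/yr)
density = [1e-6] * 10
print(strike_probability(areas, density))  # expected strokes/yr at the point
```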
Organizing pneumonia and occupational and environmental risk factors: a case-control study.
Jobard, Stéphanie; Chaigne, Benjamin; Marchand-Adam, Sylvain; Lasfargues, Gérard; Diot, Elisabeth
2017-11-01
A single-center case-control study was carried out to investigate the relationship between occupational and environmental exposure and organizing pneumonia (OP). Thirty-seven cases of OP, including 25 cases of cryptogenic OP, and 111 controls were included. Occupational exposure was assessed retrospectively by an industrial hygienist and an occupational physician, through semi-quantitative estimates of exposure. An exposure score was calculated for each subject, based on probability, intensity, daily frequency, and duration of exposure for each period of employment. The final cumulative exposure score was obtained by summing exposure scores for all periods of employment. Significant associations with all-cause OP were observed for exposure to tetrachloroethylene (OR 13.33, CI 95% 1.44-123.5) and silica (OR 6.61, CI 95% 1.16-37.71). A significant association with cryptogenic OP was observed only for tetrachloroethylene (OR 31.6, CI 95% 1.64-610.8). No associations were found for environmental exposure. Despite its low statistical power, this work suggests that occupational risk factors could be involved in OP.
Modified Exponential Weighted Moving Average (EWMA) Control Chart on Autocorrelation Data
NASA Astrophysics Data System (ADS)
Herdiani, Erna Tri; Fandrilla, Geysa; Sunusi, Nurtiti
2018-03-01
In general, observations in statistical process control are assumed to be mutually independent. However, this assumption is often violated in practice. Consequently, statistical process control charts have been developed for interrelated processes, including Shewhart, cumulative sum (CUSUM), and exponentially weighted moving average (EWMA) control charts for autocorrelated data. It has been noted that such charts are not suitable if the same control limits as in the independent case are used. For this reason, it is necessary to apply a time series model in building the control chart. A classical control chart for independent variables is usually applied to the residual process. This procedure is permitted provided that the residuals are independent. In 1978, a Shewhart modification for the autoregressive process was introduced, using the distance between the sample mean and the target value compared to the standard deviation of the autocorrelated process. In this paper we examine the mean of the EWMA for an autocorrelated process, derived from Montgomery and Patel. Performance was investigated by examining the average run length (ARL) based on the Markov chain method.
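The basic EWMA recursion underlying such charts can be sketched as follows; note this is the independent-data version with phase-I estimates of the in-control mean and standard deviation, whereas the modified charts discussed above would instead be run on residuals of a fitted time series model:

```python
def ewma_chart(data, mu, sigma, lam=0.2, L=3.0):
    """Basic EWMA control chart: z_t = lam*x_t + (1-lam)*z_{t-1}, flagged
    against steady-state limits mu +/- L*sigma*sqrt(lam/(2-lam)).
    mu and sigma are in-control (phase I) estimates; for autocorrelated
    data one would apply this to time-series residuals instead."""
    half_width = L * sigma * (lam / (2 - lam)) ** 0.5
    z, flagged = mu, []
    for t, x in enumerate(data):
        z = lam * x + (1 - lam) * z
        if abs(z - mu) > half_width:
            flagged.append(t)
    return flagged

# Hypothetical series with a mean shift starting at index 8
series = [10, 11, 9, 10, 10, 11, 9, 10, 14, 15, 16, 15, 14]
print(ewma_chart(series, mu=10.0, sigma=1.0))
```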
Quantum Dynamics Study of the Isotopic Effect on Capture Reactions: HD, D2 + CH3
NASA Technical Reports Server (NTRS)
Wang, Dunyou; Kwak, Dochan (Technical Monitor)
2002-01-01
Time-dependent wave-packet-propagation calculations are reported for the isotopic reactions HD + CH3 and D2 + CH3, in six degrees of freedom and for zero total angular momentum. Initial-state-selected reaction probabilities for different initial rotational-vibrational states are presented in this study. This study shows that excitation of HD (D2) enhances the reactivity, whereas excitation of the CH3 umbrella mode has the opposite effect. This is consistent with the reaction of H2 + CH3. The comparison of these three isotopic reactions also shows the isotopic effects in the initial-state-selected reaction probabilities. The cumulative reaction probabilities (CRPs) are obtained by summing over initial-state-selected reaction probabilities. The energy-shift approximation, which accounts for the contribution of degrees of freedom missing from the six-dimensional calculation, is employed to obtain approximate full-dimensional CRPs. The rate constant comparison shows that the H2 + CH3 reaction has the highest reactivity, followed by HD + CH3, with D2 + CH3 the smallest.
Dynamic probability control limits for risk-adjusted Bernoulli CUSUM charts.
Zhang, Xiang; Woodall, William H
2015-11-10
The risk-adjusted Bernoulli cumulative sum (CUSUM) chart developed by Steiner et al. (2000) is an increasingly popular tool for monitoring clinical and surgical performance. In practice, however, the use of a fixed control limit for the chart leads to quite variable in-control average run length performance for patient populations with different risk score distributions. To overcome this problem, we determine simulation-based dynamic probability control limits (DPCLs) patient by patient for the risk-adjusted Bernoulli CUSUM charts. By maintaining the probability of a false alarm at a constant level conditional on no false alarm for previous observations, our risk-adjusted CUSUM charts with DPCLs have consistent in-control performance at the desired level with approximately geometrically distributed run lengths. Our simulation results demonstrate that our method does not rely on any information or assumptions about the patients' risk distributions. The use of DPCLs for risk-adjusted Bernoulli CUSUM charts allows each chart to be designed for the corresponding particular sequence of patients for a surgeon or hospital. Copyright © 2015 John Wiley & Sons, Ltd.
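For orientation, the fixed-limit risk-adjusted Bernoulli CUSUM in the spirit of Steiner et al. (2000), the baseline that DPCLs replace, can be sketched as follows; the odds ratio, risks, and limit h below are hypothetical:

```python
import math

def risk_adjusted_cusum(outcomes, risks, odds_ratio=2.0, h=4.0):
    """Risk-adjusted Bernoulli CUSUM with a FIXED control limit h (the
    Steiner-style baseline; the paper above replaces h with dynamic
    probability control limits). Each patient contributes a
    log-likelihood-ratio weight based on their pre-operative risk p.
    All parameter values here are hypothetical."""
    s, alarms = 0.0, []
    for t, (y, p) in enumerate(zip(outcomes, risks)):
        # Under H1 the odds of failure are odds_ratio times the H0 odds.
        denom = 1 - p + odds_ratio * p
        w_fail = math.log(odds_ratio / denom)  # weight if patient dies/fails
        w_surv = math.log(1 / denom)           # weight if patient survives
        s = max(0.0, s + (w_fail if y else w_surv))
        if s > h:
            alarms.append(t)
            s = 0.0
    return alarms

# Hypothetical outcome sequence (1 = adverse outcome) at 10% baseline risk
outcomes = [0, 0, 1, 1, 1, 1, 0, 1, 1]
risks = [0.1] * 9
print(risk_adjusted_cusum(outcomes, risks, odds_ratio=2.0, h=2.0))
```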
The respiration pattern as an indicator of the anaerobic threshold.
Mirmohamadsadeghi, Leila; Vesin, Jean-Marc; Lemay, Mathieu; Deriaz, Olivier
2015-08-01
The anaerobic threshold (AT) is a good index of personal endurance but requires a laboratory setting to be determined. It is important to develop easy field techniques for measuring the AT in order to rapidly adapt training programs. In the present study, it is postulated that the variability of the respiratory parameters decreases with exercise intensity (especially at the AT level). The aim of this work was to assess, in healthy trained subjects, the putative relationships between the variability of some respiration parameters and the AT. The heart rate and respiratory variables (volume, rate) were measured during an incremental exercise performed on a treadmill by healthy moderately trained subjects. Results show a decrease in the variance of 1/tidal volume with the intensity of exercise. Consequently, the cumulated variance (the sum of the variances measured at each level of the exercise) follows an exponential relationship with respect to the intensity, eventually reaching a plateau. The amplitude of this plateau is closely related to the AT (r=-0.8). It is concluded that the AT is related to the variability of the respiration.
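The cumulated variance described above is just a running sum of per-stage variances; a sketch with hypothetical per-stage samples of a respiratory parameter (not the study's data) is:

```python
import statistics

def cumulated_variance(stages):
    """Cumulated variance across exercise stages: the variance of the
    chosen respiratory parameter (e.g. 1/tidal volume) at each workload,
    summed stage by stage. Because per-stage variance shrinks with
    intensity, the running sum flattens toward a plateau. Numbers in the
    example below are hypothetical."""
    running, total = [], 0.0
    for samples in stages:
        total += statistics.pvariance(samples)
        running.append(total)
    return running

# Hypothetical samples at three increasing workloads: spread narrows
stages = [[2.0, 2.4, 1.6, 2.2], [2.0, 2.2, 1.9, 1.9], [2.0, 2.05, 1.95, 2.0]]
print(cumulated_variance(stages))  # increments shrink toward a plateau
```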
A random-sum Wilcoxon statistic and its application to analysis of ROC and LROC data.
Tang, Liansheng Larry; Balakrishnan, N
2011-01-01
The Wilcoxon-Mann-Whitney statistic is commonly used for a distribution-free comparison of two groups. One requirement for its use is that the sample sizes of the two groups are fixed. This is violated in some applications such as medical imaging studies and diagnostic marker studies; in the former, the violation occurs because the number of correctly localized abnormal images is random, while in the latter it is due to some subjects not having observable measurements. For this reason, we propose here a random-sum Wilcoxon statistic for comparing two groups in the presence of ties, and derive its variance as well as its asymptotic distribution for large sample sizes. The proposed statistic includes the regular Wilcoxon rank-sum statistic as a special case. Finally, we apply the proposed statistic for summarizing location response operating characteristic data from a liver computed tomography study, and also for summarizing diagnostic accuracy of biomarker data.
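The fixed-sample-size building block, the Wilcoxon rank-sum with midranks for ties, can be sketched as follows (the paper's random-sum extension additionally lets the group sizes themselves be random; that part is not shown):

```python
def wilcoxon_rank_sum(x, y):
    """Wilcoxon-Mann-Whitney rank-sum statistic for group x, using
    midranks to handle ties. Sketch of the classical fixed-sample-size
    statistic only, not the random-sum version proposed in the paper."""
    combined = sorted(x + y)
    # midrank of a value = average of the ranks of its tied block
    midrank = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        midrank[combined[i]] = (i + 1 + j) / 2  # average of ranks i+1 .. j
        i = j
    return sum(midrank[v] for v in x)

# Toy data with a tie at 2: ranks are 1, 2.5, 2.5, 4, 5, 6
print(wilcoxon_rank_sum([1, 3, 5], [2, 2, 4]))
```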
Renormalization group analysis of B→π form factors with B-meson light-cone sum rules
NASA Astrophysics Data System (ADS)
Shen, Yue-Long; Wei, Yan-Bing; Lü, Cai-Dian
2018-03-01
Within the framework of the B-meson light-cone sum rules, we review the calculation of radiative corrections to the three B→π transition form factors at leading power in Λ/mb. To resum large logarithmic terms, we perform the complete renormalization group evolution of the correlation function. We employ the integral transformation which diagonalizes the evolution equations of the jet function and the B-meson light-cone distribution amplitude to solve these evolution equations and obtain renormalization group improved sum rules for the B→π form factors. Results for the form factors are extrapolated to the whole physical q2 region and are compared with those of other approaches. The effect of B-meson three-particle light-cone distribution amplitudes, which will contribute to the form factors at next-to-leading power in Λ/mb at tree level, is not considered in this paper.
Male chromosomal polymorphisms reduce cumulative live birth rate for IVF couples.
Ni, Tianxiang; Li, Jing; Chen, Hong; Gao, Yuan; Gao, Xuan; Yan, Junhao; Chen, Zi-Jiang
2017-08-01
Chromosomal polymorphisms are associated with infertility, but their effects on assisted reproductive outcomes remain contested, especially after IVF treatment. This study evaluated the role of chromosomal polymorphisms in each gender on IVF pregnancy outcomes. Four hundred and twenty-five infertile couples undergoing IVF treatment were divided into three groups: 214 couples with normal chromosomes (group A, control group), 86 couples with female polymorphisms (group B), and 125 couples with male polymorphisms (group C). The pregnancy outcomes after the first and cumulative transfer cycles were analyzed, and the main outcome measures were live birth rate (LBR) after the first transfer cycle and cumulative LBR after a complete IVF cycle. Comparison of pregnancy outcomes after the first transfer cycle among groups A, B, and C demonstrated a similar LBR as well as similar rates of implantation, clinical pregnancy, early miscarriage, and ongoing pregnancy (P > 0.05). However, the analysis of cumulative pregnancy outcomes indicated that, compared with group A, group C had a significantly lower LBR per cycle (80.4 vs 68.0%), for a rate ratio of 1.182 (95% CI 1.030 to 1.356, P = 0.01), and a significantly higher cumulative early miscarriage rate (EMR) among clinical pregnancies (7.2 vs 14.7%), for a rate ratio of 0.489 (95% CI 0.248 to 0.963, P = 0.035). Couples with chromosomal polymorphisms in only the male partner have poorer pregnancy outcomes after IVF treatment, manifesting as a higher cumulative EMR and a lower LBR after a complete cycle.
Evidence-based evaluation of the cumulative effects of ecosystem restoration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diefenderfer, Heida L.; Johnson, Gary E.; Thom, Ronald M.
Evaluating the cumulative effects of large-scale ecological restoration programs is necessary to inform adaptive ecosystem management and provide society with resilient and sustainable services. However, complex linkages between restorative actions and ecosystem responses make evaluations problematic. Despite long-term federal investments in restoring aquatic ecosystems, no standard evaluation method has been adopted and most programs focus on monitoring and analysis, not synthesis and evaluation. In this paper, we demonstrate a new transdisciplinary approach integrating techniques from evidence-based medicine, critical thinking, and cumulative effects assessment. Tiered hypotheses are identified using an ecosystem conceptual model. The systematic literature review at the core of evidence-based assessment becomes one of many lines of evidence assessed collectively, using critical thinking strategies and causal criteria from a cumulative effects perspective. As a demonstration, we analyzed data from 166 locations on the Columbia River and estuary representing 12 indicators of habitat and fish response to floodplain restoration actions intended to benefit threatened and endangered salmon. Synthesis of seven lines of evidence showed that hydrologic reconnection promoted macrodetritus export, prey availability, and fish access and feeding. The evidence was sufficient to infer cross-boundary, indirect, compounding and delayed cumulative effects, and suggestive of nonlinear, landscape-scale, and spatial density effects. On the basis of causal inferences regarding food web functions, we concluded that the restoration program has a cumulative beneficial effect on juvenile salmon. As a result, this evidence-based approach will enable the evaluation of restoration in complex coastal and riverine ecosystems where data have accumulated without sufficient synthesis.
An effective measure of childhood adversity that is valid with older adults.
Danielson, Ramona; Sanders, Gregory F
2018-06-12
Childhood adversity (CA) has life-long effects that we are just beginning to understand. The Midlife in the United States (MIDUS) data are a rich resource that could contribute to knowledge of the impact of CA in the later years but, while a number of CA items are included in MIDUS, a cumulative CA scale based on those items has not been created. Development of a CA scale would create numerous research opportunities within MIDUS and overcome some of the challenges of using a secondary dataset. The present study aimed to demonstrate that a cumulative measure of CA that is valid with older adults could be created using retrospective MIDUS Refresher study data (Ryff et al., 2016); the analysis included data collected from 2011 to 2014 from 2542 adults ages 23-76 (1017 adults ages 55-76). The present study provided a rationale for which measures of CA to include in a cumulative scale. The distribution of eight types of CA and the cumulative CA scale were consistent with findings from past studies of CA. The factor structure of the cumulative CA scale was similar to the original ACE study and included two factors: household dynamics and child abuse/neglect. Consistent with past studies, the CA scale predicted a negative association with life satisfaction and a positive association with number of chronic conditions. This study demonstrated that an effective cumulative measure of CA could be created that would be of value to other studies using MIDUS data to explore outcomes with older adults. Copyright © 2018 Elsevier Ltd. All rights reserved.
Evidence-based evaluation of the cumulative effects of ecosystem restoration
Diefenderfer, Heida L.; Johnson, Gary E.; Thom, Ronald M.; ...
2016-03-18
Evaluating the cumulative effects of large-scale ecological restoration programs is necessary to inform adaptive ecosystem management and provide society with resilient and sustainable services. However, complex linkages between restorative actions and ecosystem responses make evaluations problematic. Despite long-term federal investments in restoring aquatic ecosystems, no standard evaluation method has been adopted and most programs focus on monitoring and analysis, not synthesis and evaluation. In this paper, we demonstrate a new transdisciplinary approach integrating techniques from evidence-based medicine, critical thinking, and cumulative effects assessment. Tiered hypotheses are identified using an ecosystem conceptual model. The systematic literature review at the core of evidence-based assessment becomes one of many lines of evidence assessed collectively, using critical thinking strategies and causal criteria from a cumulative effects perspective. As a demonstration, we analyzed data from 166 locations on the Columbia River and estuary representing 12 indicators of habitat and fish response to floodplain restoration actions intended to benefit threatened and endangered salmon. Synthesis of seven lines of evidence showed that hydrologic reconnection promoted macrodetritus export, prey availability, and fish access and feeding. The evidence was sufficient to infer cross-boundary, indirect, compounding and delayed cumulative effects, and suggestive of nonlinear, landscape-scale, and spatial density effects. On the basis of causal inferences regarding food web functions, we concluded that the restoration program has a cumulative beneficial effect on juvenile salmon. As a result, this evidence-based approach will enable the evaluation of restoration in complex coastal and riverine ecosystems where data have accumulated without sufficient synthesis.
Assessing cumulative impacts within state environmental review frameworks in the United States
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma Zhao, E-mail: zma@nrc.umass.ed; Becker, Dennis R., E-mail: drbecker@umn.ed; Kilgore, Michael A., E-mail: mkilgore@umn.ed
Cumulative impact assessment (CIA) is the process of systematically assessing a proposed action's cumulative environmental effects in the context of past, present, and future actions, regardless of who undertakes such actions. Previous studies have examined CIA efforts at the federal level but little is known about how states assess the cumulative impacts of nonfederal projects. By examining state environmental review statutes, administrative rules, agency-prepared materials, and a national survey of the administrators of state environmental review programs, this study identifies the legal and administrative frameworks for CIA. It examines current CIA practice, discusses the relationship between CIA policy and its implementation, and explores the opportunities for improvement. The results of the study show that twenty-nine state environmental review programs across twenty-six states required the assessment of cumulative environmental impacts. More than half of these programs have adopted specific procedures for implementing their policies. Some programs assessed cumulative impacts using a standard review document, and others have created their own documentations incorporated into applications for state permits or funding. The majority of programs have adopted various scales, baselines, significance criteria, and coordination practices in their CIA processes. Mixed methods were generally used for data collection and analysis; qualitative methods were more prevalent than quantitative methods. The results also suggest that a program with comprehensive and consistent environmental review policies and procedures does not always imply extensive CIA requirements and practices. Finally, this study discusses the potential for improving existing CIA processes and promoting CIA efforts in states without established environmental review programs.
Gavino, V C; Milo, G E; Cornwell, D G
1982-03-01
Image analysis was used for the automated measurement of colony frequency (f) and colony diameter (d) in cultures of smooth muscle cells. Initial studies with the inverted microscope showed that the number of cells (N) in a colony varied directly with d: log N = 1.98 log d - 3.469. Image analysis generated the complement of a cumulative distribution for f as a function of d. The number of cells in each segment of the distribution function was calculated by multiplying f and the average N for the segment. These data were displayed as a cumulative distribution function. The total number of colonies (fT) and the total number of cells (NT) were used to calculate the average colony size (NA). Population doublings (PD) were then expressed as log2 NA. Image analysis confirmed previous studies in which colonies were sized and counted with an inverted microscope. Thus, image analysis is a rapid and automated technique for the measurement of clonal growth.
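The reported relations chain together into a short sketch: colony diameter to cell count via the fitted regression, then average colony size NA = NT/fT and population doublings PD = log2 NA. The histogram bins below are hypothetical, and the diameter units are an assumption.

```python
import math

def cells_in_colony(d):
    """Cell count N from colony diameter d via the fitted power law
    log10 N = 1.98 log10 d - 3.469 (diameter units as in the study;
    the specific units are an assumption here)."""
    return 10 ** (1.98 * math.log10(d) - 3.469)

def population_doublings(freqs, diams):
    """Average colony size NA = NT / fT and PD = log2 NA, from per-bin
    colony frequencies and representative diameters."""
    fT = sum(freqs)
    NT = sum(f * cells_in_colony(d) for f, d in zip(freqs, diams))
    NA = NT / fT
    return math.log2(NA)

# Hypothetical histogram: 30 colonies near d=100, 10 colonies near d=200
print(population_doublings([30, 10], [100.0, 200.0]))
```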
Exponential Sum-Fitting of Dwell-Time Distributions without Specifying Starting Parameters
Landowne, David; Yuan, Bin; Magleby, Karl L.
2013-01-01
Fitting dwell-time distributions with sums of exponentials is widely used to characterize histograms of open- and closed-interval durations recorded from single ion channels, as well as for other physical phenomena. However, it can be difficult to identify the contributing exponential components. Here we extend previous methods of exponential sum-fitting to present a maximum-likelihood approach that consistently detects all significant exponentials without the need for user-specified starting parameters. Instead of searching for exponentials, the fitting starts with a very large number of initial exponentials with logarithmically spaced time constants, so that none are missed. Maximum-likelihood fitting then determines the areas of all the initial exponentials keeping the time constants fixed. In an iterative manner, with refitting after each step, the analysis then removes exponentials with negligible area and combines closely spaced adjacent exponentials, until only those exponentials that make significant contributions to the dwell-time distribution remain. There is no limit on the number of significant exponentials and no starting parameters need be specified. We demonstrate fully automated detection for both experimental and simulated data, as well as for classical exponential-sum-fitting problems. PMID:23746510
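A simplified sketch of the fitting idea, under stated assumptions: start with many exponentials at fixed, logarithmically spaced time constants, estimate only their areas by maximum likelihood (here via EM on a mixture of exponentials), then drop negligible components. The paper's full procedure also merges closely spaced components and refits iteratively, which is omitted here; the dwell-time data are simulated.

```python
import math
import random

def fit_exponential_areas(dwells, taus, iters=100):
    """Maximum-likelihood areas for a mixture of exponential densities with
    FIXED time constants, via EM updating only the mixture weights (a sketch
    of the paper's first stage; log-spaced taus ensure none are missed)."""
    k = len(taus)
    w = [1.0 / k] * k
    for _ in range(iters):
        totals = [0.0] * k
        for t in dwells:
            dens = [w[i] * math.exp(-t / taus[i]) / taus[i] for i in range(k)]
            s = sum(dens)
            for i in range(k):
                totals[i] += dens[i] / s   # responsibility of component i
        w = [c / len(dwells) for c in totals]
    return w

random.seed(1)
# Simulated dwell times: 70% with tau = 1 ms, 30% with tau = 20 ms
dwells = [random.expovariate(1.0) if random.random() < 0.7
          else random.expovariate(1 / 20.0) for _ in range(1500)]
taus = [10 ** (e / 4) for e in range(-4, 9)]   # log-spaced 0.1 .. 100 ms
areas = fit_exponential_areas(dwells, taus)
# Prune negligible components, as in the iterative removal step
kept = [(t, a) for t, a in zip(taus, areas) if a > 0.02]
print(kept)
```

With well-separated true time constants, the surviving components cluster around the true values even though no starting parameters were specified.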
Bicen, A Ozan; Lehtomaki, Janne J; Akyildiz, Ian F
2018-03-01
Molecular communication (MC) over a microfluidic channel with flow is investigated based on Shannon's channel capacity theorem and Fick's laws of diffusion. Specifically, the sum capacity for MC between a single transmitter and multiple receivers (broadcast MC) is studied. The transmitter communicates with each receiver over the microfluidic channel using a different type of signaling molecule. The transmitted molecules propagate through the microfluidic channel until reaching the corresponding receiver. Although the use of different types of molecules provides orthogonal signaling, the sum broadcast capacity may not scale with the number of receivers due to the physics of propagation (the interplay between convection and diffusion as a function of distance). In this paper, the performance of broadcast MC on a microfluidic chip is characterized by studying the physical geometry of the microfluidic channel and leveraging information theory. The convergence of the sum capacity for the microfluidic broadcast channel is analytically investigated based on the physical system parameters with respect to an increasing number of molecular receivers. The analysis presented here can be useful to predict the achievable information rate in microfluidic interconnects for biochemical computation and microfluidic multi-sample assays.
Nonlinear zero-sum differential game analysis by singular perturbation methods
NASA Technical Reports Server (NTRS)
Sinar, J.; Farber, N.
1982-01-01
A class of nonlinear, zero-sum differential games, exhibiting time-scale separation properties, can be analyzed by singular-perturbation techniques. The merits of such an analysis, leading to an approximate game solution, as well as the 'well-posedness' of the formulation, are discussed. This approach is shown to be attractive for investigating pursuit-evasion problems; the original multidimensional differential game is decomposed into a 'simple pursuit' (free-stream) game and two independent (boundary-layer) optimal-control problems. Using multiple time-scale boundary-layer models results in a pair of uniformly valid zero-order composite feedback strategies. The dependence of suboptimal strategies on relative geometry and own-state measurements is demonstrated by a three-dimensional, constant-speed example. For game analysis with realistic vehicle dynamics, the technique of forced singular perturbations and a variable modeling approach are proposed. Accuracy of the analysis is evaluated by comparison with the numerical solution of a time-optimal, variable-speed 'game of two cars' in the horizontal plane.
Storage flux uncertainty impact on eddy covariance net ecosystem exchange measurements
NASA Astrophysics Data System (ADS)
Nicolini, Giacomo; Aubinet, Marc; Feigenwinter, Christian; Heinesch, Bernard; Lindroth, Anders; Mamadou, Ossénatou; Moderow, Uta; Mölder, Meelis; Montagnani, Leonardo; Rebmann, Corinna; Papale, Dario
2017-04-01
Complying with several assumptions and simplifications, most carbon budget studies based on eddy covariance (EC) measurements quantify the net ecosystem exchange (NEE) by summing the flux obtained by EC (Fc) and the storage flux (Sc). Sc is the rate of change of CO2 within the so-called control volume below the EC measurement level, given by the difference between the instantaneous concentration profiles at the beginning and end of the EC averaging period, divided by the averaging period. While Sc tends to cancel out when cumulated over time, it can be significant over short time periods. The approaches used to estimate Sc fluxes vary widely, from measurements based on a single sampling point (usually located at the EC measurement height) to measurements based on several sampling profiles distributed within the control volume. Furthermore, the number of sampling points within each profile varies according to their height and the ecosystem typology. It follows that measurement accuracy increases with the sampling intensity within the control volume. In this work we use the experimental dataset collected during the ADVEX campaign, in which the Sc flux was measured at three similar forest sites by the use of 5 sampling profiles (towers). Our main objective is to quantify the impact of Sc measurement uncertainty on NEE estimates. Results show that different methods may produce substantially different Sc flux estimates, with problematic consequences in case high-frequency (half-hourly) data are needed for the analysis. However, the uncertainty on long-term estimates may be tolerable.
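The NEE bookkeeping described above can be sketched as follows. The single-profile storage-flux computation and all numbers are illustrative only, not the ADVEX multi-tower method.

```python
def storage_flux(profile_start, profile_end, layer_thickness, period_s):
    """Storage flux Sc (per unit ground area): change of the column-integrated
    CO2 amount below the EC level over the averaging period. Concentrations
    are per-layer means (e.g. mol m-3), thicknesses in m, period in s.
    A single-profile sketch of the multi-profile approach."""
    col_start = sum(c * dz for c, dz in zip(profile_start, layer_thickness))
    col_end = sum(c * dz for c, dz in zip(profile_end, layer_thickness))
    return (col_end - col_start) / period_s

def nee(fc, sc):
    """Net ecosystem exchange as the sum of the EC flux and the storage flux."""
    return fc + sc

# Hypothetical 3-layer profile over a 30-minute averaging period
sc = storage_flux([0.0170, 0.0168, 0.0166],   # profile at period start
                  [0.0172, 0.0169, 0.0166],   # profile at period end
                  [2.0, 10.0, 20.0],          # layer thicknesses (m)
                  1800.0)                     # averaging period (s)
print(nee(-8.0e-6, sc))  # illustrative mol m-2 s-1 scale
```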
Identification and analysis of recent temporal temperature trends for Dehradun, Uttarakhand, India
NASA Astrophysics Data System (ADS)
Piyoosh, Atul Kant; Ghosh, Sanjay Kumar
2018-05-01
Maximum and minimum temperatures (Tmax and Tmin) are indicators of changes in climate. In this study, observed and gridded Tmax and Tmin data for Dehradun are analyzed for the period 1901-2014. Observed data were obtained from the India Meteorological Department and the National Institute of Hydrology, whereas gridded data were obtained from the Climatic Research Unit (CRU). The efficacy of elevation-corrected CRU data was checked by cross-validation using data from various stations at different elevations. In both the observed and gridded data, major change points were detected using a cumulative sum (CuSum) chart. For Tmax, change points occur in the years 1974 and 1997, while for Tmin, in 1959 and 1986. Statistical significance of trends was tested in three sub-periods based on change points using the Mann-Kendall (MK) test, Sen's slope (SS) estimator, and the linear regression (LR) method. It was found that both Tmax and Tmin have a sequence of rising, falling, and rising trends in the sub-periods. Of the three methods used for trend tests, MK and SS indicated similar results, and the LR method also showed similar results in most cases. The root-mean-square error for actual and anomaly time series of CRU data was found to be within one standard deviation of the observed data, which indicates that the CRU data are very close to the observed data. The trends exhibited by the CRU data were also found to be similar to the observed data. Thus, CRU temperature data may be quite useful for various studies in regions where observational data are scarce.
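A minimal CuSum change-point sketch in the spirit of the chart used in the study: cumulative deviations from the series mean, with the extremum of the chart flagging the most likely shift. The temperature series below is invented.

```python
def cusum(series):
    """Cumulative sum chart of anomalies: running sum of deviations from
    the series mean. A mean shift shows up as an extremum of the chart."""
    mean = sum(series) / len(series)
    out, s = [], 0.0
    for x in series:
        s += x - mean
        out.append(s)
    return out

def change_point(series):
    """Index at which |CuSum| is largest: the most likely mean shift
    occurs just after this index."""
    c = cusum(series)
    return max(range(len(c)), key=lambda i: abs(c[i]))

# Hypothetical annual Tmax values with an upward shift starting at index 6
temps = [30.1, 30.0, 29.9, 30.2, 29.8, 30.0,
         31.1, 31.0, 31.2, 30.9, 31.1, 31.0]
print(change_point(temps))  # → 5 (last index before the shift)
```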
NASA Astrophysics Data System (ADS)
Kim, Garam; Kang, Hyung-Ku; Myoung, Jung-Goo
2017-03-01
Mesozooplankton community structure and environmental factors were monitored monthly at a fixed station off Tongyeong, on the southeastern coast of Korea, from 2011 to 2014 to better understand the variability of the mesozooplankton community in relation to changes in the marine environment. Total mesozooplankton density varied from 747 to 8,945 inds. m-3, with peaks in summer. The surface water temperature (r = 0.338, p < 0.05) and chlorophyll-a (Chl-a) concentration (r = 0.505, p < 0.001) were among the factors that may have induced the mesozooplankton peaks in summer. Copepods accounted for 71% of total mesozooplankton. Total copepod density, particularly that of cyclopoid copepods, increased during the study period. Cumulative sum plots and anomalies of the cyclopoid copepod density revealed a change of the cyclopoid density from negative to positive in June 2013. A positive relationship between cyclopoid copepods and the Chl-a concentration (r = 0.327, p < 0.05) appeared to be one of the reasons for the increase in cyclopoids. Dominant mesozooplankton species such as Paracalanus parvus s.l., Oikopleura spp., Evadne tergestina, Cirripedia larvae, Corycaeus affinis, Calanus sinicus, and Oithona similis accounted for 60% of total mesozooplankton density. Based on cluster analysis of the mesozooplankton community by year, the seasonal distinction among groups was different in 2014 compared to other years. P. parvus s.l. and its copepodites contributed most in all groups in all four years. Our results suggest that the high Chl-a concentration since 2013 may have caused the changes in mesozooplankton community structure in the study area.
Real-world utilization of darbepoetin alfa in cancer chemotherapy patients.
Pan, Xiaoyun Lucy; Nordstrom, Beth L; MacLachlan, Sharon; Lin, Junji; Xu, Hairong; Sharma, Anjali; Chandler, David; Li, Xiaoyan Shawn
2017-01-01
Objectives: To provide an understanding of darbepoetin alfa dose patterns in cancer patients undergoing myelosuppressive chemotherapy starting from 2011. Study design: This is a retrospective cohort study using a proprietary outpatient oncology database. Methods: Metastatic, solid tumor cancer patients receiving concomitant myelosuppressive chemotherapy and darbepoetin alfa with an associated hemoglobin <10 g/dL during 2011-2015 were identified. The analysis was restricted to the first continuous exposure to chemotherapy agents (maximum allowable gap of 90 days between consecutive exposures) with darbepoetin alfa for each eligible patient. Initial, maintenance, weekly, and cumulative doses of darbepoetin alfa were examined across all darbepoetin alfa users. Subgroup analyses were conducted by chemotherapy type, baseline hemoglobin level, year of chemotherapy, solid tumor type, and initial dosing schedule. Differences in weekly doses across subgroups were evaluated using Wilcoxon rank-sum tests. Results: Among 835 eligible patients, over 90% were 50 years or older. Mean chemotherapy course duration was 248 days, and mean duration of darbepoetin alfa treatment was 106 days. The mean weekly darbepoetin alfa dose was 110 µg. Patients received a mean of 4.3 darbepoetin alfa injections in the first chemotherapy course. There were no statistically significant differences (all P values > .05) in weekly dose by chemotherapy type, baseline hemoglobin level, year of chemotherapy, or solid tumor type. Conclusion: The average weekly darbepoetin alfa dose among metastatic cancer patients with chemotherapy-induced anemia in this study was 110 µg, which is lower than the labeled dosage for most adults. This estimate did not differ over time, or across chemotherapy regimens, baseline hemoglobin levels, or solid tumor types.
Roadway safety analysis methodology for Utah : final report.
DOT National Transportation Integrated Search
2016-12-01
This research focuses on the creation of a three-part Roadway Safety Analysis methodology that applies and automates the cumulative work of recently-completed roadway safety research. The first part is to prepare the roadway and crash data for analys...
Analysis of the strong coupling form factors of ΣbNB and ΣcND in QCD sum rules
NASA Astrophysics Data System (ADS)
Yu, Guo-Liang; Wang, Zhi-Gang; Li, Zhen-Yu
2017-08-01
In this article, we study the strong interaction of the vertices ΣbNB and ΣcND using the three-point QCD sum rules under two different Dirac structures. Considering the contributions of the vacuum condensates up to dimension 5 in the operator product expansion, the form factors of these vertices are calculated. Then, we fit the form factors into analytical functions and extrapolate them into time-like regions, which gives the coupling constants. Our analysis indicates that the coupling constants for these two vertices are GΣbNB = 0.43 ± 0.01 GeV-1 and GΣcND = 3.76 ± 0.05 GeV-1. Supported by the Fundamental Research Funds for the Central Universities (2016MS133)
Incremental Upgrade of Legacy Systems (IULS)
2001-04-01
The analysis task employed SEI's Feature-Oriented Domain Analysis methodology (see FODA reference) and included several phases: Context Analysis, Establish... Legacy, new Host, and upgrade system and software. The Feature-Oriented Domain Analysis approach (FODA, see SUM References) was used for this step... Feature-Oriented Domain Analysis (FODA) Feasibility Study (CMU/SEI-90-TR-21, ESD-90-TR-222); Software Engineering Institute, Carnegie Mellon University
Cumulative trauma and symptom complexity in children: a path analysis.
Hodges, Monica; Godbout, Natacha; Briere, John; Lanktree, Cheryl; Gilbert, Alicia; Kletzka, Nicole Taylor
2013-11-01
Multiple trauma exposures during childhood are associated with a range of psychological symptoms later in life. In this study, we examined whether the total number of different types of trauma experienced by children (cumulative trauma) is associated with the complexity of their subsequent symptomatology, where complexity is defined as the number of different symptom clusters simultaneously elevated into the clinical range. Children's symptoms in six different trauma-related areas (e.g., depression, anger, posttraumatic stress) were reported both by child clients and their caretakers in a clinical sample of 318 children. Path analysis revealed that accumulated exposure to multiple different trauma types predicts symptom complexity as reported by both children and their caretakers. Copyright © 2013 Elsevier Ltd. All rights reserved.
Cumulative Effects of Short-Term Polymetal Contamination on Soil Bacterial Community Structure
Ranjard, L.; Lignier, L.; Chaussod, R.
2006-01-01
In this study we evaluated the short-term effects of copper, cadmium, and mercury, added singly or in combination at different doses, on soil bacterial community structure using the bacterial automated ribosomal intergenic spacer analysis (B-ARISA) fingerprinting technique. Principal-component analysis of B-ARISA profiles allowed us to deduce the following order of impact: (Cu + Cd + Hg) >> Hg ≥ Cd > Cu. These results demonstrated that there was a cumulative effect of metal toxicity. Furthermore, the trend of modifications was consistent with the “hump-backed” relationships between biological diversity and disturbance described by Giller et al. (K. E. Giller, E. Witler, and S. P. McGrath, Soil Biol. Biochem. 30:1389-1414, 1998). PMID:16461728
Demetriades-Shah, T.H.; Fuchs, M.; Kanemasu, E.T.; Flitcroft, I.D.
1994-01-01
A strong correlation exists between intercepted solar radiation and crop growth. We cautioned that many derivations of the functional relationship between solar energy and biomass use cumulated data, and therefore have logical and arithmetic weaknesses. We examined the growth response of plants to solar energy by using rates of change, of both interception and growth. Our analysis revealed that measurements of light interception can only establish the relationship a posteriori. Replacing interception data with normalized random numbers did not change the quality of the relations. Several scientists have contested our views. This article reconfirms the general validity of our analysis and of our conclusions, that it is not possible to determine plant growth on the sole basis of intercepted solar energy.
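The authors' point that cumulation manufactures correlation is easy to demonstrate: two independent random series are essentially uncorrelated, but their cumulative sums correlate almost perfectly, because both are dominated by a shared monotonic trend. The series below are purely synthetic.

```python
import random

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def cumulate(xs):
    """Running (cumulative) sum of a sequence."""
    out, s = [], 0.0
    for v in xs:
        s += v
        out.append(s)
    return out

random.seed(0)
# Two INDEPENDENT positive series ("daily interception" and "daily growth")
a = [random.random() for _ in range(200)]
b = [random.random() for _ in range(200)]
print(pearson(a, b))                      # near 0: the rates are unrelated
print(pearson(cumulate(a), cumulate(b)))  # near 1: cumulation fakes a relation
```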
Williams-Sether, Tara; Asquith, William H.; Thompson, David B.; Cleveland, Theodore G.; Fang, Xing
2004-01-01
A database of incremental cumulative-rainfall values for storms that occurred in small urban and rural watersheds in north and south central Texas during the period from 1959 to 1986 was used to develop empirical, dimensionless, cumulative-rainfall hyetographs. Storm-quartile classifications were determined from the cumulative-rainfall values, which were divided into data groups on the basis of storm-quartile classification (first, second, third, fourth, and first through fourth combined), storm duration (0 to 6, 6 to 12, 12 to 24, 24 to 72, and 0 to 72 hours), and rainfall amount (1 inch or more). Removal of long leading tails, in effect, shortened the storm duration and, in some cases, affected the storm-quartile classification. Therefore, two storm groups, untrimmed and trimmed, were used for analysis. The trimmed storms generally are preferred for interpretation. For a 12-hour or less trimmed storm duration, approximately 49 percent of the storms are first quartile. For trimmed storm durations of 12 to 24 and 24 to 72 hours, 47 and 38 percent, respectively, of the storms are first quartile. For a trimmed storm duration of 0 to 72 hours, the first-, second-, third-, and fourth-quartile storms accounted for 46, 21, 20, and 13 percent of all storms, respectively. The 90th-percentile curve for first-quartile storms indicated about 90 percent of the cumulative rainfall occurs during the first 20 percent of the storm duration. The 10th-percentile curve for first-quartile storms indicated about 30 percent of the cumulative rainfall occurs during the first 20 percent of the storm duration. The 90th-percentile curve for fourth-quartile storms indicated about 33 percent of the cumulative rainfall occurs during the first 20 percent of the storm duration. The 10th-percentile curve for fourth-quartile storms indicated less than 5 percent of the cumulative rainfall occurs during the first 20 percent of the storm duration. 
Statistics for the empirical, dimensionless, cumulative-rainfall hyetographs are presented in the report along with hyetograph curves and tables. The curves and tables presented do not present exact mathematical relations but can be used to estimate distributions of rainfall with time for small drainage areas of less than about 160 square miles in urban and small rural watersheds in north and south central Texas.
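A sketch of building a dimensionless hyetograph and assigning a storm quartile. The classification rule used here (the quarter of the duration holding the largest depth share) is one common convention and may differ from the report's exact rule; the storm data are hypothetical.

```python
def dimensionless_hyetograph(times, cum_rain):
    """Normalize a storm's cumulative rainfall to dimensionless form:
    fraction of storm duration vs. fraction of total depth.
    Assumes times are strictly increasing with times[0] = 0."""
    T, R = times[-1], cum_rain[-1]
    return [(t / T, r / R) for t, r in zip(times, cum_rain)]

def storm_quartile(times, cum_rain):
    """Quartile classification: the quarter of the duration in which the
    largest share of the rainfall depth falls."""
    pts = dimensionless_hyetograph(times, cum_rain)

    def depth_at(f):
        # Linear interpolation of dimensionless depth at duration fraction f
        for (t0, r0), (t1, r1) in zip(pts, pts[1:]):
            if t0 <= f <= t1:
                return r0 + (r1 - r0) * (f - t0) / (t1 - t0)
        return pts[-1][1]

    shares = [depth_at((q + 1) / 4) - depth_at(q / 4) for q in range(4)]
    return 1 + shares.index(max(shares))

# Hypothetical front-loaded storm: most depth falls in the first quarter
times = [0, 1, 2, 3, 4, 5, 6]                  # hours
cum = [0.0, 1.6, 2.0, 2.2, 2.3, 2.4, 2.5]      # inches, cumulative
print(storm_quartile(times, cum))  # → 1 (first-quartile storm)
```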
CUSUM analysis of learning curves for the head-mounted microscope in phonomicrosurgery.
Chen, Ting; Vamos, Andrew C; Dailey, Seth H; Jiang, Jack J
2016-10-01
To observe the learning curve for the head-mounted microscope, using cumulative summation (CUSUM) analysis, in a phonomicrosurgery simulator that incorporates a magnetic phonomicrosurgery instrument tracking system (MPTS). Retrospective case series. Eight subjects (6 medical students and 2 surgeons inexperienced in phonomicrosurgery) performed phonomicrosurgical simulation cutting tasks while using the head-mounted microscope for 400 minutes in total. Two 20-minute sessions occurred each day for 10 days, with operation quality (Qs) and completion time (T) recorded after each session. CUSUM analysis of Qs and T was performed using subjects' performance data from trials completed with a traditional standing microscope as the success criteria. The motion parameters with the head-mounted microscope were significantly better than with the standing microscope (P < 0.01), but T was longer than with the standing microscope (P < 0.01). No subject successfully adapted to the head-mounted microscope, as assessed by CUSUM analysis. CUSUM analysis can objectively monitor the learning process associated with a phonomicrosurgical simulator system, ultimately providing a tool to assess learning status. Also, motion parameters determined by our MPTS showed that, although the head-mounted microscope provides better motion control, worse Qs and longer T resulted. This decrease in Qs is likely a result of the relatively unstable visual environment that it provides. Overall, the inexperienced surgeons participating in this study failed to adapt to the head-mounted microscope in our simulated phonomicrosurgery environment. Level of Evidence: 4. Laryngoscope, 126:2295-2300, 2016. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.
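A simplified CUSUM learning chart of the kind used for such analyses, under stated assumptions: each failure raises the score by (1 − p0) and each success lowers it by p0, where p0 is the acceptable failure rate. The study's exact scoring and decision boundaries differ in detail, and the outcome sequence below is invented.

```python
def cusum_learning_curve(outcomes, acceptable_failure_rate):
    """CUSUM learning chart: failure adds (1 - p0), success subtracts p0.
    A downward-trending chart indicates performance within the standard;
    a persistently rising chart indicates failure to adapt."""
    s, chart = 0.0, []
    for failed in outcomes:
        s += (1 - acceptable_failure_rate) if failed else -acceptable_failure_rate
        chart.append(s)
    return chart

# Hypothetical trainee: frequent early failures, then mostly successes
attempts = [True, True, False, True, False, False, False, True,
            False, False, False, False, False, False, False, False]
chart = cusum_learning_curve(attempts, 0.2)
print(chart[-1] < chart[3])  # → True: the chart falls once performance stabilizes
```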
Derks, Laura S M; Veenstra, Hidde J; Oomen, Karin P Q; Speleman, Lucienne; Stegeman, Inge
2016-01-01
To systematically review the current literature on treatment of third and fourth branchial pouch sinuses with endoscopic cauterization, including chemocauterization and electrocauterization, in comparison to surgical treatment. PubMed, Embase, and the Cochrane Library. We conducted a systematic search. Studies reporting original study data were included. After assessing the directness of evidence and risk of bias, studies with a low directness of evidence or a high risk of bias were excluded from analysis. Cumulative success rates after initial and recurrent treatments were calculated for both methods. A meta-analysis was conducted comparing the success rate of electrocauterization and surgery. A total of 2,263 articles were retrieved, of which seven retrospective and one prospective article were eligible for analysis. The cumulative success rate after primary treatment with cauterization ranged from 66.7% to 100%, and ranged from 77.8% to 100% after a second cauterization. The cumulative success rate after the first surgical treatment ranged from 50% to 100% and was 100% after the second surgical attempt. Meta-analysis on electrocauterization showed a nonsignificant risk ratio of 1.35 (95% confidence interval: 0.78-2.33). The effectiveness of cauterization in preventing recurrence seems to be comparable to surgical treatment. However, we suggest endoscopic cauterization as the treatment of choice for third and fourth branchial pouch sinuses because of the lower morbidity rate. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.
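The pooled risk ratio and confidence interval quoted above come from a standard log-scale computation; a minimal sketch for a single 2x2 comparison (the counts in the test are made up, not taken from the review):

```python
import math

def risk_ratio(events_a, n_a, events_b, n_b):
    """Risk ratio of group A vs group B with a 95% CI computed on the
    log scale (standard large-sample approximation)."""
    rr = (events_a / n_a) / (events_b / n_b)
    se_log = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = rr * math.exp(-1.96 * se_log)
    hi = rr * math.exp(1.96 * se_log)
    return rr, lo, hi
```

A confidence interval that spans 1.0 corresponds to a nonsignificant ratio, as with the 1.35 (0.78-2.33) result reported above.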
An Analysis of the Effects Housing Improvements Have on the Retention of Air Force Personnel
1990-09-01
[OCR-garbled excerpt: survey items on the availability of recreational facilities for preteens and the convenience of residence to play yards/playgrounds, followed by a cumulative frequency table for survey item HSAT25; the underlying text and figures are not recoverable.]
ERIC Educational Resources Information Center
Merrell, Wayne L.; Zeimet, Denis E.
1994-01-01
Lists 15 principles for working safely with equipment. Describes phases of an ergonomic hazards program to identify and prevent problems and causes of cumulative trauma disorders in the workplace. (SK)
A new approach for computing a flood vulnerability index using cluster analysis
NASA Astrophysics Data System (ADS)
Fernandez, Paulo; Mourato, Sandra; Moreira, Madalena; Pereira, Luísa
2016-08-01
A Flood Vulnerability Index (FloodVI) was developed using Principal Component Analysis (PCA) and a new aggregation method based on Cluster Analysis (CA). PCA simplifies a large number of variables into a few uncorrelated factors representing the social, economic, physical and environmental dimensions of vulnerability. CA groups areas that have the same characteristics in terms of vulnerability into vulnerability classes, so the grouping of the areas determines their classification, contrary to other aggregation methods in which the areas' classification determines their grouping. Whereas other aggregation methods distribute the areas into classes artificially, by imposing a certain probability for an area to belong to a certain class under the assumption that the aggregation measure is normally distributed, CA does not constrain the distribution of the areas by the classes. FloodVI was designed at the neighbourhood level and was applied to the Portuguese municipality of Vila Nova de Gaia, where several flood events have taken place in the recent past. The FloodVI sensitivity was assessed using three different aggregation methods: the sum of component scores, the first component score and the weighted sum of component scores. The results highlight the sensitivity of the FloodVI to different aggregation methods. The sum of component scores and the weighted sum of component scores showed similar results; the first component score aggregation method classifies almost all areas as having medium vulnerability, whereas the results obtained using CA show a distinct differentiation of the vulnerability in which hot spots can be clearly identified. The information provided by records of previous flood events corroborates the results obtained with CA, because the inundated areas with greater damages are those identified as high and very high vulnerability areas by CA. This supports the fact that CA provides a reliable FloodVI.
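The key difference claimed for CA, that classes emerge from the grouping rather than from preset cut-offs, can be illustrated with a toy one-dimensional k-means on hypothetical vulnerability scores (the real FloodVI uses multivariate cluster analysis, so this is only a sketch):

```python
def cluster_1d(scores, k=3, iters=50):
    """Toy 1-D k-means: class boundaries fall where the data are sparse
    instead of at fixed quantiles, which is the CA-style behaviour."""
    lo, hi = min(scores), max(scores)
    # spread the initial centers evenly over the observed range
    centers = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for s in scores:
            # assign each score to its nearest center
            groups[min(range(k), key=lambda i: abs(s - centers[i]))].append(s)
        # move each center to the mean of its group (keep it if empty)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups
```

On scores with three natural clumps, the three classes recover the clumps rather than splitting them at arbitrary thresholds.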
Rasch Analysis of the General Self-Efficacy Scale in Workers with Traumatic Limb Injuries.
Wu, Tzu-Yi; Yu, Wan-Hui; Huang, Chien-Yu; Hou, Wen-Hsuan; Hsieh, Ching-Lin
2016-09-01
Purpose The purpose of this study was to apply Rasch analysis to examine the unidimensionality and reliability of the General Self-Efficacy Scale (GSE) in workers with traumatic limb injuries. Furthermore, if the items of the GSE fitted the Rasch model's assumptions, we transformed the raw sum ordinal scores of the GSE into Rasch interval scores. Methods A total of 1076 participants completed the GSE at 1 month post injury. Rasch analysis was used to examine the unidimensionality and person reliability of the GSE. The unidimensionality of the GSE was verified by determining whether the items fit the Rasch model's assumptions: (1) item fit indices: infit and outfit mean square (MNSQ) ranged from 0.6 to 1.4; and (2) the eigenvalue of the first factor extracted from principal component analysis (PCA) for residuals was <2. Person reliability was calculated. Results The unidimensionality of the 10-item GSE was supported in terms of good item fit statistics (infit and outfit MNSQ ranging from 0.92 to 1.32) and acceptable eigenvalues (1.6) of the first factor of the PCA, with person reliability = 0.89. Consequently, the raw sum scores of the GSE were transformed into Rasch scores. Conclusions The results indicated that the items of GSE are unidimensional and have acceptable person reliability in workers with traumatic limb injuries. Additionally, the raw sum scores of the GSE can be transformed into Rasch interval scores for prospective users to quantify workers' levels of self-efficacy and to conduct further statistical analyses.
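At the core of a Rasch analysis is a logistic relation between a person's latent trait and an item's difficulty on a shared logit scale. A minimal dichotomous sketch follows; the GSE items are polytomous, so this is a simplification of the model actually fitted:

```python
import math

def rasch_prob(ability, difficulty):
    """Dichotomous Rasch model: probability of endorsing an item,
    given person ability and item difficulty on the same logit scale."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))
```

When ability equals difficulty the probability is exactly 0.5; it is this logit parameterization that lets raw ordinal sums be transformed into interval-level Rasch scores.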
NASA Astrophysics Data System (ADS)
Maguen, Ezra I.; Papaioannou, Thanassis; Nesburn, Anthony B.; Salz, James J.; Warren, Cathy; Grundfest, Warren S.
1996-05-01
Multivariable regression analysis was used to evaluate the combined effects of some preoperative and operative variables on the change of refraction following excimer laser photorefractive keratectomy for myopia (PRK). This analysis was performed on 152 eyes (at 6 months postoperatively) and 156 eyes (at 12 months postoperatively). The following variables were considered: intended refractive correction, patient age, treatment zone, central corneal thickness, average corneal curvature, and intraocular pressure. At 6 months after surgery, the cumulative R2 was 0.43 with 0.38 attributed to the intended correction and 0.06 attributed to the preoperative corneal curvature. At 12 months, the cumulative R2 was 0.37 where 0.33 was attributed to the intended correction, 0.02 to the preoperative corneal curvature, and 0.01 to both preoperative corneal thickness and to the patient age. Further model augmentation is necessary to account for the remaining variability and the behavior of the residuals.
Sahiner, Ilgin; Akdemir, Umit O; Kocaman, Sinan A; Sahinarslan, Asife; Timurkaynak, Timur; Unlu, Mustafa
2013-02-01
Myocardial perfusion SPECT (MPS) is a noninvasive method commonly used for assessment of the hemodynamic significance of intermediate coronary stenoses. Fractional flow reserve (FFR) measurement is a well-validated invasive method used for the evaluation of intermediate stenoses. We aimed to determine the association between MPS and FFR findings in intermediate degree stenoses and evaluate the added value of quantification in MPS. Fifty-eight patients who underwent intracoronary pressure measurement in the catheterization laboratory to assess the physiological significance of intermediate (40-70%) left anterior descending (LAD) artery lesions, and who also underwent stress myocardial perfusion SPECT either for the assessment of an intermediate stenosis or for suspected coronary artery disease were analyzed retrospectively in the study. Quantitative analysis was performed using the 4DMSPECT program, with visual assessment performed by two experienced nuclear medicine physicians blinded to the angiographic findings. Summed stress scores (SSS) and summed difference scores (SDS) in the LAD artery territory according to the 20 segment model were calculated. A summed stress score of ≥ 3 and an SDS of ≥ 2 were assumed as pathologic, indicating significance of the lesion; a cutoff value of 0.75 was used to define abnormal FFR. Both visual and quantitative assessment results were compared with FFR using Chi-square (χ²) test. The mean time interval between two studies was 13 ± 11 days. FFR was normal in 45 and abnormal in 13 patients. Considering the FFR results as the gold standard method for assessing the significance of the lesion, the sensitivity and specificity of quantitative analysis determining the abnormal flow reserve were 85 and 84%, respectively, while visual analysis had a sensitivity of 77% and a specificity of 51%. There was a good agreement between the observers (κ = 0.856). 
Summed stress and difference scores demonstrated moderate inverse correlations with FFR values (r = -0.542, p < 0.001 and r = -0.506, p < 0.001, respectively). Quantitative analysis of the myocardial perfusion SPECT increases the specificity in evaluating the significance of intermediate degree coronary lesions.
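With FFR treated as the gold standard, the sensitivity and specificity quoted above reduce to simple counts over the 2x2 classification; a sketch with made-up labels:

```python
def sens_spec(test_abnormal, gold_abnormal):
    """Sensitivity and specificity of a test against a gold standard,
    both given as parallel lists of booleans (True = abnormal)."""
    pairs = list(zip(test_abnormal, gold_abnormal))
    tp = sum(t and g for t, g in pairs)          # true positives
    tn = sum(not t and not g for t, g in pairs)  # true negatives
    fn = sum(not t and g for t, g in pairs)      # missed abnormals
    fp = sum(t and not g for t, g in pairs)      # false alarms
    return tp / (tp + fn), tn / (tn + fp)
```

The study's contrast between quantitative (85%/84%) and visual (77%/51%) reading is exactly a difference in these two ratios on the same FFR reference.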
Factor Analysis for Clustered Observations.
ERIC Educational Resources Information Center
Longford, N. T.; Muthen, B. O.
1992-01-01
A two-level model for factor analysis is defined, and formulas for a scoring algorithm for this model are derived. A simple noniterative method based on decomposition of total sums of the squares and cross-products is discussed and illustrated with simulated data and data from the Second International Mathematics Study. (SLD)
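The noniterative method mentioned rests on splitting total sums of squares into between-cluster and within-cluster parts; a sketch of that decomposition for a single variable (the paper's method extends this to cross-products):

```python
def ss_decompose(groups):
    """Split the total sum of squares of clustered observations into
    between-group and within-group components; they add up exactly."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    within = sum(sum((v - sum(g) / len(g)) ** 2 for v in g) for g in groups)
    total = sum((v - grand) ** 2 for v in all_vals)
    return between, within, total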
Worm, Margitta; Higenbottam, Tim; Pfaar, Oliver; Mösges, Ralph; Aberer, Werner; Gunawardena, Kulasiri; Wessiepe, Dorothea; Lee, Denise; Kramer, Matthias F; Skinner, Murray; Lees, Bev; Zielen, Stefan
2018-05-19
The Birch Allergoid, Tyrosine Adsorbate, Monophosphoryl Lipid A (POLLINEX® Quattro Plus 1.0 ml Birch 100%) is an effective, well-tolerated short-course subcutaneous immunotherapy. We performed two phase II studies to determine its optimal cumulative dose. The studies were conducted in Germany, Austria and Poland (EudraCT numbers: 2012-004336-28 PQBirch203 and 2015-000984-15 PQBirch204) using a wide range of cumulative doses. In both studies, subjects were administered 6 therapy injections weekly outside the pollen season. Conjunctival provocation tests were performed at screening, baseline and 3-4 weeks after completing treatment, to quantify the reduction of Total Symptom Scores (the primary endpoint) with each cumulative dose. Multiple Comparison Procedure and Modelling analysis was used to test for the dose-response, the shape of the curve, and estimation of the median effective dose (ED50), a measure of potency. Statistically significant dose-responses were seen in the two studies (p < 0.01 and p < 0.001, respectively). The highest cumulative dose in PQBirch204 (27300 standardised units [SU]) approached a plateau. Potency of the PQ Birch was demonstrated by an ED50 of 2723 SU, just over half the current dose. Prevalence of treatment-emergent adverse events was similar for the active doses, most being short-lived and mild. Compliance was over 85% in all groups. Increasing the cumulative dose of PQ Birch 5.5-fold, from 5100 to 27300 SU, achieved an absolute point difference from placebo of 1.91, a relative difference of 32.3% and an increase in efficacy of 50%, without compromising safety. The cumulative dose-response was confirmed to be curvilinear in shape. This article is protected by copyright. All rights reserved.
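A curvilinear cumulative dose-response with a defined ED50 is naturally expressed with an Emax model, one of the candidate shapes typically considered in MCP-Mod analyses. The sketch below uses the reported ED50 of 2723 SU, but the placebo effect and maximum effect are invented for illustration:

```python
def emax_response(dose, e0=0.0, emax=4.0, ed50=2723.0):
    """Emax dose-response: the effect rises hyperbolically toward
    e0 + emax, reaching half the maximum gain at dose == ed50.
    e0 and emax here are hypothetical values, not study estimates."""
    return e0 + emax * dose / (ed50 + dose)
```

The model makes the plateau behaviour explicit: going from 5100 to 27300 SU adds much less effect than the first 5100 SU did.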
System approach to the analysis of an integrated oxy-fuel combustion power plant
NASA Astrophysics Data System (ADS)
Ziębik, Andrzej; Gładysz, Paweł
2014-09-01
Oxy-fuel combustion (OFC) is one of the three commonly known clean coal technologies for the power generation sector and other industry sectors responsible for CO2 emissions (e.g., steel or cement production). The OFC capture technology is based on using high-purity oxygen in the combustion process instead of atmospheric air, so the flue gases have a high concentration of CO2. Because of the limited adiabatic temperature of combustion, some part of the CO2 must be recycled to the boiler in order to maintain a proper flame temperature. An integrated oxy-fuel combustion power plant constitutes a system consisting of the following technological modules: boiler, steam cycle, air separation unit, cooling water and water treatment system, flue gas quality control system and CO2 processing unit. Due to the interconnections between technological modules, energy, exergy and ecological analyses require a system approach. The paper presents a system approach, based on the 'input-output' method, to the analysis of direct energy and material consumption, cumulative energy and exergy consumption, system (local and cumulative) exergy losses, and thermoecological cost. Other measures, like the cumulative degree of perfection or an index of sustainable development, are also proposed. The paper presents a complex example of the system analysis (from direct energy consumption to thermoecological cost) of an advanced integrated OFC power plant.
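In the 'input-output' approach, each module's cumulative intensity includes the cumulative intensities of the modules feeding it, giving a linear balance that can be solved by fixed-point iteration. The sketch below is a toy version with invented coefficients, not the paper's plant model:

```python
def cumulative_intensities(direct, A, iters=200):
    """Solve e_j = direct_j + sum_i A[i][j] * e_i by fixed-point
    iteration, where A[i][j] is the amount of product i consumed per
    unit output of module j (toy 'input-output' balance)."""
    n = len(direct)
    e = list(direct)
    for _ in range(iters):
        e = [direct[j] + sum(A[i][j] * e[i] for i in range(n))
             for j in range(n)]
    return e
```

For a single module that consumes 0.5 units of its own product per unit output, a direct intensity of 1 gives a cumulative intensity of 2, which is the closed-form result 1 / (1 - 0.5).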
Tools for Basic Statistical Analysis
NASA Technical Reports Server (NTRS)
Luz, Paul L.
2005-01-01
Statistical Analysis Toolset is a collection of eight Microsoft Excel spreadsheet programs, each of which performs calculations pertaining to an aspect of statistical analysis. These programs present input and output data in user-friendly, menu-driven formats, with automatic execution. The following types of calculations are performed: Descriptive statistics are computed for a set of data x(i) (i = 1, 2, 3 . . . ) entered by the user. Normal Distribution Estimates will calculate the statistical value that corresponds to cumulative probability values, given a sample mean and standard deviation of the normal distribution. Normal Distribution from two Data Points will extend and generate a cumulative normal distribution for the user, given two data points and their associated probability values. Two programs perform two-way analysis of variance (ANOVA) with no replication or generalized ANOVA for two factors with four levels and three repetitions. Linear Regression-ANOVA will curvefit data to the linear equation y=f(x) and will do an ANOVA to check its significance.
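The "Normal Distribution from two Data Points" calculation described above can be reproduced with the Python standard library: given two (value, cumulative probability) pairs, the mean and standard deviation follow directly from the standard-normal quantiles. This is a sketch of the same computation, not the spreadsheet's code:

```python
from statistics import NormalDist

def normal_from_two_points(x1, p1, x2, p2):
    """Recover (mean, sd) of a normal distribution from two
    (value, cumulative-probability) points."""
    z1, z2 = NormalDist().inv_cdf(p1), NormalDist().inv_cdf(p2)
    sd = (x2 - x1) / (z2 - z1)
    return x1 - z1 * sd, sd
```

The companion "Normal Distribution Estimates" calculation is the one-liner `NormalDist(mean, sd).inv_cdf(p)`.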
Sun, Jianguo; Feng, Yanqin; Zhao, Hui
2015-01-01
Interval-censored failure time data occur in many fields including epidemiological and medical studies as well as financial and sociological studies, and many authors have investigated their analysis (Sun, The statistical analysis of interval-censored failure time data, 2006; Zhang, Stat Modeling 9:321-343, 2009). In particular, a number of procedures have been developed for regression analysis of interval-censored data arising from the proportional hazards model (Finkelstein, Biometrics 42:845-854, 1986; Huang, Ann Stat 24:540-568, 1996; Pan, Biometrics 56:199-203, 2000). For most of these procedures, however, one drawback is that they involve estimation of both regression parameters and baseline cumulative hazard function. In this paper, we propose two simple estimation approaches that do not need estimation of the baseline cumulative hazard function. The asymptotic properties of the resulting estimates are given, and an extensive simulation study is conducted and indicates that they work well for practical situations.
Cumulative impact assessment: Application of a methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Witmer, G.W.; Bain, M.B.; Irving, J.S.
We expanded upon the Federal Energy Regulatory Commission's (FERC) Cluster Impact Assessment Procedure (CIAP) to provide a practical methodology for assessing potential cumulative impacts from multiple hydroelectric projects within a river basin. The objectives in designing the methodology were to allow the evaluation of a large number of combinations of proposed projects and to minimize constraints on the use of ecological knowledge for planning and regulating hydroelectric development at the river basin level. Interactive workshops and evaluative matrices were used to identify preferred development scenarios in the Snohomish (Washington) and Salmon (Idaho) River Basins. Although the methodology achieved its basic objectives, some difficulties were encountered. These revolved around issues of (1) data quality and quantity, (2) alternatives analysis, (3) determination of project interactions, (4) determination of cumulative impact thresholds, and (5) the use of evaluative techniques to express degrees of impact. 8 refs., 1 fig., 2 tabs.
NASA Astrophysics Data System (ADS)
Abdul Hakeem, Z.; Noorsuhada, M. N.; Azmi, I.; Noor Syafeekha, M. S.; Soffian Noor, M. S.
2017-12-01
In this study, steel fibre reinforced concrete (SFRC) beams strengthened with carbon fibre reinforced polymer (CFRP) were investigated using the acoustic emission (AE) technique. Three beams, each 150 mm wide, 200 mm deep and 1500 mm long, were fabricated. The results generated from AE parameters were analysed, including signal strength and cumulative signal strength. Three relationships were produced, namely load versus deflection, signal strength versus time and cumulative signal strength versus time. Each relationship indicates significant physical behaviour as cracks propagate in the beams. It was found that the addition of steel fibre to the concrete mix and strengthening with CFRP increase the ultimate load of the beam and the activity of signal strength. Moreover, the highest signal strength generated can be identified. From the study, the occurrence of cracks in the beam can be predicted using AE signal strength.
The Validation of a Case-Based, Cumulative Assessment and Progressions Examination
Coker, Adeola O.; Copeland, Jeffrey T.; Gottlieb, Helmut B.; Horlen, Cheryl; Smith, Helen E.; Urteaga, Elizabeth M.; Ramsinghani, Sushma; Zertuche, Alejandra; Maize, David
2016-01-01
Objective. To assess content and criterion validity, as well as reliability of an internally developed, case-based, cumulative, high-stakes third-year Annual Student Assessment and Progression Examination (P3 ASAP Exam). Methods. Content validity was assessed through the writing-reviewing process. Criterion validity was assessed by comparing student scores on the P3 ASAP Exam with the nationally validated Pharmacy Curriculum Outcomes Assessment (PCOA). Reliability was assessed with psychometric analysis comparing student performance over four years. Results. The P3 ASAP Exam showed content validity through representation of didactic courses and professional outcomes. Similar scores on the P3 ASAP Exam and PCOA with Pearson correlation coefficient established criterion validity. Consistent student performance using Kuder-Richardson coefficient (KR-20) since 2012 reflected reliability of the examination. Conclusion. Pharmacy schools can implement internally developed, high-stakes, cumulative progression examinations that are valid and reliable using a robust writing-reviewing process and psychometric analyses. PMID:26941435
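The KR-20 coefficient used above to gauge reliability is a short formula over dichotomously scored items; a sketch on made-up responses (the exam's actual psychometric software is not specified):

```python
def kr20(responses):
    """Kuder-Richardson 20 reliability for dichotomous (0/1) items.
    responses: one list of item scores per examinee."""
    n = len(responses)
    k = len(responses[0])
    totals = [sum(r) for r in responses]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n  # population variance
    p = [sum(r[j] for r in responses) / n for j in range(k)]  # item difficulty
    pq = sum(pj * (1 - pj) for pj in p)                       # sum of item variances
    return k / (k - 1) * (1 - pq / var_t)
```

Values near 1 indicate that examinees are ranked consistently across items; values around 0.7-0.8 are commonly treated as acceptable for classroom-style examinations.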
Ji, John S.; Schwartz, Joel; Sparrow, David; Hu, Howard; Weisskopf, Marc G.
2014-01-01
Objectives. To examine the relation between occupation and cumulative lead exposure—assessed by measuring bone lead—in a community-dwelling population. Methods. We measured bone lead concentration with K-shell X-ray fluorescence in 1,320 men in the Normative Aging Study. We categorized job titles into 14 broad US Census Bureau categories. We used ordinary least squares regression to estimate bone lead by job categories, adjusted for other predictors. Results. Service Workers, Construction and Extractive Craft Workers, and Installation, Maintenance and Repair Craft Workers had the highest bone lead concentrations. Including occupations significantly improved the overall model (p<0.001) and reduced by 15% to 81% the association between bone lead and education categories. Conclusion. Occupation significantly predicts cumulative lead exposure in a community-dwelling population, and accounts for a large proportion of the association between education and bone lead. PMID:24709766
[Relationships between horqin meadow NDVI and meteorological factors].
Qu, Cui-ping; Guan, De-xin; Wang, An-zhi; Jin, Chang-jie; Wu, Jia-bing; Wang, Ji-jun; Ni, Pan; Yuan, Feng-hui
2009-01-01
Based on the 2000-2006 MODIS 8-day composite NDVI and day-by-day meteorological data, the seasonal and inter-annual variations of Horqin meadow NDVI as well as the relationships between the NDVI and relevant meteorological factors were studied. The results showed that as for the seasonal variation, Horqin meadow NDVI was more related to water vapor pressure than to precipitation. Cumulated temperature and cumulated precipitation together affected the inter-annual turning-green period significantly, and the precipitation in growth season (June and July), compared with that in whole year, had more obvious effects on the annual maximal NDVI. The analysis of time lag effect indicated that water vapor pressure had a persistent (about 12 days) prominent effect on the NDVI. The time lag effect of mean air temperature was 11-15 days, and the cumulated dual effect of the temperature and precipitation was 36-52 days.
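The time-lag analysis described above amounts to correlating NDVI with a meteorological driver shifted back in time and finding the lag with the strongest correlation. The sketch below runs on synthetic series; the study itself used the 8-day composites and daily station data:

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def best_lag(driver, response, max_lag):
    """Correlate response[t] with driver[t - lag] for each candidate lag
    and return the lag with the highest correlation plus all scores."""
    scores = {}
    for lag in range(max_lag + 1):
        d = driver[:len(driver) - lag] if lag else driver
        scores[lag] = pearson(d, response[lag:])
    return max(scores, key=scores.get), scores
```

If the response is simply the driver delayed by two samples, the method recovers a best lag of 2 with correlation 1.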
Seror, Valerie
2008-05-01
Choices regarding prenatal diagnosis of Down syndrome - the most frequent chromosomal defect - are particularly relevant to decision analysis, since women's decisions are based on the assessment of their risk of carrying a child with Down syndrome, and involve tradeoffs (giving birth to an affected child vs procedure-related miscarriage). The aim of this study, based on face-to-face interviews with 78 women aged 25-35 with prior experience of pregnancy, was to compare the women's expressed choices towards prenatal diagnosis with those derived from theoretical models of choice (expected utility theory, rank-dependent theory, and cumulative prospect theory). The main finding obtained in this study was that the cumulative prospect model fitted the observed choices best: both subjective transformation of probabilities and loss aversion, which are basic features of the cumulative prospect model, have to be taken into account to make the observed choices consistent with the theoretical ones.
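The two features that made cumulative prospect theory fit best, probability weighting and loss aversion, can be sketched with the standard Tversky-Kahneman parameterisation. The parameter values below are their published population estimates, not ones fitted to this study, and the binary-prospect formula is a simplification (full CPT uses separate weighting functions for gains and losses):

```python
def weight(p, gamma=0.61):
    """Inverse-S probability weighting: small probabilities are
    overweighted, moderate-to-large ones underweighted."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def value(x, alpha=0.88, lam=2.25):
    """S-shaped value function with loss aversion: losses loom
    roughly lam times larger than equivalent gains."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def cpt_binary(p, gain, loss):
    """CPT value of a simple mixed prospect: gain with prob p, loss otherwise."""
    return weight(p) * value(gain) + weight(1 - p) * value(loss)
```

Loss aversion makes an even-odds mixed gamble unattractive (negative CPT value), which is the mechanism invoked to explain the observed tradeoff between an affected birth and a procedure-related miscarriage.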
McAuley, Andrew; Bouttell, Janet; Barnsdale, Lee; Mackay, Daniel; Lewsey, Jim; Hunter, Carole; Robinson, Mark
2017-02-01
It has been suggested that distributing naloxone to people who inject drugs (PWID) will lead to fewer attendances by emergency medical services at opioid-related overdose incidents if peer administration of naloxone was perceived to have resuscitated the overdose victim successfully. This study evaluated the impact of a national naloxone programme (NNP) on ambulance attendance at opioid-related overdose incidents throughout Scotland. Specifically, we aimed to answer the following research questions: is there evidence of an association between ambulance call-outs to opioid-related overdose incidents and the cumulative number of 'take-home naloxone' (THN) kits in issue; and is there evidence of an association between ambulance call-outs to opioid-related overdose incidents in early adopter (pilot) or later adopting (non-pilot) regions and the cumulative number of THN kits issued in those areas? Controlled time-series analysis. Scotland, UK, 2008-15. Pre-NNP implementation period for the evaluation was defined as 1 April 2008 to 31 March 2011 and the post-implementation period as 1 April 2011 to 31 March 2015. In total, 3721 ambulance attendances at opioid-related overdose were recorded for the pre-NNP implementation period across 158 weeks (mean 23.6 attendances per week) and 5258 attendances across 212 weeks in the post-implementation period (mean 24.8 attendances per week). Scotland's NNP; formally implemented on 1 April 2011. Primary outcome measure was weekly incidence (counts) of call-outs to opioid-related overdoses at national and regional Health Board level. Data were acquired from the Scottish Ambulance Service (SAS). Models were adjusted for opioid replacement therapy using data acquired from the Information Services Division on monthly sums of all dispensed methadone and buprenorphine in the study period. 
Models were adjusted further for a control group: weekly incidence (counts) of call-outs to heroin-related overdose in the London Borough area acquired from the London Ambulance Service. There was no significant association between SAS call-outs to opioid-related overdose incidents and THN kits in issue for Scotland as a whole (coefficient 0.009, 95% confidence intervals = -0.01, 0.03, P = 0.39). In addition, the magnitude of association between THN kits and SAS call-outs did not differ significantly between pilot and non-pilot regions (interaction test, P = 0.62). The supply of take-home naloxone kits through a National Naloxone Programme in Scotland was not associated clearly with a decrease in ambulance attendance at opioid-related overdose incidents in the 4-year period after it was implemented in April 2011. © 2016 The Authors. Addiction published by John Wiley & Sons Ltd on behalf of Society for the Study of Addiction.
Bouttell, Janet; Barnsdale, Lee; Mackay, Daniel; Lewsey, Jim; Hunter, Carole; Robinson, Mark
2016-01-01
Abstract Background and Aims It has been suggested that distributing naloxone to people who inject drugs (PWID) will lead to fewer attendances by emergency medical services at opioid‐related overdose incidents if peer administration of naloxone was perceived to have resuscitated the overdose victim successfully. This study evaluated the impact of a national naloxone programme (NNP) on ambulance attendance at opioid‐related overdose incidents throughout Scotland. Specifically, we aimed to answer the following research questions: is there evidence of an association between ambulance call‐outs to opioid‐related overdose incidents and the cumulative number of ‘take‐home naloxone’ (THN) kits in issue; and is there evidence of an association between ambulance call‐outs to opioid‐related overdose incidents in early adopter (pilot) or later adopting (non‐pilot) regions and the cumulative number of THN kits issued in those areas? Design Controlled time–series analysis. Setting Scotland, UK, 2008–15. Participants Pre‐NNP implementation period for the evaluation was defined as 1 April 2008 to 31 March 2011 and the post‐implementation period as 1 April 2011 to 31 March 2015. In total, 3721 ambulance attendances at opioid‐related overdose were recorded for the pre‐NNP implementation period across 158 weeks (mean 23.6 attendances per week) and 5258 attendances across 212 weeks in the post‐implementation period (mean 24.8 attendances per week). Intervention Scotland's NNP; formally implemented on 1 April 2011. Measurements Primary outcome measure was weekly incidence (counts) of call‐outs to opioid‐related overdoses at national and regional Health Board level. Data were acquired from the Scottish Ambulance Service (SAS). Models were adjusted for opioid replacement therapy using data acquired from the Information Services Division on monthly sums of all dispensed methadone and buprenorphine in the study period. 
Models were adjusted further for a control group: weekly incidence (counts) of call‐outs to heroin‐related overdose in the London Borough area acquired from the London Ambulance Service. Findings There was no significant association between SAS call‐outs to opioid‐related overdose incidents and THN kits in issue for Scotland as a whole (coefficient 0.009, 95% confidence intervals = −0.01, 0.03, P = 0.39). In addition, the magnitude of association between THN kits and SAS call‐outs did not differ significantly between pilot and non‐pilot regions (interaction test, P = 0.62). Conclusions The supply of take‐home naloxone kits through a National Naloxone Programme in Scotland was not associated clearly with a decrease in ambulance attendance at opioid‐related overdose incidents in the 4‐year period after it was implemented in April 2011. PMID:27614084
Serrano-Fernandez, Pablo; Dymerska, Dagmara; Kurzawski, Grzegorz; Derkacz, Róża; Sobieszczańska, Tatiana; Banaszkiewicz, Zbigniew; Roomere, Hanno; Oitmaa, Eneli; Metspalu, Andres; Janavičius, Ramūnas; Elsakov, Pavel; Razumas, Mindaugas; Petrulis, Kestutis; Irmejs, Arvīds; Miklaševičs, Edvīns; Scott, Rodney J.; Lubiński, Jan
2015-01-01
The continued identification of new low-penetrance genetic variants for colorectal cancer (CRC) raises the question of their potential cumulative effect among compound carriers. We focused on 6 SNPs (rs380284, rs4464148, rs4779584, rs4939827, rs6983267, and rs10795668), already described as risk markers, and tested their possible independent and combined contribution to CRC predisposition. Material and Methods. DNA was collected and genotyped from 2330 unselected consecutive CRC cases and controls from Estonia (166 cases and controls), Latvia (81 cases and controls), Lithuania (123 cases and controls), and Poland (795 cases and controls). Results. Beyond individual effects, the analysis revealed statistically significant linear cumulative effects for these 6 markers for all samples except the Latvian one (corrected P value = 0.018 for the Estonian, corrected P value = 0.0034 for the Lithuanian, and corrected P value = 0.0076 for the Polish sample). Conclusions. The significant linear cumulative effects demonstrated here support the idea of using sets of low-risk markers for delimiting new groups with high-risk of CRC in clinical practice that are not carriers of the usual CRC high-risk markers. PMID:26101521
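The compound-carrier idea, risk growing with the number of low-penetrance markers carried, can be illustrated by tallying case/control status against the count of risk genotypes. The data in the test are invented; the study itself assessed the trend with corrected statistical tests:

```python
from collections import defaultdict

def tally_by_marker_count(subjects):
    """subjects: (risk_genotypes, is_case) pairs, where risk_genotypes
    is a list of 0/1 flags for the 6 SNPs.  Returns, for each number of
    risk genotypes carried, the fraction of subjects who are cases."""
    counts = defaultdict(lambda: [0, 0])  # n_markers -> [cases, total]
    for genotypes, is_case in subjects:
        bucket = counts[sum(genotypes)]
        bucket[0] += is_case
        bucket[1] += 1
    return {k: cases / total for k, (cases, total) in sorted(counts.items())}
```

A monotonically increasing case fraction across marker counts is the pattern the reported linear cumulative effect formalizes.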