Broadband Heating Rate Profile Project (BBHRP) - SGP ripbe370mcfarlane
Riihimaki, Laura; Shippert, Timothy
2014-11-05
The objective of the ARM Broadband Heating Rate Profile (BBHRP) Project is to provide a structure for the comprehensive assessment of our ability to model atmospheric radiative transfer for all conditions. Required inputs to BBHRP include surface albedo and profiles of atmospheric state (temperature, humidity), gas concentrations, aerosol properties, and cloud properties. In the past year, the Radiatively Important Parameters Best Estimate (RIPBE) VAP was developed to combine all of the input properties needed for BBHRP into a single gridded input file. Additionally, an interface between the RIPBE input file and the RRTM was developed using the new ARM integrated software development environment (ISDE) and effort was put into developing quality control (qc) flags and provenance information on the BBHRP output files so that analysis of the output would be more straightforward. This new version of BBHRP, sgp1bbhrpripbeC1.c1, uses the RIPBE files as input to RRTM, and calculates broadband SW and LW fluxes and heating rates at 1-min resolution using the independent column approximation. The vertical resolution is 45 m in the lower and middle troposphere to match the input cloud properties, but is at coarser resolution in the upper atmosphere. Unlike previous versions, the vertical grid is the same for both clear-sky and cloudy-sky calculations.
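The broadband heating rates this record describes follow from the vertical divergence of the net radiative flux. A minimal sketch of that conversion in Python (hypothetical pressure and net-flux profiles; the actual RRTM interface and RIPBE inputs are not modeled) might look like:

```python
import numpy as np

# Heating rate from flux divergence in pressure coordinates:
#   dT/dt = -(g / c_p) * dF_net/dp,
# with F_net the net downward flux and levels ordered surface to top.
G = 9.81      # gravitational acceleration, m s^-2
CP = 1004.0   # specific heat of dry air, J kg^-1 K^-1

def heating_rate(p_levels, f_net):
    """Layer heating rates (K/day) from net downward flux (W/m^2)
    on pressure levels (Pa), ordered surface to top."""
    dF = np.diff(f_net)
    dp = np.diff(p_levels)
    rate = -(G / CP) * (dF / dp)   # K/s per layer
    return rate * 86400.0          # convert to K/day

# Illustrative 5-level profile (Pa) and net downward fluxes (W/m^2)
p = np.array([100000.0, 85000.0, 70000.0, 50000.0, 30000.0])
f = np.array([120.0, 118.0, 113.0, 105.0, 95.0])
print(heating_rate(p, f))
```

With net downward flux decreasing toward the top of the column, every layer cools, as expected for a longwave-dominated profile.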
Waldinger, Marcel D; Venema, Pieter L; van Gils, Ad P G; Schweitzer, Dave H
2009-10-01
Systematic study of dysesthetic and paresthetic regions contributing to persistent genital arousal in women with restless genital syndrome (ReGS) is needed for its clinical management. To investigate distinct localizations of ReGS, twenty-three women fulfilling all five criteria of persistent genital arousal disorder were included in the study. In-depth interviews, routine and hormonal investigations, electroencephalography, and magnetic resonance imaging (MRI) of the brain and pelvis were performed in all women. The localizations of genital sensations were investigated by physical examination of the ramus inferior of the pubic bone (RIPB) and by sensory testing of the skin of the genital area with a cotton swab (genital tactile mapping, or GTM, test). Outcome measures were sensitivity of the RIPB and the GTM test. Of the 23 women included in the study, 18 (78%), 16 (69%), and 12 (52%) reported restless legs syndrome, overactive bladder syndrome, and urethral hypersensitivity, respectively. Intolerance of tight clothes and underwear (allodynia or hyperpathia) was reported by 19 (83%) women. All women were diagnosed with ReGS. Sitting aggravated ReGS in 20 (87%) women. MRI showed pelvic varices of varying degree in the vagina (91%), labia minora and/or majora (35%), and uterus (30%). Finger-touch investigation of the dorsal nerve of the clitoris (DNC) along the RIPB provoked ReGS in all women. Sensory testing showed unilateral and bilateral static mechanical hyperesthesia at various trigger points in the dermatome of the pudendal nerve, particularly the part innervated by the DNC, including the pelvic bone. In three women, sensory testing induced an uninhibited orgasm during physical examination. ReGS is highly associated with pelvic varices and with sensory neuropathy of the pudendal nerve and DNC, whose symptoms are suggestive of small fiber neuropathy (SFN).
Physical examination for static mechanical hyperesthesia is a diagnostic test for ReGS and is recommended for all individuals with complaints of persistent restless genital arousal in the absence of sexual desire.
ERIC Educational Resources Information Center
Finney, Sara J.; Sundre, Donna L.; Swain, Matthew S.; Williams, Laura M.
2016-01-01
Accountability mandates often prompt assessment of student learning gains (e.g., value-added estimates) via achievement tests. The validity of these estimates has been questioned when performance on tests is low stakes for students. To assess the effects of motivation on value-added estimates, we assigned students to one of three test consequence…
Value-Added Models for the Pittsburgh Public Schools
ERIC Educational Resources Information Center
Johnson, Matthew; Lipscomb, Stephen; Gill, Brian; Booker, Kevin; Bruch, Julie
2012-01-01
At the request of Pittsburgh Public Schools (PPS) and the Pittsburgh Federation of Teachers (PFT), Mathematica has developed value-added models (VAMs) that aim to estimate the contributions of individual teachers, teams of teachers, and schools to the achievement growth of their students. The authors' work in estimating value-added in Pittsburgh…
Sensitivity of Teacher Value-Added Estimates to Student and Peer Control Variables
ERIC Educational Resources Information Center
Johnson, Matthew T.; Lipscomb, Stephen; Gill, Brian
2015-01-01
Teacher value-added models (VAMs) must isolate teachers' contributions to student achievement to be valid. Well-known VAMs use different specifications, however, leaving policymakers with little clear guidance for constructing a valid model. We examine the sensitivity of teacher value-added estimates under different models based on whether they…
ERIC Educational Resources Information Center
Kersting, Nicole B.; Chen, Mei-kuang; Stigler, James W.
2013-01-01
If teacher value-added estimates (VAEs) are to be used as indicators of individual teacher performance in teacher evaluation and accountability systems, it is important to understand how much VAEs are affected by the data and model specifications used to estimate them. In this study we explored the effects of three conditions on the stability of…
ERIC Educational Resources Information Center
Goldhaber, Dan; Quince, Vanessa; Theobald, Roddy
2016-01-01
This policy brief reviews evidence about the extent to which disadvantaged students are taught by teachers with lower value-added estimates of performance, and seeks to reconcile differences in findings from different studies. We demonstrate that much of the inequity in teacher value added in Washington state is due to differences across different…
ERIC Educational Resources Information Center
Rothstein, Jesse
2009-01-01
Non-random assignment of students to teachers can bias value added estimates of teachers' causal effects. Rothstein (2008a, b) shows that typical value added models indicate large counter-factual effects of 5th grade teachers on students' 4th grade learning, indicating that classroom assignments are far from random. This paper quantifies the…
An Evaluation of Empirical Bayes's Estimation of Value-Added Teacher Performance Measures
ERIC Educational Resources Information Center
Guarino, Cassandra M.; Maxfield, Michelle; Reckase, Mark D.; Thompson, Paul N.; Wooldridge, Jeffrey M.
2015-01-01
Empirical Bayes's (EB) estimation has become a popular procedure used to calculate teacher value added, often as a way to make imprecise estimates more reliable. In this article, we review the theory of EB estimation and use simulated and real student achievement data to study the ability of EB estimators to properly rank teachers. We compare the…
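The EB shrinkage the abstract describes typically scales each teacher's raw mean residual toward the grand mean by its estimated reliability. A toy sketch (simulated scores; the variance components and class sizes are hypothetical, not the authors' data) is:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate residualized gain scores: true teacher effects plus student noise.
n_teachers, n_students = 50, 20
sigma_teacher, sigma_student = 0.20, 0.80   # hypothetical variance components
true_effects = rng.normal(0.0, sigma_teacher, n_teachers)
scores = true_effects[:, None] + rng.normal(
    0.0, sigma_student, (n_teachers, n_students))

raw = scores.mean(axis=1)   # raw (unshrunk) value-added per teacher

# EB posterior mean shrinks the raw mean toward the grand mean (0 here)
# by the reliability lambda = var(effect) / (var(effect) + var(noise)/n).
lam = sigma_teacher**2 / (sigma_teacher**2 + sigma_student**2 / n_students)
eb = lam * raw

print(lam)   # shrinkage factor; noisier teachers would get more shrinkage
```

Because the shrinkage factor is below one, EB estimates are never farther from the grand mean than the raw means they replace, which is the "more reliable but possibly mis-ranked" trade-off the paper studies.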
Value-Added Analysis and Education Policy. Brief 1
ERIC Educational Resources Information Center
Rivkin, Steven G.
2007-01-01
This brief describes estimation and measurement issues relevant to estimating the quality of instruction in the context of a cumulative model of learning. It also discusses implications for the use of value-added estimates in personnel and compensation matters. The discussion highlights the importance of accounting for student differences and the…
Can Value-Added Measures of Teacher Performance Be Trusted?
ERIC Educational Resources Information Center
Guarino, Cassandra M.; Reckase, Mark D.; Wooldridge, Jeffrey M.
2015-01-01
We investigate whether commonly used value-added estimation strategies produce accurate estimates of teacher effects under a variety of scenarios. We estimate teacher effects in simulated student achievement data sets that mimic plausible types of student grouping and teacher assignment scenarios. We find that no one method accurately captures…
Schuster, Cornelia; Brosi, Helen; Stifter, Katja; Boehm, Bernhard O.; Schirmbeck, Reinhold
2013-01-01
Coinhibitory PD-1/PD-L1 (B7-H1) interactions provide critical signals for the regulation of autoreactive T-cell responses. We established mouse models that express the costimulator molecule B7.1 (CD80) on pancreatic beta cells (RIP-B7.1 tg mice) or are deficient in coinhibitory PD-L1 or PD-1 molecules (PD-L1−/− and PD-1−/− mice) to study induction of preproinsulin (ppins)-specific CD8 T-cell responses and experimental autoimmune diabetes (EAD) by DNA-based immunization. RIP-B7.1 tg mice allowed us to identify two CD8 T-cell specificities: pCI/ppins DNA exclusively induced Kb/A12–21-specific CD8 T-cells and EAD, whereas pCI/ppinsΔA12–21 DNA (encoding ppins without the COOH-terminal A12–21 epitope) elicited Kb/B22–29-specific CD8 T-cells and EAD. Specific expression/processing of mutant ppinsΔA12–21 (but not ppins) in non-beta cells, targeted by intramuscular DNA-injection, thus facilitated induction of Kb/B22–29-specific CD8 T-cells. The A12–21 epitope binds Kb molecules with a very low avidity as compared with B22–29. Interestingly, immunization of coinhibition-deficient PD-L1−/− or PD-1−/− mice with pCI/ppins induced Kb/A12–21-monospecific CD8 T-cells and EAD, but injections with pCI/ppinsΔA12–21 neither recruited Kb/B22–29-specific CD8 T-cells into the pancreatic target tissue nor induced EAD. PpinsΔA12–21/(Kb/B22–29)-mediated EAD was efficiently restored in RIP-B7.1+/PD-L1−/− mice, differing from PD-L1−/− mice only in the tg B7.1 expression in beta cells. Alternatively, an ongoing beta cell destruction and tissue inflammation, initiated by ppins/(Kb/A12–21)-specific CD8 T-cells in pCI/ppins+pCI/ppinsΔA12–21 co-immunized PD-L1−/− mice, facilitated the expansion of ppinsΔA12–21/(Kb/B22–29)-specific CD8 T-cells. CD8 T-cells specific for the high-affinity Kb/B22–29 (but not the low-affinity Kb/A12–21) epitope thus require stimulatory 'help' from beta cells or inflamed islets to expand in PD-L1-deficient mice. 
The new PD-1/PD-L1 diabetes models may be valuable tools to study under well controlled experimental conditions distinct hierarchies of autoreactive CD8 T-cell responses, which trigger the initial steps of beta cell destruction or emerge during the pathogenic progression of EAD. PMID:23977133
Estimating added sugars in US consumer packaged goods: An application to beverages in 2007-08.
Ng, Shu Wen; Bricker, Gregory; Li, Kuo-Ping; Yoon, Emily Ford; Kang, Jiyoung; Westrich, Brian
2015-11-01
This study developed a method to estimate added sugar content in consumer packaged goods (CPG) that can keep pace with the dynamic food system. A team including registered dietitians, a food scientist and programmers developed a batch-mode ingredient matching and linear programming (LP) approach to estimate the amount of each ingredient needed in a given product to produce a nutrient profile similar to that reported on its nutrition facts label (NFL). Added sugar content was estimated for 7021 products available in 2007-08 that contain sugar from ten beverage categories. Of these, flavored waters had the lowest added sugar amounts (4.3g/100g), while sweetened dairy and dairy alternative beverages had the smallest percentage of added sugars (65.6% of Total Sugars; 33.8% of Calories). Estimation validity was determined by comparing LP estimated values to NFL values, as well as in a small validation study. LP estimates appeared reasonable compared to NFL values for calories, carbohydrates and total sugars, and performed well in the validation test; however, further work is needed to obtain more definitive conclusions on the accuracy of added sugar estimates in CPGs. As nutrition labeling regulations evolve, this approach can be adapted to test for potential product-specific, category-level, and population-level implications.
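The core idea of the approach — recover ingredient amounts whose combined nutrient contributions reproduce the nutrition facts label — can be illustrated with a tiny linear system. The authors use batch ingredient matching plus linear programming with additional constraints; this sketch uses ordinary least squares over a hypothetical two-ingredient beverage with made-up nutrient values:

```python
import numpy as np

# Per-gram nutrient contributions of two hypothetical ingredients
# (illustrative values, not from a real ingredient database).
#                      sugar  juice concentrate
nutrients = np.array([[3.87, 3.0],    # calories per gram
                      [1.00, 0.7]])   # total sugars, g per gram

# Per-serving label values: calories, total sugars (g)
label = np.array([107.4, 27.0])

# Solve for the ingredient amounts (grams) that reproduce the label profile.
amounts, *_ = np.linalg.lstsq(nutrients, label, rcond=None)
sugar_g, juice_g = amounts
print(sugar_g, juice_g)
```

A real implementation would add the constraints the paper relies on, such as non-negative amounts and ingredient-list ordering by weight, which is what motivates linear programming over plain least squares.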
Can Value-Added Measures of Teacher Performance Be Trusted? Working Paper #18
ERIC Educational Resources Information Center
Guarino, Cassandra M.; Reckase, Mark D.; Wooldridge, Jeffrey M.
2012-01-01
We investigate whether commonly used value-added estimation strategies can produce accurate estimates of teacher effects. We estimate teacher effects in simulated student achievement data sets that mimic plausible types of student grouping and teacher assignment scenarios. No one method accurately captures true teacher effects in all scenarios,…
ERIC Educational Resources Information Center
Guarino, Cassandra M.; Maxfield, Michelle; Reckase, Mark D.; Thompson, Paul; Wooldridge, Jeffrey M.
2014-01-01
Empirical Bayes' (EB) estimation is a widely used procedure to calculate teacher value-added. It is primarily viewed as a way to make imprecise estimates more reliable. In this paper we review the theory of EB estimation and use simulated data to study its ability to properly rank teachers. We compare the performance of EB estimators with that of…
The value added by sawmilling in the Appalachian hill country of Ohio and Kentucky
Orris D. McCauley; James C. Whittaker
1967-01-01
The difference between log costs and lumber values at 40 sawmills in the Appalachian hill country of Ohio and Kentucky provides an estimate of the value added by sawmill production. Based on these estimates, sawmilling contributed about $12.8 million to the region's economy in 1962.
ERIC Educational Resources Information Center
Ferrão, Maria Eugénia; Couto, Alcino Pinto
2014-01-01
This article focuses on the use of a value-added approach for promoting school improvement. It presents yearly value-added estimates, analyses their stability over time, and discusses the contribution of this methodological approach for promoting school improvement programmes in the Portuguese system of evaluation. The value-added model is applied…
Robustness of Value-Added Analysis of School Effectiveness. Research Report. ETS RR-08-22
ERIC Educational Resources Information Center
Braun, Henry; Qu, Yanxuan
2008-01-01
This paper reports on a study conducted to investigate the consistency of the results between 2 approaches to estimating school effectiveness through value-added modeling. Estimates of school effects from the layered model employing item response theory (IRT) scaled data are compared to estimates derived from a discrete growth model based on the…
ERIC Educational Resources Information Center
Koedel, Cory; Betts, Julian
2009-01-01
Value-added measures of teacher quality may be sensitive to the quantitative properties of the student tests upon which they are based. This paper focuses on the sensitivity of value-added to test-score-ceiling effects. Test-score ceilings are increasingly common in testing instruments across the country as education policy continues to emphasize…
Impediments to the Estimation of Teacher Value Added
ERIC Educational Resources Information Center
Ishii, Jun; Rivkin, Steven G.
2009-01-01
This article considers potential impediments to the estimation of teacher quality caused primarily by the purposeful behavior of families, administrators, and teachers. The discussion highlights the benefits of accounting for student and school differences through a value-added modeling approach that incorporates a student's history of family,…
Disentangling Disadvantage: Can We Distinguish Good Teaching from Classroom Composition?
ERIC Educational Resources Information Center
Zamarro, Gema; Engberg, John; Saavedra, Juan Esteban; Steele, Jennifer
2015-01-01
This article investigates the use of teacher value-added estimates to assess the distribution of effective teaching across students of varying socioeconomic disadvantage in the presence of classroom composition effects. We examine, via simulations, how accurately commonly used teacher value-added estimators recover the rank correlation between…
ERIC Educational Resources Information Center
Lincove, Jane Arnold; Osborne, Cynthia; Dillon, Amanda; Mills, Nicholas
2014-01-01
Despite questions about validity and reliability, the use of value-added estimation methods has moved beyond academic research into state accountability systems for teachers, schools, and teacher preparation programs (TPPs). Prior studies of value-added measurement for TPPs test the validity of researcher-designed models and find that measuring…
Value Added Based on Educational Positions in Dutch Secondary Education
ERIC Educational Resources Information Center
Timmermans, Anneke C.; Bosker, Roel J.; de Wolf, Inge F.; Doolaard, Simone; van der Werf, Margaretha P. C.
2014-01-01
Estimating added value as an indicator of school effectiveness in the context of educational accountability often occurs using test or examination scores of students. This study investigates the possibilities for using scores of educational positions as an alternative indicator. A number of advantages of a value added indicator based on…
Rethinking Teacher Evaluation: A Conversation about Statistical Inferences and Value-Added Models
ERIC Educational Resources Information Center
Callister Everson, Kimberlee; Feinauer, Erika; Sudweeks, Richard R.
2013-01-01
In this article, the authors provide a methodological critique of the current standard of value-added modeling forwarded in educational policy contexts as a means of measuring teacher effectiveness. Conventional value-added estimates of teacher quality are attempts to determine to what degree a teacher would theoretically contribute, on average,…
The Implications of Summer Learning Loss for Value-Added Estimates of Teacher Effectiveness
ERIC Educational Resources Information Center
Gershenson, Seth; Hayes, Michael S.
2018-01-01
School districts across the United States increasingly use value-added models (VAMs) to evaluate teachers. In practice, VAMs typically rely on lagged test scores from the previous academic year, which necessarily conflate summer with school-year learning and potentially bias estimates of teacher effectiveness. We investigate the practical…
ERIC Educational Resources Information Center
Perry, Thomas
2017-01-01
Value-added (VA) measures are currently the predominant approach used to compare the effectiveness of schools. Recent educational effectiveness research, however, has developed alternative approaches including the regression discontinuity (RD) design, which also allows estimation of absolute school effects. Initial research suggests RD is a viable…
ERIC Educational Resources Information Center
Goldhaber, Dan; Hansen, Michael
2010-01-01
Reforming teacher tenure is an idea that appears to be gaining traction with the underlying assumption being that one can infer to a reasonable degree how well a teacher will perform over her career based on estimates of her early-career effectiveness. Here we explore the potential for using value-added models to estimate performance and inform…
Perrone, Lorena; Grant, William B
2015-01-01
Considerable evidence indicates that diet is an important risk-modifying factor for Alzheimer's disease (AD). Evidence is also mounting that dietary advanced glycation end products (AGEs) are important risk factors for AD. This study strives to determine whether dietary AGEs estimated from national diets and epidemiological studies are associated with increased AD incidence. We estimated dietary AGE content using values from a published paper. We estimated intake of dietary AGEs from the Washington Heights-Inwood Community Aging Project (WHICAP) 1992 and 1999 cohort studies, which investigated how the Mediterranean diet (MeDi) affected AD incidence. Further, AD prevalence data came from three ecological studies and included data from 11 countries for 1977-1993, seven developing countries for 1995-2005, and Japan for 1985-2008. The analysis used dietary AGE values from 20 years before the AD prevalence data. Meat was always the food with the largest amount of AGEs. Other foods with significant AGEs included fish, cheese, vegetables, and vegetable oil. High MeDi adherence results in lower meat and dairy intake, which possess high AGE content. By using two different models to extrapolate dietary AGE intake in the WHICAP 1992 and 1999 cohort studies, we showed that reduced dietary AGE significantly correlates with reduced AD incidence. For the ecological studies, estimates of dietary AGEs in the national diets corresponded well with AD prevalence data even though the cooking methods were not well known. Dietary AGEs appear to be important risk factors for AD.
Value-Added Models and the Measurement of Teacher Productivity. CALDER Working Paper No. 54
ERIC Educational Resources Information Center
Harris, Douglas; Sass, Tim; Semykina, Anastasia
2010-01-01
Research on teacher productivity, and recently developed accountability systems for teachers, rely on value-added models to estimate the impact of teachers on student performance. The authors test many of the central assumptions required to derive value-added models from an underlying structural cumulative achievement model and reject nearly all…
What's the Difference? A Model for Measuring the Value Added by Higher Education in Australia
ERIC Educational Resources Information Center
Coates, Hamish
2009-01-01
Measures of student learning are playing an increasingly significant role in determining the quality and productivity of higher education. This paper evaluates approaches for estimating the value added by university education, and proposes a methodology for use by institutions and systems. The paper argues that value-added measures of learning are…
Value-Added Modeling and Educational Accountability: Are We Answering the Real Questions?
ERIC Educational Resources Information Center
Everson, Kimberlee C.
2017-01-01
Value-added estimates of teacher or school quality are increasingly used for both high- and low-stakes accountability purposes, making understanding of their limitations critical. A review of the recent value-added literature suggests three concerns with the state of the research. First, the issues receiving the most research attention have not…
A Value-Added Estimate of Higher Education Quality of US States
ERIC Educational Resources Information Center
Zhang, Lei
2009-01-01
States differ substantially in higher education policies. Little is known about the effects of state policies on the performance of public colleges and universities, largely because no clear measures of college quality exist. In this paper, I estimate the average quality of public colleges of US states based on the value-added to individuals'…
Evaluating Specification Tests in the Context of Value-Added Estimation
ERIC Educational Resources Information Center
Guarino, Cassandra M.; Reckase, Mark D.; Stacy, Brian W.; Wooldridge, Jeffrey M.
2015-01-01
We study the properties of two specification tests that have been applied to a variety of estimators in the context of value-added measures (VAMs) of teacher and school quality: the Hausman test for choosing between student-level random and fixed effects, and a test for feedback (sometimes called a "falsification test"). We discuss…
ERIC Educational Resources Information Center
Lipscomb, Stephen; Gill, Brian; Booker, Kevin; Johnson, Matthew
2010-01-01
At the request of Pittsburgh Public Schools (PPS) and the Pittsburgh Federation of Teachers (PFT), Mathematica is developing value-added models (VAMs) that aim to estimate the contributions of individual teachers, teams of teachers, and schools to the achievement growth of their students. The analyses described in this report are intended as an…
Evaluating Specification Tests in the Context of Value-Added Estimation. Working Paper #38
ERIC Educational Resources Information Center
Guarino, Cassandra M.; Reckase, Mark D.; Stacy, Brian W.; Wooldridge, Jeffrey M.
2014-01-01
We study the properties of two specification tests that have been applied to a variety of estimators in the context of value-added measures (VAMs) of teacher and school quality: the Hausman test for choosing between random and fixed effects and a test for feedback (sometimes called a "falsification test"). We discuss theoretical…
ERIC Educational Resources Information Center
Epstein, Diana; Miller, Raegen T.
2011-01-01
In August 2010 the "Los Angeles Times" published a special report on their website featuring performance ratings for nearly 6,000 Los Angeles Unified School District teachers. The move was controversial because the ratings were based on so-called value-added estimates of teachers' contributions to student learning. As with most…
ERIC Educational Resources Information Center
Chetty, Raj; Friedman, John N.; Rockoff, Jonah E.
2011-01-01
Are teachers' impacts on students' test scores ("value-added") a good measure of their quality? This question has sparked debate largely because of disagreement about (1) whether value-added (VA) provides unbiased estimates of teachers' impacts on student achievement and (2) whether high-VA teachers improve students' long-term outcomes.…
ERIC Educational Resources Information Center
Franco, M. Suzanne; Seidel, Kent
2014-01-01
Value-added approaches for attributing student growth to teachers often use weighted estimates of building-level factors based on "typical" schools to represent a range of community, school, and other variables related to teacher and student work that are not easily measured directly. This study examines whether such estimates are likely…
Can Value Added Add Value to Teacher Evaluation?
ERIC Educational Resources Information Center
Darling-Hammond, Linda
2015-01-01
The five thoughtful papers included in this issue of "Educational Researcher" ("ER") raise new questions about the use of value-added methods (VAMs) to estimate teachers' contributions to students' learning as part of personnel evaluation. The papers address both technical and implementation concerns, considering potential…
School system evaluation by value added analysis under endogeneity.
Manzi, Jorge; San Martín, Ernesto; Van Bellegem, Sébastien
2014-01-01
Value added is a common tool in educational research on effectiveness. It is often modeled as a (prediction of a) random effect in a specific hierarchical linear model. This paper shows that this modeling strategy is not valid when endogeneity is present. Endogeneity stems, for instance, from a correlation between the random effect in the hierarchical model and some of its covariates. This paper shows that this phenomenon is far from exceptional and can even be a generic problem when the covariates contain the prior score attainments, a typical situation in value added modeling. Starting from a general, model-free definition of value added, the paper derives an explicit expression of the value added in an endogenous hierarchical linear Gaussian model. Inference on value added is proposed using an instrumental variable approach. The impact of endogeneity on the value added and the estimated value added is calculated accurately. This is also illustrated on a large data set of individual scores of about 200,000 students in Chile.
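The endogeneity mechanism the paper analyzes — a school effect correlated with students' prior scores — is easy to reproduce in simulation. In the sketch below (hypothetical parameters, not the paper's model or data), the pooled OLS coefficient on prior score has a true value of 0.5 but comes out inflated, because prior scores are correlated with the school effect left in the error term:

```python
import numpy as np

rng = np.random.default_rng(1)
n_schools, n_students = 200, 50

school_effect = rng.normal(0, 1, n_schools)
# Endogeneity: prior attainment correlates with the school effect.
prior = 0.6 * school_effect[:, None] + rng.normal(
    0, 1, (n_schools, n_students))
score = 0.5 * prior + school_effect[:, None] + rng.normal(
    0, 1, (n_schools, n_students))

# Naive value added: school means of residuals from pooled OLS on prior score.
x, y = prior.ravel(), score.ravel()
beta = np.cov(x, y)[0, 1] / np.var(x, ddof=1)   # pooled OLS slope
resid = score - beta * prior
va = resid.mean(axis=1) - resid.mean()

# beta absorbs part of the school effect (true slope is 0.5),
# compressing the spread of the naive value-added estimates.
print(beta, np.corrcoef(va, school_effect)[0, 1])
```

An instrumental-variable approach, as the paper proposes, replaces the naive slope with one estimated from variation uncorrelated with the school effect.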
Getting Value out of Value-Added: Report of a Workshop
ERIC Educational Resources Information Center
Braun, Henry, Ed.; Chudowsky, Naomi, Ed.; Koenig, Judith, Ed.
2010-01-01
Value-added methods refer to efforts to estimate the relative contributions of specific teachers, schools, or programs to student test performance. In recent years, these methods have attracted considerable attention because of their potential applicability for educational accountability, teacher pay-for-performance systems, school and teacher…
The Misattribution of Summers in Teacher Value-Added
ERIC Educational Resources Information Center
Atteberry, Allison
2012-01-01
This paper investigates the extent to which spring-to-spring testing timelines bias teacher value-added as a result of conflating summer and school-year learning. Using a unique dataset that contains both fall and spring standardized test scores, the author examines the patterns in school-year versus summer learning. She estimates value-added…
The Impact of Alzheimer's Disease on the Chinese Economy.
Keogh-Brown, Marcus R; Jensen, Henning Tarp; Arrighi, H Michael; Smith, Richard D
2016-02-01
Recent increases in life expectancy may greatly expand future Alzheimer's Disease (AD) burdens. China's demographic profile, aging workforce and predicted increasing burden of AD-related care make its economy vulnerable to AD impacts. Previous economic estimates of AD predominantly focus on health system burdens and omit wider whole-economy effects, potentially underestimating the full economic benefit of effective treatment. AD-related prevalence, morbidity and mortality for 2011-2050 were simulated and were, together with associated caregiver time and costs, imposed on a dynamic Computable General Equilibrium model of the Chinese economy. Both economic and non-economic outcomes were analyzed. Simulated Chinese AD prevalence quadrupled during 2011-2050, from 6 million to 28 million. The cumulative discounted value of eliminating AD equates to China's 2012 GDP (US$8 trillion), and the annual predicted real value approaches US AD cost-of-illness (COI) estimates, exceeding US$1 trillion by 2050 (2011 prices). Lost labor contributes 62% of macroeconomic impacts. Only 10% derives from informal care, challenging previous COI estimates of 56%. Health and macroeconomic models predict an unfolding 2011-2050 Chinese AD epidemic with serious macroeconomic consequences. Significant investment in research and development (medical and non-medical) is warranted, and international researchers and national authorities should therefore target development of effective AD treatment and prevention strategies.
The Impact of Alzheimer's Disease on the Chinese Economy
Keogh-Brown, Marcus R.; Jensen, Henning Tarp; Arrighi, H. Michael; Smith, Richard D.
2015-01-01
Background Recent increases in life expectancy may greatly expand future Alzheimer's Disease (AD) burdens. China's demographic profile, aging workforce and predicted increasing burden of AD-related care make its economy vulnerable to AD impacts. Previous economic estimates of AD predominantly focus on health system burdens and omit wider whole-economy effects, potentially underestimating the full economic benefit of effective treatment. Methods AD-related prevalence, morbidity and mortality for 2011–2050 were simulated and were, together with associated caregiver time and costs, imposed on a dynamic Computable General Equilibrium model of the Chinese economy. Both economic and non-economic outcomes were analyzed. Findings Simulated Chinese AD prevalence quadrupled during 2011–50 from 6–28 million. The cumulative discounted value of eliminating AD equates to China's 2012 GDP (US$8 trillion), and the annual predicted real value approaches US AD cost-of-illness (COI) estimates, exceeding US$1 trillion by 2050 (2011-prices). Lost labor contributes 62% of macroeconomic impacts. Only 10% derives from informal care, challenging previous COI-estimates of 56%. Interpretation Health and macroeconomic models predict an unfolding 2011–2050 Chinese AD epidemic with serious macroeconomic consequences. Significant investment in research and development (medical and non-medical) is warranted and international researchers and national authorities should therefore target development of effective AD treatment and prevention strategies. PMID:26981556
Lanzafame, S; Giannelli, M; Garaci, F; Floris, R; Duggento, A; Guerrisi, M; Toschi, N
2016-05-01
An increasing number of studies have aimed to compare diffusion tensor imaging (DTI)-related parameters [e.g., mean diffusivity (MD), fractional anisotropy (FA), radial diffusivity (RD), and axial diffusivity (AD)] to complementary new indexes [e.g., mean kurtosis (MK)/radial kurtosis (RK)/axial kurtosis (AK)] derived through diffusion kurtosis imaging (DKI) in terms of their discriminative potential about tissue disease-related microstructural alterations. Given that the DTI and DKI models provide conceptually and quantitatively different estimates of the diffusion tensor, which can also depend on fitting routine, the aim of this study was to investigate model- and algorithm-dependent differences in MD/FA/RD/AD and anisotropy mode (MO) estimates in diffusion-weighted imaging of human brain white matter. The authors employed (a) data collected from 33 healthy subjects (20-59 yr, F: 15, M: 18) within the Human Connectome Project (HCP) on a customized 3 T scanner, and (b) data from 34 healthy subjects (26-61 yr, F: 5, M: 29) acquired on a clinical 3 T scanner. The DTI model was fitted to b-value = 0 and b-value = 1000 s/mm² data, while the DKI model was fitted to data comprising b-value = 0, 1000, and 3000/2500 s/mm² [for dataset (a)/(b), respectively] through nonlinear and weighted linear least squares algorithms. In addition to MK/RK/AK maps, MD/FA/MO/RD/AD maps were estimated from both models and both algorithms. Using tract-based spatial statistics, the authors tested the null hypothesis of zero difference between the two MD/FA/MO/RD/AD estimates in brain white matter for both datasets and both algorithms. DKI-derived MD/FA/RD/AD and MO estimates were significantly higher and lower, respectively, than corresponding DTI-derived estimates. All voxelwise differences extended over most of the white matter skeleton.
Fractional differences between the two estimates [(DKI - DTI)/DTI] of most invariants were seen to vary with the invariant value itself as well as with MK/RK/AK values, indicating substantial anatomical variability of these discrepancies. In the HCP dataset, the median voxelwise percentage differences across the whole white matter skeleton were (nonlinear least squares algorithm) 14.5% (8.2%-23.1%) for MD, 4.3% (1.4%-17.3%) for FA, -5.2% (-48.7% to -0.8%) for MO, 12.5% (6.4%-21.2%) for RD, and 16.1% (9.9%-25.6%) for AD (all ranges computed as 0.01 and 0.99 quantiles). All differences/trends were consistent between the discovery (HCP) and replication (local) datasets and between estimation algorithms. However, the relationships between such trends, estimated diffusion tensor invariants, and kurtosis estimates were impacted by the choice of fitting routine. Model-dependent differences in the estimation of conventional indexes of MD/FA/MO/RD/AD can be well beyond commonly seen disease-related alterations. While estimating diffusion tensor-derived indexes using the DKI model may be advantageous in terms of mitigating b-value dependence of diffusivity estimates, such estimates should not be referred to as conventional DTI-derived indexes in order to avoid confusion in interpretation as well as multicenter comparisons. In order to assess the potential and advantages of DKI with respect to DTI as well as to standardize diffusion-weighted imaging methods between centers, both conventional DTI-derived indexes and diffusion tensor invariants derived by fitting the non-Gaussian DKI model should be separately estimated and analyzed using the same combination of fitting routines.
Nurse Value-Added and Patient Outcomes in Acute Care
Yakusheva, Olga; Lindrooth, Richard; Weiss, Marianne
2014-01-01
Objective The aims of the study were to (1) estimate the relative nurse effectiveness, or individual nurse value-added (NVA), to patients’ clinical condition change during hospitalization; (2) examine nurse characteristics contributing to NVA; and (3) estimate the contribution of value-added nursing care to patient outcomes. Data Sources/Study Setting Electronic data on 1,203 staff nurses matched with 7,318 adult medical–surgical patients discharged between July 1, 2011 and December 31, 2011 from an urban Magnet-designated, 854-bed teaching hospital. Study Design Retrospective observational longitudinal analysis using a covariate-adjustment value-added model with nurse fixed effects. Data Collection/Extraction Methods Data were extracted from the study hospital's electronic patient records and human resources databases. Principal Findings Nurse effects were jointly significant and explained 7.9 percent of variance in patient clinical condition change during hospitalization. NVA was positively associated with having a baccalaureate degree or higher (0.55, p = .04) and expertise level (0.66, p = .03). NVA contributed to patient outcomes of shorter length of stay and lower costs. Conclusions Nurses differ in their value-added to patient outcomes. The ability to measure individual nurse relative value-added opens the possibility for development of performance metrics, performance-based rankings, and merit-based salary schemes to improve patient outcomes and reduce costs. PMID:25256089
Investigation of flow and transport processes at the MADE site using ensemble Kalman filter
Liu, Gaisheng; Chen, Y.; Zhang, Dongxiao
2008-01-01
In this work the ensemble Kalman filter (EnKF) is applied to investigate the flow and transport processes at the macro-dispersion experiment (MADE) site in Columbus, MS. The EnKF is a sequential data assimilation approach that adjusts the unknown model parameter values based on the observed data with time. The classic advection-dispersion (AD) and the dual-domain mass transfer (DDMT) models are employed to analyze the tritium plume during the second MADE tracer experiment. The hydraulic conductivity (K), longitudinal dispersivity in the AD model, and mass transfer rate coefficient and mobile porosity ratio in the DDMT model, are estimated in this investigation. Because of its sequential feature, the EnKF allows for the temporal scaling of transport parameters during the tritium concentration analysis. Inverse simulation results indicate that for the AD model to reproduce the extensive spatial spreading of the tritium observed in the field, the K in the downgradient area needs to be increased significantly. The estimated K in the AD model becomes an order of magnitude higher than the in situ flowmeter measurements over a large portion of media. On the other hand, the DDMT model gives an estimation of K that is much more comparable with the flowmeter values. In addition, the simulated concentrations by the DDMT model show a better agreement with the observed values. The root mean square (RMS) between the observed and simulated tritium plumes is 0.77 for the AD model and 0.45 for the DDMT model at 328 days. Unlike the AD model, which gives inconsistent K estimates at different times, the DDMT model is able to invert the K values that consistently reproduce the observed tritium concentrations through all times. © 2008 Elsevier Ltd. All rights reserved.
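The sequential-update idea behind the EnKF can be sketched for a single unknown parameter and a single observation; the linear forward model below is a toy stand-in for the flow and transport simulator, and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: estimate one log-conductivity value from one noisy observation.
# The linear forward model stands in for the real flow/transport simulator.
def forward(log_k):
    return 0.5 * log_k + 1.0

n_ens = 500
true_log_k = 2.0
obs_noise = 0.1

# Prior ensemble and a synthetic observation of the true system.
ens = rng.normal(0.0, 1.0, n_ens)
obs = forward(true_log_k) + rng.normal(0.0, obs_noise)

# Analysis step: Kalman gain built from ensemble (cross-)covariances.
pred = forward(ens)
gain = np.cov(ens, pred)[0, 1] / (pred.var(ddof=1) + obs_noise**2)

# Perturbed-observation update pulls each member toward the data.
perturbed = obs + rng.normal(0.0, obs_noise, n_ens)
ens_post = ens + gain * (perturbed - pred)
print(round(float(ens_post.mean()), 2))
```

After the update, the ensemble mean moves toward the true parameter and the ensemble spread shrinks; in the MADE application this step is repeated as each new batch of concentration data arrives.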
NASA Astrophysics Data System (ADS)
Li, Yongming; Li, Fan; Wang, Pin; Zhu, Xueru; Liu, Shujun; Qiu, Mingguo; Zhang, Jingna; Zeng, Xiaoping
2016-10-01
Traditional age estimation methods share the same idea of using the real age as the training label. However, these methods ignore the deviation between the real age and the brain age caused by accelerated brain aging. This paper accounts for this deviation and searches for it by maximizing the separability distance value rather than by minimizing the difference between the estimated brain age and the real age. First, the search range of the deviation is set, according to prior knowledge, to define the deviation candidates. Second, support vector regression (SVR) is used as the age estimation model, minimizing the difference between the estimated age and the real age plus the deviation rather than the real age itself. Third, a fitness function is designed based on the separability distance criterion. Fourth, age estimation is performed on the validation dataset using the trained model, the estimated ages are fed into the fitness function, and the fitness value of the deviation candidate is obtained. Fifth, the process is repeated until all deviation candidates have been evaluated, and the optimal deviation is the one with the maximum fitness value. The real age plus the optimal deviation is taken as the brain pathological age. The experimental results showed that separability was clearly improved. For normal control-Alzheimer's disease (NC-AD), normal control-mild cognitive impairment (NC-MCI), and MCI-AD, the average improvements were 0.178 (35.11%), 0.033 (14.47%), and 0.017 (39.53%), respectively. For NC-MCI-AD, the average improvement was 0.2287 (64.22%). The estimated brain pathological age is not only more helpful for classifying AD but also more precisely reflects accelerated brain aging. In conclusion, this paper offers a new method for brain age estimation that can distinguish different stages of AD and better reflect the extent of accelerated aging.
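The grid search over deviation candidates can be sketched with a toy one-feature dataset; a least-squares regression stands in for the SVR, and the fitness below is a simple standardized gap between the groups' brain-age-gap distributions, not the paper's exact separability criterion:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: one imaging feature that tracks *brain* age. Assumption: the
# AD-like group's brain age is real age + 5 years (accelerated aging).
n = 200
age_nc = rng.uniform(60.0, 80.0, n)
age_ad = rng.uniform(60.0, 80.0, n)
feat_nc = age_nc + rng.normal(0.0, 2.0, n)
feat_ad = age_ad + 5.0 + rng.normal(0.0, 2.0, n)

real_age = np.concatenate([age_nc, age_ad])
X = np.concatenate([feat_nc, feat_ad])
A = np.vstack([X, np.ones_like(X)]).T

def fitness(dev):
    # Train with labels = real age (+ dev for the AD-like group), then score
    # how well the resulting brain-age gap separates the two groups.
    y = np.concatenate([age_nc, age_ad + dev])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    gap = A @ coef - real_age
    g_nc, g_ad = gap[:n], gap[n:]
    return abs(g_ad.mean() - g_nc.mean()) / np.sqrt(0.5 * (g_ad.var() + g_nc.var()))

candidates = np.arange(0.0, 10.5, 0.5)   # deviation candidates (years)
best = max(candidates, key=fitness)
print(f"optimal deviation = {best:.1f} years")
```

In this toy setup the best candidate comes out positive, reflecting the simulated accelerated aging; the real method evaluates the fitness on held-out validation data rather than the training set.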
Middleton, John; Vaks, Jeffrey E
2007-04-01
Errors of calibrator-assigned values lead to errors in the testing of patient samples. The ability to estimate the uncertainties of calibrator-assigned values and other variables minimizes errors in testing processes. International Organization for Standardization guidelines provide simple equations for the estimation of calibrator uncertainty with simple value-assignment processes, but other methods are needed to estimate uncertainty in complex processes. We estimated the assigned-value uncertainty with a Monte Carlo computer simulation of a complex value-assignment process, based on a formalized description of the process, with measurement parameters estimated experimentally. This method was applied to study the uncertainty of a multilevel calibrator value assignment for a prealbumin immunoassay. The simulation results showed that the component of the uncertainty added by the process of value transfer from the reference material CRM470 to the calibrator is smaller than that of the reference material itself (<0.8% vs 3.7%). Varying the process parameters in the simulation model allowed for optimizing the process while keeping the added uncertainty small. The patient result uncertainty caused by the calibrator uncertainty was also found to be small. This method of estimating uncertainty is a powerful tool that allows for estimation of calibrator uncertainty and optimization of various value-assignment processes, with fewer measurements and lower reagent costs, while satisfying the uncertainty requirements. The new method expands and augments existing methods to allow estimation of uncertainty in complex processes.
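The Monte Carlo approach can be sketched in a few lines: draw the reference-material error and the transfer-process error from assumed distributions, propagate them through the value-transfer chain, and read the combined uncertainty off the simulated calibrator values. The 3.7% and 0.8% figures are quoted from the abstract; the single-step transfer structure and Gaussian error model are simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical value transfer: reference material -> calibrator.
# 3.7% and 0.8% relative uncertainties are the figures from the abstract;
# the Gaussian error model and single-step transfer are assumptions.
ref_value = 100.0
u_ref = 0.037        # relative uncertainty of the reference material
u_transfer = 0.008   # relative uncertainty added by the transfer process

n = 200_000
ref = ref_value * (1 + rng.normal(0.0, u_ref, n))   # reference error
cal = ref * (1 + rng.normal(0.0, u_transfer, n))    # transfer error on top

u_total = cal.std() / cal.mean()
print(f"combined relative uncertainty ~ {u_total:.4f}")
```

The combined uncertainty lands near the root-sum-square of the components rather than their sum, which is why a small transfer uncertainty adds little to the reference material's own 3.7%.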
An Empirical Analysis of Teacher Spillover Effects in Secondary School
ERIC Educational Resources Information Center
Koedel, Cory
2009-01-01
This paper examines whether educational production in secondary school involves joint production among teachers across subjects. In doing so, it also provides insights into the reliability of value-added modeling. Teacher value-added to reading test scores is estimated for four different teacher types: English, math, science and social-studies.…
MICROREFINING OF WASTE GLYCEROL FOR THE PRODUCTION OF A VALUE-ADDED PRODUCT
As a result of Phase I, a process to refine crude glycerin waste to value-added products was designed. An economic analysis was performed to determine the capital and operating costs for a commercial facility that implements this design. Using the estimated 1,800 gallons of ra...
The Intertemporal Stability of Teacher Effect Estimates. Working Paper 2008-22
ERIC Educational Resources Information Center
McCaffrey, Daniel F.; Sass, Tim R.; Lockwood, J.R.
2008-01-01
Recently, a number of school districts have begun using measures of teachers' contributions to student test scores or teacher "value added" to determine salaries and other monetary rewards. In this paper we investigate the precision of value-added measures by analyzing their inter-temporal stability. We find that these measures of…
NASA Astrophysics Data System (ADS)
Ito, Shin-ichi; Yoshie, Naoki; Okunishi, Takeshi; Ono, Tsuneo; Okazaki, Yuji; Kuwata, Akira; Hashioka, Taketo; Rose, Kenneth A.; Megrey, Bernard A.; Kishi, Michio J.; Nakamachi, Miwa; Shimizu, Yugo; Kakehi, Shigeho; Saito, Hiroaki; Takahashi, Kazutaka; Tadokoro, Kazuaki; Kusaka, Akira; Kasai, Hiromi
2010-10-01
The Oyashio region in the western North Pacific supports high biological productivity and has been well monitored. We applied the NEMURO (North Pacific Ecosystem Model for Understanding Regional Oceanography) model to simulate the nutrients, phytoplankton, and zooplankton dynamics. Determination of parameter values is very important, yet ad hoc calibration methods are often used. We used the automatic calibration software PEST (model-independent Parameter ESTimation), which has been used previously with NEMURO but in a system without ontogenetic vertical migration of the large zooplankton functional group. Determining the performance of PEST with vertical migration, and obtaining a set of realistic parameter values for the Oyashio, will likely be useful in future applications of NEMURO. Five identical twin simulation experiments were performed with the one-box version of NEMURO. The experiments differed in whether monthly snapshot or averaged state variables were used, in whether state variables were model functional groups or were aggregated (total phytoplankton, small plus large zooplankton), and in whether vertical migration of large zooplankton was included or not. We then applied NEMURO to monthly climatological field data covering 1 year for the Oyashio, and compared model fits and parameter values between PEST-determined estimates and values used in previous applications to the Oyashio region that relied on ad hoc calibration. We substituted the PEST and ad hoc calibrated parameter values into a 3-D version of NEMURO for the western North Pacific, and compared the two sets of spatial maps of chlorophyll-a with satellite-derived data. The identical twin experiments demonstrated that PEST could recover the known model parameter values when vertical migration was included, and that over-fitting can occur as a result of slight differences in the values of the state variables.
PEST recovered known parameter values when using monthly snapshots of aggregated state variables, but estimated a different set of parameters with monthly averaged values. Both sets of parameters resulted in good fits of the model to the simulated data. Disaggregating the variables provided to PEST into functional groups did not solve the over-fitting problem, and including vertical migration seemed to amplify the problem. When we used the climatological field data, simulated values with PEST-estimated parameters were closer to these field data than with the previously determined ad hoc set of parameter values. When these same PEST and ad hoc sets of parameter values were substituted into 3-D-NEMURO (without vertical migration), the PEST-estimated parameter values generated spatial maps that were similar to the satellite data for the Kuroshio Extension during January and March and for the subarctic ocean from May to November. With non-linear problems, such as vertical migration, PEST should be used with caution because parameter estimates can be sensitive to how the data are prepared and to the values used for the searching parameters of PEST. We recommend the usage of PEST, or other parameter optimization methods, to generate first-order parameter estimates for simulating specific systems and for insertion into 2-D and 3-D models. The parameter estimates that are generated are useful, and the inconsistencies between simulated values and the available field data provide valuable information on model behavior and the dynamics of the ecosystem.
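The identical twin idea, calibrating against synthetic observations generated from the model itself with known parameter values so the right answer is known in advance, can be sketched with a toy one-parameter model; the logistic growth curve below is an illustrative stand-in for NEMURO, and grid search stands in for PEST's optimizer:

```python
import numpy as np

# Identical-twin experiment in miniature: generate "observations" from the
# model itself with a known parameter, then check calibration recovers it.
# The logistic curve is a toy stand-in for NEMURO; grid search stands in
# for PEST's optimizer.
def model(r, t):
    return 1.0 / (1.0 + 9.0 * np.exp(-r * t))   # biomass fraction over time

t = np.arange(12)        # twelve monthly snapshots
r_true = 0.7
obs = model(r_true, t)   # synthetic truth, no noise added

candidates = np.linspace(0.1, 1.5, 141)   # growth-rate candidates
misfit = [np.sum((model(r, t) - obs) ** 2) for r in candidates]
r_hat = candidates[np.argmin(misfit)]
print(f"recovered r = {r_hat:.2f}")
```

Because the "data" come from the model, any failure to recover r_true points at the calibration procedure itself, which is exactly the diagnostic role the identical twin experiments play in the study above.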
Dodge, Hiroko H; Zhu, Jian; Harvey, Danielle; Saito, Naomi; Silbert, Lisa C; Kaye, Jeffrey A; Koeppe, Robert A; Albin, Roger L
2014-11-01
It is unknown which commonly used Alzheimer disease (AD) biomarker values, baseline or progression, best predict longitudinal cognitive decline. The sample comprised 526 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI). ADNI composite memory and executive scores were the primary outcomes. The individual-specific slope of the longitudinal trajectory of each biomarker was first estimated. These estimates and observed baseline biomarker values were used as predictors of cognitive decline. Variability in cognitive decline explained by baseline biomarker values was compared with variability explained by biomarker progression values. About 40% of variability in memory and executive function declines was explained by ventricular volume progression among mild cognitive impairment patients. A total of 84% of memory and 65% of executive function declines were explained by fluorodeoxyglucose positron emission tomography (FDG-PET) score progression and ventricular volume progression, respectively, among AD patients. For most biomarkers, biomarker progression explained more variability in cognitive decline than biomarker baseline values. This has important implications for clinical trials targeted to modify AD biomarkers. Copyright © 2014 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Haertel, Edward H.
2013-01-01
Policymakers and school administrators have embraced value-added models of teacher effectiveness as tools for educational improvement. Teacher value-added estimates may be viewed as complicated scores of a certain kind. This suggests using a test validation model to examine their reliability and validity. Validation begins with an interpretive…
ARM KAZR-ARSCL Value Added Product
Jensen, Michael
2012-09-28
The Ka-band ARM Zenith Radars (KAZRs) have replaced the long-serving Millimeter Cloud Radars, or MMCRs. Accordingly, the primary MMCR Value Added Product (VAP), the Active Remote Sensing of CLouds (ARSCL) product, is being replaced by a KAZR-based version, the KAZR-ARSCL VAP. KAZR-ARSCL provides cloud boundaries and best-estimate time-height fields of radar moments.
ERIC Educational Resources Information Center
Pride, Bryce L.
2012-01-01
The Adequate Yearly Progress (AYP) Model has been used to make many high-stakes decisions concerning schools, though it does not provide a complete assessment of student academic achievement and school effectiveness. To provide a clearer perspective, many states have implemented various Growth and Value Added Models, in addition to AYP. The…
Consumer preferences and willingness to pay for value-added chicken product attributes.
Martínez Michel, Lorelei; Anders, Sven; Wismer, Wendy V
2011-10-01
A growing demand for convenient and ready-to-eat products has increased poultry processors' interest in developing consumer-oriented value-added chicken products. In this study, a conjoint analysis survey of 276 chicken consumers in Edmonton was conducted during the summer of 2009 to assess the importance of the chicken part, production method, processing method, storage method, the presence of added flavor, and cooking method on consumer preferences for different value-added chicken product attributes. Estimates of consumer willingness to pay (WTP) premium prices for different combinations of value-added chicken attributes were also determined. Participants' "ideal" chicken product was a refrigerated product made with free-range chicken breast, produced with no additives or preservatives and no added flavor, which could be oven heated or pan heated. Half of all participants on average were willing to pay 30% more for a value-added chicken product over the price of a conventional product. Overall, young consumers, individuals who shop at Farmers' Markets and those who prefer free-range or organic products were more likely to pay a premium for value-added chicken products. As expected, consumers' WTP was affected negatively by product price. Combined knowledge of consumer product attribute preferences and consumer WTP for value-added chicken products can help the poultry industry design innovative value-added chicken products. Practical Application: An optimum combination of product attributes desired by consumers for the development of a new value-added chicken product, as well as the WTP for this product, have been identified in this study. This information is relevant to the poultry industry to enhance consumer satisfaction of future value-added chicken products and provide the tools for future profit growth. © 2011 Institute of Food Technologists®
NASA Astrophysics Data System (ADS)
Batzias, Dimitris F.
2012-12-01
In this work, we present an analytic estimation of recycled products added value in order to provide a means for determining the degree of recycling that maximizes profit, taking also into account the social interest by including the subsidy of the corresponding investment. A methodology has been developed based on Life Cycle Product (LCP) with emphasis on added values H, R as fractions of production and recycle cost, respectively (H, R >1, since profit is included), which decrease by the corresponding rates h, r in the recycle course, due to deterioration of quality. At macrolevel, the claim that "an increase of exergy price, as a result of available cheap energy sources becoming more scarce, leads to less recovered quantity of any recyclable material" is proved by means of the tradeoff between the partial benefits due to material saving and resources degradation/consumption (assessed in monetary terms).
ERIC Educational Resources Information Center
Fagioli, Loris P.
2014-01-01
This study compared a value-added approach to school accountability to the currently used metrics of accountability in California of Adequate Yearly Progress (AYP) and Academic Performance Index (API). Five-year student panel data (N?=?53,733) from 29 elementary schools in a large California school district were used to address the research…
Methods for Accounting for Co-Teaching in Value-Added Models. Working Paper
ERIC Educational Resources Information Center
Hock, Heinrich; Isenberg, Eric
2012-01-01
Isolating the effect of a given teacher on student achievement (value-added modeling) is complicated when the student is taught the same subject by more than one teacher. We consider three methods, which we call the Partial Credit Method, Teacher Team Method, and Full Roster Method, for estimating teacher effects in the presence of co-teaching.…
Tradeoffs in the Use of Value-Added Estimates of Teacher Effectiveness by School Districts
ERIC Educational Resources Information Center
Baxter, Andrew David
2011-01-01
A new capacity to track the inputs and outcomes of individual students' education production function has spurred a growing number of school districts to attempt to measure the productivity of their teachers in terms of student outcomes. The use of these value-added measures of teacher effectiveness is at the center of current education reform.…
The Effect of Summer on Value-Added Assessments of Teacher and School Performance
ERIC Educational Resources Information Center
Palardy, Gregory J.; Peng, Luyao
2015-01-01
This study examines the effects of including the summer period on value-added assessments (VAA) of teacher and school performance at the early grades. The results indicate that 40-62% of the variance in VAA estimates originates from the summer period, depending on the outcome (i.e., reading or math achievement gains). Furthermore, when summer is…
ERIC Educational Resources Information Center
Kelly, Sean; Monczunski, Laura
2007-01-01
Traditionally, state accountability systems have measured school-level achievement gains using cross-sectional data, for example, by comparing scores of one year's eighth graders to scores of the next year's eighth graders. This approach produces extremely volatile estimates of value added from year to year. This volatility suggests that the…
Measurement Error and Bias in Value-Added Models. Research Report. ETS RR-17-25
ERIC Educational Resources Information Center
Kane, Michael T.
2017-01-01
By aggregating residual gain scores (the differences between each student's current score and a predicted score based on prior performance) for a school or a teacher, value-added models (VAMs) can be used to generate estimates of school or teacher effects. It is known that random errors in the prior scores will introduce bias into predictions of…
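The residual-gain aggregation described above can be sketched directly: regress current scores on prior scores, form residuals, and average them by teacher. The data below are simulated with known teacher effects purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated classrooms: 3 hypothetical teachers with known true effects.
n = 3000
true_effect = np.array([-2.0, 0.0, 2.0])
prior = rng.normal(50.0, 10.0, n)     # prior-year scores
teacher = rng.integers(0, 3, n)       # teacher assignment
current = 0.8 * prior + true_effect[teacher] + rng.normal(0.0, 5.0, n)

# Step 1: predict current scores from prior performance (linear regression).
A = np.vstack([prior, np.ones(n)]).T
coef, *_ = np.linalg.lstsq(A, current, rcond=None)
predicted = A @ coef

# Step 2: residual gains, aggregated by teacher, are the VAM estimates.
residual = current - predicted
vam = np.array([residual[teacher == t].mean() for t in range(3)])
print(np.round(vam, 2))
```

With error-free prior scores the teacher averages land near the true effects; the report's point is that random measurement error in the prior scores would bias the predictions, and hence these averages.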
EPRI/NRC-RES fire human reliability analysis guidelines.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, Stuart R.; Cooper, Susan E.; Najafi, Bijan
2010-03-01
During the 1990s, the Electric Power Research Institute (EPRI) developed methods for fire risk analysis to support its utility members in the preparation of responses to Generic Letter 88-20, Supplement 4, 'Individual Plant Examination - External Events' (IPEEE). This effort produced a Fire Risk Assessment methodology for operations at power that was used by the majority of U.S. nuclear power plants (NPPs) in support of the IPEEE program and several NPPs overseas. Although these methods were acceptable for accomplishing the objectives of the IPEEE, EPRI and the U.S. Nuclear Regulatory Commission (NRC) recognized that they required upgrades to support current requirements for risk-informed, performance-based (RI/PB) applications. In 2001, EPRI and the USNRC's Office of Nuclear Regulatory Research (RES) embarked on a cooperative project to improve the state-of-the-art in fire risk assessment to support a new risk-informed environment in fire protection. This project produced a consensus document, NUREG/CR-6850 (EPRI 1011989), entitled 'Fire PRA Methodology for Nuclear Power Facilities' which addressed fire risk for at-power operations. NUREG/CR-6850 developed high level guidance on the process for identification and inclusion of human failure events (HFEs) into the fire PRA (FPRA), and a methodology for assigning quantitative screening values to these HFEs. It outlined the initial considerations of performance shaping factors (PSFs) and related fire effects that may need to be addressed in developing best-estimate human error probabilities (HEPs). However, NUREG/CR-6850 did not describe a methodology to develop best-estimate HEPs given the PSFs and the fire-related effects. In 2007, EPRI and RES embarked on another cooperative project to develop explicit guidance for estimating HEPs for human failure events under fire generated conditions, building upon existing human reliability analysis (HRA) methods.
This document provides a methodology and guidance for conducting a fire HRA. This process includes identification and definition of post-fire human failure events, qualitative analysis, quantification, recovery, dependency, and uncertainty. This document provides three approaches to quantification: screening, scoping, and detailed HRA. Screening is based on the guidance in NUREG/CR-6850, with some additional guidance for scenarios with long time windows. Scoping is a new approach to quantification developed specifically to support the iterative nature of fire PRA quantification. Scoping is intended to provide less conservative HEPs than screening, but requires fewer resources than a detailed HRA. For detailed HRA quantification, guidance has been developed on how to apply existing methods to assess post-fire HEPs.
Patel, Samir
2015-03-01
Health care is in a state of transition, shifting from volume-based success to value-based success. Hospital executives and referring physicians often do not understand the total value a radiology group provides. A template for easy, cost-effective implementation in clinical practice for most radiology groups to demonstrate the value they provide to their clients (patients, physicians, health care executives) has not been well described. A value management program was developed to document all of the value-added activities performed by on-site radiologists, quantify them in terms of time spent on each activity (investment), and present the benefits to internal and external stakeholders (outcomes). The radiology value-added matrix is the platform from which value-added activities are categorized and synthesized into a template for defining investments and outcomes. The value management program was first implemented systemwide in 2013. Across all serviced locations, 9,931.75 hours were invested. An annual executive summary report template demonstrating outcomes is given to clients. The mean and median individual value-added hours per radiologist were 134.52 and 113.33, respectively. If this program were extrapolated to the entire field of radiology (approximately 30,000 radiologists), it would have resulted in 10,641,161 uncompensated value-added hours documented in 2013, with an estimated economic value of $2.21 billion. Copyright © 2015 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Albin, Thomas J; Vink, Peter
2015-01-01
Anthropometric data are assumed to have a Gaussian (Normal) distribution, but if non-Gaussian, accommodation estimates are affected. When data are limited, users may choose to combine anthropometric elements by Combining Percentiles (CP) (adding or subtracting), despite known adverse effects. This study examined whether global anthropometric data are Gaussian distributed. It compared the Median Correlation Method (MCM) of combining anthropometric elements with unknown correlations to CP to determine if MCM provides better estimates of percentile values and accommodation. Percentile values of 604 male and female anthropometric data drawn from seven countries worldwide were expressed as standard scores. The standard scores were tested to determine if they were consistent with a Gaussian distribution. Empirical multipliers for determining percentile values were developed. In a test case, five anthropometric elements descriptive of seating were combined in addition and subtraction models. Percentile values were estimated for each model by CP, MCM with Gaussian distributed data, or MCM with empirically distributed data. The 5th and 95th percentile values of a dataset of global anthropometric data are shown to be asymmetrically distributed. MCM with empirical multipliers gave more accurate estimates of 5th and 95th percentile values. Anthropometric data are not Gaussian distributed. The MCM method is more accurate than adding or subtracting percentiles.
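The bias that CP introduces can be seen with a small worked example. The sketch below uses invented numbers (not from the study) and assumes independent Gaussian elements rather than the paper's median-correlation multipliers; it shows why adding two 95th-percentile values overstates the true 95th percentile of the combined dimension:

```python
import math

# Two hypothetical Gaussian anthropometric elements (means and SDs in mm);
# the specific values and element names are illustrative only.
mu1, sd1 = 495.0, 25.0   # e.g. a seated-length element
mu2, sd2 = 80.0, 10.0    # e.g. a clearance allowance
z95 = 1.645              # standard-normal multiplier for the 95th percentile

# Combining Percentiles (CP): add the two 95th-percentile values directly.
cp = (mu1 + z95 * sd1) + (mu2 + z95 * sd2)

# Correct combination for independent Gaussian elements: variances add,
# so the SD of the sum is the root-sum-of-squares, not the sum of SDs.
sd_sum = math.sqrt(sd1**2 + sd2**2)
correct = (mu1 + mu2) + z95 * sd_sum

print(round(cp, 1), round(correct, 1))  # CP exceeds the true 95th percentile
```

Here CP yields about 632.6 mm while the true 95th percentile of the sum is about 619.3 mm, so CP over-accommodates by roughly 13 mm even in the idealized Gaussian case.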
Disentangling Disadvantage: Can We Distinguish Good Teaching from Classroom Composition?
Zamarro, Gema; Engberg, John; Saavedra, Juan Esteban; Steele, Jennifer
This paper investigates the use of teacher value-added estimates to assess the distribution of effective teaching across students of varying socioeconomic disadvantage in the presence of classroom composition effects. We examine, via simulations, how accurately commonly used teacher value-added estimators recover the rank correlation between true and estimated teacher effects and a parameter representing the distribution of effective teaching. We consider various scenarios of teacher assignment, within-teacher variability in classroom composition, importance of classroom composition effects, and presence of student unobserved heterogeneity. No single model recovers unbiased estimates of the distribution parameter in all the scenarios we consider. Models that rank teacher effectiveness most accurately do not necessarily recover distribution parameter estimates with less bias. Since true teacher sorting in real data is seldom known, we recommend that analysts incorporate contextual information into their decisions about model choice, and we offer some guidance on how to do so.
ERIC Educational Resources Information Center
Grossman, Pam; Loeb, Susanna; Cohen, Julia; Hammerness, Karen; Wyckoff, James; Boyd, Donald; Lankford, Hamilton
2010-01-01
Even as research has begun to document that teachers matter, there is less certainty about what attributes of teachers make the most difference in raising student achievement. Numerous studies have estimated the relationship between teachers' characteristics, such as work experience and academic performance, and their value-added to student…
ERIC Educational Resources Information Center
Lopez-Martin, Esther; Kuosmanen, Timo; Gaviria, Jose Luis
2014-01-01
Value-added models are considered one of the best alternatives not only for accountability purposes but also to improve the school system itself. The estimates provided by these models measure the contribution of schools to students' academic progress, once the effect of other factors outside school control are eliminated. The functional form for…
Silva, Maria Inês Barreto; Lemos, Carla Cavalheiro da Silva; Torres, Márcia Regina Simas Gonçalves; Bregman, Rachel
2014-03-01
Chronic kidney disease (CKD) is associated with metabolic disorders, including insulin resistance (IR), mainly when associated with obesity and characterized by high abdominal adiposity (AbAd). Anthropometric measures are recommended for assessing AbAd in clinical settings, but their accuracies need to be evaluated. The aim of this study was to evaluate the precision of different anthropometric measures of AbAd in patients with CKD. We also sought to determine the association of AbAd with high homeostasis model assessment index of insulin resistance (HOMA-IR) values and the cutoff point for an AbAd index to predict high HOMA-IR values. A subset of clinically stable nondialyzed patients with CKD followed at a multidisciplinary outpatient clinic was enrolled in this cross-sectional study. The accuracy of the following anthropometric indices for assessing AbAd was evaluated using trunk fat, by dual x-ray absorptiometry (DXA), as a reference method: waist circumference, waist-to-hip ratio, conicity index, and waist-to-height ratio (WheiR). HOMA-IR was estimated to stratify patients into high and low HOMA-IR groups. The area under the receiver-operating characteristic curve (AUC-ROC), with sensitivity/specificity and 95% confidence interval (CI), was calculated for AbAd as a predictor of high HOMA-IR values. We studied 134 patients (55% males; 54% overweight/obese, body mass index ≥ 25 kg/m(2), age 64.9 ± 12.5 y, estimated glomerular filtration rate 29.0 ± 12.7 mL/min). Among the studied AbAd indices, WheiR was the only one to show correlation with DXA trunk fat after adjusting for confounders (P < 0.0001). Thus, WheiR was used to evaluate the association between AbAd and HOMA-IR values (r = 0.47; P < 0.0001). The cutoff point for WheiR as a predictor of high HOMA-IR values was 0.55 (AUC-ROC = 0.69 ± 0.05; 95% CI, 0.60-0.77; sensitivity/specificity, 68.9/61.9).
WheiR is recommended as an effective and precise anthropometric index to assess AbAd and to predict high HOMA-IR values in nondialyzed patients with CKD. Copyright © 2014 Elsevier Inc. All rights reserved.
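As a rough illustration of how such a cutoff is applied, the sketch below computes sensitivity and specificity for a WheiR threshold on a tiny invented dataset (the values are made up; the paper reports a 0.55 cutoff with AUC-ROC 0.69, sensitivity 68.9%, and specificity 61.9% on 134 patients):

```python
# Hypothetical WheiR values and high-HOMA-IR labels (1 = high HOMA-IR).
wheir   = [0.48, 0.57, 0.53, 0.60, 0.58, 0.50, 0.62, 0.54]
high_ir = [0,    0,    1,    1,    1,    0,    1,    0]

cutoff = 0.55  # the paper's reported threshold

# Count the four confusion-matrix cells at this cutoff.
tp = sum(1 for w, y in zip(wheir, high_ir) if w >= cutoff and y == 1)
fn = sum(1 for w, y in zip(wheir, high_ir) if w <  cutoff and y == 1)
tn = sum(1 for w, y in zip(wheir, high_ir) if w <  cutoff and y == 0)
fp = sum(1 for w, y in zip(wheir, high_ir) if w >= cutoff and y == 0)

sensitivity = tp / (tp + fn)  # fraction of high-HOMA-IR patients flagged
specificity = tn / (tn + fp)  # fraction of low-HOMA-IR patients passed
print(sensitivity, specificity)
```

Sweeping the cutoff over all observed values and plotting sensitivity against 1 − specificity traces the ROC curve whose area the study reports.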
ERIC Educational Resources Information Center
Soland, James
2017-01-01
Research shows that assuming a test scale is equal-interval can be problematic, especially when the assessment is being used to achieve a policy aim like evaluating growth over time. However, little research considers whether teacher value added is sensitive to the underlying test scale, and in particular whether treating an ordinal scale as…
A Comparison of Growth Percentile and Value-Added Models of Teacher Performance. Working Paper #39
ERIC Educational Resources Information Center
Guarino, Cassandra M.; Reckase, Mark D.; Stacy, Brian W.; Wooldridge, Jeffrey M.
2014-01-01
School districts and state departments of education frequently must choose among a variety of methods for estimating teacher quality. This paper examines the circumstances under which the choice among estimators of teacher quality is important. We examine estimates derived from student growth percentile measures and estimates derived from commonly…
Mellado-Gil, José Manuel; Jiménez-Moreno, Carmen María; Martin-Montalvo, Alejandro; Alvarez-Mercado, Ana Isabel; Fuente-Martin, Esther; Cobo-Vuilleumier, Nadia; Lorenzo, Petra Isabel; Bru-Tari, Eva; Herrera-Gómez, Irene de Gracia; López-Noriega, Livia; Pérez-Florido, Javier; Santoyo-López, Javier; Spyrantis, Andreas; Meda, Paolo; Boehm, Bernhard O; Quesada, Ivan; Gauthier, Benoit R
2016-04-01
A strategy to enhance pancreatic islet functional beta cell mass (BCM) while restraining inflammation, through the manipulation of molecular and cellular targets, would provide a means to counteract the deteriorating glycaemic control associated with diabetes mellitus. The aims of the current study were to investigate the therapeutic potential of such a target, the islet-enriched and diabetes-linked transcription factor paired box 4 (PAX4), to restrain experimental autoimmune diabetes (EAD) in the RIP-B7.1 mouse model background and to characterise putative cellular mechanisms associated with preserved BCM. Two groups of RIP-B7.1 mice were genetically engineered to: (1) conditionally express either PAX4 (BPTL) or its diabetes-linked mutant variant R129W (mutBPTL) using doxycycline (DOX); and (2) constitutively express luciferase in beta cells through the use of RIP. Mice were treated or not with DOX, and EAD was induced by immunisation with a murine preproinsulin II cDNA expression plasmid. The development of hyperglycaemia was monitored for up to 4 weeks following immunisation and alterations in the BCM were assessed weekly by non-invasive in vivo bioluminescence intensity (BLI). In parallel, BCM, islet cell proliferation and apoptosis were evaluated by immunocytochemistry. Alterations in PAX4- and PAX4R129W-mediated islet gene expression were investigated by microarray profiling. PAX4 preservation of endoplasmic reticulum (ER) homeostasis was assessed using thapsigargin, electron microscopy and intracellular calcium measurements. PAX4 overexpression blunted EAD, whereas the diabetes-linked mutant variant PAX4R129W did not convey protection. PAX4-expressing islets exhibited reduced insulitis and decreased beta cell apoptosis, correlating with diminished DNA damage and increased islet cell proliferation. Microarray profiling revealed that PAX4 but not PAX4R129W targeted expression of genes implicated in cell cycle and ER homeostasis. 
Consistent with the latter, islets overexpressing PAX4 were protected against thapsigargin-mediated ER-stress-related apoptosis. Luminal swelling associated with ER stress induced by thapsigargin was rescued in PAX4-overexpressing beta cells, correlating with preserved cytosolic calcium oscillations in response to glucose. In contrast, RNA interference-mediated repression of PAX4 sensitised MIN6 cells to thapsigargin-induced cell death. The coordinated regulation of distinct cellular pathways, particularly those related to ER homeostasis, by PAX4 but not by the mutant variant PAX4R129W alleviates beta cell degeneration and protects against diabetes mellitus. The raw data for the RNA microarray described herein are accessible in the Gene Expression Omnibus database under accession number GSE62846.
On the added value of forensic science and grand innovation challenges for the forensic community.
van Asten, Arian C
2014-03-01
In this paper the insights and results are presented of a long term and ongoing improvement effort within the Netherlands Forensic Institute (NFI) to establish a valuable innovation programme. From the overall perspective of the role and use of forensic science in the criminal justice system, the concepts of Forensic Information Value Added (FIVA) and Forensic Information Value Efficiency (FIVE) are introduced. From these concepts the key factors determining the added value of forensic investigations are discussed; Evidential Value, Relevance, Quality, Speed and Cost. By unravelling the added value of forensic science and combining this with the future needs and scientific and technological developments, six forensic grand challenges are introduced: i) Molecular Photo-fitting; ii) chemical imaging, profiling and age estimation of finger marks; iii) Advancing Forensic Medicine; iv) Objective Forensic Evaluation; v) the Digital Forensic Service Centre and vi) Real time In-Situ Chemical Identification. Finally, models for forensic innovation are presented that could lead to major international breakthroughs on all these six themes within a five year time span. This could cause a step change in the added value of forensic science and would make forensic investigative methods even more valuable than they already are today. © 2013. Published by Elsevier Ireland Ltd on behalf of Forensic Science Society. All rights reserved.
Why the Short-War Scenario is Wrong for Naval Planning.
1982-07-01
Sood, Neeraj; Ghosh, Arkadipta; Escarce, José J
2009-10-01
To estimate the effect of growth in health care costs that outpaces gross domestic product (GDP) growth ("excess" growth in health care costs) on employment, gross output, and value added to GDP of U.S. industries. We analyzed data from 38 U.S. industries for the period 1987-2005. All data are publicly available from various government agencies. We estimated bivariate and multivariate regressions. To develop the regression models, we assumed that rapid growth in health care costs has a larger effect on economic performance for industries where large percentages of workers receive employer-sponsored health insurance (ESI). We used the estimated regression coefficients to simulate economic outcomes under alternative scenarios of health care cost inflation. Faster growth in health care costs had greater adverse effects on economic outcomes for industries with larger percentages of workers who had ESI. We found that a 10 percent increase in excess growth in health care costs would have resulted in 120,803 fewer jobs, US$28,022 million in lost gross output, and US$14,082 million in lost value added in 2005. These declines represent 0.17 to 0.18 percent of employment, gross output, and value added in 2005. Excess growth in health care costs is adversely affecting the economic performance of U.S. industries.
Measuring the Value of Externalities from Higher Education
ERIC Educational Resources Information Center
Chapman, Bruce; Lounkaew, Kiatanantha
2015-01-01
This paper takes an innovative approach. We have used the idea of converting international evidence of the size of higher education externalities as a proportion of GDP into Australian-specific dollar equivalents and added these estimates to estimates of lifetime fiscal returns to graduates. This allows us to estimate the expected spillovers over…
32 CFR 750.47 - Measure of damages for property claims.
Code of Federal Regulations, 2010 CFR
2010-07-01
... appreciation in value effected through the repair shall be deducted from the actual or estimated gross cost of repairs. The amount of any net depreciation in the value of the property shall be added to such gross cost...
Design of production process main shaft process with lean manufacturing to improve productivity
NASA Astrophysics Data System (ADS)
Siregar, I.; Nasution, A. A.; Andayani, U.; Anizar; Syahputri, K.
2018-02-01
The object of this research is a manufacturing company that produces oil palm machinery parts. In the production process there are delays in the completion of main shaft orders. Delays in order completion indicate low productivity of the company in terms of resource utilization. This study aimed to obtain a draft improvement of the production process that can improve productivity by identifying and eliminating activities that do not add value (non-value-added activities). One approach that can be used to reduce and eliminate non-value-added activity is Lean Manufacturing. This study focuses on the identification of non-value-added activity with the value stream mapping analysis tool, while the elimination of non-value-added activity is done with the 5 whys tool and implementation of a pull demand system. Based on the research, non-value-added activity in the production process of the main shaft amounts to 9,509.51 minutes of the total lead time of 10,804.59 minutes. This shows that the level of efficiency (Process Cycle Efficiency) in the production process of the main shaft is still very low, at 11.89%. Estimation of the improvement showed a decrease in total lead time to 4,355.08 minutes and a greater process cycle efficiency of 29.73%, which indicates that the process is nearing the concept of lean production.
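The Process Cycle Efficiency figures above can be reconstructed directly from the abstract's lead-time numbers. This sketch assumes the value-added time is unchanged by the improvement; the small gap to the reported 11.89% is presumably rounding in the source:

```python
# Process Cycle Efficiency (PCE) = value-added time / total lead time.
lead_time = 10_804.59        # minutes, current state
nva_time  = 9_509.51         # minutes of non-value-added activity
va_time   = lead_time - nva_time  # 1,295.08 minutes of value-added work

pce = va_time / lead_time * 100
print(round(pce, 2))         # ~11.99%, close to the reported 11.89%

# Future state: waste eliminated, lead time reduced, VA time assumed constant.
lead_time_future = 4_355.08
pce_future = va_time / lead_time_future * 100
print(round(pce_future, 2))  # ~29.74%, matching the reported ~29.73%
```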
Methodology for adding glycemic index and glycemic load values to 24-hour dietary recall database.
Olendzki, Barbara C; Ma, Yunsheng; Culver, Annie L; Ockene, Ira S; Griffith, Jennifer A; Hafner, Andrea R; Hebert, James R
2006-01-01
We describe a method of adding the glycemic index (GI) and glycemic load (GL) values to the nutrient database of the 24-hour dietary recall interview (24HR), a widely used dietary assessment. We also calculated daily GI and GL values from the 24HR. Subjects were 641 healthy adults from central Massachusetts who completed 9067 24HRs. The 24HR-derived food data were matched to the International Table of Glycemic Index and Glycemic Load Values. The GI values for specific foods not in the table were estimated against similar foods according to physical and chemical factors that determine GI. Mixed foods were disaggregated into individual ingredients. Of 1261 carbohydrate-containing foods in the database, GI values of 602 foods were obtained from a direct match (47.7%), accounting for 22.36% of dietary carbohydrate. GI values from 656 foods (52.1%) were estimated, contributing to 77.64% of dietary carbohydrate. The GI values from three unknown foods (0.2%) could not be assigned. The average daily GI was 84 (SD 5.1, white bread as referent) and the average GL was 196 (SD 63). Using this methodology for adding GI and GL values to nutrient databases, it is possible to assess associations between GI and/or GL and body weight and chronic disease outcomes (diabetes, cancer, heart disease). This method can be used in clinical and survey research settings where 24HRs are a practical means for assessing diet. The implications for using this methodology compel a broader evaluation of diet with disease outcomes.
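The daily GL and carbohydrate-weighted GI described here follow a simple formula: GL per food is GI/100 times grams of available carbohydrate, summed over the day, and daily GI is the carbohydrate-weighted mean of the food GIs. A minimal sketch with invented food entries (not drawn from the paper's database; GI values here use the white-bread referent):

```python
# Hypothetical foods from one 24-hour recall; names, GI values, and
# carbohydrate grams are illustrative only.
foods = [
    {"name": "white bread, 2 slices", "gi": 100, "carb_g": 28},
    {"name": "apple",                 "gi": 52,  "carb_g": 21},
    {"name": "lentils, 1 cup",        "gi": 41,  "carb_g": 40},
]

# GL per food = GI/100 * available carbohydrate (g); daily GL is the sum.
daily_gl = sum(f["gi"] / 100 * f["carb_g"] for f in foods)

# Daily GI = carbohydrate-weighted mean of the food GIs.
total_carb = sum(f["carb_g"] for f in foods)
daily_gi = sum(f["gi"] * f["carb_g"] for f in foods) / total_carb

print(round(daily_gl, 1), round(daily_gi, 1))
```

Mixed dishes would first be disaggregated into ingredients, each matched or estimated as the abstract describes, before entering this summation.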
Lean manufacturing analysis to reduce waste on production process of fan products
NASA Astrophysics Data System (ADS)
Siregar, I.; Nasution, A. A.; Andayani, U.; Sari, R. M.; Syahputri, K.; Anizar
2018-02-01
This research is based on a case study at an electrical company. One of the products researched is the fan; when its production process runs, there is time that is not value-added, including inefficient movement of material among the raw materials and fan molding components. This study aims to reduce waste or non-value-added activities and shorten the total lead time by using the Value Stream Mapping tool. Lean manufacturing methods are used to analyze and reduce the non-value-added activities, namely the value stream mapping analysis tool, process activity mapping with 5W1H, and the 5 whys tool. Based on the research, non-value-added activities in the production process of a fan amount to 647.94 minutes of the total lead time of 725.68 minutes. Process cycle efficiency in the fan production process is thus still very low, at 11%. Meanwhile, estimates after the improvement showed a decrease in total lead time to 340.9 minutes and a greater process cycle efficiency of 24%, which indicates that the production process has improved.
Sood, Neeraj; Ghosh, Arkadipta; Escarce, José J
2009-01-01
Objective To estimate the effect of growth in health care costs that outpaces gross domestic product (GDP) growth (“excess” growth in health care costs) on employment, gross output, and value added to GDP of U.S. industries. Study Setting We analyzed data from 38 U.S. industries for the period 1987–2005. All data are publicly available from various government agencies. Study Design We estimated bivariate and multivariate regressions. To develop the regression models, we assumed that rapid growth in health care costs has a larger effect on economic performance for industries where large percentages of workers receive employer-sponsored health insurance (ESI). We used the estimated regression coefficients to simulate economic outcomes under alternative scenarios of health care cost inflation. Results Faster growth in health care costs had greater adverse effects on economic outcomes for industries with larger percentages of workers who had ESI. We found that a 10 percent increase in excess growth in health care costs would have resulted in 120,803 fewer jobs, US$28,022 million in lost gross output, and US$14,082 million in lost value added in 2005. These declines represent 0.17 to 0.18 percent of employment, gross output, and value added in 2005. Conclusion Excess growth in health care costs is adversely affecting the economic performance of U.S. industries. PMID:19500165
Three Essays on Estimating Causal Treatment Effects
ERIC Educational Resources Information Center
Deutsch, Jonah
2013-01-01
This dissertation is composed of three distinct chapters, each of which addresses issues of estimating treatment effects. The first chapter empirically tests the Value-Added (VA) model using school lotteries. The second chapter, co-authored with Michael Wood, considers properties of inverse probability weighting (IPW) in simple treatment effect…
15 CFR 400.23 - Application for production authority.
Code of Federal Regulations, 2013 CFR
2013-01-01
... procedures); (4) Domestic inputs, foreign inputs, and plant value added as percentages of finished product value; (5) Projected shipments to domestic market and export market (percentages); (6) Estimated total... explanation of its anticipated economic effects; (b) Identity of the user and its corporate affiliation; (c) A...
15 CFR 400.23 - Application for production authority.
Code of Federal Regulations, 2014 CFR
2014-01-01
... procedures); (4) Domestic inputs, foreign inputs, and plant value added as percentages of finished product value; (5) Projected shipments to domestic market and export market (percentages); (6) Estimated total... explanation of its anticipated economic effects; (b) Identity of the user and its corporate affiliation; (c) A...
Tobler, Amy L; Komro, Kelli A; Dabroski, Alexis; Aveyard, Paul; Markham, Wolfgang A
2011-06-01
We examined whether schools achieving better than expected educational outcomes for their students influence the risk of drug use and delinquency among urban, racial/ethnic minority youth. Adolescents (n = 2,621), who were primarily African American and Hispanic and enrolled in Chicago public schools (n = 61), completed surveys in 6th (aged 12) and 8th (aged 14) grades. Value-added education was derived from standardized residuals of regression equations predicting school-level academic achievement and attendance from students' sociodemographic profiles and defined as having higher academic achievement and attendance than that expected given the sociodemographic profile of the schools' student composition. Multilevel logistic regression estimated the effects of value-added education on students' drug use and delinquency. After considering initial risk behavior, value-added education was associated with lower incidence of alcohol, cigarette and marijuana use; stealing; and participating in a group-against-group fight. Significant beneficial effects of value-added education remained for cigarette and marijuana use, stealing and participating in a group-against-group fight after adjustment for individual- and school-level covariates. Alcohol use (past month and heavy episodic) showed marginally significant trends in the hypothesized direction after these adjustments. Inner-city schools may break the links between social disadvantage, drug use and delinquency. Identifying the processes related to value-added education in order to improve school environments is warranted given the high costs associated with individual-level interventions.
NASA Astrophysics Data System (ADS)
Sembiring, M. T.; Wahyuni, D.; Sinaga, T. S.; Silaban, A.
2018-02-01
Cost allocation in the manufacturing industry, particularly in palm oil mills, is still widely practiced based on estimation, which leads to cost distortion. Besides, the processing time determined by the company is not in accordance with the actual processing time at each work station. Hence, the purpose of this study is to eliminate non-value-added activities so that processing time can be shortened and production cost reduced. The Activity Based Costing method is used in this research to calculate production cost with consideration of value-added and non-value-added activities. The result of this study is a processing time decrease of 35.75% at the Weighting Bridge Station, 29.77% at the Sorting Station, 5.05% at the Loading Ramp Station, and 0.79% at the Sterilizer Station. The cost of manufacturing crude palm oil is IDR 5,236.81/kg calculated by the traditional method, IDR 4,583.37/kg calculated by the Activity Based Costing method before implementation of activity improvement, and IDR 4,581.71/kg after implementation of activity improvement. Meanwhile, the cost of manufacturing palm kernel is IDR 2,159.50/kg calculated by the traditional method, IDR 4,584.63/kg calculated by the Activity Based Costing method before implementation of activity improvement, and IDR 4,582.97/kg after implementation of activity improvement.
Toward Empirical Estimation of the Total Value of Protecting Rivers
NASA Astrophysics Data System (ADS)
Sanders, Larry D.; Walsh, Richard G.; Loomis, John B.
1990-07-01
The purpose of this paper is to develop and apply a procedure to estimate a statistical demand function for the protection of rivers in the Rocky Mountains of Colorado. Other states and nations around the world face a similar problem of estimating how much they can afford to pay for the protection of rivers. The results suggest that in addition to the direct consumption benefits of onsite recreation, total value includes offsite consumption of the flow of information about these activities and resources consumed as preservation benefits. A sample of the general population of the state reports a willingness to pay rather than forego both types of utility. We recommended that offsite values be added to the value of onsite recreation use to determine the total value of rivers to society.
Leiva-Candia, D E; Tsakona, S; Kopsahelis, N; García, I L; Papanikolaou, S; Dorado, M P; Koutinas, A A
2015-08-01
This study focuses on the valorisation of crude glycerol and sunflower meal (SFM) from conventional biodiesel production plants for the separation of value-added co-products (antioxidant-rich extracts and protein isolate) and for enhancing biodiesel production through microbial oil synthesis. Microbial oil production was evaluated using three oleaginous yeast strains (Rhodosporidium toruloides, Lipomyces starkeyi and Cryptococcus curvatus) cultivated on crude glycerol and nutrient-rich hydrolysates derived from either whole SFM or SFM fractions that remained after separation of value-added co-products. Fed-batch bioreactor cultures with R. toruloides led to the production of 37.4gL(-1) of total dry weight with a microbial oil content of 51.3% (ww(-1)) when a biorefinery concept based on SFM fractionation was employed. The estimated biodiesel properties conformed with the limits set by the EN 14214 and ASTM D 6751 standards. The estimated cold filter plugging point (7.3-8.6°C) of the lipids produced by R. toruloides is closer to that of biodiesel derived from palm oil. Copyright © 2015 Elsevier Ltd. All rights reserved.
2012-01-01
Background Symmetry and regularity of gait are essential outcomes of gait retraining programs, especially in lower-limb amputees. This study aims to present an algorithm to automatically compute symmetry and regularity indices, and to assess the minimum number of strides needed for appropriate evaluation of gait symmetry and regularity through autocorrelation of acceleration signals. Methods Ten transfemoral amputees (AMP) and ten control subjects (CTRL) were studied. Subjects wore an accelerometer and were asked to walk for 70 m at their natural speed (twice). Reference values of step and stride regularity indices (Ad1 and Ad2) were obtained by autocorrelation analysis of the vertical and antero-posterior acceleration signals, excluding initial and final strides. The Ad1 and Ad2 coefficients were then computed at different stages by analyzing increasing portions of the signals (considering both the signals cleaned of initial and final strides, and the whole signals). At each stage, the difference between the Ad1 and Ad2 values and the corresponding reference values was compared with the minimum detectable difference, MDD, of the index. If that difference was less than MDD, it was assumed that the portion of signal used in the analysis was of sufficient length to allow reliable estimation of the autocorrelation coefficient. Results All Ad1 and Ad2 indices were lower in AMP than in CTRL (P < 0.0001). Excluding initial and final strides from the analysis, the minimum number of strides needed for reliable computation of step symmetry and stride regularity was about 2.2 and 3.5, respectively. Analyzing the whole signals, the minimum number of strides increased to about 15 and 20, respectively. Conclusions Without the need to identify and eliminate the phases of gait initiation and termination, twenty strides can provide a reasonable amount of information to reliably estimate gait regularity in transfemoral amputees. PMID:22316184
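The step and stride regularity coefficients described here come from the normalized autocorrelation of the acceleration signal evaluated at lags of one step and one stride. A minimal sketch on a synthetic signal (the sampling rate, periods, and noise level are invented, and the index names Ad1/Ad2 simply follow the abstract; real gait accelerations are far less clean):

```python
import numpy as np

fs = 100.0                        # sampling rate, Hz (assumed)
t = np.arange(0, 20, 1 / fs)      # 20 s of walking
step_period = 0.5                 # s; stride period is twice this

# Synthetic vertical acceleration: periodic at the step rate plus noise.
rng = np.random.default_rng(0)
acc = np.sin(2 * np.pi * t / step_period) + 0.05 * rng.normal(size=t.size)

# Autocorrelation, normalized so the zero-lag coefficient equals 1.
x = acc - acc.mean()
ac = np.correlate(x, x, mode="full")[x.size - 1:]
ac = ac / ac[0]

d1 = int(step_period * fs)        # lag of one step
d2 = int(2 * step_period * fs)    # lag of one stride
Ad1, Ad2 = ac[d1], ac[d2]         # step symmetry and stride regularity
print(Ad1 > 0.8, Ad2 > 0.8)       # a regular, symmetric synthetic gait
```

On real data, the step and stride lags would be found as the first two dominant peaks of `ac` rather than assumed, and lower Ad1/Ad2 values would indicate the reduced symmetry and regularity the study observed in amputees.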
77 FR 74421 - Approval and Promulgation of Air Quality Implementation Plans for PM2.5
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-14
... calculation of future year PM 2.5 design values using the SMAT assumptions contained in the modeled guidance\\4... components. Future PM 2.5 design values at specified monitoring sites were estimated by adding the future... nonattainment area, all future site-specific PM 2.5 design values were below the concentration specified in the...
The Effect of Primary School Size on Academic Achievement
ERIC Educational Resources Information Center
Gershenson, Seth; Langbein, Laura
2015-01-01
Evidence on optimal school size is mixed. We estimate the effect of transitory changes in school size on the academic achievement of fourth-and fifth-grade students in North Carolina using student-level longitudinal administrative data. Estimates of value-added models that condition on school-specific linear time trends and a variety of…
The Treatment Effect of Grade Repetitions
ERIC Educational Resources Information Center
Mahjoub, Mohamed-Badrane
2017-01-01
This paper estimates the treatment effect of grade repetitions in French junior high schools, using a value-added test score as outcome and quarter of birth as instrument. With linear two-stage least squares, local average treatment effect is estimated at around 1.6 times the standard deviation of the achievement gain. With non-linear…
Estimating the value of volunteer-assisted community-based aging services: a case example.
Scharlach, Andrew E
2015-01-01
This study demonstrates the use of a social return on investment (SROI) approach in estimating the financial and social value created by volunteer-assisted community-based aging services. An expanded value added statement (EVAS) analysis found that the total value of outputs produced by the Concierge Club of San Diego substantially exceeded the cost of the program, after considering likely secondary and tertiary benefits for a range of affected stakeholders-including elderly service recipients, family members, volunteers, and societal institutions. Additional research is needed regarding the direct and indirect costs and benefits of volunteer support services for vulnerable older adults and their families.
Economic implications of current systems
NASA Technical Reports Server (NTRS)
Daniel, R. E.; Aster, R. W.
1983-01-01
The primary goals of this study are to estimate the value of R&D to photovoltaic (PV) metallization systems cost, and to provide a method for selecting an optimal metallization method for any given PV system. The value-added cost and relative electrical performance of 25 state-of-the-art (SOA) and advanced metallization system techniques are compared.
Estimating the probability for major gene Alzheimer disease
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farrer, L.A.; Cupples, L.A.
1994-02-01
Alzheimer disease (AD) is a neuropsychiatric illness caused by multiple etiologies. Prediction of whether AD is genetically based in a given family is problematic because of censoring bias among unaffected relatives as a consequence of the late onset of the disorder, diagnostic uncertainties, heterogeneity, and limited information in a single family. The authors have developed a method based on Bayesian probability to compute values for a continuous variable that ranks AD families as having a major gene form of AD (MGAD). In addition, they have compared the Bayesian method with a maximum-likelihood approach. These methods incorporate sex- and age-adjusted risk estimates and allow for phenocopies and familial clustering of age at onset. Agreement is high between the two approaches for ranking families as MGAD (Spearman rank [r] = .92). When either method is used, the numerical outcomes are sensitive to assumptions of the gene frequency and cumulative incidence of the disease in the population. Consequently, risk estimates should be used cautiously for counseling purposes; however, there are numerous valid applications of these procedures in genetic and epidemiological studies. 41 refs., 4 figs., 3 tabs.
NASA Astrophysics Data System (ADS)
Zhumadilov, Kassym; Ivannikov, Alexander; Khailov, Artem; Orlenko, Sergei; Skvortsov, Valeriy; Stepanenko, Valeriy; Kuterbekov, Kairat; Toyoda, Shin; Kazymbet, Polat; Hoshi, Masaharu
2017-11-01
In order to estimate radiation effects on uranium-enterprise staff and the local population, tooth samples were collected for EPR tooth enamel dosimetry from residents of Stepnogorsk city and from staff of the uranium mining enterprise in the Shantobe settlement (Akmola region, northern Kazakhstan). From measurements of tooth enamel EPR spectra, the total absorbed dose in the enamel samples and the added doses remaining after subtraction of the natural-background contribution were determined. For the population of Stepnogorsk city, an average added dose of 4 +/- 11 mGy with a variation of 51 mGy was obtained. For the staff of the uranium mining enterprise in the Shantobe settlement, an average added dose of 95 +/- 20 mGy with a variation of 85 mGy was obtained. The higher average dose and larger variation for the staff are probably due to the contribution of occupational exposure.
Added Value of Reliability to a Microgrid: Simulations of Three California Buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marnay, Chris; Lai, Judy; Stadler, Michael
The Distributed Energy Resources Customer Adoption Model is used to estimate the value an Oakland nursing home, a Riverside high school, and a Sunnyvale data center would need to put on higher electricity service reliability for them to adopt a Consortium for Electric Reliability Technology Solutions Microgrid (CM) based on economics alone. A fraction of each building's load is deemed critical based on its mission, and the added cost of CM capability to meet it is added to the on-site generation options. The three sites are analyzed with various resources available as microgrid components. Results show that the value placed on higher reliability often does not have to be significant for CM to appear attractive, about $25/kW·a and up, but the carbon footprint consequences are mixed because storage is often used to shift cheaper off-peak electricity to use during afternoon hours in competition with the solar sources.
Handling value added tax (VAT) in economic evaluations: should prices include VAT?
Bech, Mickael; Christiansen, Terkel; Gyrd-Hansen, Dorte
2006-01-01
In health economic evaluations, value added tax is commonly treated as a transfer payment. Following this argument, resources are valued equal to their net-of-tax prices in economic evaluations applying a societal perspective. In this article we argue that if there is the possibility that a new healthcare intervention may expand the healthcare budget, the social cost of input factors should be the gross-of-tax prices and not the net-of-tax prices. The rising interest in cost-benefit analysis and the use of absolute thresholds, net benefit estimates and acceptability curves in cost-effectiveness analysis makes this argument highly relevant for an appropriate use of these tools in prioritisation.
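The gross-of-tax versus net-of-tax distinction argued above can be sketched in a few lines. This is an illustrative toy, not the authors' model; the VAT rate and prices are assumed numbers.

```python
# Sketch of the paper's point: when a new intervention may expand the
# healthcare budget, input factors should be costed gross of VAT; otherwise
# VAT is treated as a transfer payment and netted out. Rates are illustrative.
def social_cost(net_price, vat_rate, budget_expanding):
    """Return the price to use in a societal-perspective evaluation."""
    return net_price * (1 + vat_rate) if budget_expanding else net_price

print(social_cost(100.0, 0.25, True))   # 125.0  (gross-of-tax)
print(social_cost(100.0, 0.25, False))  # 100.0  (net-of-tax, VAT as transfer)
```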
Quantitative evaluation of Alzheimer's disease
NASA Astrophysics Data System (ADS)
Duchesne, S.; Frisoni, G. B.
2009-02-01
We propose a single, quantitative metric called the disease evaluation factor (DEF) and assess its efficiency at estimating disease burden in normal control subjects (CTRL) and probable Alzheimer's disease (AD) patients. The study group consisted of 75 patients with a diagnosis of probable AD and 75 age-matched normal CTRL without neurological or neuropsychological deficit. We calculated a reference eigenspace of MRI appearance from reference data, into which our CTRL and probable AD subjects were projected. We then calculated the multi-dimensional hyperplane separating the CTRL and probable AD groups. The DEF was estimated via a multidimensional weighted distance of eigencoordinates for a given subject and the CTRL group mean, along salient principal components forming the separating hyperplane. We used quantile plots, Kolmogorov-Smirnov and χ2 tests to compare the DEF values and test that their distribution was normal. We used a linear discriminant test to separate CTRL from probable AD based on the DEF factor, and reached an accuracy of 87%. A quantitative biomarker in AD would act as an important surrogate marker of disease status and progression.
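The weighted eigencoordinate distance described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the component matrix, weights, and coordinates are invented for the example.

```python
import numpy as np

def def_score(x, ctrl_mean, components, weights):
    """DEF-style score sketch: weighted distance between a subject's
    eigen-coordinates and the control-group mean, taken along the salient
    principal components (rows of `components`). Names are illustrative."""
    proj = components @ (x - ctrl_mean)          # coordinates relative to CTRL mean
    return float(np.sqrt(np.sum(weights * proj ** 2)))

# toy demonstration: a subject far from the control mean scores higher
components = np.eye(3)                           # pretend the top 3 PCs are the axes
weights = np.array([0.6, 0.3, 0.1])              # salience weights (assumed)
ctrl_mean = np.zeros(3)
near = def_score(np.array([0.1, 0.0, 0.0]), ctrl_mean, components, weights)
far = def_score(np.array([2.0, 1.0, 0.5]), ctrl_mean, components, weights)
print(near < far)  # True
```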
Lima de Souza, Alexandre; Divino Ribeiro, Marinaldo; Mattos Negrão, Fagton; Castro, Wanderson José Rodrigues; Valério Geron, Luiz Juliano; de Azevedo Câmara, Larissa Rodrigues
2016-01-01
The objective was to evaluate the ingestive behavior of sheep fed marandu grass silage with added dehydrated brewery residue. The experiment used a completely randomized design with five treatments and four repetitions, the treatments being inclusion levels of 0, 10, 20, 30, and 40% (natural matter) of brewery residue, naturally dehydrated for 36 hours, in the marandu grass silage. Twenty sheep were used over an experimental period of 21 days, of which 15 were for adaptation to the diets. The brewery byproduct produced a quadratic effect (P < 0.05) on dry matter consumption, with the maximum estimated at an additive level of 23.25%. Ingestion efficiency and rumination efficiency of dry matter (g DM/hour) were significant (P < 0.05), with quadratic behavior, while NDF ingestion and rumination efficiency increased linearly. DM and NDF consumption expressed in kg/meal and in minutes/kg were also significant (P < 0.05), showing quadratic behavior. Rumination activity expressed in g DM and NDF per bolus was influenced (P < 0.05) quadratically by the addition of brewery residue to marandu grass silage, with a maximum estimated value of 1.57 g DM/bolus chewed at 24.72% additive inclusion. The conclusion is that intermediate inclusion levels of 20 to 25% dehydrated brewery residue affect certain parameters of ingestive behavior. PMID:27547811
Spectral Estimation: An Overdetermined Rational Model Equation Approach.
1982-09-15
Final report, Arizona State Univ., Tempe, Dept. of Electrical and Computer Engineering. Keywords: rational spectral estimation, ARMA model, AR model, MA model, spectrum, singular value decomposition, adaptive implementation.
Bigger is Better, but at What Cost? Estimating the Economic Value of Incremental Data Assets.
Dalessandro, Brian; Perlich, Claudia; Raeder, Troy
2014-06-01
Many firms depend on third-party vendors to supply data for commercial predictive modeling applications. An issue that has received very little attention in the prior research literature is the estimation of a fair price for purchased data. In this work we present a methodology for estimating the economic value of adding incremental data to predictive modeling applications and present two cases studies. The methodology starts with estimating the effect that incremental data has on model performance in terms of common classification evaluation metrics. This effect is then translated into economic units, which gives an expected economic value that the firm might realize with the acquisition of a particular data asset. With this estimate a firm can then set a data acquisition price that targets a particular return on investment. This article presents the methodology in full detail and illustrates it in the context of two marketing case studies.
Senthamizhchelvan, Srinivasan; Hobbs, Robert F.; Song, Hong; Frey, Eric C.; Zhang, Zhe; Armour, Elwood; Wahl, Richard L.; Loeb, David M.; Sgouros, George
2012-01-01
153Sm-ethylenediamine tetramethylene phosphonic acid (153Sm-EDTMP) therapy for osteosarcoma is being investigated. In this study, we analyzed the influence of 153Sm-EDTMP administered activity (AA), osteosarcoma tumor density, mass, and the shape of the tumor on absorbed dose (AD). We also studied the biologic implication of the nonuniform tumor AD distribution using radiobiologic modeling and examined the relationship between tumor AD and response. Methods: Nineteen tumors in 6 patients with recurrent, refractory osteosarcoma enrolled in a phase I or II clinical trial of 153Sm-EDTMP were analyzed using the 3-dimensional radiobiologic dosimetry (3D-RD) software package. Patients received a low dose of 153Sm-EDTMP (37.0–51.8 MBq/kg), followed on hematologic recovery by a second, high dose (222 MBq/kg). Treatment response was evaluated using either CT or MRI after each therapy. SPECT/CT of the tumor regions were obtained at 4 and 48 h or 72 h after 153Sm-EDTMP therapy for 3D-RD analysis. Mean tumor AD was also calculated using the OLINDA/EXM unit-density sphere model and was compared with the 3D-RD estimates. Results: On average, a 5-fold increase in the AA led to a 4-fold increase in the mean tumor AD for the high- versus low-dose–treated patients. The ranges of mean tumor AD and equivalent uniform dose (EUD) for low-dose therapy were 1.48–14.6 and 0.98–3.90 Gy, respectively. Corresponding values for high-dose therapy were 2.93–59.3 and 1.89–12.3 Gy, respectively. Mean tumor AD estimates obtained from OLINDA/EXM were within 5% of the mean AD values obtained using 3D-RD. On an individual tumor basis, both mean AD and EUD were positively related to percentage tumor volume reduction (P = 0.031 and 0.023, respectively). Conclusion: The variations in tumor density, mass, and shape seen in these tumors did not significantly affect the mean tumor AD estimation. The tumor EUD was approximately 2- and 3-fold lower than the mean AD for low- and high-dose therapy, respectively. 
A dose–response relationship was observed for transient tumor volume shrinkage. PMID:22251554
Fieuws, Steffen; Willems, Guy; Larsen-Tangmose, Sara; Lynnerup, Niels; Boldsen, Jesper; Thevissen, Patrick
2016-03-01
When an estimate of age is needed, typically multiple indicators are present, as found in skeletal or dental information. There exists a vast literature on approaches to estimate age from such multivariate data. Application of Bayes' rule has been proposed to overcome drawbacks of classical regression models but becomes less trivial as soon as the number of indicators increases. Each of the age indicators can lead to a different point estimate ("the most plausible value for age") and a prediction interval ("the range of possible values"). The major challenge in the combination of multiple indicators is not the calculation of a combined point estimate for age but the construction of an appropriate prediction interval. Ignoring the correlation between the age indicators results in intervals that are too small. Boldsen et al. (2002) presented an ad-hoc procedure to construct an approximate confidence interval without the need to model the multivariate correlation structure between the indicators. The aim of the present paper is to bring this pragmatic approach to attention and to evaluate its performance in a practical setting. This is all the more needed since recent publications ignore the need for interval estimation. To illustrate and evaluate the method, third molar scores (Köhler et al., 1995) are used to estimate age in a dataset of 3200 male subjects in the juvenile age range.
von Thiele Schwarz, Ulrica; Hasson, Henna
2012-05-01
To investigate the effects of physical exercise during work hours (PE) and reduced work hours (RWH) on direct and indirect costs associated with sickness absence (SA). Sickness absence and related costs at six workplaces, matched and randomized to three conditions (PE, RWH, and referents), were retrieved from company records and/or estimated using salary conversion methods or value-added equations on the basis of interview data. Although SA days decreased in all conditions (PE, 11.4%; RWH, 4.9%; referents, 15.9%), costs were reduced in the PE (22.2%) and RWH (4.9%) conditions but not among referents (10.2% increase). Worksite health interventions may generate savings in SA costs. Costs may not be linear to changes in SA days. Combining the friction method with indirect cost estimates on the basis of value-added productivity may help illuminate both direct and indirect SA costs.
Functional Brain Networks: Does the Choice of Dependency Estimator and Binarization Method Matter?
NASA Astrophysics Data System (ADS)
Jalili, Mahdi
2016-07-01
The human brain can be modelled as a complex networked structure with brain regions as individual nodes and their anatomical/functional links as edges. Functional brain networks are constructed by first extracting weighted connectivity matrices, and then binarizing them to minimize the noise level. Different methods have been used to estimate the dependency values between the nodes and to obtain a binary network from a weighted connectivity matrix. In this work we study topological properties of EEG-based functional networks in Alzheimer’s Disease (AD). To estimate the connectivity strength between two time series, we use Pearson correlation, coherence, phase order parameter and synchronization likelihood. In order to binarize the weighted connectivity matrices, we use Minimum Spanning Tree (MST), Minimum Connected Component (MCC), uniform threshold and density-preserving methods. We find that the detected AD-related abnormalities highly depend on the methods used for dependency estimation and binarization. Topological properties of networks constructed using coherence method and MCC binarization show more significant differences between AD and healthy subjects than the other methods. These results might explain contradictory results reported in the literature for network properties specific to AD symptoms. The analysis method should be seriously taken into account in the interpretation of network-based analysis of brain signals.
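Two of the binarization schemes named above (uniform threshold and density-preserving) are simple enough to sketch. This is a generic NumPy illustration under an invented 3-node connectivity matrix, not the authors' pipeline; MST and MCC binarization would need a graph library and are omitted.

```python
import numpy as np

def binarize_threshold(w, tau):
    """Uniform threshold: keep edges whose weight exceeds tau."""
    a = (w > tau).astype(int)
    np.fill_diagonal(a, 0)
    return a

def binarize_density(w, density):
    """Density-preserving: keep the strongest edges until the target edge
    density is reached (undirected, off-diagonal)."""
    n = w.shape[0]
    iu = np.triu_indices(n, k=1)
    k = int(round(density * len(iu[0])))     # number of edges to keep
    order = np.argsort(w[iu])[::-1][:k]      # indices of the strongest k edges
    a = np.zeros_like(w, dtype=int)
    a[iu[0][order], iu[1][order]] = 1
    return a + a.T                           # symmetrize

w = np.array([[0.0, 0.9, 0.2],
              [0.9, 0.0, 0.6],
              [0.2, 0.6, 0.0]])              # toy weighted connectivity matrix
print(binarize_density(w, 2 / 3).sum() // 2)  # 2 edges kept
```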
Estimation of the effects of heavy Asian dust on respiratory function by definition type.
Kurai, Jun; Watanabe, Masanari; Noma, Hisashi; Iwata, Kyoko; Taniguchi, Jumpei; Sano, Hiroyuki; Tohda, Yuji; Shimizu, Eiji
2017-01-01
The adverse effects of Asian dust (AD) on health have been demonstrated in earlier studies, but there is no standardized definition of heavy AD. This study aimed to examine which definition of heavy AD has the most adverse effect on respiratory function. One hundred and thirty-seven adults with asthma and 384 school children self-measured their morning peak expiratory flow (PEF). The four definitions of heavy AD are: (1) the definition provided by the Japan Meteorological Agency (JMA), (2) daily median AD particle level ≥ 0.07 km⁻¹, obtained through light detection and ranging (LIDAR), (3) hourly AD particle level ≥ 0.1 km⁻¹, and (4) hourly level ≥ 0.07 km⁻¹. Linear mixed models were used to estimate the effects of heavy AD, by definition type, on daily PEF values. In adults with asthma, under the JMA's definition, significantly reduced PEF was observed on heavy-AD days (lag 0), lag 0-1, and lag 0-3. In school children, after a heavy-AD event as defined by the JMA, PEF significantly decreased on lag 0-1, lag 0-2, and lag 0-3. However, under the other definitions, there was no significant decrease in PEF in either the adults or the children. The associations between heavy AD and respiratory function differed between these definitions.
Predicting volumes and numbers of logs by grade from hardwood cruise data
Daniel A. Yaussy; Robert L. Brisbin; Mary J. Humphreys
1988-01-01
The equations presented allow the estimation of quality and quantity of logs produced from a hardwood stand based on cruise data. When packaged in appropriate computer software, the information will provide the mill manager with the means to estimate the value of logs that would be added to a mill yard inventory from a timber sale.
Costs, Benefits, and Adoption of Additive Manufacturing: A Supply Chain Perspective
Thomas, Douglas
2017-01-01
There are three primary aspects to the economics of additive manufacturing: measuring the value of goods produced, measuring the costs and benefits of using the technology, and estimating the adoption and diffusion of the technology. This paper provides an updated estimate of the value of goods produced. It then reviews the literature on additive manufacturing costs and identifies those instances in the literature where this technology is cost effective. The paper then goes on to propose an approach for examining and understanding the societal costs and benefits of this technology both from a monetary viewpoint and a resource consumption viewpoint. The final section discusses the trends in the adoption of additive manufacturing. Globally, there is an estimated $667 million in value added produced using additive manufacturing, which equates to 0.01 % of total global manufacturing value added. US value added is estimated as $241 million. Current research on additive manufacturing costs reveals that it is cost effective for manufacturing small batches with continued centralized production; however, with increased automation distributed production may become cost effective. Due to the complexities of measuring additive manufacturing costs and data limitations, current studies are limited in their scope. Many of the current studies examine the production of single parts and those that examine assemblies tend not to examine supply chain effects such as inventory and transportation costs along with decreased risk to supply disruption. The additive manufacturing system and the material costs constitute a significant portion of an additive manufactured product; however, these costs are declining over time. 
The current trends in costs and benefits have resulted in this technology representing 0.02 % of the relevant manufacturing industries in the US; however, as the costs of additive manufacturing systems decrease, this technology may become widely adopted and change the supplier, manufacturer, and consumer interactions. An examination in the adoption of additive manufacturing reveals that for this technology to exceed $4.4 billion in 2020, $16.0 billion in 2025, and $196.8 billion in 2035 it would need to deviate from its current trends of adoption. PMID:28747809
AdS and dS Entropy from String Junctions or The Function of Junction Conjunctions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silverstein, Eva M
Flux compactifications of string theory exhibiting the possibility of discretely tuning the cosmological constant to small values have been constructed. The highly tuned vacua in this discretuum have curvature radii which scale as large powers of the flux quantum numbers, exponential in the number of cycles in the compactification. By the arguments of Susskind/Witten (in the AdS case) and Gibbons/Hawking (in the dS case), we expect correspondingly large entropies associated with these vacua. If they are to provide a dual description of these vacua on their Coulomb branch, branes traded for the flux need to account for this entropy at the appropriate energy scale. In this note, we argue that simple string junctions and webs ending on the branes can account for this large entropy, obtaining a rough estimate for junction entropy that agrees with the existing rough estimates for the spacing of the discretuum. In particular, the brane entropy can account for the (A)dS entropy far away from string scale correspondence limits.
Romano, Karen Rodrigues; Dias Bartolomeu Abadio Finco, Fernanda; Rosenthal, Amauri; Vinicius Alves Finco, Marcus; Deliza, Rosires
2016-11-01
This study aimed at estimating the consumer's willingness to pay (WTP) more for value-added pomegranate juice using the contingent valuation method (CVM). The WTP was estimated applying the open-ended elicitation technique with 454 consumers in two supermarkets located in Rio de Janeiro, Brazil. The average consumer's WTP more for pomegranate juice was estimated in R$2.04 (Brazilian currency) and the income elasticity coefficient at the midpoint was 0.19, i.e., a 10% increase in consumer income will increase, on average, 1.9% the WTP of pomegranate juice (ceteris paribus). Therefore, the income elasticity coefficient was considered inelastic, once an increase in income would have low effect on the WTP for these consumers. The results indicated that the consumers were interested in acquiring a non-traditional juice processed using a technology that preserves vitamins and antioxidants, maintains the flavor of "fresh juice" without colorants and preservatives, despite the pomegranate is not part of the Brazilian diet. Copyright © 2016 Elsevier Ltd. All rights reserved.
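The midpoint (arc) income elasticity cited above can be reproduced with a short formula. This is a generic illustration of the arc-elasticity calculation, not the study's data; the income and WTP figures below are invented.

```python
def midpoint_elasticity(q1, q2, i1, i2):
    """Arc (midpoint) elasticity: %ΔWTP over %Δincome, with each percentage
    change measured against the midpoint of the two values."""
    dq = (q2 - q1) / ((q1 + q2) / 2)
    di = (i2 - i1) / ((i1 + i2) / 2)
    return dq / di

# illustrative: WTP rising from R$2.04 to R$2.08 as income rises 3000 -> 3300
e = midpoint_elasticity(2.04, 2.08, 3000, 3300)
print(e < 1)  # True: an elasticity below 1 is inelastic, as in the study
```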
Unsteady load on an oscillating Kaplan turbine runner
NASA Astrophysics Data System (ADS)
Puolakka, O.; Keto-Tokoi, J.; Matusiak, J.
2013-02-01
A Kaplan turbine runner oscillating in turbine waterways is subjected to a varying hydrodynamic load. Numerical simulation of the related unsteady flow is time-consuming and research is very limited. In this study, a simplified method based on unsteady airfoil theory is presented for evaluation of the unsteady load for vibration analyses of the turbine shaft line. The runner is assumed to oscillate as a rigid body in spin and axial heave, and the reaction force is resolved into added masses and dampings. The method is applied on three Kaplan runners at nominal operating conditions. Estimates for added masses and dampings are considered to be of a magnitude significant for shaft line vibration. Moderate variation in the added masses and minor variation in the added dampings is found in the frequency range of interest. Reference results for added masses are derived by solving the boundary value problem for small motions of inviscid fluid using the finite element method. Good correspondence is found in the added mass estimates of the two methods. The unsteady airfoil method is considered accurate enough for design purposes. Experimental results are needed for validation of unsteady load analyses.
NASA Astrophysics Data System (ADS)
Kopeć, Jacek M.; Kwiatkowski, Kamil; de Haan, Siebren; Malinowski, Szymon P.
2016-05-01
Navigational information broadcast by commercial aircraft in the form of Mode-S EHS (Mode-S Enhanced Surveillance) and ADS-B (Automatic Dependent Surveillance-Broadcast) messages can be considered a new source of upper tropospheric and lower stratospheric turbulence estimates. A set of three processing methods is proposed and analysed using a quality record of turbulence encounters made by a research aircraft. The proposed methods are based on processing the vertical acceleration or the background wind into the eddy dissipation rate. Turbulence intensity can be estimated using the standard content of the Mode-S EHS/ADS-B. The results are based on a Mode-S EHS/ADS-B data set generated synthetically from the transmissions of the research aircraft. This data set was validated using the overlapping record of the Mode-S EHS/ADS-B received from the same research aircraft. The turbulence intensity, meaning the eddy dissipation rate, obtained from the proposed methods based on the Mode-S EHS/ADS-B is compared with the value obtained using the on-board accelerometer. The results of the comparison indicate the potential of the methods. The advantages and limitations of the presented approaches are discussed.
ERIC Educational Resources Information Center
Goldhaber, Dan; Hansen, Michael
2010-01-01
Economic theory commonly models unobserved worker quality as a given parameter that is fixed over time, but empirical evidence supporting this assumption is sparse. In this paper we report on work estimating the stability of value-added estimates of teacher effects, an important area of investigation given that new workforce policies implicitly…
Total economic value of wetlands products and services in Uganda.
Kakuru, Willy; Turyahabwe, Nelson; Mugisha, Johnny
2013-01-01
Wetlands provide food and non-food products that contribute to income and food security in Uganda. This study determined the economic value of wetland resources and their contribution to food security in the three agroecological zones of Uganda. The values of wetland resources were estimated using primary and secondary data. Market price, Productivity, and Contingent valuation methods were used to estimate the value of wetland resources. The per capita value of fish was approximately US$ 0.49 person⁻¹. Fish spawning was valued at approximately US$ 363,815 year⁻¹, livestock pastures at US$ 4.24 million, domestic water use at US$ 34 million year⁻¹, and the gross annual value added by wetlands to milk production at US$ 1.22 million. Flood control was valued at approximately US$ 1,702,934,880 hectare⁻¹ year⁻¹ and water regulation and recharge at US$ 7,056,360 hectare⁻¹ year⁻¹. Through provision of grass for mulching, wetlands were estimated to contribute to US$ 8.65 million annually. The annual contribution of non-use values was estimated in the range of US$ 7.1 million for water recharge and regulation and to US$ 1.7 billion for flood control. Thus, resource investment for wetlands conservation is economically justified to create incentives for continued benefits.
Aquacultural and socio-economic aspects of processing carps into some value-added products.
Sehgal, H S; Sehgal, G K
2002-05-01
Carps are the mainstay of Indian aquaculture, contributing over 90% to the total fish production, which was estimated to be 1.77 million metric tonnes in 1996. Carp culture has a great potential for waste utilization and thus for pollution abatement. Many wastes such as cow, poultry, pig, duck, goat, and sheep excreta, biogas slurry, effluents from different kinds of factories/industries have been efficiently used for enhancing the productivity of natural food of carps and related species. Besides, several organic wastes/byproducts such as plant products, wastes from animal husbandry, and industrial by-products have been used as carp feed ingredients to lower the cost of supplementary feeding. However, to ensure the continued expansion of fish ponds and the pollution control, there must be a market for the fish (carps) produced in these ponds. The carps have, however, a low market value due to the presence of intra-muscular bones, which reduces their consumer acceptability. Thus, a need was felt to develop some boneless convenience products for enhancing the consumer acceptability of the carps. Efforts were made to prepare three value-added fish products, namely fish patty, fish finger and fish salad from carp flesh and were compared with a reference product ('fish pakoura'). Sensory evaluation of these products gave highly encouraging results. The methods of preparation of these products were transferred to some progressive farmers of the region who prepared and sold these products at very attractive prices. Carp processing has a great potential for the establishment of a fish ancillary industry and thus for boosting the production of these species. In Punjab alone, there is a potential of consuming 32,448 metric tonnes per annum of such value-added products (which would require 54,080 metric tonnes of raw fish). The development of value-added products has a significant role in raising the socio-economic status of the people associated with carp culture. 
The average cost of production of these products was estimated to be INR 80 per kg. With a sale price of INR 110 per kg and sales of 50 kg per day of value-added products (26 days a month), the average monthly income of a carp-processing unit comes to INR 39,000 (approximately 929 USD).
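The monthly-income figure above follows from straightforward margin arithmetic; a minimal check, with all inputs taken from the abstract:

```python
# Worked check of the income estimate reported in the abstract.
COST_PER_KG = 80       # INR, average production cost
PRICE_PER_KG = 110     # INR, sale price
KG_PER_DAY = 50        # daily sales of value-added products
DAYS_PER_MONTH = 26

margin_per_kg = PRICE_PER_KG - COST_PER_KG              # 30 INR/kg
monthly_income = margin_per_kg * KG_PER_DAY * DAYS_PER_MONTH

print(monthly_income)  # 39000, matching the INR 39,000 figure in the abstract
```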
Sea-level change during the last 2500 years in New Jersey, USA
Kemp, Andrew C.; Horton, Benjamin P.; Vane, Christopher H.; Bernhardt, Christopher E.; Corbett, D. Reide; Engelhart, Simon E.; Anisfeld, Shimon C.; Parnell, Andrew C.; Cahill, Niamh
2013-01-01
Relative sea-level changes during the last ∼2500 years in New Jersey, USA were reconstructed to test whether late Holocene sea level was stable or included persistent and distinctive phases of variability. Foraminifera and bulk-sediment δ13C values were combined in a multi-proxy approach to reconstruct paleomarsh elevation with decimeter precision from sequences of salt-marsh sediment at two sites. The additional paleoenvironmental information provided by bulk-sediment δ13C values reduced the vertical uncertainty of the sea-level reconstruction by about one third relative to that estimated from foraminifera alone using a transfer function. The history of sediment deposition was constrained by a composite chronology. An age-depth model developed for each core enabled reconstruction of sea level with multi-decadal resolution. Following correction for land-level change (1.4 mm/yr), four successive and sustained (multi-centennial) sea-level trends were objectively identified and quantified (95% confidence interval) using errors-in-variables change-point analysis to account for age and sea-level uncertainties. From at least 500 BC to 250 AD, sea level fell at 0.11 mm/yr. The second period saw sea level rise at 0.62 mm/yr from 250 AD to 733 AD. Between 733 AD and 1850 AD, sea level fell at 0.12 mm/yr. The reconstructed rate of sea-level rise since ∼1850 AD was 3.1 mm/yr and represents the most rapid period of change in at least 2500 years. This trend began between 1830 AD and 1873 AD. Since this change point, reconstructed sea-level rise agrees with regional tide-gauge records and exceeds the global average estimate for the 20th century. These positive and negative departures from background rates demonstrate that late Holocene sea level was not stable in New Jersey.
Added-values of high spatiotemporal remote sensing data in crop yield estimation
NASA Astrophysics Data System (ADS)
Gao, F.; Anderson, M. C.
2017-12-01
Timely and accurate estimation of crop yield before harvest is critical for food markets and administrative planning. Remote sensing derived parameters have been used for estimating crop yield with either empirical or crop growth models. The use of remote sensing vegetation indices (VIs) in crop yield modeling has typically been evaluated at regional and country scales using coarse spatial resolution data (a few hundred meters to kilometers) or assessed over small regions at field level using moderate spatial resolution data (10-100 m). Both data sources have shown great potential in capturing spatial and temporal variability in crop yield. However, the added value of data with both high spatial and high temporal resolution has not been evaluated, owing to the lack of such a data source with routine, global coverage. In recent years, more moderate resolution data have become freely available, and data fusion approaches that combine data acquired at different spatial and temporal resolutions have been developed. These make monitoring crop condition and estimating crop yield at field scale possible. Here we investigate the added value of high spatial and temporal resolution VIs for describing variability in crop yield. The explanatory ability of high spatial and temporal resolution remote sensing data for crop yield was evaluated in a rain-fed agricultural area in the U.S. Corn Belt. Results show that the fused Landsat-MODIS (high spatial and temporal resolution) VI explains yield variability better than either single data source (Landsat or MODIS alone), with EVI2 performing slightly better than NDVI. The maximum VI describes yield variability better than the cumulative VI. Even though VI is effective in explaining yield variability within a season, inter-annual variability is more complex and needs additional information (e.g., weather, water use and management).
Our findings underscore the importance of high spatiotemporal remote sensing data and support new moderate-resolution satellite missions for agricultural applications.
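A VI's ability to "explain yield variability" is conventionally quantified as the coefficient of determination (R²) of a regression of yield on the VI. A minimal sketch with hypothetical field-level values, not data from this study:

```python
import numpy as np

# Hypothetical per-field peak EVI2 values and corn yields (illustrative only)
peak_vi = np.array([0.45, 0.52, 0.61, 0.58, 0.70, 0.66, 0.49, 0.73])
yield_t_ha = np.array([7.1, 8.0, 9.4, 9.0, 10.8, 10.1, 7.6, 11.2])

# Least-squares line and R^2: the fraction of yield variance explained by the VI
slope, intercept = np.polyfit(peak_vi, yield_t_ha, 1)
predicted = slope * peak_vi + intercept
ss_res = np.sum((yield_t_ha - predicted) ** 2)
ss_tot = np.sum((yield_t_ha - yield_t_ha.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(round(float(r_squared), 3))  # close to 1 for these near-linear toy data
```

Comparing such R² values for fused Landsat-MODIS VIs against Landsat-only or MODIS-only VIs is the kind of evaluation the abstract describes.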
Moham P. Tiruveedhula; Joseph Fan; Ravi R. Sadasivuni; Surya S. Durbha; David L. Evans
2010-01-01
The accumulation of small diameter trees (SDTs) is becoming a nationwide concern. Forest management practices such as fire suppression and selective cutting of high grade timber have contributed to an overabundance of SDTs in many areas. Alternative value-added utilization of SDTs (for composite wood products and biofuels) has prompted the need to estimate their...
Estimates of RF-induced erosion at antenna-connected beryllium plasma-facing components in JET
Klepper, C. C.; Borodin, D.; Groth, M.; ...
2016-01-18
Radio-frequency (RF)-enhanced surface erosion of beryllium (Be) plasma-facing components is explored, for the first time, using the ERO code. We applied the code to model the RF-enhanced edge Be line emission at JET Be outboard limiters in the presence of high-power ion cyclotron resonance heating (ICRH) in L-mode discharges. In this first modelling study, the RF sheath effect from an ICRH antenna on a magnetically connected limiter region is simulated by adding a constant potential to the local sheath, in an attempt to match measured increases in local Be I and Be II emission of factors of 2-3. It was found that such increases are readily simulated with added potentials in the range of 100-200 V, which is compatible with expected values for potentials arising from rectification of sheath voltage oscillations from ICRH antennas in the scrape-off layer plasma. We also estimated absolute erosion values within the uncertainties in local plasma conditions.
Herold, Christine; Hooli, Basavaraj V.; Mullin, Kristina; Liu, Tian; Roehr, Johannes T; Mattheisen, Manuel; Parrado, Antonio R.; Bertram, Lars; Lange, Christoph; Tanzi, Rudolph E.
2015-01-01
The genetic basis of Alzheimer's disease (AD) is complex and heterogeneous. Over 200 highly penetrant pathogenic variants in the genes APP, PSEN1 and PSEN2 cause a subset of early-onset familial Alzheimer's disease (EOFAD). On the other hand, susceptibility to late-onset forms of AD (LOAD) is indisputably associated with the ε4 allele of the gene APOE, and more recently with variants in more than two dozen additional genes identified in large-scale genome-wide association studies (GWAS) and meta-analyses. However, although the heritability of AD is estimated to be as high as 80%, a large proportion of the underlying genetic factors remains to be elucidated. In this study we performed a systematic family-based genome-wide association study and meta-analysis on close to 15 million imputed variants from three large collections of AD families (~3,500 subjects from 1,070 families). Using a multivariate phenotype combining affection status and onset age, meta-analysis of the association results revealed three single nucleotide polymorphisms (SNPs) that achieved genome-wide significance for association with AD risk: rs7609954 in the gene PTPRG (P = 3.98×10^-8), rs1347297 in the gene OSBPL6 (P = 4.53×10^-8), and rs1513625 near PDCL3 (P = 4.28×10^-8). In addition, rs72953347 in OSBPL6 (P = 6.36×10^-7) and two SNPs in the gene CDKAL1 showed marginally significant association with LOAD (rs10456232, P = 4.76×10^-7; rs62400067, P = 3.54×10^-7). In summary, family-based GWAS meta-analysis of imputed SNPs revealed novel genomic variants in (or near) PTPRG, OSBPL6, and PDCL3 that influence risk for AD with genome-wide significance. PMID:26830138
First archeointensity results from Portuguese potteries (1550-1750 AD)
NASA Astrophysics Data System (ADS)
Hartmann, Gelvam A.; Trindade, Ricardo I. F.; Goguitchaichvili, Avto; Etchevarne, Carlos; Morales, Juan; Afonso, Marisa C.
2009-01-01
Geomagnetic field variations at archeomagnetic timescales can be obtained from well-dated heated structures and archeological potsherds. Here, we present the first archeointensity results obtained on Portuguese ceramics (1550 to 1750 AD) collected at Brazilian archeological sites. The results are compared to those obtained from Western Europe and to currently available geomagnetic field models. Continuous thermomagnetic and IRM acquisition curves indicate that Ti-poor titanomagnetite carries the remanence in these ceramic fragments. Five fragments (24 samples) out of the twelve analyzed yielded reliable intensity estimates. The raw archeointensity data were corrected for TRM anisotropy and the cooling rate effect. Mean dipole moments were obtained for three age intervals: 1550±30 AD, 1600±30 AD and 1750±50 AD. Mean intensities vary from 37.9±4.2 μT to 54.8±7.6 μT, in agreement with previously reported data for 1550 AD and 1750 AD. Relatively weaker, but still highly dispersed, values were obtained for the 1600 AD ceramics.
Albin, Thomas J
2013-01-01
Designers and ergonomists occasionally must produce anthropometric models of workstations with only summary percentile data available for the intended users. Until now, the only option was to add or subtract percentiles of the anthropometric elements (e.g., heights and widths) used in the model, despite the known resulting errors in the estimate of the percentage of users accommodated. This paper introduces a new method, the Median Correlation Method (MCM), that reduces this error. The relative accuracy of MCM is compared with that of combining percentiles for anthropometric models comprising all possible pairs of five anthropometric elements, and the mathematical basis of MCM's greater accuracy is described. 95th-percentile accommodation values are calculated for the sums and differences of all combinations of five anthropometric elements, both by combining percentiles and by using MCM. The resulting estimates are compared with empirical values of the 95th percentiles, and the relative errors are reported. MCM is shown to be significantly more accurate than adding percentiles and is demonstrated to have a mathematical advantage in estimating accommodation relative to adding or subtracting percentiles. MCM should be used in preference to adding or subtracting percentiles when limited data prevent more sophisticated anthropometric models.
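The error introduced by adding percentiles, which MCM is designed to reduce, can be seen in closed form for independent, normally distributed dimensions: variances add but standard deviations do not, so summed percentiles overstate the true percentile of the sum. A sketch with hypothetical dimensions (the MCM formula itself is not given in the abstract, so this only illustrates the baseline error):

```python
import math
from scipy.stats import norm

z95 = norm.ppf(0.95)  # standard-normal 95th-percentile z-score, ~1.645

# Two independent, normally distributed body dimensions (hypothetical, cm)
mu_a, sd_a = 60.0, 3.0
mu_b, sd_b = 25.0, 2.0

# "Adding percentiles": 95th percentile of A plus 95th percentile of B
added = (mu_a + z95 * sd_a) + (mu_b + z95 * sd_b)

# True 95th percentile of the sum: variances add, standard deviations do not
true_p95 = (mu_a + mu_b) + z95 * math.hypot(sd_a, sd_b)

print(added > true_p95)  # adding percentiles overstates the accommodation limit
```

With correlated dimensions the gap narrows but, for any correlation below 1, adding percentiles still overestimates the combined 95th percentile.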
2015 Alzheimer's disease facts and figures.
2015-03-01
This report discusses the public health impact of Alzheimer's disease (AD), including incidence and prevalence, mortality rates, costs of care, and the overall effect on caregivers and society. It also examines the challenges encountered by health care providers when disclosing an AD diagnosis to patients and caregivers. An estimated 5.3 million Americans have AD; 5.1 million are age ≥65 years, and approximately 200,000 are age <65 years and have younger-onset AD. By mid-century, the number of people living with AD in the United States is projected to grow by nearly 10 million, fueled in large part by the aging baby boom generation. Today, someone in the country develops AD every 67 seconds. By 2050, one new case of AD is expected to develop every 33 seconds, resulting in nearly 1 million new cases per year, and the estimated prevalence is expected to range from 11 million to 16 million. In 2013, official death certificates recorded 84,767 deaths from AD, making AD the sixth leading cause of death in the United States and the fifth leading cause of death in Americans age ≥65 years. Between 2000 and 2013, deaths resulting from heart disease, stroke and prostate cancer decreased 14%, 23% and 11%, respectively, whereas deaths from AD increased 71%. The actual number of deaths to which AD contributes (or deaths with AD) is likely much larger than the number of deaths from AD recorded on death certificates. In 2015, an estimated 700,000 Americans age ≥65 years will die with AD, and many of them will die from complications caused by AD. In 2014, more than 15 million family members and other unpaid caregivers provided an estimated 17.9 billion hours of care to people with AD and other dementias, a contribution valued at more than $217 billion.
Average per-person Medicare payments for services to beneficiaries age ≥65 years with AD and other dementias are more than two and a half times as great as payments for all beneficiaries without these conditions, and Medicaid payments are 19 times as great. Total payments in 2015 for health care, long-term care and hospice services for people age ≥65 years with dementia are expected to be $226 billion. Among people with a diagnosis of AD or another dementia, fewer than half report having been told of the diagnosis by their health care provider. Though the benefits of a prompt, clear and accurate disclosure of an AD diagnosis are recognized by the medical profession, improvements to the disclosure process are needed. These improvements may require stronger support systems for health care providers and their patients.
Choi, Sang Hyun; Lee, Jeong Hyun; Choi, Young Jun; Park, Ji Eun; Sung, Yu Sub; Kim, Namkug; Baek, Jung Hwan
2017-01-01
This study aimed to explore the added value of histogram analysis of the ratio of initial to final 90-second time-signal intensity AUC (AUCR) for differentiating local tumor recurrence from contrast-enhancing scar on follow-up dynamic contrast-enhanced T1-weighted perfusion MRI of patients treated for head and neck squamous cell carcinoma (HNSCC). AUCR histogram parameters were assessed for tumor recurrence (n = 19) and contrast-enhancing scar (n = 27) at primary sites and compared using the t test. ROC analysis was used to determine the best differentiating parameters. The added value of AUCR histogram parameters was assessed when they were added to inconclusive conventional MRI results. Histogram analysis showed statistically significant differences in the 50th, 75th, and 90th percentiles of the AUCR values between the two groups (p < 0.05). The 90th percentile of the AUCR values (AUCR90) was the best predictor of local tumor recurrence (AUC, 0.77; 95% CI, 0.64-0.91), with an estimated cutoff of 1.02. AUCR90 increased sensitivity by 11.7% over that of conventional MRI alone when added to inconclusive results. Histogram analysis of AUCR can improve the diagnostic yield for local tumor recurrence during surveillance after treatment for HNSCC.
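The decision rule described, thresholding each lesion's 90th-percentile AUCR at the estimated cutoff of 1.02, can be sketched as follows; the per-voxel values are hypothetical, only the cutoff comes from the abstract:

```python
import numpy as np

CUTOFF = 1.02  # AUCR90 cutoff reported in the abstract

# Hypothetical per-voxel AUCR samples for two lesions (illustrative only)
scar_voxels = np.array([0.62, 0.70, 0.75, 0.81, 0.88, 0.90, 0.95, 0.99])
recurrence_voxels = np.array([0.85, 0.92, 1.00, 1.05, 1.10, 1.18, 1.25, 1.31])

def predict_recurrence(voxels, cutoff=CUTOFF):
    """Flag a lesion as recurrent when its 90th-percentile AUCR exceeds the cutoff."""
    return float(np.percentile(voxels, 90)) > cutoff

print(predict_recurrence(scar_voxels))        # False
print(predict_recurrence(recurrence_voxels))  # True
```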
A simulation of water pollution model parameter estimation
NASA Technical Reports Server (NTRS)
Kibler, J. F.
1976-01-01
A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as a representative simple transport process. Pollution concentration levels are obtained by modeling a remote-sensing system: the remote-sensed data are simulated by adding Gaussian noise to the concentration values generated by the transport model. Model parameters are then estimated from the simulated data using a least-squares batch processor. The required resolution, sensor array size, and number and location of sensor readings can be determined from the accuracies of the parameter estimates.
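The simulation loop described (generate concentrations from a transport model, add Gaussian noise to mimic remote-sensed data, then recover the parameters by least squares) can be sketched in one dimension. The model form and all parameter values below are illustrative assumptions, not those of the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def plume(x, mass, diff_k, speed, t=30.0):
    """1-D instantaneous-release advection-diffusion concentration at time t."""
    sigma2 = 2.0 * diff_k * t
    return mass / np.sqrt(2.0 * np.pi * sigma2) * np.exp(
        -(x - speed * t) ** 2 / (2.0 * sigma2)
    )

# "Truth" used to simulate the remote-sensed data (hypothetical values)
true_params = (100.0, 5.0, 1.0)      # released mass, diffusivity, advection speed
x = np.linspace(0.0, 100.0, 200)     # sensor locations along the plume axis
measured = plume(x, *true_params) + rng.normal(0.0, 0.02, x.size)  # sensor noise

# Batch least-squares estimate of the model parameters from the noisy data
est, _ = curve_fit(plume, x, measured, p0=(80.0, 3.0, 0.8))
print(np.round(est, 2))  # close to the true (100, 5, 1)
```

Repeating the fit over noise realizations, sensor counts, and layouts gives the accuracy-versus-design trade-off the abstract mentions.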
Fang, Ye; Moreno, Jose L; Streiff, Shawn L; Villegas, Jorge; Muñoz, Ricardo F; Tercyak, Kenneth P; Mandelblatt, Jeanne S; Vallone, Donna M
2012-01-01
Background: Tobacco cessation among Latinos is a public health priority in the United States, particularly given the relatively high growth of this population segment. Although a substantial percentage of American Latinos use the Internet, they have not engaged in Web-based cessation programs as readily as other racial/ethnic subgroups. A lack of culturally specific advertising efforts may partly explain this disparity. Objective: Phase I of this study focused on the development of four Spanish-language online banner advertisements to promote a free Spanish-language smoking cessation website (es.BecomeAnEX.org). Phase II examined the relative effectiveness of the four banner ads in reaching and recruiting Latino smokers to the cessation website. Methods: In Phase I, 200 Spanish-speaking Latino smokers completed an online survey to indicate their preference for Spanish-language banner ads that incorporated either the cultural value of family (familismo) or fatalism (fatalismo). Ads included variations on message framing (gain vs loss) and depth of cultural targeting (surface vs deep). In Phase II, a Latin square design evaluated the effectiveness of the four preferred ads from Phase I. Ads were systematically rotated across four popular Latino websites (MySpace Latino, MSN Latino, MiGente, and Yahoo! en Español) over four months from August to November 2009. Tracking software recorded ad clicks and registrants on the cessation website. Negative binomial regression and general linear modeling examined the main and interacting effects of message framing and depth of cultural targeting for four outcomes: number of clicks, click-through rate, number of registrants, and cost per registrant. Results: In Phase I, smokers preferred the four ads featuring familismo. In Phase II, 24,829,007 impressions were placed, yielding 24,822 clicks, an overall click-through rate of 0.10%, and 500 registrants (2.77% conversion rate).
Advertising costs totaled US $104,669.49, resulting in an overall cost per click of US $4.22 and cost per registrant of US $209.34. Website placement predicted all four outcomes (all P values < .01). Yahoo! en Español yielded the highest click-through rate (0.167%) and number of registrants (n = 267). The message framing and cultural targeting interaction was not significant. Contrary to hypotheses, loss-framed ads yielded a higher click-through rate than gain-framed ads (point estimate = 1.08, 95% CI 1.03-1.14, P = .004), and surface-targeted ads outperformed deep-targeted ads for clicks (point estimate = 1.20, 95% CI 1.13-1.28, P < .001), click-through rate (point estimate = 1.22, 95% CI 1.16-1.29, P < .001), and number of registrants (point estimate = 2.73, 95% CI 2.14-3.48, P < .001). Conclusions: Online advertising can be an effective and cost-efficient strategy to reach and engage Spanish-speaking Latino smokers in an evidence-based Internet cessation program. Cultural targeting and smoking-relevant images may be important factors for banner ad design. Online advertising holds potential for Web-based cessation program implementation and research. PMID:22954502
Graham, Amanda L; Fang, Ye; Moreno, Jose L; Streiff, Shawn L; Villegas, Jorge; Muñoz, Ricardo F; Tercyak, Kenneth P; Mandelblatt, Jeanne S; Vallone, Donna M
2012-08-27
Tobacco cessation among Latinos is a public health priority in the United States, particularly given the relatively high growth of this population segment. Although a substantial percentage of American Latinos use the Internet, they have not engaged in Web-based cessation programs as readily as other racial/ethnic subgroups. A lack of culturally specific advertising efforts may partly explain this disparity. Phase I of this study focused on the development of four Spanish-language online banner advertisements to promote a free Spanish-language smoking cessation website (es.BecomeAnEX.org). Phase II examined the relative effectiveness of the four banner ads in reaching and recruiting Latino smokers to the cessation website. In Phase I, 200 Spanish-speaking Latino smokers completed an online survey to indicate their preference for Spanish-language banner ads that incorporated either the cultural value of family (familismo) or fatalism (fatalismo). Ads included variations on message framing (gain vs loss) and depth of cultural targeting (surface vs deep). In Phase II, a Latin square design evaluated the effectiveness of the four preferred ads from Phase I. Ads were systematically rotated across four popular Latino websites (MySpace Latino, MSN Latino, MiGente, and Yahoo! en Español) over four months from August to November 2009. Tracking software recorded ad clicks and registrants on the cessation website. Negative binomial regression and general linear modeling examined the main and interacting effects of message framing and depth of cultural targeting for four outcomes: number of clicks, click-through rate, number of registrants, and cost per registrant. In Phase I, smokers preferred the four ads featuring familismo. In Phase II, 24,829,007 impressions were placed, yielding 24,822 clicks, an overall click-through rate of 0.10%, and 500 registrants (2.77% conversion rate). 
Advertising costs totaled US $104,669.49, resulting in an overall cost per click of US $4.22 and cost per registrant of US $209.34. Website placement predicted all four outcomes (all P values < .01). Yahoo! en Español yielded the highest click-through rate (0.167%) and number of registrants (n = 267). The message framing and cultural targeting interaction was not significant. Contrary to hypotheses, loss-framed ads yielded a higher click-through rate than gain-framed ads (point estimate = 1.08, 95% CI 1.03-1.14, P = .004), and surface-targeted ads outperformed deep-targeted ads for clicks (point estimate = 1.20, 95% CI 1.13-1.28, P < .001), click-through rate (point estimate = 1.22, 95% CI 1.16-1.29, P < .001), and number of registrants (point estimate = 2.73, 95% CI 2.14-3.48, P < .001). Online advertising can be an effective and cost-efficient strategy to reach and engage Spanish-speaking Latino smokers in an evidence-based Internet cessation program. Cultural targeting and smoking-relevant images may be important factors for banner ad design. Online advertising holds potential for Web-based cessation program implementation and research.
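The cost and click-through figures follow directly from the reported totals; an arithmetic check using only numbers quoted in the abstract:

```python
# Campaign totals as reported in the abstract
impressions = 24_829_007
clicks = 24_822
registrants = 500
cost_usd = 104_669.49

ctr_pct = 100.0 * clicks / impressions       # click-through rate, percent
cost_per_click = cost_usd / clicks
cost_per_registrant = cost_usd / registrants

print(round(ctr_pct, 2))              # 0.1, i.e. the reported ~0.10%
print(round(cost_per_click, 2))       # 4.22
print(round(cost_per_registrant, 2))  # 209.34
```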
38 CFR 36.4310 - Amortization.
Code of Federal Regulations, 2010 CFR
2010-07-01
... loan shall be repayable within the estimated economic life of the property securing the loan. (d... percent of the lesser of the reasonable value of the property as of the time the loan is made or the... assure that the principal amount of the loan, including all interest scheduled to be deferred and added...
38 CFR 36.4310 - Amortization.
Code of Federal Regulations, 2014 CFR
2014-07-01
... loan shall be repayable within the estimated economic life of the property securing the loan. (d... percent of the lesser of the reasonable value of the property as of the time the loan is made or the... assure that the principal amount of the loan, including all interest scheduled to be deferred and added...
38 CFR 36.4310 - Amortization.
Code of Federal Regulations, 2012 CFR
2012-07-01
... loan shall be repayable within the estimated economic life of the property securing the loan. (d... percent of the lesser of the reasonable value of the property as of the time the loan is made or the... assure that the principal amount of the loan, including all interest scheduled to be deferred and added...
38 CFR 36.4310 - Amortization.
Code of Federal Regulations, 2011 CFR
2011-07-01
... loan shall be repayable within the estimated economic life of the property securing the loan. (d... percent of the lesser of the reasonable value of the property as of the time the loan is made or the... assure that the principal amount of the loan, including all interest scheduled to be deferred and added...
38 CFR 36.4310 - Amortization.
Code of Federal Regulations, 2013 CFR
2013-07-01
... loan shall be repayable within the estimated economic life of the property securing the loan. (d... percent of the lesser of the reasonable value of the property as of the time the loan is made or the... assure that the principal amount of the loan, including all interest scheduled to be deferred and added...
Teacher Spillover Effects across Four Subjects in Middle Schools
ERIC Educational Resources Information Center
Yuan, Kun
2014-01-01
Value-added modeling (VAM), a class of statistical models used to estimate an individual teacher's or school's contribution to student achievement based on student test score growth between consecutive years, has become increasingly popular in recent decades. Despite the increasing popularity of VAM, many researchers are concerned about the…
Characteristics of Schools Successful in STEM: Evidence from Two States' Longitudinal Data
ERIC Educational Resources Information Center
Hansen, Michael
2014-01-01
Present federal education policies promote learning in science, technology, engineering, and mathematics (STEM) and the participation of minority students in these fields. Using longitudinal data on students in Florida and North Carolina, value-added estimates in mathematics and science are generated to categorize schools into performance levels…
Stated Preference Economic Development Model
2015-02-01
calculated the public benefit associated with Petroglyph by extracting the value for day hikes from the first study, the added value of rock art from the...2002. There is a lack of data and methods to determine the net social benefit of this aid. Additionally, currently available data are insufficient to...properly prioritize the usage and award of this aid. SPED involved the creation of tools that estimate the net social benefit of projects using
Drummond, Michael; de Pouvourville, Gerard; Jones, Elizabeth; Haig, Jennifer; Saba, Grece; Cawston, Hélène
2014-05-01
Within Europe, contrasting approaches have emerged for rewarding the value added by new drugs. In Ireland, The Netherlands, Sweden and the UK, the price of, and access to, a new drug has to be justified by the health gain it delivers compared with current therapy, typically expressed in quality-adjusted life-years (QALYs) gained. By contrast, in France and Germany, the assessment of added benefit is expressed on an ordinal scale, based on an assessment of the clinical outcomes as compared with existing care. This assessment then influences price negotiations. The objective of this paper is to assess the pros and cons of each approach, both in terms of the assessments they produce and the efficiency and practical feasibility of the process. We reviewed the technology appraisals performed by the National Institute for Health and Care Excellence (NICE) relating to 49 anticancer drug decisions in the UK from September 2003 to January 2012. Estimates of the QALYs gained and incremental cost per QALY gained were then compared with the assessments of the Amélioration du Service Médical Rendu (ASMR) made by the Haute Autorité de Santé (HAS) in France for the same drugs in the same clinical indications. We also undertook a qualitative assessment of the two approaches, considering the resources required, timeliness, transparency, stakeholder engagement, and political acceptability. In the UK, the estimates of QALYs gained ranged from 0.003 to 1.46 and estimates of incremental cost per QALY from £3,320 to £458,000. The estimate of cost per QALY gained was a good predictor of the level of restriction imposed on the use of the drug concerned. Patient access schemes, which normally imply price reductions, were proposed in 45 % of cases. In France, the distribution of ASMRs was I, 12 %; II, 18 %; III, 24 %; IV, 18 %; V, 22 %; and uncategorized/non-reimbursed, 4 %. 
Since ASMRs of IV and above signify minor or no improvement over existing therapy, these ratings imply that, in around 40% of cases, the drugs concerned would face price controls. Overall, the assessments of value added in the two jurisdictions were very similar. A superior ASMR rating was associated with higher QALYs gained. However, a superior ASMR was not associated with a lower incremental cost per QALY. There are substantial differences with respect to the other attributes considered, but these mainly reflect institutional choices in the jurisdictions concerned, and it is not possible to conclude that one approach is universally superior to the other. The two approaches produce very similar assessments of added value but have different attributes in terms of cost, timeliness, transparency and political acceptability. How these considerations affect market access and prices is difficult to assess because of the lack of transparency concerning prices in both countries and because market access also depends on a broader range of factors. There is some evidence of convergence in the approaches, with movement in France towards producing cost-effectiveness estimates and movement in the UK towards negotiated prices.
Declining functional connectivity and changing hub locations in Alzheimer's disease: an EEG study.
Engels, Marjolein M A; Stam, Cornelis J; van der Flier, Wiesje M; Scheltens, Philip; de Waal, Hanneke; van Straaten, Elisabeth C W
2015-08-20
EEG studies have shown that patients with Alzheimer's disease (AD) have weaker functional connectivity than controls, especially in higher frequency bands. Furthermore, active regions seem more prone to AD pathology. How functional connectivity is affected in AD subgroups of disease severity and how network hubs (highly connected brain areas) change is not known. We compared AD patients with different disease severity and controls in terms of functional connections, hub strength and hub location. We studied routine 21-channel resting-state electroencephalography (EEG) of 318 AD patients (divided into tertiles based on disease severity: mild, moderate and severe AD) and 133 age-matched controls. Functional connectivity between EEG channels was estimated with the Phase Lag Index (PLI). From the PLI-based connectivity matrix, the minimum spanning tree (MST) was derived. For each node (EEG channel) in the MST, the betweenness centrality (BC) was computed, a measure to quantify the relative importance of a node within the network. Then we derived color-coded head plots based on BC values and calculated the center of mass (the exact middle had x and y values of 0). A shifting of the hub locations was defined as a shift of the center of mass on the y-axis across groups. Multivariate general linear models with PLI or BC values as dependent variables and the groups as continuous variables were used in the five conventional frequency bands. We found that functional connectivity decreases with increasing disease severity in the alpha band. All, except for posterior, regions showed increasing BC values with increasing disease severity. The center of mass shifted from posterior to more anterior regions with increasing disease severity in the higher frequency bands, indicating a loss of relative functional importance of the posterior brain regions. 
In conclusion, we observed decreasing functional connectivity in the posterior regions, together with a shifted hub location from posterior to central regions with increasing AD severity. Relative hub strength decreases in posterior regions while other regions show a relative rise with increasing AD severity, which is in accordance with the activity-dependent degeneration theory. Our results indicate that hubs are disproportionally affected in AD.
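The network pipeline described (PLI connectivity matrix, minimum spanning tree, then betweenness centrality per channel) can be sketched with networkx on a small hypothetical connectivity matrix; estimation of the PLI itself from raw EEG is omitted:

```python
import numpy as np
import networkx as nx

# Hypothetical 5-channel PLI connectivity matrix (symmetric, illustrative only)
pli = np.array([
    [0.0, 0.6, 0.2, 0.1, 0.3],
    [0.6, 0.0, 0.5, 0.2, 0.1],
    [0.2, 0.5, 0.0, 0.4, 0.2],
    [0.1, 0.2, 0.4, 0.0, 0.7],
    [0.3, 0.1, 0.2, 0.7, 0.0],
])

# Stronger synchronization = shorter distance, so minimize weight 1 - PLI
g = nx.Graph()
n = pli.shape[0]
for i in range(n):
    for j in range(i + 1, n):
        g.add_edge(i, j, weight=1.0 - pli[i, j])

mst = nx.minimum_spanning_tree(g)    # backbone of the strongest connections
bc = nx.betweenness_centrality(mst)  # hub-ness of each channel in the backbone

print(sorted(mst.edges()))           # the n - 1 = 4 backbone edges
print(max(bc, key=bc.get))           # channel acting as the strongest hub
```

Tracking the BC-weighted center of mass of such head plots across severity groups is how the abstract's hub shift is quantified.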
Naska, A; Oikonomou, E; Trichopoulou, A; Wagner, K; Gedrich, K
2007-12-01
To describe a cost-efficient method for estimating energy and nutrient availability using household budget survey (HBS) data. Four different approaches were tested and the results were compared with published nutrient intake data. The selected method was then applied, as an example, to German and Greek data. Germany, 1998; Greece, 1998/99. Nationally representative HBSs. Comparisons showed that HBS-based estimates were generally close to intake data when results were presented as contributions to daily energy intake. Daily energy and protein availabilities were similar in Germany and Greece. Differences were observed in the availability of carbohydrates (German households reported a 5 percentage points higher contribution to daily energy availability) and lipids (Greek households recorded higher values for total fat, but lower values for saturated fat). Meat, added lipids and potatoes were important energy suppliers in Germany, whereas in Greece the first three energy suppliers were added lipids, cereals and meat. In both countries, meat, cereals, milk and cheese were important protein sources, and cereals, potatoes, fruits and nuts contributed more than 60% of the daily carbohydrate availability. Added lipids were the major source of fat in the daily diet of both countries, but their contribution amounted to less than one-third in Germany and two-thirds in Greece. National HBS data can be used for monitoring and comparing nutrient availability among representative population samples of different countries. The ground is set for the development of a harmonised food composition table to be applied to HBS food data at the international level.
Jaiswal, Kishor; Wald, D.J.
2013-01-01
This chapter summarizes the state-of-the-art for rapid earthquake impact estimation. It details the needs and challenges associated with quick estimation of earthquake losses following global earthquakes, and provides a brief literature review of various approaches that have been used in the past. With this background, the chapter introduces the operational earthquake loss estimation system developed by the U.S. Geological Survey (USGS) known as PAGER (for Prompt Assessment of Global Earthquakes for Response). It also details some of the ongoing developments of PAGER’s loss estimation models to better supplement the operational empirical models, and to produce value-added web content for a variety of PAGER users.
Cost-effectiveness of cerebrospinal biomarkers for the diagnosis of Alzheimer's disease.
Lee, Spencer A W; Sposato, Luciano A; Hachinski, Vladimir; Cipriano, Lauren E
2017-03-16
Accurate and timely diagnosis of Alzheimer's disease (AD) is important for prompt initiation of treatment in patients with AD and to avoid inappropriate treatment of patients with false-positive diagnoses. Using a Markov model, we estimated the lifetime costs and quality-adjusted life-years (QALYs) of cerebrospinal fluid biomarker analysis in a cohort of patients referred to a neurologist or memory clinic with suspected AD who remained without a definitive diagnosis of AD or another condition after neuroimaging. Parametric values were estimated from previous health economic models and the medical literature. Extensive deterministic and probabilistic sensitivity analyses were performed to evaluate the robustness of the results. At a 12.7% pretest probability of AD, biomarker analysis after normal neuroimaging findings has an incremental cost-effectiveness ratio (ICER) of $11,032 per QALY gained. Results were sensitive to the pretest prevalence of AD, and the ICER increased to over $50,000 per QALY when the prevalence of AD fell below 9%. Results were also sensitive to patient age (biomarkers are less cost-effective in older cohorts), treatment uptake and adherence, biomarker test characteristics, and the degree to which patients with suspected AD who do not have AD benefit from AD treatment when they are falsely diagnosed. The cost-effectiveness of biomarker analysis depends critically on the prevalence of AD in the tested population. In general practice, where the prevalence of AD after clinical assessment and normal neuroimaging findings may be low, biomarker analysis is unlikely to be cost-effective at a willingness-to-pay threshold of $50,000 per QALY gained. However, when at least 1 in 11 patients has AD after normal neuroimaging findings, biomarker analysis is likely cost-effective. 
Specifically, for patients referred to memory clinics with memory impairment who do not present neuroimaging evidence of medial temporal lobe atrophy, pretest prevalence of AD may exceed 15%. Biomarker analysis is a potentially cost-saving diagnostic method and should be considered for adoption in high-prevalence centers.
A Fixed-Pattern Noise Correction Method Based on Gray Value Compensation for TDI CMOS Image Sensor.
Liu, Zhenwang; Xu, Jiangtao; Wang, Xinlei; Nie, Kaiming; Jin, Weimin
2015-09-16
In order to eliminate the fixed-pattern noise (FPN) in the output image of a time-delay-integration CMOS image sensor (TDI-CIS), an FPN correction method based on gray value compensation is proposed. One hundred images are first captured under uniform illumination. Then, row FPN (RFPN) and column FPN (CFPN) are estimated from the row-mean vector and column-mean vector of all collected images, respectively. Finally, RFPN is corrected by adding the estimated RFPN gray value to the original gray values of pixels in the corresponding row, and CFPN is corrected by subtracting the estimated CFPN gray value from the original gray values of pixels in the corresponding column. Experimental results based on a 128-stage TDI-CIS show that, after correcting the FPN in an image captured under uniform illumination with the proposed method, the standard deviation of the row-mean vector decreases from 5.6798 to 0.4214 LSB, and the standard deviation of the column-mean vector decreases from 15.2080 to 13.4623 LSB. Both kinds of FPN in real images captured by the TDI-CIS are eliminated effectively with the proposed method.
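A minimal sketch of the compensation idea described in this abstract (illustrative Python, not the authors' implementation; an additive row/column offset model is assumed, and the sign convention for "adding" versus "subtracting" the estimated values depends on how the offsets are defined):

```python
def estimate_fpn(frames):
    """Estimate row and column FPN from a stack of flat-field frames,
    assuming an additive model: pixel = scene + row_offset + col_offset."""
    rows, cols = len(frames[0]), len(frames[0][0])
    # Average the captured frames into one mean frame.
    mean = [[sum(f[i][j] for f in frames) / len(frames) for j in range(cols)]
            for i in range(rows)]
    row_means = [sum(r) / cols for r in mean]                       # row-mean vector
    col_means = [sum(mean[i][j] for i in range(rows)) / rows
                 for j in range(cols)]                              # column-mean vector
    g = sum(row_means) / rows                                       # global mean gray value
    rfpn = [m - g for m in row_means]                               # per-row deviation
    cfpn = [m - g for m in col_means]                               # per-column deviation
    return rfpn, cfpn

def correct(img, rfpn, cfpn):
    """Compensate one frame by removing the estimated row/column offsets."""
    return [[img[i][j] - rfpn[i] - cfpn[j] for j in range(len(cfpn))]
            for i in range(len(rfpn))]
```

On a synthetic flat-field frame with known row and column offsets, this correction flattens the image to a constant gray value, which is exactly the "reduced standard deviation of the row-mean vector" effect the abstract reports.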
Composition, Context, and Endogeneity in School and Teacher Comparisons
ERIC Educational Resources Information Center
Castellano, Katherine E.; Rabe-Hesketh, Sophia; Skrondal, Anders
2014-01-01
Investigations of the effects of schools (or teachers) on student achievement focus on either (1) individual school effects, such as value-added analyses, or (2) school-type effects, such as comparisons of charter and public schools. Controlling for school composition by including student covariates is critical for valid estimation of either kind…
Essays on School Quality and Student Outcomes
ERIC Educational Resources Information Center
Crispin, Laura M.
2012-01-01
In my first chapter, I explore the relationship between school size and student achievement where, conditional on observable educational inputs, school size is a proxy for factors that are difficult to measure directly ( e.g., school climate and organization). Using data from the NELS:88, I estimate a series of value-added education production…
Measuring Effect Sizes: The Effect of Measurement Error. Working Paper 19
ERIC Educational Resources Information Center
Boyd, Donald; Grossman, Pamela; Lankford, Hamilton; Loeb, Susanna; Wyckoff, James
2008-01-01
Value-added models in education research allow researchers to explore how a wide variety of policies and measured school inputs affect the academic performance of students. Researchers typically quantify the impacts of such interventions in terms of "effect sizes", i.e., the estimated effect of a one standard deviation change in the…
The Application of Function Points to Predict Source Lines of Code for Software Development
1992-09-01
there are some disadvantages. Software estimating tools are expensive. A single tool may cost more than $15,000 due to the high market value of the...term and Lang variables simultaneously only added marginal improvements over models with these terms included individually. Using all the available
Teacher Effects on Student Achievement and Height: A Cautionary Tale
ERIC Educational Resources Information Center
Bitler, Marianne P.; Corcoran, Sean P.; Domina, Thurston; Penner, Emily K.
2014-01-01
The growing availability of data linking students to classroom teachers has made it possible to estimate the contribution teachers make to student achievement. While there is a growing consensus that teacher quality is important and current evaluation systems are inadequate, many have expressed concerns over the use of value-added measures (VAMs)…
Energy budget for yearling lake trout, Salvelinus namaycush
Rottiers, Donald V.
1993-01-01
Components of the energy budget of yearling lake trout (Salvelinus namaycush) were derived from data gathered in laboratory growth and metabolism studies; values for energy lost as waste were estimated with previously published equations. Because the total caloric value of food consumed by experimental lake trout was significantly different during the two years in which the studies were done, separate annual energy budgets were formulated. The gross conversion efficiency in yearling lake trout fed ad libitum rations of alewives at 10 °C was 26.6% to 41%. The distribution of energy with temperature was similar for each component of the energy budget. Highest conversion efficiencies were observed in fish fed less than ad libitum rations; fish fed an amount of food equivalent to about 4% of their body weight at 10 °C had a conversion efficiency of 33% to 45.1%. Physiologically useful energy was 76.1-80.1% of the total energy consumed. Estimated growth for age-I and age-II lake trout was near that observed for laboratory fish held at lake temperatures and fed reduced rations.
ERIC Educational Resources Information Center
Baird, Matthew David
2012-01-01
I study three separate questions in this dissertation. In Chapter 1, I develop and estimate a structural dynamic model of occupation and job choice to test hypotheses of the importance of wages and non-wages and learning in occupational transitions, and find that wages are approximately 3 times as important as non-wage benefits in decisions and…
Automated drug dispensing systems in the intensive care unit: a financial analysis.
Chapuis, Claire; Bedouch, Pierrick; Detavernier, Maxime; Durand, Michel; Francony, Gilles; Lavagne, Pierre; Foroni, Luc; Albaladejo, Pierre; Allenet, Benoit; Payen, Jean-Francois
2015-09-09
To evaluate the economic impact of automated-drug dispensing systems (ADS) in surgical intensive care units (ICUs). A financial analysis was conducted in three adult ICUs of one university hospital, where ADS were implemented, one in each unit, to replace the traditional floor stock system. Costs were estimated before and after implementation of the ADS on the basis of floor stock inventories, expired drugs, and time spent by nurses and pharmacy technicians on medication-related work activities. The financial analysis included operating cash flows, investment cash flows, global cash flow and net present value. After ADS implementation, nurses spent less time on medication-related activities, with an average of 14.7 hours saved per day across the 33 beds. Pharmacy technicians spent more time on floor-stock activities, with an average of 3.5 additional hours per day across the three ICUs. The cost of drug storage was reduced by €44,298 and the cost of expired drugs was reduced by €14,772 per year across the three ICUs. Five years after the initial investment, the global cash flow was €148,229 and the net present value of the project was positive at €510,404. The financial modeling of the ADS implementation in three ICUs showed a high return on investment for the hospital. Medication-related costs and nursing time dedicated to medications are reduced with ADS.
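The net-present-value calculation underlying this analysis is the standard discounting of yearly cash flows back to year 0; a compact sketch (the cash-flow and rate numbers below are illustrative placeholders, not figures from the study):

```python
def npv(rate, cashflows):
    """Net present value of yearly cash flows.
    cashflows[0] is the year-0 amount (negative for an initial outlay);
    cashflows[t] is discounted by (1 + rate)**t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical example: a 200k outlay followed by five years of 60k savings.
project = [-200_000.0] + [60_000.0] * 5
positive_npv = npv(0.05, project) > 0   # positive NPV -> worthwhile at a 5% rate
```

A positive NPV over the chosen horizon, as the study reports for its five-year window, is the usual decision criterion for this kind of investment.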
Jaskula, B.W.
2012-01-01
In 2011, world lithium consumption was estimated to have been about 25 kt (25,000 st) of lithium contained in minerals and compounds, a 10-percent increase from 2010. U.S. consumption was estimated to have been about 2 kt (2,200 st) of contained lithium, a 100-percent increase from 2010. The United States was estimated to be the fourth-ranked consumer of lithium and remained the leading importer of lithium carbonate and the leading producer of value-added lithium materials. One company, Chemetall Foote Corp. (a subsidiary of Chemetall GmbH of Germany), produced lithium compounds from domestic brine resources near Silver Peak, NV.
Jaskula, B.W.
2011-01-01
In 2010, lithium consumption in the United States was estimated to have been about 1 kt (1,100 st) of contained lithium, a 23-percent decrease from 2009. The United States was estimated to be the fourth largest consumer of lithium. It remained the leading importer of lithium carbonate and the leading producer of value-added lithium materials. Only one company, Chemetall Foote Corp. (a subsidiary of Chemetall GmbH of Germany), produced lithium compounds from domestic resources. In 2010, world lithium consumption was estimated to have been about 21 kt (22,000 st) of lithium contained in minerals and compounds, a 12-percent increase from 2009.
Jaskula, B.W.
2010-01-01
In 2009, lithium consumption in the United States was estimated to have been about 1.2 kt (1,300 st) of contained lithium, a 40-percent decrease from 2008. The United States was estimated to be the fourth largest consumer of lithium, and remained the leading importer of lithium carbonate and the leading producer of value-added lithium materials. Only one company, Chemetall Foote Corp. (a subsidiary of Chemetall GmbH of Germany), produced lithium compounds from domestic resources. In 2009, world lithium consumption was estimated to have been about 18.7 kt (20,600 st) of lithium contained in minerals and compounds.
Pharmacologic treatments for dry eye: a worthwhile investment?
Novack, Gary D
2002-01-01
To determine whether investment in a novel pharmacologic agent for the treatment of dry eye would be worthwhile from a financial perspective. Estimates were made of the cost and time required to develop a novel pharmacologic treatment of dry eye and the potential revenues for the product. These estimates were used to compute the value of the investment, adjusting for the time value of money. Development was estimated to cost $42 million and to take 55 months from investigational new drug exemption filing to new drug application approval. The potential market for this treatment was estimated at $542 million per year at year 5. Adding in the cost of development and marketing as well as other costs, net present value was very positive at the 5, 8, 10, and 40% cost of financing. The internal rate of return was 90%. In summary, if there were a successful pharmacologic treatment of dry eye and if a firm could manage the cash flow during the development, then the market potential approaches that of other treatment of chronic ophthalmic conditions (e.g., glaucoma), and it would be a worthwhile investment.
‘Alzheimer’s Progression Score’: Development of a Biomarker Summary Outcome for AD Prevention Trials
Leoutsakos, J.-M.; Gross, A.L.; Jones, R.N.; Albert, M.S.; Breitner, J.C.S.
2017-01-01
BACKGROUND Alzheimer’s disease (AD) prevention research requires methods for measurement of disease progression not yet revealed by symptoms. Preferably, such measurement should encompass multiple disease markers. OBJECTIVES Evaluate an item response theory (IRT) model-based latent variable Alzheimer Progression Score (APS) that uses multi-modal disease markers to estimate pre-clinical disease progression. DESIGN Estimate APS scores in the BIOCARD observational study, and in the parallel PREVENT-AD Cohort and its sister INTREPAD placebo-controlled prevention trial. Use BIOCARD data to evaluate whether baseline and early APS trajectory predict later progression to MCI/dementia. Similarly, use longitudinal PREVENT-AD data to assess test measurement invariance over time. Further, assess portability of the PREVENT-AD IRT model to baseline INTREPAD data, and explore model changes when CSF markers are added or withdrawn. SETTING BIOCARD was established in 1995 and participants were followed up to 20 years in Baltimore, USA. The PREVENT-AD and INTREPAD trial cohorts were established between 2011–2015 in Montreal, Canada, using nearly identical entry criteria to enroll high-risk cognitively normal persons aged 60+ then followed for several years. PARTICIPANTS 349 cognitively normal, primarily middle-aged participants in BIOCARD, 125 high-risk participants aged 60+ in PREVENT-AD, and 217 similar subjects in INTREPAD. 106 INTREPAD participants donated up to four serial CSF samples. MEASUREMENTS Global cognitive assessment and multiple structural, functional, and diffusion MRI metrics, sensori-neural tests, and CSF concentrations of tau, Aβ42 and their ratio. RESULTS Both baseline values and early slope of APS scores in BIOCARD predicted later progression to MCI or AD. Presence of CSF variables strongly improved such prediction. A similarly derived APS in PREVENT-AD showed measurement invariance over time and portability to the parallel INTREPAD sample. 
CONCLUSIONS An IRT-based APS can summarize multimodal information to provide a longitudinal measure of pre-clinical AD progression, and holds promise as an outcome for AD prevention trials. PMID:29034223
Leoutsakos, J-M; Gross, A L; Jones, R N; Albert, M S; Breitner, J C S
2016-01-01
Alzheimer's disease (AD) prevention research requires methods for measurement of disease progression not yet revealed by symptoms. Preferably, such measurement should encompass multiple disease markers. Evaluate an item response theory (IRT) model-based latent variable Alzheimer Progression Score (APS) that uses multi-modal disease markers to estimate pre-clinical disease progression. Estimate APS scores in the BIOCARD observational study, and in the parallel PREVENT-AD Cohort and its sister INTREPAD placebo-controlled prevention trial. Use BIOCARD data to evaluate whether baseline and early APS trajectory predict later progression to MCI/dementia. Similarly, use longitudinal PREVENT-AD data to assess test measurement invariance over time. Further, assess portability of the PREVENT-AD IRT model to baseline INTREPAD data, and explore model changes when CSF markers are added or withdrawn. BIOCARD was established in 1995 and participants were followed up to 20 years in Baltimore, USA. The PREVENT-AD and INTREPAD trial cohorts were established between 2011-2015 in Montreal, Canada, using nearly identical entry criteria to enroll high-risk cognitively normal persons aged 60+ then followed for several years. 349 cognitively normal, primarily middle-aged participants in BIOCARD, 125 high-risk participants aged 60+ in PREVENT-AD, and 217 similar subjects in INTREPAD. 106 INTREPAD participants donated up to four serial CSF samples. Global cognitive assessment and multiple structural, functional, and diffusion MRI metrics, sensori-neural tests, and CSF concentrations of tau, Aβ42 and their ratio. Both baseline values and early slope of APS scores in BIOCARD predicted later progression to MCI or AD. Presence of CSF variables strongly improved such prediction. A similarly derived APS in PREVENT-AD showed measurement invariance over time and portability to the parallel INTREPAD sample. 
An IRT-based APS can summarize multimodal information to provide a longitudinal measure of pre-clinical AD progression, and holds promise as an outcome for AD prevention trials.
NASA Astrophysics Data System (ADS)
Verkade, J. S.; Brown, J. D.; Davids, F.; Reggiani, P.; Weerts, A. H.
2017-12-01
Two statistical post-processing approaches for estimation of predictive hydrological uncertainty are compared: (i) 'dressing' of a deterministic forecast by adding a single, combined estimate of both hydrological and meteorological uncertainty and (ii) 'dressing' of an ensemble streamflow forecast by adding an estimate of hydrological uncertainty to each individual streamflow ensemble member. Both approaches aim to produce an estimate of the 'total uncertainty' that captures both the meteorological and hydrological uncertainties. They differ in the degree to which they make use of statistical post-processing techniques. In the 'lumped' approach, both sources of uncertainty are lumped by post-processing deterministic forecasts using their verifying observations. In the 'source-specific' approach, the meteorological uncertainties are estimated by an ensemble of weather forecasts. These ensemble members are routed through a hydrological model and a realization of the probability distribution of hydrological uncertainties (only) is then added to each ensemble member to arrive at an estimate of the total uncertainty. The techniques are applied to one location in the Meuse basin and three locations in the Rhine basin. Resulting forecasts are assessed for their reliability and sharpness, as well as compared in terms of multiple verification scores including the relative mean error, Brier Skill Score, Mean Continuous Ranked Probability Skill Score, Relative Operating Characteristic Score and Relative Economic Value. The dressed deterministic forecasts are generally more reliable than the dressed ensemble forecasts, but the latter are sharper. On balance, however, they show similar quality across a range of verification metrics, with the dressed ensembles coming out slightly better. Some additional analyses are suggested. 
Notably, these include statistical post-processing of the meteorological forecasts in order to increase their reliability, thus increasing the reliability of the streamflow forecasts produced with ensemble meteorological forcings.
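The two dressing approaches compared above can be sketched schematically (illustrative Python; real post-processors fit a conditional error distribution rather than resampling raw errors, so treat this as a toy version of the idea):

```python
import random

def dress_deterministic(forecast, past_errors, n=50, seed=1):
    """'Lumped' dressing: sample historical errors (observation minus
    deterministic forecast, covering both meteorological and hydrological
    uncertainty) and add them to the single forecast to form an ensemble."""
    rng = random.Random(seed)
    return [forecast + rng.choice(past_errors) for _ in range(n)]

def dress_ensemble(members, hydro_errors, seed=1):
    """'Source-specific' dressing: the meteorological uncertainty is already
    carried by the ensemble members, so add only a hydrological-uncertainty
    realization to each member to estimate the total uncertainty."""
    rng = random.Random(seed)
    return [m + rng.choice(hydro_errors) for m in members]
```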
Evaluating Prospective Teachers: Testing the Predictive Validity of the EdTPA
ERIC Educational Resources Information Center
Goldhaber, Dan; Cowan, James; Theobald, Roddy
2017-01-01
We use longitudinal data from Washington State to provide estimates of the extent to which performance on the edTPA, a performance-based, subject-specific assessment of teacher candidates, is predictive of the likelihood of employment in the teacher workforce and value-added measures of teacher effectiveness. While edTPA scores are highly…
Sources of the Indiana hardwood industry's competitiveness
Silas Tora; Eva Haviarova
2008-01-01
The estimated 1,600 forest products-related firms in Indiana employ more than 56,000 workers. Hardwood manufacturers are the largest segment, adding approximately $2 billion per year of raw product value. A recent report by BioCrossroads ranked the hardwood industry as the most important in the agricultural sector in Indiana. Like most of the other forest products...
Evaluating Prospective Teachers: Testing the Predictive Validity of the edTPA. Working Paper 157
ERIC Educational Resources Information Center
Goldhaber, Dan; Cowan, James; Theobald, Roddy
2016-01-01
We use longitudinal data from Washington State to provide estimates of the extent to which performance on the edTPA, a performance-based, subject-specific assessment of teacher candidates, is predictive of the likelihood of employment in the teacher workforce and value-added measures of teacher effectiveness. While edTPA scores are highly…
ERIC Educational Resources Information Center
Lipscomb, Stephen; Chiang, Hanley; Gill, Brian
2012-01-01
The Commonwealth of Pennsylvania plans to develop a new statewide evaluation system for teachers and principals in its public schools by school year 2013-2014. To inform the development of this evaluation system, the Team Pennsylvania Foundation (Team PA) undertook the first phase of the Pennsylvania Teacher and Principal Evaluation…
Hardwood Blanks Expand Export Opportunities
Bruce G. Hansen; Philip A. Araman
1985-01-01
This article reviews the latest statistics pertaining to the export of hardwood lumber to the Pacific Rim; discusses possible reasons for the emergence and growth of this market; offers alternatives to rough hardwood lumber (two forms of value-added, hardwood blanks); and develops estimates of prices needed at the mill to earn a 30% return on investment from the...
ERIC Educational Resources Information Center
Hewitt, Kimberly Kappler
2015-01-01
In the United States, policies in forty states and D.C. incorporate student growth measures--estimates of student progress attributed to educators--into educator evaluation. The federal government positions such policies as levers for ensuring that more students are taught by effective teachers and that effective educators are more equitably…
Zhang, Enlan; Li, Jiajia; Zhang, Keqiang; Wang, Feng; Yang, Houhua; Zhi, Suli; Liu, Guangqing
2018-03-22
Sweet potato vine (SPV) is an abundant agricultural waste that is easy to obtain at low cost and has the potential to produce clean energy via anaerobic digestion (AD). The main objectives of this study were to reveal the methane production and process stability of SPV and its mixtures with animal manure under various total solid conditions, to verify the synergistic effect of co-digesting SPV and manure in AD systems, and to determine the kinetic characteristics over the full AD process. The results showed that SPV was a desirable feedstock for AD, with a methane yield of 200.22 mL/g VS added in wet anaerobic digestion and 12.20 L methane/L working volume in dry anaerobic digestion (D-AD). Synergistic effects were found in semi-dry anaerobic digestion and D-AD for each pair of mixed feedstocks. Compared with SPV mono-digestion, co-digestion with manure increased methane yield by 14.34-49.11% in the different AD digesters. The final ratios of volatile fatty acids to total alkalinity (TA) were below 0.4 and the final pH values were within the range of 7.4-8.2 in all the reactors, which supported a positive relationship between carbohydrate hydrolysis and methanogenesis during the AD process. A modified first-order model fitted substrate biodegradability and methane production potential well, with conversion constants ranging from 0.0003 to 0.0953 1/day, indicating that co-digestion increased hydrolysis efficiency and metabolic activity. This work provides useful information to improve the utilization and stability of digestion using SPV and livestock or poultry manure as substrates.
Mayeux, Richard; Reitz, Christiane; Brickman, Adam M.; Haan, Mary N.; Manly, Jennifer J.; Glymour, M. Maria; Weiss, Christopher C.; Yaffe, Kristine; Middleton, Laura; Hendrie, Hugh C.; Warren, Lauren H.; Hayden, Kathleen M.; Welsh-Bohmer, Kathleen A.; Breitner, John C. S.; Morris, John C.
2011-01-01
Population studies strive to determine the prevalence of Alzheimer dementia but prevalence estimates vary widely. The challenges faced by several noted population studies for Alzheimer dementia in operationalizing current clinical diagnostic criteria for Alzheimer’s disease (AD) are reviewed. Differences in case ascertainment, methodological biases, cultural and educational influences on test performance, inclusion of special populations such as underrepresented minorities and the oldest old, and detection of the earliest symptomatic stages of underlying AD are considered. Classification of Alzheimer dementia may be improved by the incorporation of biomarkers for AD if the sensitivity, specificity, and predictive value of the biomarkers are established and if they are appropriate for epidemiological studies as may occur should a plasma biomarker be developed. Biomarkers for AD also could facilitate studies of the interactions of various forms of neurodegenerative disorders with cerebrovascular disease, resulting in “mixed dementia”. PMID:21255741
Dysregulation of glucose metabolism even in Chinese PCOS women with normal glucose tolerance.
Li, Weiping; Li, Qifu
2012-01-01
To clarify the necessity of improving glucose metabolism in polycystic ovary syndrome (PCOS) women as early as possible, 111 PCOS women with normal glucose tolerance and 92 healthy age-matched controls were recruited to investigate glucose levels distribution, insulin sensitivity and β cell function. 91 PCOS women and 33 controls underwent hyperinsulinemic-euglycemic clamp to assess their insulin sensitivity, which was expressed as M value. β cell function was estimated by homeostatic model assessment (HOMA)-β index after adjusting insulin sensitivity (HOMA-βad index). Compared with lean controls, lean PCOS women had similar fasting plasma glucose (FPG), higher postprandial plasma glucose (PPG) (6.03±1.05 vs. 5.44±0.97 mmol/L, P<0.05), lower M value but similar HOMA-βad index, while overweight/obese PCOS women had higher levels of both FPG (5.24±0.58 vs. 4.90±0.39, P<0.05) and PPG (6.15±0.84 vs. 5.44±0.97 mmol/L, P<0.05), and lower levels of both M value and HOMA-βad index. Linear regression and ROC analysis found BMI was independently associated with M value and HOMA-βad index in PCOS women separately, and the cutoff of BMI indicating impaired β cell function of PCOS women was 25.545 kg/m². In conclusion, insulin resistance and dysregulation of glucose metabolism were common in Chinese PCOS women with normal glucose tolerance. BMI ≥ 25.545 kg/m² indicated impaired β cell function in PCOS women with normal glucose tolerance.
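The study's sensitivity-adjusted HOMA-βad index is not defined in the abstract; for orientation, the standard HOMA indices it builds on (the Matthews et al. formulations) can be sketched as follows (units: glucose in mmol/L, insulin in µU/mL):

```python
def homa_ir(fasting_glucose_mmol_l, fasting_insulin_uU_ml):
    """HOMA insulin-resistance index: (G * I) / 22.5."""
    return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

def homa_beta(fasting_glucose_mmol_l, fasting_insulin_uU_ml):
    """HOMA-β (%), a basal estimate of beta-cell function: 20 * I / (G - 3.5).
    Note: the HOMA-βad index used in the study further adjusts this for the
    clamp-measured insulin sensitivity (M value), which is not shown here."""
    return 20.0 * fasting_insulin_uU_ml / (fasting_glucose_mmol_l - 3.5)
```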
Two-pass imputation algorithm for missing value estimation in gene expression time series.
Tsiporkova, Elena; Boeva, Veselka
2007-10-01
Gene expression microarray experiments frequently generate datasets with multiple values missing. However, most of the analysis, mining, and classification methods for gene expression data require a complete matrix of gene array values. Therefore, the accurate estimation of missing values in such datasets has been recognized as an important issue, and several imputation algorithms have already been proposed to the biological community. Most of these approaches, however, are not particularly suitable for time series expression profiles. In view of this, we propose a novel imputation algorithm, which is specially suited for the estimation of missing values in gene expression time series data. The algorithm utilizes Dynamic Time Warping (DTW) distance in order to measure the similarity between time expression profiles, and subsequently selects for each gene expression profile with missing values a dedicated set of candidate profiles for estimation. Three different DTW-based imputation (DTWimpute) algorithms have been considered: position-wise, neighborhood-wise, and two-pass imputation. These have initially been prototyped in Perl, and their accuracy has been evaluated on yeast expression time series data using several different parameter settings. The experiments have shown that the two-pass algorithm consistently outperforms, in particular for datasets with a higher level of missing entries, the neighborhood-wise and the position-wise algorithms. The performance of the two-pass DTWimpute algorithm has further been benchmarked against the weighted K-Nearest Neighbors algorithm, which is widely used in the biological community; the former algorithm has appeared superior to the latter one. Motivated by these findings, indicating clearly the added value of the DTW techniques for missing value estimation in time series data, we have built an optimized C++ implementation of the two-pass DTWimpute algorithm. 
The software also provides for a choice between three different initial rough imputation methods.
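The core machinery the DTWimpute algorithms build on can be sketched briefly (illustrative Python; the published prototypes are in Perl with an optimized C++ implementation, and the imputation step itself is omitted):

```python
import math

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two time-series profiles,
    via the standard O(n*m) dynamic program with |x - y| as local cost."""
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping steps.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def candidate_profiles(target, pool, k=3):
    """Select the k complete profiles most similar to the target under DTW;
    these would serve as the dedicated candidate set for estimating the
    target's missing entries."""
    return sorted(pool, key=lambda p: dtw_distance(target, p))[:k]
```

The advantage over plain Euclidean nearest neighbors (as in weighted KNN imputation) is that DTW tolerates temporal shifts between expression profiles, which is what makes it attractive for time-series data.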
Computing and Representing Sea Ice Trends: Toward a Community Consensus
NASA Technical Reports Server (NTRS)
Wohlleben, T.; Tivy, A.; Stroeve, J.; Meier, Walter N.; Fetterer, F.; Wang, J.; Assel, R.
2013-01-01
Estimates of the recent decline in Arctic Ocean summer sea ice extent can vary due to differences in sea ice data sources, in the number of years used to compute the trend, and in the start and end years used in the trend computation. Compounding such differences, estimates of the relative decline in sea ice cover (given in percent change per decade) can further vary due to the choice of reference value (the initial point of the trend line, a climatological baseline, etc.). Further adding to the confusion, very often when relative trends are reported in research papers, the reference values used are not specified or made clear. This can lead to confusion when trend studies are cited in the press and public reports.
Estimated intakes and sources of total and added sugars in the Canadian diet.
Brisbois, Tristin D; Marsden, Sandra L; Anderson, G Harvey; Sievenpiper, John L
2014-05-08
National food supply data and dietary surveys are essential to estimate nutrient intakes and monitor trends, yet there are few published studies estimating added sugars consumption. The purpose of this report was to estimate and trend added sugars intakes and their contribution to total energy intake among Canadians by, first, using Canadian Community Health Survey (CCHS) nutrition survey data of intakes of sugars in foods and beverages, and second, using Statistics Canada availability data and adjusting these for wastage to estimate intakes. Added sugars intakes were estimated from CCHS data by categorizing the sugars content of food groups as either added or naturally occurring. Added sugars accounted for approximately half of total sugars consumed. Annual availability data were obtained from Statistics Canada CANSIM database. Estimates for added sugars were obtained by summing the availability of "sugars and syrups" with availability of "soft drinks" (proxy for high fructose corn syrup) and adjusting for waste. Analysis of both survey and availability data suggests that added sugars average 11%-13% of total energy intake. Availability data indicate that added sugars intakes have been stable or modestly declining as a percent of total energy over the past three decades. Although these are best estimates based on available data, this analysis may encourage the development of better databases to help inform public policy recommendations.
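The availability-to-intake adjustment and the percent-of-energy calculation described above are simple arithmetic; a sketch (the waste fraction and gram/energy figures below are placeholders, while 4 kcal/g is the standard carbohydrate energy factor):

```python
KCAL_PER_G_SUGAR = 4.0  # standard energy factor for carbohydrate

def intake_from_availability(availability_g_per_day, waste_fraction):
    """Approximate per-capita intake from supply (availability) data by
    discounting an assumed wastage fraction."""
    return availability_g_per_day * (1.0 - waste_fraction)

def pct_energy_from_added_sugars(added_sugars_g, total_energy_kcal):
    """Contribution of added sugars to total daily energy intake, in percent."""
    return 100.0 * added_sugars_g * KCAL_PER_G_SUGAR / total_energy_kcal
```

For example, 60 g of added sugars in a 2000 kcal diet works out to 12% of energy, inside the 11%-13% band the report derives from both data sources.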
Estimated Intakes and Sources of Total and Added Sugars in the Canadian Diet
Brisbois, Tristin D.; Marsden, Sandra L.; Anderson, G. Harvey; Sievenpiper, John L.
2014-01-01
National food supply data and dietary surveys are essential to estimate nutrient intakes and monitor trends, yet there are few published studies estimating added sugars consumption. The purpose of this report was to estimate and trend added sugars intakes and their contribution to total energy intake among Canadians by, first, using Canadian Community Health Survey (CCHS) nutrition survey data of intakes of sugars in foods and beverages, and second, using Statistics Canada availability data and adjusting these for wastage to estimate intakes. Added sugars intakes were estimated from CCHS data by categorizing the sugars content of food groups as either added or naturally occurring. Added sugars accounted for approximately half of total sugars consumed. Annual availability data were obtained from Statistics Canada CANSIM database. Estimates for added sugars were obtained by summing the availability of “sugars and syrups” with availability of “soft drinks” (proxy for high fructose corn syrup) and adjusting for waste. Analysis of both survey and availability data suggests that added sugars average 11%–13% of total energy intake. Availability data indicate that added sugars intakes have been stable or modestly declining as a percent of total energy over the past three decades. Although these are best estimates based on available data, this analysis may encourage the development of better databases to help inform public policy recommendations. PMID:24815507
Rial-Crestelo, M; Martinez-Portilla, R J; Cancemi, A; Caradeux, J; Fernandez, L; Peguero, A; Gratacos, E; Figueras, Francesc
2018-03-04
The objective of this study is to determine the added value of cerebroplacental ratio (CPR) and uterine Doppler velocimetry at the third trimester scan in an unselected obstetric population to predict smallness and growth restriction. We conducted a prospective cohort study of women with singleton pregnancies attended for routine third trimester screening (32+0 to 34+6 weeks). Fetal biometry and fetal-maternal Doppler ultrasound examinations were performed by certified sonographers. The CPR was calculated as the ratio of the middle cerebral artery to the umbilical artery pulsatility indices. Both attending professionals and patients were blinded to the results, except in cases of estimated fetal weight < p10. The association between third trimester Doppler parameters and small for gestational age (SGA) (birth weight <10th centile) and fetal growth restriction (FGR) (birth weight below the third centile) was assessed by logistic regression, where the baseline comparison was a model comprising maternal characteristics and estimated fetal weight (EFW). A total of 1030 pregnancies were included. The mean gestational age at scan was 33 weeks (SD 0.6). The addition of CPR and uterine Doppler to maternal characteristics plus EFW improved the explained uncertainty of the predicting models for SGA (15 versus 10%, p < .001) and FGR (12 versus 8%, p = .03). However, the addition of CPR and uterine Doppler to maternal characteristics plus EFW only marginally improved the detection rates for SGA (38 versus 34% at a 10% false-positive rate) and did not change the predictive performance for FGR. The added value of CPR and uterine Doppler at 33 weeks of gestation for detecting defective growth is poor.
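The ratio at the core of this screening approach is simple to compute; a minimal sketch (the function names and the CPR cutoff below are illustrative assumptions, not values from the study):

```python
def cerebroplacental_ratio(mca_pi, ua_pi):
    """CPR: middle cerebral artery pulsatility index divided by the
    umbilical artery pulsatility index."""
    return mca_pi / ua_pi

def low_cpr(mca_pi, ua_pi, cutoff=1.0):
    # A low CPR suggests brain-sparing flow redistribution; the cutoff
    # here is a common convention, not one stated in the abstract.
    return cerebroplacental_ratio(mca_pi, ua_pi) < cutoff

print(cerebroplacental_ratio(1.0, 1.25))  # → 0.8, flagged as low
```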
[Value-Added--Adding Economic Value in the Food Industry].
ERIC Educational Resources Information Center
Welch, Mary A., Ed.
1989-01-01
This booklet focuses on the economic concept of "value added" to goods and services. A student activity worksheet illustrates how the steps involved in processing food are examples of the concept of value added. The booklet further links food processing to the idea of value added to the Gross National Product (GNP). Discussion questions,…
Myths & Facts about Value-Added Analysis
ERIC Educational Resources Information Center
TNTP, 2011
2011-01-01
This paper presents myths as well as facts about value-added analysis. These myths include: (1) "Value-added isn't fair to teachers who work in high-need schools, where students tend to lag far behind academically"; (2) "Value-added scores are too volatile from year-to-year to be trusted"; (3) "There's no research behind value-added"; (4) "Using…
Diagnostic and Prognostic Utility of the Synaptic Marker Neurogranin in Alzheimer Disease.
Tarawneh, Rawan; D'Angelo, Gina; Crimmins, Dan; Herries, Elizabeth; Griest, Terry; Fagan, Anne M; Zipfel, Gregory J; Ladenson, Jack H; Morris, John C; Holtzman, David M
2016-05-01
Synaptic loss is an early pathologic substrate of Alzheimer disease (AD). Neurogranin is a postsynaptic neuronal protein that has demonstrated utility as a cerebrospinal fluid (CSF) marker of synaptic loss in AD. To investigate the diagnostic and prognostic utility of CSF neurogranin levels in a large, well-characterized cohort of individuals with symptomatic AD and cognitively normal controls. A cross-sectional and longitudinal observational study of cognitive decline in patients with symptomatic AD and cognitively normal controls was performed. Participants were individuals with a clinical diagnosis of early symptomatic AD and cognitively normal controls who were enrolled in longitudinal studies of aging and dementia at the Charles F. and Joanne Knight Alzheimer Disease Research Center, Washington University School of Medicine, from January 21, 2000, through March 21, 2011. Data analysis was performed from November 1, 2013, to March 31, 2015. Correlations between baseline CSF biomarker levels and future cognitive decline in patients with symptomatic AD and cognitively normal controls over time. A total of 302 individuals (mean [SE] age, 73.1 [0.4] years) were included in this study (95 patients [52 women and 43 men] with AD and 207 controls [125 women and 82 men]). The CSF neurogranin levels differentiated patients with early symptomatic AD from controls with comparable diagnostic utility (mean [SE] area under the receiver operating characteristic curve, 0.71 [0.03]; 95% CI, 0.64-0.77) to the other CSF biomarkers. The CSF neurogranin levels correlated with brain atrophy (normalized whole-brain volumes: adjusted r = -0.38, P = .02; hippocampal volumes: adjusted r = -0.36, P = .03; entorhinal volumes: adjusted r = -0.46, P = .006; and parahippocampal volumes: adjusted r = -0.47, P = .005, n = 38) in AD and with amyloid load (r = 0.39, P = .02, n = 36) in preclinical AD. 
The CSF neurogranin levels predicted future cognitive impairment (adjusted hazard ratio, 1.89; 95% CI, 1.29-2.78; P = .001 as a continuous measure, and adjusted hazard ratio, 2.78; 95% CI, 1.13-5.99; P = .02 as a categorical measure using the 85th percentile cutoff value) in controls and rates of cognitive decline (Clinical Dementia Rating sum of boxes score: β estimate, 0.29; P = .001; global composite scores: β estimate, -0.11; P = .001; episodic memory scores: β estimate, -0.18; P < .001; and semantic memory scores: β estimate, -0.06; P = .04, n = 57) in patients with symptomatic AD over time, similarly to the CSF proteins VILIP-1, tau, and p-tau181. The CSF levels of the synaptic marker neurogranin offer diagnostic and prognostic utility for early symptomatic AD that is comparable to other CSF markers of AD. Importantly, CSF neurogranin complements the collective ability of these markers to predict future cognitive decline in cognitively normal individuals and, therefore, will be a useful addition to the current panel of AD biomarkers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riihimaki, L.; McFarlane, S.; Sivaraman, C.
The ndrop_mfrsr value-added product (VAP) provides an estimate of the cloud droplet number concentration of overcast water clouds, retrieved from cloud optical depth from the multi-filter rotating shadowband radiometer (MFRSR) instrument and liquid water path (LWP) retrieved from the microwave radiometer (MWR). When cloud layer information is available from vertically pointing lidar and radars in the Active Remote Sensing of Clouds (ARSCL) product, the VAP also provides estimates of the adiabatic LWP and an adiabatic parameter (beta) that indicates how divergent the LWP is from the adiabatic case. Quality control (QC) flags (qc_drop_number_conc), an uncertainty estimate (drop_number_conc_toterr), and a cloud layer type flag (cloud_base_type) are useful indicators of the quality and accuracy of any given value of the retrieval. Examples of these major input and output variables are given in sample plots in section 6.0.
Fine-tuning satellite-based rainfall estimates
NASA Astrophysics Data System (ADS)
Harsa, Hastuadi; Buono, Agus; Hidayat, Rahmat; Achyar, Jaumil; Noviati, Sri; Kurniawan, Roni; Praja, Alfan S.
2018-05-01
Rainfall datasets are available from various sources, including satellite estimates and ground observation. Ground observation locations are sparsely scattered, so the use of satellite estimates is advantageous: they can provide data for places where ground observations are not present. In general, however, satellite estimates contain bias, since they are the product of algorithms that transform the sensor response into rainfall values. Another cause may be the limited number of ground observations used by the algorithms as the reference in determining rainfall values. This paper describes the application of a bias correction method that modifies the satellite-based dataset by adding a number of ground observation locations that had not been used before by the algorithm. The bias correction was performed by applying a Quantile Mapping procedure between ground observation data and satellite estimates. Since Quantile Mapping requires the mean and standard deviation of both the reference data and the data being corrected, an Inverse Distance Weighting scheme was first applied to the mean and standard deviation of the observation data to produce a spatial composite of these originally scattered statistics, making it possible to provide a reference data point at the same location as each satellite estimate. The results show that the new dataset represents the rainfall values recorded by the ground observations statistically better than the previous dataset.
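The two steps described above can be sketched minimally: Inverse Distance Weighting to interpolate gauge statistics to a satellite grid point, followed by a mean/standard-deviation (Gaussian) form of Quantile Mapping. All station values, distances, and parameters below are hypothetical:

```python
def idw(values, distances, power=2):
    """Inverse Distance Weighting: interpolate station statistics
    (e.g. the mean or standard deviation of gauge rainfall) to a
    target location from surrounding stations."""
    weights = [1.0 / d ** power for d in distances]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

def quantile_map(x, sat_mean, sat_std, obs_mean, obs_std):
    """Gaussian quantile mapping: shift and rescale a satellite rainfall
    value so its distribution matches the ground observations."""
    return obs_mean + (x - sat_mean) * obs_std / sat_std

# Hypothetical gauge means/stds (mm/day) at three stations, interpolated
# to the satellite pixel, then used to correct one satellite estimate.
obs_mean = idw([5.0, 6.0, 4.0], [1.0, 2.0, 3.0])
obs_std = idw([2.0, 2.5, 1.8], [1.0, 2.0, 3.0])
corrected = quantile_map(7.0, sat_mean=4.0, sat_std=3.0,
                         obs_mean=obs_mean, obs_std=obs_std)
```

Full quantile mapping matches empirical CDFs quantile by quantile; the mean/std form above is the special case the abstract's description implies.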
Jaskula, B.W.
2013-01-01
In 2012, estimated world lithium consumption was about 28 kt (31,000 st) of lithium contained in minerals and compounds, an 8 percent increase from that of 2011. Estimated U.S. consumption was about 2 kt (2,200 st) of contained lithium, the same as that of 2011. The United States was thought to rank fourth in consumption of lithium and remained the leading importer of lithium carbonate and the leading producer of value-added lithium materials. One company, Rockwood Lithium Inc., produced lithium compounds from domestic brine resources near Silver Peak, NV.
On the applicability of integrated circuit technology to general aviation orientation estimation
NASA Technical Reports Server (NTRS)
Debra, D. B.; Tashker, M. G.
1976-01-01
The criteria of the significant value of the panel instruments used in general aviation were examined and kinematic equations were added for comparison. An instrument survey was performed to establish the present state of the art in linear and angular accelerometers, pressure transducers, and magnetometers. A very preliminary evaluation was done of the computers available for data evaluation and estimator mechanization. The mathematical model of a light twin aircraft employed in the evaluation was documented, the results of the sensor survey and the results of the design studies were presented.
Using CT Data to Improve the Quantitative Analysis of 18F-FBB PET Neuroimages
Segovia, Fermín; Sánchez-Vañó, Raquel; Górriz, Juan M.; Ramírez, Javier; Sopena-Novales, Pablo; Testart Dardel, Nathalie; Rodríguez-Fernández, Antonio; Gómez-Río, Manuel
2018-01-01
18F-FBB PET is a neuroimaging modality that is increasingly being used to assess brain amyloid deposits in potential patients with Alzheimer's disease (AD). In this work, we analyze the usefulness of these data to distinguish between AD and non-AD patients. A dataset with 18F-FBB PET brain images from 94 subjects diagnosed with AD and other disorders was evaluated by means of multiple analyses based on t-test, ANOVA, Fisher Discriminant Analysis and Support Vector Machine (SVM) classification. In addition, we propose to calculate amyloid standardized uptake values (SUVs) using only gray-matter voxels, which can be estimated using Computed Tomography (CT) images. This approach allows assessing potential brain amyloid deposits along with the gray matter loss and takes advantage of the structural information provided by most of the scanners used for PET examination, which allow simultaneous PET and CT data acquisition. The results obtained in this work suggest that SUVs calculated according to the proposed method allow AD and non-AD subjects to be more accurately differentiated than using SUVs calculated with standard approaches. PMID:29930505
Bouvet, J-M; Makouanzi, G; Cros, D; Vigneron, Ph
2016-01-01
Hybrids are broadly used in plant breeding and accurate estimation of variance components is crucial for optimizing genetic gain. Genome-wide information may be used to explore models designed to assess the extent of additive and non-additive variance and test their prediction accuracy for the genomic selection. Ten linear mixed models, involving pedigree- and marker-based relationship matrices among parents, were developed to estimate additive (A), dominance (D) and epistatic (AA, AD and DD) effects. Five complementary models, involving the gametic phase to estimate marker-based relationships among hybrid progenies, were developed to assess the same effects. The models were compared using tree height and 3303 single-nucleotide polymorphism markers from 1130 cloned individuals obtained via controlled crosses of 13 Eucalyptus urophylla females with 9 Eucalyptus grandis males. Akaike information criterion (AIC), variance ratios, asymptotic correlation matrices of estimates, goodness-of-fit, prediction accuracy and mean square error (MSE) were used for the comparisons. The variance components and variance ratios differed according to the model. Models with a parent marker-based relationship matrix performed better than those that were pedigree-based, that is, an absence of singularities, lower AIC, higher goodness-of-fit and accuracy and smaller MSE. However, AD and DD variances were estimated with high standard errors. Using the same criteria, progeny gametic phase-based models performed better in fitting the observations and predicting genetic values. However, DD variance could not be separated from the dominance variance and null estimates were obtained for AA and AD effects. This study highlighted the advantages of progeny models using genome-wide information. PMID:26328760
Alaska's lumber-drying industry—impacts from a federal grant program.
David L. Nicholls; Allen M. Brackley; Thomas D. Rojas
2006-01-01
A survey determined that installed dry kiln capacity in Alaska more than doubled to an estimated 220 thousand board feet (mbf) within 4 years (2000-2004). This increased ability to produce dry lumber and value-added products resulted from industry efforts to obtain federal funding to support a dry kiln grant program. This report reviews grantees' progress in...
ERIC Educational Resources Information Center
Goldhaber, Dan; Cowan, James; Theobald, Roddy
2016-01-01
We use longitudinal data from Washington State to provide estimates of the extent to which performance on the edTPA, a performance-based, subject-specific assessment of teacher candidates, is predictive of the likelihood of employment in the teacher workforce and value-added measures of teacher effectiveness. While edTPA scores are highly…
Observed Characteristics and Teacher Quality: Impacts of Sample Selection on a Value Added Model
ERIC Educational Resources Information Center
Winters, Marcus A.; Dixon, Bruce L.; Greene, Jay P.
2012-01-01
We measure the impact of observed teacher characteristics on student math and reading proficiency using a rich dataset from Florida. We expand upon prior work by accounting directly for nonrandom attrition of teachers from the classroom in a sample selection framework. We find evidence that sample selection is present in the estimation of the…
ERIC Educational Resources Information Center
Moriyama, Karen Ito
2009-01-01
In this era of accountability, there is a need to fairly and accurately document the ways that educational systems contribute to student achievement. This study used the regression discontinuity design within a multilevel framework as an alternative approach to estimate school effectiveness by examining the effect of the value added to students'…
ERIC Educational Resources Information Center
Koedel, Cory; Betts, Julian R.
2007-01-01
This study uses administrative data linking students and teachers at the classroom level to estimate teacher value-added to student test scores. We find that variation in teacher quality is an important contributor to student achievement--more important than has been implied by previous work. This result is attributable, at least in part, to the…
ERIC Educational Resources Information Center
Schochet, Peter Z.; Chiang, Hanley S.
2010-01-01
This paper addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using realistic performance measurement system schemes based on hypothesis testing, we develop error rate formulas based on OLS and Empirical Bayes estimators.…
Rosero, D S; Odle, J; Arellano, C; Boyd, R D; van Heugten, E
2015-03-01
Two studies were conducted 1) to determine the effects of free fatty acid (FFA) concentrations and the degree of saturation of lipids (unsaturated to saturated fatty acids ratio [U:S]) on apparent total tract digestibility (ATTD) and DE content of lipids and 2) to derive prediction equations to estimate the DE content of lipids when added to lactating sow diets. In Exp. 1, 85 lactating sows were assigned randomly to a 4 × 5 factorial arrangement of treatments plus a control diet with no added lipid. Factors included 1) FFA concentrations of 0, 18, 36, and 54% and 2) U:S of 2.0, 2.8, 3.5, 4.2, and 4.9. Diets were corn-soybean meal based and lipid was supplemented at 6%. Concentrations of FFA and U:S were obtained by blending 4 lipid sources: choice white grease (CWG; FFA = 0.3% and U:S = 2.0), soybean oil (FFA = 0.1% and U:S = 5.5), CWG acid oil (FFA = 57.8% and U:S = 2.1), and soybean-cottonseed acid oil (FFA = 67.5% and U:S = 3.8). Titanium dioxide was added to diets (0.5%) as a digestibility marker. Treatments started on d 4 of lactation and fecal samples were collected after 6 d of adaptation to diets on a daily basis from d 10 to 13. The ATTD of added lipid and DE content of lipids were negatively affected (linear, P < 0.001) with increasing FFA concentrations, but negative effects were less pronounced with increasing U:S (interaction, P < 0.05). Coefficients of ATTD for the added lipid and DE content of lipids increased with increasing U:S (quadratic, P = 0.001), but these improvements were less pronounced when the FFA concentration was less than 36%. Digestible energy content of added lipid was described by DE (kcal/kg) = [8,381 - (80.6 × FFA) + (0.4 × FFA²) + (248.8 × U:S) - (28.1 × U:S²) + (12.8 × FFA × U:S)] (R² = 0.74). This prediction equation was validated in Exp.
2, in which 24 lactating sows were fed diets supplemented with 6% of either an animal-vegetable blend (A-V; FFA = 14.5% and U:S = 2.3) or CWG (FFA = 3.7% and U:S = 1.5) plus a control diet with no added lipids. Digestible energy content of A-V (8,317 and 8,127 kcal/kg for measured and predicted values, respectively) and CWG (8,452 and 8,468 kcal/kg for measured and predicted values, respectively) were accurately estimated using the proposed equation. The proposed equation involving FFA concentration and U:S resulted in highly accurate estimations of DE content (relative error, +0.2 to -2.3%) of commercial sources of lipids for lactating sows.
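The prediction equation is easy to check against the Exp. 2 validation values. The sketch below assumes the quadratic form of the regression (squared FFA and U:S in the third and fifth terms, which extraction of the abstract can drop); with the rounded coefficients it lands within a few dozen kcal/kg of the reported predictions:

```python
def predict_de(ffa, us):
    """Predicted DE (kcal/kg) of an added lipid from its free fatty acid
    concentration (%, ffa) and unsaturated:saturated ratio (us)."""
    return (8381 - 80.6 * ffa + 0.4 * ffa ** 2
            + 248.8 * us - 28.1 * us ** 2 + 12.8 * ffa * us)

# Exp. 2 checks (reported predicted values: A-V blend 8,127; CWG 8,468):
av_blend = predict_de(ffa=14.5, us=2.3)   # ~8,147 with rounded coefficients
cwg = predict_de(ffa=3.7, us=1.5)         # ~8,469 with rounded coefficients
```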
What's the Value in Value-Added?
ERIC Educational Resources Information Center
Duffrin, Elizabeth
2011-01-01
A growing number of school districts are adopting "value-added" measures of teaching quality to award bonuses or even tenure. And two competitive federal grants are spurring them on. Districts using value-added data are encouraged by the results. But researchers who support value-added measures advise caution. The ratings, which use a…
SU-E-QI-08: Fourier Properties of Cone Beam CT Projection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bai, T; UT Southwestern Medical Center, Dallas, TX; Yan, H
Purpose: To explore the Fourier properties of cone beam CT (CBCT) projections and apply these properties to directly estimate the noise level of CBCT projections without any prior information. Methods: By utilizing the properties of Bessel functions, we derive the Fourier properties of the CBCT projections for an arbitrary point object. It is found that there exists a double-wedge shaped region in the Fourier space where the intensity is approximately zero. We further derive the Fourier properties of independent noise added to CBCT projections: the expectation of the squared modulus at any point of the Fourier space is constant and approximately equals the noise energy. We further validate the theory in numerical simulations for both a delta function object and a NCAT phantom with different levels of noise added. Results: Our simulation confirmed the existence of the double-wedge shaped region in the Fourier domain for the x-ray projection image. The boundary locations of this region agree well with theoretical predictions. In the experiments of estimating noise level, the mean relative error between the theory estimation and the ground truth values is 2.697%. Conclusion: A novel theory on the Fourier properties of CBCT projections has been discovered. Accurate noise level estimation can be achieved by applying this theory directly to the measured CBCT projections. This work was supported in part by NIH (1R01CA154747-01), NSFC (No. 61172163), Research Fund for the Doctoral Program of Higher Education of China (No. 20110201110011) and the China Scholarship Council.
NASA Astrophysics Data System (ADS)
Foolad, Foad; Franz, Trenton E.; Wang, Tiejun; Gibson, Justin; Kilic, Ayse; Allen, Richard G.; Suyker, Andrew
2017-03-01
In this study, the feasibility of using inverse vadose zone modeling for estimating field-scale actual evapotranspiration (ETa) was explored at a long-term agricultural monitoring site in eastern Nebraska. Data from both point-scale soil water content (SWC) sensors and the area-average technique of cosmic-ray neutron probes were evaluated against independent ETa estimates from a co-located eddy covariance tower. While this methodology has been successfully used for estimates of groundwater recharge, it was essential to assess the performance of other components of the water balance such as ETa. In light of recent evaluations of land surface models (LSMs), independent estimates of hydrologic state variables and fluxes are critically needed benchmarks. The results here indicate reasonable estimates of daily and annual ETa from the point sensors, but with highly varied soil hydraulic function parameterizations due to local soil texture variability. That multiple soil hydraulic parameterizations lead to equally good ETa estimates is consistent with the hydrological principle of equifinality. While this study focused on one particular site, the framework can be easily applied to other SWC monitoring networks across the globe. The value-added products of groundwater recharge and ETa flux from the SWC monitoring networks will provide additional and more robust benchmarks for the validation of LSMs as they continue to improve their forecast skill. In addition, the value-added products of groundwater recharge and ETa often have more direct impacts on societal decision-making than SWC alone. Water flux impacts human decision-making from policies on the long-term management of groundwater resources (recharge), to yield forecasts (ETa), and to optimal irrigation scheduling (ETa). Illustrating the societal benefits of SWC monitoring is critical to ensure the continued operation and expansion of these public datasets.
Chapter 17: Adding Value to the Biorefinery with Lignin: An Engineer's Perspective
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biddy, Mary J
There is a long-standing belief that 'you can make anything out of lignin...except money.' This chapter serves to highlight that opportunities for making money from biomass-derived lignin exist both with current technology, in the production of steam and power, and in new emerging areas of R&D focused on value-added chemical and material coproducts from lignin. To understand and quantify the economic potential for lignin valorization, the techno-economic analysis methodology is first described in detail. As demonstrated in the provided case study, these types of economic evaluations serve not only to estimate the economic impacts that lignin conversion could have for an integrated biorefinery and outline drivers for further cost reduction, but also to identify data gaps and R&D needs for improving the design basis and reducing the risk for process scale-up.
NASA Astrophysics Data System (ADS)
Dorofeeva, Olga V.; Suchkova, Taisiya A.
2018-04-01
The gas-phase enthalpies of formation of four molecules with high flexibility, which leads to the existence of a large number of low-energy conformers, were calculated with the G4 method to see whether the lowest energy conformer is sufficient to achieve high accuracy in the computed values. The calculated values were in good agreement with the experiment, whereas adding the correction for conformer distribution makes the agreement worse. The reason for this effect is a large anharmonicity of low-frequency torsional motions, which is ignored in the calculation of ZPVE and thermal enthalpy. It was shown that the approximate correction for anharmonicity estimated using a free rotor model is of very similar magnitude compared with the conformer correction but has the opposite sign, and thus almost fully compensates for it. Therefore, the common practice of adding only the conformer correction is not without problems.
Atsma, Femke; van der Schouw, Yvonne T; Grobbee, Diederick E; Hoes, Arno W; Bartelink, Marie-Louise E L
2008-11-12
The aim of the present study was to investigate the added value of age at menopause and the lifetime cumulative number of menstrual cycles in cardiovascular risk prediction in postmenopausal women. This study included 971 women. The ankle-arm index was used as a proxy for cardiovascular morbidity and mortality. The ankle-arm index was calculated for each leg by dividing the highest ankle systolic blood pressure by the highest brachial systolic blood pressure. A cut-off value of 0.95 was used to differentiate between low- and high-risk women. Three cardiovascular risk models were constructed. In the initial model all classical predictors for cardiovascular disease were investigated. This model was then extended by age at menopause or the lifetime cumulative number of menstrual cycles to test their added value for cardiovascular risk prediction. Differences in discriminative power between the models were investigated by comparing the area under the receiver operating characteristic (ROC) curves. The mean age was 66.0 (±5.6) years. The 6 independent predictors for cardiovascular disease were age, systolic blood pressure, total to HDL cholesterol ratio, current smoking, glucose level, and body mass index ≥30 kg/m². The ROC area was 0.69 (0.64-0.73) and did not change when age at menopause or the lifetime cumulative number of menstrual cycles was added. The findings in this study among postmenopausal women did not support the view that age at menopause or a refined estimation of lifetime endogenous estrogen exposure would improve cardiovascular risk prediction as approximated by the ankle-arm index.
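The ankle-arm index computation described above can be sketched as follows (names are illustrative; the aggregation of the two per-leg values into a single risk flag is an assumption, since the abstract does not state it):

```python
def ankle_arm_index(ankle_systolic, brachial_systolic):
    """Per-leg ankle-arm index: the highest ankle systolic pressure
    divided by the highest brachial systolic pressure (mmHg)."""
    return max(ankle_systolic) / max(brachial_systolic)

def high_risk(left_aai, right_aai, cutoff=0.95):
    # Flag a woman as high risk if either leg falls below the cutoff.
    return min(left_aai, right_aai) < cutoff

left = ankle_arm_index([120, 118], [130, 128])   # 120/130 ≈ 0.92
right = ankle_arm_index([132, 135], [130, 128])  # 135/130 ≈ 1.04
print(high_risk(left, right))  # → True
```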
NASA Astrophysics Data System (ADS)
Dozier, J.; Bair, N.; Calfa, A. A.; Skalka, C.; Tolle, K.; Bongard, J.
2015-12-01
The task is to estimate spatiotemporally distributed estimates of snow water equivalent (SWE) in snow-dominated mountain environments, including those that lack on-the-ground measurements such as the Hindu Kush range in Afghanistan. During the snow season, we can use two measurements: (1) passive microwave estimates of SWE, which generally underestimate in the mountains; (2) fractional snow-covered area from MODIS. Once the snow has melted, we can reconstruct the accumulated SWE back to the last significant snowfall by calculating the energy used in melt. The reconstructed SWE values provide a training set for predictions from the passive microwave SWE and snow-covered area. We examine several machine learning methods—regression-boosted decision trees, bagged trees, neural networks, and genetic programming—to estimate SWE. All methods work reasonably well, with R2 values greater than 0.8. Predictors built with multiple years of data reduce the bias that usually appears if we predict one year from just one other year's training set. Genetic programming tends to produce results that additionally provide physical insight. Adding precipitation estimates from the Global Precipitation Measurements mission is in progress.
Teaching Robust Methods for Exploratory Data Analysis.
1980-10-01
of adding a new point x to a sample x_1, ..., x_n. The influence function of the estimate T at the value x is defined to be the change in T produced by adding x. For example, if T is the mean (sum of x_i)/n, we can calculate I+(x, xbar) proportional to x - xbar. Plotting I+ and I-, we see that the mean has an unbounded influence function, and is therefore not robust.
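The influence-function idea mentioned above can be illustrated with its empirical version, the sensitivity curve: add a point x to the sample and measure the scaled change in the estimate. For the mean, this grows without bound in x, which is exactly the non-robustness being taught. The helper below is a standard construction, not code from the report:

```python
def sensitivity_curve(estimator, sample, x):
    """Empirical influence of adding a point x to the sample:
    (n + 1) * (T(sample + [x]) - T(sample))."""
    n = len(sample)
    return (n + 1) * (estimator(sample + [x]) - estimator(sample))

def mean(xs):
    return sum(xs) / len(xs)

sample = [1.0, 2.0, 3.0, 4.0]
# For the mean, the influence of x equals x - xbar: unbounded in x.
print(sensitivity_curve(mean, sample, 100.0))  # → 97.5 (= 100 - 2.5)
```

A robust estimator such as the median keeps this quantity bounded no matter how far out x lies.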
Michigan's forests 1993: An analysis. Forest Service resource bulletin
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmidt, T.L.; Spencer, J.S.; Bertsch, R.
1997-02-04
Michigan's forests are abundant, diverse, healthy, productive, and expanding. These forests make important contributions to the quality of life by providing a wide array of benefits, including wildlife habitat, biological diversity, outdoor recreation, improved air and water quality, and economic resources such as the estimated $12 billion of value added and 200,000 jobs annually supported by forest-based industries, tourism, and recreation.
Todd A. Schroeder; Gretchen G. Moisen; Sean P. Healey; Warren B. Cohen
2012-01-01
In addition to being one of the primary drivers of the net terrestrial carbon budget, forest disturbance also plays a critical role in regulating the surface energy balance, promoting biodiversity, and creating wildlife habitat. With climate change and an ever growing human population poised to alter the frequency and severity of disturbance regimes across the globe,...
Nongovernment Philanthropic Spending on Public Health in the United States.
Shaw-Taylor, Yoku
2016-01-01
The objective of this study was to estimate the dollar amount of nongovernment philanthropic spending on public health activities in the United States. Health expenditure data were derived from the US National Health Expenditures Accounts and the US Census Bureau. Results reveal that spending on public health is not disaggregated from health spending in general. The level of philanthropic spending is estimated as, on average, 7% of overall health spending, or about $150 billion annually according to National Health Expenditures Accounts data tables. When a point estimate of charity care provided by hospitals and office-based physicians is added, the value of nongovernment philanthropic expenditures reaches approximately $203 billion, or about 10% of all health spending annually.
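The quoted shares can be reproduced with back-of-envelope arithmetic (a sketch; the rounded figures come straight from the abstract):

```python
philanthropy = 150e9                   # ~7% of overall health spending
total_spending = philanthropy / 0.07   # implied total: ~$2.14 trillion
with_charity_care = 203e9              # philanthropy plus estimated charity care

share = with_charity_care / total_spending
print(f"{share:.1%}")  # → 9.5%, i.e. "about 10% of all health spending"
```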
The Added Value of Water Footprint Assessment for National Water Policy: A Case Study for Morocco
Schyns, Joep F.; Hoekstra, Arjen Y.
2014-01-01
A Water Footprint Assessment is carried out for Morocco, mapping the water footprint of different activities at river basin and monthly scale, distinguishing between surface- and groundwater. The paper aims to demonstrate the added value of detailed analysis of the human water footprint within a country and thorough assessment of the virtual water flows leaving and entering a country for formulating national water policy. Green, blue and grey water footprint estimates and virtual water flows are mainly derived from a previous grid-based (5×5 arc minute) global study for the period 1996–2005. These estimates are placed in the context of monthly natural runoff and waste assimilation capacity per river basin derived from Moroccan data sources. The study finds that: (i) evaporation from storage reservoirs is the second largest form of blue water consumption in Morocco, after irrigated crop production; (ii) Morocco’s water and land resources are mainly used to produce relatively low-value (in US$/m3 and US$/ha) crops such as cereals, olives and almonds; (iii) most of the virtual water export from Morocco relates to the export of products with a relatively low economic water productivity (in US$/m3); (iv) blue water scarcity on a monthly scale is severe in all river basins and pressure on groundwater resources by abstractions and nitrate pollution is considerable in most basins; (v) the estimated potential water savings by partial relocation of crops to basins where they consume less water and by reducing water footprints of crops down to benchmark levels are significant compared to demand reducing and supply increasing measures considered in Morocco’s national water strategy. PMID:24919194
Implementing Value-Added Measures of School Effectiveness: Getting the Incentives Right.
ERIC Educational Resources Information Center
Ladd, Helen F.; Walsh, Randall P.
2002-01-01
Evaluates value-added approach to measuring school effectiveness in North and South Carolina. Finds that value-added approach favors high-achievement schools, with large percentage of students from high-SES backgrounds. Discusses statistical problems in measuring value added. Concludes teachers' and administrators' avoidance of low-achievement,…
Diagnostic and Prognostic Utility of the Synaptic Marker Neurogranin in Alzheimer Disease
Tarawneh, Rawan; D’Angelo, Gina; Crimmins, Dan; Herries, Elizabeth; Griest, Terry; Fagan, Anne M.; Zipfel, Gregory J.; Ladenson, Jack H.; Morris, John C.; Holtzman, David M.
2016-01-01
IMPORTANCE Synaptic loss is an early pathologic substrate of Alzheimer disease (AD). Neurogranin is a postsynaptic neuronal protein that has demonstrated utility as a cerebrospinal fluid (CSF) marker of synaptic loss in AD. OBJECTIVE To investigate the diagnostic and prognostic utility of CSF neurogranin levels in a large, well-characterized cohort of individuals with symptomatic AD and cognitively normal controls. DESIGN, SETTING, AND PARTICIPANTS A cross-sectional and longitudinal observational study of cognitive decline in patients with symptomatic AD and cognitively normal controls was performed. Participants were individuals with a clinical diagnosis of early symptomatic AD and cognitively normal controls who were enrolled in longitudinal studies of aging and dementia at the Charles F. and Joanne Knight Alzheimer Disease Research Center, Washington University School of Medicine, from January 21, 2000, through March 21, 2011. Data analysis was performed from November 1, 2013, to March 31, 2015. MAIN OUTCOMES AND MEASURES Correlations between baseline CSF biomarker levels and future cognitive decline in patients with symptomatic AD and cognitively normal controls over time. RESULTS A total of 302 individuals (mean [SE] age, 73.1 [0.4] years) were included in this study (95 patients [52 women and 43 men] with AD and 207 controls [125 women and 82 men]). The CSF neurogranin levels differentiated patients with early symptomatic AD from controls with comparable diagnostic utility (mean [SE] area under the receiver operating characteristic curve, 0.71 [0.03]; 95% CI, 0.64–0.77) to the other CSF biomarkers. The CSF neurogranin levels correlated with brain atrophy (normalized whole-brain volumes: adjusted r = −0.38, P = .02; hippocampal volumes: adjusted r = −0.36, P = .03; entorhinal volumes: adjusted r = −0.46, P = .006; and parahippocampal volumes: adjusted r = −0.47, P = .005, n = 38) in AD and with amyloid load (r = 0.39, P = .02, n = 36) in preclinical AD.
The CSF neurogranin levels predicted future cognitive impairment (adjusted hazard ratio, 1.89; 95% CI, 1.29–2.78; P = .001 as a continuous measure, and adjusted hazard ratio, 2.78; 95% CI, 1.13–5.99; P = .02 as a categorical measure using the 85th percentile cutoff value) in controls and rates of cognitive decline (Clinical Dementia Rating sum of boxes score: β estimate, 0.29; P = .001; global composite scores: β estimate, −0.11; P = .001; episodic memory scores: β estimate, −0.18; P < .001; and semantic memory scores: β estimate, −0.06; P = .04, n = 57) in patients with symptomatic AD over time, similarly to the CSF proteins VILIP-1, tau, and p-tau181. CONCLUSIONS AND RELEVANCE The CSF levels of the synaptic marker neurogranin offer diagnostic and prognostic utility for early symptomatic AD that is comparable to other CSF markers of AD. Importantly, CSF neurogranin complements the collective ability of these markers to predict future cognitive decline in cognitively normal individuals and, therefore, will be a useful addition to the current panel of AD biomarkers. PMID:27018940
Intake of total and added sugars and nutrient dilution in Australian children and adolescents.
Louie, Jimmy Chun Yu; Tapsell, Linda C
2015-12-14
This analysis aimed to examine the association between intake of sugars (total or added) and nutrient intake with data from a recent Australian national nutrition survey, the 2007 Australian National Children's Nutrition and Physical Activity Survey (2007ANCNPAS). Data from participants (n 4140; 51 % male) who provided two plausible 24-h recalls were included in the analysis. The values on added sugars for foods were estimated using a previously published ten-step systematic methodology. Reported intakes of nutrients and foods defined in the 2007ANCNPAS were analysed by age- and sex-specific quintiles of %energy from added sugars (%EAS) or %energy from total sugars (%ETS) using ANCOVA. Linear trends across the quintiles were examined using multiple linear regression. Logistic regression analysis was used to calculate the OR of not meeting a specified nutrient reference value for Australia and New Zealand per unit increase in %EAS or %ETS. Analyses were adjusted for age, sex, BMI z-score and total energy intake. Small but significant negative associations were seen between %EAS and the intakes of most nutrients (all P<0·001). For %ETS the associations with nutrient intakes were inconsistent; even then they were smaller than those for %EAS. In general, higher intakes of added sugars were associated with lower intakes of most nutrient-rich, 'core' food groups and higher intakes of energy-dense, nutrient-poor 'extra' foods. In conclusion, assessing intakes of added sugars may be a better approach for addressing issues of diet quality compared with intakes of total sugars.
Rossberg, Siri; Gerhold, Kerstin; Geske, Thomas; Zimmermann, Kurt; Menke, Georg; Zaino, Mohammad; Wahn, Ulrich; Hamelmann, Eckard; Lau, Susanne
2016-11-01
Accessible markers to predict the development of atopic diseases are highly desirable but still a matter of debate. We investigated the role of blood eosinophils at 4 weeks and 7 months of life and their association with developing atopic dermatitis (AD) in a birth cohort of children with atopic heredity. Infant blood samples for eosinophil counts were taken from 559 infants at 4 weeks and from 467 infants at 7 months of life with at least one atopic parent. Elevation of blood eosinophils was defined as ≥ 5% of total leukocytes, and the association with the occurrence of AD was assessed from 2 × 2 tables; odds ratios were estimated, followed by hypothesis testing against the alternative working hypothesis: odds ratio ≠ 1. Survival analysis was carried out estimating the Kaplan-Meier product limit estimator from the life-time table of AD score and time to AD manifestation, stratified by the eosinophil binary score. Elevated blood eosinophils observed at 4 weeks were significantly associated with the occurrence of AD in the whole cohort at the age of 7 months (p = 0.007), 1 year (p = 0.004), 2 years (p = 0.007) and 3 years (p = 0.006) of life. AD occurred approximately 12 weeks earlier in infants with elevated blood eosinophils at 4 weeks of life. Blood eosinophil counts ≥ 5% at 7 months of life failed to show significance for AD; for a 4.5% cutoff, a significant association was observed at 7 months (p = 0.005), 1 year (p = 0.039), 2 years (p = 0.033) and 3 years (p = 0.034) of life. Elevated blood eosinophils at age 4 weeks have a predictive value for the onset of atopic dermatitis in infancy and early childhood in children at high risk for atopy. Early eosinophil counts may therefore be helpful for counseling parents to provide infant skincare and, furthermore, for identifying individuals for interventional trials aiming at allergy prevention.
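The 2 × 2-table analysis described in the abstract can be sketched as follows. The counts below are hypothetical placeholders (not the study's data), and the Woolf log method is one common choice of approximate confidence interval:

```python
import math

def odds_ratio_2x2(a, b, c, d):
    """Odds ratio for a 2x2 table laid out as:
                    AD     no AD
    eos >= 5%        a       b
    eos  < 5%        c       d
    Returns the OR and an approximate 95% CI (Woolf / log method)."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, (lo, hi)

# Hypothetical counts for illustration only:
or_, ci = odds_ratio_2x2(30, 70, 40, 280)
print(round(or_, 2), tuple(round(v, 2) for v in ci))
```

Rejecting the null (odds ratio ≠ 1) then corresponds to the 95% CI excluding 1.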
Estimating Free and Added Sugar Intakes in New Zealand.
Kibblewhite, Rachael; Nettleton, Alice; McLean, Rachael; Haszard, Jillian; Fleming, Elizabeth; Kruimer, Devonia; Te Morenga, Lisa
2017-11-27
The reduction of free or added sugar intake (sugars added to food and drinks as a sweetener) is almost universally recommended to reduce the risk of obesity-related diseases and dental caries. The World Health Organisation recommends intakes of free sugars of less than 10% of energy intake. However, estimating and monitoring intakes at the population level is challenging because free sugars cannot be analytically distinguished from naturally occurring sugars and most national food composition databases do not include data on free or added sugars. We developed free and added sugar estimates for the New Zealand (NZ) food composition database (FOODfiles 2010) by adapting a method developed for Australia. We reanalyzed the 24 h recall dietary data collected for 4721 adults aged 15 years and over participating in the nationally representative 2008/09 New Zealand Adult Nutrition Survey to estimate free and added sugar intakes. The median estimated intake of free and added sugars was 57 and 49 g/day respectively and 42% of adults consumed less than 10% of their energy intake from free sugars. This approach provides more direct estimates of the free and added sugar contents of New Zealand foods than previously available and will enable monitoring of adherence to free sugar intake guidelines in future.
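The WHO threshold used here (free sugars below 10% of energy intake) is a simple ratio of sugar energy to total energy. A minimal sketch, assuming the standard conversion of roughly 17 kJ per gram of sugar and a hypothetical 9000 kJ daily intake (the 57 g figure is the median reported above):

```python
# Approximate energy content of sugar (Atwater factor, ~4 kcal/g).
KJ_PER_G_SUGAR = 17.0

def pct_energy_from_sugar(sugar_g, total_energy_kj):
    """Percentage of total energy intake contributed by free sugars."""
    return 100.0 * sugar_g * KJ_PER_G_SUGAR / total_energy_kj

# 57 g/day free sugars against a hypothetical 9000 kJ/day intake:
print(round(pct_energy_from_sugar(57, 9000), 1))  # just above the 10%E guideline
```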
Poor Gait Performance and Prediction of Dementia: Results From a Meta-Analysis.
Beauchet, Olivier; Annweiler, Cédric; Callisaya, Michele L; De Cock, Anne-Marie; Helbostad, Jorunn L; Kressig, Reto W; Srikanth, Velandai; Steinmetz, Jean-Paul; Blumen, Helena M; Verghese, Joe; Allali, Gilles
2016-06-01
Poor gait performance predicts risk of developing dementia. No structured critical evaluation has been conducted to study this association yet. The aim of this meta-analysis was to systematically examine the association of poor gait performance with incidence of dementia. An English and French Medline search was conducted in June 2015, with no limit of date, using the medical subject headings terms "Gait" OR "Gait Disorders, Neurologic" OR "Gait Apraxia" OR "Gait Ataxia" AND "Dementia" OR "Frontotemporal Dementia" OR "Dementia, Multi-Infarct" OR "Dementia, Vascular" OR "Alzheimer Disease" OR "Lewy Body Disease" OR "Frontotemporal Dementia With Motor Neuron Disease" (Supplementary Concept). Poor gait performance was defined by standardized tests of walking, and dementia was diagnosed according to international consensus criteria. Four etiologies of dementia were identified: any dementia, Alzheimer disease (AD), vascular dementia (VaD), and non-AD (ie, pooling VaD, mixed dementias, and other dementias). Fixed-effects meta-analyses were performed on the estimates in order to generate summary values. Of the 796 identified abstracts, 12 (1.5%) were included in this systematic review and meta-analysis. Poor gait performance predicted dementia [pooled hazard ratio (HR) combined with relative risk and odds ratio = 1.53 with P < .001 for any dementia, pooled HR = 1.79 with P < .001 for VaD, HR = 1.89 with P < .001 for non-AD]. Findings were weaker for predicting AD (HR = 1.03 with P = .004). This meta-analysis provides evidence that poor gait performance predicts dementia. This association depends on the type of dementia; poor gait performance is a stronger predictor of non-AD dementias than AD.
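A fixed-effect meta-analysis of hazard ratios, as performed above, pools log-HRs weighted by inverse variance. The sketch below uses hypothetical study estimates (not the review's data) and recovers each standard error from its reported 95% CI:

```python
import math

def pool_fixed_effect(hrs, cis):
    """Inverse-variance fixed-effect pooling of hazard ratios.
    Each CI is a (lower, upper) 95% interval; the SE of log(HR)
    is recovered as (log(upper) - log(lower)) / (2 * 1.96)."""
    num = den = 0.0
    for hr, (lo, hi) in zip(hrs, cis):
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1.0 / se**2                # weight = 1 / variance of log(HR)
        num += w * math.log(hr)
        den += w
    return math.exp(num / den)         # pooled HR on the original scale

# Hypothetical per-study HRs and 95% CIs, for illustration only:
pooled = pool_fixed_effect([1.4, 1.7, 1.5],
                           [(1.1, 1.8), (1.2, 2.4), (1.0, 2.2)])
print(round(pooled, 2))
```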
NASA Astrophysics Data System (ADS)
Sato, Tomohiro O.; Sato, Takao M.; Sagawa, Hideo; Noguchi, Katsuyuki; Saitoh, Naoko; Irie, Hitoshi; Kita, Kazuyuki; Mahani, Mona E.; Zettsu, Koji; Imasu, Ryoichi; Hayashida, Sachiko; Kasai, Yasuko
2018-03-01
We performed a feasibility study of constraining the vertical profile of the tropospheric ozone by using a synergetic retrieval method on multiple spectra, i.e., ultraviolet (UV), thermal infrared (TIR), and microwave (MW) ranges, measured from space. This work provides, for the first time, a quantitative evaluation of the retrieval sensitivity of the tropospheric ozone by adding the MW measurement to the UV and TIR measurements. Two observation points in East Asia (one in an urban area and one in an ocean area) and two observation times (one during summer and one during winter) were assumed. The line-of-sight geometry was nadir (down-looking) for the UV and TIR measurements and limb sounding for the MW measurement. The retrieval sensitivities of the ozone profiles in the upper troposphere (UT), middle troposphere (MT), and lowermost troposphere (LMT) were estimated using the degrees of freedom for signal (DFS), the pressure of maximum sensitivity, the reduction rate of error from the a priori error, and the averaging kernel matrix, derived based on the optimal estimation method. The measurement noise levels were assumed to be the same as those for currently available instruments. The weighting functions for the UV, TIR, and MW ranges were calculated using the SCIATRAN radiative transfer model, the Line-By-Line Radiative Transfer Model (LBLRTM), and the Advanced Model for Atmospheric Terahertz Radiation Analysis and Simulation (AMATERASU), respectively. The DFS value was increased by approximately 96, 23, and 30% by adding the MW measurements to the combination of UV and TIR measurements in the UT, MT, and LMT regions, respectively. The MW measurement increased the DFS value of the LMT ozone; nevertheless, the MW measurement alone has no sensitivity to the LMT ozone. The pressure of maximum sensitivity value for the LMT ozone was also increased by adding the MW measurement.
These findings indicate that better information on LMT ozone can be obtained by adding constraints on the UT and MT ozone from the MW measurement. The results of this study are applicable to the upcoming air-quality monitoring missions, APOLLO, GMAP-Asia, and uvSCOPE.
Specifying the ISS Plasma Environment
NASA Technical Reports Server (NTRS)
Minow, Joseph I.; Diekmann, Anne; Neergaard, Linda; Bui, Them; Mikatarian, Ronald; Barsamian, Hagop; Koontz, Steven
2002-01-01
Quantifying the spacecraft charging risks and corresponding hazards for the International Space Station (ISS) requires a plasma environment specification describing the natural variability of ionospheric temperature (Te) and density (Ne). Empirical ionospheric specification and forecast models such as the International Reference Ionosphere (IRI) model typically only provide estimates of long term (seasonal) mean Te and Ne values for the low Earth orbit environment. Knowledge of the Te and Ne variability as well as the likelihood of extreme deviations from the mean values are required to estimate both the magnitude and frequency of occurrence of potentially hazardous spacecraft charging environments for a given ISS construction stage and flight configuration. This paper describes the statistical analysis of historical ionospheric low Earth orbit plasma measurements used to estimate Ne, Te variability in the ISS flight environment. The statistical variability analysis of Ne and Te enables calculation of the expected frequency of occurrence of any particular values of Ne and Te, especially those that correspond to possibly hazardous spacecraft charging environments. The database used in the original analysis included measurements from the AE-C, AE-D, and DE-2 satellites. Recent work on the database has added additional satellites as well as ground-based incoherent scatter radar observations. Deviations of the data values from the IRI estimated Ne, Te parameters for each data point provide a statistical basis for modeling the deviations of the plasma environment from the IRI model output.
Evaluating Teachers: The Important Role of Value-Added
ERIC Educational Resources Information Center
Glazerman, Steven; Loeb, Susanna; Goldhaber, Dan; Staiger, Douglas; Raudenbush, Stephen; Whitehurst, Grover
2010-01-01
The evaluation of teachers based on the contribution they make to the learning of their students, "value-added", is an increasingly popular but controversial education reform policy. In this report, the authors highlight and try to clarify four areas of confusion about value-added. The first is between value-added information and the…
Selecting Value-Added Models for Postsecondary Institutional Assessment
ERIC Educational Resources Information Center
Steedle, Jeffrey T.
2012-01-01
Value-added scores from tests of college learning indicate how score gains compare to those expected from students of similar entering academic ability. Unfortunately, the choice of value-added model can impact results, and this makes it difficult to determine which results to trust. The research presented here demonstrates how value-added models…
Value Added in English Schools
ERIC Educational Resources Information Center
Ray, Andrew; McCormack, Tanya; Evans, Helen
2009-01-01
Value-added indicators are now a central part of school accountability in England, and value-added information is routinely used in school improvement at both the national and the local levels. This article describes the value-added models that are being used in the academic year 2007-8 by schools, parents, school inspectors, and other…
48 CFR 252.229-7006 - Value added tax exclusion (United Kingdom).
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Value added tax exclusion... CLAUSES Text of Provisions And Clauses 252.229-7006 Value added tax exclusion (United Kingdom). As prescribed in 229.402-70(f), use the following clause: Value Added Tax Exclusion (United Kingdom) (JUN 1997...
48 CFR 252.229-7006 - Value added tax exclusion (United Kingdom).
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Value added tax exclusion... CLAUSES Text of Provisions And Clauses 252.229-7006 Value added tax exclusion (United Kingdom). As prescribed in 229.402-70(f), use the following clause: Value Added Tax Exclusion (United Kingdom) (JUN 1997...
2 CFR 200.470 - Taxes (including Value Added Tax).
Code of Federal Regulations, 2014 CFR
2014-01-01
... 2 Grants and Agreements 1 2014-01-01 2014-01-01 false Taxes (including Value Added Tax). 200.470... Cost § 200.470 Taxes (including Value Added Tax). (a) For states, local governments and Indian tribes... Federal government for the taxes, interest, and penalties. (c) Value Added Tax (VAT) Foreign taxes charged...
7 CFR 766.202 - Determining the shared appreciation due.
Code of Federal Regulations, 2012 CFR
2012-01-01
... resulting from capital improvements added during the term of the SAA (contributory value). The market value... contributory value of capital improvements added during the term of the SAA will be deducted from the market... value added to the real property by the new or expanded portion of the original residence (if it added...
48 CFR 252.229-7006 - Value Added Tax Exclusion (United Kingdom)
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Value Added Tax Exclusion... CLAUSES Text of Provisions And Clauses 252.229-7006 Value Added Tax Exclusion (United Kingdom) As prescribed in 229.402-70(f), use the following clause: Value Added Tax Exclusion (United Kingdom) (DEC 2011) The...
48 CFR 252.229-7006 - Value Added Tax Exclusion (United Kingdom)
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Value Added Tax Exclusion... CLAUSES Text of Provisions And Clauses 252.229-7006 Value Added Tax Exclusion (United Kingdom) As prescribed in 229.402-70(f), use the following clause: Value Added Tax Exclusion (United Kingdom) (DEC 2011) The...
48 CFR 252.229-7006 - Value Added Tax Exclusion (United Kingdom)
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Value Added Tax Exclusion... CLAUSES Text of Provisions And Clauses 252.229-7006 Value Added Tax Exclusion (United Kingdom) As prescribed in 229.402-70(f), use the following clause: Value Added Tax Exclusion (United Kingdom) (DEC 2011) The...
Wahl, Simone; Boulesteix, Anne-Laure; Zierer, Astrid; Thorand, Barbara; van de Wiel, Mark A
2016-10-26
Missing values are a frequent issue in human studies. In many situations, multiple imputation (MI) is an appropriate missing data handling strategy, whereby missing values are imputed multiple times, the analysis is performed in every imputed data set, and the obtained estimates are pooled. If the aim is to estimate (added) predictive performance measures, such as (change in) the area under the receiver-operating characteristic curve (AUC), internal validation strategies become desirable in order to correct for optimism. It is not fully understood how internal validation should be combined with multiple imputation. In a comprehensive simulation study and in a real data set based on blood markers as predictors for mortality, we compare three combination strategies: Val-MI, internal validation followed by MI on the training and test parts separately; MI-Val, MI on the full data set followed by internal validation; and MI(-y)-Val, MI on the full data set omitting the outcome followed by internal validation. Different validation strategies, including bootstrap and cross-validation, different (added) performance measures, and various data characteristics are considered, and the strategies are evaluated with regard to bias and mean squared error of the obtained performance estimates. In addition, we elaborate on the number of resamples and imputations to be used, and adapt a strategy for confidence interval construction to incomplete data. Internal validation is essential in order to avoid optimism, with the bootstrap 0.632+ estimate representing a reliable method to correct for optimism. While estimates obtained by MI-Val are optimistically biased, those obtained by MI(-y)-Val tend to be pessimistic in the presence of a true underlying effect. Val-MI provides largely unbiased estimates, with a slight pessimistic bias with increasing true effect size, number of covariates and decreasing sample size.
In Val-MI, accuracy of the estimate is more strongly improved by increasing the number of bootstrap draws rather than the number of imputations. With a simple integrated approach, valid confidence intervals for performance estimates can be obtained. When prognostic models are developed on incomplete data, Val-MI represents a valid strategy to obtain estimates of predictive performance measures.
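The pooling step mentioned at the start of the abstract (combining estimates across imputed data sets) is conventionally done with Rubin's rules. A minimal sketch with hypothetical AUC estimates from m = 5 imputations (values are illustrative only):

```python
import math

def rubin_pool(estimates, variances):
    """Rubin's rules: pool m per-imputation estimates and variances.
    Returns the pooled estimate and its total variance
    (within-imputation + between-imputation components)."""
    m = len(estimates)
    qbar = sum(estimates) / m                              # pooled estimate
    ubar = sum(variances) / m                              # within-imputation variance
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)  # between-imputation variance
    total = ubar + (1 + 1 / m) * b
    return qbar, total

# Hypothetical AUC estimates and variances from 5 imputed data sets:
qbar, tvar = rubin_pool([0.71, 0.73, 0.70, 0.72, 0.74],
                        [0.0004, 0.0005, 0.0004, 0.0005, 0.0004])
print(round(qbar, 3), round(math.sqrt(tvar), 3))
```

In Val-MI, this pooling is applied within each resample's training/test split rather than once on the full data.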
Spatio-Temporal History of HIV-1 CRF35_AD in Afghanistan and Iran.
Eybpoosh, Sana; Bahrampour, Abbas; Karamouzian, Mohammad; Azadmanesh, Kayhan; Jahanbakhsh, Fatemeh; Mostafavi, Ehsan; Zolala, Farzaneh; Haghdoost, Ali Akbar
2016-01-01
HIV-1 Circulating Recombinant Form 35_AD (CRF35_AD) has an important position in the epidemiological profile of Afghanistan and Iran. Despite the presence of this clade in Afghanistan and Iran for over a decade, our understanding of its origin and dissemination patterns is limited. In this study, we performed a Bayesian phylogeographic analysis to reconstruct the spatio-temporal dispersion pattern of this clade using eligible CRF35_AD gag and pol sequences available in the Los Alamos HIV database (432 sequences available from Iran, 16 sequences available from Afghanistan, and a single CRF35_AD-like pol sequence available from USA). A Bayesian Markov Chain Monte Carlo algorithm was implemented in BEAST v1.8.1. Between-country dispersion rates were tested with the Bayesian stochastic search variable selection method and were considered significant where Bayes factor values were greater than three. The findings suggested that CRF35_AD sequences were genetically similar to parental sequences from Kenya and Uganda, and to a set of subtype A1 sequences available from Afghan refugees living in Pakistan. Our results also showed that across all phylogenies, Afghan and Iranian CRF35_AD sequences formed a monophyletic cluster (posterior clade credibility > 0.7). The divergence date of this cluster was estimated to be between 1990 and 1992. Within this cluster, a bidirectional dispersion of the virus was observed across Afghanistan and Iran. We could not clearly identify if Afghanistan or Iran first established or received this epidemic, as the root location of this cluster could not be robustly estimated. Three CRF35_AD sequences from Afghan refugees living in Pakistan nested among Afghan and Iranian CRF35_AD branches. However, the CRF35_AD-like sequence available from USA diverged independently from Kenyan subtype A1 sequences, suggesting it not to be a true CRF35_AD lineage.
Potential factors contributing to viral exchange between Afghanistan and Iran could be injection drug networks and the mass migration of Afghan refugees and labourers to Iran, which calls for extensive preventive efforts.
Spatio-Temporal History of HIV-1 CRF35_AD in Afghanistan and Iran
Eybpoosh, Sana; Bahrampour, Abbas; Karamouzian, Mohammad; Azadmanesh, Kayhan; Jahanbakhsh, Fatemeh; Mostafavi, Ehsan; Zolala, Farzaneh; Haghdoost, Ali Akbar
2016-01-01
HIV-1 Circulating Recombinant Form 35_AD (CRF35_AD) has an important position in the epidemiological profile of Afghanistan and Iran. Despite the presence of this clade in Afghanistan and Iran for over a decade, our understanding of its origin and dissemination patterns is limited. In this study, we performed a Bayesian phylogeographic analysis to reconstruct the spatio-temporal dispersion pattern of this clade using eligible CRF35_AD gag and pol sequences available in the Los Alamos HIV database (432 sequences available from Iran, 16 sequences available from Afghanistan, and a single CRF35_AD-like pol sequence available from USA). Bayesian Markov Chain Monte Carlo algorithm was implemented in BEAST v1.8.1. Between-country dispersion rates were tested with Bayesian stochastic search variable selection method and were considered significant where Bayes factor values were greater than three. The findings suggested that CRF35_AD sequences were genetically similar to parental sequences from Kenya and Uganda, and to a set of subtype A1 sequences available from Afghan refugees living in Pakistan. Our results also showed that across all phylogenies, Afghan and Iranian CRF35_AD sequences formed a monophyletic cluster (posterior clade credibility> 0.7). The divergence date of this cluster was estimated to be between 1990 and 1992. Within this cluster, a bidirectional dispersion of the virus was observed across Afghanistan and Iran. We could not clearly identify if Afghanistan or Iran first established or received this epidemic, as the root location of this cluster could not be robustly estimated. Three CRF35_AD sequences from Afghan refugees living in Pakistan nested among Afghan and Iranian CRF35_AD branches. However, the CRF35_AD-like sequence available from USA diverged independently from Kenyan subtype A1 sequences, suggesting it not to be a true CRF35_AD lineage. 
Potential factors contributing to viral exchange between Afghanistan and Iran could include injection drug networks and the mass migration of Afghan refugees and labourers to Iran, which calls for extensive preventive efforts. PMID:27280293
Bhanegaonkar, Abhijeet J; Horodniceanu, Erica G; Abdul Latiff, Amir Hamzah; Woodhull, Sanjay; Khoo, Phaik Choo; Detzel, Patrick; Ji, Xiang; Botteman, Marc F
2015-04-01
Breastfeeding is best for infants and the World Health Organization recommends exclusive breastfeeding for at least the first 6 months of life. For those who are unable to be breastfed, previous studies demonstrate that feeding high-risk infants with hydrolyzed formulas instead of cow's milk formula (CMF) may decrease the risk of atopic dermatitis (AD). To estimate the economic impact of feeding high-risk, not exclusively breastfed, urban Malaysian infants with partially hydrolyzed whey-based formula (PHF-W) instead of CMF for the first 17 weeks of life as an AD risk reduction strategy. A cohort Markov model simulated the AD incidence and burden from birth to age 6 years in the target population fed with PHF-W vs. CMF. The model integrated published clinical and epidemiologic data, local cost data, and expert opinion. Modeled outcomes included AD-risk reduction, time spent post-AD diagnosis, days without AD flare, quality-adjusted life years (QALYs), and costs (direct and indirect). Outcomes were discounted at 3% per year. Costs are expressed in Malaysian Ringgit (MYR; MYR 1,000 = United States dollar [US $]316.50). Feeding a high-risk infant PHF-W vs. CMF resulted in a 14-percentage-point reduction in AD risk (95% confidence interval [CI], 3%-23%), a 0.69-year (95% CI, 0.25-1.10) reduction in time spent post-AD diagnosis, an additional 38 (95% CI, 2-94) days without AD flare, and an undiscounted gain of 0.041 (95% CI, 0.007-0.103) QALYs. The discounted AD-related 6-year cost estimates when feeding a high-risk infant with PHF-W were MYR 1,758 (US $556) (95% CI, MYR 917-3,033) and with CMF MYR 2,871 (US $909) (95% CI, MYR 1,697-4,278), resulting in a per-child net saving of MYR 1,113 (US $352) (95% CI, MYR 317-1,884) favoring PHF-W. Using PHF-W instead of CMF in this population is expected to result in AD-related cost savings.
Economic value of atopic dermatitis prevention via infant formula use in high-risk Malaysian infants
Bhanegaonkar, Abhijeet J; Horodniceanu, Erica G; Abdul Latiff, Amir Hamzah; Woodhull, Sanjay; Khoo, Phaik Choo; Detzel, Patrick; Ji, Xiang
2015-01-01
Background Breastfeeding is best for infants and the World Health Organization recommends exclusive breastfeeding for at least the first 6 months of life. For those who are unable to be breastfed, previous studies demonstrate that feeding high-risk infants with hydrolyzed formulas instead of cow's milk formula (CMF) may decrease the risk of atopic dermatitis (AD). Objective To estimate the economic impact of feeding high-risk, not exclusively breastfed, urban Malaysian infants with partially hydrolyzed whey-based formula (PHF-W) instead of CMF for the first 17 weeks of life as an AD risk reduction strategy. Methods A cohort Markov model simulated the AD incidence and burden from birth to age 6 years in the target population fed with PHF-W vs. CMF. The model integrated published clinical and epidemiologic data, local cost data, and expert opinion. Modeled outcomes included AD-risk reduction, time spent post-AD diagnosis, days without AD flare, quality-adjusted life years (QALYs), and costs (direct and indirect). Outcomes were discounted at 3% per year. Costs are expressed in Malaysian Ringgit (MYR; MYR 1,000 = United States dollar [US $]316.50). Results Feeding a high-risk infant PHF-W vs. CMF resulted in a 14-percentage-point reduction in AD risk (95% confidence interval [CI], 3%-23%), a 0.69-year (95% CI, 0.25-1.10) reduction in time spent post-AD diagnosis, an additional 38 (95% CI, 2-94) days without AD flare, and an undiscounted gain of 0.041 (95% CI, 0.007-0.103) QALYs. The discounted AD-related 6-year cost estimates when feeding a high-risk infant with PHF-W were MYR 1,758 (US $556) (95% CI, MYR 917-3,033) and with CMF MYR 2,871 (US $909) (95% CI, MYR 1,697-4,278), resulting in a per-child net saving of MYR 1,113 (US $352) (95% CI, MYR 317-1,884) favoring PHF-W. Conclusion Using PHF-W instead of CMF in this population is expected to result in AD-related cost savings. PMID:25938073
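The discounting step in the Markov model above can be illustrated with a minimal sketch. Only the 3% annual discount rate and the 6-year horizon come from the abstract; the annual cost streams below are invented placeholders, not the published model inputs.

```python
# Hypothetical sketch of the discounted-cost comparison described above.
# The annual cost streams are illustrative placeholders; only the 3%
# discount rate and 6-year horizon come from the abstract.

def discounted_total(annual_costs, rate=0.03):
    """Sum a stream of annual costs, discounting each year at `rate`."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(annual_costs))

# Illustrative annual AD-related costs (MYR) for the two feeding arms.
cmf_costs = [700, 600, 500, 450, 400, 350]    # cow's milk formula arm
phfw_costs = [450, 380, 320, 280, 250, 220]   # PHF-W arm

saving = discounted_total(cmf_costs) - discounted_total(phfw_costs)
print(f"per-child discounted saving: MYR {saving:.0f}")
```

A real cohort Markov model would derive these cost streams from state-occupancy probabilities (AD vs. no-AD) per cycle; the discounting arithmetic is the same.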
Combining Relevance Vector Machines and exponential regression for bearing residual life estimation
NASA Astrophysics Data System (ADS)
Di Maio, Francesco; Tsui, Kwok Leung; Zio, Enrico
2012-08-01
In this paper we present a new procedure for estimating the bearing Residual Useful Life (RUL) by combining data-driven and model-based techniques. Respectively, we resort to (i) Relevance Vector Machines (RVMs) for selecting a low number of significant basis functions, called Relevant Vectors (RVs), and (ii) exponential regression to compute and continuously update residual life estimates. The combination of these techniques is developed with reference to partially degraded thrust ball bearings and tested on real-world vibration-based degradation data. On the case study considered, the proposed procedure outperforms other model-based methods, with the added value of adequately representing the uncertainty of the estimates, whose credibility is quantified by the Prognostic Horizon (PH) metric.
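The exponential-regression step can be sketched as follows. The degradation readings and failure threshold here are synthetic, and the RVM basis-selection stage is omitted; this only illustrates fitting y(t) = a·exp(b·t) and extrapolating to a threshold.

```python
import math

# Sketch of the exponential-regression step described above: fit a
# degradation indicator y(t) = a * exp(b * t) by least squares on
# log(y), then extrapolate to a failure threshold to get the RUL.
# Data and threshold are synthetic; the RVM step is not shown.

def fit_exponential(times, values):
    """Least-squares fit of log(y) = log(a) + b*t; returns (a, b)."""
    n = len(times)
    mt = sum(times) / n
    ml = sum(math.log(v) for v in values) / n
    b = sum((t - mt) * (math.log(v) - ml) for t, v in zip(times, values)) / \
        sum((t - mt) ** 2 for t in times)
    a = math.exp(ml - b * mt)
    return a, b

def residual_life(a, b, threshold, now):
    """Time until a*exp(b*t) reaches `threshold`, measured from `now`."""
    return math.log(threshold / a) / b - now

# Synthetic vibration-amplitude readings growing roughly like exp(0.2*t).
t = [0, 1, 2, 3, 4, 5]
y = [1.00, 1.22, 1.49, 1.82, 2.23, 2.72]
a, b = fit_exponential(t, y)
print(f"RUL estimate: {residual_life(a, b, threshold=5.0, now=5):.2f} time units")
```

In the paper's procedure the fit is continuously re-estimated as new vibration data arrive, so the RUL estimate is updated online.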
Willingness to pay for non-angler recreation at the lower Snake River reservoirs
McKean, J.R.; Johnson, D.; Taylor, R.G.; Johnson, Richard L.
2005-01-01
This study applied the travel cost method to estimate demand for non-angler recreation at the impounded Snake River in eastern Washington. Net value per person per recreation trip is estimated for the full non-angler sample and separately for camping, boating, water-skiing, and swimming/picnicking. Certain recreation activities would be reduced or eliminated and new activities would be added if the dams were breached to protect endangered salmon and steelhead. The effect of breaching on non-angling benefits was found by subtracting our benefits estimate from the projected non-angling benefits with breaching. Major issues in demand model specification and definition of the price variables are discussed. The estimation method selected was truncated negative binomial regression with adjustment for self-selection bias.
Tidal Love and Shida numbers estimated by geodetic VLBI.
Krásná, Hana; Böhm, Johannes; Schuh, Harald
2013-10-01
Frequency-dependent Love and Shida numbers, which characterize the Earth response to the tidal forces, were estimated in a global adjustment of all suitable geodetic Very Long Baseline Interferometry (VLBI) sessions from 1984.0 to 2011.0. Several solutions were carried out to determine the Love and Shida numbers for the tidal constituents at periods in the diurnal band and in the long-period band in addition to values of the Love and Shida numbers common for all tides of degree two. Adding up all twelve diurnal tidal waves that were estimated, the total differences in displacement with respect to the theoretical conventional values of the Love and Shida numbers calculated from an Earth model reach 1.73 ± 0.29 mm in radial direction and 1.15 ± 0.15 mm in the transverse plane. The difference in the radial deformation following from the estimates of the zonal Love numbers is largest for the semi-annual tide Ssa with 1.07 ± 0.19 mm.
Tidal Love and Shida numbers estimated by geodetic VLBI
Krásná, Hana; Böhm, Johannes; Schuh, Harald
2013-01-01
Frequency-dependent Love and Shida numbers, which characterize the Earth response to the tidal forces, were estimated in a global adjustment of all suitable geodetic Very Long Baseline Interferometry (VLBI) sessions from 1984.0 to 2011.0. Several solutions were carried out to determine the Love and Shida numbers for the tidal constituents at periods in the diurnal band and in the long-period band in addition to values of the Love and Shida numbers common for all tides of degree two. Adding up all twelve diurnal tidal waves that were estimated, the total differences in displacement with respect to the theoretical conventional values of the Love and Shida numbers calculated from an Earth model reach 1.73 ± 0.29 mm in radial direction and 1.15 ± 0.15 mm in the transverse plane. The difference in the radial deformation following from the estimates of the zonal Love numbers is largest for the semi-annual tide Ssa with 1.07 ± 0.19 mm. PMID:26523082
A memetic optimization algorithm for multi-constrained multicast routing in ad hoc networks
Hammad, Karim; El Bakly, Ahmed M.
2018-01-01
A mobile ad hoc network is a conventional self-configuring network where the routing optimization problem—subject to various Quality-of-Service (QoS) constraints—represents a major challenge. Unlike previously proposed solutions, in this paper, we propose a memetic algorithm (MA) employing an adaptive mutation parameter, to solve the multicast routing problem with higher search ability and computational efficiency. The proposed algorithm utilizes an updated scheme, based on statistical analysis, to estimate the best values for all MA parameters and enhance MA performance. The numerical results show that the proposed MA improved the delay and jitter of the network, while reducing computational complexity as compared to existing algorithms. PMID:29509760
A memetic optimization algorithm for multi-constrained multicast routing in ad hoc networks.
Ramadan, Rahab M; Gasser, Safa M; El-Mahallawy, Mohamed S; Hammad, Karim; El Bakly, Ahmed M
2018-01-01
A mobile ad hoc network is a conventional self-configuring network where the routing optimization problem, subject to various Quality-of-Service (QoS) constraints, represents a major challenge. Unlike previously proposed solutions, in this paper, we propose a memetic algorithm (MA) employing an adaptive mutation parameter, to solve the multicast routing problem with higher search ability and computational efficiency. The proposed algorithm utilizes an updated scheme, based on statistical analysis, to estimate the best values for all MA parameters and enhance MA performance. The numerical results show that the proposed MA improved the delay and jitter of the network, while reducing computational complexity as compared to existing algorithms.
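The memetic-algorithm template these papers build on (a genetic algorithm whose offspring are refined by a local search, with a mutation rate adapted to progress) can be sketched generically. This is not the authors' routing algorithm: a toy bitstring fitness stands in for the multi-constrained QoS routing objective, and the adaptive-mutation rule is an illustrative assumption.

```python
import random

# Generic memetic-algorithm skeleton: GA + local search + adaptive mutation.
# The toy fitness (maximize number of 1-bits) stands in for the QoS
# multicast-routing objective of the papers above.

def local_search(ind, fitness):
    """First-improvement bit-flip hill climbing."""
    for i in range(len(ind)):
        trial = ind[:]
        trial[i] ^= 1
        if fitness(trial) > fitness(ind):
            ind = trial
    return ind

def memetic(fitness, n_bits=20, pop_size=20, gens=40, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    mut = 0.10                                   # adaptive mutation rate
    best = max(pop, key=fitness)
    for _ in range(gens):
        prev = fitness(best)
        nxt = []
        for _ in range(pop_size):
            p1, p2 = rng.sample(pop, 2)          # parent selection
            cut = rng.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]          # one-point crossover
            child = [b ^ (rng.random() < mut) for b in child]  # mutation
            nxt.append(local_search(child, fitness))           # memetic step
        pop = nxt
        best = max(pop + [best], key=fitness)
        # adapt: shrink mutation while improving, grow it when stalled
        mut = max(0.01, mut * 0.9) if fitness(best) > prev else min(0.5, mut * 1.1)
    return best

best = memetic(sum)          # onemax: fitness = number of 1-bits
print(sum(best))             # → 20
```

For routing, the individual would encode a multicast tree and the fitness would penalize violated delay/jitter constraints.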
Kiilsgaard, Thor H.
1970-01-01
The Samrah mine, near Ad Dawadimi, Kingdom of Saudi Arabia, has been explored by 18 diamond drill holes, aggregating 3,624.3 meters in length. The holes demonstrate that the Samrah vein zone follows premineral andesitic dikes. Smaller veins split away from the main Samrah vein zone. The Samrah vein zone is known to be mineralized at the surface for at least 400 meters and to a depth of at least 220 meters below the surface. Within this mineralized part of the vein zone, diamond drilling has indicated ore reserves of approximately 204,000 metric tons, the average value of which is estimated at $57 per ton.
ERIC Educational Resources Information Center
Imberman, Scott; Lovenheim, Michael F.
2015-01-01
Value-added data have become an increasingly common evaluation tool for schools and teachers. Many school districts have begun to adopt these methods and have released results publicly. In this paper, we use the unique public release of value-added data in Los Angeles to identify how this measure of school quality is capitalized into housing…
Kuznik, Andreas; Bégo-Le-Bagousse, Gaëlle; Eckert, Laurent; Gadkari, Abhijit; Simpson, Eric; Graham, Christopher N; Miles, LaStella; Mastey, Vera; Mahajan, Puneet; Sullivan, Sean D
2017-12-01
Dupilumab significantly improves signs and symptoms of atopic dermatitis (AD), including pruritus, symptoms of anxiety and depression, and health-related quality of life versus placebo in adults with moderate-to-severe AD. Since the cost-effectiveness of dupilumab has not been evaluated, the objective of this analysis was to estimate a value-based price range in which dupilumab would be considered cost-effective compared with supportive care (SC) for treatment of moderate-to-severe AD in an adult population. A health economic model was developed to evaluate from the US payer perspective the long-term costs and benefits of dupilumab treatment administered every other week (q2w). Dupilumab q2w was compared with SC; robustness of assumptions and results were tested using sensitivity and scenario analyses. Clinical data were derived from the dupilumab LIBERTY AD SOLO trials; healthcare use and cost data were from health insurance claims histories of adult patients with AD. The annual price of maintenance therapy with dupilumab to be considered cost-effective was estimated for decision thresholds of US$100,000 and $150,000 per quality-adjusted life-year (QALY) gained. In the base case, the annual maintenance price for dupilumab therapy to be considered cost-effective would be $28,770 at a $100,000 per QALY gained threshold, and $39,940 at a $150,000 threshold. Results were generally robust to parameter variations in one-way and probabilistic sensitivity analyses. Dupilumab q2w compared with SC is cost-effective for the treatment of moderate-to-severe AD in US adults at an annual price of maintenance therapy in the range of $29,000-$40,000 at the $100,000-$150,000 per QALY thresholds. Sanofi and Regeneron Pharmaceuticals, Inc.
2011-02-01
Defense DoE Department of Energy DPT Direct push technology EPA Environmental Protection Agency ERPIMS Enviromental Restoration Program...and 3) assessing whether new wells should be added and where (i.e., network adequacy). • Predict allows import and comparison of new sampling...data against previously estimated trends and maps. Two options include trend flagging and plume flagging to identify potentially anomalous new values
United States Marine Corps Career Designation Board: Significant Factors in Predicting Selection
2014-03-01
Assessing the value-adding impact of diagnostic-type tests on drug development and marketing.
Blair, Edward D
2008-01-01
We explore the cash value of the companion diagnostics opportunity from the perspective of the pharmaceutical partner. Cashflow-based modeling is used to demonstrate the potential financial benefits of key relationships between the pharmaceutical and diagnostics industries. In four scenarios, the uplift in the net present value (NPV) of a proprietary medicine can exceed $US1.8 billion. By simple extrapolation, the uplifted NPV calculations allow realistic and plausible estimates of the companion diagnostic opportunity to be in the region of $US40 billion to $US90 billion. It is expected that such market valuation could drive a macroeconomic change that shifts healthcare practice from reactionary disease-treatment to proactive health maintenance.
NASA Astrophysics Data System (ADS)
Norris, Scott A.; Samela, Juha; Vestberg, Matias; Nordlund, Kai; Aziz, Michael J.
2015-04-01
Because Eq. (1) is independent of the details of the crater function Δh, updated formulae for {A, C, A′, C′} that account for the curvature dependence are obtained simply by inserting Harrison and Bradley's expressions (2) into Eq. (1). As those authors show, at normal incidence the added terms are expected to have positive values, increasing estimates of C (which is a sum of SX,YZ-type terms), while producing a less noticeable effect on the value of C′ (which is a difference of SX,YZ-type terms). These modifications therefore do not alter the primary physical conclusions of our study, which are that for the GaSb system both C and (A′C − C′A) appear to be positive.
Flood frequency analysis - the challenge of using historical data
NASA Astrophysics Data System (ADS)
Engeland, Kolbjorn
2015-04-01
Estimates of high flood quantiles are needed for many applications, e.g., dam safety assessments are based on the 1000-year flood, whereas the dimensioning of important infrastructure requires estimates of the 200-year flood. The flood quantiles are estimated by fitting a parametric distribution to a dataset of high flows comprising either annual maximum values or peaks over a selected threshold. Since the record length of data is limited compared to the desired flood quantile, the estimated flood magnitudes are based on a high degree of extrapolation. For example, the longest time series available in Norway are around 120 years, and as a result any estimation of a 1000-year flood will require extrapolation. One solution is to extend the temporal dimension of a data series by including information about historical floods before the streamflow was systematically gauged. Such information could be flood marks or written documentation about flood events. The aim of this study was to evaluate the added value of using historical flood data for at-site flood frequency estimation. The historical floods were included in two ways by assuming: (1) the size of (all) floods above a high threshold within a time interval is known; and (2) the number of floods above a high threshold for a time interval is known. We used a Bayesian model formulation, with MCMC used for model estimation. This estimation procedure allowed us to estimate the predictive uncertainty of flood quantiles (i.e. both sampling and parameter uncertainty is accounted for). We tested the methods using 123 years of systematic data from Bulken in western Norway. In 2014 the largest flood in the systematic record was observed. From written documentation and flood marks we had information from three severe floods in the 18th century, and they were likely to exceed the 2014 flood. We evaluated the added value in two ways.
First we used the 123-year-long streamflow time series and investigated the effect of having several shorter series which could be supplemented with a limited number of known large flood events. Then we used the three historical floods from the 18th century combined with the whole and subsets of the 123 years of systematic observations. In the latter case several challenges were identified: (i) the difficulty of transferring historical water levels to river streamflows due to man-made changes in the river profile; (ii) the stationarity of the data might be questioned, since the three largest historical floods occurred during the "little ice age" under different climatic conditions compared to today.
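A minimal non-Bayesian sketch of at-site flood quantile estimation can anchor the ideas above: fit a Gumbel (EV1) distribution to annual maxima by the method of moments and read off the T-year quantile. The series below is synthetic, and the study's MCMC treatment of historical floods would replace this with a censored-data likelihood.

```python
import math

# Sketch of at-site flood frequency estimation: moment-fitted Gumbel (EV1)
# distribution, then the T-year quantile. Annual-maximum series is synthetic.

def gumbel_quantile(annual_maxima, T):
    """T-year flood from a moment-fitted Gumbel distribution."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    scale = math.sqrt(6 * var) / math.pi          # method-of-moments scale
    loc = mean - 0.5772 * scale                   # Euler-Mascheroni constant
    return loc - scale * math.log(-math.log(1 - 1 / T))

floods = [310, 280, 450, 390, 520, 340, 610, 295, 430, 380]  # m^3/s, synthetic
print(f"estimated 200-year flood: {gumbel_quantile(floods, 200):.0f} m^3/s")
```

With only ten "observed" years, the 200-year estimate lies far beyond the sample maximum, which is exactly the extrapolation problem that motivates adding historical flood information.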
Code of Federal Regulations, 2014 CFR
2014-10-01
... and value added tax on fuel (passenger vehicles) (United Kingdom). 252.229-7009 Section 252.229-7009... Relief from customs duty and value added tax on fuel (passenger vehicles) (United Kingdom). As prescribed in 229.402-70(i), use the following clause: Relief from Customs Duty and Value Added Tax on Fuel...
Code of Federal Regulations, 2010 CFR
2010-10-01
... and value added tax on fuel (passenger vehicles) (United Kingdom). 252.229-7009 Section 252.229-7009... Relief from customs duty and value added tax on fuel (passenger vehicles) (United Kingdom). As prescribed in 229.402-70(i), use the following clause: Relief from Customs Duty and Value Added Tax on Fuel...
Code of Federal Regulations, 2013 CFR
2013-10-01
... and value added tax on fuel (passenger vehicles) (United Kingdom). 252.229-7009 Section 252.229-7009... Relief from customs duty and value added tax on fuel (passenger vehicles) (United Kingdom). As prescribed in 229.402-70(i), use the following clause: Relief from Customs Duty and Value Added Tax on Fuel...
Code of Federal Regulations, 2012 CFR
2012-10-01
... and value added tax on fuel (passenger vehicles) (United Kingdom). 252.229-7009 Section 252.229-7009... Relief from customs duty and value added tax on fuel (passenger vehicles) (United Kingdom). As prescribed in 229.402-70(i), use the following clause: Relief from Customs Duty and Value Added Tax on Fuel...
Code of Federal Regulations, 2011 CFR
2011-10-01
... and value added tax on fuel (passenger vehicles) (United Kingdom). 252.229-7009 Section 252.229-7009... Relief from customs duty and value added tax on fuel (passenger vehicles) (United Kingdom). As prescribed in 229.402-70(i), use the following clause: Relief from Customs Duty and Value Added Tax on Fuel...
How One School Implements and Experiences Ohio's Value-Added Model: A Case Study
ERIC Educational Resources Information Center
Quattrochi, David
2009-01-01
Ohio made value-added law in 2003 and incorporated value-added assessment to its operating standards for teachers and administrators in 2006. Value-added data is used to determine if students are making a year's growth at the end of each school year. Schools and districts receive a rating of "Below Growth, Met Growth, or Above Growth" on…
Watanabe, Masanari; Noma, Hisashi; Kurai, Jun; Sano, Hiroyuki; Ueda, Yasuto; Mikami, Masaaki; Yamamoto, Hiroyuki; Tokuyasu, Hirokazu; Kato, Kazuhiro; Konishi, Tatsuya; Tatsukawa, Toshiyuki; Shimizu, Eiji; Kitano, Hiroya
2016-01-01
Background Asian dust (AD) exposure exacerbates pulmonary dysfunction in patients with asthma. Asthma–chronic obstructive pulmonary disease overlap syndrome (ACOS), characterized by coexisting symptoms of asthma and chronic obstructive pulmonary disease, is considered a separate disease entity. Previously, we investigated the effects of AD on pulmonary function in adult patients with asthma. Here, we present the findings of our further research on the differences in the effects of AD exposure on pulmonary function between patients with asthma alone and those with ACOS. Methods Between March and May 2012, we conducted a panel study wherein we monitored daily peak expiratory flow (PEF) values in 231 adult patients with asthma. These patients were divided into 190 patients with asthma alone and 41 patients with ACOS in this study. Daily AD particle levels were measured using light detection and ranging systems. Two heavy AD days (April 23 and 24) were determined according to the Japan Meteorological Agency definition. A linear mixed model was used to estimate the association between PEF and AD exposure. Results Increments in the interquartile range of AD particles (0.018 km−1) led to PEF changes of −0.50 L/min (95% confidence interval, −0.98 to −0.02) in patients with asthma alone and −0.11 L/min (−0.11 to 0.85) in patients with ACOS. The PEF changes after exposure to heavy AD were −2.21 L/min (−4.28 to −0.15) in patients with asthma alone and −2.76 L/min (−6.86 to 1.35) in patients with ACOS. In patients with asthma alone, the highest decrease in PEF values was observed on the heavy AD day, with a subsequent gradual increase over time. Conclusion Our results suggest that the effects of AD exposure on pulmonary function differ between patients with asthma alone and ACOS, with the former exhibiting a greater likelihood of decreased pulmonary function after AD exposure. PMID:26869784
Genomic-based multiple-trait evaluation in Eucalyptus grandis using dominant DArT markers.
Cappa, Eduardo P; El-Kassaby, Yousry A; Muñoz, Facundo; Garcia, Martín N; Villalba, Pamela V; Klápště, Jaroslav; Marcucci Poltri, Susana N
2018-06-01
We investigated the impact of combining the pedigree- and genomic-based relationship matrices in a multiple-trait individual-tree mixed model (a.k.a., multiple-trait combined approach) on the estimates of heritability and on the genomic correlations between growth and stem straightness in an open-pollinated Eucalyptus grandis population. Additionally, the added advantage of incorporating genomic information on the theoretical accuracies of parents and offspring breeding values was evaluated. Our results suggested that the use of the combined approach for estimating heritabilities and additive genetic correlations in multiple-trait evaluations is advantageous and including genomic information increases the expected accuracy of breeding values. Furthermore, the multiple-trait combined approach was proven to be superior to the single-trait combined approach in predicting breeding values, in particular for low-heritability traits. Finally, our results advocate the use of the combined approach in forest tree progeny testing trials, specifically when a multiple-trait individual-tree mixed model is considered. Copyright © 2018 Elsevier B.V. All rights reserved.
Dave, Ashok; Huang, Ye; Rezvani, Sina; McIlveen-Wright, David; Novaes, Marcio; Hewitt, Neil
2013-05-01
The techno-economic characteristics of macro-algae utilisation from European temperate zones were evaluated in a selected Anaerobic Digester (AD) using the chemical process modelling software ECLIPSE. The assessment covered the mass and energy balance of the entire process, followed by an economic feasibility study, which included the total cost estimation, net present value calculation, and sensitivity analysis. The selected plant size corresponded to a community-based AD of 1.6 MWth with a macro-algae feed rate of 8.64 tonnes per day (dry basis). The produced biogas was utilised in a combined heat and power plant generating 237 kWe of net electricity and 367 kWth of heat. The breakeven electricity-selling price in this study was estimated at around €120/MWh. On the grounds of different national and regional policies, this study did not account for any government incentives. However, different support mechanisms such as Feed-in-Tariffs or Renewable Obligation Certificates can significantly improve the project viability. Copyright © 2013 Elsevier Ltd. All rights reserved.
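The breakeven-price calculation can be sketched as a root search on net present value. Only the 237 kWe capacity figure is taken from the abstract; operating hours, capital cost, operating cost, lifetime and discount rate below are invented placeholders, not the study's inputs.

```python
# Sketch of the economic step described above: NPV of the plant for a
# candidate electricity price, and a bisection search for the price at
# which NPV crosses zero (the breakeven selling price). All plant figures
# except the 237 kWe capacity are illustrative assumptions.

def npv(price_per_mwh, capex, annual_mwh, annual_opex, years=20, rate=0.08):
    cash = annual_mwh * price_per_mwh - annual_opex   # yearly net cash flow
    return -capex + sum(cash / (1 + rate) ** t for t in range(1, years + 1))

def breakeven_price(capex, annual_mwh, annual_opex, lo=0.0, hi=1000.0):
    """Bisection on price until NPV is (approximately) zero."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if npv(mid, capex, annual_mwh, annual_opex) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# 237 kWe at ~8000 h/yr -> ~1896 MWh/yr (capacity from the abstract;
# hours, capex and opex are assumptions).
price = breakeven_price(capex=2_000_000, annual_mwh=1896, annual_opex=60_000)
print(f"breakeven price: EUR {price:.0f}/MWh")   # → breakeven price: EUR 139/MWh
```

The sensitivity analysis mentioned in the abstract amounts to re-running this kind of calculation while varying one input at a time.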
Nongovernment Philanthropic Spending on Public Health in the United States
2016-01-01
The objective of this study was to estimate the dollar amount of nongovernment philanthropic spending on public health activities in the United States. Health expenditure data were derived from the US National Health Expenditures Accounts and the US Census Bureau. Results reveal that spending on public health is not disaggregated from health spending in general. The level of philanthropic spending is estimated as, on average, 7% of overall health spending, or about $150 billion annually according to National Health Expenditures Accounts data tables. When a point estimate of charity care provided by hospitals and office-based physicians is added, the value of nongovernment philanthropic expenditures reaches approximately $203 billion, or about 10% of all health spending annually. PMID:26562104
Analysis of production flow process with lean manufacturing approach
NASA Astrophysics Data System (ADS)
Siregar, Ikhsan; Arif Nasution, Abdillah; Prasetio, Aji; Fadillah, Kharis
2017-09-01
This research was conducted at a company engaged in the production of Fast Moving Consumer Goods (FMCG). The production process in the company still contains several activities that cause waste. Non-value-added activities (NVA) are still widely found in its implementation, so the cycle time needed to make the product becomes longer. One form of improvement on the production line is to apply the lean manufacturing method to identify waste along the value stream and find non-value-added activities. Non-value-added activities can be eliminated or reduced by utilizing value stream mapping and identifying them with process activity mapping. According to the results obtained, 26% of activities are value-added and 74% are non-value-added. The current state map of the production process gives a process lead time of 678.11 minutes and a processing time of 173.94 minutes. In the proposed improvement, value-added time is 41% of production process activities, while non-value-added time is 59%. The future state map of the production process gives a process lead time of 426.69 minutes and a processing time of 173.89 minutes.
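The quoted percentages follow directly from the ratio of processing (value-added) time to total lead time, using the figures given in the abstract:

```python
# Arithmetic behind the percentages quoted above: the value-added ratio is
# processing (value-added) time divided by total process lead time.
# The four time figures are taken from the abstract.

def value_added_ratio(processing_time, lead_time):
    return processing_time / lead_time

current = value_added_ratio(173.94, 678.11)   # current state map
future = value_added_ratio(173.89, 426.69)    # future state map
print(f"current: {current:.0%} value-added, future: {future:.0%}")
# → current: 26% value-added, future: 41%
```

The improvement thus comes almost entirely from cutting lead time (678 → 427 min), not from changing the processing time itself.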
Edison, Paul; Brooks, David J; Turkheimer, Federico E; Archer, Hilary A; Hinz, Rainer
2009-11-01
Pittsburgh compound B, or [11C]PIB, is an amyloid imaging agent which shows a clear differentiation between subjects with Alzheimer's disease (AD) and controls. However, the observed signal difference in other forms of dementia such as dementia with Lewy bodies (DLB) is smaller, and mild cognitively impaired (MCI) subjects and some healthy elderly normals may show intermediate levels of [11C]PIB binding. The cerebellum, a commonly used reference region for non-specific tracer uptake in [11C]PIB studies in AD, may not be valid in Prion disorders or monogenic forms of AD. The aim of this work was to: (1) compare methods for generating parametric maps of [11C]PIB retention in tissue using a plasma input function with respect to their ability to discriminate between AD subjects and controls, and (2) estimate the test-retest reproducibility in AD subjects. 12 AD subjects (5 of which underwent a repeat scan within 6 weeks) and 10 control subjects had 90-minute [11C]PIB dynamic PET scans, and arterial plasma input functions were measured. Parametric maps were generated with graphical analysis of reversible binding (Logan plot), irreversible binding (Patlak plot), and spectral analysis. Between-group differentiation was calculated using Student's t-test and comparisons between different methods were made using p values. Reproducibility was assessed by intraclass correlation coefficients (ICC). We found that the 75 min value of the impulse response function showed the best group differentiation and had a higher ICC than volume of distribution maps generated from Logan and spectral analysis. Patlak analysis of [11C]PIB binding was the least reproducible.
Two-dimensional advective transport in ground-water flow parameter estimation
Anderman, E.R.; Hill, M.C.; Poeter, E.P.
1996-01-01
Nonlinear regression is useful in ground-water flow parameter estimation, but problems of parameter insensitivity and correlation often exist given commonly available hydraulic-head and head-dependent flow (for example, stream and lake gain or loss) observations. To address this problem, advective-transport observations are added to the ground-water flow, parameter-estimation model MODFLOWP using particle-tracking methods. The resulting model is used to investigate the importance of advective-transport observations relative to head-dependent flow observations when either or both are used in conjunction with hydraulic-head observations in a simulation of the sewage-discharge plume at Otis Air Force Base, Cape Cod, Massachusetts, USA. The analysis procedure for evaluating the probable effect of new observations on the regression results consists of two steps: (1) parameter sensitivities and correlations calculated at initial parameter values are used to assess the model parameterization and expected relative contributions of different types of observations to the regression; and (2) optimal parameter values are estimated by nonlinear regression and evaluated. In the Cape Cod parameter-estimation model, advective-transport observations did not significantly increase the overall parameter sensitivity; however: (1) inclusion of advective-transport observations decreased parameter correlation enough for more unique parameter values to be estimated by the regression; (2) realistic uncertainties in advective-transport observations had a small effect on parameter estimates relative to the precision with which the parameters were estimated; and (3) the regression results and sensitivity analysis provided insight into the dynamics of the ground-water flow system, especially the importance of accurate boundary conditions. 
In this work, advective-transport observations improved the calibration of the model and the estimation of ground-water flow parameters, and use of regression and related techniques produced significant insight into the physical system.
Wind, Anne E; Gorter, Kees J; van den Donk, Maureen; Rutten, Guy E H M
2016-02-01
To investigate the impact of the UKPDS risk engine on management of CHD risk in T2DM patients. Observational study among 139 GPs. Data from 933 consecutive patients treated with a maximum of two oral glucose-lowering drugs, collected at baseline and after twelve months. GPs estimated the CHD risk themselves and afterwards calculated it with the UKPDS risk engine. Under- and overestimation were defined as a difference of >5 percentage points between the two calculations. The impact of the UKPDS risk engine was assessed by measuring differences in medication adjustments between the over-, under- and accurately estimated groups. In 42.0% of cases the GP accurately estimated the CHD risk; in 32.4% the risk was underestimated and in 25.6% overestimated. The mean difference between the estimated (18.7%) and calculated (19.1%) 10-year CHD risk was -0.36% (95% CI -1.24 to 0.52). Male gender, current smoking and total cholesterol level were associated with underestimation. Patients with a subjectively underestimated CHD risk received significantly more medication adjustments. Their UKPDS 10-year CHD risk did not increase during the follow-up period, contrary to the other two groups of patients. The UKPDS risk engine may be of added value for risk management in T2DM. Copyright © 2015 Primary Care Diabetes Europe. Published by Elsevier Ltd. All rights reserved.
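The classification rule described in this abstract (a difference of more than 5 percentage points between the GP's estimate and the UKPDS calculation) reduces to a small comparison; the function name, threshold handling, and example values below are illustrative assumptions, not code or patient data from the study.

```python
def classify_estimate(gp_estimate_pct, ukpds_pct, threshold=5.0):
    """Classify a GP's CHD risk estimate against the UKPDS risk engine value.

    Both inputs are 10-year CHD risks in percent. A difference of more than
    `threshold` percentage points counts as under- or overestimation.
    """
    diff = gp_estimate_pct - ukpds_pct
    if diff < -threshold:
        return "underestimated"
    if diff > threshold:
        return "overestimated"
    return "accurate"

# Illustrative values (the 18.7% / 19.1% means come from the abstract):
print(classify_estimate(12.0, 19.1))  # gap of -7.1 points -> "underestimated"
print(classify_estimate(18.7, 19.1))  # within 5 points -> "accurate"
```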
Knol, Diny; Trautwein, Elke A.
2016-01-01
To evaluate the content of phytosterol oxidation products (POP) of foods with added phytosterols, in total 14 studies measuring POP contents of foods with added phytosterols were systematically reviewed. In non‐heated or stored foods, POP contents were low, ranging from (medians) 0.03–3.6 mg/100 g with corresponding oxidation rates of phytosterols (ORP) of 0.03–0.06%. In fat‐based foods with 8% of added free plant sterols (FPS), plant sterol esters (PSE) or plant stanol esters (PAE) pan‐fried at 160–200°C for 5–10 min, median POP contents were 72.0, 38.1, and 4.9 mg/100 g, respectively, with a median ORP of 0.90, 0.48, and 0.06%. Hence resistance to thermal oxidation was in the order PAE > PSE > FPS. POP formation was highest in enriched butter, followed by margarine and rapeseed oil. In margarines with 7.5–10.5% added PSE oven‐heated at 140–200°C for 5–30 min, median POP content was 0.3 mg/100 g. Further heating under the same temperature conditions but for 60–120 min markedly increased POP formation to 384.3 mg/100 g. Estimated daily upper POP intake was 47.7 mg/d (equivalent to 0.69 mg/kg BW/d) for foods with added PSE and 78.3 mg/d (equivalent to 1.12 mg/kg BW/d) for foods with added FPS, as calculated by multiplying the advised upper daily phytosterol intake of 3 g/d by the 90% quantile values of ORP. In conclusion, heating temperature and time, the chemical form of phytosterols added and the food matrix are determinants of POP formation in foods with added phytosterols, leading to an increase in POP contents. Practical applications: Phytosterol oxidation products (POP) are formed in foods containing phytosterols, especially when exposed to heat treatment. This review, summarising POP contents in foods with added phytosterols in their free and esterified forms, reveals that heating temperature and time, the chemical form of phytosterols added and the food matrix itself are determinants of POP formation, with heating temperature and time having the biggest impact.
The estimated upper daily intakes of POP are 78.3 mg/d for fat‐based products with added free plant sterols and 47.7 mg/d for fat‐based products with added plant sterol esters. Phytosterols in foods are susceptible to oxidation to form phytosterol oxidation products (POP). This review summarizes literature data regarding POP contents of foods with added phytosterols that were exposed to storage and heat treatments. PMID:27812313
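The upper-intake calculation described above (advised upper phytosterol intake of 3 g/d multiplied by the 90% quantile of the oxidation rate, ORP) is one line of arithmetic. In this sketch the ORP quantiles are back-calculated from the reported intakes (47.7 and 78.3 mg/d) and are assumptions, not values quoted in the review.

```python
ADVISED_UPPER_PHYTOSTEROL_MG = 3000.0  # advised upper intake of 3 g/d, in mg

def upper_pop_intake_mg(orp_90_quantile):
    """Upper daily POP intake (mg/d) = advised phytosterol intake x ORP quantile."""
    return ADVISED_UPPER_PHYTOSTEROL_MG * orp_90_quantile

# ORP 90% quantiles back-calculated from the reported intakes (illustrative):
orp_pse = 47.7 / ADVISED_UPPER_PHYTOSTEROL_MG  # ~1.6% for plant sterol esters
orp_fps = 78.3 / ADVISED_UPPER_PHYTOSTEROL_MG  # ~2.6% for free plant sterols

print(round(upper_pop_intake_mg(orp_pse), 1))  # 47.7
print(round(upper_pop_intake_mg(orp_fps), 1))  # 78.3
```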
Carr, Stephanie A; Mills, Christopher T.; Mandernack, Kevin W
2016-01-01
The Adélie Basin, located offshore of the Wilkes Land margin, experiences unusually high sedimentation rates (~2 cm yr⁻¹) for the Antarctic coast. This study sought to compare depthwise changes in organic matter (OM) quantity and quality with changes in microbial biomass at this high-deposition site and an offshore continental margin site. Sediments from both sites were collected during Integrated Ocean Drilling Program (IODP) Expedition 318. Viable microbial biomass was estimated from concentrations of bacterial-derived phospholipid fatty acids, while OM quality was assessed using four different amino acid degradation proxies. Concentrations of total hydrolysable amino acids (THAA) measured from the continental margin suggest an oligotrophic environment, with THAA concentrations representing only 2% of total organic carbon and with relative proportions of the non-protein amino acids β-alanine and γ-aminobutyric acid as high as 40%. In contrast, THAA concentrations from the near-shore Adélie Basin represent 40%–60% of total organic carbon. Concentrations of β-alanine and γ-aminobutyric acid were often below the detection limit and suggest that the OM of the basin is labile. Degradation index (DI) values in surface sediments at the Adélie and margin sites were measured to be +0.78 and −0.76, reflecting labile and more recalcitrant OM, respectively. Greater DI values in deeper and more anoxic portions of both cores correlated positively with increased relative concentrations of phenylalanine plus tyrosine and may represent a change of redox conditions, rather than OM quality. This suggests that DI values calculated along chemical profiles should be interpreted with caution. THAA concentrations and the percentage of organic carbon (CAA%) and total nitrogen (NAA%) represented by amino acids at both sites demonstrated a significant positive correlation with bacterial abundance estimates.
These data suggest that the selective degradation of amino acids, as indicated by THAA concentrations, CAA% or NAA% values may be a better proxy for describing the general changes in sedimentary bacterial abundances than total organic matter or bulk sedimentation rates.
Beauchamp, Cynthia L; Felius, Joost; Beauchamp, George R
2010-01-01
Value analysis in health care calculates the economic value added (EVA) that results from improvements in health and health care. Our purpose was to develop an EVA model and to apply the model to typical and hypothetical (instantaneous and perfect) cures for amblyopia, surgical strabismus and asthma (as another, but non-ophthalmological, disease standard for comparison) in the United States. The model is based on changes in utility and longevity, the associated incremental costs, and an estimate of the value of life. Univariate sensitivity analyses were performed to arrive at a plausible range of outcomes. For the United States, the EVA for current practice amblyopia care is $12.9B (billion) per year, corresponding to a return on investment (ROI) of 10.4% per yr. With substantial increases in investment aimed at maximal improvement ("perfect cure"), the EVA is $32.7B per yr, with an ROI of 5.3% per yr. The EVA for typical surgical strabismus care is $10.3B per yr. A perfect cure may yield an EVA of $9.6B per yr. The EVA for asthma is $1317B per yr (ROI 20.4% per yr), while a perfect cure may yield an EVA of $110B per yr. Sensitivity analysis demonstrated the relatively large effects of incidence, utility, and longevity, while incremental costs have a relatively minor effect on the EVA. The economic value added by improvements in patient-centered outcomes is very large. Failing to make the necessary investments in research, prevention, detection, prompt treatment and rehabilitation of these diseases, at virtually any conceivable cost, appears economically, medically, morally and ethically deficient, and consequently wasteful, at the very least economically, for our society.
ERIC Educational Resources Information Center
Harris, Douglas N.; Anderson, Andrew
2013-01-01
There is a growing body of research on the validity and reliability of value-added measures, but most of this research has focused on elementary grades. Driven by several federal initiatives such as Race to the Top, Teacher Incentive Fund, and ESEA waivers, however, many states have incorporated value-added measures into the evaluations not only…
ERIC Educational Resources Information Center
Rodgers, Timothy
2007-01-01
The 2003 UK higher education White Paper suggested that the sector needed to re-examine the potential of the value added concept. This paper describes a possible methodology for developing a performance indicator based on the economic value added to graduates. The paper examines how an entry-quality-adjusted measure of a graduate's…
ERIC Educational Resources Information Center
Loeb, Susanna
2013-01-01
The question for this brief is whether education leaders can use value-added measures as tools for improving schooling and, if so, how to do this. Districts, states, and schools can, at least in theory, generate gains in educational outcomes for students using value-added measures in three ways: creating information on effective programs, making…
The cost of Alzheimer's disease in China and re-estimation of costs worldwide.
Jia, Jianping; Wei, Cuibai; Chen, Shuoqi; Li, Fangyu; Tang, Yi; Qin, Wei; Zhao, Lina; Jin, Hongmei; Xu, Hui; Wang, Fen; Zhou, Aihong; Zuo, Xiumei; Wu, Liyong; Han, Ying; Han, Yue; Huang, Liyuan; Wang, Qi; Li, Dan; Chu, Changbiao; Shi, Lu; Gong, Min; Du, Yifeng; Zhang, Jiewen; Zhang, Junjian; Zhou, Chunkui; Lv, Jihui; Lv, Yang; Xie, Haiqun; Ji, Yong; Li, Fang; Yu, Enyan; Luo, Benyan; Wang, Yanjiang; Yang, Shanshan; Qu, Qiumin; Guo, Qihao; Liang, Furu; Zhang, Jintao; Tan, Lan; Shen, Lu; Zhang, Kunnan; Zhang, Jinbiao; Peng, Dantao; Tang, Muni; Lv, Peiyuan; Fang, Boyan; Chu, Lan; Jia, Longfei; Gauthier, Serge
2018-04-01
The socioeconomic costs of Alzheimer's disease (AD) in China and its impact on global economic burden remain uncertain. We collected data from 3098 patients with AD in 81 representative centers across China and estimated AD costs for individual patient and total patients in China in 2015. Based on this data, we re-estimated the worldwide costs of AD. The annual socioeconomic cost per patient was US $19,144.36, and total costs were US $167.74 billion in 2015. The annual total costs are predicted to reach US $507.49 billion in 2030 and US $1.89 trillion in 2050. Based on our results, the global estimates of costs for dementia were US $957.56 billion in 2015, and will be US $2.54 trillion in 2030, and US $9.12 trillion in 2050, much more than the predictions by the World Alzheimer Report 2015. China bears a heavy burden of AD costs, which greatly change the estimates of AD cost worldwide. Copyright © 2017 the Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
An Online Observer for Minimization of Pulsating Torque in SMPM Motors
Roșca, Lucian
2016-01-01
A persistent problem of surface mounted permanent magnet (SMPM) motors is the non-uniformity of the developed torque. Either the motor design or the motor control needs to be improved in order to minimize the periodic disturbances. This paper proposes a new control technique for reducing periodic disturbances in permanent magnet (PM) electro-mechanical actuators, by advancing a new observer/estimator paradigm. A recursive estimation algorithm is implemented for online control. The compensating signal is identified and added as feedback to the control signal of the servo motor. Compensation is evaluated for different values of the input signal, to show robustness of the proposed method. PMID:27089182
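The abstract above does not specify its recursive estimation algorithm, so as an illustrative stand-in the sketch below uses standard recursive least squares (RLS) to identify the sine/cosine amplitudes of a periodic torque disturbance; the disturbance frequency, amplitudes, and sample rate are made-up values. The negated reconstruction is what would be added as a compensating feedback term to the control signal.

```python
import math

def rls_sinusoid_estimator(samples, omega, dt, lam=0.99):
    """Recursive least squares (RLS) estimate of the amplitudes a, b of a
    periodic disturbance y(t) = a*sin(w t) + b*cos(w t), updated online."""
    theta = [0.0, 0.0]                  # [a, b] estimates
    p = [[100.0, 0.0], [0.0, 100.0]]    # covariance matrix (large initial value)
    for k, y in enumerate(samples):
        t = k * dt
        phi = [math.sin(omega * t), math.cos(omega * t)]  # regressor
        # Gain: K = P phi / (lam + phi' P phi)
        pphi = [p[0][0]*phi[0] + p[0][1]*phi[1], p[1][0]*phi[0] + p[1][1]*phi[1]]
        denom = lam + phi[0]*pphi[0] + phi[1]*pphi[1]
        gain = [pphi[0] / denom, pphi[1] / denom]
        err = y - (phi[0]*theta[0] + phi[1]*theta[1])     # prediction error
        theta = [theta[0] + gain[0]*err, theta[1] + gain[1]*err]
        # Covariance update with forgetting: P = (P - K phi' P) / lam
        p = [[(p[i][j] - gain[i]*pphi[j]) / lam for j in range(2)] for i in range(2)]
    return theta

# Synthetic torque-ripple disturbance with a = 0.5, b = -0.2 (made-up values):
omega, dt = 2 * math.pi * 50, 1e-3
ys = [0.5*math.sin(omega*k*dt) - 0.2*math.cos(omega*k*dt) for k in range(2000)]
a_hat, b_hat = rls_sinusoid_estimator(ys, omega, dt)
print(round(a_hat, 2), round(b_hat, 2))  # estimates approach 0.5 and -0.2
```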
Energy Balance Bowen Ratio (EBBR) Handbook
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, D. R.
2016-01-01
The Energy Balance Bowen Ratio (EBBR) system produces 30-minute estimates of the vertical fluxes of sensible and latent heat at the local surface. Flux estimates are calculated from observations of net radiation, soil surface heat flux, and the vertical gradients of temperature and relative humidity (RH). Meteorological data collected by the EBBR are used to calculate bulk aerodynamic fluxes, which are used in the Bulk Aerodynamic Technique (BA) EBBR value-added product (VAP) to replace sunrise and sunset spikes in the flux data. A unique aspect of the system is the automatic exchange mechanism (AEM), which helps to reduce errors from instrument offset drift.
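The flux partitioning described above can be illustrated with the standard Bowen-ratio energy-balance relations, here in a vapor-pressure-gradient form. This is a minimal sketch: the psychrometric constant and the input gradient values are assumed for illustration, not EBBR calibration data.

```python
def bowen_ratio_fluxes(rn, g, dT, de, gamma=0.066):
    """Partition available energy (Rn - G) into sensible (H) and latent (LE)
    heat flux using the Bowen ratio beta = gamma * dT / de.

    rn, g: net radiation and soil heat flux in W/m^2; dT (K) and de (kPa)
    are vertical gradients between the two measurement levels; gamma is the
    psychrometric constant (kPa/K, assumed value).
    """
    beta = gamma * dT / de          # Bowen ratio H / LE
    le = (rn - g) / (1.0 + beta)    # latent heat flux
    h = (rn - g) - le               # sensible heat flux (energy balance closes)
    return h, le

# Illustrative midday values (assumed, not EBBR observations):
h, le = bowen_ratio_fluxes(rn=450.0, g=50.0, dT=1.2, de=0.4)
print(round(h, 1), round(le, 1))  # H + LE equals Rn - G = 400 W/m^2
```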
Energy Balance Bowen Ratio Station (EBBR) Handbook
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, DR
2011-02-23
The energy balance Bowen ratio (EBBR) system produces 30-minute estimates of the vertical fluxes of sensible and latent heat at the local surface. Flux estimates are calculated from observations of net radiation, soil surface heat flux, and the vertical gradients of temperature and relative humidity (RH). Meteorological data collected by the EBBR are used to calculate bulk aerodynamic fluxes, which are used in the Bulk Aerodynamic Technique (BA) EBBR value-added product (VAP) to replace sunrise and sunset spikes in the flux data. A unique aspect of the system is the automatic exchange mechanism (AEM), which helps to reduce errors from instrument offset drift.
Robust GNSS and InSAR tomography of neutrospheric refractivity using a Compressive Sensing approach
NASA Astrophysics Data System (ADS)
Heublein, Marion; Alshawaf, Fadwa; Zhu, Xiao Xiang; Hinz, Stefan
2017-04-01
Motivation: An accurate knowledge of the 3D distribution of water vapor in the atmosphere is a key element for weather forecasting and climate research. In addition, a precise determination of water vapor is also required for accurate positioning and deformation monitoring using Global Navigation Satellite Systems (GNSS) and Interferometric Synthetic Aperture Radar (InSAR). Several approaches for 3D tomographic water vapor reconstruction from GNSS-based Slant Wet Delay (SWD) estimates using the least squares (LSQ) adjustment exist. However, the tomographic system is in general ill-conditioned and its solution is unstable. Therefore, additional information or constraints need to be added in order to regularize the system. Goal of this work: We analyze the potential of Compressive Sensing (CS) for robustly reconstructing neutrospheric refractivity from GNSS SWD estimates. Moreover, the benefit of adding InSAR SWD estimates into the tomographic system is studied. Approach: A sparse representation of the refractivity field is obtained using a dictionary composed of Discrete Cosine Transforms (DCT) in the longitude and latitude directions and of an Euler transform in the height direction. This sparsity of the signal can be used as a prior for regularization, and the CS inversion is solved by minimizing the number of non-zero entries of the sparse solution in the DCT-Euler domain. No other regularization constraints or prior knowledge are applied. The tomographic reconstruction relies on total SWD estimates from GNSS Precise Point Positioning (PPP) and Persistent Scatterer (PS) InSAR. On the one hand, GNSS PPP SWD estimates are included in the system of equations. On the other hand, 2D ZWD maps are obtained by a combination of point-wise estimates of the wet delay using GNSS observations and partial InSAR wet delay maps.
These ZWD estimates are aggregated to derive realistic wet delay input data at given points, as if corresponding to GNSS sites within the study area. The synthetic ZWD values can be mapped to different elevation and azimuth angles. Moreover, using the same observation geometry as in the case of the GNSS and InSAR data, a synthetic set of SWD values was generated based on WRF simulations. Results: The CS approach shows particular strength in the case of a small number of SWD estimates. When compared to LSQ, the sparse reconstruction is much more robust. In the case of a low density of GNSS sites, adding InSAR SWD estimates improves the reconstruction accuracy for both LSQ and CS. Based on a synthetic SWD dataset generated using WRF simulations of wet refractivity, the CS-based solution of the tomographic system is validated. In the vertical direction, the refractivity distribution deduced from GNSS and InSAR SWD estimates is compared to a tropospheric humidity data set provided by EUMETSAT, consisting of daily mean values of specific humidity given on six pressure levels between 1000 hPa and 200 hPa. Study area: The Upper Rhine Graben (URG), characterized by negligible surface deformations, is chosen as the study area. A network of seven permanent GNSS receivers is used for this study, and a total of 17 SAR images acquired by ENVISAT ASAR is available.
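The sparse-reconstruction idea described above (recover a field from few measurements by exploiting sparsity in a transform domain) can be sketched with a greedy Orthogonal Matching Pursuit solver. The abstract does not name its CS solver, so OMP is an illustrative stand-in, and the random toy problem below replaces the DCT/Euler dictionary used in the work.

```python
import numpy as np

def omp(A, y, n_nonzero):
    """Greedy Orthogonal Matching Pursuit: approximately solve y = A x
    with at most n_nonzero non-zero entries in x."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(n_nonzero):
        # Pick the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares refit on the selected support.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - A @ x
    return x

# Toy problem: 20 measurements of a 3-sparse signal of length 50
# (underdetermined, as in tomography with few SWD observations).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50)) / np.sqrt(20)
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [2.0, -1.5, 1.0]
y = A @ x_true
x_hat = omp(A, y, n_nonzero=3)
print(np.linalg.norm(A @ x_hat - y))  # near zero when the support is identified
```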
Digital Games, Design, and Learning: A Systematic Review and Meta-Analysis.
Clark, Douglas B; Tanner-Smith, Emily E; Killingsworth, Stephen S
2016-03-01
In this meta-analysis, we systematically reviewed research on digital games and learning for K-16 students. We synthesized comparisons of game versus nongame conditions (i.e., media comparisons) and comparisons of augmented games versus standard game designs (i.e., value-added comparisons). We used random-effects meta-regression models with robust variance estimates to summarize overall effects and explore potential moderator effects. Results from media comparisons indicated that digital games significantly enhanced student learning relative to nongame conditions (g¯ = 0.33, 95% confidence interval [0.19, 0.48], k = 57, n = 209). Results from value-added comparisons indicated significant learning benefits associated with augmented game designs (g¯ = 0.34, 95% confidence interval [0.17, 0.51], k = 20, n = 40). Moderator analyses demonstrated that effects varied across various game mechanics characteristics, visual and narrative characteristics, and research quality characteristics. Taken together, the results highlight the affordances of games for learning as well as the key role of design beyond medium.
Digital Games, Design, and Learning
Clark, Douglas B.; Tanner-Smith, Emily E.; Killingsworth, Stephen S.
2016-01-01
In this meta-analysis, we systematically reviewed research on digital games and learning for K–16 students. We synthesized comparisons of game versus nongame conditions (i.e., media comparisons) and comparisons of augmented games versus standard game designs (i.e., value-added comparisons). We used random-effects meta-regression models with robust variance estimates to summarize overall effects and explore potential moderator effects. Results from media comparisons indicated that digital games significantly enhanced student learning relative to nongame conditions (g¯ = 0.33, 95% confidence interval [0.19, 0.48], k = 57, n = 209). Results from value-added comparisons indicated significant learning benefits associated with augmented game designs (g¯ = 0.34, 95% confidence interval [0.17, 0.51], k = 20, n = 40). Moderator analyses demonstrated that effects varied across various game mechanics characteristics, visual and narrative characteristics, and research quality characteristics. Taken together, the results highlight the affordances of games for learning as well as the key role of design beyond medium. PMID:26937054
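The pooled effects above come from random-effects meta-regression with robust variance estimates. A minimal inverse-variance random-effects pooling (the DerSimonian-Laird method) shows the basic machinery; the per-study effect sizes and variances below are made up for illustration, and the actual analysis used meta-regression rather than this simple pooling.

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate via the DerSimonian-Laird method.

    effects: per-study effect sizes (e.g. Hedges' g); variances: their
    within-study sampling variances. Returns (pooled, 95% CI low, high).
    """
    k = len(effects)
    w = [1.0 / v for v in variances]
    fixed = sum(wi * gi for wi, gi in zip(w, effects)) / sum(w)
    # Cochran's Q heterogeneity statistic and the DL between-study variance.
    q = sum(wi * (gi - fixed) ** 2 for wi, gi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Re-weight with the between-study variance added in.
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * gi for wi, gi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Made-up effect sizes and variances, not the studies in this meta-analysis:
g, lo, hi = dersimonian_laird([0.2, 0.5, 0.35], [0.01, 0.02, 0.015])
print(round(g, 2), round(lo, 2), round(hi, 2))
```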
Potential commercial uses of EOS remote sensing products
NASA Technical Reports Server (NTRS)
Thompson, Leslie L.
1991-01-01
The instrument complement of the Earth Observing System (EOS) satellite system will generate data sets with potential interest to a variety of users who are now just beginning to develop geographic information systems tailored to their special applications and/or jurisdictions. Other users may be looking for a unique product that enhances competitive position. The generally distributed products from EOS will require additional value-added processing to derive the unique products desired by specific users. Entrepreneurs have an opportunity to create these proprietary level 4 products from the EOS data sets. Specific instruments or collections of instruments could provide information for crop futures trading, mineral exploration, television and printed medium news products, regional and local government land management and planning, digital map directories, products for third world users, ocean fishing fleet probability of harvest forecasts, and other areas not even imagined at this time. The projected level 3 products that will be available at launch from EOS instruments are examined, and commercial uses of the data after value-added processing are estimated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Q; Xie, S
This report describes the Atmospheric Radiation Measurement (ARM) Best Estimate (ARMBE) 2-dimensional (2D) gridded surface data (ARMBE2DGRID) value-added product. Spatial variability is critically important to many scientific studies, especially those that involve processes of great spatial variation at high temporal frequency (e.g., precipitation, clouds, radiation, etc.). High-density ARM sites deployed at the Southern Great Plains (SGP) allow us to observe the spatial patterns of variables of scientific interest. The upcoming megasite at SGP, with its enhanced spatial density, will facilitate studies at even finer scales. Currently, however, data are reported only at individual site locations, at different time resolutions for different datastreams. It is difficult for users to locate all the data they need, and extra effort is required to synchronize the data. To address these problems, the ARMBE2DGRID value-added product merges key surface measurements at the ARM SGP sites and interpolates the data to a regular 2D grid to facilitate data application.
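The merge-and-interpolate step described above can be sketched with a simple inverse-distance-weighted interpolation from scattered site locations onto a regular grid. The site coordinates and temperatures below are made up, and the actual VAP's interpolation scheme is not specified here; IDW is an illustrative stand-in.

```python
import numpy as np

def idw_grid(site_xy, site_values, grid_x, grid_y, power=2.0):
    """Interpolate scattered site measurements onto a regular 2D grid using
    inverse-distance weighting (IDW). site_xy: (n, 2) coordinates;
    site_values: (n,) measurements; grid_x, grid_y: 1D grid axes."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    out = np.zeros_like(gx, dtype=float)
    for j in range(gx.shape[0]):
        for i in range(gx.shape[1]):
            d = np.hypot(site_xy[:, 0] - gx[j, i], site_xy[:, 1] - gy[j, i])
            if d.min() < 1e-12:  # grid node coincides with a site
                out[j, i] = site_values[int(d.argmin())]
            else:               # weighted average, weights = 1 / distance^power
                w = 1.0 / d ** power
                out[j, i] = float(np.sum(w * site_values) / np.sum(w))
    return out

# Hypothetical site layout and surface temperatures (not actual SGP data):
sites = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
temps = np.array([20.0, 22.0, 21.0, 23.0])
grid = idw_grid(sites, temps, np.linspace(0, 1, 5), np.linspace(0, 1, 5))
print(grid.shape)         # (5, 5)
print(float(grid[0, 0]))  # 20.0: grid node on top of the first site
```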
Cost Effectiveness of Childhood Obesity Interventions: Evidence and Methods for CHOICES.
Gortmaker, Steven L; Long, Michael W; Resch, Stephen C; Ward, Zachary J; Cradock, Angie L; Barrett, Jessica L; Wright, Davene R; Sonneville, Kendrin R; Giles, Catherine M; Carter, Rob C; Moodie, Marj L; Sacks, Gary; Swinburn, Boyd A; Hsiao, Amber; Vine, Seanna; Barendregt, Jan; Vos, Theo; Wang, Y Claire
2015-07-01
The childhood obesity epidemic continues in the U.S., and fiscal crises are leading policymakers to ask not only whether an intervention works but also whether it offers value for money. However, cost-effectiveness analyses have been limited. This paper discusses methods and outcomes of four childhood obesity interventions: (1) sugar-sweetened beverage excise tax (SSB); (2) eliminating tax subsidy of TV advertising to children (TV AD); (3) early care and education policy change (ECE); and (4) active physical education (Active PE). Cost-effectiveness models of nationwide implementation of interventions were estimated for a simulated cohort representative of the 2015 U.S. population over 10 years (2015-2025). A societal perspective was used; future outcomes were discounted at 3%. Data were analyzed in 2014. Effectiveness, implementation, and equity issues were reviewed. Population reach varied widely, and cost per BMI change ranged from $1.16 (TV AD) to $401 (Active PE). At 10 years, assuming maintenance of the intervention effect, three interventions would save net costs, with SSB and TV AD saving $55 and $38 for every dollar spent. The SSB intervention would avert disability-adjusted life years, and both SSB and TV AD would increase quality-adjusted life years. Both SSB ($12.5 billion) and TV AD ($80 million) would produce yearly tax revenue. The cost effectiveness of these preventive interventions is greater than that seen for published clinical interventions to treat obesity. Cost-effectiveness evaluations of childhood obesity interventions can provide decision makers with information demonstrating best value for the money. Copyright © 2015 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.
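The 3% discounting of future outcomes mentioned above is standard present-value arithmetic; a minimal sketch follows, with a made-up stream of yearly net savings rather than CHOICES model outputs.

```python
def present_value(yearly_amounts, rate=0.03):
    """Discount a stream of future yearly amounts to present value.
    Year 0 is undiscounted; year t is divided by (1 + rate)**t."""
    return sum(a / (1.0 + rate) ** t for t, a in enumerate(yearly_amounts))

# Hypothetical: $100M in net savings each year for 10 years (2015-2025),
# discounted at the 3% rate used in the paper.
pv = present_value([100.0] * 10, rate=0.03)
print(round(pv, 1))  # less than the undiscounted total of 1000.0
```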
NASA Astrophysics Data System (ADS)
Schnepp, Elisabeth; Lanos, Philippe; Chauvin, Annick
2009-08-01
Geomagnetic paleointensities have been determined from a single archaeological site in Lübeck, Germany, where a sequence of 25 bread oven floors has been preserved in a bakery from medieval times until today. Age dating confines the time interval from about 1300 A.D. to about 1750 A.D. Paleomagnetic directions have been published from each oven floor and are updated here. The specimens have very stable directions and no or only weak secondary components. The oven floor material was characterized rock magnetically using Thellier viscosity indices, median destructive field values, Curie point determinations, and hysteresis measurements. Magnetic carriers are mixtures of SD, PSD, and minor MD magnetite and/or maghemite together with small amounts of hematite. Paleointensity was measured from selected specimens with the double-heating Thellier method including pTRM checks and determination of TRM anisotropy tensors. Corrections for anisotropy as well as for cooling rate turned out to be unnecessary. Ninety-two percent of the Thellier experiments passed the assigned acceptance criteria and provided four to six reliable paleointensity estimates per oven floor. Mean paleointensity values derived from 22 oven floors show maxima in the 15th and early 17th centuries A.D., followed by a decrease of paleointensity of about 20% until 1750 A.D. Together with the directions the record represents about 450 years of full vector secular variation. The results compare well with historical models of the Earth's magnetic field as well as with a selected high-quality paleointensity data set for western and central Europe.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gallegos, A.F.; Gonzales, G.J.; Bennett, K.D.
1997-06-01
The Endangered Species Act and the Record of Decision on the Dual Axis Radiographic Hydrodynamic Test Facility at the Los Alamos National Laboratory require protection of the American peregrine falcon. A preliminary risk assessment of the peregrine was performed using a custom FORTRAN model and a geographical information system. Estimated doses to the falcon were compared against toxicity reference values to generate hazard indices. Hazard index results indicated no unacceptable risk to the falcon from the soil ingestion pathway, including a measure of cumulative effects from multiple contaminants that assumes a linear additive toxicity type. Scaling home ranges on the basis of maximizing falcon height for viewing prey decreased estimated risk by 69% in a canyons-based home range and increased estimated risk by 40% in a river-based home range. Improving model realism by weighting simulated falcon foraging based on distance from potential nest sites decreased risk by 93% in one exposure unit and by 82% in a second exposure unit. It was demonstrated that the choice of toxicity reference values can have a substantial impact on risk estimates. Adding bioaccumulation factors for several organics increased partial hazard quotients by a factor of 110, but increased the mean hazard index by only 0.02 units. Adding a food consumption exposure pathway in the form of biomagnification factors for 15 contaminants of potential ecological concern increased the mean hazard index to 1.16 (± 1.0), which is above the level of acceptability (1.0). Aroclor-1254, dichlorodiphenyltrichloroethane (DDT) and dichlorodiphenyldichloroethylene (DDE) accounted for 81% of the estimated risk that includes soil ingestion and food consumption contaminant pathways and a biomagnification component. Information on risk by specific geographical location was generated, which can be used to manage contaminated areas, falcon habitat, facility siting, and/or facility operations.
123 refs., 10 figs., 2 tabs.
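The hazard-index screening logic described above (estimated dose divided by a toxicity reference value per contaminant, summed under a linear additive toxicity assumption) can be sketched as follows; the contaminant doses and TRVs are hypothetical, not values from the assessment.

```python
def hazard_quotients(doses, trvs):
    """Hazard quotient per contaminant: estimated dose / toxicity reference
    value (TRV), both in the same units (e.g. mg/kg/day)."""
    return {c: doses[c] / trvs[c] for c in doses}

def hazard_index(doses, trvs):
    """Linear additive toxicity: HI = sum of hazard quotients.
    HI > 1.0 flags potentially unacceptable risk."""
    return sum(hazard_quotients(doses, trvs).values())

# Hypothetical doses and TRVs (illustrative only; the contaminant names are
# the three the assessment found dominated risk):
doses = {"Aroclor-1254": 0.004, "DDT": 0.001, "DDE": 0.002}
trvs = {"Aroclor-1254": 0.005, "DDT": 0.005, "DDE": 0.01}
hi_total = hazard_index(doses, trvs)
print(round(hi_total, 2), hi_total > 1.0)  # 1.2 True: above the 1.0 level
```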
16 CFR 460.18 - Insulation ads.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Insulation ads. 460.18 Section 460.18... INSULATION § 460.18 Insulation ads. (a) If your ad gives an R-value, you must give the type of insulation and... the R-value, the greater the insulating power. Ask your seller for the fact sheet on R-values.” (b) If...
16 CFR 460.18 - Insulation ads.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 16 Commercial Practices 1 2014-01-01 2014-01-01 false Insulation ads. 460.18 Section 460.18... INSULATION § 460.18 Insulation ads. (a) If your ad gives an R-value, you must give the type of insulation and... the R-value, the greater the insulating power. Ask your seller for the fact sheet on R-values.” (b) If...
16 CFR 460.18 - Insulation ads.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 16 Commercial Practices 1 2013-01-01 2013-01-01 false Insulation ads. 460.18 Section 460.18... INSULATION § 460.18 Insulation ads. (a) If your ad gives an R-value, you must give the type of insulation and... the R-value, the greater the insulating power. Ask your seller for the fact sheet on R-values.” (b) If...
16 CFR 460.18 - Insulation ads.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 16 Commercial Practices 1 2012-01-01 2012-01-01 false Insulation ads. 460.18 Section 460.18... INSULATION § 460.18 Insulation ads. (a) If your ad gives an R-value, you must give the type of insulation and... the R-value, the greater the insulating power. Ask your seller for the fact sheet on R-values.” (b) If...
16 CFR 460.18 - Insulation ads.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 16 Commercial Practices 1 2011-01-01 2011-01-01 false Insulation ads. 460.18 Section 460.18... INSULATION § 460.18 Insulation ads. (a) If your ad gives an R-value, you must give the type of insulation and... the R-value, the greater the insulating power. Ask your seller for the fact sheet on R-values.” (b) If...
Freixas Sepúlveda, Alejandra; Díaz Narváez, Víctor Patricio; Durán Agüero, Samuel; Gaete Verdugo, María Cristina
2013-01-01
In order to analyze the usual vitamin consumption of an adolescent and young adult population in the Metropolitan Region, 213 foods fortified with vitamins on the Chilean market were studied. A consumption survey was administered and nutrient intake was calculated, including the vitamins added to foods. The normality of the intake variables was assessed, the data were subjected to descriptive statistical analysis, and percentiles were determined. The percentages of subjects whose values exceeded those fixed for the DDR and UL were estimated for each vitamin, along with the percentage of excess in each case. Discriminant analysis was performed using Box's M test. The canonical correlation and Wilks' statistic were estimated. Finally, the percentage of correctly classified data was estimated. Data were processed with SPSS 20.0 at a significance level of α ≤ 0.05. The results indicate that, of all the studied vitamins, the highest percentage of subjects exceeding the DDR is for total folate (96.4%) and the lowest percentage is for vitamins E and B12 in young adult women. The percentage of subjects who exceed the UL values is greatest for vitamin B3 (91.9%). According to the canonical correlation, there are differences in behavior between the groups. It is recommended to monitor the consumption of foods fortified with vitamins, especially those of the B complex and vitamin A. Copyright © AULA MEDICA EDICIONES 2013. Published by AULA MEDICA. All rights reserved.
Internal variation of electron temperature in HII regions
NASA Astrophysics Data System (ADS)
Oliveira, V. A.
2017-11-01
It is usual to think that if you calculate the same physical property by different methods you should find the same result, or at least one within the margin of error. However, this is not the case when you calculate the abundance of heavy elements in photoionized nebulae. In fact, it is possible to find a value at least two times larger, depending on whether you estimate it from recombination lines or from collisionally excited emission lines. This is called the abundance discrepancy (AD) problem; astronomers have been thinking about it since 1967, and there is still no final conclusion. This work aims to shed a small light on the path toward a solution of the AD problem, specifically for HII regions and, perhaps, for all types of photoionized nebulae.
CARA Status and Upcoming Enhancements
NASA Technical Reports Server (NTRS)
Johnson, Megan
2017-01-01
CAS 8.4.3 was deployed to operations on 13 June 2017. Discrepancies between 3D Pc estimates and advanced Monte Carlo equinoctial-sampling Pc estimates were discovered and discussed at the 23 May 2017 Users' Forum. The patch created the Reporting Pc, defined as the greater of the calculated 2D and 3D Pc values. It changed the Pc reported in the CDMs, on the Summary Report, and on the Maneuver Screening Analysis (MSA) Report to the Reporting Pc, and added both the 2D and 3D Pc to the Summary Report details section. The patch also updated the 3D Pc algorithm to eliminate velocity covariance from the Pc calculation, which will bring the 2D and 3D Pc into close alignment for the vast majority of events, particularly those in which the 2D/3D discrepancy was found.
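The Reporting Pc described in this abstract reduces to taking the more conservative (larger) of the two probability estimates; a minimal sketch (the function name is illustrative, not from the CAS codebase):

```python
def reporting_pc(pc_2d, pc_3d):
    """Reporting Pc as described in the abstract: the greater of the
    calculated 2D and 3D collision probability (Pc) values."""
    return max(pc_2d, pc_3d)

# For a discrepant pair, the larger (more conservative) value is reported.
assert reporting_pc(1.2e-5, 3.4e-6) == 1.2e-5
assert reporting_pc(3.4e-6, 1.2e-5) == 1.2e-5
```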
Recall of health warnings in smokeless tobacco ads
Truitt, L; Hamilton, W; Johnston, P; Bacani, C; Crawford, S; Hozik, L; Celebucki, C
2002-01-01
Design: Subjects examined two distracter ads and one of nine randomly assigned smokeless tobacco ads varying in health warning presence, size (8 to 18 point font), and contrast (low versus high)—including no health warning. They were then interviewed about ad content using recall and recognition questions. Subjects: A convenience sample of 895 English speaking males aged 16–24 years old who were intercepted at seven shopping malls throughout Massachusetts during May 2000. Main outcome measures: Proven aided recall, or recall of a health warning and correct recognition of the warning message among distracters, and false recall. Results: Controlling for covariates such as education, employment/student status, and Hispanic background, proven aided recall increased significantly with font size; doubling size from 10 to 20 point font would increase recall from 63% to 76%. Although not statistically significant, recall was somewhat better for high contrast warnings. Ten per cent of the sample mistakenly recalled the warning where none existed. Conclusions: As demonstrated by substantially greater recall among ads that included health warnings over ads that had none, health warnings retained their value to consumers despite years of exposure (that can produce false recall). Larger health warnings would enhance recall, and the proposed model can be used to estimate potential recall that affects communication, perceived health risk, and behaviour modification. PMID:12034984
Keveson, Benjamin; Clouser, Ryan D; Hamlin, Mark P; Stevens, Pamela; Stinnett-Donnelly, Justin M; Allen, Gilman B
2017-01-01
Chest X-rays (CXRs) are traditionally obtained daily in all patients on invasive mechanical ventilation (IMV) in the intensive care unit (ICU). We sought to reduce overutilisation of CXRs obtained in the ICU, using a multifaceted intervention to eliminate automated daily studies. We first educated ICU staff about the low diagnostic yield of automated daily CXRs, then removed the 'daily' option from the electronic health records-based ordering system, and added a query (CXR indicated or not indicated) to the ICU daily rounding checklist to prompt a CXR order when clinically warranted. We built a report from billing codes, focusing on all CXRs obtained on IMV census days in the medical (MICU) and surgical (SICU) ICUs, excluding the day of admission and days that a procedure warranting CXR was performed. This generated the number of CXRs obtained every 1000 'included' ventilator days (IVDs), the latter defined as not having an 'absolute' clinical indication for CXR. The average monthly number of CXRs on an IVD decreased from 919±90 (95% CI 877 to 963) to 330±87 (95% CI 295 to 354) per 1000 IVDs in the MICU, and from 995±69 (95% CI 947 to 1055) to 649±133 (95% CI 593 to 697) in the SICU. This yielded an estimated 1830 to 2066 CXRs avoided over 2 years and an estimated annual savings of $191 600 to $224 200. There was no increase in reported adverse events. ICUs can safely transition to a higher value strategy of indication-based chest imaging by educating staff, eliminating the 'daily' order option and adding a simplified prompt to avoid missing clinically indicated CXRs.
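The utilisation metric in this study is a simple rate per 1000 included ventilator days; a sketch of the arithmetic using the abstract's MICU figures (the helper name is illustrative):

```python
def cxr_rate_per_1000_ivd(n_cxrs, n_ivds):
    """Chest X-rays per 1000 included ventilator days (IVDs)."""
    return 1000.0 * n_cxrs / n_ivds

assert cxr_rate_per_1000_ivd(330, 1000) == 330.0

# MICU monthly rate fell from ~919 to ~330 per 1000 IVDs,
# roughly a 64% relative reduction.
relative_reduction = (919 - 330) / 919
assert abs(relative_reduction - 0.64) < 0.01
```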
Topics in two-body hadronic decays of D mesons
NASA Astrophysics Data System (ADS)
El Aaoud, El Hassan
We have carried out an analysis of helicity and partial-wave amplitudes for the decay of D mesons to two vector mesons V1V2, D --> V1V2. In particular, we have studied the Cabibbo-favored decays Ds+ --> ρφ and D --> K*ρ in the factorization approximation using several models for the form factors. All the models, with the exception of one, generate partial-wave amplitudes with the hierarchy |S| > |P| > |D|. Even though in most models the D-wave amplitude is an order of magnitude smaller than the S-wave amplitude, its effect on the longitudinal polarization could be as large as 30%. Due to a misidentification of the partial-wave amplitudes in terms of the Lorentz structures in the relevant literature, we cast doubt on the veracity of the listed data for the decay D --> K*ρ, particularly the partial-wave branching ratios. We have also investigated the effect of the isospin 1/2, JP = 0+ resonant state K*0(1950) on the decays D0 --> K¯0η and D0 --> K¯0η' as a function of the branching ratio sum r = Br(K*0(1950) --> K¯0η) + Br(K*0(1950) --> K¯0η') and the coupling constants gK*0K¯0η and gK*0K¯0η'. We have used a factorized input for the D0 --> K*0(1950) weak transition through a πK loop. We estimated both on- and off-shell contributions from the loop. Our calculation shows that the off-shell effects are significant. For r >= 30%, a fit to the decay amplitude |A(D0 --> K¯0η')| was possible, but the amplitude A(D0 --> K¯0η) remained at its factorized value and hence gave a branching ratio too low compared to data. For small values of r, r <= 18%, we were able to fit |A(D0 --> K¯0η)|, and although |A(D0 --> K¯0η')| could be raised by almost 100% over its factorized value, it still falls short of its experimental value. A simultaneous fit to both amplitudes |A(D0 --> K¯0η')| and |A(D0 --> K¯0η)| was not possible. We have also determined the strong phases of the resonant amplitudes for both decays.
NASA Astrophysics Data System (ADS)
Larson, Robert Sherman
An Unmanned Aerial Vehicle (UAV) and a manned aircraft are tracked using ADS-B transponders and the Local Area Multilateration System (LAMS) in simulated GPS-degraded and GPS-denied environments. Several position estimation and fusion algorithms are developed for use with the Autonomous Flight Systems Laboratory (AFSL) TRansponder based Position Information System (TRAPIS) software. At the lowest level, these estimation and fusion algorithms use raw information from ADS-B and LAMS data streams to provide aircraft position estimates to the ground station user. At the highest level, aircraft position is estimated using a discrete time Kalman filter with real-time covariance updates and fusion involving weighted averaging of ADS-B and LAMS positions. Simulation and flight test results are provided, demonstrating the feasibility of incorporating an ADS-B transponder on a commercially-available UAS and maintaining situational awareness of aircraft positions in GPS-degraded and GPS-denied environments.
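The fusion step can be illustrated with an inverse-variance weighted average of two scalar position estimates. The full TRAPIS pipeline uses a discrete-time Kalman filter with real-time covariance updates, so this is only a sketch with illustrative numbers:

```python
def fuse(pos_a, var_a, pos_b, var_b):
    """Inverse-variance weighted average of two position estimates,
    e.g. one from ADS-B and one from LAMS. The variances stand in for
    the covariances a Kalman filter would maintain in real time."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * pos_a + w_b * pos_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

pos, var = fuse(100.0, 4.0, 104.0, 4.0)
assert pos == 102.0   # equal variances -> simple average
assert var == 2.0     # fused estimate is less uncertain than either input
```

When one sensor degrades (e.g. GPS-denied ADS-B), its variance grows and its weight in the fused estimate shrinks accordingly.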
Code of Federal Regulations, 2010 CFR
2010-04-01
... denominator of which is the customs value of the item and adding this amount to the customs value of the... the denominator of which is the customs value of the item and adding this amount to the customs value... customs value of the item and adding this amount to the customs value of the dutiable portion of the item...
Code of Federal Regulations, 2013 CFR
2013-04-01
... denominator of which is the customs value of the item and adding this amount to the customs value of the... the denominator of which is the customs value of the item and adding this amount to the customs value... customs value of the item and adding this amount to the customs value of the dutiable portion of the item...
Code of Federal Regulations, 2012 CFR
2012-04-01
... denominator of which is the customs value of the item and adding this amount to the customs value of the... the denominator of which is the customs value of the item and adding this amount to the customs value... customs value of the item and adding this amount to the customs value of the dutiable portion of the item...
Code of Federal Regulations, 2014 CFR
2014-04-01
... denominator of which is the customs value of the item and adding this amount to the customs value of the... the denominator of which is the customs value of the item and adding this amount to the customs value... customs value of the item and adding this amount to the customs value of the dutiable portion of the item...
Code of Federal Regulations, 2011 CFR
2011-04-01
... denominator of which is the customs value of the item and adding this amount to the customs value of the... the denominator of which is the customs value of the item and adding this amount to the customs value... customs value of the item and adding this amount to the customs value of the dutiable portion of the item...
Patient-centered care as value-added service by compounding pharmacists.
McPherson, Timothy B; Fontane, Patrick E; Day, Jonathan R
2013-01-01
The term "value-added" is widely used to describe business and professional services that complement a product or service or that differentiate it from competing products and services. The objective of this study was to determine compounding pharmacists' self-perceptions of the value-added services they provide. A web-based survey method was used. Respondents' perceptions of their most important value-added service frequently fell into one of two categories: (1) enhanced pharmacist contribution to developing and implementing patient therapeutic plans and (2) providing customized medications of high pharmaceutical quality. The results were consistent with a hybrid community clinical practice model for compounding pharmacists wherein personalization of the professional relationship is the value-added characteristic.
Sun, Yan; Shi, Jiajun; Zhang, Sizhong; Tang, Mouni; Han, Haiying; Guo, Yangbo; Ma, Cui; Liu, Xiehe; Li, Tao
2005-06-03
In order to clarify the relationship of apolipoprotein CIII (APOC3) polymorphism and sporadic Alzheimer's disease (AD) in Chinese, 165 sporadic AD patients and 174 age-matched elderly individuals were genotyped for the APOC3 SstI and apolipoprotein E (APOE) HhaI polymorphisms. As the result, the APOC3 3017G allele was found to be associated with AD in APOE epsilon4 allele noncarriers (chi2=4.433, P=0.035), and the risk estimate of allele C versus G resulted in an OR of 1.56 (95% CI: 1.03-2.37), although in total no significant differences of allelic or genotypic frequencies between patients and controls were found. Assessment of interaction between APOE epsilon4 and APOC3 3017G status presented an adjusted odds ratio of 0.62 (95% CI: 0.37-1.03) with a borderline significant P-value (P=0.066). Therefore, we conclude that the rare APOC3 G allele may offer some protection against the development of sporadic AD in APOE epsilon4 noncarriers in Chinese.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martinez, M.; Campion, D.; Babron, M.C.
1996-02-16
Segregation analysis of Alzheimer disease (AD) in 92 families ascertained through early-onset (age ≤ 60 years) AD (EOAD) probands has been carried out, allowing for a mixture in AD inheritance among probands. The goal was to quantify the proportion of probands that could be explained by autosomal inheritance of a rare disease allele "a" at a Mendelian dominant gene (MDG). Our data provide strong evidence for a mixture of two distributions; AD transmission is fully explained by MDG inheritance in <20% of probands. Male and female age-of-onset distributions are significantly different for "AA" but not for "aA" subjects. For "aA" subjects the estimated penetrance value was close to 1 by age 60. For "AA" subjects, it reaches, by age 90, 10% (males) and 30% (females). We show a clear cutoff in the posterior probability of being an MDG case. 10 refs., 1 tab.
SMOKE-FREE ORDINANCES INCREASE RESTAURANT PROFIT AND VALUE
Alamar, Benjamin C.
2011-01-01
This study estimates the economic value added to a restaurant by a smoke-free policy using regression analysis of the purchase price of restaurants, as a function of the presence of a smoke-free law and other control variables. There was a median increase of 16% (interquartile range 11% to 25%) in the sale price of a restaurant in a jurisdiction with a smoke-free law compared to a comparable restaurant in a community without such a law. This result indicates that, contrary to claims made by the tobacco industry and other opponents of smoke-free laws, these laws are associated with an increase in restaurant profitability. PMID:21637722
Garabedian, Stephen P.
1986-01-01
A nonlinear, least-squares regression technique for the estimation of ground-water flow model parameters was applied to the regional aquifer underlying the eastern Snake River Plain, Idaho. The technique uses a computer program to simulate two-dimensional, steady-state ground-water flow. Hydrologic data for the 1980 water year were used to calculate recharge rates, boundary fluxes, and spring discharges. Ground-water use was estimated from irrigated land maps and crop consumptive-use figures. These estimates of ground-water withdrawal, recharge rates, and boundary flux, along with leakance, were used as known values in the model calibration of transmissivity. Leakance values were adjusted between regression solutions by comparing model-calculated to measured spring discharges. In other simulations, recharge and leakance also were calibrated as prior-information regression parameters, which limits the variation of these parameters using a normalized standard error of estimate. Results from a best-fit model indicate a wide areal range in transmissivity, from about 0.05 to 44 feet squared per second, and in leakance, from about 2.2×10^-9 to 6.0×10^-8 feet per second per foot. Along with parameter values, model statistics also were calculated, including the coefficient of correlation between calculated and observed head (0.996), the standard error of the estimates for head (40 feet), and the parameter coefficients of variation (about 10-40 percent). Additional boundary flux was added in some areas during calibration to achieve proper fit to ground-water flow directions. Model fit improved significantly when areas that violated model assumptions were removed. It also improved slightly when y-direction (northwest-southeast) transmissivity values were larger than x-direction (northeast-southwest) transmissivity values.
The model was most sensitive to changes in recharge, and in some areas, to changes in transmissivity, particularly near the spring discharge area from Milner Dam to King Hill.
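The two fit statistics quoted in this abstract, the correlation between calculated and observed head and the standard error of estimate for head, can be sketched as follows; the head values below are illustrative, not the study's data:

```python
import math

def fit_stats(observed, simulated):
    """Correlation coefficient and standard error of estimate between
    observed and model-calculated heads (values here are illustrative)."""
    n = len(observed)
    mo = sum(observed) / n
    ms = sum(simulated) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(observed, simulated)) / n
    vo = sum((o - mo) ** 2 for o in observed) / n
    vs = sum((s - ms) ** 2 for s in simulated) / n
    r = cov / math.sqrt(vo * vs)
    se = math.sqrt(sum((o - s) ** 2 for o, s in zip(observed, simulated)) / n)
    return r, se

obs = [100.0, 110.0, 120.0, 130.0]
sim = [101.0, 109.0, 121.0, 129.0]
r, se = fit_stats(obs, sim)
assert r > 0.99    # close fit, as in the study's reported 0.996
assert se == 1.0   # root-mean-square head residual, in the same units as head
```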
Recall of health warnings in smokeless tobacco ads.
Truitt, Linda; Hamilton, William L; Johnston, P R; Bacani, C P; Crawford, S O; Hozik, L; Celebucki, Carolyn
2002-06-01
To determine the effects of health warning characteristics in smokeless tobacco magazine print ads on warning recall, and the implications for current US Federal regulations. Subjects examined two distracter ads and one of nine randomly assigned smokeless tobacco ads varying in health warning presence, size (8 to 18 point font), and contrast (low versus high)-including no health warning. They were then interviewed about ad content using recall and recognition questions. A convenience sample of 895 English speaking males aged 16-24 years old who were intercepted at seven shopping malls throughout Massachusetts during May 2000. Proven aided recall, or recall of a health warning and correct recognition of the warning message among distracters, and false recall. Controlling for covariates such as education, employment/student status, and Hispanic background, proven aided recall increased significantly with font size; doubling size from 10 to 20 point font would increase recall from 63% to 76%. Although not statistically significant, recall was somewhat better for high contrast warnings. Ten per cent of the sample mistakenly recalled the warning where none existed. As demonstrated by substantially greater recall among ads that included health warnings over ads that had none, health warnings retained their value to consumers despite years of exposure (that can produce false recall). Larger health warnings would enhance recall, and the proposed model can be used to estimate potential recall that affects communication, perceived health risk, and behaviour modification.
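The quoted font-size effect can be reproduced with a two-point logistic model on a log2 (per-doubling) scale. The coefficients below are solved from the abstract's 63% and 76% figures and are illustrative, not the study's fitted parameters:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

# Solve p(size) = sigmoid(b0 + b1 * log2(size)) so that
# p(10) = 0.63 and p(20) = 0.76, the values quoted in the abstract.
b1 = logit(0.76) - logit(0.63)      # log-odds change per doubling of font size
b0 = logit(0.63) - b1 * math.log2(10)

def recall_prob(font_size):
    """Modeled probability of proven aided recall at a given font size."""
    z = b0 + b1 * math.log2(font_size)
    return 1.0 / (1.0 + math.exp(-z))

assert abs(recall_prob(10) - 0.63) < 1e-9
assert abs(recall_prob(20) - 0.76) < 1e-9
```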
Wardak, Mirwais; Wong, Koon-Pong; Shao, Weber; Dahlbom, Magnus; Kepe, Vladimir; Satyamurthy, Nagichettiar; Small, Gary W.; Barrio, Jorge R.; Huang, Sung-Cheng
2010-01-01
Head movement during a PET scan (especially a dynamic scan) can affect both the qualitative and quantitative aspects of an image, making it difficult to accurately interpret the results. The primary objective of this study was to develop a retrospective image-based movement correction (MC) method and evaluate its implementation on dynamic [18F]-FDDNP PET images of cognitively intact controls and patients with Alzheimer's disease (AD). Methods: Dynamic [18F]-FDDNP PET images, used for in vivo imaging of beta-amyloid plaques and neurofibrillary tangles, were obtained from 12 AD patients and 9 age-matched controls. For each study, a transmission scan was first acquired for attenuation correction. An accurate retrospective MC method that corrected for transmission-emission misalignment as well as emission-emission misalignment was applied to all studies. Zero movement between the transmission scan and the first emission scan was not assumed. Logan analysis with cerebellum as the reference region was used to estimate regional distribution volume ratio (DVR) values in the brain before and after MC. Discriminant analysis was used to build a predictive model for group membership, using data with and without MC. Results: MC improved the image quality and quantitative values in [18F]-FDDNP PET images. In this subject population, the medial temporal lobe (MTL) did not show a significant difference between controls and AD patients before MC. After MC, however, significant differences in DVR values were seen in frontal, parietal, posterior cingulate (PCG), MTL, lateral temporal (LTL), and global regions between the two groups (P < 0.05). In controls and AD patients, the variability of regional DVR values (as measured by the coefficient of variation) decreased on average by >18% after MC. The mean DVR separation between controls and AD patients was higher in frontal, MTL, LTL, and global regions after MC. Group classification by discriminant analysis based on [18F]-FDDNP DVR values was markedly improved after MC.
Conclusion: The streamlined and easy-to-use MC method presented in this work significantly improves the image quality and the measured tracer kinetics of [18F]-FDDNP PET images. The proposed MC method has the potential to be applied to PET studies of patients with other disorders (e.g., Down syndrome and Parkinson's disease) and to brain PET scans with other molecular imaging probes. PMID:20080894
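The variability measure used in this study, the coefficient of variation of regional DVR values, is simply the standard deviation divided by the mean; a sketch with illustrative DVR values (not the study's data):

```python
import math

def coefficient_of_variation(values):
    """CV = sample standard deviation / mean, the variability measure
    applied to regional DVR values in the abstract."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return sd / mean

# Illustrative regional DVRs before and after movement correction:
before = [1.10, 1.30, 0.95, 1.25]
after = [1.12, 1.20, 1.08, 1.18]
# MC tightens the spread, so the CV decreases.
assert coefficient_of_variation(after) < coefficient_of_variation(before)
```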
2009 Alzheimer's disease facts and figures.
2009-05-01
Alzheimer's disease (AD) is the sixth leading cause of all deaths in the United States, and the fifth leading cause of death in Americans aged 65 and older. Whereas other major causes of death have been on the decrease, deaths attributable to AD have been rising dramatically. Between 2000 and 2006, heart-disease deaths decreased nearly 12%, stroke deaths decreased 18%, and prostate cancer-related deaths decreased 14%, whereas deaths attributable to AD increased 47%. An estimated 5.3 million Americans have AD; the approximately 200,000 persons under age 65 years with AD comprise the younger-onset AD population. Every 70 seconds, someone in America develops AD; by 2050, this time is expected to decrease to every 33 seconds. Over the coming decades, the "baby-boom" population is projected to add 10 million people to these numbers. In 2050, the incidence of AD is expected to approach nearly a million people per year, with a total estimated prevalence of 11 to 16 million people. Significant cost implications related to AD and other dementias include an estimated $148 billion annually in direct (Medicare/Medicaid) and indirect (e.g., decreased business productivity) costs. Not included in these figures is the $94 billion in unpaid services to individuals with AD provided annually by an estimated 10 million caregivers. Mild cognitive impairment (MCI) is an important component in the continuum from healthy cognition to dementia. Understanding which individuals with MCI are at highest risk for eventually developing AD is key to our ultimate goal of preventing AD. This report provides information meant to increase an understanding of the public-health impact of AD, including incidence and prevalence, mortality, lifetime risks, costs, and impact on family caregivers. This report also sets the stage for a better understanding of the relationship between MCI and AD.
Disrupted Structural Brain Network in AD and aMCI: A Finding of Long Fiber Degeneration.
Fang, Rong; Yan, Xiao-Xiao; Wu, Zhi-Yuan; Sun, Yu; Yin, Qi-Hua; Wang, Ying; Tang, Hui-Dong; Sun, Jun-Feng; Miao, Fei; Chen, Sheng-Di
2015-01-01
Although recent evidence has emerged that Alzheimer's disease (AD) and amnestic mild cognitive impairment (aMCI) patients show both regional brain abnormalities and topological degeneration in brain networks, our understanding of the effects of white matter fiber aberrations on brain network topology in AD and aMCI is still rudimentary. In this study, we investigated the regional volumetric aberrations and the global topological abnormalities in AD and aMCI patients. The results showed widely distributed atrophy in both gray and white matter in the AD and aMCI groups. In particular, AD patients had weaker connectivity with long fiber length than the aMCI and normal control (NC) groups, as assessed by fractional anisotropy (FA). Furthermore, the brain networks of all three groups exhibited prominent economical small-world properties. Interestingly, the topological characteristics estimated from binary brain networks showed no significant group effect, indicating a tendency to preserve an optimal topological architecture in AD and aMCI during degeneration. However, significantly longer characteristic path length was observed in the FA-weighted brain networks of AD and aMCI patients, suggesting dysfunctional global integration. Moreover, the abnormality of the characteristic path length was negatively correlated with the clinical ratings of cognitive impairment. The results therefore suggest that the topological alterations in weighted brain networks of AD are induced by the loss of connectivity with long fiber lengths. Our findings provide new insights into the alterations of the brain network in AD and may indicate the predictive value of the network metrics as biomarkers of disease development.
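Characteristic path length, the graph metric found to be abnormal here, is the mean shortest-path length over all node pairs. A minimal sketch on a toy weighted graph, with edge weights standing in for FA-derived connection lengths (the graph is illustrative):

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path lengths from src in a weighted undirected graph
    given as {node: [(neighbor, weight), ...]}."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def characteristic_path_length(graph):
    """Mean shortest-path length over all node pairs (connected graph)."""
    nodes = list(graph)
    total, pairs = 0.0, 0
    for u in nodes:
        dist = dijkstra(graph, u)
        for v in nodes:
            if v != u:
                total += dist[v]
                pairs += 1
    return total / pairs

# Toy 4-node network; longer paths mean worse global integration.
g = {
    "A": [("B", 1.0), ("C", 2.0)],
    "B": [("A", 1.0), ("D", 1.0)],
    "C": [("A", 2.0), ("D", 1.0)],
    "D": [("B", 1.0), ("C", 1.0)],
}
L = characteristic_path_length(g)
assert abs(L - 1.5) < 1e-9
```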
Relation of genomic variants for Alzheimer disease dementia to common neuropathologies
Yu, Lei; Buchman, Aron S.; Schneider, Julie A.; De Jager, Philip L.; Bennett, David A.
2016-01-01
Objective: To investigate the associations of previously reported Alzheimer disease (AD) dementia genomic variants with common neuropathologies. Methods: This is a postmortem study including 1,017 autopsied participants from 2 clinicopathologic cohorts. Analyses focused on 22 genomic variants associated with AD dementia in large-scale case-control genome-wide association study (GWAS) meta-analyses. The neuropathologic traits of interest were a pathologic diagnosis of AD according to NIA-Reagan criteria, macroscopic and microscopic infarcts, Lewy bodies (LB), and hippocampal sclerosis. For each variant, multiple logistic regression was used to investigate its association with neuropathologic traits, adjusting for age, sex, and subpopulation structure. We also conducted power analyses to estimate the sample sizes required to detect genome-wide significance (p < 5 × 10−8) for pathologic AD for all variants. Results: APOE ε4 allele was associated with greater odds of pathologic AD (odds ratio [OR] 3.82, 95% confidence interval [CI] 2.67–5.46, p = 1.9 × 10−13), while ε2 allele was associated with lower odds of pathologic AD (OR 0.42, 95% CI 0.30–0.61, p = 3.1 × 10−6). Four additional genomic variants including rs6656401 (CR1), rs1476679 (ZCWPW1), rs35349669 (INPP5D), and rs17125944 (FERMT2) had p values less than 0.05. Remarkably, half of the previously reported AD dementia variants are not likely to be detected for association with pathologic AD with a sample size in excess of the largest GWAS meta-analyses of AD dementia. Conclusions: Many recently discovered genomic variants for AD dementia are not associated with the pathology of AD. Some genomic variants for AD dementia appear to be associated with other common neuropathologies. PMID:27371493
Relation of genomic variants for Alzheimer disease dementia to common neuropathologies.
Farfel, Jose M; Yu, Lei; Buchman, Aron S; Schneider, Julie A; De Jager, Philip L; Bennett, David A
2016-08-02
To investigate the associations of previously reported Alzheimer disease (AD) dementia genomic variants with common neuropathologies. This is a postmortem study including 1,017 autopsied participants from 2 clinicopathologic cohorts. Analyses focused on 22 genomic variants associated with AD dementia in large-scale case-control genome-wide association study (GWAS) meta-analyses. The neuropathologic traits of interest were a pathologic diagnosis of AD according to NIA-Reagan criteria, macroscopic and microscopic infarcts, Lewy bodies (LB), and hippocampal sclerosis. For each variant, multiple logistic regression was used to investigate its association with neuropathologic traits, adjusting for age, sex, and subpopulation structure. We also conducted power analyses to estimate the sample sizes required to detect genome-wide significance (p < 5 × 10^-8) for pathologic AD for all variants. APOE ε4 allele was associated with greater odds of pathologic AD (odds ratio [OR] 3.82, 95% confidence interval [CI] 2.67-5.46, p = 1.9 × 10^-13), while ε2 allele was associated with lower odds of pathologic AD (OR 0.42, 95% CI 0.30-0.61, p = 3.1 × 10^-6). Four additional genomic variants including rs6656401 (CR1), rs1476679 (ZCWPW1), rs35349669 (INPP5D), and rs17125944 (FERMT2) had p values less than 0.05. Remarkably, half of the previously reported AD dementia variants are not likely to be detected for association with pathologic AD with a sample size in excess of the largest GWAS meta-analyses of AD dementia. Many recently discovered genomic variants for AD dementia are not associated with the pathology of AD. Some genomic variants for AD dementia appear to be associated with other common neuropathologies. © 2016 American Academy of Neurology.
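The odds ratios and confidence intervals above come from exponentiating logistic-regression coefficients; a sketch of the relationship, done as a purely illustrative round trip through the abstract's APOE ε4 result (OR 3.82, 95% CI 2.67-5.46):

```python
import math

def or_ci(beta, se, z=1.96):
    """Odds ratio and 95% CI from a logistic-regression coefficient beta
    with standard error se: exp(beta), exp(beta ± 1.96*se)."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Back out beta and se from the reported OR and CI (illustrative only):
beta = math.log(3.82)
se = (math.log(5.46) - math.log(2.67)) / (2 * 1.96)

or_, lo, hi = or_ci(beta, se)
assert abs(or_ - 3.82) < 1e-9
assert abs(lo - 2.67) < 0.1 and abs(hi - 5.46) < 0.15
```

Note the CI is symmetric on the log-odds scale, which is why 3.82 sits near the geometric (not arithmetic) mean of 2.67 and 5.46.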
Analytical Problems and Suggestions in the Analysis of Behavioral Economic Demand Curves.
Yu, Jihnhee; Liu, Liu; Collins, R Lorraine; Vincent, Paula C; Epstein, Leonard H
2014-01-01
Behavioral economic demand curves (Hursh, Raslear, Shurtleff, Bauman, & Simmons, 1988) are innovative approaches to characterize the relationships between consumption of a substance and its price. In this article, we investigate common analytical issues in the use of behavioral economic demand curves, which can cause inconsistent interpretations of demand curves, and then provide methodological suggestions to address those issues. We first demonstrate that log transformation with different added values for handling zeros changes model parameter estimates dramatically. Second, demand curves are often analyzed using an overparameterized model that results in an inefficient use of the available data and a lack of assessment of the variability among individuals. To address these issues, we apply a nonlinear mixed effects model based on multivariate error structures that has not previously been used to analyze behavioral economic demand curves in the literature. We also propose analytical formulas for the relevant standard errors of derived values such as Pmax, Omax, and elasticity. The proposed model stabilizes the derived values regardless of the added increment used and provides substantially smaller standard errors. We illustrate the data analysis procedure using data from a relative reinforcement efficacy study of simulated marijuana purchasing.
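The first analytical issue, sensitivity of the log transform to the increment added for zeros, is easy to see numerically: log(q + k) shifts zero observations far more than large ones, so different choices of k reshape the curve being fitted. A sketch with illustrative consumption data:

```python
import math

# Consumption data with a zero, as often occurs at high prices.
consumption = [0, 1, 5, 20]

def log_with_increment(qs, k):
    """log10(q + k), with k the added value used to handle zeros."""
    return [math.log10(q + k) for q in qs]

a = log_with_increment(consumption, 0.1)
b = log_with_increment(consumption, 1.0)

# The zero observation moves by a full log10 unit between the two choices...
assert abs((b[0] - a[0]) - 1.0) < 1e-9
# ...while a large observation barely moves, distorting the curve's shape
# and hence the fitted demand parameters.
assert (b[3] - a[3]) < 0.02
```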
ERIC Educational Resources Information Center
UCLA IDEA, 2012
2012-01-01
Value added measures (VAM) uses changes in student test scores to determine how much "value" an individual teacher has "added" to student growth during the school year. Some policymakers, school districts, and educational advocates have applauded VAM as a straightforward measure of teacher effectiveness: the better a teacher,…
Accurate segmentation framework for the left ventricle wall from cardiac cine MRI
NASA Astrophysics Data System (ADS)
Sliman, H.; Khalifa, F.; Elnakib, A.; Soliman, A.; Beache, G. M.; Gimel'farb, G.; Emam, A.; Elmaghraby, A.; El-Baz, A.
2013-10-01
We propose a novel, fast, robust, bi-directional coupled parametric deformable model to segment the left ventricle (LV) wall borders using first- and second-order visual appearance features. These features are embedded in a new stochastic external force that preserves the topology of LV wall to track the evolution of the parametric deformable models control points. To accurately estimate the marginal density of each deformable model control point, the empirical marginal grey level distributions (first-order appearance) inside and outside the boundary of the deformable model are modeled with adaptive linear combinations of discrete Gaussians (LCDG). The second order visual appearance of the LV wall is accurately modeled with a new rotationally invariant second-order Markov-Gibbs random field (MGRF). We tested the proposed segmentation approach on 15 data sets in 6 infarction patients using the Dice similarity coefficient (DSC) and the average distance (AD) between the ground truth and automated segmentation contours. Our approach achieves a mean DSC value of 0.926±0.022 and AD value of 2.16±0.60 compared to two other level set methods that achieve 0.904±0.033 and 0.885±0.02 for DSC; and 2.86±1.35 and 5.72±4.70 for AD, respectively.
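The Dice similarity coefficient used for evaluation above is 2|A∩B|/(|A|+|B|) over the two segmentation masks; a minimal sketch on toy pixel sets (the masks are illustrative):

```python
def dice(seg_a, seg_b):
    """Dice similarity coefficient between two binary masks represented
    as sets of pixel coordinates: 2|A ∩ B| / (|A| + |B|)."""
    inter = len(seg_a & seg_b)
    return 2.0 * inter / (len(seg_a) + len(seg_b))

# Toy automated and ground-truth masks sharing 3 of 4 pixels each:
auto = {(0, 0), (0, 1), (1, 0), (1, 1)}
truth = {(0, 1), (1, 0), (1, 1), (2, 1)}
assert dice(auto, truth) == 0.75   # 2*3 / (4 + 4)
assert dice(auto, auto) == 1.0     # perfect overlap
```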
Economics of movable interior blankets for greenhouses
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, G.B.; Fohner, G.R.; Albright, L.D.
1981-01-01
A model for evaluating the economic impact of investment in a movable interior blanket was formulated. The method of analysis was net present value (NPV), in which the discounted, after-tax cash flow of costs and benefits was computed for the useful life of the system. An added feature was a random number component which permitted any or all of the input parameters to be varied within a specified range. Results from 100 computer runs indicated that all of the NPV estimates generated were positive, showing that the investment was profitable. However, there was a wide range of NPV estimates, from $16.00/m^2 to $86.40/m^2, with a median value of $49.34/m^2. Key variables allowed to range in the analysis were: (1) the cost of fuel before the blanket is installed; (2) the percent fuel savings resulting from use of the blanket; (3) the annual real increase in the cost of fuel; and (4) the change in the annual value of the crop. The wide range in NPV estimates indicates the difficulty in making general recommendations regarding the economic feasibility of the investment when uncertainty exists as to the correct values for key variables in commercial settings. The results also point out needed research into the effect of the blanket on the crop, and on performance characteristics of the blanket.
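The analysis combines a standard NPV calculation with random draws over the uncertain inputs; a minimal sketch in that spirit, with all figures (install cost, savings range, discount rate, system life) assumed for illustration rather than taken from the study:

```python
import random

def npv(rate, cashflows):
    """Net present value of a series of annual cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Sanity check: at a 0% rate, NPV is just the sum of cash flows.
assert npv(0.0, [-10.0, 5.0, 5.0, 5.0]) == 5.0

# Monte Carlo over uncertain annual fuel savings, echoing the study's
# random-number component (100 runs); inputs here are illustrative.
random.seed(0)
results = []
for _ in range(100):
    annual_saving = random.uniform(8.0, 20.0)    # $/m^2/yr, assumed range
    cashflows = [-60.0] + [annual_saving] * 10   # install cost, 10-yr life
    results.append(npv(0.05, cashflows))

results.sort()
median_npv = results[50]
print(f"median NPV: {median_npv:.2f} $/m^2")
```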
A Brief History of Educational "Value-Added": How Did We Get to Where We Are?
ERIC Educational Resources Information Center
Saunders, Lesley
1999-01-01
Explains how and why the economics concept "value added" came to be used in an educational context, focusing on early usage in the United Kingdom. The term has been developed, used, and defined in various, conflicting ways. Some ambiguities cannot be eliminated. Value-added effectiveness measures involve value judgments. (44 references)…
Cost of tobacco-related diseases, including passive smoking, in Hong Kong.
McGhee, S M; Ho, L M; Lapsley, H M; Chau, J; Cheung, W L; Ho, S Y; Pow, M; Lam, T H; Hedley, A J
2006-04-01
Costs of tobacco-related disease can be useful evidence to support tobacco control. In Hong Kong we now have locally derived data on the risks of smoking, including passive smoking. To estimate the health-related costs of tobacco from both active and passive smoking. Using local data, we estimated active and passive smoking-attributable mortality, hospital admissions, outpatient, emergency and general practitioner visits for adults and children, use of nursing homes and domestic help, time lost from work due to illness and premature mortality in the productive years. Morbidity risk data were used where possible but otherwise estimates based on mortality risks were used. Utilisation was valued at unit costs or from survey data. Work time lost was valued at the median wage and an additional costing included a value of USD 1.3 million for a life lost. In the Hong Kong population of 6.5 million in 1998, the annual value of direct medical costs, long term care and productivity loss was USD 532 million for active smoking and USD 156 million for passive smoking; passive smoking accounted for 23% of the total costs. Adding the value of attributable lives lost brought the annual cost to USD 9.4 billion. The health costs of tobacco use are high and represent a net loss to society. Passive smoking increases these costs by at least a quarter. This quantification of the costs of tobacco provides strong motivation for legislative action on smoke-free areas in the Asia Pacific Region and elsewhere.
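The reported "passive smoking accounted for 23% of the total costs" follows directly from the two cost figures; a quick arithmetic check:

```python
# Annual cost figures from the study (USD, Hong Kong, 1998).
active, passive = 532e6, 156e6
total = active + passive
share = passive / total
print(f"{share:.1%}")  # ~22.7%, i.e. passive smoking is ~23% of total costs
```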
Expensing stock options: a fair-value approach.
Kaplan, Robert S; Palepu, Krishna G
2003-12-01
Now that companies such as General Electric and Citigroup have accepted the premise that employee stock options are an expense, the debate is shifting from whether to report options on income statements to how to report them. The authors present a new accounting mechanism that maintains the rationale underlying stock option expensing while addressing critics' concerns about measurement error and the lack of reconciliation to actual experience. A procedure they call fair-value expensing adjusts and eventually reconciles cost estimates made at grant date with subsequent changes in the value of the options, and it does so in a way that eliminates forecasting and measurement errors over time. The method captures the chief characteristic of stock option compensation--that employees receive part of their compensation in the form of a contingent claim on the value they are helping to produce. The mechanism involves creating entries on both the asset and equity sides of the balance sheet. On the asset side, companies create a prepaid-compensation account equal to the estimated cost of the options granted; on the owners'-equity side, they create a paid-in capital stock-option account for the same amount. The prepaid-compensation account is then expensed through the income statement, and the stock option account is adjusted on the balance sheet to reflect changes in the estimated fair value of the granted options. The amortization of prepaid compensation is added to the change in the option grant's value to provide the total reported expense of the options grant for the year. At the end of the vesting period, the company uses the fair value of the vested option to make a final adjustment on the income statement to reconcile any difference between that fair value and the total of the amounts already reported.
Subirats, Xavier; Bosch, Elisabeth; Rosés, Martí
2007-01-05
The use of methanol-aqueous buffer mobile phases in HPLC is a common choice when performing chromatographic separations of ionisable analytes. The addition of methanol to the aqueous buffer to prepare such a mobile phase changes the buffer capacity and the pH of the solution. In the present work, the variation of these buffer properties is studied for acetic acid-acetate, phosphoric acid-dihydrogenphosphate-hydrogenphosphate, citric acid-dihydrogencitrate-hydrogencitrate-citrate, and ammonium-ammonia buffers. It is well established that the pH change of the buffers depends on the initial concentration and aqueous pH of the buffer, on the percentage of methanol added, and on the particular buffer used. The proposed equations allow the pH estimation of methanol-water buffered mobile phases up to 80% in volume of organic modifier from the initial aqueous buffer pH and buffer concentration (before adding methanol) between 0.001 and 0.01 mol L(-1). From both the estimated pH values of the mobile phase and the estimated pKa of the ionisable analytes, it is possible to predict the degree of ionisation of the analytes and therefore to interpret acid-base analyte behaviour in a particular methanol-water buffered mobile phase.
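The final step — predicting the degree of ionisation from the mobile-phase pH and the analyte pKa — reduces to the Henderson-Hasselbalch relation; a sketch for a monoprotic acid (the methanol-induced shifts in pH and pKa, which the paper's equations estimate, are not modeled here):

```python
def ionised_fraction_acid(pH, pKa):
    """Degree of ionisation of a monoprotic acid from Henderson-Hasselbalch:
    [A-] / ([HA] + [A-]) = 1 / (1 + 10**(pKa - pH))."""
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

# Acetic acid (aqueous pKa ~4.76). In methanol-water both the effective pH
# and pKa shift, which is exactly what the paper's equations account for.
print(round(ionised_fraction_acid(4.76, 4.76), 2))  # 0.5 at pH = pKa
print(round(ionised_fraction_acid(6.76, 4.76), 3))  # ~0.99 two units above pKa
```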
Metrix Matrix: A Cloud-Based System for Tracking Non-Relative Value Unit Value-Added Work Metrics.
Kovacs, Mark D; Sheafor, Douglas H; Thacker, Paul G; Hardie, Andrew D; Costello, Philip
2018-03-01
In the era of value-based medicine, it will become increasingly important for radiologists to provide metrics that demonstrate their value beyond clinical productivity. In this article the authors describe their institution's development of an easy-to-use system for tracking value-added but non-relative value unit (RVU)-based activities. Metrix Matrix is an efficient cloud-based system for tracking value-added work. A password-protected home page contains links to web-based forms created using Google Forms, with collected data populating Google Sheets spreadsheets. Value-added work metrics selected for tracking included interdisciplinary conferences, hospital committee meetings, consulting on nonbilled outside studies, and practice-based quality improvement. Over a period of 4 months, value-added work data were collected for all clinical attending faculty members in a university-based radiology department (n = 39). Time required for data entry was analyzed for 2 faculty members over the same time period. Thirty-nine faculty members (equivalent to 36.4 full-time equivalents) reported a total of 1,223.5 hours of value-added work time (VAWT). A formula was used to calculate "value-added RVUs" (vRVUs) from VAWT. VAWT amounted to 5,793.6 vRVUs or 6.0% of total work performed (vRVUs plus work RVUs [wRVUs]). Were vRVUs considered equivalent to wRVUs for staffing purposes, this would require an additional 2.3 full-time equivalents, on the basis of average wRVU calculations. Mean data entry time was 56.1 seconds per day per faculty member. As health care reimbursement evolves with an emphasis on value-based medicine, it is imperative that radiologists demonstrate the value they add to patient care beyond wRVUs. This free and easy-to-use cloud-based system allows the efficient quantification of value-added work activities. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soliman, A; Hashemi, M; Safigholi, H
Purpose: To explore the feasibility of extracting relative density from quantitative MRI measurements, and to estimate a correlation between the extracted measures and CT Hounsfield units. Methods: MRI can separate water and fat signals, producing a separate image for each component. By performing appropriate corrections on the separated images, quantitative measurements of water and fat mass density can be estimated. This work tests this hypothesis at 1.5T. Peanut oil was used as the fat representative and agar as the water representative. Gadolinium Chloride III and Sodium Chloride were added to the agar solution to adjust the relaxation times and the medium conductivity, respectively. Peanut oil was added to the agar solution in different percentages: 0%, 3%, 5%, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90% and 100%. The phantom was scanned on a 1.5T GE Optima 450W with the body coil using multigradient echo sequences. Water/fat separation was performed while correcting for main field (B0) inhomogeneity and T2* relaxation time; B1+ inhomogeneities were ignored. The phantom was subsequently scanned on a Philips Brilliance CT Big Bore. MR-corrected fat signals from all vials were normalized to the 100% fat signal. CT Hounsfield values were then compared to those obtained from the normalized MR-corrected fat values as well as to the prepared phantom fractions for validation. Results: Good agreement was found between CT HU and the MR-extracted fat values (R² = 0.98). CT HU also showed excellent agreement with the prepared fat fractions (R² = 0.99). Vials with 70%, 80%, and 90% fat showed inhomogeneous distributions; their results were nevertheless included for completeness. Conclusion: Quantitative MRI water/fat imaging can potentially be used to extract relative tissue density. Further in-vivo validation is required.
NASA Astrophysics Data System (ADS)
He, H.; Li, X.; Cheng, X.
2016-12-01
Sea animals are "bio-indicators" of climate change in the Antarctic. The abundant nutrient components in their excreta, such as carbon (C) and nitrogen (N), promote emissions of greenhouse gases (GHGs) including methane (CH4) and nitrous oxide (N2O). Adélie penguins are important sea animals, and their colonies therefore become potential "hotspots" of GHG emissions. Some field observations have examined CH4 and N2O emissions from penguin excreta on the Antarctic Peninsula. However, because penguin population data were lacking, total GHG emissions have not been estimated at the regional scale. This study aimed to extract penguin information from aerial photographs taken in 1983 and 2012 using an object-oriented method in Victoria Land, Antarctica, and then to estimate the Adélie penguin populations on Inexpressible Island combined with shadow analysis. A GHG model was also developed to estimate CH4 and N2O emissions from Adélie penguins based on the CH4 and N2O fluxes of penguin guano, the number of penguins, and the fresh weight of penguin guano. The results indicated that the object-oriented method was effective for extracting penguin information from high-resolution images, and that there were 17120 Adélie penguins in 1983 and 21183 in 2012. The increase in penguin populations from 1983 to 2012 might be explained by changes in the physical and biological environment, such as rising temperatures and reduced numbers of Antarctic toothfish. The total emissions from penguins on Inexpressible Island during the breeding season were 246 kg CH4 and 2.67 kg N2O in 1983, and 304 kg CH4 and 3.31 kg N2O in 2012. Our study provides a reference for the estimation of the GHG budget in the Antarctic.
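If per-penguin emission is taken as roughly constant between the two survey years, the 2012 figure follows from scaling the 1983 emission by the population ratio; this reproduces the reported 304 kg CH4 to within rounding:

```python
pop_1983, pop_2012 = 17120, 21183  # Adélie penguin counts from the photographs
ch4_1983 = 246.0                   # kg CH4 per breeding season, 1983

# Assuming a constant per-penguin flux, emissions scale with population.
ch4_2012 = ch4_1983 * pop_2012 / pop_1983
print(round(ch4_2012))  # 304
```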
Secure detection in quantum key distribution by real-time calibration of receiver
NASA Astrophysics Data System (ADS)
Marøy, Øystein; Makarov, Vadim; Skaar, Johannes
2017-12-01
The single-photon detection efficiency of the detector unit is crucial for the security of common quantum key distribution protocols like Bennett-Brassard 1984 (BB84). A low value for the efficiency indicates a possible eavesdropping attack that exploits the photon receiver's imperfections. We present a method for estimating the detection efficiency, and calculate the corresponding secure key generation rate. The estimation is done by testing gated detectors using a randomly activated photon source inside the receiver unit. This estimate gives a secure rate for any detector with non-unity single-photon detection efficiency, whether inherent or due to blinding. By adding extra optical components to the receiver, we make sure that the key is extracted from photon states for which our estimate is valid. The result is a quantum key distribution scheme that is secure against any attack that exploits detector imperfections.
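The core of the real-time calibration is a simple frequentist estimate: the fraction of test pulses from the internal source that produce detector clicks. The counts below are hypothetical, and a real implementation would also propagate the statistical uncertainty into the secure-rate bound:

```python
def efficiency_estimate(clicks, pulses):
    """Point estimate of single-photon detection efficiency from a calibration
    run: `pulses` randomly activated test pulses fired at the gated detector."""
    return clicks / pulses

# Hypothetical calibration run. A suspiciously low eta relative to the
# detector's specification hints at blinding or another detector-side attack.
eta = efficiency_estimate(1870, 10000)
print(eta)  # 0.187
```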
Gutierrez-Villalobos, Jose M.; Rodriguez-Resendiz, Juvenal; Rivas-Araiza, Edgar A.; Martínez-Hernández, Moisés A.
2015-01-01
Three-phase induction motor drives require high accuracy in high-performance industrial processes. Field oriented control, one of the most widely employed control schemes for induction motors, bases its function on the estimation of the motor's electrical parameters. Errors in these parameters make an electrical machine driver work improperly, since the parameter values change at low speeds, with temperature, and especially with load and duty changes. The focus of this paper is the real-time, on-line estimation of electrical parameters, with a CMAC-ADALINE block added to the standard FOC scheme to improve the IM driver performance and extend the lifetime of the driver and the induction motor. Two kinds of neural network structures are used: one to estimate the rotor speed and the other to estimate the rotor resistance of an induction motor. PMID:26131677
Gutierrez-Villalobos, Jose M; Rodriguez-Resendiz, Juvenal; Rivas-Araiza, Edgar A; Martínez-Hernández, Moisés A
2015-06-29
Three-phase induction motor drives require high accuracy in high-performance industrial processes. Field oriented control, one of the most widely employed control schemes for induction motors, bases its function on the estimation of the motor's electrical parameters. Errors in these parameters make an electrical machine driver work improperly, since the parameter values change at low speeds, with temperature, and especially with load and duty changes. The focus of this paper is the real-time, on-line estimation of electrical parameters, with a CMAC-ADALINE block added to the standard FOC scheme to improve the IM driver performance and extend the lifetime of the driver and the induction motor. Two kinds of neural network structures are used: one to estimate the rotor speed and the other to estimate the rotor resistance of an induction motor.
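An ADALINE is a single linear neuron trained with the least-mean-squares (LMS) rule. A minimal sketch estimating a resistance-like parameter from noisy current/voltage pairs — the data are synthetic, and the paper's actual rotor-resistance estimator operates on motor-model signals rather than this toy Ohm's-law example:

```python
import random

def adaline_fit(xs, ys, lr=0.01, epochs=200):
    """One-weight ADALINE trained by the LMS rule: after each sample the
    weight moves against the prediction error, converging to the
    least-squares estimate of the slope y = w*x."""
    w = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            err = y - w * x
            w += lr * err * x  # LMS update
    return w

# Synthetic "measurements": v = R * i with R = 2.5 ohm plus small noise.
random.seed(0)
i = [random.uniform(0.5, 2.0) for _ in range(50)]
v = [2.5 * x + random.gauss(0, 0.01) for x in i]
print(round(adaline_fit(i, v), 2))  # ~2.5
```

Because each update is O(1), the same loop can run on-line inside a drive controller, tracking a slowly drifting parameter as the paper describes.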
The influence of preburial insect access on the decomposition rate.
Bachmann, Jutta; Simmons, Tal
2010-07-01
This study compared total body score (TBS) in buried remains (35 cm depth) with and without insect access prior to burial. Sixty rabbit carcasses were exhumed at 50 accumulated degree day (ADD) intervals. Weight loss, TBS, intra-abdominal decomposition, carcass/soil interface temperature, and below-carcass soil pH were recorded and analyzed. Results showed significant differences (p < 0.001) in decomposition rates between carcasses with and without insect access prior to burial. An approximately 30% enhanced decomposition rate with insects was observed. TBS was the most valid tool in postmortem interval (PMI) estimation. All other variables showed only weak relationships to decomposition stages, adding little value to PMI estimation. Although progress in estimating the PMI for surface remains has been made, no previous studies have accomplished this for buried remains. This study builds a framework to which further comparable studies can contribute, to produce predictive models for PMI estimation in buried human remains.
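Accumulated degree days — the exhumation clock used in the study's 50-ADD intervals — are just summed daily mean temperatures above a base threshold; a minimal sketch:

```python
def accumulated_degree_days(daily_mean_temps, base=0.0):
    """Accumulated degree days (ADD): sum of daily mean temperatures above a
    base threshold, a thermal clock that normalizes decomposition time
    across different temperature regimes."""
    return sum(max(t - base, 0.0) for t in daily_mean_temps)

# Five days averaging 10 C accumulate 50 ADD -- one exhumation interval.
print(accumulated_degree_days([10, 10, 10, 10, 10]))  # 50.0
```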
On the value of the phenotypes in the genomic era.
Gonzalez-Recio, O; Coffey, M P; Pryce, J E
2014-12-01
Genetic improvement programs around the world rely on the collection of accurate phenotypic data. These phenotypes have an inherent value that can be estimated as the contribution of an additional record to genetic gain. Here, the contribution of phenotypes to genetic gain was calculated using traditional progeny testing (PT) and 2 genomic selection (GS) strategies that, for simplicity, included either males or females in the reference population. A procedure to estimate the theoretical economic contribution of a phenotype to a breeding program is described for both GS and PT breeding programs through the increment in genetic gain per unit of increase in estimated breeding value reliability obtained when an additional phenotypic record is added. The main factors affecting the value of a phenotype were the economic value of the trait, the number of phenotypic records already available for the trait, and its heritability. Furthermore, the value of a phenotype was affected by several other factors, including the cost of establishing the breeding program and the cost of phenotyping and genotyping. The cost of achieving a reliability of 0.60 was assessed for different reference populations for GS. Genomic reference populations of more sires with small progeny group sizes (e.g., 20 equivalent daughters) had a lower cost than those reference populations with either large progeny group sizes for fewer genotyped sires, or female reference populations, unless the heritability was large and the cost of phenotyping exceeded a few hundred dollars; then, female reference populations were preferable from an economic perspective. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Antolín-Rodríguez, Juan M; Sánchez-Báscones, Mercedes; Martín-Ramos, Pablo; Bravo-Sánchez, Carmen T; Martín-Gil, Jesús
2016-06-01
Polychlorinated biphenyl (PCB) pollution related to the use of organic waste as fertilizers in agricultural soils is a cause of major concern. In the study presented herein, PCB concentration was studied through a field trial conducted in two agricultural soils in the province of Palencia (Spain) over a 4-year period, assessing the impact of irrigation and of different types of organic waste materials. The amounts of organic waste added to the soil were calculated according to the nitrogen needs of the crop, and the concentration of PCBs was determined before and after the application of the organic waste. The resulting persistence of the total PCB content in the agricultural soils, compared with the PCB concentration in the original soils, ranged from 27% to 90%, with the lowest value corresponding to irrigated soils treated with municipal solid waste compost (MSWC) and the highest value to non-irrigated soils treated with composted sewage sludge (CSS). An estimate of the PCB content in agricultural soils after the application of organic waste materials until year 2050 was obtained, resulting in a value below 5 ng·g⁻¹, considered a background value for soils in sites far away from potential pollution sources.
Determinants of tobacco use by students
Vargas, Lorena Silva; Lucchese, Roselma; da Silva, Andrécia Cósmem; Guimarães, Rafael Alves; Vera, Ivânia; de Castro, Paulo Alexandre
2017-01-01
ABSTRACT OBJECTIVE Estimate the prevalence and determinants of tobacco use by students. METHODS This cross-sectional study, carried out between 2013 and 2014, evaluates 701 public school students between 10 and 79 years of age living in a city in the countryside of the State of Goias, Midwest of Brazil. A structured questionnaire collected the data and the predictor variables were demographic data, family nucleus, religion, physical activity practice, family functionality and parental smoking. Two multivariable models were implemented, the first for occasional tobacco use and the second for regular use, acquiring the measure of prevalence ratio (PR) and their respective 95%CI. Variables with p < 0.10 were included in Poisson regression models with robust variance to obtain the adjusted PR (adPR) and 95%CI. The Wald Chi-Squared test examined the differences between proportions, and values with p < 0.05 were statistically significant. RESULTS The prevalence of occasional and regular tobacco use were 33.4% (95%CI 29.8–36.9) and 6.7% (95%CI 5.0–8.8), respectively. The factors associated with occasional tobacco consumption were age of 15 to 17 years (adPR = 1.98) and above 18 years (adPR = 3.87), male gender (adPR = 1.23), moderate family dysfunction (adPR = 1.30), high family dysfunction (adPR = 1.97) and parental smoking (adPR = 1.60). In regards to regular consumption of tobacco, age above 18 years (adPR = 4.63), lack of religion (adPR = 2.08), high family dysfunction (adPR = 2.35) and parental smoking (adPR = 2.89) remained associated. CONCLUSIONS Students exhibit a high prevalence of occasional and regular tobacco use. This consumption relates to sociodemographic variables and family dysfunction. PMID:28492760
Pujol, Laure; Johnson, Nicholas Brian; Magras, Catherine; Albert, Isabelle; Membré, Jeanne-Marie
2015-10-15
In a previous study, a quantitative microbial exposure assessment (QMEA) model applied to an aseptic-UHT food process was developed [Pujol, L., Albert, I., Magras, C., Johnson, N. B., Membré, J. M. Probabilistic exposure assessment model to estimate aseptic UHT product failure rate. 2015 International Journal of Food Microbiology. 192, 124-141]. It quantified Sterility Failure Rate (SFR) associated with Bacillus cereus and Geobacillus stearothermophilus per process module (nine modules in total from raw material reception to end-product storage). Previously, the probabilistic model inputs were set by experts (using knowledge and in-house data). However, only the variability dimension was taken into account. The model was then improved using expert elicitation knowledge in two ways. First, the model was refined by adding the uncertainty dimension to the probabilistic inputs, enabling to set a second order Monte Carlo analysis. The eight following inputs, and their impact on SFR, are presented in detail in this present study: D-value for each bacteria of interest (B. cereus and G. stearothermophilus) associated with the inactivation model for the UHT treatment step, i.e., two inputs; log reduction (decimal reduction) number associated with the inactivation model for the packaging sterilization step for each bacterium and each part of the packaging (product container and sealing component), i.e., four inputs; and bacterial spore air load of the aseptic tank and the filler cabinet rooms, i.e., two inputs. Second, the model was improved by leveraging expert knowledge to develop further the existing model. The proportion of bacteria in the product which settled on surface of pipes (between the UHT treatment and the aseptic tank on one hand, and between the aseptic tank and the filler cabinet on the other hand) leading to a possible biofilm formation for each bacterium, was better characterized. 
It was modeled as a function of the hygienic design level of the aseptic-UHT line: the experts provided the model structure and most of the model parameter values. The mean SFR was estimated at 10×10⁻⁸ (95% Confidence Interval = [0×10⁻⁸; 350×10⁻⁸]) and 570×10⁻⁸ (95% CI = [380×10⁻⁸; 820×10⁻⁸]) for B. cereus and G. stearothermophilus, respectively. These estimations were more accurate (since the confidence interval was provided) than those given by the model with only variability (for which the estimates were 15×10⁻⁸ and 580×10⁻⁸ for B. cereus and G. stearothermophilus, respectively). The updated model outputs were also compared with those obtained when inputs were described by a generic distribution, without specific information related to the case study. Results showed that using a generic distribution can lead to unrealistic estimations (e.g., 3,181,000 product units contaminated by G. stearothermophilus among 10⁸ product units produced) and emphasized the added value of eliciting information from experts in the relevant specialist field. Copyright © 2015 Elsevier B.V. All rights reserved.
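The uncertainty/variability separation described above is a second-order (nested) Monte Carlo: an outer loop samples the uncertain parameter values, an inner loop samples unit-to-unit variability given those values. The sketch below is schematic — every distribution and parameter value is illustrative, not the QMEA model's:

```python
import random

def second_order_mc(n_outer=200, n_inner=500, seed=42):
    """Nested Monte Carlo: the outer loop expresses *uncertainty* (which
    parameter value is true), the inner loop *variability* (unit-to-unit
    spread). Returns the median sterility failure rate over outer draws."""
    random.seed(seed)
    outer_rates = []
    for _ in range(n_outer):
        # Uncertainty: the true log-reduction of the UHT step is not known.
        log_reduction = random.gauss(7.0, 0.5)
        failures = 0
        for _ in range(n_inner):
            # Variability: spore load differs from product unit to unit.
            load = 10 ** random.gauss(2.0, 0.5)        # spores per unit
            survivors = load * 10 ** (-log_reduction)  # expected survivors
            if random.random() < min(survivors, 1.0):  # a survivor => failure
                failures += 1
        outer_rates.append(failures / n_inner)
    outer_rates.sort()
    return outer_rates[len(outer_rates) // 2]

print(second_order_mc())
```

Reporting quantiles of the outer-loop results is what yields the confidence intervals on SFR quoted in the abstract, rather than a single point estimate.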
ERIC Educational Resources Information Center
Karl, Andrew T.; Yang, Yan; Lohr, Sharon L.
2013-01-01
Value-added models have been widely used to assess the contributions of individual teachers and schools to students' academic growth based on longitudinal student achievement outcomes. There is concern, however, that ignoring the presence of missing values, which are common in longitudinal studies, can bias teachers' value-added scores.…
Comparison of molecular breeding values based on within- and across-breed training in beef cattle.
Kachman, Stephen D; Spangler, Matthew L; Bennett, Gary L; Hanford, Kathryn J; Kuehn, Larry A; Snelling, Warren M; Thallman, R Mark; Saatchi, Mahdi; Garrick, Dorian J; Schnabel, Robert D; Taylor, Jeremy F; Pollak, E John
2013-08-16
Although the efficacy of genomic predictors based on within-breed training looks promising, it is necessary to develop and evaluate across-breed predictors for the technology to be fully applied in the beef industry. The efficacies of genomic predictors trained in one breed and utilized to predict genetic merit in differing breeds based on simulation studies have been reported, as have the efficacies of predictors trained using data from multiple breeds to predict the genetic merit of purebreds. However, comparable studies using beef cattle field data have not been reported. Molecular breeding values for weaning and yearling weight were derived and evaluated using a database containing BovineSNP50 genotypes for 7294 animals from 13 breeds in the training set and 2277 animals from seven breeds (Angus, Red Angus, Hereford, Charolais, Gelbvieh, Limousin, and Simmental) in the evaluation set. Six single-breed and four across-breed genomic predictors were trained using pooled data from purebred animals. Molecular breeding values were evaluated using field data, including genotypes for 2227 animals and phenotypic records of animals born in 2008 or later. Accuracies of molecular breeding values were estimated based on the genetic correlation between the molecular breeding value and trait phenotype. With one exception, the estimated genetic correlations of within-breed molecular breeding values with trait phenotype were greater than 0.28 when evaluated in the breed used for training. Most estimated genetic correlations for the across-breed trained molecular breeding values were moderate (> 0.30). When molecular breeding values were evaluated in breeds that were not in the training set, estimated genetic correlations clustered around zero. Even for closely related breeds, within- or across-breed trained molecular breeding values have limited prediction accuracy for breeds that were not in the training set. 
For breeds in the training set, across- and within-breed trained molecular breeding values had similar accuracies. The benefit of adding data from other breeds to a within-breed training population is the ability to produce molecular breeding values that are more robust across breeds and these can be utilized until enough training data has been accumulated to allow for a within-breed training set.
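The evaluation criterion above is a correlation between molecular breeding value and phenotype. A plain Pearson correlation is only a crude stand-in for the genetic correlation the paper actually estimates (which accounts for pedigree and residual structure), and the numbers below are made up:

```python
def pearson(x, y):
    """Sample Pearson correlation between predicted breeding values and
    phenotypes. Note: the paper estimates the *genetic* correlation; this
    phenotypic correlation is only an illustrative stand-in."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

mbv = [1.0, 2.0, 3.0, 4.0, 5.0]  # hypothetical molecular breeding values
wt  = [210, 250, 230, 280, 300]  # hypothetical weaning weights, kg
print(round(pearson(mbv, wt), 2))  # 0.91
```

A correlation clustering around zero for breeds absent from training, as reported above, means the predictor ranks those animals no better than chance.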
NASA Astrophysics Data System (ADS)
di Luca, Alejandro; de Elía, Ramón; Laprise, René
2012-03-01
Regional Climate Models (RCMs) constitute the most often used method to perform affordable high-resolution regional climate simulations. The key issue in the evaluation of nested regional models is to determine whether RCM simulations improve the representation of climatic statistics compared to the driving data, that is, whether RCMs add value. In this study we examine a necessary condition that some climate statistics derived from the precipitation field must satisfy in order that the RCM technique can generate some added value: we focus on whether the climate statistics of interest contain some fine spatial-scale variability that would be absent on a coarser grid. The presence and magnitude of fine-scale precipitation variance required to adequately describe a given climate statistics will then be used to quantify the potential added value (PAV) of RCMs. Our results show that the PAV of RCMs is much higher for short temporal scales (e.g., 3-hourly data) than for long temporal scales (16-day average data) due to the filtering resulting from the time-averaging process. PAV is higher in warm season compared to cold season due to the higher proportion of precipitation falling from small-scale weather systems in the warm season. In regions of complex topography, the orographic forcing induces an extra component of PAV, no matter the season or the temporal scale considered. The PAV is also estimated using high-resolution datasets based on observations allowing the evaluation of the sensitivity of changing resolution in the real climate system. The results show that RCMs tend to reproduce relatively well the PAV compared to observations although showing an overestimation of the PAV in warm season and mountainous regions.
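The necessary condition above — fine-scale variance that a coarse grid cannot represent — can be sketched as the variance remaining after block-averaging, expressed as a fraction of total variance. This is a 1-D illustrative implementation, not the paper's actual PAV diagnostic:

```python
import numpy as np

def potential_added_value(field, block):
    """Fraction of spatial variance at scales finer than a coarse grid:
    var(field - block-averaged field) / var(field). The field length must
    be divisible by `block`. A value near 0 means a coarse grid already
    captures the statistic; near 1 means most variance is fine-scale."""
    f = np.asarray(field, float)
    coarse = f.reshape(-1, block).mean(axis=1).repeat(block)
    fine = f - coarse
    return fine.var() / f.var()

# Hypothetical 1-D precipitation transect: a large-scale gradient plus
# station-scale spikes; the spikes are what a coarse grid cannot carry.
x = np.linspace(0, 1, 16)
precip = 5 * x + np.array([0, 2, 0, 0] * 4)
print(round(potential_added_value(precip, 4), 2))
```

Applying the same measure to 3-hourly versus 16-day-averaged fields would reproduce the paper's finding that time-averaging filters away the fine-scale (and hence the potential added value).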
Perendeci, Altinay; Arslan, Sever; Tanyolaç, Abdurrahman; Celebi, Serdar S
2009-10-01
A conceptual neural fuzzy model based on an adaptive-network-based fuzzy inference system (ANFIS) was proposed, using available on-line and off-line operational variables of a sugar factory anaerobic wastewater treatment plant operating under unsteady state, to estimate the effluent chemical oxygen demand (COD). As a new approach, the predictive power of the developed model was improved by adding a phase vector and the recent values of COD over the last 5-10 days, longer than the overall retention time of wastewater in the system. An input variable matrix including all parameters, a 10-day history of effluent COD, and a two-valued phase vector had the most predictive power; a 7-day history with the two-valued phase vector in a matrix comprising only on-line variables yielded fairly good estimates. The developed ANFIS model with the phase vector and history extension was able to adequately represent the behavior of the treatment system.
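The history extension amounts to augmenting each input row with lagged COD values plus a phase indicator before the rows are fed to the estimator; a sketch with hypothetical daily values:

```python
def history_features(series, lags, phase):
    """Build one input row per day t from the previous `lags` effluent-COD
    values plus a two-valued phase indicator for day t, mirroring the
    paper's history-extended input matrix (a sketch, not the ANFIS model)."""
    rows = []
    for t in range(lags, len(series)):
        rows.append(series[t - lags:t] + [phase[t]])
    return rows

cod = [410, 395, 402, 430, 418, 407, 399]  # hypothetical daily effluent COD
ph  = [0, 0, 0, 1, 1, 1, 1]                # two-valued operating phase
for row in history_features(cod, 3, ph):
    print(row)
```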
Beyond Test Scores: Adding Value to Assessment
ERIC Educational Resources Information Center
Rothman, Robert
2010-01-01
At a time when teacher quality has emerged as a key factor in student learning, a statistical technique that determines the "value added" that teachers bring to student achievement is getting new scrutiny. Value-added measures compare students' growth in achievement to their expected growth, based on prior achievement and demographic…
Methodological Concerns about the Education Value-Added Assessment System
ERIC Educational Resources Information Center
Amrein-Beardsley, Audrey
2008-01-01
Value-added models help to evaluate the knowledge that school districts, schools, and teachers add to student learning as students progress through school. In this article, the well-known Education Value-Added Assessment System (EVAAS) is examined. The author presents a practical investigation of the methodological issues associated with the…
Gagliano, M C; Braguglia, C M; Gallipoli, A; Gianico, A; Rossetti, S
2015-05-01
Anaerobic digestion (AD) is one of the few sustainable technologies that both produce energy and treat waste streams. Driven by a complex and diverse community of microbes, AD may be affected by different factors, many of which also influence the composition and activity of the microbial community. In this study, the biodiversity of microbial populations in innovative mesophilic/thermophilic temperature-phased AD of sludge was evaluated by means of fluorescence in situ hybridization (FISH). The increase of digestion temperature drastically affected the microbial composition and selected specialized biomass. Hydrogenotrophic Methanobacteriales and the protein-fermenting bacterium Coprothermobacter spp. were identified in the thermophilic anaerobic biomass. Shannon-Weaver diversity (H') and evenness (E) indices were calculated using FISH data. Species richness was lower under thermophilic conditions than in mesophilic samples, accompanied by a similar trend in evenness, indicating that thermophilic communities may therefore be more susceptible to sudden changes and less able to adapt to operational variations.
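The Shannon-Weaver index and evenness used above have standard forms, H' = −Σ p_i ln p_i and E = H'/ln S for S observed species; a minimal sketch on hypothetical probe counts:

```python
import math

def shannon_evenness(abundances):
    """Shannon-Weaver diversity H' = -sum(p_i ln p_i) and evenness
    E = H'/ln(S), computed from raw abundances (e.g. FISH probe counts).
    Species with zero counts are excluded from S."""
    total = sum(abundances)
    ps = [a / total for a in abundances if a > 0]
    h = -sum(p * math.log(p) for p in ps)
    e = h / math.log(len(ps)) if len(ps) > 1 else 0.0
    return h, e

# Hypothetical counts: an even mesophilic community vs a thermophilic one
# dominated by a Coprothermobacter-like fermenter.
print(shannon_evenness([25, 25, 25, 25]))  # H' = ln 4 ~ 1.386, E = 1.0
print(shannon_evenness([70, 20, 5, 5]))    # lower H' and E: skewed community
```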
[A medico-genetic description of inhabitants of two regions of the Krasnodar Krai].
Mamedova, R A; Kadoshnikova, M Iu; Galkina, V A; Klebnikova, O B; Mikhaĭlova, L K; Rudenskaia, G E; Ginter, E K
1999-01-01
The spectrum and prevalence rate of hereditary pathology in Kanevskii and Bryukhovetskii raions (districts) of Krasnodar krai (territory) were analyzed. The total size of the studied population was 145,937. The prevalence rate of monogenic hereditary pathology was estimated. This value was 1.08 +/- 0.08, 0.72 +/- 0.07, and 0.20 +/- 0.06 per 1000 people for autosomal dominant (AD), autosomal recessive (AR), and X-linked (XL) recessive diseases, respectively. Forty-two AD (158 affected persons in 82 families), 32 AR (105 affected persons in 82 families), and 6 XL disease entities (13 affected persons in 8 families) were found. A slight genetic subdivision was found in the populations of Kanevskii and Bryukhovetskii raions. However, it was not found to affect the prevalence of hereditary pathology.
Research on BIM-based building information value chain reengineering
NASA Astrophysics Data System (ADS)
Hui, Zhao; Weishuang, Xie
2017-04-01
Value, and added value, accrue to building engineering information through a chain of flows, that is, the building information value chain. Based on a deconstruction of the information chain in the traditional construction-information mode, this paper clarifies the value characteristics and requirements of each stage of a construction project. To achieve added value for building information, the paper deconstructs the traditional building information value chain, reengineers the information value chain model on the basis of BIM theory and techniques, builds a value-added management model, and analyses the value of that model.
ERIC Educational Resources Information Center
Sinharay, Sandip
2010-01-01
Recently, there has been an increasing level of interest in subscores for their potential diagnostic value. Haberman (2008) suggested a method based on classical test theory to determine whether subscores have added value over total scores. This paper provides a literature review and reports when subscores were found to have added value for…
Menday, Hannah; Neal, Bruce; Wu, Jason H Y; Crino, Michelle; Baines, Surinder; Petersen, Kristina S
2017-12-01
The Australian Government has introduced a voluntary front-of-package labeling system that includes total sugar in the calculation. Our aim was to determine the effect of substituting added sugars for total sugars when calculating Health Star Ratings (HSR) and identify whether use of added sugars improves the capacity to distinguish between core and discretionary food products. This study included packaged food and beverage products available in Australian supermarkets (n=3,610). The product categories included in the analyses were breakfast cereals (n=513), fruit (n=571), milk (n=309), non-alcoholic beverages (n=1,040), vegetables (n=787), and yogurt (n=390). Added sugar values were estimated for each product using a validated method. HSRs were then estimated for every product according to the established method using total sugar, and then by substituting added sugar for total sugar. The scoring system was not modified when added sugar was used in place of total sugar in the HSR calculation. Products were classified as core or discretionary based on the Australian Dietary Guidelines. To investigate whether use of added sugar in the HSR algorithm improved the distinction between core and discretionary products as defined by the Australian Dietary Guidelines, the proportion of core products that received an HSR of ≥3.5 stars and the proportion of discretionary products that received an HSR of <3.5 stars, for algorithms based upon total vs added sugars were determined. There were 2,263 core and 1,347 discretionary foods; 1,684 of 3,610 (47%) products contained added sugar (median 8.4 g/100 g, interquartile range=5.0 to 12.2 g). When the HSR was calculated with added sugar instead of total sugar, an additional 166 (7.3%) core products received an HSR of ≥3.5 stars and 103 (7.6%) discretionary products received a rating of ≥3.5 stars. 
The odds of correctly identifying a product as core vs discretionary were increased by 61% (odds ratio 1.61, 95% CI 1.26 to 2.06; P<0.001) when the algorithm was based on added compared to total sugars. In the six product categories examined, substitution of added sugars for total sugars better aligned the HSR with the Australian Dietary Guidelines. Future work is required to investigate the impact in other product categories. Copyright © 2017 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
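The 61% increase in classification odds reported above is an odds ratio. A sketch of a standard Wald odds-ratio and confidence-interval calculation on a 2x2 classification table; the counts below are hypothetical, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Wald odds ratio and confidence interval for a 2x2 table:
    a/b = correct/incorrect classifications under one scoring rule,
    c/d = correct/incorrect classifications under the other."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration only
print(odds_ratio_ci(1850, 413, 1684, 579))  # odds ratio ~1.5 for these made-up counts
```

If the interval excludes 1, the difference in classification odds between the two algorithms is statistically significant at the chosen level.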
Coon, Keith D; Valla, Jon; Szelinger, Szabolics; Schneider, Lonnie E; Niedzielko, Tracy L; Brown, Kevin M; Pearson, John V; Halperin, Rebecca; Dunckley, Travis; Papassotiropoulos, Andreas; Caselli, Richard J; Reiman, Eric M; Stephan, Dietrich A
2006-08-01
The role of mitochondrial dysfunction in the pathogenesis of Alzheimer's disease (AD) has been well documented. Though evidence for the role of mitochondria in AD seems incontrovertible, the impact of mitochondrial DNA (mtDNA) mutations in AD etiology remains controversial. Though mutations in mitochondrially encoded genes have repeatedly been implicated in the pathogenesis of AD, many of these studies have been plagued by lack of replication as well as potential contamination of nuclear-encoded mitochondrial pseudogenes. To assess the role of mtDNA mutations in the pathogenesis of AD, while avoiding the pitfalls of nuclear-encoded mitochondrial pseudogenes encountered in previous investigations and showcasing the benefits of a novel resequencing technology, we sequenced the entire coding region (15,452 bp) of mtDNA from 19 extremely well-characterized AD patients and 18 age-matched, unaffected controls utilizing a new, reliable, high-throughput array-based resequencing technique, the Human MitoChip. High-throughput, array-based DNA resequencing of the entire mtDNA coding region from platelets of 37 subjects revealed the presence of 208 loci displaying a total of 917 sequence variants. There were no statistically significant differences in overall mutational burden between cases and controls, however, 265 independent sites of statistically significant change between cases and controls were identified. Changed sites were found in genes associated with complexes I (30.2%), III (3.0%), IV (33.2%), and V (9.1%) as well as tRNA (10.6%) and rRNA (14.0%). Despite their statistical significance, the subtle nature of the observed changes makes it difficult to determine whether they represent true functional variants involved in AD etiology or merely naturally occurring dissimilarity. 
Regardless, this study demonstrates the tremendous value of this novel mtDNA resequencing platform, which avoids the pitfalls of erroneously amplifying nuclear-encoded mtDNA pseudogenes, and our proposed analysis paradigm, which utilizes the availability of raw signal intensity values for each of the four potential alleles to facilitate quantitative estimates of mtDNA heteroplasmy. This information provides a potential new target for burgeoning diagnostics and therapeutics that could truly assist those suffering from this devastating disorder.
NASA Astrophysics Data System (ADS)
Fanjat, G.; Camps, P.; Alva Valdivia, L. M.; Sougrati, M. T.; Cuevas-Garcia, M.; Perrin, M.
2013-02-01
We present archeointensity data carried out on pieces of incense burners from the ancient Maya city of Palenque, Chiapas, Mexico, covering much of the Mesoamerican Classic period, from A.D. 400 to A.D. 850. We worked on pieces from 24 incense burners encompassing the five Classic ceramic phases of Palenque: Motiepa (A.D. 400-500), Cascadas (A.D. 500-600), Otulum (A.D. 600-700), Murcielagos (A.D. 700-770), and Balunté (A.D. 770-850). All the samples come from highly elaborate flanged pedestals of incense burners that are unambiguously assigned to a ceramic phase by means of iconographic, morphological, and stylistic analyses. Archeointensity measurements were performed with the Thellier-Thellier method on samples pre-selected on the basis of their magnetic properties. We obtained archeointensities of very good technical quality from 19 of 24 pieces, allowing the determination of a precise mean value for each ceramic phase, between 29.1±0.9 μT and 32.5±1.2 μT. The firing temperatures of the ceramics were estimated with Mössbauer spectroscopy at between 700 °C and 1000 °C. These values ensure that a full thermoremanent magnetization was acquired during the original heating. Our results suggest a relative stability of the field intensity during more than 400 years in this area. The abundance of archeological material in Mesoamerica contrasts with the small amount of archeomagnetic data available, which are, in addition, of uneven quality. Thus, it is not possible to establish a trend of intensity variations in Mesoamerica, even using the global databases and secular variation predictions from global models. In this context, our high-quality data represent a strong constraint for the Mesoamerican secular variation curve during the first millennium A.D.
The corresponding Virtual Axial Dipole Moments (VADM) are substantially smaller than the ones predicted by the last global geomagnetic models CALS3k.4, suggesting the need for additional data to develop a regional model and a reference curve for Mesoamerica.
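A VADM re-expresses a local paleointensity as the moment of an equivalent geocentric axial dipole. A sketch of the standard conversion; the ~17.5° N site latitude used for Palenque here is an assumption for illustration:

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A
R_EARTH = 6.371e6          # mean Earth radius, m

def vadm(intensity_T, latitude_deg):
    """Virtual Axial Dipole Moment (A*m^2) for a field intensity measured at a
    given geographic latitude, assuming a geocentric axial dipole field:
    F = (mu0 * m / (4 * pi * R^3)) * sqrt(1 + 3 * sin(lat)^2), solved for m."""
    lam = math.radians(latitude_deg)
    return (4 * math.pi * R_EARTH**3 / MU0) * intensity_T / math.sqrt(
        1 + 3 * math.sin(lam)**2)

# ~30 uT at roughly Palenque's latitude (assumed 17.5 deg N)
print(f"{vadm(30e-6, 17.5):.2e}")  # on the order of 7e22 A*m^2
```

Comparing such site VADMs against model predictions (e.g., CALS3k.4 evaluated at the same location) is the kind of check the abstract describes.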
Christine Todoroki; Eini Lowell
2006-01-01
The silvicultural practice of pruning juvenile stems is a value-adding operation due to the formation of knot-free wood after the pruned branch stubs have healed. However it is not until after the log has been processed that the added value is realized. The motivation for this paper stems from wanting to extract as much of that added value as possible while minimizing...
7 CFR 1980.391 - Equity sharing.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., principal reduction, and value added by any capital improvements. (i) Market value. Market value of the... amount of the insurance payment is generally a good indication of value; however, tax records or... exceed market value contribution as indicated by a sales comparison analysis. Generally, the value added...
7 CFR 1980.391 - Equity sharing.
Code of Federal Regulations, 2012 CFR
2012-01-01
..., principal reduction, and value added by any capital improvements. (i) Market value. Market value of the... amount of the insurance payment is generally a good indication of value; however, tax records or... exceed market value contribution as indicated by a sales comparison analysis. Generally, the value added...
7 CFR 1980.391 - Equity sharing.
Code of Federal Regulations, 2013 CFR
2013-01-01
..., principal reduction, and value added by any capital improvements. (i) Market value. Market value of the... amount of the insurance payment is generally a good indication of value; however, tax records or... exceed market value contribution as indicated by a sales comparison analysis. Generally, the value added...
7 CFR 1980.391 - Equity sharing.
Code of Federal Regulations, 2014 CFR
2014-01-01
..., principal reduction, and value added by any capital improvements. (i) Market value. Market value of the... amount of the insurance payment is generally a good indication of value; however, tax records or... exceed market value contribution as indicated by a sales comparison analysis. Generally, the value added...
7 CFR 1980.391 - Equity sharing.
Code of Federal Regulations, 2011 CFR
2011-01-01
..., principal reduction, and value added by any capital improvements. (i) Market value. Market value of the... amount of the insurance payment is generally a good indication of value; however, tax records or... exceed market value contribution as indicated by a sales comparison analysis. Generally, the value added...
7 CFR 766.202 - Determining the shared appreciation due.
Code of Federal Regulations, 2011 CFR
2011-01-01
... resulting from capital improvements added during the term of the SAA (contributory value). The market value... added to the real estate security since the execution of the SAA. (2) The appraisal must specifically... contributory value of capital improvements added during the term of the SAA will be deducted from the market...
7 CFR 766.202 - Determining the shared appreciation due.
Code of Federal Regulations, 2010 CFR
2010-01-01
... resulting from capital improvements added during the term of the SAA (contributory value). The market value... added to the real estate security since the execution of the SAA. (2) The appraisal must specifically... contributory value of capital improvements added during the term of the SAA will be deducted from the market...
7 CFR 766.202 - Determining the shared appreciation due.
Code of Federal Regulations, 2013 CFR
2013-01-01
... resulting from capital improvements added during the term of the SAA (contributory value). The market value... added to the real estate security since the execution of the SAA. (2) The appraisal must specifically... contributory value of capital improvements added during the term of the SAA will be deducted from the market...
7 CFR 766.202 - Determining the shared appreciation due.
Code of Federal Regulations, 2014 CFR
2014-01-01
... resulting from capital improvements added during the term of the SAA (contributory value). The market value... added to the real estate security since the execution of the SAA. (2) The appraisal must specifically... contributory value of capital improvements added during the term of the SAA will be deducted from the market...
English Value-Added Measures: Examining the Limitations of School Performance Measurement
ERIC Educational Resources Information Center
Perry, Thomas
2016-01-01
Value-added "Progress" measures are to be introduced for all English schools in 2016 as "headline" measures of school performance. This move comes despite research highlighting high levels of instability in value-added measures and concerns about the omission of contextual variables in the planned measure. This article studies…
Value-Added Results for Public Virtual Schools in California
ERIC Educational Resources Information Center
Ford, Richard; Rice, Kerry
2015-01-01
The objective of this paper is to present value-added calculation methods that were applied to determine whether online schools performed at the same or different levels relative to standardized testing. This study includes information on how we approached our value added model development and the results for 32 online public high schools in…
Western hardwoods : value-added research and demonstration program
D. W. Green; W. W. Von Segen; S. A. Willits
1995-01-01
Research results from the value-added research and demonstration program for western hardwoods are summarized in this report. The intent of the program was to enhance the economy of the Pacific Northwest by helping local communities and forest industries produce wood products more efficiently. Emphasis was given to value-added products and barriers to increased...
Stability of Teacher Value-Added Rankings across Measurement Model and Scaling Conditions
ERIC Educational Resources Information Center
Hawley, Leslie R.; Bovaird, James A.; Wu, ChaoRong
2017-01-01
Value-added assessment methods have been criticized by researchers and policy makers for a number of reasons. One issue includes the sensitivity of model results across different outcome measures. This study examined the utility of incorporating multivariate latent variable approaches within a traditional value-added framework. We evaluated the…
Evaluating Special Educator Effectiveness: Addressing Issues Inherent to Value-Added Modeling
ERIC Educational Resources Information Center
Steinbrecher, Trisha D.; Selig, James P.; Cosbey, Joanna; Thorstensen, Beata I.
2014-01-01
States are increasingly using value-added approaches to evaluate teacher effectiveness. There is much debate regarding whether these methods should be employed and, if employed, what role such methods should play in comprehensive teacher evaluation systems. In this article, we consider the use of value-added modeling (VAM) to evaluate special…
The Reliability, Impact, and Cost-Effectiveness of Value-Added Teacher Assessment Methods
ERIC Educational Resources Information Center
Yeh, Stuart S.
2012-01-01
This article reviews evidence regarding the intertemporal reliability of teacher rankings based on value-added methods. Value-added methods exhibit low reliability, yet are broadly supported by prominent educational researchers and are increasingly being used to evaluate and fire teachers. The article then presents a cost-effectiveness analysis…
The Potential Consequence of Using Value-Added Models to Evaluate Teachers
ERIC Educational Resources Information Center
Shen, Zuchao; Simon, Carlee Escue; Kelcey, Ben
2016-01-01
Value-added models try to separate the contribution of individual teachers or schools to students' learning growth measured by standardized test scores. There is a policy trend to use value-added modeling to evaluate teachers because of its face validity and superficial objectiveness. This article investigates the potential long term consequences…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lindoy, Lachlan P.; Kolmann, Stephen J.; D’Arcy, Jordan H.
Finite temperature quantum and anharmonic effects are studied in H₂-Li⁺-benzene, a model hydrogen storage material, using path integral Monte Carlo (PIMC) simulations on an interpolated potential energy surface refined over the eight intermolecular degrees of freedom based upon M05-2X/6-311+G(2df,p) density functional theory calculations. Rigid-body PIMC simulations are performed at temperatures ranging from 77 K to 150 K, producing both quantum and classical probability density histograms describing the adsorbed H₂. Quantum effects broaden the histograms with respect to their classical analogues and increase the expectation values of the radial and angular polar coordinates describing the location of the center-of-mass of the H₂ molecule. The rigid-body PIMC simulations also provide estimates of the change in internal energy, ΔU_ads, and enthalpy, ΔH_ads, for H₂ adsorption onto Li⁺-benzene as a function of temperature. These estimates indicate that quantum effects are important even at room temperature and classical results should be interpreted with caution. Our results also show that anharmonicity is more important in the calculation of U and H than coupling: coupling between the intermolecular degrees of freedom becomes less important as temperature increases, whereas anharmonicity becomes more important. The most anharmonic motions in H₂-Li⁺-benzene are the "helicopter" and "ferris wheel" H₂ rotations. Treating these motions as one-dimensional free and hindered rotors, respectively, provides simple corrections to standard harmonic oscillator, rigid rotor thermochemical expressions for internal energy and enthalpy that encapsulate the majority of the anharmonicity. At 150 K, our best rigid-body PIMC estimates for ΔU_ads and ΔH_ads are −13.3 ± 0.1 and −14.5 ± 0.1 kJ mol⁻¹, respectively.
Light, Emily M W; Kline, Allison S; Drosky, Megan A; Chapman, Larry S
2015-08-01
The objective of this study is to measure the return on investment (ROI) of the wellness programs available to the Price Chopper/Golub Corporation employee population. Medical claims data, risk level, and presence of comorbidities such as diabetes and heart disease were compared in a matched retrospective cohort of participants and nonparticipants, with 2008, 2009, and 2010 serving as measurement years. Program costs and estimated savings were used to calculate an ROI of $4.33 for every dollar invested in wellness programs. Reductions in medical costs were observed at several risk and participation levels, with an average savings of $133 per participant and a 3-year savings estimate of $285,706. The positive ROI and savings estimate indicate that the wellness interventions added economic value to the Price Chopper/Golub Corporation.
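The reported ROI is simply savings per dollar of program cost. A trivial sketch; the program cost shown is only implied if the reported ROI and the 3-year savings figure refer to the same totals, which is an assumption:

```python
def roi(savings, cost):
    """Dollars returned per dollar invested."""
    return savings / cost

# Reported 3-year savings; the cost is implied, not reported (assumption)
savings = 285_706
implied_cost = savings / 4.33   # roughly $66,000 under that assumption

print(round(roi(savings, implied_cost), 2))  # 4.33
```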
The indirect costs of cancer-related absenteeism in the workplace in Poland.
Macioch, Tomasz; Hermanowski, Tomasz
2011-12-01
The aim of this study was to evaluate cancer-related absenteeism costs in Poland. Data on sickness absences and disability were retrieved from the Department of Statistics of the Social Insurance Institution. The cost of lost productivity owing to premature death was estimated from data retrieved from the Polish National Cancer Registry. Absenteeism costs were estimated on the basis of the measure of gross value added per employee. The costs of lost productivity owing to sick leave, disability, and premature death were estimated to be 1.572 billion EUR, 0.504 billion EUR, and 0.535 billion EUR, respectively, in 2009. The indirect costs of lost productivity owing to cancer-related sick leave, disability, and premature death have a substantial effect on the Polish economy. In 2009, they accounted for more than 0.8% of GDP.
Candidate gene analysis for Alzheimer's disease in adults with Down syndrome.
Lee, Joseph H; Lee, Annie J; Dang, Lam-Ha; Pang, Deborah; Kisselev, Sergey; Krinsky-McHale, Sharon J; Zigman, Warren B; Luchsinger, José A; Silverman, Wayne; Tycko, Benjamin; Clark, Lorraine N; Schupf, Nicole
2017-08-01
Individuals with Down syndrome (DS) overexpress many genes on chromosome 21 due to trisomy and have high risk of dementia due to the Alzheimer's disease (AD) neuropathology. However, there is a wide range of phenotypic differences (e.g., age at onset of AD, amyloid β levels) among adults with DS, suggesting the importance of factors that modify risk within this particularly vulnerable population, including genotypic variability. Previous genetic studies in the general population have identified multiple genes that are associated with AD. This study examined the contribution of polymorphisms in these genes to the risk of AD in adults with DS ranging from 30 to 78 years of age at study entry (N = 320). We used multiple logistic regressions to estimate the likelihood of AD using single-nucleotide polymorphisms (SNPs) in candidate genes, adjusting for age, sex, race/ethnicity, level of intellectual disability and APOE genotype. This study identified multiple SNPs in APP and CST3 that were associated with AD at a gene-wise level empirical p-value of 0.05, with odds ratios in the range of 1.5-2. SNPs in MARK4 were marginally associated with AD. CST3 and MARK4 may contribute to our understanding of potential mechanisms where CST3 may contribute to the amyloid pathway by inhibiting plaque formation, and MARK4 may contribute to the regulation of the transition between stable and dynamic microtubules. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Rougé, Charles; Harou, Julien J.; Pulido-Velazquez, Manuel; Matrosov, Evgenii S.
2017-04-01
The marginal opportunity cost of water refers to benefits forgone by not allocating an additional unit of water to its most economically productive use at a specific location in a river basin at a specific moment in time. Estimating the opportunity cost of water is an important contribution to water management as it can be used for better water allocation or better system operation, and can suggest where future water infrastructure could be most beneficial. Opportunity costs can be estimated using 'shadow values' provided by hydro-economic optimization models. Yet such models' use of optimization means they have difficulty accurately representing the impact of operating rules and regulatory and institutional mechanisms on actual water allocation. In this work we use more widely available river basin simulation models to estimate opportunity costs. This has been done before by adding a small quantity of water to the model at the place and time where the opportunity cost is to be computed, then running a simulation and comparing the difference in system benefits. The added system benefits per unit of water added to the system then provide an approximation of the opportunity cost. This approximation can then be used to design efficient pricing policies that provide incentives for users to reduce their water consumption. Yet this method requires one simulation run per node and per time step, which is computationally demanding for large-scale systems and short time steps (e.g., a day or a week). Besides, opportunity cost estimates are supposed to reflect the most productive use of an additional unit of water, yet the simulation rules do not necessarily use water that way. In this work, we propose an alternative approach, which computes the opportunity cost through a double backward induction, first recursively from outlet to headwaters within the river network at each time step, then recursively backwards in time.
Both backward inductions only require linear operations, and the resulting algorithm tracks the maximal benefit that can be obtained by having an additional unit of water at any node in the network and at any date in time. Results 1) can be obtained from the results of a rule-based simulation using a single post-processing run, and 2) are exactly the (gross) benefit forgone by not allocating an additional unit of water to its most productive use. The proposed method is applied to London's water resource system to track the value of storage in the city's water supply reservoirs on the Thames River throughout a weekly 85-year simulation. Results, obtained in 0.4 seconds on a single processor, reflect the environmental cost of water shortage. This fast computation allows visualizing the seasonal variations of the opportunity cost depending on reservoir levels, demonstrating the potential of this approach for exploring water values and its variations using simulation models with multiple runs (e.g. of stochastically generated plausible future river inflows).
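A much-simplified sketch of the double backward induction idea, not the authors' exact algorithm: marginal benefits are assumed known per node and time step, the network is a single headwater-to-outlet chain, and stored water re-enters at the headwater reservoir with a fixed carryover efficiency:

```python
def opportunity_costs(mb, eff=0.9):
    """Double backward induction over space and time (illustrative sketch).
    mb[t][n] is the marginal benefit of one extra unit of water at node n
    (0 = headwater reservoir, last = outlet) during time step t. Within a
    step, a unit can be used at its node or released downstream; a unit held
    at the reservoir survives to the next step with efficiency eff.
    Returns oc[t][n]: the best benefit reachable from (t, n)."""
    T = len(mb)
    oc = [[0.0] * len(row) for row in mb]
    next_headwater = 0.0                       # value of stored water beyond horizon
    for t in reversed(range(T)):               # backward in time
        v = 0.0                                # value of a unit flowing past the outlet
        for n in reversed(range(len(mb[t]))):  # backward in space: outlet -> headwater
            v = max(mb[t][n], v)
            oc[t][n] = v
        # at the reservoir, storing for later may beat releasing now
        oc[t][0] = max(oc[t][0], eff * next_headwater)
        next_headwater = oc[t][0]
    return oc

# Two steps, three nodes: storing at t=0 (0.9 * 10 = 9) beats the best
# in-step use (5), so the reservoir's opportunity cost at t=0 is 9.
print(opportunity_costs([[1, 5, 2], [10, 0, 0]])[0][0])  # 9.0
```

Both passes involve only max/multiply operations over the simulation output, which is why a single post-processing run suffices.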
Kaitaniemi, Pekka
2008-04-09
Allometric equations are widely used in many branches of biological science. The potential information content of the normalization constant b in allometric equations of the form Y = bX^a has, however, remained largely neglected. To demonstrate the potential for utilizing this information, I generated a large number of artificial datasets that resembled those that are frequently encountered in biological studies, i.e., relatively small samples including measurement error or uncontrolled variation. The value of X was allowed to vary randomly within the limits describing different data ranges, and a was set to a fixed theoretical value. The constant b was set to a range of values describing the effect of a continuous environmental variable. In addition, a normally distributed random error was added to the values of both X and Y. Two different approaches were then used to model the data. The traditional approach estimated both a and b using a regression model, whereas an alternative approach set the exponent a at its theoretical value and only estimated the value of b. Both approaches produced virtually the same model fit with less than 0.3% difference in the coefficient of determination. Only the alternative approach was able to precisely reproduce the effect of the environmental variable, which was largely lost among noise variation when using the traditional approach. The results show how the value of b can be used as a source of valuable biological information if an appropriate regression model is selected.
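The two modeling approaches compared above can be sketched with an ordinary least-squares fit on log-transformed data; setting the exponent a to its theoretical value leaves only the normalization constant b to estimate:

```python
import math

def fit_allometric(xs, ys, fixed_a=None):
    """Fit Y = b * X**a on the log-log scale.
    fixed_a=None  -> estimate both a and b (the 'traditional' approach);
    fixed_a=value -> hold the exponent at its theoretical value and
                     estimate only the normalization constant b."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    if fixed_a is None:
        mx, my = sum(lx) / n, sum(ly) / n
        a = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
             / sum((u - mx) ** 2 for u in lx))   # OLS slope on log-log scale
    else:
        a = fixed_a
    # log b is the mean residual log Y - a * log X
    b = math.exp(sum(v - a * u for u, v in zip(lx, ly)) / n)
    return a, b

# Noise-free data with true b = 2 and a theoretical exponent a = 3/4
xs = [1, 2, 4, 8, 16]
ys = [2.0 * x ** 0.75 for x in xs]
print(fit_allometric(xs, ys))                # both estimated
print(fit_allometric(xs, ys, fixed_a=0.75))  # only b estimated
```

With noisy data the free fit spends degrees of freedom estimating a, letting sampling error leak into b; fixing a channels all the signal into b, which is the paper's point.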
Benson, L.V.; Ramsey, D.K.; Stahle, D.W.; Petersen, K.L.
2013-01-01
In this paper, we present a model of prehistoric southwestern Colorado maize productivity. The model is based on a tree-ring reconstruction of water-year precipitation for Mesa Verde for the period A.D. 480 to 2011. Correlation of historic Mesa Verde precipitation with historic precipitation at 11 other weather stations enabled the construction of an elevation-dependent precipitation function. Prehistoric water-year precipitation values for Mesa Verde, together with the elevation-dependent precipitation function, allowed the elevations of southwestern Colorado precipitation contours to be constructed for each year since A.D. 480, including the 30-cm contour, which represents the minimum amount of precipitation necessary for the production of maize, and the 50-cm contour, which represents the optimum amount. We also demonstrate calculations of prehistoric maize productivity and field life for any specific elevation. These calculations were performed using organic nitrogen measurements made on seven southwestern Colorado soil groups together with values of reconstructed water-year precipitation and estimates of the organic nitrogen mineralization rate.
Malinowski, Krzysztof Piotr; Kawalec, Paweł Piotr; Moćko, Paweł
2016-01-01
The aim of this study is to assess the indirect costs of six major autoimmune diseases: seropositive rheumatoid arthritis, other types of rheumatoid arthritis, psoriasis, multiple sclerosis, Type 1 diabetes, and ulcerative colitis. Relevant data for 2012 on sick leave and short- and long-term work disability were obtained from the Social Insurance Institution in Poland. Indirect costs were estimated using the human capital approach based on gross domestic product per capita, gross value added per worker, and gross income per worker in Poland in 2012, and expressed in euro. We recorded data on a total of 45,500 patients. The total indirect costs were EUR 146,862,569; 353,683,508; and 108,154,271, calculated using gross domestic product, gross value added, and gross income, respectively. Considering only data on absenteeism collected by the Social Insurance Institution in Poland, we can conclude that the selected autoimmune diseases are associated with substantial indirect costs.
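The human capital approach values a lost working day at the average output per worker per day. A minimal sketch; the working-days-per-year figure and the example inputs are assumptions for illustration, not the study's values:

```python
def human_capital_cost(days_lost, annual_output_per_worker, working_days=250):
    """Human capital approach: production lost to absence, valued at average
    daily output per worker. The annual figure can be GDP per capita, gross
    value added per worker, or gross income per worker, which is how a single
    absence dataset yields several indirect-cost estimates."""
    return days_lost * annual_output_per_worker / working_days

# Assumed inputs (EUR): 1,000,000 sick-leave days, EUR 10,000 annual output
print(human_capital_cost(1_000_000, 10_000))  # 40000000.0
```

Swapping the annual productivity measure while holding absence days fixed reproduces the spread between the three totals reported above.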
Added Sugar Consumption and Chronic Oral Disease Burden among Adolescents in Brazil.
Carmo, C D S; Ribeiro, M R C; Teixeira, J X P; Alves, C M C; Franco, M M; França, A K T C; Benatti, B B; Cunha-Cruz, J; Ribeiro, C C C
2018-05-01
Chronic oral diseases are rarely studied together, especially with an emphasis on their common risk factors. This study examined the association of added sugar consumption with "chronic oral disease burden" among adolescents, with consideration of obesity and systemic inflammation pathways through structural equation modeling. A cross-sectional study was conducted of a complex random sample of adolescent students enrolled at public schools in São Luís, Brazil (n = 405). The outcome was chronic oral disease burden, a latent variable based on the presence of probing depth ≥4 mm, bleeding on probing, caries, and clinical consequences of untreated caries. The following hypotheses were tested: 1) caries and periodontal diseases among adolescents are correlated with each other; 2) added sugar consumption and obesity are associated with chronic oral disease burden; and 3) chronic oral disease burden is linked to systemic inflammation. Models were adjusted for socioeconomic status, added sugar consumption, oral hygiene behaviors, obesity, and serum levels of interleukin 6 (IL-6). All estimators of the latent variable chronic oral disease burden involved factor loadings ≥0.5 and P values <0.001, indicating good fit. Added sugar consumption (standardized coefficient [SC] = 0.212, P = 0.005), high IL-6 levels (SC = 0.130, P = 0.036), and low socioeconomic status (SC = -0.279, P = 0.001) were associated with increased chronic oral disease burden values. Obesity was associated with high IL-6 levels (SC = 0.232, P = 0.001). Visible plaque index was correlated with chronic oral disease burden (SC = 0.381, P < 0.001). Our finding that caries and periodontal diseases are associated with each other and with added sugar consumption, obesity, and systemic inflammation reinforces the guidance of the World Health Organization that any approach intended to prevent noncommunicable diseases should be directed toward common risk factors.
Meta-analysis of selenium accumulation and expression of antioxidant enzymes in chicken tissues.
Zoidis, E; Demiris, N; Kominakis, A; Pappas, A C
2014-04-01
A meta-analysis integrating results of 40 selenium (Se) supplementation experiments that originated from 35 different controlled randomized trials was carried out in an attempt to identify significant factors that affect tissue Se accumulation in chicken. Examined factors included: Se source (12 different sources examined), type of chicken (laying hens or broilers), age of birds at the beginning of supplementation, duration of supplementation, year during which the study was conducted, sex of birds, number of chickens per treatment, method of analysis, tissue type, concentration of Se determined and Se added to feed. A correlation analysis was also carried out between tissue Se concentration and glutathione peroxidase activity. Data analysis showed that the factors significantly affecting tissue Se concentration include type of chicken (P=0.006), type of tissue (P<0.001) and the analytical method used (P=0.014). Although Se source was not found to affect tissue Se concentration (overall P>0.05), certain inorganic sources (sodium selenite, calcium selenite, sodium selenate) and organic sources (B-Traxim Se, Se-yeast, Se-malt, Se-enriched cabbage and Se-enriched garlic), as well as background Se level from feed ingredients, were found to significantly affect tissue Se concentration. The Se accumulation rate (estimated as the linear regression coefficient of Se concentration on Se added to feed) discriminated between the various tissues, with the highest values estimated in the leg muscle and the lowest in blood plasma. Correlation analysis also showed that tissue Se concentration (pooled data) was correlated to Se added to feed (r=0.529, P<0.01, log values) and to glutathione peroxidase activity (r=0.332, P=0.0478), with the latter not being correlated with Se added to feed.
Although significant factors affecting Se concentration were reported in the present study, they do not necessarily indicate the in vivo function of the antioxidant system or the level of accumulated Se as other factors, not examined in the present study, may interact at the level of trace element absorption, distribution and retention.
Borsatti, L; Vieira, S L; Stefanello, C; Kindlein, L; Oviedo-Rondón, E O; Angel, C R
2018-01-01
A study was conducted to determine the AMEn contents of fat by-products from the soybean oil industry for broiler chickens. A total of 390 slow-feathering Cobb × Cobb 500 male broilers were randomly distributed into 13 treatments having 6 replicates of 5 birds each. Birds were fed a common starter diet from placement to 21 d. Experimental corn-soy diets were composed of four fat sources, added at 3 increasing levels each, and were fed from 21 to 28 d. Fat sources utilized were acidulated soybean soapstock (ASS), glycerol (GLY), lecithin (LEC), and a mixture (MIX) containing 85% ASS, 10% GLY and 5% LEC. A 4 × 3 + 1 factorial arrangement was used with 4 by-products (ASS, GLY, LEC, or MIX), 3 inclusion levels and 1 basal diet. Each of the four fat by-product sources was included in the diets as follows: 2% of by-products (98% basal + 2% by-product), 4% (96% basal + 4% by-product), or 6% (94% basal + 6% by-product). Birds were submitted to 94, 96, 98, and 100% of ad libitum feed intake; therefore, the differences in AMEn consumption were only due to the added by-product. Total excreta were collected twice daily for 72 h to determine apparent metabolizable energy contents starting at 25 d. The AMEn intake was regressed against feed intake and the slope was used to estimate AMEn values for each fat source. Linear regression equations (P < 0.05) estimated for each by-product were as follows: 7,153X - 451.9 for ASS; 3,916X - 68.2 for GLY; 7,051X - 448.3 for LEC, and 8,515X - 622.3 for MIX. Values of AMEn were 7,153, 3,916, 7,051, and 8,515 kcal/kg DM for ASS, GLY, LEC, and MIX, respectively. The present study generated AMEn data for fat by-products that can be used in poultry feed formulation. It also provides indications that, by adding the 3 by-products in the proportions present in the MIX, considerable economic advantage can be attained. © 2017 Poultry Science Association Inc.
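The regression-slope method described above lends itself to a short sketch: AMEn intake is regressed against feed intake of the test by-product, and the fitted slope is the source's AMEn. The function name, the synthetic data, and the 7,000 kcal/kg DM "true" value below are hypothetical, for illustration only.

```python
import numpy as np

def amen_slope(feed_intake_kg_dm, amen_intake_kcal):
    """Estimate AMEn (kcal/kg DM) of a fat source as the slope of the
    least-squares line of AMEn intake on by-product feed intake."""
    slope, intercept = np.polyfit(feed_intake_kg_dm, amen_intake_kcal, 1)
    return slope, intercept

# Synthetic replicates for a hypothetical source with a true AMEn of
# 7,000 kcal/kg DM and a small basal offset, plus measurement noise.
rng = np.random.default_rng(0)
feed = np.tile([0.02, 0.04, 0.06], 6)            # three inclusion levels
amen = 7000.0 * feed - 450.0 + rng.normal(0.0, 5.0, feed.size)
slope, _ = amen_slope(feed, amen)                # recovers roughly 7,000
```

Regressing intake on intake (rather than averaging concentration ratios) is what lets the basal diet's contribution fall into the intercept, so the slope isolates the by-product's energy value.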
320-row CT renal perfusion imaging in patients with aortic dissection: A preliminary study.
Liu, Dongting; Liu, Jiayi; Wen, Zhaoying; Li, Yu; Sun, Zhonghua; Xu, Qin; Fan, Zhanming
2017-01-01
To investigate the clinical value of renal perfusion imaging in patients with aortic dissection (AD) using 320-row computed tomography (CT), and to determine the relationship between renal CT perfusion imaging and various factors of aortic dissection. Forty-three patients with AD who underwent 320-row CT renal perfusion before operation were prospectively enrolled in this study. Diagnosis of AD was confirmed by transthoracic echocardiography. Blood flow (BF) of bilateral renal perfusion was measured and analyzed. CT perfusion imaging signs of AD in relation to the type of AD, number of entry tears and the false lumen thrombus were observed and compared. The BF values of patients with type A AD were significantly lower than those of patients with type B AD (P = 0.004). No significant difference was found in the BF between different numbers of intimal tears (P = 0.288), but BF values were significantly higher in cases with a false lumen without thrombus and renal arteries arising from the true lumen than in those with thrombus (P = 0.036). The BF values measured between the true lumen, false lumen and overriding groups were different (P = 0.02), with the true lumen group having the highest. Also, the difference in BF values between true lumen and false lumen groups was statistically significant (P = 0.016), while no statistical significance was found in the other two groups (P > 0.05). The larger the size of intimal entry tears, the greater the BF values (P = 0.044). This study shows a direct correlation between renal CT perfusion changes and AD, with the size, number of intimal tears, different types of AD, different renal artery origins and false lumen thrombosis, significantly affecting the perfusion values.
Goodness-of-Fit Tests for Generalized Normal Distribution for Use in Hydrological Frequency Analysis
NASA Astrophysics Data System (ADS)
Das, Samiran
2018-04-01
The use of the three-parameter generalized normal (GNO) as a hydrological frequency distribution is well recognized, but its application is limited due to the unavailability of popular goodness-of-fit (GOF) test statistics. This study develops popular empirical distribution function (EDF)-based test statistics to investigate the goodness-of-fit of the GNO distribution. The focus is on the case most relevant to the hydrologist, namely, that in which the parameter values are unknown and must be estimated from a sample using the method of L-moments. The widely used EDF tests such as Kolmogorov-Smirnov, Cramér-von Mises, and Anderson-Darling (AD) are considered in this study. A modified version of AD, namely, the Modified Anderson-Darling (MAD) test, is also considered and its performance is assessed against other EDF tests using a power study that incorporates six specific Wakeby distributions (WA-1, WA-2, WA-3, WA-4, WA-5, and WA-6) as the alternative distributions. The critical values of the proposed test statistics are approximated using Monte Carlo techniques and are summarized in chart and regression-equation form to show their dependence on the shape parameter and sample size. The performance results obtained from the power study suggest that the AD and a variant of the MAD (MAD-L) are the most powerful tests. Finally, the study performs case studies involving annual maximum flow data of selected gauged sites from Irish and US catchments to show the application of the derived critical values and recommends further assessments to be carried out on flow data sets of rivers with various hydrological regimes.
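A miniature version of the EDF-test machinery discussed above can be sketched as follows. For brevity this sketch uses a fitted normal null rather than the paper's GNO distribution fitted by L-moments; the function names and the Monte Carlo settings are assumptions for illustration.

```python
import numpy as np
from statistics import NormalDist

def anderson_darling(sample, cdf):
    """EDF-based Anderson-Darling statistic A^2 of a sample against a CDF."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = x.size
    u = np.clip(np.array([cdf(v) for v in x]), 1e-12, 1 - 1e-12)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(u) + np.log(1 - u[::-1])))

def mc_critical_value(n, alpha=0.05, reps=500, seed=1):
    """Approximate the null critical value by Monte Carlo, re-estimating
    the parameters for each simulated sample (the estimated-parameters
    case emphasized in the study)."""
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(reps):
        s = rng.normal(size=n)
        mu, sd = s.mean(), s.std(ddof=1)
        stats.append(anderson_darling(s, NormalDist(mu, sd).cdf))
    return float(np.quantile(stats, 1 - alpha))
```

A sample is rejected at level alpha when its A^2 exceeds the simulated critical value; the same recipe carries over to any fitted null distribution, which is why the paper tabulates critical values as functions of shape parameter and sample size.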
Using School Lotteries to Evaluate the Value-Added Model
ERIC Educational Resources Information Center
Deutsch, Jonah
2013-01-01
There has been an active debate in the literature over the validity of value-added models. In this study, the author tests the central assumption of value-added models that school assignment is random relative to expected test scores conditional on prior test scores, demographic variables, and other controls. He uses a Chicago charter school's…
26 CFR 31.3402(t)-4 - Certain payments excepted from withholding.
Code of Federal Regulations, 2012 CFR
2012-04-01
... contemplated in the agreement. (n) Sales tax, excise tax, value-added tax, and other taxes. For purposes of this section, section 3402(t) withholding applies to any payment of sales tax, excise tax, value-added.... Notwithstanding the foregoing, the payment of sales tax, excise tax, value-added tax, or other tax may be excluded...
26 CFR 1.412(c)(2)-1 - Valuation of plan assets; reasonable actuarial valuation methods.
Code of Federal Regulations, 2014 CFR
2014-04-01
... computed by— (i) Determining the fair market value of plan assets at least annually, (ii) Adding the...) In determining the adjusted value of plan assets for a prior valuation date, there is added to the... market value, amounts are subtracted from this account and added, to the extent necessary, to raise the...
Exploring Value-Added Options - Opportunities in Mouldings and Millwork
Bob Smith; Philip A. Araman
1997-01-01
The millwork industry, which includes manufacture of doors, windows, stair parts, blinds, mouldings, picture frame material, and assorted trim, can be a lucrative value-added opportunity for sawmills. Those entering the value-added millwork market often find that it is a great opportunity to generate greater profits from upper grades and utility species, such as yellow...
Will Courts Shape Value-Added Methods for Teacher Evaluation? ACT Working Paper Series. WP-2014-2
ERIC Educational Resources Information Center
Croft, Michelle; Buddin, Richard
2014-01-01
As more states begin to adopt teacher evaluation systems based on value-added measures, legal challenges have been filed both seeking to limit the use of value-added measures ("Cook v. Stewart") and others seeking to require more robust evaluation systems ("Vergara v. California"). This study reviews existing teacher evaluation…
Castel, Alan D.; Balota, David A.; McCabe, David P.
2009-01-01
Selecting what is important to remember, attending to this information, and then later recalling it can be thought of in terms of the strategic control of attention and the efficient use of memory. In order to examine whether aging and Alzheimer's disease (AD) influenced this ability, the present study used a selectivity task, where studied items were worth various point values and participants were asked to maximize the value of the items they recalled. Relative to younger adults (N=35) and healthy older adults (N=109), individuals with very mild AD (N=41) and mild AD (N=13) showed impairments in the strategic and efficient encoding and recall of high value items. Although individuals with AD recalled more high value items than low value items, they did not efficiently maximize memory performance (as measured by a selectivity index) relative to healthy older adults. Performance on complex working memory span tasks was related to the recall of the high value items but not low value items. This pattern suggests that relative to healthy aging, AD leads to impairments in strategic control at encoding and value-directed remembering. PMID:19413444
Crustal structure of an exhumed IntraCONtinental Sag (ICONS): the Mekele Basin in Northern Ethiopia.
NASA Astrophysics Data System (ADS)
Alemu, T. B.; Abdelsalam, M. G.
2017-12-01
The Mekele Sedimentary Basin (MSB) in Ethiopia is a Paleozoic-Mesozoic IntraCONtinental Sag (ICONS) exposed due to Cenozoic domal and rift flank uplift associated with the Afar mantle plume and Afar Depression (AD). ICONS are formed over stable lithosphere and, in contrast to rift and foreland basins, show a circular-elliptical shape in map view, a saucer shape in cross section, and concentric gravity minima. Surface geological features of the MSB have been shown to exhibit geologic characteristics similar to those of other ICONS. We used the World Gravity Map (WGM 2012) data to investigate the subsurface-crustal structure of the MSB. We also used 2D power spectrum analysis and inversion of the gravity field to estimate the Moho depth. Our results show the Bouguer anomalies of the WGM 2012 range between 130 mGal and -110 mGal, with the highest values within the AD. Despite the effect of the AD on the gravity anomalies, the MSB is characterized by the presence of a gravity low anomaly that reaches in places -110 mGal, especially in its western part. The Moho depth estimate, from both spectral analysis and inversion of the gravity data, is between 36 and 40 km over most of the western and southern margins of the MSB. However, as the AD is approached, in the eastern margins of the MSB, crustal thickness estimates are highly affected by the anomalously thin and magmatic segment of the AD, and the Moho depth ranges between 25 and 30 km. Our results are consistent with those of seismic studies in areas far from the MSB, but within the Northwestern Ethiopian Plateau where the MSB is located. Those studies have reported an abrupt decrease in Moho depth from 40 km beneath the Northwestern plateau to 20 km in the adjacent AD. Though the MSB is small (100 km × 100 km) compared to other ICONS, and affected by the neighboring AD, it is characterized by elliptical gravity minima and a relatively thicker crust that gradually thickens away from the rift.
In addition, seismic imaging of faster shear wave velocity beneath the southwestern MSB at 80 km depth by previous studies mimic the surface and shallow subsurface features that we interpret as indicative of major characteristics of ICONS. Due to their location away from active plate boundaries, most ICONS are buried since the time of their formation. The MSB represents a rare example of a completely exhumed ICONS.
Reduced-rank technique for joint channel estimation in TD-SCDMA systems
NASA Astrophysics Data System (ADS)
Kamil Marzook, Ali; Ismail, Alyani; Mohd Ali, Borhanuddin; Sali, Adawati; Khatun, Sabira
2013-02-01
In time division-synchronous code division multiple access (TD-SCDMA) systems, increasing the system capacity by inserting the largest possible number of users in one time slot (TS) requires adding more estimation processes to estimate the joint channel matrix for the whole system. The increase in the number of channel parameters due to the increase in the number of users in one TS directly affects the precision of the estimator's performance. This article presents a novel channel estimation with low complexity, which relies on reducing the rank order of the total channel matrix H. The proposed method exploits the rank deficiency of H to reduce the number of parameters that characterise this matrix. The adopted reduced-rank technique is based on the truncated singular value decomposition algorithm. The algorithms for reduced-rank joint channel estimation (JCE) are derived and compared against traditional full-rank JCEs: the least squares (LS, or Steiner) and enhanced (LS or MMSE) algorithms. Simulation results of the normalised mean square error showed the superiority of the reduced-rank estimators. In addition, the channel impulse responses found by the reduced-rank estimator for all active users offer considerable performance improvement over the conventional estimator along the channel window length.
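The truncation step at the heart of the reduced-rank technique can be sketched directly: keep only the dominant singular values of the estimated channel matrix. The matrix sizes, rank and noise level below are hypothetical, not taken from the article.

```python
import numpy as np

def reduced_rank(H, r):
    """Best rank-r approximation of H via truncated SVD."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

# A rank-deficient "joint channel matrix" observed through noise, as a
# stand-in for a full-rank LS estimate.
rng = np.random.default_rng(0)
H_true = rng.normal(size=(8, 2)) @ rng.normal(size=(2, 8))  # rank 2
H_ls = H_true + 0.05 * rng.normal(size=(8, 8))              # noisy estimate
H_rr = reduced_rank(H_ls, 2)  # keeps only the dominant rank-2 subspace
```

Because the noise outside the retained subspace is discarded, the truncated estimate is typically closer to the true matrix than the raw full-rank estimate, which is the mechanism behind the NMSE gains reported above.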
78 FR 66254 - Airworthiness Directives; The Boeing Company Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-05
... AD. Costs of Compliance We estimate that this AD affects 84 airplanes of U.S. registry. We estimate... inspection cycle. We have received no definitive data that would enable us to provide a cost estimate for the... 2 of paragraph 1.E., ``Compliance,'' of Boeing Service Bulletin 747-53A2688, Revision 1, dated...
Roth, Christopher J; Boll, Daniel T; Wall, Lisa K; Merkle, Elmar M
2010-08-01
The purpose of this investigation was to assess workflow for medical imaging studies, specifically comparing liver and knee MRI examinations by use of the Lean Six Sigma methodologic framework. The hypothesis tested was that the Lean Six Sigma framework can be used to quantify MRI workflow and to identify sources of inefficiency to target for sequence and protocol improvement. Audio-video interleave streams representing individual acquisitions were obtained with graphic user interface screen capture software in the examinations of 10 outpatients undergoing MRI of the liver and 10 outpatients undergoing MRI of the knee. With Lean Six Sigma methods, the audio-video streams were dissected into value-added time (true image data acquisition periods), business value-added time (time spent that provides no direct patient benefit but is requisite in the current system), and non-value-added time (scanner inactivity while awaiting manual input). For overall MRI table time, value-added time was 43.5% (range, 39.7-48.3%) of the time for liver examinations and 89.9% (range, 87.4-93.6%) for knee examinations. Business value-added time was 16.3% of the table time for the liver and 4.3% of the table time for the knee examinations. Non-value-added time was 40.2% of the overall table time for the liver and 5.8% for the knee examinations. Liver MRI examinations consume statistically significantly more non-value-added and business value-added times than do knee examinations, primarily because of respiratory command management and contrast administration. Workflow analyses and accepted inefficiency reduction frameworks can be applied with use of a graphic user interface screen capture program.
[The contribution of food and airborne allergens in the pathogenesis of atopic dermatitis].
Dynowska, Dorota; Kolarzyk, Emilia; Schlegel-Zawadzka, Małgorzata; Dynowski, Wojciech
2002-01-01
Food hypersensitivity and airborne allergens may play a role in the pathogenesis of atopic dermatitis (AD). The aim of this study was to evaluate the kinds of food and airborne allergens that may most often induce and intensify AD lesions, and also to assess the variability and the kinds of allergens leading to AD. The subjects of this study were 610 persons, aged 3 months-70 years. The clinical status of the patients was estimated by an atopic dermatitis symptom score scale (SCORAD). The laboratory examinations differentiated inflammatory processes from allergic reactions. The skin prick tests (SPT), serum total IgE and specific IgE-antibody levels to chosen food products and standard airborne allergens were determined with the immunoenzymatic method ELISA-DPC. Elevated values of total IgE were found in 46.1% of children in the 0-15 years group and in 31.4% of adolescents and adults (above 15 years of age). On the basis of positive SPT and positive specific IgE values it was shown that the most frequent food allergens were: egg protein (13.0%), cow milk (9.5%), egg yolk (8.4%), wheat (3.6%) and chocolate (1.8%). The airborne allergens most often connected with AD were: grass (11.6%), moulds (10.2%), house dust mites (9.3%), pollens such as hazel (8.0%) and weeds (6.7%), and animal allergens coming from cats (7.2%) and dogs (6.1%). Food hypersensitivity was particularly manifested in children. It may be a predictor of potential future development of allergic disease as well as an indicator of the allergic march.
Microbial production of value-added nutraceuticals.
Wang, Jian; Guleria, Sanjay; Koffas, Mattheos Ag; Yan, Yajun
2016-02-01
Nutraceuticals are important natural bioactive compounds that confer health-promoting and medical benefits to humans. Globally growing demand for value-added nutraceuticals for prevention and treatment of human diseases has rendered nutraceuticals a multi-billion dollar market. However, supply limitations and extraction difficulties from natural sources such as plants, animals or fungi restrict the large-scale use of nutraceuticals. Metabolic engineering via microbial production platforms has been advanced as an eco-friendly alternative approach for production of value-added nutraceuticals from simple carbon sources. Microbial platforms like the most widely used Escherichia coli and Saccharomyces cerevisiae have been engineered as versatile cell factories for production of diverse and complex value-added chemicals such as phytochemicals, prebiotics, polysaccharides and poly amino acids. This review highlights recent progress in the biological production of value-added nutraceuticals via metabolic engineering approaches. Copyright © 2015 Elsevier Ltd. All rights reserved.
Cullinane Thomas, Catherine M.; Koontz, Lynne; Cornachione, Egan
2018-01-01
The National Park Service (NPS) manages the Nation’s most iconic destinations that attract millions of visitors from across the Nation and around the world. Trip-related spending by NPS visitors generates and supports a considerable amount of economic activity within park gateway communities. This economic effects analysis measures how NPS visitor spending cycles through local economies, generating business sales and supporting jobs and income. In 2017, the National Park System received an estimated 331 million recreation visits. Visitors to National Parks spent an estimated $18.2 billion in local gateway regions (defined as communities within 60 miles of a park). The contribution of this spending to the national economy was 306 thousand jobs, $11.9 billion in labor income, $20.3 billion in value added, and $35.8 billion in economic output. The lodging sector saw the highest direct contributions with $5.5 billion in economic output directly contributed to local gateway economies nationally. The sector with the next greatest direct contributions was the restaurants and bars sector, with $3.7 billion in economic output directly contributed to local gateway economies nationally. Results from the Visitor Spending Effects report series are available online via an interactive tool. Users can view year-by-year trend data and explore current year visitor spending, jobs, labor income, value added, and economic output effects by sector for national, state, and local economies. This interactive tool is available at https://www.nps.gov/subjects/socialscience/vse.htm.
Cullinane Thomas, Catherine; Koontz, Lynne
2017-01-01
The National Park Service (NPS) manages the Nation’s most iconic destinations that attract millions of visitors from across the Nation and around the world. Trip-related spending by NPS visitors generates and supports a considerable amount of economic activity within park gateway communities. This economic effects analysis measures how NPS visitor spending cycles through local economies, generating business sales and supporting jobs and income. In 2016, the National Park System received an estimated 330,971,689 recreation visits. Visitors to National Parks spent an estimated $18.4 billion in local gateway regions (defined as communities within 60 miles of a park). The contribution of this spending to the national economy was 318 thousand jobs, $12.0 billion in labor income, $19.9 billion in value added, and $34.9 billion in economic output. The lodging sector saw the highest direct contributions with $5.7 billion in economic output directly contributed to local gateway economies nationally. The sector with the next greatest direct contributions was the restaurants and bars sector, with $3.7 billion in economic output directly contributed to local gateway economies nationally. Results from the Visitor Spending Effects report series are available online via an interactive tool. Users can view year-by-year trend data and explore current year visitor spending, jobs, labor income, value added, and economic output effects by sector for national, state, and local economies. This interactive tool is available at https://www.nps.gov/subjects/socialscience/vse.htm.
NASA Astrophysics Data System (ADS)
Soares, P. M. M.; Cardoso, R. M.
2017-12-01
Regional climate models (RCMs) are run at increasingly fine resolutions in pursuit of an improved representation of regional- to local-scale atmospheric phenomena. The EURO-CORDEX simulations at 0.11° and simulations exploiting finer grid spacing approaching the convection-permitting regimes are representative examples. The climate runs are computationally very demanding and do not always show improvements; these depend on the region, variable and object of study. The gain or loss associated with the use of higher resolution in relation to the forcing model (global climate model or reanalysis), or to different-resolution RCM simulations, is known as added value. Its characterization is a long-standing issue, and many different added-value measures have been proposed. In the current paper, a new method is proposed to assess the added value of finer-resolution simulations in comparison to their forcing data or coarser-resolution counterparts. This approach builds on a probability density function (PDF) matching score, giving a normalised measure of the difference between diverse resolution PDFs, mediated by the observational ones. The distribution added value (DAV) is an objective added-value measure that can be applied to any variable, region or temporal scale, from hindcast or historical (non-synchronous) simulations. The DAV metric and an application to the EURO-CORDEX simulations, for daily temperatures and precipitation, are presented here. The EURO-CORDEX simulations at both resolutions (0.44°, 0.11°) display a clear added value in relation to ERA-Interim, with values around 30% in summer and 20% in the intermediate seasons for precipitation. When the two RCM resolutions are directly compared, the added value is limited. The regions with the larger precipitation DAVs are areas where convection is relevant, e.g. the Alps and Iberia.
When looking at the extreme-precipitation PDF tail, the improvement from the higher resolution is generally greater than that from the lower resolution across seasons and regions. For temperature, the added value is smaller. Acknowledgments: The authors wish to acknowledge the SOLAR (PTDC/GEOMET/7078/2014) and FCT UID/GEO/50019/2013 (Instituto Dom Luiz) projects.
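The abstract does not reproduce the paper's exact formula, so the sketch below is one plausible reading, hedged as an assumption: a Perkins-style overlap score between each model PDF and the observed PDF, with DAV as the normalised difference of the two models' scores.

```python
import numpy as np

def pdf_skill(model, obs, bins):
    """PDF matching score: overlap of the binned model and observed
    PDFs (1 = identical histograms, 0 = disjoint)."""
    pm = np.histogram(model, bins=bins)[0] / len(model)
    po = np.histogram(obs, bins=bins)[0] / len(obs)
    return float(np.minimum(pm, po).sum())

def dav(hi_res, lo_res, obs, bins):
    """Distribution added value (%) of a hi-res run over a lo-res run,
    mediated by the observed distribution."""
    s_hi = pdf_skill(hi_res, obs, bins)
    s_lo = pdf_skill(lo_res, obs, bins)
    return 100.0 * (s_hi - s_lo) / s_lo

# Synthetic daily values: the hi-res model is less biased than the lo-res.
rng = np.random.default_rng(3)
obs = rng.normal(0.0, 1.0, 20000)
hi = rng.normal(0.1, 1.0, 20000)   # small bias
lo = rng.normal(1.0, 1.0, 20000)   # large bias
bins = np.linspace(-6.0, 7.0, 53)
```

Because both runs are scored against the same observed PDF, a positive DAV indicates genuine distributional improvement from resolution rather than a change in bias sign.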
Evaluation of rebreathing potential on bedding for infant use.
Kanetake, Jun; Aoki, Yasuhiro; Funayama, Masato
2003-06-01
Rebreathing is thought to be associated with sudden infant death syndrome (SIDS). The aim of the present study was to evaluate the rebreathing potential of different types of Japanese infant bedding. The rebreathing potential of various combinations of infant bedding was measured using a mechanically simulated breathing model. The types of bedding included five types of mattresses, four types of o-nesyo sheets (waterproof sheets) and a towel. The half-life of the expiratory CO2 concentration, the t1/2-value, was calculated as the index of the rebreathing potential. The softness of the bedding was also measured. There was a moderate proportional correlation between the t1/2-value and the softness (correlation coefficient = 0.509). When a new hard infant mattress was used, the t1/2-values were 13.6-14.1 s, and when an o-nesyo sheet was added, the values were 14.1-16.2 s. When other mattresses were used with the o-nesyo sheet, the values were 14.1-19.2 s. Adding a towel onto the bedding prolonged the t1/2-value (18.5-22.3 s) without exception. It is difficult to estimate the rebreathing potential of bedding on the basis of its appearance or its softness. All infants should be placed on appropriate bedding in case they turn into a prone sleeping position. Our recommendations to avoid rebreathing are as follows: (i) a new hard mattress specifically designed for babies should be used; (ii) a towel should not be used; (iii) an o-nesyo sheet may be used with a new hard infant mattress if necessary.
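As a worked illustration of the t1/2 index, two CO2 concentration readings determine the half-life under an assumed first-order exponential decay; the function and the numbers in the comment are hypothetical, chosen only to echo the magnitudes reported above.

```python
import math

def co2_half_life(t0, c0, t1, c1):
    """Half-life (same units as t) of expiratory CO2 concentration,
    assuming exponential decay c(t) = c0 * exp(-k * (t - t0))."""
    k = math.log(c0 / c1) / (t1 - t0)   # first-order decay constant
    return math.log(2.0) / k

# E.g. a drop from 4.0% to 1.0% CO2 over 28.2 s is two half-lives,
# giving a t1/2 of 14.1 s.
```

A longer t1/2 means exhaled CO2 clears more slowly from the bedding microenvironment, i.e. greater rebreathing potential.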
Mafirakureva, Nyashadzaishe; Mapako, Tonderai; Khoza, Star; Emmanuel, Jean C; Marowa, Lucy; Mvere, David; Postma, Maarten J; van Hulst, Marinus
2016-12-01
The aim of this study was to assess the cost effectiveness of introducing individual-donation nucleic acid testing (ID-NAT), in addition to serologic tests, compared with the exclusive use of serologic tests for the identification of hepatitis B virus (HBV), hepatitis C virus (HCV), and human immunodeficiency virus (HIV) I and II among blood donors in Zimbabwe. The costs, health consequences, and cost effectiveness of adding ID-NAT to serologic tests, compared with serologic testing alone, were estimated from a health care perspective using a decision-analytic model. The introduction of ID-NAT in addition to serologic tests would lower the risk of HBV, HCV, and HIV transmission to 46.9, 0.3, and 2.7 per 100,000 donations, respectively. ID-NAT would prevent an estimated 25, 6, and 9 HBV, HCV, and HIV transfusion-transmitted infections per 100,000 donations, respectively. The introduction of this intervention would result in an estimated 212 quality-adjusted life-years (QALYs) gained. The incremental cost-effectiveness ratio is estimated at US$17,774/QALY, a value far more than three times the gross national income per capita for Zimbabwe. Although the introduction of NAT could further improve the safety of the blood supply, current evidence suggests that it cannot be considered cost effective. Reducing the test costs for NAT through efficient donor recruitment, negotiating the price of reagents, and the efficient use of technology will improve cost effectiveness. © 2016 AABB.
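The decision rule applied above reduces to comparing an incremental cost-effectiveness ratio with a willingness-to-pay threshold (here the common three-times-GNI-per-capita heuristic). The function names and the figures in the example are hypothetical, not drawn from the study's cost model.

```python
def icer(cost_new, cost_base, qaly_new, qaly_base):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_base) / (qaly_new - qaly_base)

def cost_effective(icer_value, gni_per_capita, multiplier=3.0):
    """Heuristic threshold test: cost effective if the ICER does not
    exceed `multiplier` times gross national income per capita."""
    return icer_value <= multiplier * gni_per_capita

# Hypothetical programme: $2.12 million extra cost for 212 QALYs gained
# gives an ICER of $10,000 per QALY.
example_icer = icer(2_120_000.0, 0.0, 212.0, 0.0)
```

An intervention whose ICER lands far above the threshold, as reported for ID-NAT here, can still be worthwhile to pursue if its input costs (reagents, donor recruitment) can be driven down.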
Improving the Fit of a Land-Surface Model to Data Using its Adjoint
NASA Astrophysics Data System (ADS)
Raoult, Nina; Jupp, Tim; Cox, Peter; Luke, Catherine
2016-04-01
Land-surface models (LSMs) are crucial components of the Earth System Models (ESMs) which are used to make coupled climate-carbon cycle projections for the 21st century. The Joint UK Land Environment Simulator (JULES) is the land-surface model used in the climate and weather forecast models of the UK Met Office. In this study, JULES is automatically differentiated using commercial software from FastOpt, resulting in an analytical gradient, or adjoint, of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameter sets by calibrating against observations. We present an introduction to the adJULES system and demonstrate its ability to improve the model-data fit using eddy covariance measurements of gross primary production (GPP) and latent heat (LE) fluxes. adJULES also has the ability to calibrate over multiple sites simultaneously. This feature is used to define new optimised parameter values for the 5 Plant Functional Types (PFTs) in JULES. The optimised PFT-specific parameters improve the performance of JULES over 90% of the FLUXNET sites used in the study. These reductions in error are shown and compared to reductions found due to site-specific optimisations. Finally, we show that calculation of the 2nd derivative of JULES allows us to produce posterior probability density functions of the parameters, and how knowledge of parameter values is constrained by observations.
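adJULES differentiates a full land-surface model, but the calibration loop it enables can be shown with a toy stand-in: gradient descent on the model-data misfit using an analytic gradient (the one-parameter analogue of an adjoint). Everything below is illustrative, not JULES code.

```python
import numpy as np

def misfit_and_grad(a, t, obs):
    """Squared-error cost of the toy model m(t) = a * t against obs,
    together with its analytic gradient d(cost)/da."""
    r = a * t - obs                        # residuals
    return 0.5 * np.sum(r * r), np.sum(r * t)

# Synthetic "observations" from a true parameter value of 2.5.
t = np.linspace(0.0, 1.0, 50)
obs = 2.5 * t
a = 0.0                       # first guess
for _ in range(200):          # plain gradient descent
    cost, grad = misfit_and_grad(a, t, obs)
    a -= 0.05 * grad          # step along the downhill direction
```

For the real model the adjoint supplies `grad` at roughly the cost of one extra model run regardless of the number of parameters, which is what makes this loop feasible for an LSM.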
Assembling GHERG: Could "academic crowd-sourcing" address gaps in global health estimates?
Rudan, Igor; Campbell, Harry; Marušić, Ana; Sridhar, Devi; Nair, Harish; Adeloye, Davies; Theodoratou, Evropi; Chan, Kit Yee
2015-06-01
In recent months, the World Health Organization (WHO), independent academic researchers, the Lancet and PLoS Medicine journals worked together to improve reporting of population health estimates. The new guidelines for accurate and transparent health estimates reporting (likely to be named GATHER), which are eagerly awaited, represent a helpful move that should benefit the field of global health metrics. Building on this progress and drawing from a tradition of Child Health Epidemiology Reference Group (CHERG)'s successful work model, we would like to propose a new initiative - "Global Health Epidemiology Reference Group" (GHERG). We see GHERG as an informal and entirely voluntary international collaboration of academic groups who are willing to contribute to improving disease burden estimates and respect the principles of the new guidelines - a form of "academic crowd-sourcing". The main focus of GHERG will be to identify the "gap areas" where not much information is available and/or where there is a lot of uncertainty present about the accuracy of the existing estimates. This approach should serve to complement the existing WHO and IHME estimates and to represent added value to both efforts.
Revised budget for the oceanic uptake of anthropogenic carbon dioxide
Sarmiento, J.L.; Sundquist, E.T.
1992-01-01
TRACER-CALIBRATED models of the total uptake of anthropogenic CO2 by the world's oceans give estimates of about 2 gigatonnes of carbon per year [1], significantly larger than a recent estimate [2] of 0.3-0.8 Gt C yr-1 for the synoptic air-to-sea CO2 influx. Although both estimates require that the global CO2 budget must be balanced by a large unknown terrestrial sink, the latter estimate implies a much larger terrestrial sink, and challenges the ocean model calculations on which previous CO2 budgets were based. The discrepancy is due in part to the net flux of carbon to the ocean by rivers and rain, which must be added to the synoptic air-to-sea CO2 flux to obtain the total oceanic uptake of anthropogenic CO2. Here we estimate the magnitude of this correction and of several other recently proposed adjustments to the synoptic air-sea CO2 exchange. These combined adjustments minimize the apparent inconsistency, and restore estimates of the terrestrial sink to values implied by the modelled oceanic uptake.
Inzelberg, Rivka; Massarwa, Magda; Schechtman, Edna; Strugatsky, Rosa; Farrer, Lindsay A; Friedland, Robert P
2015-01-01
Vascular risk factors and lack of formal education may increase the risk of Alzheimer's disease (AD). To determine the contribution of vascular risk factors and education to the risk of mild cognitive impairment (MCI) and AD and to estimate the risk for conversion from MCI to AD. This door-to-door survey was performed by an Arab-speaking team in Wadi Ara villages in Israel. All consenting residents aged ≥ 65 years were interviewed for medical history and underwent neurological and cognitive examinations. Individuals were cognitively classified as normal (CN), MCI, AD, vascular dementia, or unclassifiable. MCI patients were re-examined at least one year later to determine conversion to AD. The contributions of age, gender, school years, and vascular risk factors to the probability of conversion were estimated using logistic regression models. Of the 906 participants, 297 (33%) had MCI and 95 (10%) had AD. Older age (p = 0.0008), female gender (p = 0.023), low schooling (p < 0.0001), and hypertension (p = 0.0002) significantly accounted for risk of MCI versus CN, and diabetes was borderline (p = 0.051). The risk of AD versus CN was significantly associated with age (p < 0.0001), female gender (p < 0.0001), low schooling (p = 0.004) and hypertension (p = 0.049). Of the 231 subjects with MCI that were re-examined, 65 converted to AD. In this population, age, female gender, lack of formal education, and hypertension are risk factors for both AD and MCI. Conversion risk from MCI to AD could be estimated as a function of age, time interval between examinations, and hypertension.
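The conversion-risk modelling described above is standard maximum-likelihood logistic regression. A minimal sketch with simulated data follows; the covariates mirror those in the abstract (age, hypertension, inter-examination interval), but every effect size and count is invented for illustration and bears no relation to the Wadi Ara data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
age = rng.normal(75, 6, n)                        # years
hypertension = rng.integers(0, 2, n).astype(float)
interval = rng.uniform(1.0, 4.0, n)               # years between examinations

# Assumed "true" effects, purely for illustration
age_c, interval_c = age - 75, interval - 2.5
eta = -3.6 + 0.22 * age_c + 0.8 * hypertension + 0.3 * interval_c
converted = rng.binomial(1, 1 / (1 + np.exp(-eta)))   # MCI-to-AD conversion (0/1)

# Maximum-likelihood logistic regression via Newton-Raphson
X = np.column_stack([np.ones(n), age_c, hypertension, interval_c])
beta = np.zeros(X.shape[1])
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))               # current fitted probabilities
    W = p * (1 - p)                               # IRLS weights
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (converted - p))
```

`beta[1:]` then recover the assumed log-odds effects of age, hypertension, and interval, which is exactly the form in which such models report conversion risk.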
Reflections on the added value of using mixed methods in the SCAPE study.
Murphy, Kathy; Casey, Dympna; Devane, Declan; Meskell, Pauline; Higgins, Agnes; Elliot, Naomi; Lalor, Joan; Begley, Cecily
2014-03-01
To reflect on the added value that a mixed method design gave in a large national evaluation study of specialist and advanced practice (SCAPE), and to propose a reporting guide that could help make explicit the added value of mixed methods in other studies. Recently, researchers have focused on how to carry out mixed methods research (MMR) rigorously. The value-added claims for MMR include the capacity to exploit the strengths and compensate for weakness inherent in single designs, generate comprehensive description of phenomena, produce more convincing results for funders or policy-makers and build methodological expertise. Data illustrating value added claims were drawn from the SCAPE study. Studies about the purpose of mixed methods were identified from a search of literature. The authors explain why and how they undertook components of the study, and propose a guideline to facilitate such studies. If MMR is to become the third methodological paradigm, then articulation of what extra benefit MMR adds to a study is essential. The authors conclude that MMR has added value and found the guideline useful as a way of making value claims explicit. The clear articulation of the procedural aspects of mixed-methods research, and identification of a guideline to facilitate such research, will enable researchers to learn more effectively from each other.
ERIC Educational Resources Information Center
Harris, Douglas N.
2010-01-01
In this policy brief, the author explores the problems with attainment measures when it comes to evaluating performance at the school level, and explores the best uses of value-added measures. These value-added measures, the author writes, are useful for sorting out-of-school influences from school influences or from teacher performance, giving…
Exogenous Variables and Value-Added Assessments: A Fatal Flaw
ERIC Educational Resources Information Center
Berliner, David C.
2014-01-01
Background: There has been rapid growth in value-added assessment of teachers to meet the widely supported policy goal of identifying the most effective and the most ineffective teachers in a school system. The former group is to be rewarded while the latter group is to be helped or fired for their poor performance. But, value-added approaches to…
Value-Added Dairy Products from Grass-Based Dairy Farms: A Case Study in Vermont
ERIC Educational Resources Information Center
Wang, Qingbin; Parsons, Robert; Colby, Jennifer; Castle, Jeffrey
2016-01-01
On-farm processing of value-added dairy products can be a way for small dairy farms to diversify production and increase revenue. This article examines characteristics of three groups of Vermont farmers who have grass-based dairy farms--those producing value-added dairy products, those interested in such products, and those not interested in such…
Gregory, Simon; Patterson, Fiona; Baron, Helen; Knight, Alec; Walsh, Kieran; Irish, Bill; Thomas, Sally
2016-10-01
Increasing pressure is being placed on external accountability and cost efficiency in medical education and training internationally. We present an illustrative data analysis of the value-added of postgraduate medical education. We analysed historical selection (entry) and licensure (exit) examination results for trainees sitting the UK Membership of the Royal College of General Practitioners (MRCGP) licensing examination (N = 2291). Selection data comprised: a clinical problem solving test (CPST); a situational judgement test (SJT); and a selection centre (SC). Exit data were the applied knowledge test (AKT) of the MRCGP. Ordinary least squares (OLS) regression analyses were used to model differences in attainment in the AKT based on performance at selection (the value-added score). Results were aggregated to the regional level for comparisons. We found significant differences in the value-added score between regional training providers. Whilst three training providers conferred significant value-added, one training provider scored significantly lower than would be predicted from the attainment of its trainees at selection. Value-added analysis in postgraduate medical education potentially offers useful information, although the methodology is complex, controversial, and has significant limitations. Developing these models further could offer important insights to support continuous improvement in medical education in the future.
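The value-added score described here is the residual of an exit-on-entry regression, averaged per provider. A minimal sketch with simulated data; the entry/exit score scales and the per-provider effects are invented, not the MRCGP figures.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_prov = 2000, 4
cpst = rng.normal(50, 10, n)                    # entry: clinical problem solving test
sjt = rng.normal(50, 10, n)                     # entry: situational judgement test
provider = rng.integers(0, n_prov, n)
provider_effect = np.array([0.0, 2.0, -2.0, 0.5])   # assumed "true" value-added
akt = 10 + 0.6 * cpst + 0.3 * sjt + provider_effect[provider] + rng.normal(0, 5, n)

# Predict exit attainment from entry performance alone, then average the
# residuals (observed minus predicted AKT) within each training provider.
X = np.column_stack([np.ones(n), cpst, sjt])
beta, *_ = np.linalg.lstsq(X, akt, rcond=None)
residual = akt - X @ beta
va_score = np.array([residual[provider == g].mean() for g in range(n_prov)])
```

A provider whose trainees outperform their selection-based prediction gets a positive score; one whose trainees underperform gets a negative score, matching the pattern reported in the abstract.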
snpAD: An ancient DNA genotype caller.
Prüfer, Kay
2018-06-21
The study of ancient genomes can elucidate the evolutionary past. However, analyses are complicated by base-modifications in ancient DNA molecules that result in errors in DNA sequences. These errors are particularly common near the ends of sequences and pose a challenge for genotype calling. I describe an iterative method that estimates genotype frequencies and errors along sequences to allow for accurate genotype calling from ancient sequences. The implementation of this method, called snpAD, performs well on high-coverage ancient data, as shown by simulations and by subsampling the data of a high-coverage Neandertal genome. Although estimates for low-coverage genomes are less accurate, I am able to derive approximate estimates of heterozygosity from several low-coverage Neandertals. These estimates show that low heterozygosity, compared to modern humans, was common among Neandertals. The C++ code of snpAD is freely available at http://bioinf.eva.mpg.de/snpAD/. Supplementary data are available at Bioinformatics online.
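Jointly estimating genotype frequencies and an error rate by iteration, as snpAD does, can be sketched as a small EM loop over per-site alternative-allele read counts. This toy version (uniform coverage, a single position-independent error rate, invented frequencies) is far simpler than snpAD's position-dependent error model, but shows the alternation between genotype responsibilities and parameter updates.

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(3)
n_sites, cov = 20000, 30
pi_true = np.array([0.98, 0.015, 0.005])    # hom-ref, het, hom-alt frequencies (assumed)
err_true = 0.01                             # per-read error rate (assumed)

g = rng.choice(3, n_sites, p=pi_true)
theta_true = np.array([err_true, 0.5, 1 - err_true])   # P(alt read | genotype)
alt = rng.binomial(cov, theta_true[g])      # alt-allele read counts per site

# EM: alternate genotype responsibilities with (frequency, error) updates
pi, err = np.full(3, 1 / 3), 0.05
for _ in range(50):
    theta = np.array([err, 0.5, 1 - err])
    lik = pi * binom.pmf(alt[:, None], cov, theta[None, :])
    resp = lik / lik.sum(axis=1, keepdims=True)          # P(genotype | reads)
    pi = resp.mean(axis=0)
    # error rate: alt reads at hom-ref sites plus ref reads at hom-alt sites
    err = (resp[:, 0] @ alt + resp[:, 2] @ (cov - alt)) / \
          (cov * (resp[:, 0].sum() + resp[:, 2].sum()))
```

After convergence, `pi[1]` is the estimated heterozygote frequency, the quantity used above to compare Neandertal heterozygosity to modern humans.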
Health Behavior Changes After Genetic Risk Assessment for Alzheimer Disease: The REVEAL Study
Chao, Serena; Roberts, J. Scott; Marteau, Theresa M.; Silliman, Rebecca; Cupples, L. Adrienne; Green, Robert C.
2008-01-01
Risk information for Alzheimer disease (AD) may be communicated through susceptibility gene disclosure, even though this is not currently in clinical use. The REVEAL Study is the first randomized clinical trial of risk assessment for AD with apolipoprotein E (APOE) genotype and numerical risk estimate disclosure. We examined whether APOE genotype and numerical risk disclosure to asymptomatic individuals at high risk for AD alters health behaviors. One hundred sixty-two participants were randomized to either intervention (APOE disclosure) or control (no genotype disclosure) groups. Subjects in both groups received numerical lifetime risk estimates of future AD development based on sex and family history of AD. The intervention group received their APOE genotype. Subjects were informed that no proven preventive measures for AD existed and given an information sheet on preventative therapies under investigation. Participants who learned they were ε4 positive were significantly more likely than ε4 negative participants to report AD-specific health behavior change 1 year after disclosure (adjusted odds ratio: 2.73; 95% confidence interval: 1.14, 6.54; P = 0.02). Post hoc analyses revealed similar significant associations between numerical lifetime risk estimates and self-report of AD-specific health behavior change. Despite lack of preventive measures for AD, knowledge of APOE genotype, numerical lifetime risk, or both, influences health behavior. PMID:18317253
Health behavior changes after genetic risk assessment for Alzheimer disease: The REVEAL Study.
Chao, Serena; Roberts, J Scott; Marteau, Theresa M; Silliman, Rebecca; Cupples, L Adrienne; Green, Robert C
2008-01-01
Risk information for Alzheimer disease (AD) may be communicated through susceptibility gene disclosure, even though this is not currently in clinical use. The REVEAL Study is the first randomized clinical trial of risk assessment for AD with apolipoprotein E (APOE) genotype and numerical risk estimate disclosure. We examined whether APOE genotype and numerical risk disclosure to asymptomatic individuals at high risk for AD alters health behaviors. One hundred sixty-two participants were randomized to either intervention (APOE disclosure) or control (no genotype disclosure) groups. Subjects in both groups received numerical lifetime risk estimates of future AD development based on sex and family history of AD. The intervention group received their APOE genotype. Subjects were informed that no proven preventive measures for AD existed and given an information sheet on preventative therapies under investigation. Participants who learned they were epsilon 4 positive were significantly more likely than epsilon 4 negative participants to report AD-specific health behavior change 1 year after disclosure (adjusted odds ratio: 2.73; 95% confidence interval: 1.14, 6.54; P=0.02). Post hoc analyses revealed similar significant associations between numerical lifetime risk estimates and self-report of AD-specific health behavior change. Despite lack of preventive measures for AD, knowledge of APOE genotype, numerical lifetime risk, or both, influences health behavior.
NASA Technical Reports Server (NTRS)
Bolten, John; Crow, Wade
2012-01-01
The added value of satellite-based surface soil moisture retrievals for agricultural drought monitoring is assessed by calculating the lagged rank correlation between remotely-sensed vegetation indices (VI) and soil moisture estimates obtained both before and after the assimilation of surface soil moisture retrievals derived from the Advanced Microwave Scanning Radiometer-EOS (AMSR-E) into a soil water balance model. Higher soil moisture/VI lag correlations imply an enhanced ability to predict future vegetation conditions using estimates of current soil moisture. Results demonstrate that the assimilation of AMSR-E surface soil moisture retrievals substantially improve the performance of a global drought monitoring system - particularly in sparsely-instrumented areas of the world where high-quality rainfall observations are unavailable.
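The lagged rank correlation used in this assessment pairs today's soil moisture with vegetation indices observed several composite periods later. A minimal sketch on synthetic anomaly series (the lag length, coupling strength, and noise level are invented):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n, lag = 300, 4                               # e.g. 4 composite periods of lead time
sm = rng.normal(size=n + lag)                 # soil moisture anomaly series
# VI responds to soil moisture `lag` steps earlier: vi[t] is observed at time t + lag
vi = 0.8 * sm[:n] + rng.normal(0, 0.6, n)

rho_lagged, _ = spearmanr(sm[:n], vi)         # soil moisture vs. VI `lag` steps later
rho_unlagged, _ = spearmanr(sm[lag:], vi)     # contemporaneous pairing, for contrast
```

A higher lagged correlation than the contemporaneous one is exactly the signature of skill at predicting future vegetation conditions from current soil moisture.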
Indirect NMR spin-spin coupling constants in diatomic alkali halides
NASA Astrophysics Data System (ADS)
Jaszuński, Michał; Antušek, Andrej; Demissie, Taye B.; Komorovsky, Stanislav; Repisky, Michal; Ruud, Kenneth
2016-12-01
We report the Nuclear Magnetic Resonance (NMR) spin-spin coupling constants for diatomic alkali halides MX, where M = Li, Na, K, Rb, or Cs and X = F, Cl, Br, or I. The coupling constants are determined by supplementing the non-relativistic coupled-cluster singles-and-doubles (CCSD) values with relativistic corrections evaluated at the four-component density-functional theory (DFT) level. These corrections are calculated as the differences between relativistic and non-relativistic values determined using the PBE0 functional with 50% exact-exchange admixture. The total coupling constants obtained in this approach are in much better agreement with experiment than the standard relativistic DFT values with 25% exact-exchange, and are also noticeably better than the relativistic PBE0 results obtained with 50% exact-exchange. Further improvement is achieved by adding rovibrational corrections, estimated using literature data.
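The composite scheme described above can be written as a single additive expression: the non-relativistic CCSD coupling, plus a relativistic correction taken as the difference of four-component and non-relativistic DFT values (PBE0 with 50% exact exchange), plus the rovibrational term (the notation below is ours, not the paper's):

```latex
J_{\mathrm{total}}
  = J^{\mathrm{NR}}_{\mathrm{CCSD}}
  + \bigl( J^{\mathrm{rel}}_{\mathrm{PBE0(50\%)}}
         - J^{\mathrm{NR}}_{\mathrm{PBE0(50\%)}} \bigr)
  + \Delta J_{\mathrm{rovib}}
```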
Specification of the ISS Plasma Environment Variability
NASA Technical Reports Server (NTRS)
Minow, Joseph I.; Neergaard, Linda F.; Bui, Them H.; Mikatarian, Ronald R.; Barsamian, H.; Koontz, Steven L.
2002-01-01
Quantifying the spacecraft charging risks and corresponding hazards for the International Space Station (ISS) requires a plasma environment specification describing the natural variability of ionospheric temperature (Te) and density (Ne). Empirical ionospheric specification and forecast models such as the International Reference Ionosphere (IRI) model typically only provide estimates of long term (seasonal) mean Te and Ne values for the low Earth orbit environment. Knowledge of the Te and Ne variability as well as the likelihood of extreme deviations from the mean values are required to estimate both the magnitude and frequency of occurrence of potentially hazardous spacecraft charging environments for a given ISS construction stage and flight configuration. This paper describes the statistical analysis of historical ionospheric low Earth orbit plasma measurements used to estimate Ne and Te variability in the ISS flight environment. The statistical variability analysis of Ne and Te enables calculation of the expected frequency of occurrence of any particular values of Ne and Te, especially those that correspond to possibly hazardous spacecraft charging environments. The database used in the original analysis included measurements from the AE-C, AE-D, and DE-2 satellites. Recent work on the database has added additional satellites to the database and ground based incoherent scatter radar observations as well. Deviations of the data values from the IRI-estimated Ne and Te parameters for each data point provide a statistical basis for modeling the deviations of the plasma environment from the IRI model output. This technique, while developed specifically for the Space Station analysis, can also be generalized to provide ionospheric plasma environment risk specification models for low Earth orbit over an altitude range of 200 km through approximately 1000 km.
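The statistical basis described here, deviations of measured values from the model baseline turned into an exceedance frequency, can be sketched as follows. The baseline density, scatter distribution, and hazard threshold are all invented for illustration; the real analysis uses the multi-satellite and radar database against IRI output.

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical along-orbit electron densities (m^-3): a model baseline and
# "observed" values with lognormal scatter standing in for natural variability.
ne_model = np.full(5000, 1.0e11)
ne_obs = ne_model * rng.lognormal(mean=0.0, sigma=0.5, size=5000)

deviation = ne_obs / ne_model
# Empirical frequency with which density drops below half the model value
# (an illustrative threshold; anomalously low Ne is one hazard direction).
p_low = float(np.mean(deviation < 0.5))
```

The same empirical-frequency calculation, applied to the real deviation database, is what lets the specification quote how often a given charging-relevant Ne or Te value will be encountered.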
Adding net growth, removals, and mortality estimates for biomass and carbon in FIADB
Jeffery A. Turner
2015-01-01
Traditional growth, removals, and mortality (GRM) estimates produced from Forest Inventory and Analysis (FIA) periodic inventories were limited to changes in volume on timberland. Estimates on forestland were added in the east as the first installment of annual inventory plots was remeasured. The western FIA units have begun annual remeasurement, precipitating the need...
NASA Astrophysics Data System (ADS)
Marinoni, Marianna; Delay, Frederick; Ackerer, Philippe; Riva, Monica; Guadagnini, Alberto
2016-08-01
We investigate the effect of considering reciprocal drawdown curves for the characterization of hydraulic properties of aquifer systems through inverse modeling based on interference well testing. Reciprocity implies that drawdown observed in a well B when pumping takes place from well A should strictly coincide with the drawdown observed in A when pumping in B with the same flow rate as in A. In this context, a critical point related to applications of hydraulic tomography is the assessment of the number of available independent drawdown data and their impact on the solution of the inverse problem. The issue arises when inverse modeling relies upon mathematical formulations of the classical single-continuum approach to flow in porous media grounded on Darcy's law. In these cases, introducing reciprocal drawdown curves in the database of an inverse problem is equivalent, to a certain extent, to duplicating some information. We present a theoretical analysis of the way a least-squares objective function and a Levenberg-Marquardt minimization algorithm are affected by the introduction of reciprocal information in the inverse problem. We also investigate the way these reciprocal data, possibly corrupted by measurement errors, influence model parameter identification in terms of: (a) the convergence of the inverse model, (b) the optimal values of parameter estimates, and (c) the associated estimation uncertainty. Our theoretical findings are exemplified through a suite of computational examples focused on block-heterogeneous systems of increasing complexity. We find that the introduction of noisy reciprocal information in the objective function of the inverse problem has a very limited influence on the optimal parameter estimates. Convergence of the inverse problem improves when adding diverse (nonreciprocal) drawdown series, but does not improve when reciprocal information is added to condition the flow model. The uncertainty on optimal parameter estimates is influenced by the strength of measurement errors and is not significantly diminished or increased by adding noisy reciprocal information.
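The near-redundancy of reciprocal data can be illustrated in the linear least-squares limit: appending a noisy duplicate of every observation barely moves the parameter estimates. This sketch (invented sensitivities, parameters, and noise level) stands in for the nonlinear groundwater inverse problem.

```python
import numpy as np

rng = np.random.default_rng(6)
n, k = 60, 3
X = rng.normal(size=(n, k))                   # sensitivities of drawdown to parameters
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + rng.normal(0, 0.1, n)     # "drawdown" data with measurement noise

b_single, *_ = np.linalg.lstsq(X, y, rcond=None)

# Add a "reciprocal" copy of every observation: identical sensitivities,
# fresh measurement noise -- information is duplicated, not diversified.
y_recip = X @ beta_true + rng.normal(0, 0.1, n)
b_dupl, *_ = np.linalg.lstsq(np.vstack([X, X]),
                             np.concatenate([y, y_recip]), rcond=None)
```

With exactly repeated rows the normal equations scale by two and the optimum is unchanged; with noisy duplicates the shift is on the order of the measurement noise, mirroring the paper's finding that reciprocal data leave the optimal estimates essentially untouched.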
Stallard, Eric; Kinosian, Bruce; Stern, Yaakov
2017-09-20
Alzheimer's disease (AD) progression varies substantially among patients, hindering calculation of residual total life expectancy (TLE) and its decomposition into disability-free life expectancy (DFLE) and disabled life expectancy (DLE) for individual patients with AD. The objective of the present study was to assess the accuracy of a new synthesis of Sullivan's life table (SLT) and longitudinal Grade of Membership (L-GoM) models that estimates individualized TLEs, DFLEs, and DLEs for patients with AD. If sufficiently accurate, such information could enhance the quality of important decisions in AD treatment and patient care. We estimated a new SLT/L-GoM model of the natural history of AD over 10 years in the Predictors 2 Study cohort: N = 229 with 6 fixed and 73 time-varying covariates over 21 examinations covering 11 measurement domains including cognitive, functional, behavioral, psychiatric, and other symptoms/signs. Total remaining life expectancy was censored at 10 years. Disability was defined as need for full-time care (FTC), the outcome most strongly associated with AD progression. All parameters were estimated via weighted maximum likelihood using data-dependent weights designed to ensure that the estimates of the prognostic subtypes were of high quality. Goodness of fit was tested/confirmed for survival and FTC disability for five relatively homogeneous subgroups defined to cover the range of patient outcomes over the 21 examinations. The substantial heterogeneity in initial patient presentation and AD progression was captured using three clinically meaningful prognostic subtypes and one terminal subtype exhibiting highly differentiated symptom severity on 7 of the 11 measurement domains. Comparisons of the observed and estimated survival and FTC disability probabilities demonstrated that the estimates were accurate for all five subgroups, supporting their use in AD life expectancy calculations. Mean 10-year TLE differed widely across subgroups: range 3.6-8.0 years, average 6.1 years. Mean 10-year DFLE differed relatively even more widely across subgroups: range 1.2-6.5 years, average 4.0 years. Mean 10-year DLE was relatively much closer: range 1.5-2.3 years, average 2.1 years. The SLT/L-GoM model yields accurate maximum likelihood estimates of TLE, DFLE, and DLE for patients with AD; it provides a realistic, comprehensive modeling framework for endpoint and resource use/cost calculations.
On Estimating End-to-End Network Path Properties
NASA Technical Reports Server (NTRS)
Allman, Mark; Paxson, Vern
1999-01-01
The more information about current network conditions available to a transport protocol, the more efficiently it can use the network to transfer its data. In networks such as the Internet, the transport protocol must often form its own estimates of network properties based on measurements performed by the connection endpoints. We consider two basic transport estimation problems: determining the setting of the retransmission timer (RTO) for a reliable protocol, and estimating the bandwidth available to a connection as it begins. We look at both of these problems in the context of TCP, using a large TCP measurement set [Pax97b] for trace-driven simulations. For RTO estimation, we evaluate a number of different algorithms, finding that the performance of the estimators is dominated by their minimum values, and to a lesser extent, the timer granularity, while being virtually unaffected by how often round-trip time measurements are made or the settings of the parameters in the exponentially-weighted moving average estimators commonly used. For bandwidth estimation, we explore techniques previously sketched in the literature [Hoe96, AD98] and find that in practice they perform less well than anticipated. We then develop a receiver-side algorithm that performs significantly better.
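The family of EWMA retransmission-timer estimators evaluated here follows the familiar SRTT/RTTVAR scheme (later standardized in RFC 6298). A minimal sketch, with the minimum-RTO floor that the authors found dominates estimator performance:

```python
def rto_updates(rtt_samples, alpha=0.125, beta=0.25, min_rto=1.0, granularity=0.0):
    """EWMA smoothed RTT (SRTT) and variation (RTTVAR), RFC 6298 style."""
    srtt = rttvar = None
    rtos = []
    for rtt in rtt_samples:
        if srtt is None:                      # first sample initialises the estimator
            srtt, rttvar = rtt, rtt / 2
        else:
            rttvar = (1 - beta) * rttvar + beta * abs(srtt - rtt)
            srtt = (1 - alpha) * srtt + alpha * rtt
        rtos.append(max(min_rto, srtt + max(granularity, 4 * rttvar)))
    return rtos
```

With the conventional 1 s floor, steady 100 ms round-trip samples always yield RTO = 1 s: the minimum value, not the EWMA parameters, sets the timer, which illustrates the paper's central finding about these estimators.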
2018-01-01
Background We assessed the cost-effectiveness of the glucagon-like peptide 1 receptor agonists liraglutide 1.8 mg and lixisenatide 20 μg (both added to basal insulin) in patients with type 2 diabetes (T2D) in Sweden. Methods The Swedish Institute for Health Economics cohort model for T2D was used to compare liraglutide and lixisenatide (both added to basal insulin), with a societal perspective and with comparative treatment effects derived by indirect treatment comparison (ITC). Drug prices were 2016 values, and all other costs 2015 values. The cost-effectiveness of IDegLira (fixed-ratio combination of insulin degludec and liraglutide) versus lixisenatide plus basal insulin was also assessed, under different sets of assumptions. Results From the ITC, decreases in HbA1c were –1.32% and –0.43% with liraglutide and lixisenatide, respectively; decreases in BMI were –1.29 and –0.65 kg/m2, respectively. An estimated 2348 cases of retinopathy, 265 of neuropathy and 991 of nephropathy would be avoided with liraglutide compared with lixisenatide in a cohort of 10,000 patients aged over 40 years. In the base-case analysis, total direct costs were higher with liraglutide than lixisenatide, but costs associated with complications were lower. The cost/quality-adjusted life-year (QALY) for liraglutide added to basal insulin was SEK30,802. Base-case findings were robust in sensitivity analyses, except when glycated haemoglobin (HbA1c) differences for liraglutide added to basal insulin were abolished, suggesting these benefits were driving the cost/QALY. With liraglutide 1.2 mg instead of liraglutide 1.8 mg (adjusted for efficacy and cost), liraglutide added to basal insulin was dominant over lixisenatide 20 μg. IDegLira was dominant versus lixisenatide plus basal insulin when a defined daily dose was used in the model. Conclusions The costs/QALY for liraglutide, 1.8 or 1.2 mg, added to basal insulin, and for IDegLira (all compared with lixisenatide 20 μg added to basal insulin) were below the threshold considered low by Swedish authorities. In some scenarios, liraglutide and IDegLira were cost-saving. PMID:29408938
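The cost/QALY figures above are incremental cost-effectiveness ratios: extra cost divided by extra QALYs of one strategy over its comparator. A sketch with invented per-patient inputs (these are not the study's model outputs):

```python
def icer(cost_a, qaly_a, cost_b, qaly_b):
    """Incremental cost-effectiveness ratio of A over B: extra cost per QALY gained."""
    return (cost_a - cost_b) / (qaly_a - qaly_b)

# Hypothetical per-patient lifetime figures (SEK and QALYs), for illustration only
cost_per_qaly = icer(310_000, 9.10, 295_000, 8.61)
```

A strategy is "dominant" when it is both cheaper and yields more QALYs, i.e. the numerator is negative while the denominator is positive, which is the situation reported for liraglutide 1.2 mg and for IDegLira in some scenarios.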
Ericsson, Åsa; Glah, Divina; Lorenzi, Maria; Jansen, Jeroen P; Fridhammar, Adam
2018-01-01
We assessed the cost-effectiveness of the glucagon-like peptide 1 receptor agonists liraglutide 1.8 mg and lixisenatide 20 μg (both added to basal insulin) in patients with type 2 diabetes (T2D) in Sweden. The Swedish Institute for Health Economics cohort model for T2D was used to compare liraglutide and lixisenatide (both added to basal insulin), with a societal perspective and with comparative treatment effects derived by indirect treatment comparison (ITC). Drug prices were 2016 values, and all other costs 2015 values. The cost-effectiveness of IDegLira (fixed-ratio combination of insulin degludec and liraglutide) versus lixisenatide plus basal insulin was also assessed, under different sets of assumptions. From the ITC, decreases in HbA1c were -1.32% and -0.43% with liraglutide and lixisenatide, respectively; decreases in BMI were -1.29 and -0.65 kg/m2, respectively. An estimated 2348 cases of retinopathy, 265 of neuropathy and 991 of nephropathy would be avoided with liraglutide compared with lixisenatide in a cohort of 10,000 patients aged over 40 years. In the base-case analysis, total direct costs were higher with liraglutide than lixisenatide, but costs associated with complications were lower. The cost/quality-adjusted life-year (QALY) for liraglutide added to basal insulin was SEK30,802. Base-case findings were robust in sensitivity analyses, except when glycated haemoglobin (HbA1c) differences for liraglutide added to basal insulin were abolished, suggesting these benefits were driving the cost/QALY. With liraglutide 1.2 mg instead of liraglutide 1.8 mg (adjusted for efficacy and cost), liraglutide added to basal insulin was dominant over lixisenatide 20 μg. IDegLira was dominant versus lixisenatide plus basal insulin when a defined daily dose was used in the model. The costs/QALY for liraglutide, 1.8 or 1.2 mg, added to basal insulin, and for IDegLira (all compared with lixisenatide 20 μg added to basal insulin) were below the threshold considered low by Swedish authorities. In some scenarios, liraglutide and IDegLira were cost-saving.
He, Hong; Cheng, Xiao; Li, Xianglan; Zhu, Renbin; Hui, Fengming; Wu, Wenhui; Zhao, Tiancheng; Kang, Jing; Tang, Jianwu
2017-10-11
Penguin guano provides favorable conditions for production and emission of greenhouse gases (GHGs). Many studies have been conducted to determine the GHG fluxes from penguin colonies; however, at the regional scale there is still no accurate estimation of total GHG emissions. We used an object-based image analysis (OBIA) method to estimate the Adélie penguin (Pygoscelis adeliae) population based on aerial photography data. A model was developed to estimate total GHG emission potential from Adélie penguin colonies during the breeding seasons in 1983 and 2012, respectively. Results indicated that the OBIA method was effective for extracting penguin information from aerial photographs. There were 17,120 and 21,183 Adélie penguin breeding pairs on Inexpressible Island in 1983 and 2012, respectively, with an overall estimation accuracy of 76.8%. The main reasons for the increase in Adélie penguin populations were attributed to increases in temperature, sea ice and phytoplankton. The average estimated CH4 and N2O emissions tended to increase during the period from 1983 to 2012, and CH4 was the main GHG emitted from penguin colonies. The total global warming potential (GWP) of CH4 and N2O emissions was 5303 kg CO2-eq in 1983 and 6561 kg CO2-eq in 2012, respectively.
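Aggregating CH4 and N2O into a single GWP figure is a weighted sum over per-gas masses. A sketch with assumed 100-year GWP factors (IPCC AR4 values; the paper's exact factors and the emission masses below are assumptions, not its reported inputs):

```python
# 100-year global warming potentials, kg CO2-eq per kg of gas (IPCC AR4 values,
# assumed here; the paper's exact factors may differ).
GWP_100 = {"CH4": 25.0, "N2O": 298.0}

def co2_equivalent(emissions_kg):
    """Convert per-gas emissions (kg) into a single kg CO2-eq total."""
    return sum(mass * GWP_100[gas] for gas, mass in emissions_kg.items())

# Hypothetical colony-scale breeding-season emissions, for illustration only
total_co2_eq = co2_equivalent({"CH4": 180.0, "N2O": 7.0})
```

Because N2O's factor is an order of magnitude larger than CH4's, even small N2O fluxes contribute noticeably to the total, which is why both gases appear in the reported CO2-eq budgets.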
Reedy, Jill; Krebs-Smith, Susan M
2008-03-01
The purpose of this research was to compare food-based recommendations and nutrient values of three food guides: the US Department of Agriculture's MyPyramid; the National Heart, Lung, and Blood Institute's Dietary Approaches to Stop Hypertension Eating Plan, and Harvard University's Healthy Eating Pyramid. Estimates of nutrient values associated with following each of the food guides at the 2,000-calorie level were made using a composite approach. This approach calculates population-weighted nutrient composites for each food group and subgroup, assuming average choices within food groups. Nutrient estimates were compared to the Dietary Reference Intakes and other goals and limits. Recommendations were similar regarding almost all food groups for both the type and amount of foods. Primary differences were seen in the types of vegetables and protein sources recommended and the amount of dairy products and total oil recommended. Overall nutrient values were also similar for most nutrients, except vitamin A, vitamin E, and calcium. These food guides were derived from different types of nutrition research, yet they share consistent messages: eat more fruits, vegetables, legumes, and whole grains; eat less added sugar and saturated fat; and emphasize plant oils.
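The composite approach described above is a population-weighted average of nutrient values within each food group. A sketch with invented shares and values (not the study's data):

```python
def composite(shares, nutrient_per_serving):
    """Population-weighted nutrient value for one food group."""
    assert abs(sum(shares) - 1.0) < 1e-9, "consumption shares must sum to 1"
    return sum(w * v for w, v in zip(shares, nutrient_per_serving))

# Hypothetical dairy-group example: calcium (mg) per serving for three
# choices, weighted by assumed population consumption shares.
calcium_composite = composite([0.5, 0.3, 0.2], [300.0, 250.0, 200.0])
```

Summing such composites across all food groups at the recommended servings yields the food guide's nutrient profile, which can then be compared against the Dietary Reference Intakes.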
ERIC Educational Resources Information Center
Spencer, Bryden
2016-01-01
Value-added models are a class of growth models used in education to assign responsibility for student growth to teachers or schools. For value-added models to be used fairly, sufficient statistical precision is necessary for accurate teacher classification. Previous research indicated precision below practical limits. An alternative approach has…
A New Interpretation of Augmented Subscores and Their Added Value in Terms of Parallel Forms
ERIC Educational Resources Information Center
Sinharay, Sandip
2018-01-01
The value-added method of Haberman is arguably one of the most popular methods to evaluate the quality of subscores. The method is based on the classical test theory and deems a subscore to be of added value if the subscore predicts the corresponding true subscore better than does the total score. Sinharay provided an interpretation of the added…
ERIC Educational Resources Information Center
Isenberg, Eric; Hock, Heinrich
2011-01-01
This report presents the value-added models that will be used to measure school and teacher effectiveness in the District of Columbia Public Schools (DCPS) in the 2010-2011 school year. It updates the earlier technical report, "Measuring Value Added for IMPACT and TEAM in DC Public Schools." The earlier report described the methods used…
ERIC Educational Resources Information Center
Goldhaber, Dan
2015-01-01
The past decade has seen a tremendous amount of research on the use of value-added modeling to assess individual teachers, and a significant number of states and districts are now using, or plan to use, value added as a component of a teacher's summative performance evaluation. In this article, I explore the various mechanisms through which the…
ERIC Educational Resources Information Center
McCaffrey, Daniel F.
2012-01-01
Value-added models have caught the interest of policymakers because, unlike using student tests scores for other means of accountability, they purport to "level the playing field." That is, they supposedly reflect only a teacher's effectiveness, not whether she teaches high- or low-income students, for instance, or students in accelerated or…
Markham, Wolfgang A; Young, Robert; Sweeting, Helen; West, Patrick; Aveyard, Paul
2012-07-01
Previous studies found lower substance use in schools achieving better examination and truancy results than expected, given their pupil populations (high value-added schools). This study examines whether these findings are replicated in West Scotland and whether school ethos indicators focussing on pupils' perceptions of schooling (environment, involvement, engagement and teacher-pupil relations) mediate the associations. Teenagers from forty-one schools (S2, aged 13, n = 2268; S4, aged 15, n = 2096) previously surveyed in primary school (aged 11, n = 2482) were surveyed in the late 1990s. School value-added scores were derived from standardised residuals of two regression equations separately predicting from pupils' socio-demographic characteristics (1) proportions of pupils passing five Scottish Standard Grade Examinations, and (2) half-day truancy loss. Outcomes were current smoking, monthly drinking, ever illicit drug use. Random effects logistic regression models adjusted for potential pupil-level confounders were used to assess (1) associations between substance use and school-level value-added scores and (2) whether these associations were mediated by pupils' perceptions of schooling or other school-level factors (school roll, religious denomination and mean aggregated school-level ethos scores). Against expectations, value-added education was positively associated with smoking (Odds Ratios [95% confidence intervals] for one standard deviation increase in value-added scores were 1.28 [1.02-1.61] in S2 and 1.13 [1.00-1.27] in S4) and positively but weakly and non-significantly associated with drinking and drug use. Engagement and positive teacher-pupil relations were strongly and negatively associated with all substance use outcomes at both ages. Other school-level factors appeared weakly and largely non-significantly related to substance use. 
Value-added scores were unrelated to school ethos measures and no ethos measure mediated associations between value-added education and substance use. We conclude that substance use in Scotland is more likely in high value-added schools, among disengaged students and those with poorer student-teacher relationships. Understanding the underpinning mechanisms is a potentially important public health concern. Copyright © 2012 Elsevier Ltd. All rights reserved.
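The score construction described above can be sketched numerically. This is a hypothetical illustration on simulated data (one intake predictor, numpy only), not the study's actual model, which adjusted for several pupil socio-demographic characteristics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: one row per school (the study used 41 schools).
n_schools = 41
deprivation = rng.normal(size=n_schools)           # pupil-intake characteristic
pass_rate = 0.6 - 0.1 * deprivation + rng.normal(scale=0.05, size=n_schools)

# Regress the outcome on the intake characteristic (ordinary least squares).
X = np.column_stack([np.ones(n_schools), deprivation])
beta, *_ = np.linalg.lstsq(X, pass_rate, rcond=None)
residuals = pass_rate - X @ beta

# Value-added score = standardised residual: how much better (or worse) a
# school does than predicted from its intake, in standard-deviation units.
value_added = residuals / residuals.std(ddof=2)
```

A one-standard-deviation increase in `value_added` corresponds to the unit used for the odds ratios reported in the abstract.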
Toto, Tami; Jensen, Michael; Bartholomew, Mary Jane
2012-09-22
The Navigation Best Estimate (NAVBE) VAP was developed in response to the 2012-2013 Marine ARM GPCI Investigation of Clouds (MAGIC) deployment, the first ship-based deployment of the second ARM Mobile Facility (AMF2). It has since been applied to the 2015 ARM Cloud Aerosol Precipitation EXperiment (ACAPEX) deployment. A number of different instruments on the ships collected Global Positioning System (GPS) and Inertial Navigation System (INS) measurements during the MAGIC campaign. The motivation of the NAVBE VAP is to consolidate these many different sources of information into a single, continuous datastream to be used whenever ship location and orientation are required, providing a more complete estimate than would be available from any one instrument. The result is 10-Hz and 1-min datastreams reporting ship position and attitude.
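The consolidation idea can be illustrated with a toy merge. This is not the actual NAVBE algorithm; the instrument names, priority order, and tuple layout below are assumptions made for illustration:

```python
# Illustrative sketch: merge several navigation sources into one
# best-estimate stream by preferring, at each timestamp, the reading
# from the highest-priority instrument that reported.
def merge_nav(sources, timestamps):
    """sources: list of dicts {timestamp: (lat, lon, heading)},
    ordered from most to least trusted."""
    merged = {}
    for t in timestamps:
        for src in sources:
            if t in src:
                merged[t] = src[t]
                break  # fall through to lower-priority sources only on gaps
    return merged

ins = {0: (21.3, -158.0, 90.0), 2: (21.3, -158.1, 91.0)}   # gap at t=1
gps = {0: (21.3, -158.0, 90.2), 1: (21.3, -158.05, 90.5), 2: (21.3, -158.1, 90.9)}
best = merge_nav([ins, gps], timestamps=[0, 1, 2])
# t=1 is filled from the GPS stream because the INS stream has a gap there
```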
Applying the Land Use Portfolio Model with Hazus to analyse risk from natural hazard events
Dinitz, Laura B.; Taketa, Richard A.
2013-01-01
This paper describes and demonstrates the integration of two geospatial decision-support systems for natural-hazard risk assessment and management. Hazus is a risk-assessment tool developed by the Federal Emergency Management Agency to identify risks and estimate the severity of risk from natural hazards. The Land Use Portfolio Model (LUPM) is a risk-management tool developed by the U.S. Geological Survey to evaluate plans or actions intended to reduce risk from natural hazards. We analysed three mitigation policies for one earthquake scenario in the San Francisco Bay area to demonstrate the added value of using Hazus and the LUPM together. The demonstration showed that Hazus loss estimates can be input to the LUPM to obtain estimates of losses avoided through mitigation, rates of return on mitigation investment, and measures of uncertainty. Together, they offer a more comprehensive approach to help with decisions for reducing risk from natural hazards.
Regression analysis of sparse asynchronous longitudinal data.
Cao, Hongyuan; Zeng, Donglin; Fine, Jason P
2015-09-01
We consider estimation of regression models for sparse asynchronous longitudinal observations, where time-dependent responses and covariates are observed intermittently within subjects. Unlike with synchronous data, where the response and covariates are observed at the same time point, with asynchronous data, the observation times are mismatched. Simple kernel-weighted estimating equations are proposed for generalized linear models with either time invariant or time-dependent coefficients under smoothness assumptions for the covariate processes which are similar to those for synchronous data. For models with either time invariant or time-dependent coefficients, the estimators are consistent and asymptotically normal but converge at slower rates than those achieved with synchronous data. Simulation studies evidence that the methods perform well with realistic sample sizes and may be superior to a naive application of methods for synchronous data based on an ad hoc last value carried forward approach. The practical utility of the methods is illustrated on data from a study on human immunodeficiency virus.
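A minimal sketch of the kernel-weighting idea for a time-invariant coefficient model. It is written under simplifying assumptions (Gaussian kernel, covariate held constant within subject so the simulation has a known answer) and is not the authors' exact estimator:

```python
import numpy as np

def kernel_wls(resp_times, resp_vals, cov_times, cov_vals, h):
    """Kernel-weighted least squares for Y(t) ~ b0 + b1 * X(s), where
    response times t and covariate times s are mismatched within subjects.
    Each (response, covariate) pair is down-weighted by a Gaussian kernel
    of the time mismatch, in the spirit of kernel-weighted estimating
    equations for asynchronous data."""
    XtWX = np.zeros((2, 2))
    XtWy = np.zeros(2)
    for t, y, s, x in zip(resp_times, resp_vals, cov_times, cov_vals):
        for tj, yj in zip(t, y):                  # response observations
            for sk, xk in zip(s, x):              # covariate observations
                w = np.exp(-0.5 * ((tj - sk) / h) ** 2)
                v = np.array([1.0, xk])
                XtWX += w * np.outer(v, v)
                XtWy += w * v * yj
    return np.linalg.solve(XtWX, XtWy)

# Simulated check: true model Y = 1 + 2 X + noise, covariate constant
# within subject, so the estimator should recover (1, 2).
rng = np.random.default_rng(1)
resp_times, resp_vals, cov_times, cov_vals = [], [], [], []
for _ in range(200):
    xi = rng.normal()
    resp_times.append(np.sort(rng.uniform(0, 1, 3)))
    cov_times.append(np.sort(rng.uniform(0, 1, 3)))
    cov_vals.append(np.full(3, xi))
    resp_vals.append(1 + 2 * xi + rng.normal(scale=0.1, size=3))

beta = kernel_wls(resp_times, resp_vals, cov_times, cov_vals, h=0.2)
```

The bandwidth `h` controls the bias-variance trade-off that drives the slower convergence rates noted in the abstract.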
Implications of comorbidity on costs for patients with Alzheimer disease.
Kuo, Tzu-Chun; Zhao, Yang; Weir, Sharada; Kramer, Marilyn Schlein; Ash, Arlene S
2008-08-01
No prior studies have used a comprehensive clinical classification system to examine the effect of differences in overall illness burden and the presence of other diseases on costs for patients with Alzheimer disease (AD) when compared with demographically matched nondemented controls. Of a total of 627,775 enrollees who were eligible for medical and pharmacy benefits for 2003 and 2004 in the MarketScan Medicare Supplemental and Coordination of Benefits Database, we found 25,109 AD patients. For each case, 3 demographically matched nondemented controls were selected using propensity scores. Applying the diagnostic cost groups (DCGs) model to all enrollees, 2003 diagnoses were used to estimate prospective relative risk scores (RRSs) that predict 2004 costs from all illness other than AD. RRSs were then used to control for illness burden to estimate AD's independent effect on costs. Compared with the control group, the AD cohort has more comorbid conditions (8.1 vs. 6.5) and higher illness burden (1.23 vs. 1.04). Individuals with AD are more likely to have mental health conditions, neurologic conditions, cognitive disorders, cerebrovascular disease, diabetes with acute complications, and injuries. Annual costs for AD patients are $3567 (34%) higher than for controls. Excess costs attributable to AD, after controlling for non-AD illness burden, are estimated at $2307 per year with outpatient pharmacy being the key driver ($1711 in excess costs). AD patients are sicker and more expensive than demographically matched controls. Even after adjusting for differences in illness burden, costs remain higher for AD patients.
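The adjustment logic, regressing cost on an AD indicator while controlling for the relative risk score, can be sketched on simulated data. All numbers here are invented, loosely echoing the abstract's figures (1:3 matching, burden 1.23 vs. 1.04, roughly $2300 excess cost):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical cohort: 1 AD case per 3 matched controls, with a relative
# risk score (RRS) summarising non-AD illness burden.
n = 4000
ad = np.repeat([1, 0], [n // 4, 3 * n // 4])
rrs = 1.04 + 0.19 * ad + rng.gamma(2.0, 0.1, size=n)
cost = 5000 * rrs + 2300 * ad + rng.normal(0, 500, size=n)

# The unadjusted comparison mixes AD's effect with the extra comorbidity.
unadjusted = cost[ad == 1].mean() - cost[ad == 0].mean()

# Regressing cost on AD *and* RRS isolates AD's independent effect,
# analogous in spirit to the paper's DCG/RRS adjustment.
X = np.column_stack([np.ones(n), ad, rrs])
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
adjusted = beta[1]      # recovers the simulated ~2300 excess cost
```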
Atmospheric Turbulence Estimates from a Pulsed Lidar
NASA Technical Reports Server (NTRS)
Pruis, Matthew J.; Delisi, Donald P.; Ahmad, Nash'at N.; Proctor, Fred H.
2013-01-01
Estimates of the eddy dissipation rate (EDR) were obtained from measurements made by a coherent pulsed lidar and compared with estimates from mesoscale model simulations and measurements from an in situ sonic anemometer at the Denver International Airport and with EDR estimates from the last observation time of the trailing vortex pair. The estimates of EDR from the lidar were obtained using two different methodologies. The two methodologies show consistent estimates of the vertical profiles. Comparison of EDR derived from the Weather Research and Forecast (WRF) mesoscale model with the in situ lidar estimates show good agreement during the daytime convective boundary layer, but the WRF simulations tend to overestimate EDR during the nighttime. The EDR estimates from a sonic anemometer located at 7.3 meters above ground level are approximately one order of magnitude greater than both the WRF and lidar estimates - which are from greater heights - during the daytime convective boundary layer and substantially greater during the nighttime stable boundary layer. The consistency of the EDR estimates from different methods suggests a reasonable ability to predict the temporal evolution of a spatially averaged vertical profile of EDR in an airport terminal area using a mesoscale model during the daytime convective boundary layer. In the stable nighttime boundary layer, there may be added value to EDR estimates provided by in situ lidar measurements.
Mapping apparent stress and energy radiation over fault zones of major earthquakes
McGarr, A.; Fletcher, Joe B.
2002-01-01
Using published slip models for five major earthquakes, 1979 Imperial Valley, 1989 Loma Prieta, 1992 Landers, 1994 Northridge, and 1995 Kobe, we produce maps of apparent stress and radiated seismic energy over their fault surfaces. The slip models, obtained by inverting seismic and geodetic data, entail the division of the fault surfaces into many subfaults for which the time histories of seismic slip are determined. To estimate the seismic energy radiated by each subfault, we measure the near-fault seismic-energy flux from the time-dependent slip there and then multiply by a function of rupture velocity to obtain the corresponding energy that propagates into the far-field. This function, the ratio of far-field to near-fault energy, is typically less than 1/3, inasmuch as most of the near-fault energy remains near the fault and is associated with permanent earthquake deformation. Adding the energy contributions from all of the subfaults yields an estimate of the total seismic energy, which can be compared with independent energy estimates based on seismic-energy flux measured in the far-field, often at teleseismic distances. Estimates of seismic energy based on slip models are robust, in that different models, for a given earthquake, yield energy estimates that are in close agreement. Moreover, the slip-model estimates of energy are generally in good accord with independent estimates by others, based on regional or teleseismic data. Apparent stress is estimated for each subfault by dividing the corresponding seismic moment into the radiated energy. Distributions of apparent stress over an earthquake fault zone show considerable heterogeneity, with peak values that are typically about double the whole-earthquake values (based on the ratio of seismic energy to seismic moment). 
The range of apparent stresses estimated for subfaults of the events studied here is similar to the range of apparent stresses for earthquakes in continental settings, with peak values of about 8 MPa in each case. For earthquakes in compressional tectonic settings, peak apparent stresses at a given depth are substantially greater than corresponding peak values from events in extensional settings; this suggests that crustal strength, inferred from laboratory measurements, may be a limiting factor. Lower bounds on shear stresses inferred from the apparent stress distribution of the 1995 Kobe earthquake are consistent with tectonic-stress estimates reported by Spudich et al. (1998), based partly on slip-vector rake changes.
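The apparent-stress calculation itself is a one-liner. A hedged worked example follows; the shear modulus and the energy/moment figures are typical illustrative values, not numbers taken from the slip models:

```python
# Apparent stress sigma_a = mu * E_s / M0: radiated seismic energy scaled
# by crustal rigidity and divided by seismic moment.
MU = 3.0e10   # shear modulus of crustal rock, Pa (typical assumed value)

def apparent_stress(radiated_energy_j, seismic_moment_nm):
    return MU * radiated_energy_j / seismic_moment_nm

# A whole-earthquake example: E_s = 1e14 J, M0 = 1e19 N*m gives 0.3 MPa;
# peak subfault values would then be roughly double, per the abstract.
sigma_a = apparent_stress(1.0e14, 1.0e19)   # 3.0e5 Pa = 0.3 MPa
```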
NASA Astrophysics Data System (ADS)
Li, Hui; He, Huizhong; Chen, Xiaoling; Zhang, Lihua
2008-12-01
The C factor, known as the cover and management factor in the USLE, is one of the most important factors because it represents the combined effects of plant cover, soil cover, and management on erosion; it is also among the variables most easily changed by human activity, and it is time-variant and inherently uncertain. It is therefore vital to compute the C factor properly in order to model erosion effectively. In this paper we present a new method for calculating the C value using a Vegetation Index (VI) derived from multi-temporal MODIS imagery, which can estimate the C factor in a more rigorous way. Based on the strong correlation between the C factor and VI, the average annual C value is estimated by summing the VI values of three growth phases within a year with different weights. The Modified Fournier Index (MFI) is employed to determine the weight of each growth phase, because vegetation growth and agricultural activities are significantly influenced by precipitation. The C values generated by the proposed method were compared with those of another method, and the comparison showed that our results are highly correlated with the others. This study helps extract the C value from satellite data in a scientific and efficient way, which in turn could facilitate the prediction of erosion.
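A sketch of the MFI-weighted averaging idea: the monthly precipitation, phase boundaries, and per-phase C values below are invented for illustration, and the exact weighting scheme in the paper may differ:

```python
import numpy as np

# MFI = sum(p_i**2) / P over months i, with P the annual precipitation.
monthly_precip = np.array([20, 25, 40, 60, 90, 120,
                           150, 140, 100, 60, 30, 20], dtype=float)
P = monthly_precip.sum()
mfi_monthly = monthly_precip ** 2 / P

# Three growth phases, e.g. months 1-4, 5-8, 9-12; each phase's weight is
# its share of the MFI, so rainy (erosive) phases count more.
phases = [slice(0, 4), slice(4, 8), slice(8, 12)]
weights = np.array([mfi_monthly[s].sum() for s in phases])
weights /= weights.sum()                      # phase weights sum to 1

c_per_phase = np.array([0.30, 0.12, 0.20])    # C estimated from VI per phase
c_annual = float(weights @ c_per_phase)       # MFI-weighted annual C factor
```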
Gray Infrastructure Tool | EPA Center for Exposure ...
2016-03-07
Natural channel with flood plain panel added. A default depth increment of 0.5 is used for Natural Channel with FP. Units option added (SI or US units); the default option is US units. Conversion options added wherever necessary. Additional options added to FTABLE, such as clear FTABLE. Significant digits for FTABLE calculations changed to 4. Previously a default Cd value was used for calculations (under-drain and riser), but now a user-defined value is used. Default values of Cd for the riser orifice and under-drain textboxes changed to 0.6. Previously a default increment value of 0.1 was used for all channel panels, but now the user can specify the increment.
Simpson, Emma; Hock, Emma; Stevenson, Matt; Wong, Ruth; Dracup, Naila; Wailoo, Allan; Conaghan, Philip; Estrach, Cristina; Edwards, Christopher; Wakefield, Richard
2018-04-01
Synovitis (inflamed joint synovial lining) in rheumatoid arthritis (RA) can be assessed by clinical examination (CE) or ultrasound (US). To investigate the added value of US, compared with CE alone, in RA synovitis in terms of clinical effectiveness and cost-effectiveness. Electronic databases including MEDLINE, EMBASE and the Cochrane databases were searched from inception to October 2015. A systematic review sought RA studies that compared additional US with CE. Heterogeneity of the studies with regard to interventions, comparators and outcomes precluded meta-analyses. Systematic searches for studies of cost-effectiveness and US and treatment-tapering studies (not necessarily including US) were undertaken. A model was constructed that estimated, for patients in whom drug tapering was considered, the reduction in costs of disease-modifying anti-rheumatic drugs (DMARDs) and serious infections at which the addition of US had a cost per quality-adjusted life-year (QALY) gained of £20,000 and £30,000. Furthermore, the reduction in the costs of DMARDs at which US becomes cost neutral was also estimated. For patients in whom dose escalation was being considered, the reduction in number of patients escalating treatment and in serious infections at which the addition of US had a cost per QALY gained of £20,000 and £30,000 was estimated. The reduction in number of patients escalating treatment for US to become cost neutral was also estimated. Fifty-eight studies were included. Two randomised controlled trials compared adding US to a Disease Activity Score (DAS)-based treat-to-target strategy for early RA patients. The addition of power Doppler ultrasound (PDUS) to a Disease Activity Score 28 joints-based treat-to-target strategy in the Targeting Synovitis in Early Rheumatoid Arthritis (TaSER) trial resulted in no significant between-group difference for change in Disease Activity Score 44 joints (DAS44). 
This study found that significantly more patients in the PDUS group attained DAS44 remission (p = 0.03). The Aiming for Remission in Rheumatoid Arthritis (ARCTIC) trial found that the addition of PDUS and grey-scale ultrasound (GSUS) to a DAS-based strategy did not produce a significant between-group difference in the primary end point: composite DAS of < 1.6, no swollen joints and no progression in van der Heijde-modified total Sharp score (vdHSS). The ARCTIC trial did find that the erosion score of the vdHSS showed a significant advantage for the US group (p = 0.04). In the TaSER trial there was no significant group difference for erosion. Other studies suggested that PDUS was significantly associated with radiographic progression and that US had added value for wrist and hand joints rather than foot and ankle joints. Heterogeneity between trials made conclusions uncertain. No studies were identified that reported the cost-effectiveness of US in monitoring synovitis. The model estimated that an average reduction of 2.5% in the costs of biological DMARDs would be sufficient to offset the costs of 3-monthly US. The money could not be recouped if oral methotrexate was the only drug used. Heterogeneity of the trials precluded meta-analysis; therefore, no summary estimates of effect were available. Additional costs and health-related quality-of-life decrements, relating to a flare following tapering or disease progression, have not been included. The feasibility of increased US monitoring has not been assessed. Limited evidence suggests that US monitoring of synovitis could provide a cost-effective approach to selecting RA patients for treatment tapering or escalation avoidance. Considerable uncertainty exists for all conclusions. Future research priorities include evaluating US monitoring of RA synovitis in longitudinal clinical studies. This study is registered as PROSPERO CRD42015017216.
The National Institute for Health Research Health Technology Assessment programme.
Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D
2002-07-01
Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
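The repeated-reconstruction evaluation can be mimicked on a toy linear inverse problem. The forward operator, Tikhonov regularization, and noise level below are stand-ins for the actual NIR diffusion model, chosen only to show the bias-variance split of image MSE:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy "property image" and forward operator (illustrative only).
true_image = np.array([1.0, 2.0, 1.5, 0.5])
A = rng.normal(size=(8, 4))
alpha = 0.5                                     # regularization parameter

estimates = []
for _ in range(100):                            # 100 repeated reconstructions
    data = A @ true_image + rng.normal(scale=0.1, size=8)
    # Tikhonov-regularized least squares, a stand-in for the NIR inversion.
    x = np.linalg.solve(A.T @ A + alpha * np.eye(4), A.T @ data)
    estimates.append(x)

estimates = np.array(estimates)
bias2 = ((estimates.mean(axis=0) - true_image) ** 2).mean()
var = estimates.var(axis=0).mean()
mse = ((estimates - true_image) ** 2).mean()    # equals bias2 + var
```

Raising `alpha` shifts error from the variance term into the bias term, which is the trade-off the abstract describes.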
Comparison of molecular breeding values based on within- and across-breed training in beef cattle
2013-01-01
Background Although the efficacy of genomic predictors based on within-breed training looks promising, it is necessary to develop and evaluate across-breed predictors for the technology to be fully applied in the beef industry. The efficacies of genomic predictors trained in one breed and utilized to predict genetic merit in differing breeds based on simulation studies have been reported, as have the efficacies of predictors trained using data from multiple breeds to predict the genetic merit of purebreds. However, comparable studies using beef cattle field data have not been reported. Methods Molecular breeding values for weaning and yearling weight were derived and evaluated using a database containing BovineSNP50 genotypes for 7294 animals from 13 breeds in the training set and 2277 animals from seven breeds (Angus, Red Angus, Hereford, Charolais, Gelbvieh, Limousin, and Simmental) in the evaluation set. Six single-breed and four across-breed genomic predictors were trained using pooled data from purebred animals. Molecular breeding values were evaluated using field data, including genotypes for 2227 animals and phenotypic records of animals born in 2008 or later. Accuracies of molecular breeding values were estimated based on the genetic correlation between the molecular breeding value and trait phenotype. Results With one exception, the estimated genetic correlations of within-breed molecular breeding values with trait phenotype were greater than 0.28 when evaluated in the breed used for training. Most estimated genetic correlations for the across-breed trained molecular breeding values were moderate (> 0.30). When molecular breeding values were evaluated in breeds that were not in the training set, estimated genetic correlations clustered around zero. Conclusions Even for closely related breeds, within- or across-breed trained molecular breeding values have limited prediction accuracy for breeds that were not in the training set. 
For breeds in the training set, across- and within-breed trained molecular breeding values had similar accuracies. The benefit of adding data from other breeds to a within-breed training population is the ability to produce molecular breeding values that are more robust across breeds and these can be utilized until enough training data has been accumulated to allow for a within-breed training set. PMID:23953034
1978-04-15
12. (Part 2 of 2) 70 B1 Calculate Revised Allocation Error Estimates For Each Attribute Category. Skip Change of Multipliers - No. D-Do For All A...passing on to the next target, the current value of the target weight is revised. After every two to four targets, the Lagrange multipliers are...delete a weapon, a new set of variables is delivered by WADOUT, and STALL uses this revised information to decide whether more weapons should be added
Statistical models for causation: what inferential leverage do they provide?
Freedman, David A
2006-12-01
Experiments offer more reliable evidence on causation than observational studies, which is not to gainsay the contribution to knowledge from observation. Experiments should be analyzed as experiments, not as observational studies. A simple comparison of rates might be just the right tool, with little value added by "sophisticated" models. This article discusses current models for causation, as applied to experimental and observational data. The intention-to-treat principle and the effect of treatment on the treated will also be discussed. Flaws in per-protocol and treatment-received estimates will be demonstrated.
Dibble, Kimberly L.; Yard, Micheal D.; Ward, David L.; Yackulic, Charles B.
2017-01-01
Bioelectrical impedance analysis (BIA) is a nonlethal tool with which to estimate the physiological condition of animals that has potential value in research on endangered species. However, the effectiveness of BIA varies by species, the methodology continues to be refined, and incidental mortality rates are unknown. Under laboratory conditions we tested the value of using BIA in addition to morphological measurements such as total length and wet mass to estimate proximate composition (lipid, protein, ash, water, dry mass, energy density) in the endangered Humpback Chub Gila cypha and Bonytail G. elegans and the species of concern Roundtail Chub G. robusta and conducted separate trials to estimate the mortality rates of these sensitive species. Although Humpback and Roundtail Chub exhibited no or low mortality in response to taking BIA measurements versus handling for length and wet-mass measurements, Bonytails exhibited 14% and 47% mortality in the BIA and handling experiments, respectively, indicating that survival following stress is species specific. Derived BIA measurements were included in the best models for most proximate components; however, the added value of BIA as a predictor was marginal except in the absence of accurate wet-mass data. Bioelectrical impedance analysis improved the R2 of the best percentage-based models by no more than 4% relative to models based on morphology. Simulated field conditions indicated that BIA models became increasingly better than morphometric models at estimating proximate composition as the observation error around wet-mass measurements increased. However, since the overall proportion of variance explained by percentage-based models was low and BIA was mostly a redundant predictor, we caution against the use of BIA in field applications for these sensitive fish species.
Evaluation of fetal anthropometric measures to predict the risk for shoulder dystocia.
Burkhardt, T; Schmidt, M; Kurmanavicius, J; Zimmermann, R; Schäffer, L
2014-01-01
To evaluate the quality of anthropometric measures to improve the prediction of shoulder dystocia by combining different sonographic biometric parameters. This was a retrospective cohort study of 12,794 vaginal deliveries with complete sonographic biometry data obtained within 7 days before delivery. Receiver-operating characteristics (ROC) curves of various combinations of the biometric parameters, namely, biparietal diameter (BPD), occipitofrontal diameter (OFD), head circumference, abdominal diameter (AD), abdominal circumference (AC) and femur length were analyzed. The influences of independent risk factors were calculated and their combination used in a predictive model. The incidence of shoulder dystocia was 1.14%. Different combinations of sonographic parameters showed comparable ROC curves without advantage for a particular combination. The difference between AD and BPD (AD - BPD) (area under the curve (AUC) = 0.704) revealed a significant increase in risk (odds ratio (OR) 7.6 (95% CI 4.2-13.9), sensitivity 8.2%, specificity 98.8%) at a suggested cut-off ≥ 2.6 cm. However, the positive predictive value (PPV) was low (7.5%). The AC as a single parameter (AUC = 0.732) with a cut-off ≥ 35 cm performed worse (OR 4.6 (95% CI 3.3-6.5), PPV 2.6%). BPD/OFD (a surrogate for fetal cranial shape) was not significantly different between those with and those without shoulder dystocia. The combination of estimated fetal weight, maternal diabetes, gender and AD - BPD provided a reasonable estimate of the individual risk. Sonographic fetal anthropometric measures appear not to be a useful tool to screen for the risk of shoulder dystocia due to a low PPV. However, AD - BPD appears to be a relevant risk factor. While risk stratification including different known risk factors may aid in counseling, shoulder dystocia cannot effectively be predicted. Copyright © 2013 ISUOG. Published by John Wiley & Sons Ltd.
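The low PPV follows directly from Bayes' rule given the abstract's sensitivity, specificity, and 1.14% incidence: at such a low prevalence, false positives swamp true positives even at 98.8% specificity.

```python
# PPV = (sens * prev) / (sens * prev + (1 - spec) * (1 - prev)).
# Sensitivity, specificity, and prevalence are taken from the abstract.
def ppv(sens, spec, prev):
    tp = sens * prev                 # true-positive rate in the population
    fp = (1 - spec) * (1 - prev)     # false-positive rate in the population
    return tp / (tp + fp)

risk = ppv(sens=0.082, spec=0.988, prev=0.0114)   # ~0.073, near the reported 7.5%
```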
ERIC Educational Resources Information Center
What Works Clearinghouse, 2012
2012-01-01
This study examined whether being taught by a teacher with a high "value-added" improves a student's long-term outcomes. The study analyzed more than 20 years of data for nearly one million fourth- through eighth-grade students in a large urban school district. The study reported that having a teacher with a higher level of value-added was…
Heating and thermal squeezing in parametrically driven oscillators with added noise.
Batista, Adriano A
2012-11-01
In this paper we report a theoretical model based on Green's functions, Floquet theory, and averaging techniques up to second order that describes the dynamics of parametrically driven oscillators with added thermal noise. Quantitative estimates for heating and quadrature thermal noise squeezing near and below the transition line of the first parametric instability zone of the oscillator are given. Furthermore, we give an intuitive explanation as to why heating and thermal squeezing occur. For small amplitudes of the parametric pump the Floquet multipliers are complex conjugates of each other with a constant magnitude. As the pump amplitude is increased past a threshold value in the stable zone near the first parametric instability, the two Floquet multipliers become real and have different magnitudes. This creates two different effective dissipation rates (one smaller and the other larger than the real dissipation rate) along the stable manifolds of the first-return Poincaré map. We also show that the statistical average of the input power due to thermal noise is constant and independent of the pump amplitude and frequency. The combination of these effects causes most of the heating and thermal squeezing. Very good agreement between analytical and numerical estimates of the thermal fluctuations is achieved.
Pesticide bioconcentration modelling for fruit trees.
Paraíba, Lourival Costa
2007-01-01
The model presented allows simulating the evolution of pesticide concentration in fruit trees and estimating the pesticide bioconcentration factor in fruits. Pesticides are non-ionic organic compounds that are degraded in soils cropped with woody species, fruit trees, and other perennials. The model allows estimating the pesticide uptake by plants through the water transpiration stream, and also the time at which the maximum pesticide concentration occurs in the fruits. The proposed equation relates the bioconcentration factor (BCF) to the following variables: plant water transpiration volume (Q), pesticide transpiration stream concentration factor (TSCF), pesticide stem-water partition coefficient (K(Wood,W)), stem dry biomass (M), and pesticide dissipation rate in the soil-plant system (k(EGS)). The model was developed from a previous model, the "Fruit Tree Model" (FTM) reported by Trapp and collaborators in 2003, to which was added the hypothesis that pesticide degradation in the soil follows first-order kinetics. The FTM model for pesticides (FTM-p) was applied to a hypothetical mango (Mangifera indica) crop treated with paclobutrazol (a growth regulator) added to the soil. The model's fitness was evaluated through sensitivity analysis of the pesticide BCF values in fruits with respect to the variability of the model input data.
Factors associated with added sugars intake among adolescents living in São Paulo, Brazil.
Colucci, Ana Carolina A; Cesar, Chester L G; Marchioni, Dirce M L; Fisberg, Regina M
2012-08-01
To measure added sugars intake among adolescents and describe its demographic, socioeconomic, and nutritional status determinants. The study was conducted based on a household survey carried out between March and December 2003. Food intake was assessed through 24-hour food recalls, and an adjustment approach was applied using external variance estimates derived from 195 adolescents of the same age in 2007. Population-based cross-sectional study, city of São Paulo, Brazil. Seven hundred and ninety-three male (n = 410) and female (n = 383) adolescents aged 10-19 years. MEASURE OF OUTCOME: Foods with greater contributions toward the added sugars intake were identified. Multiple linear regression analysis was performed, with calories from added sugars as the dependent continuous variable and the remaining factors (socioeconomic, demographic, lifestyle, household condition, and food intake) as independent variables. The average contribution of added sugars to total energy value was 12.28% (95% confidence interval [CI]: 11.87-12.70) with no statistically significant sex difference (p > 0.05). Soft drinks were a major source of added sugars among the adolescents (34.2% among males and 32.0% among females), followed by sugars (sucrose and honey) and chocolate powder (around 11%). In the multiple linear regression analysis, the head of household's education level and calories from protein, fats, and carbohydrates other than sugars had an independent effect on added sugars intake. This study showed that the percentage contribution of added sugars to energy intake among adolescents in the city of São Paulo, Brazil, was above the current recommended levels. Socioeconomic condition (represented by the head of the household's education level) and macronutrient intake were shown to be determinants of sugars intake.
Russo, María J; Campos, Jorge; Vázquez, Silvia; Sevlever, Gustavo; Allegri, Ricardo F
2017-01-01
Background: Ongoing research is focusing on the identification of those individuals with mild cognitive impairment (MCI) who are most likely to convert to Alzheimer's disease (AD). We investigated whether recognition memory tasks in combination with a delayed recall measure of episodic memory and CSF biomarkers can predict MCI to AD conversion at 24-month follow-up. Methods: A total of 397 amnestic-MCI subjects from the Alzheimer's Disease Neuroimaging Initiative were included. Logistic regression modeling was done to assess the predictive value of all RAVLT measures, risk factors such as age, sex, education, APOE genotype, and CSF biomarkers for progression to AD. Estimating adjusted odds ratios was used to determine which variables would produce an optimal predictive model, and whether adding tests of interaction between the RAVLT Delayed Recall and recognition measures (traditional score and d-prime) would improve prediction of the conversion from a-MCI to AD. Results: 112 (28.2%) subjects developed dementia and 285 (71.8%) subjects did not. Of all the included variables, CSF Aβ1-42 levels, RAVLT Delayed Recall, and the combination of RAVLT Delayed Recall and d-prime were predictive of progression to AD (χ2 = 38.23, df = 14, p < 0.001). Conclusions: The combination of RAVLT Delayed Recall and d-prime measures may be a predictor of conversion from MCI to AD in the ADNI cohort, especially in combination with amyloid biomarkers. A predictive model to help identify individuals at risk for dementia should include not only traditional episodic memory measures (delayed recall or recognition), but also additional variables (d-prime) that allow the homogenization of the assessment procedures in the diagnosis of MCI.
Effect of Mild Cognitive Impairment and Alzheimer Disease on Auditory Steady-State Responses
Shahmiri, Elaheh; Jafari, Zahra; Noroozian, Maryam; Zendehbad, Azadeh; Haddadzadeh Niri, Hassan; Yoonessi, Ali
2017-01-01
Introduction: Mild Cognitive Impairment (MCI), a disorder of elderly people, is difficult to diagnose and often progresses to Alzheimer Disease (AD). The temporal region is among the first areas impaired in the early stage of AD; therefore, auditory cortical evoked potentials could be a valuable neuromarker for detecting MCI and AD. Methods: In this study, the thresholds of the Auditory Steady-State Response (ASSR) at 40 Hz and 80 Hz modulation rates were compared between Alzheimer Disease (AD), MCI, and control groups. A total of 42 participants (12 with AD, 15 with MCI, and 15 elderly normal controls) were tested for ASSR. Hearing thresholds at 500, 1000, and 2000 Hz in both ears with modulation rates of 40 and 80 Hz were obtained. Results: In normal subjects, significant differences in estimated ASSR thresholds were observed between the 2 modulation rates at all 3 frequencies in both ears. However, the difference was significant only at 500 Hz in the MCI group, and no significant differences were observed in the AD group. In addition, significant differences were observed between the normal subjects and AD patients with regard to the estimated ASSR thresholds at the 2 modulation rates and 3 frequencies in both ears. A significant difference was also observed between the normal and MCI groups at 2000 Hz. An increase in estimated 40 Hz ASSR thresholds in patients with AD and MCI suggests neural changes in the auditory cortex relative to normal ageing. Conclusion: Auditory threshold estimation with low and high modulation rates by the ASSR test could be a potentially helpful test for detecting cognitive impairment. PMID:29158880
Mayer, Flavia; Di Pucchio, Alessandra; Lacorte, Eleonora; Bacigalupo, Ilaria; Marzolini, Fabrizio; Ferrante, Gianluigi; Minardi, Valentina; Masocco, Maria; Canevelli, Marco; Di Fiandra, Teresa; Vanacore, Nicola
2018-01-01
Up to 53.7% of all cases of dementia are assumed to be due to Alzheimer disease (AD), while 15.8% are considered to be due to vascular dementia (VaD). In Europe, about 3 million cases of AD could be due to 7 potentially modifiable risk factors: diabetes, midlife hypertension and/or obesity, physical inactivity, depression, smoking, and low educational level. To estimate the number of VaD cases in Europe and the number of AD and VaD cases in Italy attributable to these 7 potentially modifiable risk factors. Assuming the nonindependence of the 7 risk factors, the adjusted combined population attributable risk (PAR) was estimated for AD and VaD. In Europe, adjusted combined PAR was 31.4% for AD and 37.8% for VaD. The total number of attributable cases was 3,033,000 for AD and 873,000 for VaD. In Italy, assuming a 20% reduction of the prevalence of each risk factor, adjusted combined PAR decreased from 45.2 to 38.9% for AD and from 53.1 to 46.6% for VaD, implying a 6.4 and 6.5% reduction in the prevalence of AD and VaD, respectively. A relevant reduction of AD and VaD cases in Europe and Italy could be obtained through primary prevention.
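The attributable-fraction arithmetic behind these figures can be sketched as follows, assuming Levin's formula for each factor and the usual multiplicative combination for non-independent factors (the paper's exact communality adjustment is not reproduced here):

```python
def levin_par(prevalence, relative_risk):
    # Levin's population attributable risk for a single risk factor
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

def combined_par(individual_pars):
    # Combined PAR: one minus the product of each factor's unattributable share,
    # which avoids double counting cases attributable to more than one factor
    remaining = 1.0
    for p in individual_pars:
        remaining *= (1.0 - p)
    return 1.0 - remaining
```

For example, two factors each with a PAR of 0.5 combine to 0.75 rather than 1.0, which is why the seven factors sum to well under 100% here.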
Estimate of body composition by Hume's equation: validation with DXA.
Carnevale, Vincenzo; Piscitelli, Pamela Angela; Minonne, Rita; Castriotta, Valeria; Cipriani, Cristiana; Guglielmi, Giuseppe; Scillitani, Alfredo; Romagnoli, Elisabetta
2015-05-01
We investigated how Hume's equation, using the antipyrine space, performs in estimating fat mass (FM) and lean body mass (LBM). In 100 (40 male and 60 female) subjects, we estimated FM and LBM by the equation and compared these values with those measured by a last-generation DXA device. The correlation coefficient between measured and estimated FM was r = 0.940 (p < 0.0001), and between measured and estimated LBM it was r = 0.913 (p < 0.0001). The Bland-Altman plots demonstrated a fair agreement between estimated and measured FM and LBM, though the equation underestimated FM and overestimated LBM relative to DXA. The mean difference for FM was 1.40 kg (limits of agreement -6.54 and 8.37 kg). For LBM, the mean difference relative to DXA was 1.36 kg (limits of agreement -8.26 and 6.52 kg). The root mean square error was 3.61 kg for FM and 3.56 kg for LBM. Our results show that in clinically stable subjects Hume's equation can reliably assess body composition, as the estimated FM and LBM approached those measured by a modern DXA device.
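The agreement statistics quoted above follow the standard Bland-Altman construction: the bias (mean difference) and limits of agreement at bias ± 1.96 SD of the differences. A minimal sketch with toy data, not the study's measurements:

```python
import statistics

def bland_altman(measured, estimated):
    # Per-subject differences between the two methods
    diffs = [m - e for m, e in zip(measured, estimated)]
    bias = statistics.mean(diffs)   # mean difference (systematic offset)
    sd = statistics.stdev(diffs)    # sample SD of the differences
    # 95% limits of agreement
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

Plotting each subject's difference against the pair's mean, with these three horizontal lines, reproduces the plots the abstract describes.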
Boogerd, Emiel A; Damhuis, Anouk M A; van Alfen-van der Velden, Janiëlle A A E m; Steeghs, Marley C C H; Noordam, Cees; Verhaak, Chris M; Vermaes, Ignace P R
2015-08-01
To investigate the assessment of psychosocial problems in children with type 1 diabetes by means of clinical estimations made by nurses and paediatricians and by using standardised questionnaires. Although children with type 1 diabetes and their parents show an increased risk for psychosocial problems, standardised assessment of these problems is lacking in diabetes care. By comparing these different modes of assessment, using a cross-sectional design, information about the additional value of using standardised questionnaires is provided. Participants were 110 children with type 1 diabetes (aged 4-16), their parents, and healthcare professionals. Children filled out the Strengths and Difficulties Questionnaire and the Paediatric Quality of Life Inventory, Diabetes Module. Parents filled out the Strengths and Difficulties Questionnaire parent-report and the Parenting Stress Index. Independently, nurses and paediatricians filled out a short questionnaire, which assessed their clinical estimations of the children's psychosocial problems and quality of life, and parents' levels of parenting stress. Reports of children and parents were compared to clinical estimations. Children in our sample showed more psychosocial problems and lower health-related quality of life than their healthy peers. In approximately half of the children, dichotomous estimations by healthcare professionals and dichotomised reports by patients and parents were in agreement. In 10% of the children, no psychosocial problems were present according to professionals' estimations, although patients and parents reported psychosocial problems. In 40%, psychosocial problems were present according to professionals' estimations, although parents and patients did not report psychosocial problems. Children with type 1 diabetes show more psychosocial problems than healthy children. Professionals seem to tend towards overestimating psychosocial problems. Extending the assessment of psychosocial problems with routine screening on patient-reported outcomes, using validated questionnaires, could be of additional value in tailoring care to the needs of the individual child and parents. © 2015 John Wiley & Sons Ltd.
19 CFR 152.105 - Deductive value.
Code of Federal Regulations, 2010 CFR
2010-04-01
... its importation, and any Federal excise tax on, or measured by the value of, the merchandise for which... paragraph (c)(3) of this section, the value added by the processing of the merchandise after importation to...) of this section, deductions made for the value added by that processing will be based on objective...
Moradi, Elaheh; Hallikainen, Ilona; Hänninen, Tuomo; Tohka, Jussi
2017-01-01
Rey's Auditory Verbal Learning Test (RAVLT) is a powerful neuropsychological tool for testing episodic memory, which is widely used for cognitive assessment in dementia and pre-dementia conditions. Several studies have shown that impairment in RAVLT scores reflects well the underlying pathology caused by Alzheimer's disease (AD), thus making RAVLT an effective early marker to detect AD in persons with memory complaints. We investigated the association between RAVLT scores (RAVLT Immediate and RAVLT Percent Forgetting) and the structural brain atrophy caused by AD. The aim was to comprehensively study to what extent the RAVLT scores are predictable based on structural magnetic resonance imaging (MRI) data using machine learning approaches, as well as to find the most important brain regions for the estimation of RAVLT scores. For this, we built a predictive model to estimate RAVLT scores from gray matter density via an elastic net penalized linear regression model. The proposed approach provided a highly significant cross-validated correlation between the estimated and observed RAVLT Immediate (R = 0.50) and RAVLT Percent Forgetting (R = 0.43) in a dataset consisting of 806 AD, mild cognitive impairment (MCI) or healthy subjects. In addition, the selected machine learning method provided more accurate estimates of RAVLT scores than the relevance vector regression used earlier for the estimation of RAVLT based on MRI data. The top predictors were medial temporal lobe structures and amygdala for the estimation of RAVLT Immediate, and angular gyrus, hippocampus and amygdala for the estimation of RAVLT Percent Forgetting. Further, the conversion of MCI subjects to AD within 3 years could be predicted based on either observed or estimated RAVLT scores with an accuracy comparable to MRI-based biomarkers.
Evaluation of nitrous acid sources and sinks in urban outflow
NASA Astrophysics Data System (ADS)
Gall, Elliott T.; Griffin, Robert J.; Steiner, Allison L.; Dibb, Jack; Scheuer, Eric; Gong, Longwen; Rutter, Andrew P.; Cevik, Basak K.; Kim, Saewung; Lefer, Barry; Flynn, James
2016-02-01
Intensive air quality measurements made from June 22-25, 2011 in the outflow of the Dallas-Fort Worth (DFW) metropolitan area are used to evaluate nitrous acid (HONO) sources and sinks. A two-layer box model was developed to assess the ability of established and recently identified HONO sources and sinks to reproduce observations of HONO mixing ratios. A baseline model scenario includes sources and sinks established in the literature and is compared to scenarios including three recently identified sources: volatile organic compound-mediated conversion of nitric acid to HONO (S1), biotic emission from the ground (S2), and re-emission from a surface nitrite reservoir (S3). For all mechanisms, ranges of parametric values span lower- and upper-limit values. Model outcomes for 'likely' estimates of sources and sinks generally show under-prediction of HONO observations, implying the need to evaluate additional sources and variability in estimates of parameterizations, particularly during daylight hours. Monte Carlo simulation is applied to model scenarios constructed with sources S1-S3 added independently and in combination, generally showing improved model outcomes. Adding sources S2 and S3 (scenario S2/S3) appears to best replicate observed HONO, as determined by the model coefficient of determination and residual sum of squared errors (r² = 0.55 ± 0.03, SSE = 4.6 × 10⁶ ± 7.6 × 10⁵ ppt²). In scenario S2/S3, source S2 is shown to account for 25% and 6.7% of the nighttime and daytime budget, respectively, while source S3 accounts for 19% and 11% of the nighttime and daytime budget, respectively. However, despite the improved model fit, there remains significant underestimation of daytime HONO; on average, a 0.15 ppt/s unknown daytime HONO source, or 67% of the total daytime source, is needed to bring scenario S2/S3 into agreement with observation. Estimating 'best fit' parameterizations across lower- to upper-limit values yields a moderate reduction of the unknown daytime source, from 0.15 to 0.10 ppt/s.
Gui, Wen-Jun; Liu, Yi-Hua; Wang, Chun-Mei; Liang, Xiao; Zhu, Guo-Nian
2009-10-01
A heterologous direct competitive enzyme-linked immunosorbent assay (ELISA) for parathion residue determination is described based on a monoclonal antibody and a new competitor. The effects of several physicochemical factors, such as methanol concentration, ionic strength, pH value, and sample matrix, on the performance of the ELISA were optimized for the sake of obtaining a satisfactory assay sensitivity. Results showed that when the assay medium was in the optimized condition (phosphate buffer solution [PBS] containing 10% [v/v] methanol and 0.2 mol/L NaCl at a pH value of 5.0), the sensitivity (estimated as the IC(50) value) and the limit of detection (LOD, estimated as the IC(10) value) were 1.19 and 0.08 ng/ml, respectively. The precision investigation indicated that the intraassay precision values all were below 10% and that the interassay precision values ranged from 4.89 to 19.12%. In addition, the developed ELISA showed a good linear correlation (r(2)=0.9962) to gas chromatography within the analyte's concentration range of 0.1 to 16 ng/ml. When applied to the fortified samples (parathion fortification level: 5-15 microg/kg), the developed ELISA presented mean recoveries of 127.46, 122.52, 91.92, 124.01, 129.72, 99.37, and 87.17% for tomato, cucumber, banana, apple, orange, pear, and sugarcane, respectively. Results indicated that the established ELISA is a potential tool for parathion residue determination.
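IC50 and IC10 values like those reported are commonly read off a fitted four-parameter logistic competition curve; the sketch below assumes that curve shape (the paper does not state its fit function) and inverts the normalized curve for an arbitrary fractional inhibition:

```python
def four_pl(x, bottom, top, ic50, hill):
    # Four-parameter logistic: signal decreases as analyte concentration x rises
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

def ic_value(inhibition_fraction, ic50, hill):
    # Concentration producing a given fractional inhibition on the
    # normalized curve (0.5 -> IC50, 0.1 -> IC10, often taken as the LOD)
    f = inhibition_fraction
    return ic50 * (f / (1.0 - f)) ** (1.0 / hill)
```

With a Hill slope of 1, an IC50 of 1.19 ng/ml would imply an IC10 of 1.19/9 ≈ 0.13 ng/ml; the reported 0.08 ng/ml LOD is consistent with a somewhat shallower slope.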
Residents' willingness-to-pay for attributes of rural health care facilities.
Allen, James E; Davis, Alison F; Hu, Wuyang; Owusu-Amankwah, Emmanuel
2015-01-01
As today's rural hospitals have struggled with financial sustainability for the past 2 decades, it is critical to understand their value relative to alternatives, such as rural health clinics and private practices. To estimate the willingness-to-pay for specific attributes of rural health care facilities in rural Kentucky to determine which services and operational characteristics are most valued by rural residents. We fitted choice experiment data from 769 respondents in 10 rural Kentucky counties to a conditional logit model and used the results to estimate willingness-to-pay for attributes in several categories, including hours open, types of insurance accepted, and availability of health care professionals and specialized care. Acceptance of Medicaid/Medicare with use of a sliding fee scale versus acceptance of only private insurance was the most valued attribute. Presence of full diagnostic services, an emergency room, and 24-hour/7-day-per-week access were also highly valued. Conversely, the presence of specialized care, such as physical therapy, cancer care, or dialysis, was not valued. In total, respondents were willing to pay $225 more annually to support a hospital relative to a rural health clinic. Rural Kentuckians value the services, convenience, and security that rural hospitals offer, though they are not willing to pay more for specialized care that may be available in larger medical treatment centers. The results also inform which attributes might be added to existing rural health facilities to make them more valuable to local residents. © 2014 National Rural Health Association.
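In a conditional logit choice experiment, marginal willingness-to-pay for an attribute is conventionally the attribute coefficient divided by the negative of the cost coefficient. A minimal sketch; the coefficient values are hypothetical and chosen only to reproduce the $225-per-year scale of the hospital result:

```python
def marginal_wtp(beta_attribute, beta_cost):
    # Ratio of the utility gained from the attribute to the marginal
    # disutility of cost; beta_cost is expected to be negative
    return -beta_attribute / beta_cost
```

For instance, an attribute coefficient of 0.9 with a cost coefficient of -0.004 per dollar implies a willingness-to-pay of $225.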
Willingness to pay per quality-adjusted life year for life-saving treatments in Thailand
Nimdet, Khachapon; Ngorsuraches, Surachat
2015-01-01
Objective To estimate the willingness to pay (WTP) per quality-adjusted life year (QALY) value for life-saving treatments and to determine factors affecting the WTP per QALY value. Design A cross-sectional survey with multistage sampling and face-to-face interviews. Setting General population in the southern part of Thailand. Participants A total of 600 individuals were included in the study. Only 554 (92.3%) responses were usable for data analyses. Outcome measure Participants were asked for the maximum amount of WTP value for life-saving treatments by an open-ended question. EQ-5D-3L and visual analogue scale (VAS) were used to estimate additional QALY. Results The amount of WTP values varied from 0 to 720 000 Baht/year (approximately 32 Baht = US$1). The averages of additional QALY obtained from VAS and EQ-5D-3L were only slightly different (0.872 and 0.853, respectively). The averages of WTP per QALY obtained from VAS and EQ-5D-3L were 244 720 and 243 120 Baht/QALY, respectively. As compared to male participants, female participants were more likely to pay less for an additional QALY (p=0.007). In addition, participants with higher household incomes tended to have higher WTP per QALY values (p<0.001). Conclusions Our study added another WTP per QALY value specifically for life-saving treatments, which would complement the current cost-effectiveness threshold used in Thailand and optimise patient access to innovative treatments or technologies. PMID:26438135
Kobayashi, Masanao; Asada, Yasuki; Matsubara, Kosuke; Suzuki, Shouichi; Matsunaga, Yuta; Haba, Tomonobu; Kawaguchi, Ai; Daioku, Tomihiko; Toyama, Hiroshi; Kato, Ryoichi
2017-05-01
Adequate dose management during computed tomography is important. In the present study, the dosimetric application software ImPACT was extended with a functional calculator of the size-specific dose estimate and with the scan settings for the auto exposure control (AEC) technique. This study aimed to assess the practicality and accuracy of the modified ImPACT software for dose estimation. We compared the conversion factors identified by the software with the values reported by the American Association of Physicists in Medicine Task Group 204, and we noted similar results. Moreover, doses were calculated with the AEC technique and a fixed tube current of 200 mA for the chest-pelvis region. The modified ImPACT software could estimate each organ dose based on the modulated tube current. The ability to perform beneficial modifications indicates the flexibility of the ImPACT software, which can be further modified for estimation of other doses. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Improving PAGER's real-time earthquake casualty and loss estimation toolkit: a challenge
Jaiswal, K.S.; Wald, D.J.
2012-01-01
We describe the on-going developments of PAGER’s loss estimation models, and discuss value-added web content that can be generated related to exposure, damage and loss outputs for a variety of PAGER users. These developments include identifying vulnerable building types in any given area, estimating earthquake-induced damage and loss statistics by building type, and developing visualization aids that help locate areas of concern for improving post-earthquake response efforts. While detailed exposure and damage information is highly useful and desirable, significant improvements are still necessary in order to improve underlying building stock and vulnerability data at a global scale. Existing efforts with the GEM’s GED4GEM and GVC consortia will help achieve some of these objectives. This will benefit PAGER especially in regions where PAGER’s empirical model is less-well constrained; there, the semi-empirical and analytical models will provide robust estimates of damage and losses. Finally, we outline some of the challenges associated with rapid casualty and loss estimation that we experienced while responding to recent large earthquakes worldwide.
Independent Production Cost Estimate: XM1 Tank Main Armament Evaluation
1977-11-01
V. REFERENCES 1. Ammunition Cost Research Study, Gerald W. Kalal and Patrick Gannon, Jun 76, AD-A-029330. 2. Ammunition Cost Research: Medium-Bore Cannon Ammunition, Annexes A-E, Patrick Gannon, Celestino George, Gerald Kalal, Kathleen Keleher, Paul Riedesel, Joseph Robinson, Sep 75, AD-A-016104. 3. Cost Estimating Relationships for Manufacturing Hardware Cost of Gun/Howitzer Cannons, Gerald W. Kalal, Aug 72, AD-75-7163. 4. ARRCOM
Brenner, David J; Martinez, Alvaro A; Edmundson, Gregory K; Mitchell, Christina; Thames, Howard D; Armour, Elwood P
2002-01-01
We take a direct approach to the question of whether prostate tumors have an atypically high sensitivity to fractionation (a low alpha/beta ratio), more typical of the surrounding late-responding normal tissue. Earlier estimates of alpha/beta for prostate cancer have relied on comparing results from external beam radiotherapy (EBRT) and brachytherapy, an approach with significant pitfalls due to the many differences between the treatments. To circumvent this, we analyze recent data from a single EBRT + high-dose-rate (HDR) brachytherapy protocol, in which the brachytherapy was given in either 2 or 3 implants, and at various doses. For the analysis, standard models of tumor cure based on Poisson statistics were used in conjunction with the linear-quadratic formalism. Biochemical control at 3 years was the clinical endpoint. Patients were matched between the 3 HDR vs. 2 HDR implants by clinical stage, pretreatment prostate-specific antigen (PSA), Gleason score, length of follow-up, and age. The estimated value of alpha/beta from the current analysis, 1.2 Gy (95% CI: 0.03, 4.1 Gy), is consistent with previous estimates for prostate tumor control. This alpha/beta value is considerably less than typical values for tumors (≥8 Gy), and more comparable to values in surrounding late-responding normal tissues. This analysis provides strong supporting evidence that alpha/beta values for prostate tumor control are atypically low, as indicated by previous analyses and radiobiological considerations. If true, hypofractionation or HDR regimens for prostate radiotherapy (with appropriate doses) should produce tumor control and late sequelae that are at least as good or even better than currently achieved, with the added possibility that early sequelae may be reduced.
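The clinical implication of a low alpha/beta ratio is visible in the linear-quadratic biologically effective dose, BED = n·d·(1 + d/(α/β)): for the same physical schedule, a low-α/β tissue gains far more biological effect from larger fraction sizes. A sketch with an illustrative 20 × 3 Gy schedule (not a regimen from the study):

```python
def bed(n_fractions, dose_per_fraction, alpha_beta):
    # Linear-quadratic biologically effective dose (Gy)
    return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)
```

At 20 × 3 Gy, the BED is 210 Gy for alpha/beta = 1.2 but only 82.5 Gy for a typical tumor value of 8, which is why hypofractionation becomes attractive if prostate tumors truly have a low alpha/beta.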
NASA Astrophysics Data System (ADS)
Van Hoey, Gert; Birchenough, Silvana N. R.; Hostens, Kris
2014-02-01
Biological value estimation is based on a set of assessment questions and several thresholds to delineate areas of ecological importance (e.g. biodiversity). An existing framework, specifically designed to assess ecosystem biodiversity, was expanded by adding new questions on the productivity, functionality and biogeochemical status of benthic habitats. The additional ecological and sedimentological information was collected by using sediment profile imagery (SPI) and grab sampling. Additionally, information on the performance and comparability of both techniques is provided in this study. The research idea was tested at a site near the harbor of Zeebrugge, an area under consideration as a new disposal site for dredged material from the harbor entrance. The sedimentology of the area can be adequately described based on the information from both SPI and Van Veen grab samples, but only the SPI revealed structural information on the physical habitat (layering, a-RPD). The latter information represented the current status of the benthic habitat, which was confirmed by the Van Veen grab samples. All information was summarized through the biological valuation framework, and provided clear evidence of the differences in biological value for the different sediment types within the area. We concluded that the installation of a new dredged material disposal site in this area was not in conflict with the benthic ecology. This area has a low biological value, and the benthic system is adapted to changing conditions, as signaled by the dominance of mobile, short-living and opportunistic species. This study showed that suitable sedimentological and ecological information can be gathered by these traditional and complementary techniques, to estimate the biological value of an area in the light of marine spatial planning and environmental impact assessments.
NASA Astrophysics Data System (ADS)
Longman, Ryan J.; Giambelluca, Thomas W.; Frazier, Abby G.
2012-01-01
Estimates of clear sky global solar irradiance using the parametric model SPCTRAL2 were tested against clear sky radiation observations at four sites in Hawai`i using daily, mean monthly, and 1 year mean model parameter settings. Atmospheric parameters in SPCTRAL2 and similar models are usually set at site-specific values and are not varied to represent the effects of fluctuating humidity, aerosol amount and type, or ozone concentration, because time-dependent atmospheric parameter estimates are not available at most sites of interest. In this study, we sought to determine the added value of using time-dependent as opposed to fixed model input parameter settings. At the AERONET site, Mauna Loa Observatory (MLO) on the island of Hawai`i, where daily measurements of atmospheric optical properties and hourly solar radiation observations are available, use of daily rather than 1 year mean aerosol parameter values reduced mean bias error (MBE) from 18 to 10 W m⁻² and root mean square error from 25 to 17 W m⁻². At three stations in the HaleNet climate network, located at elevations of 960, 1640, and 2590 m on the island of Maui, where aerosol-related parameter settings were interpolated from observed values for AERONET sites at MLO (3397 m) and Lāna`i (20 m), and precipitable water was estimated using radiosonde-derived humidity profiles from nearby Hilo, the model performed best when using constant 1 year mean parameter values. At HaleNet Station 152, for example, MBE was 18, 10, and 8 W m⁻² for daily, monthly, and 1 year mean parameters, respectively.
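The two error metrics used in the comparison are plain averages over paired model/observation samples; a minimal sketch with toy series, not the station data:

```python
import math

def mean_bias_error(model, obs):
    # Average signed difference (positive = model over-predicts on average)
    return sum(m - o for m, o in zip(model, obs)) / len(obs)

def root_mean_square_error(model, obs):
    # Quadratic mean of the differences; penalizes large errors more heavily
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))
```

MBE captures systematic offset while RMSE also reflects scatter, which is why the study reports both when comparing daily versus fixed parameter settings.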
Value-added biotransformation of cellulosic sugars by engineered Saccharomyces cerevisiae.
Lane, Stephan; Dong, Jia; Jin, Yong-Su
2018-07-01
The substantial research efforts into lignocellulosic biofuels have generated an abundance of valuable knowledge and technologies for metabolic engineering. In particular, these investments have led to a vast growth in proficiency of engineering the yeast Saccharomyces cerevisiae for consuming lignocellulosic sugars, enabling the simultaneous assimilation of multiple carbon sources, and producing a large variety of value-added products by introduction of heterologous metabolic pathways. While microbial conversion of cellulosic sugars into large-volume low-value biofuels is not currently economically feasible, there may still be opportunities to produce other value-added chemicals as regulation of cellulosic sugar metabolism is quite different from glucose metabolism. This review summarizes these recent advances with an emphasis on employing engineered yeast for the bioconversion of lignocellulosic sugars into a variety of non-ethanol value-added products. Copyright © 2018 Elsevier Ltd. All rights reserved.
Moseson, Heidi; Massaquoi, Moses; Dehlendorf, Christine; Bawo, Luke; Dahn, Bernice; Zolia, Yah; Vittinghoff, Eric; Hiatt, Robert A; Gerdts, Caitlin
2015-12-01
Direct measurement of sensitive health events is often limited by high levels of under-reporting due to stigma and concerns about privacy. Abortion in particular is notoriously difficult to measure. This study implements a novel method to estimate the cumulative lifetime incidence of induced abortion in Liberia. In a randomly selected sample of 3219 women ages 15–49 years in June 2013 in Liberia, we implemented the ‘Double List Experiment’. To measure abortion incidence, each woman was read two lists: (A) a list of non-sensitive items and (B) a list of correlated non-sensitive items with abortion added. The sensitive item, abortion, was randomly added to either List A or List B for each respondent. The respondent reported a simple count of the options on each list that she had experienced, without indicating which options. Difference in means calculations between the average counts for each list were then averaged to provide an estimate of the population proportion that has had an abortion. The list experiment estimates that 32% (95% confidence interval (CI): 0.29-0.34) of respondents surveyed had ever had an abortion (26% of women in urban areas, and 36% of women in rural areas, P-value for difference < 0.001), with a 95% response rate. The list experiment generated an estimate five times greater than the only previous representative estimate of abortion in Liberia, indicating the potential utility of this method to reduce under-reporting in the measurement of abortion. The method could be widely applied to measure other stigmatized health topics, including sexual behaviours, sexual assault or domestic violence.
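The averaged difference-in-means estimator described above can be sketched as follows; the function and the synthetic counts are ours, not the study's data:

```python
import numpy as np

def double_list_estimate(counts_a1, counts_b1, counts_a2, counts_b2):
    """
    Double list experiment estimator (a sketch; variable names are ours).
    Group 1 had the sensitive item appended to list A, group 2 to list B.
    Each between-group difference in mean counts estimates the prevalence
    of the sensitive item; the two estimates are averaged.
    """
    est_a = np.mean(counts_a1) - np.mean(counts_a2)  # item on A for group 1
    est_b = np.mean(counts_b2) - np.mean(counts_b1)  # item on B for group 2
    return 0.5 * (est_a + est_b)

# Tiny synthetic example: per-respondent item counts for each list
group1_a, group1_b = [3, 2, 2, 3], [1, 1, 1, 1]
group2_a, group2_b = [2, 2, 2, 2], [1, 2, 1, 2]
print(double_list_estimate(group1_a, group1_b, group2_a, group2_b))
```

Because respondents never name individual items, the design protects privacy while the randomization lets the averaged differences isolate the sensitive item's prevalence.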
Transport of human adenoviruses in porous media
NASA Astrophysics Data System (ADS)
Kokkinos, Petros; Syngouna, Vasiliki I.; Tselepi, Maria A.; Bellou, Maria; Chrysikopoulos, Constantinos V.; Vantarakis, Apostolos
2015-04-01
Groundwater may be contaminated with infective human enteric viruses from various wastewater discharges, sanitary landfills, septic tanks, agricultural practices, and artificial groundwater recharge. Coliphages have been widely used as surrogates of enteric viruses because they share many fundamental properties and features. Although a large number of studies focusing on the various factors (i.e. pore water solution chemistry, fluid velocity, moisture content, temperature, and grain size) that affect biocolloid (bacteria, virus) transport have been published over the past two decades, little attention has been given to human adenoviruses (hAdVs). The main objective of this study was to evaluate the effect of pore water velocity on hAdV transport in water-saturated laboratory-scale columns packed with glass beads. The effects of pore water velocity on virus transport and retention in porous media were examined at three pore water velocities (0.39, 0.75, and 1.22 cm/min). The results indicated that all estimated average mass recovery values for hAdV were lower than those of coliphages previously reported in the literature for experiments conducted under similar conditions. However, no obvious relationship between hAdV mass recovery and water velocity could be established from the experimental results. The collision efficiencies were quantified using classical colloid filtration theory. Average collision efficiency, α, values decreased with decreasing flow rate, Q, and pore water velocity, U, but no significant effect of U on α was observed. Furthermore, the surface properties of the viruses and glass beads were used to construct classical DLVO potential energy profiles. The results revealed that the experimental conditions of this study were unfavorable to deposition and that no aggregation between virus particles is expected to occur. A thorough understanding of the key processes governing virus transport is pivotal for public health protection.
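The collision efficiency α mentioned above is commonly obtained from column breakthrough data using the classical colloid-filtration-theory relation shown below; the numbers are illustrative, not the study's:

```python
import math

def collision_efficiency(d_c, porosity, length, eta0, mass_recovery):
    """
    Collision (attachment) efficiency from classical colloid filtration
    theory, in the form commonly used with column breakthrough data:
        alpha = -2 d_c ln(M_r) / (3 (1 - theta) L eta0)
    d_c: collector (grain) diameter, length: column length (same units),
    eta0: single-collector contact efficiency, mass_recovery in (0, 1].
    """
    return (-2.0 * d_c * math.log(mass_recovery)
            / (3.0 * (1.0 - porosity) * length * eta0))

# Illustrative numbers only (not from the study): 2 mm glass beads,
# porosity 0.42, 30 cm column, eta0 = 0.01, 60% mass recovery.
alpha = collision_efficiency(d_c=0.2, porosity=0.42, length=30.0,
                             eta0=0.01, mass_recovery=0.60)
print(alpha)
```

Lower mass recovery (more retention) yields a larger α; full recovery gives α = 0.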
Experimental Study on the Distillation Capacity of Alcohol-Gasoline Blends
NASA Astrophysics Data System (ADS)
Stan, C.; Andreescu, C.; Dobre, A.; Iozsa, D.
2017-10-01
The objective of this paper is to highlight the consequences of adding different alcohols to gasoline on the distillation characteristics of the resulting blends. Changes in the distillation parameters (ti, t10, t50, t90, tf, E70, E100, E150) were evaluated, and the evolution trends of the distillation curves for the different alcohols blended with gasoline were estimated. Several types of commercially available gasoline and methanol, ethanol, i-propanol, and butanol were used in the experiments, and the corresponding distillation curves were analyzed. The alcohol fraction in the mixtures varied between 5 and 20%. Double blends with one alcohol added to gasoline and triple blends with two alcohols added to gasoline were used. The distillation curves of the mixtures were compared against that of pure gasoline, and it was assessed whether the values of the distillation parameters E70, E100, and E150 remained within the limits of EN 228. The distillation was performed on 100 ml of fuel, with measurements taken for every 10 ml of fuel vaporized and then condensed. The influence of the alcohols present in these mixtures was manifested in changes in the shape of the distillation curve. The influence of butanol on the distillation temperatures was found to be lower than that of ethanol, because the physicochemical properties of butanol are closer to those of gasoline. The alcohol molecules actively interact with the gasoline fractions, their combination leading to a conjugate effect and a modification of the distillation parameter values. The variation of these parameters depends on the alcohol fraction in the mixture.
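The evaporated-fraction parameters E70, E100, and E150 can be read off a measured distillation curve by interpolation; a sketch with a hypothetical curve (not the measured data):

```python
import numpy as np

# Hypothetical distillation curve: temperature (deg C) at each 10%
# of fuel evaporated, as recorded every 10 ml of a 100 ml sample.
evaporated = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100])
temp_c     = np.array([36, 52, 61, 70, 81, 94, 108, 124, 143, 165, 196])

def evaporated_at(t, temps=temp_c, evap=evaporated):
    """Percent evaporated at temperature t, by linear interpolation
    along the distillation curve (temps must be increasing)."""
    return float(np.interp(t, temps, evap))

e70, e100, e150 = (evaporated_at(t) for t in (70, 100, 150))
print(e70, e100, e150)
```

The resulting E70/E100/E150 values would then be checked against the corresponding EN 228 limits for the gasoline grade.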
Investigating the electronic properties of Al2O3/Cu(In,Ga)Se2 interface
NASA Astrophysics Data System (ADS)
Kotipalli, R.; Vermang, B.; Joel, J.; Rajkumar, R.; Edoff, M.; Flandre, D.
2015-10-01
Atomic layer deposited (ALD) Al2O3 films on Cu(In,Ga)Se2 (CIGS) surfaces have been demonstrated to exhibit excellent surface passivation properties, which is advantageous in reducing recombination losses at the rear metal contact of CIGS thin-film solar cells. Here, we report, for the first time, experimentally extracted electronic parameters, i.e. fixed charge density (Qf) and interface-trap charge density (Dit), for as-deposited (AD) and post-deposition annealed (PDA) ALD Al2O3 films on CIGS surfaces using capacitance-voltage (C-V) and conductance-frequency (G-f) measurements. These results indicate that the AD films exhibit positive fixed charges Qf (approximately 10^12 cm^-2), whereas the PDA films exhibit a very high density of negative fixed charges Qf (approximately 10^13 cm^-2). The extracted Dit values, which reflect the extent of chemical passivation, were found to be of a similar order of magnitude (approximately 10^12 cm^-2 eV^-1) for both AD and PDA samples. The high density of negative Qf in the bulk of the PDA Al2O3 film exerts a strong repulsive Coulomb force on the underlying CIGS minority carriers, preventing them from recombining at the CIGS/Al2O3 interface. Using the experimentally extracted Qf and Dit values, SCAPS simulations showed that the surface concentration of minority carriers (ns) in the PDA films was approximately eight orders of magnitude lower than in the AD films. The electrical characterization and estimations presented in this letter construct a comprehensive picture of the interfacial physics involved at the Al2O3/CIGS interface.
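One textbook way to estimate a fixed charge density Qf from C-V data is via the flat-band voltage shift; this sketch uses that standard MIS relation with illustrative values, and may differ from the letter's exact extraction procedure:

```python
# Fixed insulator charge per unit area from the flat-band voltage of a
# C-V curve: Q_f = C_ox (phi_ms - V_fb) / q. Textbook MIS relation,
# shown only as a sketch; all numbers below are assumptions.
EPS0 = 8.854e-14   # vacuum permittivity, F/cm
Q_E  = 1.602e-19   # elementary charge, C

def fixed_charge_density(eps_r, t_cm, phi_ms, v_fb):
    """Fixed charge per cm^2 from the dielectric constant, insulator
    thickness (cm), work-function difference and flat-band voltage (V)."""
    c_ox = EPS0 * eps_r / t_cm           # insulator capacitance per cm^2
    return c_ox * (phi_ms - v_fb) / Q_E  # positive => positive fixed charge

# Illustrative values: 25 nm Al2O3 (eps_r ~ 9), phi_ms = -0.5 V,
# a large negative flat-band shift of the kind annealing can produce.
qf = fixed_charge_density(eps_r=9.0, t_cm=25e-7, phi_ms=-0.5, v_fb=-4.0)
print(f"{qf:.2e} cm^-2")
```

With these assumed inputs the estimate lands in the 10^12 to 10^13 cm^-2 range discussed in the abstract.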
[A model of world population growth as an experiment in systematic research].
Kapitsa, S
1997-01-01
A mathematical model was developed for the estimation of global population growth, and the estimates were compared with those of the UN, covering the stretch from 4.4 million years B.C. to the years 2175 and 2500 A.D. The estimates were also broken down into human, geological, and technological historical periods. The model showed that human population would stabilize at the level of 14 billion around 2500 A.D. and 13 billion around 2200 A.D., in accordance with UN projections. It also revealed the history of human population growth through the following stages (UN figures are listed in parentheses): 100,000, about 1.6 million years ago; 5 (1-5) million, 35,000 B.C.; 21 (10-15) million, 7000 B.C.; 46 (47) million, 2000 B.C.; 93 (100-230) million, at the time of Christ; 185 (275-345) million, 1000 A.D.; 366 (450-540) million, 1500 A.D.; 887 (907) million, 1800 A.D.; 1158 (1170) million, 1850 A.D.; 1656 (1650-1710) million, 1900 A.D.; 2812 (2515) million, 1950 A.D.; 5253 (5328) million, 1990 A.D.; 6265 (6261) million, 2000 A.D.; 10,487 (10,019) million, 2050 A.D.; 12,034 (11,186) million, 2100 A.D.; 12,648 (11,543) million, 2150 A.D.; 12,946 (11,600) million, 2200 A.D.; and 13,536 million, 2500 A.D. The model advanced the investigation of these phenomena by studying the interactions between economic, technological, social, cultural, and biological processes. The analysis showed that humanity has reached a critical phase in its growth and that development in each period depended on external, not internal, factors. This permits the formulation of the principle of the demographic imperative (distinct from the Malthusian principle), which states that resources determine the speed and extent of population growth.
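The abstract does not state the model's equation; the hyperbolic growth law N(t) = C / (T1 - t) often associated with Kapitza-type world-population models is sketched here purely for illustration, with assumed constants:

```python
# Hyperbolic ("blow-up") growth: N(t) = C / (T1 - t). The constants
# below are illustrative assumptions, not taken from the article.
def hyperbolic_population(t, c=2.0e11, t1=2025.0):
    """World population (persons) at year t under pure hyperbolic growth."""
    return c / (t1 - t)

# Growth accelerates as t approaches t1; actual models of this family
# regularize the singularity, which produces the demographic transition
# and the stabilization described in the abstract.
n_1900 = hyperbolic_population(1900)
n_1990 = hyperbolic_population(1990)
print(n_1900, n_1990)
```

Even with these rough constants, the values fall near the historical figures listed above, which is why the hyperbolic law is a common starting point for such models.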
ERIC Educational Resources Information Center
Raudenbush, Stephen
2013-01-01
This brief considers the problem of using value-added scores to compare teachers who work in different schools. The author focuses on whether such comparisons can be regarded as fair, or, in statistical language, "unbiased." An unbiased measure does not systematically favor teachers because of the backgrounds of the students they are…
ERIC Educational Resources Information Center
Harris, Douglas N.
2012-01-01
In the recent drive to revamp teacher evaluation and accountability, measures of a teacher's value added have played the starring role. But the star of the show is not always the best actor, nor can the star succeed without a strong supporting cast. In assessing teacher performance, observations of classroom practice, portfolios of teachers' work,…
ERIC Educational Resources Information Center
Amrein-Beardsley, Audrey; Collins, Clarin
2012-01-01
The SAS Educational Value-Added Assessment System (SAS[R] EVAAS[R]) is the most widely used value-added system in the country. It is also self-proclaimed as "the most robust and reliable" system available, with its greatest benefit to help educators improve their teaching practices. This study critically examined the effects of SAS[R] EVAAS[R] as…
ERIC Educational Resources Information Center
Tymms, Peter
This is the fourth in a series of technical reports that have dealt with issues surrounding the possibility of national value-added systems for primary schools in England. The main focus has been on the relative progress made by students between the ends of Key Stage 1 (KS1) and Key Stage 2 (KS2). The analysis has indicated that the strength of…
Beheshti, Iman; Demirel, Hasan; Farokhian, Farnaz; Yang, Chunlan; Matsuda, Hiroshi
2016-12-01
This paper presents an automatic computer-aided diagnosis (CAD) system based on feature ranking for detection of Alzheimer's disease (AD) using structural magnetic resonance imaging (sMRI) data. The proposed CAD system is composed of four systematic stages. First, global and local differences in the gray matter (GM) of AD patients compared to the GM of healthy controls (HCs) are analyzed using a voxel-based morphometry technique. The aim is to identify significant local differences in the volume of GM as volumes of interest (VOIs). Second, the voxel intensity values of the VOIs are extracted as raw features. Third, the raw features are ranked using seven feature-ranking methods, namely statistical dependency (SD), mutual information (MI), information gain (IG), Pearson's correlation coefficient (PCC), t-test score (TS), Fisher's criterion (FC), and the Gini index (GI); features with higher scores are more discriminative. To determine the number of top features, the classification error estimated on a training set made up of the AD and HC groups is calculated, and the feature-vector size that minimizes this error is selected as the number of top discriminative features. Fourth, the classification is performed using a support vector machine (SVM). In addition, a data fusion approach among the feature-ranking methods is introduced to improve the classification performance. The proposed method is evaluated on a data set from ADNI (130 AD and 130 HC) with 10-fold cross-validation. The classification accuracy of the proposed automatic system for the diagnosis of AD is up to 92.48% using the sMRI data. In summary, an automatic CAD system for the classification of AD based on feature ranking and classification-error estimation is proposed, in which seven feature-ranking methods (SD, MI, IG, PCC, TS, FC, and GI) are evaluated and the optimal number of top discriminative features is determined by classification-error estimation in the training phase.
The experimental results indicate that the performance of the proposed system is comparable to that of state-of-the-art classification models. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
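The rank-then-select scheme of stages three and four can be sketched with one of the seven criteria (PCC) on synthetic data; a nearest-centroid classifier stands in for the paper's SVM to keep the example self-contained:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for voxel features: 200 subjects x 50 features,
# of which only the first 5 actually separate the two groups.
n, p = 200, 50
y = np.repeat([0, 1], n // 2)
X = rng.normal(size=(n, p))
X[y == 1, :5] += 1.5

# Rank features by |Pearson correlation| with the label (the PCC
# criterion; the paper also evaluates SD, MI, IG, TS, FC and GI).
pcc = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
order = np.argsort(-np.abs(pcc))

def training_error(k):
    """Error of a nearest-centroid classifier (a simple stand-in for
    the paper's SVM) using the top-k ranked features."""
    idx = order[:k]
    m0, m1 = X[y == 0][:, idx].mean(0), X[y == 1][:, idx].mean(0)
    d0 = ((X[:, idx] - m0) ** 2).sum(1)
    d1 = ((X[:, idx] - m1) ** 2).sum(1)
    return np.mean((d1 < d0) != y)

# Select the feature-vector size that minimizes the estimated error.
errors = {k: training_error(k) for k in range(1, 21)}
best_k = min(errors, key=errors.get)
print(best_k, errors[best_k])
```

In the paper this error is estimated on the training set per ranking method, and the fusion step combines the rankings before the final SVM.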
Zuo, Shu-di; Ren, Yin; Weng, Xian; Ding, Hong-feng; Luo, Yun-jian
2015-02-01
The biomass allometric equation (BAE) is widely used as a simple and reliable method for estimating forest biomass and carbon. In China, numerous studies have focused on BAEs for coniferous forests and pure broadleaved forests, and generalized BAEs have frequently been used to estimate the biomass and carbon of mixed broadleaved forests, although they can introduce large uncertainty into the estimates. In this study, we developed species-specific and generalized BAEs using biomass measurements for 9 common broadleaved trees (Castanopsis fargesii, C. lamontii, C. tibetana, Lithocarpus glaber, Sloanea sinensis, Daphniphyllum oldhami, Alniphyllum fortunei, Manglietia yuyuanensis, and Engelhardtia fenzlii) of subtropical evergreen broadleaved forest, and compared the differences between the species-specific and generalized BAEs. The results showed that D (diameter at breast height) was a better independent variable for estimating the biomass of branch, leaf, root, aboveground section, and total tree than the combined variable D^2H of D and H (tree height), but D^2H was better than D for estimating stem biomass. The R^2 (coefficient of determination) values of the BAEs for 6 species decreased when H was added as a second independent variable to the D-only BAEs; the R^2 value for S. sinensis decreased by 5.6%. Compared with the generalized D- and D^2H-based BAEs, the standard errors of estimate (SEE) of the species-specific BAEs for 8 tree species decreased, and a similar decreasing trend was observed for the different components, with the SEEs of the branch decreasing by 13.0% and 20.3%. Therefore, biomass carbon storage and its dynamic estimates are influenced largely by tree species and model type. To improve the accuracy of biomass and carbon estimates, the differences between tree species and model types should be taken into account.
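A D-based BAE of the power-law form B = a * D^b is typically fitted by log-log least squares; a sketch on synthetic data (the coefficients are assumptions, not the study's):

```python
import numpy as np

# Log-log least-squares fit of the power-law BAE  B = a * D**b,
# i.e. ln B = ln a + b ln D. Data below are synthetic for illustration.
D = np.array([8.0, 12.0, 16.0, 22.0, 30.0, 38.0])   # DBH, cm
B = 0.12 * D ** 2.4 * np.exp(np.random.default_rng(1).normal(0, 0.05, 6))

# polyfit returns coefficients in increasing degree: [intercept, slope]
ln_a, b = np.polynomial.polynomial.polyfit(np.log(D), np.log(B), 1)
a = np.exp(ln_a)

def predict_biomass(d):
    """Predicted biomass (kg) at breast-height diameter d (cm)."""
    return a * d ** b

print(a, b)
```

A D^2H-based BAE is fitted the same way with ln(D^2 * H) as the regressor; comparing the two fits' SEE values is what drives the component-wise conclusions above.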
Validation of a polygenic risk score for dementia in black and white individuals
Marden, Jessica R; Walter, Stefan; Tchetgen Tchetgen, Eric J; Kawachi, Ichiro; Glymour, M Maria
2014-01-01
Objective To determine whether a polygenic risk score for Alzheimer's disease (AD) predicts dementia probability and memory functioning in non-Hispanic black (NHB) and non-Hispanic white (NHW) participants from a sample not used in previous genome-wide association studies. Methods Non-Hispanic white and NHB Health and Retirement Study (HRS) participants provided genetic information and either a composite memory score (n = 10,401) or a dementia probability score (n = 7690). Dementia probability score was estimated for participants' age 65+ from 2006 to 2010, while memory score was available for participants age 50+. We calculated AD genetic risk scores (AD-GRS) based on 10 polymorphisms confirmed to predict AD, weighting alleles by beta coefficients reported in AlzGene meta-analyses. We used pooled logistic regression to estimate the association of the AD-GRS with dementia probability and generalized linear models to estimate its effect on memory score. Results Each 0.10 unit change in the AD-GRS was associated with larger relative effects on dementia among NHW aged 65+ (OR = 2.22; 95% CI: 1.79, 2.74; P < 0.001) than NHB (OR = 1.33; 95% CI: 1.00, 1.77; P = 0.047), although additive effect estimates were similar. Each 0.10 unit change in the AD-GRS was associated with a −0.07 (95% CI: −0.09, −0.05; P < 0.001) SD difference in memory score among NHW aged 50+, but no significant differences among NHB (β = −0.01; 95% CI: −0.04, 0.01; P = 0.546). [Correction added on 29 July 2014, after first online publication: confidence intervals have been amended.] The estimated effect of the GRS was significantly smaller among NHB than NHW (P < 0.05) for both outcomes. Conclusion This analysis provides evidence for differential relative effects of the GRS on dementia probability and memory score among NHW and NHB in a new, national data set. PMID:25328845
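A weighted genetic risk score of the kind described above is a weighted sum of risk-allele counts; a sketch with hypothetical SNP weights (not the paper's 10 loci), using one common 0-1 normalization:

```python
import numpy as np

def genetic_risk_score(genotypes, betas):
    """
    Weighted allele-count risk score: genotypes is an (n_people, n_snps)
    array of risk-allele counts (0, 1 or 2); betas are per-SNP log-odds
    weights (in the paper, taken from AlzGene meta-analyses). Dividing
    by the maximum possible weighted dosage scales the score to [0, 1],
    one common normalization (assumed here).
    """
    g = np.asarray(genotypes, dtype=float)
    b = np.asarray(betas, dtype=float)
    return g @ b / (2.0 * b.sum())

# Illustrative weights for 3 hypothetical SNPs (not the paper's loci)
betas = np.array([0.30, 0.15, 0.25])
genos = np.array([[2, 1, 0],
                  [0, 0, 1]])
scores = genetic_risk_score(genos, betas)
print(scores)
```

On a 0-1 scale, the paper's "0.10 unit change" corresponds to a tenth of the full range of the score.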
Numerical modeling of solar irradiance on earth's surface
NASA Astrophysics Data System (ADS)
Mera, E.; Gutierez, L.; Da Silva, L.; Miranda, E.
2016-05-01
Modeling and estimation of solar radiation at ground level face several difficulties: estimating the equation of time, the Sun-Earth distance, the solar declination, and the surface irradiance. Many studies have reported that these theoretical equations alone cannot deliver accurate radiation estimates, so many authors apply corrections through calibration against field pyranometers (solarimeters) or against satellite data, the latter being the weaker technique because it does not distinguish between radiative and radiant kinetic effects. Taking advantage of a properly calibrated ground weather station in the Susques Salar in Jujuy Province, Argentina, we modeled the variable in question using the following process: 1. theoretical modeling; 2. graphical comparison of the theoretical and observed data; 3. primary calibration adjustment through hourly segmentation of the data, applying horizontal shifts and adding asymptotic constants; 4. analysis of scatter plots and comparison of the series. These steps yielded the following results. Step one: theoretical data were generated. Step two: the theoretical data were shifted by 5 hours. Step three: an asymptote was applied to all negative emissivity values, and a least-squares minimization between actual and modeled values (Excel Solver) was carried out, giving new asymptote values and a corresponding reformulation of the theoretical data; a constant value was then added by month over the set time range (4:00 pm to 6:00 pm). Step four: the monthly correlations between actual and theoretical data for the modeling-equation coefficients ranged from 0.7 to 0.9.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berryman, J. G.
While the well-known Voigt and Reuss (VR) bounds, and the Voigt-Reuss-Hill (VRH) elastic constant estimators for random polycrystals are all straightforwardly calculated once the elastic constants of the anisotropic crystals are known, the Hashin-Shtrikman (HS) bounds and related self-consistent (SC) estimators for the same constants are, by comparison, more difficult to compute. Recent work has shown how to simplify (to some extent) these harder-to-compute HS bounds and SC estimators. An overview and analysis of a subsampling of these results is presented here, the main point being to show whether or not this extra work (i.e., in calculating both the HS bounds and the SC estimates) does provide added value since, in particular, the VRH estimators often do not fall within the HS bounds, while the SC estimators (for good reasons) have always been found to do so. The quantitative differences between the SC and the VRH estimators in the eight cases considered are often quite small, however, being on the order of ±1%. These quantitative results hold true even though the polycrystal Voigt-Reuss-Hill estimators more typically (but not always) fall outside the Hashin-Shtrikman bounds, while the self-consistent estimators always fall inside (or on the boundaries of) these same bounds.
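For a cubic crystal, the Voigt and Reuss bounds and the Hill average reduce to closed-form expressions, which is why the abstract calls them "straightforwardly calculated"; a sketch (copper constants are approximate literature values):

```python
def vrh_cubic(c11, c12, c44):
    """
    Voigt, Reuss, and Voigt-Reuss-Hill shear moduli for a random
    polycrystal of a cubic crystal; the bulk modulus is the same in
    both bounds for cubic symmetry. Standard textbook expressions.
    """
    k = (c11 + 2.0 * c12) / 3.0
    g_voigt = (c11 - c12 + 3.0 * c44) / 5.0
    g_reuss = 5.0 * (c11 - c12) * c44 / (4.0 * c44 + 3.0 * (c11 - c12))
    g_hill = 0.5 * (g_voigt + g_reuss)   # the VRH estimator
    return k, g_voigt, g_reuss, g_hill

# Copper single-crystal constants in GPa (approximate literature values)
k, gv, gr, gh = vrh_cubic(168.4, 121.4, 75.4)
print(k, gv, gr, gh)
```

The HS bounds and the SC estimator, by contrast, require solving implicit equations in the effective moduli, which is the "extra work" whose added value the abstract assesses.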
NASA Astrophysics Data System (ADS)
Manimekalai, R.; Antony Joseph, A.; Ramachandra Raja, C.
2014-03-01
This article has been retracted: please see the Elsevier Policy on Article Withdrawal. This article has been retracted at the request of the authors. We had reported that the Aloe vera amino acid-added lithium sulphate monohydrate (AALSMH) crystal is a new nonlinear optical crystal. From the recorded high-performance liquid chromatography spectrum, by matching the retention times with known compounds, the amino acids present in our extract were identified as homocystine, isoleucine, serine, leucine, and tyrosine. From the thin-layer chromatography and colorimetric estimation techniques, the presence of isoleucine was identified, and it was also confirmed by the NMR spectrum. From the above studies, we concluded that AALSMH is a new nonlinear optical crystal. After further investigation, however, the lattice parameter values of AALSMH were found to coincide with those of lithium sulphate. Therefore we have decided to withdraw our paper. We apologize for the inconvenience and time spent.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newsom, R. K.; Sivaraman, C.; Shippert, T. R.
Accurate height-resolved measurements of higher-order statistical moments of vertical velocity fluctuations are crucial for improved understanding of turbulent mixing and diffusion, convective initiation, and cloud life cycles. The Atmospheric Radiation Measurement (ARM) Climate Research Facility operates coherent Doppler lidar systems at several sites around the globe. These instruments provide measurements of clear-air vertical velocity profiles in the lower troposphere with a nominal temporal resolution of 1 sec and height resolution of 30 m. The purpose of the Doppler lidar vertical velocity statistics (DLWSTATS) value-added product (VAP) is to produce height- and time-resolved estimates of vertical velocity variance, skewness, and kurtosis from these raw measurements. The VAP also produces estimates of cloud properties, including cloud-base height (CBH), cloud frequency, cloud-base vertical velocity, and cloud-base updraft fraction.
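The higher-order moments the VAP reports are the standard normalized central moments of the vertical-velocity samples in each height/time bin; a sketch on synthetic data:

```python
import numpy as np

def vertical_velocity_moments(w):
    """Variance, skewness, and kurtosis of a vertical-velocity sample,
    as computed (conceptually) per height/time bin by the VAP."""
    w = np.asarray(w, dtype=float)
    dw = w - w.mean()
    var = np.mean(dw ** 2)
    skew = np.mean(dw ** 3) / var ** 1.5
    kurt = np.mean(dw ** 4) / var ** 2   # Gaussian noise gives ~3
    return var, skew, kurt

# Synthetic stand-in for ~1-s clear-air samples (m/s), not lidar data
rng = np.random.default_rng(2)
w = rng.normal(0.0, 0.5, 60_000)
var, skew, kurt = vertical_velocity_moments(w)
print(var, skew, kurt)
```

Positive skewness in real boundary-layer data typically indicates narrow, strong updrafts in broader weak downdrafts; kurtosis above 3 indicates intermittency.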
Yoon, Ki Woong; Song, Ji Soo; Han, Young Min
2014-01-01
To estimate the diagnostic accuracy of the sum of relative enhancement ratios (sRER) in the differential diagnosis of hepatocellular carcinoma (HCC) from benign cirrhosis-related nodules, 18 benign cirrhosis-related nodules and 18 HCCs were evaluated. Three radiologists independently reviewed the computed tomography images using visual assessment and sRER. sRER was estimated by adding region-of-interest measurements from the arterial and delayed phases. Diagnostic performance and accuracy were evaluated. The mean values of sRER were significantly higher in HCCs than in benign cirrhosis-related nodules, and the sRER method improved the diagnostic accuracy of differentiating HCCs from benign cirrhosis-related nodules. Copyright © 2014 Elsevier Inc. All rights reserved.
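A sketch of the sRER computation, assuming the usual (post − pre)/pre definition of relative enhancement per phase (the paper may define the ratio differently); all attenuation values are illustrative:

```python
# Sketch only: we assume relative enhancement in each phase is
# (post-contrast - pre-contrast) / pre-contrast ROI attenuation,
# and sRER is the arterial-phase plus delayed-phase sum.
def relative_enhancement(pre_hu, post_hu):
    return (post_hu - pre_hu) / pre_hu

def srer(pre_hu, arterial_hu, delayed_hu):
    """Sum of relative enhancement ratios over the two phases."""
    return (relative_enhancement(pre_hu, arterial_hu)
            + relative_enhancement(pre_hu, delayed_hu))

# Illustrative ROI attenuation values (HU), not from the study
value = srer(pre_hu=50.0, arterial_hu=95.0, delayed_hu=70.0)
print(value)
```

Hypervascular HCCs enhance strongly in the arterial phase, which is why a summed enhancement ratio separates them from benign nodules.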
Code of Federal Regulations, 2014 CFR
2014-04-01
... States and one or more foreign countries, the value of the assist is the value added outside the United... general expenses or value from further processing, or (iii) Added under § 152.106(b) as profit or general... undertaken within the United States. (3) The following apply in determining the value of assists described in...
Improving the Fit of a Land-Surface Model to Data Using its Adjoint
NASA Astrophysics Data System (ADS)
Raoult, N.; Jupp, T. E.; Cox, P. M.; Luke, C.
2015-12-01
Land-surface models (LSMs) are of growing importance in the world of climate prediction. They are crucial components of larger Earth system models aimed at understanding the effects of land-surface processes on the global carbon cycle. The Joint UK Land Environment Simulator (JULES) is the land-surface model used by the UK Met Office. It has been automatically differentiated using commercial software from FastOpt, resulting in an analytical gradient, or 'adjoint', of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameter sets by calibrating against observations. adJULES presents an opportunity to confront JULES with many different observations and make improvements to the model parameterisation. In the newest version of adJULES, multiple sites can be used in the calibration, giving a generic set of parameters that can be generalised over plant functional types. We present an introduction to the adJULES system and its applications to data from a variety of flux-tower sites. We show that calculation of the second derivative of JULES allows us to produce posterior probability density functions of the parameters, showing how knowledge of the parameter values is constrained by the observations.
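The adjoint-based calibration idea can be illustrated on a toy problem: minimize a sum-of-squares misfit using an analytic gradient. The model below is a deliberate stand-in, not JULES:

```python
import numpy as np

# Toy analogue of adjoint-based calibration: minimize a misfit J(p)
# between a model and observations using its analytic gradient, as
# adJULES does with the FastOpt-generated adjoint of JULES.
t = np.linspace(0.0, 1.0, 50)
obs = 2.0 * np.exp(-1.5 * t)                 # synthetic "observations"

def model(p):
    a, k = p
    return a * np.exp(-k * t)

def cost_and_grad(p):
    a, k = p
    r = model(p) - obs                       # residuals
    j = 0.5 * np.sum(r ** 2)                 # sum-of-squares misfit
    dj_da = np.sum(r * np.exp(-k * t))               # analytic dJ/da
    dj_dk = np.sum(r * (-a * t) * np.exp(-k * t))    # analytic dJ/dk
    return j, np.array([dj_da, dj_dk])

p = np.array([1.0, 1.0])                     # first-guess parameters
for _ in range(1000):                        # plain gradient descent
    j, g = cost_and_grad(p)
    p -= 0.02 * g
print(p, j)
```

The second-derivative (Hessian) of the cost at the optimum gives the curvature from which posterior parameter uncertainties, like those adJULES reports, are derived.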
Wiche, Gregg J.; Lent, Robert M.; Rannie, W. F.
1996-01-01
On the basis of three sediment-based chronologies, Fritz et al. (1994) concluded that during the ’Little Ice Age’ (about AD 1500 to 1850), the Devils Lake Basin generally had less effective moisture (precipitation minus evaporation) and warmer temperatures than at present. In this comment, we argue that historic data indicate that runoff and effective moisture were greater than at present. The largest nineteenth-century floods (AD 1826, 1852 and 1861) were significantly greater than the twentieth-century floods, and flooding in the Red River of the North Basin occurred more frequently from AD 1800 to 1870 than since 1870. Between AD 1776 and 1870, the ratio of wet to dry years was about 2 to 1. Mean temperatures in all seasons were cooler for 1850-70 than for 1931-60. Lake levels of Devils Lake during the first half of the nineteenth century were higher than they are today, and, even when Devils Lake was almost dry, the salinity was less than the ’diatom-inferred’ salinity values that Fritz et al. (1994) estimated for 1800 through about 1850. We acknowledge the importance of high-resolution palaeoclimatic records, but interpretation of these records must be consistent with historic information.
Sugianto, Jessica Z; Stewart, Brian; Ambruzs, Josephine M; Arista, Amanda; Park, Jason Y; Cope-Yokoyama, Sandy; Luu, Hung S
2015-01-01
To implement Lean principles to accommodate expanding volumes of gastrointestinal biopsies and to improve laboratory processes overall. Our continuous improvement (kaizen) project analyzed the current state for gastrointestinal biopsy handling using value-stream mapping for specimens obtained at a 487-bed tertiary care pediatric hospital in Dallas, Texas. We identified non-value-added time within the workflow process, from receipt of the specimen in the histology laboratory to the delivery of slides and paperwork to the pathologist. To eliminate non-value-added steps, we implemented the changes depicted in a revised-state value-stream map. Current-state value-stream mapping identified a total specimen processing time of 507 minutes, of which 358 minutes were non-value-added. This translated to a process cycle efficiency of 29%. Implementation of a revised-state value stream resulted in a total process time reduction to 238 minutes, of which 89 minutes were non-value-added, and an improved process cycle efficiency of 63%. Lean production principles of continuous improvement and waste elimination can be successfully implemented within the clinical laboratory.
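The process cycle efficiencies quoted above follow directly from the value-added and total times:

```python
def process_cycle_efficiency(total_min, non_value_added_min):
    """Lean process cycle efficiency: value-added time / total time."""
    return (total_min - non_value_added_min) / total_min

before = process_cycle_efficiency(507, 358)   # current state
after = process_cycle_efficiency(238, 89)     # revised state
print(f"{before:.0%} -> {after:.0%}")
```

Note that the value-added time (149 minutes) is unchanged between the two states; the efficiency gain comes entirely from eliminating non-value-added time.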
Sugar-Sweetened Beverages Are the Main Sources of Added Sugar Intake in the Mexican Population.
Sánchez-Pimienta, Tania G; Batis, Carolina; Lutter, Chessa K; Rivera, Juan A
2016-09-01
Sugar intake has been associated with an increased prevalence of obesity, other noncommunicable diseases, and dental caries. The WHO recommends that free sugars should be <10% of total energy intake (TEI) and that additional health benefits could be obtained with a reduction below 5% of TEI. The objective of this study was to estimate the total, intrinsic, and added sugar intake in the Mexican diet and to identify the food groups that are the main sources of these sugars. We used data from a national probabilistic survey [ENSANUT (National Health and Nutrition Survey) 2012], which represents 3 geographic regions and urban and rural areas. Dietary information was obtained by administering a 24-h recall questionnaire to 10,096 participants. Total sugar intake was estimated by using the National Institute of Public Health (INSP) food-composition table and an established method to estimate added sugars. The mean intakes of total, intrinsic, and added sugars were 365, 127, and 238 kcal/d, respectively. Added sugars contributed 13% of TEI. Sugar-sweetened beverages (SSBs) were the main source of sugars, contributing 69% of added sugars. Food products high in saturated fat and/or added sugar (HSFAS) were the second main sources of added sugars, contributing 25% of added sugars. The average intake of added sugars in the Mexican diet is higher than WHO recommendations, which may partly explain the high prevalence of obesity and diabetes in Mexico. Because SSBs and HSFAS contribute >94% of total added sugars, strategies to reduce their intake should be strengthened. This includes stronger food labels to warn the consumer about the content of added sugars in foods and beverages. © 2016 American Society for Nutrition.
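The reported figures are internally consistent, as a quick back-of-envelope check shows (the implied total energy intake is approximate because the 13% share is rounded):

```python
# Back-of-envelope check of the abstract's figures: mean intakes in kcal/d
total_sugar, intrinsic, added = 365, 127, 238
assert intrinsic + added == total_sugar

added_share_of_tei = 0.13                   # reported (rounded)
implied_tei = added / added_share_of_tei    # implied total energy intake

ssb_added = 0.69 * added      # added sugar from sugar-sweetened beverages
hsfas_added = 0.25 * added    # from high-saturated-fat/added-sugar foods
print(round(implied_tei), round(ssb_added), round(hsfas_added))
```

At roughly 1800 kcal/d of total energy, added sugars alone exceed the WHO's 10%-of-TEI threshold, which is the abstract's central point.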
Patel, Tejas K; Patel, Parvati B
2018-06-01
The aim of this study was to estimate the prevalence of mortality among patients due to adverse drug reactions that lead to hospitalisation (fatal ADR_Ad), to explore the heterogeneity in its estimation through subgroup analysis of study characteristics, and to identify the system-organ classes and causative drugs involved in fatal ADR_Ad. We identified prospective ADR_Ad-related studies by screening the PubMed and Google Scholar databases with appropriate key terms. We estimated the prevalence of fatal ADR_Ad using a double arcsine method and explored heterogeneity using the following study characteristics: age groups, wards, study region, ADR definitions, ADR identification methods, study duration and sample size. We examined patterns of fatal ADR_Ad and causative drugs. Among 312 full-text articles assessed, 49 studies satisfied the selection criteria and were included in the analysis. The mean prevalence of fatal ADR_Ad was 0.20% (95% CI: 0.13-0.27%; I² = 93%). Age group and study ward were the important heterogeneity modifiers. The mean fatal ADR_Ad prevalence varied from 0.01% in paediatric patients to 0.44% in the elderly. Subgroup analysis showed a higher prevalence of fatal ADR_Ad in intensive care units, emergency departments, multispecialty wards and whole hospitals. Computer-based monitoring systems in combination with other methods detected higher mortality. Intracranial haemorrhage, renal failure and gastrointestinal bleeding accounted for more than 50% of fatal ADR_Ad cases. Warfarin, aspirin, renin-angiotensin system (RAS) inhibitors and digoxin accounted for 60% of fatal ADR_Ad cases. ADR_Ad is an important cause of mortality. Strategies targeting the safer use of warfarin, aspirin, RAS inhibitors and digoxin could reduce the large number of fatal ADR_Ad cases.
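The "double arcsine method" named above is, in most meta-analyses, the Freeman-Tukey variance-stabilising transform for pooling proportions. A minimal sketch with made-up study counts; note the back-transform used here is the simple sin²(t/2) approximation, not the exact Miller inversion:

```python
import math

def ft_double_arcsine(x, n):
    """Freeman-Tukey double-arcsine transform of a proportion x/n."""
    return math.asin(math.sqrt(x / (n + 1))) + math.asin(math.sqrt((x + 1) / (n + 1)))

def pooled_prevalence(events_and_sizes):
    """Fixed-effect inverse-variance pooling on the transformed scale."""
    ts = [ft_double_arcsine(x, n) for x, n in events_and_sizes]
    ws = [n + 0.5 for _, n in events_and_sizes]   # var(t) ~ 1 / (n + 0.5)
    t_bar = sum(w * t for w, t in zip(ws, ts)) / sum(ws)
    return math.sin(t_bar / 2) ** 2               # approximate back-transform

# (fatal ADRs, admissions) per study -- illustrative numbers only
studies = [(2, 1500), (5, 2200), (1, 900)]
print(f"pooled prevalence ~ {pooled_prevalence(studies):.3%}")
```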
Genomic selection for the improvement of meat quality in beef.
Pimentel, E C G; König, S
2012-10-01
Selection index theory was used to compare different selection strategies aiming at the improvement of meat quality in beef cattle. Alternative strategies were compared with a reference scenario with three basic traits in the selection index: BW at 200 d (W200) and 400 d (W400) and muscling score (MUSC). These traits resemble the combination currently used in the German national beef genetic evaluation system. Traits in the breeding goal were defined as the 3 basic traits plus marbling score (MARB), to depict a situation where an established breeding program currently selecting for growth and carcass yield intends to incorporate meat quality in its selection program. Economic weights were either the same for all 4 traits, or doubled or tripled for MARB. Two additional selection criteria for improving MARB were considered: Live animal intramuscular fat content measured by ultrasound (UIMF) as an indicator trait and a genomic breeding value (GEBV) for the target trait directly (gMARB). Results were used to estimate the required number of genotyped animals in an own calibration set for implementing genomic selection focusing on meat quality. Adding UIMF to the basic index increased the overall genetic gain per generation by 15% when the economic weight on MARB was doubled and by 44% when it was tripled. When a genomic breeding value for marbling could be estimated with an accuracy of 0.5, adding gMARB to the index provided larger genetic gain than adding UIMF. Greatest genetic gain per generation was obtained with the scenario containing GEBV for 4 traits (gW200, gW400, gMUSC, and gMARB) when the accuracies of these GEBV were ≥0.7. Adding UIMF to the index substantially improved response to selection for MARB, which switched from negative to positive when the economic weight on MARB was doubled or tripled. 
For all scenarios that contained gMARB in the selection index, the response to selection in MARB was positive for all relative economic weights on MARB, when the accuracy of GEBV was >0.7. Results indicated that setting up a calibration set of ∼500 genotyped animals with carcass phenotypes for MARB could suffice to obtain a larger response to selection than measuring UIMF. If the size of the calibration set is ∼2,500, adding the ultrasound trait to an index containing already the GEBV would bring little benefit, unless the relative economic weight for marbling is much larger than for the other traits.
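Selection index theory, as used in the study above, computes index weights b = P⁻¹Ga from the phenotypic (co)variances P of the index traits, the genetic covariances G between index traits and breeding-goal traits, and the economic weights a. A minimal numeric sketch with made-up 2-trait matrices, not the study's actual parameters:

```python
import numpy as np

# P: phenotypic (co)variances of the index traits (illustrative values)
# G: genetic covariances between index traits and breeding-goal traits
# a: economic weights on the goal traits (second weight doubled, as in
#    the scenarios described above)
P = np.array([[1.0, 0.3],
              [0.3, 1.0]])
G = np.array([[0.4, 0.1],
              [0.1, 0.5]])
a = np.array([1.0, 2.0])

b = np.linalg.solve(P, G @ a)     # index weights: b = P^-1 G a
sigma_I = np.sqrt(b @ P @ b)      # standard deviation of the index
response = G.T @ b / sigma_I      # gain per goal trait, per unit
print(b, response)                # selection intensity
```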
Interpolated Sounding and Gridded Sounding Value-Added Products
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toto, T.; Jensen, M.
Standard Atmospheric Radiation Measurement (ARM) Climate Research Facility sounding files provide atmospheric state data in one dimension of increasing time and height per sonde launch. Many applications require a quick estimate of the atmospheric state at higher time resolution. The INTERPOLATEDSONDE (i.e., Interpolated Sounding) Value-Added Product (VAP) transforms sounding data into continuous daily files on a fixed time-height grid, at 1-minute time resolution, on 332 levels, from the surface up to a limit of approximately 40 km. The grid extends that high so the full height of soundings can be captured; however, most soundings terminate at an altitude between 25 and 30 km, above which no data are provided. Between soundings, the VAP linearly interpolates atmospheric state variables in time for each height level. In addition, INTERPOLATEDSONDE provides relative humidity scaled to microwave radiometer (MWR) observations. The INTERPOLATEDSONDE VAP, a continuous time-height grid of relative-humidity-corrected sounding data, is intended to provide input to higher-order products, such as the Merged Soundings (MERGESONDE; Troyan 2012) VAP, which extends INTERPOLATEDSONDE by incorporating model data. The INTERPOLATEDSONDE VAP is also used to correct gaseous attenuation of radar reflectivity in products such as the KAZRCOR VAP.
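The per-level linear interpolation in time that the VAP performs between launches can be sketched in a few lines. The launch times, levels, and temperatures below are hypothetical, not ARM data:

```python
import numpy as np

# Two sonde launches 6 h apart; one state variable (temperature, K)
# at two height levels (hypothetical values for illustration)
launch_minutes = np.array([0.0, 360.0])
temp_at_level = np.array([[288.0, 284.0],    # level 1: T at each launch
                          [270.0, 268.0]])   # level 2

# Interpolate each height level onto a 1-minute time grid
grid_minutes = np.arange(0.0, 361.0, 1.0)
interp = np.array([np.interp(grid_minutes, launch_minutes, temp_at_level[k])
                   for k in range(temp_at_level.shape[0])])
print(interp[0, 180])   # midway between launches: (288 + 284) / 2
```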
The effect of sequential information on consumers' willingness to pay for credence food attributes.
Botelho, A; Dinis, I; Lourenço-Gomes, L; Moreira, J; Costa Pinto, L; Simões, O
2017-11-01
The use of experimental methods to determine consumers' willingness to pay for "quality" food has been gaining importance in scientific research. In most of the empirical literature on this issue, the experimental design starts with blind tasting, after which information is introduced. It is assumed that this approach allows eliciting the real value that consumers attach to each of the features added through specific information. In this paper, the starting hypothesis is that this technique overestimates the weight of the features introduced by information on consumers' willingness to pay when compared to a real market situation, in which consumers are confronted with all the information at once. Data obtained through contingent valuation in an in-store setting were used to estimate a hedonic model aimed at assessing consumers' willingness to pay (WTP) for the feature "geographical origin of the variety" of pears and apples in different information scenarios: i) blind tasting followed by extrinsic information and ii) full information provided at once. The results show that, in fact, features are valued more highly when gradually added to background information than when consumers receive all the information from the beginning. Copyright © 2017 Elsevier Ltd. All rights reserved.
2010-01-01
Background Alzheimer's Disease (AD) affects a growing proportion of the population each year. Novel therapies on the horizon may slow the progress of AD symptoms and avoid cases altogether. Initiating treatment for the underlying pathology of AD would ideally be based on biomarker screening tools identifying pre-symptomatic individuals. Early-stage modeling provides estimates of potential outcomes and informs policy development. Methods A time-to-event (TTE) simulation provided estimates of screening asymptomatic patients in the general population age ≥55 and treatment impact on the number of patients reaching AD. Patients were followed from AD screen until all-cause death. Baseline sensitivity and specificity were 0.87 and 0.78, with treatment on positive screen. Treatment slowed progression by 50%. Events were scheduled using literature-based age-dependent incidences of AD and death. Results The base case results indicated increased AD free years (AD-FYs) through delays in onset and a reduction of 20 AD cases per 1000 screened individuals. Patients completely avoiding AD accounted for 61% of the incremental AD-FYs gained. Total years of treatment per 1000 screened patients was 2,611. The number-needed-to-screen was 51 and the number-needed-to-treat was 12 to avoid one case of AD. One-way sensitivity analysis indicated that duration of screening sensitivity and rescreen interval impact AD-FYs the most. A two-way sensitivity analysis found that for a test with an extended duration of sensitivity (15 years) the number of AD cases avoided was 6,000-7,000 cases for a test with higher sensitivity and specificity (0.90,0.90). Conclusions This study yielded valuable parameter range estimates at an early stage in the study of screening for AD. Analysis identified duration of screening sensitivity as a key variable that may be unavailable from clinical trials. PMID:20433705
Furiak, Nicolas M; Klein, Robert W; Kahle-Wrobleski, Kristin; Siemers, Eric R; Sarpong, Eric; Klein, Timothy M
2010-04-30
Alzheimer's Disease (AD) affects a growing proportion of the population each year. Novel therapies on the horizon may slow the progress of AD symptoms and avoid cases altogether. Initiating treatment for the underlying pathology of AD would ideally be based on biomarker screening tools identifying pre-symptomatic individuals. Early-stage modeling provides estimates of potential outcomes and informs policy development. A time-to-event (TTE) simulation provided estimates of screening asymptomatic patients in the general population age ≥55 and treatment impact on the number of patients reaching AD. Patients were followed from AD screen until all-cause death. Baseline sensitivity and specificity were 0.87 and 0.78, with treatment on positive screen. Treatment slowed progression by 50%. Events were scheduled using literature-based age-dependent incidences of AD and death. The base case results indicated increased AD free years (AD-FYs) through delays in onset and a reduction of 20 AD cases per 1000 screened individuals. Patients completely avoiding AD accounted for 61% of the incremental AD-FYs gained. Total years of treatment per 1000 screened patients was 2,611. The number-needed-to-screen was 51 and the number-needed-to-treat was 12 to avoid one case of AD. One-way sensitivity analysis indicated that duration of screening sensitivity and rescreen interval impact AD-FYs the most. A two-way sensitivity analysis found that for a test with an extended duration of sensitivity (15 years) the number of AD cases avoided was 6,000-7,000 cases for a test with higher sensitivity and specificity (0.90, 0.90). This study yielded valuable parameter range estimates at an early stage in the study of screening for AD. Analysis identified duration of screening sensitivity as a key variable that may be unavailable from clinical trials.
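The number-needed-to-screen above is close to the textbook reciprocal of the absolute risk reduction. With the rounded figure of 20 cases avoided per 1000 screened, the simple formula gives 50 rather than the model's reported 51, so the simulation's underlying estimate is evidently not exactly 20/1000. An illustrative check:

```python
# Textbook number-needed-to-screen: reciprocal of the absolute risk
# reduction per screened person.
cases_avoided_per_1000 = 20            # rounded figure from the simulation
nns = 1000 / cases_avoided_per_1000    # = 50; the model itself reports 51
print(nns)
```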
Economic impact of university veterinary diagnostic laboratories: A case study.
Schulz, Lee L; Hayes, Dermot J; Holtkamp, Derald J; Swenson, David A
2018-03-01
Veterinary diagnostic laboratories (VDLs) play a significant role in the prevention and mitigation of endemic animal diseases and serve an important role in surveillance of, and the response to, outbreaks of transboundary and emerging animal diseases. They also allow for business continuity in livestock operations and help improve human health. Despite these critical societal roles, there is no academic literature on the economic impact of VDLs. We present a case study on the economic impact of the Iowa State University Veterinary Diagnostic Laboratory (ISUVDL). We use economic contribution analysis coupled with a stakeholder survey to estimate the impact. Results suggest that the ISUVDL is responsible for $2,162.46 million in direct output, $2,832.45 million in total output, $1,158.19 million in total value added, and $31.79 million in state taxes in normal years. In an animal health emergency this increases to $8,446.21 million in direct output, $11,063.06 million in total output, $4,523.70 million in total value added, and $124.15 million in state taxes. The ISUVDL receives $4 million annually as a direct state government appropriation for operating purposes. The $31.79 million in state taxes in normal years and the $124.15 million in state taxes in an animal health emergency equates to a 795% and 3104% return on investment, respectively. Estimates of the economic impact of the ISUVDL provide information to scientists, administrators, and policymakers regarding the efficacy and return on investment of VDLs. Copyright © 2018 Elsevier B.V. All rights reserved.
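The return-on-investment percentages above are the state tax contribution divided by the $4 million annual appropriation. A one-line check using the abstract's figures (in millions of dollars):

```python
def roi_percent(state_taxes_m, appropriation_m):
    """Return on investment: taxes generated per dollar appropriated, in %."""
    return state_taxes_m / appropriation_m * 100

normal = roi_percent(31.79, 4.0)       # ~795% in normal years
emergency = roi_percent(124.15, 4.0)   # ~3104% in an animal health emergency
print(round(normal), round(emergency))
```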
Azam, Muhammad; Khan, Abdul Qayyum
2017-07-01
This study examines the impact of economic growth, corruption, health, and poverty on environmental degradation for three ASEAN countries, namely Indonesia, Malaysia, and Thailand, using annual data over the period 1994-2014. The relationship between environmental degradation (pollution), measured by carbon dioxide (CO2) emissions, and economic growth is examined along with some other variables, namely health expenditure, poverty, agriculture value added growth, industrial value added growth, and corruption. The ordinary least squares (OLS) method is applied as the analytical technique for parameter estimation. The empirical results reveal that almost all variables are statistically significant at the 5% level of significance, whereby the test rejects the null hypothesis of non-cointegration, indicating that all variables play an important role in affecting the environment across countries. Empirical results also indicate that economic growth has a significantly positive impact, while health expenditure has a significantly negative impact, on the environment. Corruption has a significantly positive effect on the environment in the case of Malaysia, while for Indonesia and Thailand the results are insignificant. However, in the individual analyses across countries, the regression estimates suggest that economic growth has a significantly positive relationship with environmental degradation for Indonesia, while the relationship is insignificantly negative for Malaysia and insignificantly positive for Thailand during the period under study. The empirical findings suggest that policy-makers need to promote environmentally friendly technology in order to curb unregulated pollution, that steady population transfers from rural to urban areas deserve attention, and that poverty alleviation and better health provision can also help to improve the environment.
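The OLS estimation named above can be sketched in a few lines. The data below are synthetic and the coefficients are invented purely to illustrate the sign pattern reported in the abstract (growth positive, health expenditure negative); this is not the study's dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 21                                   # annual data, 1994-2014
growth = rng.normal(5.0, 1.5, n)         # synthetic regressors (illustration)
health = rng.normal(3.0, 0.8, n)
# Synthetic CO2 series with a positive growth effect and a negative
# health-expenditure effect, plus small noise
co2 = 0.6 * growth - 0.4 * health + rng.normal(0.0, 0.1, n)

X = np.column_stack([np.ones(n), growth, health])
beta, *_ = np.linalg.lstsq(X, co2, rcond=None)   # OLS: (X'X)^-1 X'y
print(beta)   # intercept, growth effect (+), health effect (-)
```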
Assessing the added value of ensemble forecast in a drought early warning
NASA Astrophysics Data System (ADS)
Calmanti, Sandro; Bosi, Lorenzo; Fernandez, Jesus; De Felice, Matteo
2015-04-01
The EU-FP7 project EUPORIAS is developing a prototype climate service to enhance the existing food security drought early warning system in Ethiopia. The Livelihoods, Early Assessment and Protection (LEAP) system is the Government of Ethiopia's national food security early warning system, established with the support of WFP and the World Bank in 2008. LEAP was designed to increase the predictability and timeliness of response to drought-related food crises in Ethiopia. It combines early warning with contingency planning and contingency funding to allow the government, WFP and other partners to provide early assistance in anticipation of an impending catastrophe. Currently, LEAP uses satellite-based rainfall estimates to monitor drought conditions and to compute needs. The main aim of the prototype is to use seasonal hindcast data to assess the added value of using ensemble rainfall forecasts to estimate the cost of assistance for populations hit by major droughts. We outline the decision-making process that is informed by the prototype climate service, and we discuss the analysis of the expected skill of the available rainfall forecast data over Ethiopia. One critical outcome of this analysis is the strong dependence of the expected skill on the observational estimate assumed as reference. A preliminary evaluation of the full prototype products (drought indices and needs estimates) using hindcast data will also be presented.
Roze, S; Ferrières, J; Bruckert, E; Van Ganse, E; Chapman, M J; Liens, D; Renaudin, C
2007-11-01
To evaluate the cost-effectiveness of raising high-density lipoprotein cholesterol (HDL-C) with add-on nicotinic acid in statin-treated patients with coronary heart disease (CHD) and low HDL-C, from the French healthcare system perspective. Computer simulation economic modelling incorporating two decision analytic submodels was used. The first submodel generated a cohort of 2000 patients and simulated lipid changes using baseline characteristics and treatment effects from the ARterial Biology for the Investigation of the Treatment Effects of Reducing cholesterol (ARBITER 2) study. Prolonged-release (PR) nicotinic acid (1 g/day) was added in patients with HDL-C < 40 mg/dl (1.03 mmol/l) on statin alone. The second submodel used standard Markov techniques to evaluate long-term clinical and economic outcomes based on Framingham risk estimates. Direct medical costs were accounted from a third party payer perspective [2004 Euros (euro)] and discounted by 3%. Addition of PR nicotinic acid to statin therapy resulted in substantial health gain and increased life expectancy, at a cost well within the threshold (< 50,000 euros per life year gained) considered good value for money in Western Europe. Raising HDL-C by adding PR nicotinic acid to statin therapy in CHD patients was cost-effective in France at a level considered to represent good value for money by reimbursement authorities in Europe. This strategy was highly cost-effective in CHD patients with type 2 diabetes.
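A Markov cost-effectiveness evaluation like the one described above boils down to discounted life-years under competing transition rates and an incremental cost-effectiveness ratio. The mortality rates, cost, and 30-year horizon below are invented for illustration; only the 3% discount rate and the <50,000 EUR/life-year threshold come from the abstract:

```python
def life_years(annual_mortality, horizon=30, disc=0.03):
    """Discounted life-years for a cohort with constant annual mortality."""
    alive, total = 1.0, 0.0
    for year in range(horizon):
        alive *= (1.0 - annual_mortality)
        total += alive / (1.0 + disc) ** (year + 1)
    return total

# Hypothetical rates: add-on therapy lowers annual CHD mortality 6% -> 5%
ly_statin = life_years(0.06)
ly_addon = life_years(0.05)
incr_cost = 6000.0                      # assumed discounted add-on cost (EUR)
icer = incr_cost / (ly_addon - ly_statin)
print(f"ICER ~ {icer:.0f} EUR per life-year gained")
```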
A Brainnetome Atlas Based Mild Cognitive Impairment Identification Using Hurst Exponent
Long, Zhuqing; Jing, Bin; Guo, Ru; Li, Bo; Cui, Feiyi; Wang, Tingting; Chen, Hongwen
2018-01-01
Mild cognitive impairment (MCI), which generally represents the transition state between normal aging and the early changes related to Alzheimer's disease (AD), has drawn increasing attention from neuroscientists because efficient AD treatments need early initiation, ahead of irreversible brain tissue damage. Effective MCI identification methods are therefore urgently needed and may be of great importance for the clinical intervention of AD. In this article, rescaled range analysis, which can effectively detect the temporal complexity of a time series, was utilized to calculate the Hurst exponent (HE) of functional magnetic resonance imaging (fMRI) data at the voxel level from 64 MCI patients and 60 healthy controls (HCs). The average HE values of each region of interest (ROI) in the brainnetome atlas were then extracted and compared between MCI and HC. Finally, the abnormal average HE values were adopted as the classification features for a proposed support vector machine (SVM)-based identification algorithm, and the classification performance was estimated with leave-one-out cross-validation (LOOCV). Our results indicated 83.1% accuracy, 82.8% sensitivity and 83.3% specificity, and an area under the curve of 0.88, suggesting that the HE index could serve as an effective feature for MCI identification. Furthermore, the abnormal HE brain regions in MCI were predominantly involved in the left middle frontal gyrus, right hippocampus, bilateral parahippocampal gyrus, bilateral amygdala, left cingulate gyrus, left insular gyrus, left fusiform gyrus, left superior parietal gyrus, left orbital gyrus and left basal ganglia. PMID:29692721
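The Hurst exponent via rescaled range (R/S) analysis, as used above, fits log(R/S) against log(window length). A minimal sketch on synthetic white noise (for which H should be near 0.5; small-sample R/S estimates are known to be biased slightly upward), not on fMRI data:

```python
import numpy as np

def hurst_rs(x, min_win=8):
    """Estimate the Hurst exponent by rescaled-range (R/S) analysis:
    the slope of log(mean R/S) versus log(window length)."""
    x = np.asarray(x, float)
    n = len(x)
    sizes, rs_vals = [], []
    win = min_win
    while win <= n // 2:
        rs = []
        for start in range(0, n - win + 1, win):
            seg = x[start:start + win]
            dev = np.cumsum(seg - seg.mean())   # cumulative deviate series
            r = dev.max() - dev.min()           # range
            s = seg.std()                       # standard deviation
            if s > 0:
                rs.append(r / s)
        sizes.append(win)
        rs_vals.append(np.mean(rs))
        win *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return slope

rng = np.random.default_rng(1)
h = hurst_rs(rng.normal(size=4096))   # white noise: H close to 0.5
print(h)
```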
ERIC Educational Resources Information Center
Milanowski, Anthony
2011-01-01
Although many researchers and policy analysts (e.g., Harris, 2011; Glazerman et al., 2010) consider value-added to be the state of the art in school and teacher productivity measurement, only a minority of Teacher Incentive Fund (TIF) Round 1 and 2 grantees used value-added as a measure of school or teacher performance. Fourteen of the 34 grantees…
Estimating skin blood saturation by selecting a subset of hyperspectral imaging data
NASA Astrophysics Data System (ADS)
Ewerlöf, Maria; Salerud, E. Göran; Strömberg, Tomas; Larsson, Marcus
2015-03-01
Skin blood haemoglobin saturation (s_b) can be estimated with hyperspectral imaging using the wavelength (λ) range of 450-700 nm, where haemoglobin absorption displays distinct spectral characteristics. Depending on the image size and photon transport algorithm, computations may be demanding. Therefore, this work aims to evaluate subsets with a reduced number of wavelengths for s_b estimation. White Monte Carlo simulations are performed using a two-layered tissue model with discrete values for epidermal thickness (T_epi) and the reduced scattering coefficient (μ's), mimicking an imaging setup. A detected-intensity look-up table is calculated for a range of model parameter values relevant to human skin, adding absorption effects in the post-processing. Skin model parameters, including absorbers, are: μ's(λ), T_epi, haemoglobin saturation (s_b), tissue fraction blood (f_blood) and tissue fraction melanin (f_mel). The skin model paired with the look-up table allows spectra to be calculated swiftly. Three inverse models with varying numbers of free parameters are evaluated: A(s_b, f_blood), B(s_b, f_blood, f_mel) and C (all parameters free). Fourteen wavelength candidates are selected by analysing the maximal spectral sensitivity to s_b while minimizing the sensitivity to f_blood. All possible combinations of these candidates with three, four and 14 wavelengths, as well as the full spectral range, are evaluated for estimating s_b for 1000 randomly generated evaluation spectra. The results show that the simplified models A and B estimated s_b accurately using four wavelengths (mean error 2.2% for model B). As the number of wavelengths increased, the model complexity needed to be increased to avoid poor estimations.
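The look-up-table inversion described above can be sketched with a toy forward model: spectra are precomputed on a parameter grid, and estimation picks the grid entry closest to a measured spectrum. The extinction coefficients and parameter ranges below are made up; a real implementation would use tabulated haemoglobin spectra and the Monte Carlo look-up table:

```python
import numpy as np

# Toy forward model: absorbance at four wavelengths as a linear mix of
# oxy-/deoxy-haemoglobin extinction (hypothetical coefficients),
# parameterised by saturation s_b and blood tissue fraction f_blood.
eps_hbo2 = np.array([0.8, 0.3, 1.1, 0.5])
eps_hb = np.array([1.2, 0.9, 0.4, 0.7])

def spectrum(s_b, f_blood):
    return f_blood * (s_b * eps_hbo2 + (1 - s_b) * eps_hb)

# Precompute a look-up table over the parameter grid, then invert a
# measurement by nearest spectrum (least squares over the wavelengths)
grid = [(s, f) for s in np.linspace(0, 1, 101)
               for f in np.linspace(0.005, 0.05, 10)]
table = np.array([spectrum(s, f) for s, f in grid])

measured = spectrum(0.70, 0.02)   # noise-free synthetic "measurement"
best = np.argmin(((table - measured) ** 2).sum(axis=1))
print(grid[best])                 # recovers (0.70, 0.02)
```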
Tree biomass estimation of Chinese fir (Cunninghamia lanceolata) based on Bayesian method.
Zhang, Xiongqing; Duan, Aiguo; Zhang, Jianguo
2013-01-01
Chinese fir (Cunninghamia lanceolata (Lamb.) Hook.) is the most important conifer species for timber production, with a huge distribution area in southern China. Accurate estimation of biomass is required for accounting and monitoring of Chinese forest carbon stocks. In this study, the allometric equation W = a(D²H)^b was used to analyze tree biomass of Chinese fir. The common methods for estimating the allometric model have taken the classical approach based on the frequency interpretation of probability. However, many different biotic and abiotic factors introduce variability into the Chinese fir biomass model, suggesting that its parameters are better represented by probability distributions than by the fixed values of the classical method. To deal with this problem, a Bayesian method was used for estimating the Chinese fir biomass model. In the Bayesian framework, two kinds of priors were introduced: non-informative priors and informative priors. For the informative priors, 32 biomass equations of Chinese fir were collected from the published literature, and the parameter distributions from that literature were regarded as prior distributions in the Bayesian model for estimating Chinese fir biomass. The Bayesian method with informative priors performed better than the non-informative priors and the classical method, providing a reasonable approach for estimating Chinese fir biomass.
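The model W = a(D²H)^b linearises to ln W = ln a + b·ln(D²H), and with a Gaussian prior on b (e.g. pooled from published equations, as above) the informative-prior idea reduces, in the simplest conjugate case, to a MAP estimate that shrinks the OLS slope towards the prior mean. The data, prior, and noise level below are synthetic, a sketch of the mechanism rather than the study's hierarchical model:

```python
import numpy as np

rng = np.random.default_rng(7)
d2h = rng.uniform(50, 4000, 40)                    # synthetic D^2*H values
w = 0.05 * d2h ** 0.85 * rng.lognormal(0, 0.1, 40) # true b = 0.85

x, y = np.log(d2h), np.log(w)                      # linearised model
# Ordinary least squares slope (classical estimate)
b_ols = np.cov(x, y, bias=True)[0, 1] / x.var()
# MAP slope with prior b ~ N(0.9, 0.05^2) and known noise variance,
# using centred data so the intercept drops out
sigma2, prior_mean, prior_var = 0.1 ** 2, 0.9, 0.05 ** 2
xc, yc = x - x.mean(), y - y.mean()
b_map = (xc @ yc / sigma2 + prior_mean / prior_var) / \
        (xc @ xc / sigma2 + 1 / prior_var)
print(b_ols, b_map)   # b_map lies between the OLS fit and the prior mean
```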
32 CFR 644.114 - Acquisition by declaration of taking.
Code of Federal Regulations, 2012 CFR
2012-07-01
... amount for the entire interest holding to have added value, for operational or other reasons, because it... determination be made as to whether the value of growing crops should be added to the value of the land... purchase due to failure to reach an agreement with the owners as to value, inability to contact the owners...
32 CFR 644.114 - Acquisition by declaration of taking.
Code of Federal Regulations, 2014 CFR
2014-07-01
... amount for the entire interest holding to have added value, for operational or other reasons, because it... determination be made as to whether the value of growing crops should be added to the value of the land... purchase due to failure to reach an agreement with the owners as to value, inability to contact the owners...