Science.gov

Sample records for meaningful assessment method

  1. Meaningful Assessment: An Annotated Bibliography.

    ERIC Educational Resources Information Center

    Thrond, Mary A.

    The annotated bibliography contains citations of nine references on alternative student assessment methods in second language programs, particularly at the secondary school level. The references include a critique of conventional reading comprehension assessment, a discussion of performance assessment, a proposal for a multi-trait, multi-method…

  2. The Role of Leadership and Culture in Creating Meaningful Assessment: A Mixed Methods Case Study

    ERIC Educational Resources Information Center

    Guetterman, Timothy C.; Mitchell, Nancy

    2016-01-01

    With increased demands for institutional accountability and improved student learning, involvement in assessment has become a fundamental role of higher education faculty (Rhodes, 2010). However, faculty members and administrators often question whether assessment efforts do indeed improve student learning (Hutchings, 2010). This mixed methods…

  3. Assessing Meaningful Impact: Moving Beyond the Numbers

    NASA Astrophysics Data System (ADS)

    Buxner, S.; Bass, K.; Castori, P.; Wenger, M.

    2014-07-01

    Evaluation of program impacts is an essential part of program implementation, from proposal writing and justifying expenses to funders to making improvements and demonstrating a program's value to stakeholders. Often, funding agencies ask for metrics but may not ask for more substantive outcomes. At the same time, funding agencies are asking for more and more evidence of program impacts, raising broad questions about which types of assessment are most appropriate for program evaluation. There is no one-size-fits-all solution for assessing meaningful impacts; appropriate assessment depends on program goals, audience, activities, and resources. Panelists led a discussion about how to choose meaningful assessment for different situations, presenting examples from their own work. One of the best indicators of the value of a teacher professional development workshop is whether teachers can apply what they have learned to their classroom practice. Kristin Bass spoke about her experience documenting classroom implementation for the Galileo Educator Network (GEN) professional development project.

  4. Rising to the Challenge: Meaningful Assessment of Student Learning

    ERIC Educational Resources Information Center

    Association of Public and Land-grant Universities, 2010

    2010-01-01

    "Rising to the Challenge: Meaningful Assessment of Student Learning" was envisioned in response to a 2007 request for proposals from the U.S. Department of Education's Fund for Improvement of Post Secondary Education (FIPSE). FIPSE called for national, consortial contributions to improving the knowledge and abilities to assess student…

  5. Meaningful Learning and Summative Assessment in Geography Education: An Analysis in Secondary Education in the Netherlands

    ERIC Educational Resources Information Center

    Bijsterbosch, Erik; van der Schee, Joop; Kuiper, Wilmad

    2017-01-01

    Enhancing meaningful learning is an important aim in geography education. Also, assessment should reflect this aim. Both formative and summative assessments contribute to meaningful learning when more complex knowledge and cognitive processes are assessed. The internal school-based geography examinations of the final exam in pre-vocational…

  6. Implementing meaningful, educative curricula, and assessments in complex school environments

    PubMed Central

    Ennis, Catherine D.

    2015-01-01

    This commentary uses the lens of curricular implementation to consider issues and opportunities afforded by the papers in this special edition. While it is interesting to envision innovative approaches to physical education, actually implementing changes in the complex institutional school environment is exceptionally challenging. These authors have done an excellent job presenting viable solutions and foregrounding challenges. Yet, without a concerted effort to invite teachers to engage with us in this process, our implementation initiatives may not enhance the meaningful and educative process that these scholars envision for physical education. PMID:25960685

  7. The Meaningful Activity Participation Assessment: a measure of engagement in personally valued activities.

    PubMed

    Eakman, Aaron M; Carlson, Mike E; Clark, Florence A

    2010-01-01

    The Meaningful Activity Participation Assessment (MAPA), a recently developed 28-item tool designed to measure the meaningfulness of activity, was tested in a sample of 154 older adults. The MAPA evidenced a sufficient level of internal consistency and test-retest reliability and correlated as theoretically predicted with the Life Satisfaction Index-Z, the Satisfaction with Life Scale, the Engagement in Meaningful Activities Survey, the Purpose in Life Test, the Center for Epidemiologic Studies Depression Inventory and the Rand SF-36v2 Health Survey subscales. Zero-order correlations consistently demonstrated meaningful relationships between the MAPA and scales of psychosocial well-being and health-related quality of life. Results from multiple regression analyses further substantiated these findings, as greater meaningful activity participation was associated with better psychological well-being and health-related quality of life. The MAPA appears to be a reliable and valid measure of meaningful activity, incorporating both subjective and objective indicators of activity engagement.
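
    A minimal sketch of the internal-consistency check reported for the 28-item MAPA, using simulated placeholder data rather than the study's data; Cronbach's alpha is computed from a respondents-by-items score matrix. The matrix size (154 x 28) mirrors the sample and item counts above, but the scores themselves are random.

      import numpy as np

      rng = np.random.default_rng(1)
      # 154 respondents x 28 items; random placeholder scores, not the MAPA data.
      scores = rng.integers(0, 5, size=(154, 28)).astype(float)

      def cronbach_alpha(items):
          """Cronbach's alpha for an (n respondents x k items) score matrix."""
          k = items.shape[1]
          item_variances = items.var(axis=0, ddof=1).sum()
          total_variance = items.sum(axis=1).var(ddof=1)
          return (k / (k - 1)) * (1.0 - item_variances / total_variance)

      # Random data yield alpha near 0; values >= 0.7 are conventionally acceptable.
      print(f"alpha = {cronbach_alpha(scores):.2f}")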

  8. The Meaningful Activity Participation Assessment: A Measure of Engagement in Personally Valued Activities

    ERIC Educational Resources Information Center

    Eakman, Aaron M.; Carlson, Mike E.; Clark, Florence A.

    2010-01-01

    The Meaningful Activity Participation Assessment (MAPA), a recently developed 28-item tool designed to measure the meaningfulness of activity, was tested in a sample of 154 older adults. The MAPA evidenced a sufficient level of internal consistency and test-retest reliability and correlated as theoretically predicted with the Life Satisfaction…

  9. Toward meaningful end points of biodiversity in life cycle assessment.

    PubMed

    Curran, Michael; de Baan, Laura; De Schryver, An M; Van Zelm, Rosalie; Hellweg, Stefanie; Koellner, Thomas; Sonnemann, Guido; Huijbregts, Mark A J

    2011-01-01

    Halting current rates of biodiversity loss will be a defining challenge of the 21st century. To assess the effectiveness of strategies to achieve this goal, indicators and tools are required that monitor the driving forces of biodiversity loss, the changing state of biodiversity, and evaluate the effectiveness of policy responses. Here, we review the use of indicators and approaches to model biodiversity loss in Life Cycle Assessment (LCA), a methodology used to evaluate the cradle-to-grave environmental impacts of products. We find serious conceptual shortcomings in the way models are constructed, with scale considerations largely absent. Further, there is a disproportionate focus on indicators that reflect changes in compositional aspects of biodiversity, mainly changes in species richness. Functional and structural attributes of biodiversity are largely neglected. Taxonomic and geographic coverage remains problematic, with the majority of models restricted to one or a few taxonomic groups and geographic regions. On a more general level, three of the five drivers of biodiversity loss as identified by the Millennium Ecosystem Assessment are represented in current impact categories (habitat change, climate change and pollution), while two are missing (invasive species and overexploitation). However, methods across all drivers can be greatly improved. We discuss these issues and make recommendations for future research to better reflect biodiversity loss in LCA.

  10. Blending Assessment into Instruction: Practical Applications and Meaningful Results

    ERIC Educational Resources Information Center

    Wright, Michael T.; van der Mars, Hans

    2004-01-01

    Since engagement in physical activity is now identified as an important outcome for students, teachers need to assess their students in ways that measure that behavior. Assessment serves many purposes in an educational setting. It can provide feedback, drive instructional needs, and evaluate outcomes of both students and programs. If done…

  11. Rigorous, Meaningful and Robust: Practical Ways Forward for Assessment

    ERIC Educational Resources Information Center

    Harrison, Simon

    2004-01-01

    How do we know how good our students are at history? For that matter, how precisely do we really know what "good" at history even means? Even harder, how does our assessment of our students' attainment fit in with the National Curriculum Levels for Key Stage 3? Simon Harrison has led a project to help history teachers in Hampshire to add…

  12. Exploring Novel Tools for Assessing High School Students' Meaningful Understanding of Organic Reactions

    ERIC Educational Resources Information Center

    Vachliotis, Theodoros; Salta, Katerina; Vasiliou, Petroula; Tzougraki, Chryssa

    2011-01-01

    Systemic assessment questions (SAQs) are novel assessment tools used in the context of the Systemic Approach to Teaching and Learning (SATL) model. The purpose of this model is to enhance students' meaningful understanding of scientific concepts by use of constructivist concept mapping procedures, which emphasize the development of systems…

  13. A Rubric for Assessing Teachers' Lesson Activities with Respect to TPACK for Meaningful Learning with ICT

    ERIC Educational Resources Information Center

    Koh, Joyce Hwee Ling

    2013-01-01

    Teachers' technological pedagogical content knowledge (TPACK) for meaningful learning with ICT describes their knowledge for designing ICT lesson activities with respect to five dimensions: active, constructive, authentic, intentional, and cooperative. The ICT lesson activities designed by teachers can be assessed to determine the strengths and…

  14. Meaningful Assessment of Problem-Solving Activities in the Classroom: Some Exemplars. Research Monograph No. 2.

    ERIC Educational Resources Information Center

    Cheung, K. C.; And Others

    This collection of four papers deals with problem solving and the measurement of problem solving. "Climbing Up the Competence Ladder: Some Thoughts on Meaningful Assessment of Problem-Solving Tasks in the Classroom" by K. C. Cheung uses the metaphor of a competence ladder to represent the problem-solving continuum with progressive…

  15. Does alignment of constructivist teaching, curriculum, and assessment strategies promote meaningful learning?

    NASA Astrophysics Data System (ADS)

    Jimarez, Teresa

    Despite our national efforts to attract more students to the science, technology, engineering, and mathematics (STEM) fields, the number of students continues to be small. Empirical studies have suggested that in order to actively engage students in the science learning processes, lessons need to be designed which consider students' prior experiences and provide a sound curriculum, within an environment promoting social interaction---that is, allowing for sharing and negotiation of those ideas which promote reflective thinking. These premises require an embedded assessment system that continuously provides feedback to both student and teacher. This technique allows adaptation and modification of lessons to better facilitate conceptual understanding. This study focused on the use of constructivist strategies that, when aligned, promoted conceptual understanding while facilitating development of science process skills. Skill development leads to meaningful learning, known to promote a change of attitude toward science. A mixed research design embedded in a case study approach was used to understand the complexity of the variables examined in this study. Both quantitative and qualitative methods of data collection were used to strengthen the validity and interpretation of the findings. Students from one of three ninth-grade physical science classes were selected for this study. The 29 students (13 boys and 16 girls) were mostly of Hispanic background. The analysis of data suggested that the use of constructivist strategies promotes conceptual understanding of science concepts, development of science process skills, and a change of attitude towards science. This study concluded that selecting teaching and multiple assessment strategies is vital to engage students in science careers. Due to the limited nature of this case study, the researcher recommends a replication or follow-up with a different teacher and school, including a control

  16. Assessing Readiness for Meeting Meaningful Use: Identifying Electronic Health Record Functionality and Measuring Levels of Adoption

    PubMed Central

    Bowes, Watson A.

    2010-01-01

    With the passage, in 2009, of the Health Information Technology for Economic and Clinical Health Act (HITECH), part of the American Recovery and Reinvestment Act (ARRA), over $19 billion was targeted for healthcare information technology (HIT) projects to accelerate the adoption of electronic health records (EHRs). Intermountain Healthcare facilities and providers are eligible for approximately $93 million in incentives from HITECH, if we use a “certified EHR” in a “meaningful way”. This paper describes the current state of our EHR functions and EHR adoption compared to those required by the HITECH act. We describe the method used to determine the gaps between our EHR functions and EHR adoption. Our analysis identified 17 significant EHR enhancements needed to become certified and identified 42 meaningful use workflow gaps. PMID:21346942

  17. Towards a meaningful assessment of marine ecological impacts in life cycle assessment (LCA).

    PubMed

    Woods, John S; Veltman, Karin; Huijbregts, Mark A J; Verones, Francesca; Hertwich, Edgar G

    2016-01-01

    Human demands on marine resources and space are currently unprecedented and concerns are rising over observed declines in marine biodiversity. A quantitative understanding of the impact of industrial activities on the marine environment is thus essential. Life cycle assessment (LCA) is a widely applied method for quantifying the environmental impact of products and processes. LCA was originally developed to assess the impacts of land-based industries on mainly terrestrial and freshwater ecosystems. As such, impact indicators for major drivers of marine biodiversity loss are currently lacking. We review quantitative approaches for cause-effect assessment of seven major drivers of marine biodiversity loss: climate change, ocean acidification, eutrophication-induced hypoxia, seabed damage, overexploitation of biotic resources, invasive species and marine plastic debris. Our review shows that impact indicators can be developed for all identified drivers, albeit at different levels of coverage of cause-effect pathways and variable levels of uncertainty and spatial coverage. Modeling approaches to predict the spatial distribution and intensity of human-driven interventions in the marine environment are relatively well-established and can be employed to develop spatially-explicit LCA fate factors. Modeling approaches to quantify the effects of these interventions on marine biodiversity are less well-developed. We highlight specific research challenges to facilitate a coherent incorporation of marine biodiversity loss in LCA, thereby making LCA a more comprehensive and robust environmental impact assessment tool. Research challenges of particular importance include i) incorporation of the non-linear behavior of global circulation models (GCMs) within an LCA framework and ii) improving spatial differentiation, especially the representation of coastal regions in GCMs and ocean-carbon cycle models.

  18. Mission-driven, Manageable and Meaningful Assessment of an Undergraduate Neuroscience Program

    PubMed Central

    Muir, Gary M.

    2015-01-01

    Academia has recently been under mounting pressure to increase accountability and intentionality in instruction through development of student “intended learning outcomes” (ILOs) developed at multiple levels (e.g., course, program, major, and even institution). Once these learning goals have been determined, then classroom instruction can be purposefully designed to map onto those intended outcomes in a “backward design” process (Wiggins and McTighe, 2005). The ongoing challenge with any such process, however, is in determining one’s effectiveness in achieving these intended learning goals, so it is critical that efficient tools can be developed that enable these goals to be assessed. In addition, an important requirement of any ILOs is that they are mission-driven, meaningful and parsed in such a way that they can be used to obtain evidence in a manageable way. So how can we effectively assess these outcomes in our students? This paper describes key factors to consider in the planning and implementation of assessment for an undergraduate neuroscience program. PMID:26240530

  19. Mission-driven, Manageable and Meaningful Assessment of an Undergraduate Neuroscience Program.

    PubMed

    Muir, Gary M

    2015-01-01

    Academia has recently been under mounting pressure to increase accountability and intentionality in instruction through development of student "intended learning outcomes" (ILOs) developed at multiple levels (e.g., course, program, major, and even institution). Once these learning goals have been determined, then classroom instruction can be purposefully designed to map onto those intended outcomes in a "backward design" process (Wiggins and McTighe, 2005). The ongoing challenge with any such process, however, is in determining one's effectiveness in achieving these intended learning goals, so it is critical that efficient tools can be developed that enable these goals to be assessed. In addition, an important requirement of any ILOs is that they are mission-driven, meaningful and parsed in such a way that they can be used to obtain evidence in a manageable way. So how can we effectively assess these outcomes in our students? This paper describes key factors to consider in the planning and implementation of assessment for an undergraduate neuroscience program.

  20. A new method for ecoacoustics? Toward the extraction and evaluation of ecologically-meaningful soundscape components using sparse coding methods

    PubMed Central

    Eldridge, Alice; Casey, Michael; Moscoso, Paola; Peck, Mika

    2016-01-01

    Passive acoustic monitoring is emerging as a promising non-invasive proxy for ecological complexity with potential as a tool for remote assessment and monitoring (Sueur & Farina, 2015). Rather than attempting to recognise species-specific calls, either manually or automatically, there is a growing interest in evaluating the global acoustic environment. Positioned within the conceptual framework of ecoacoustics, a growing number of indices have been proposed which aim to capture community-level dynamics (e.g., Pieretti, Farina & Morri, 2011; Farina, 2014; Sueur et al., 2008b) by providing statistical summaries of the frequency or time domain signal. Although promising, the ecological relevance and efficacy as a monitoring tool of these indices is still unclear. In this paper we suggest that by virtue of operating in the time or frequency domain, existing indices are limited in their ability to access key structural information in the spectro-temporal domain. Alternative methods in which time-frequency dynamics are preserved are considered. Sparse-coding and source separation algorithms (specifically, shift-invariant probabilistic latent component analysis in 2D) are proposed as a means to access and summarise time-frequency dynamics which may be more ecologically-meaningful. PMID:27413632

  1. A new method for ecoacoustics? Toward the extraction and evaluation of ecologically-meaningful soundscape components using sparse coding methods.

    PubMed

    Eldridge, Alice; Casey, Michael; Moscoso, Paola; Peck, Mika

    2016-01-01

    Passive acoustic monitoring is emerging as a promising non-invasive proxy for ecological complexity with potential as a tool for remote assessment and monitoring (Sueur & Farina, 2015). Rather than attempting to recognise species-specific calls, either manually or automatically, there is a growing interest in evaluating the global acoustic environment. Positioned within the conceptual framework of ecoacoustics, a growing number of indices have been proposed which aim to capture community-level dynamics (e.g., Pieretti, Farina & Morri, 2011; Farina, 2014; Sueur et al., 2008b) by providing statistical summaries of the frequency or time domain signal. Although promising, the ecological relevance and efficacy as a monitoring tool of these indices is still unclear. In this paper we suggest that by virtue of operating in the time or frequency domain, existing indices are limited in their ability to access key structural information in the spectro-temporal domain. Alternative methods in which time-frequency dynamics are preserved are considered. Sparse-coding and source separation algorithms (specifically, shift-invariant probabilistic latent component analysis in 2D) are proposed as a means to access and summarise time-frequency dynamics which may be more ecologically-meaningful.
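
    The sketch below illustrates the general idea of decomposing a soundscape spectrogram into a small set of spectro-temporal components. It uses plain non-negative matrix factorization as a simplified, non-shift-invariant stand-in for the shift-invariant PLCA (2D) the authors propose; librosa and scikit-learn are assumed to be available, and "soundscape.wav" is a hypothetical file name.

      import numpy as np
      import librosa
      from sklearn.decomposition import NMF

      # Hypothetical recording; any mono soundscape clip would do.
      y, sr = librosa.load("soundscape.wav", sr=None, mono=True)
      S = np.abs(librosa.stft(y, n_fft=2048, hop_length=512))   # magnitude spectrogram

      # Plain NMF as a simplified stand-in for shift-invariant PLCA in 2D.
      model = NMF(n_components=8, init="nndsvd", max_iter=500, random_state=0)
      W = model.fit_transform(S)       # spectral bases: (freq bins x components)
      H = model.components_            # activations:    (components x time frames)

      # Crude summary of each component: dominant frequency and share of energy.
      freqs = librosa.fft_frequencies(sr=sr, n_fft=2048)
      total = (W @ H).sum()
      for k in range(W.shape[1]):
          peak_hz = freqs[np.argmax(W[:, k])]
          share = np.outer(W[:, k], H[k, :]).sum() / total
          print(f"component {k}: peak ~{peak_hz:.0f} Hz, {100 * share:.1f}% of energy")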

  2. Development of an Assessment Tool to Measure Students' Meaningful Learning in the Undergraduate Chemistry Laboratory

    ERIC Educational Resources Information Center

    Galloway, Kelli R.; Bretz, Stacey Lowery

    2015-01-01

    Research on learning in the undergraduate chemistry laboratory necessitates an understanding of students' perspectives of learning. Novak's Theory of Meaningful Learning states that the cognitive (thinking), affective (feeling), and psychomotor (doing) domains must be integrated for meaningful learning to occur. The psychomotor domain is the…

  3. Written Extended-Response Questions as Classroom Assessment Tools for Meaningful Understanding of Evolutionary Theory

    ERIC Educational Resources Information Center

    Nieswandt, Martina; Bellomo, Katherine

    2009-01-01

    This qualitative study analyzed grade 12 biology students' answers to written extended-response questions that describe hypothetical scenarios of animals' evolution. We investigated whether these types of questions are suitable for students (n = 24) to express a meaningful understanding of evolutionary theory. Meaningful understanding is comprised…

  4. Cancer Bioinformatic Methods to Infer Meaningful Data From Small-Size Cohorts

    PubMed Central

    Bennani-Baiti, Nabila; Bennani-Baiti, Idriss M

    2015-01-01

    Whole-genome analyses have uncovered that most cancer-relevant genes cluster into 12 signaling pathways. Knowledge of the signaling pathways and associated gene signatures not only allows us to understand the mechanisms of oncogenesis inherent to specific cancers but also provides us with drug targets, molecular diagnostic and prognosis factors, as well as biomarkers for patient risk stratification and treatment. Publicly available genomic data sets constitute a wealth of gene mining opportunities for hypothesis generation and testing. However, the increasingly recognized genetic and epigenetic inter- and intratumor heterogeneity, combined with the preponderance of small-size cohorts, hamper reliable analysis and discovery. Here, we review two methods that are used to infer meaningful biological events from small-size data sets and discuss some of their applications and limitations. PMID:26568679

  5. Does kinematics add meaningful information to clinical assessment in post-stroke upper limb rehabilitation? A case report

    PubMed Central

    Bigoni, Matteo; Baudo, Silvia; Cimolin, Veronica; Cau, Nicola; Galli, Manuela; Pianta, Lucia; Tacchini, Elena; Capodaglio, Paolo; Mauro, Alessandro

    2016-01-01

    [Purpose] The aims of this case study were to: (a) quantify the impairment and activity restriction of the upper limb in a hemiparetic patient; (b) quantitatively evaluate rehabilitation program effectiveness; and (c) discuss whether more clinically meaningful information can be gained with the use of kinematic analysis in addition to clinical assessment. The rehabilitation program consisted of the combined use of different traditional physiotherapy techniques, occupational therapy sessions, and the so-called task-oriented approach. [Subject and Methods] The subject was one hemiplegic patient. The patient was assessed at the beginning and after 1 month of daily rehabilitation using the Medical Research Council scale, Nine Hole Peg Test, Motor Evaluation Scale for Upper Extremity in Stroke Patients, and Hand Grip Dynamometer test, as well as a kinematic analysis using an optoelectronic system. [Results] After treatment, significant improvements were evident in terms of total movement duration, movement completion velocity, and some smoothness parameters. [Conclusion] Our case report showed that the integration of clinical assessment with kinematic evaluation appears to be useful for quantitatively assessing performance changes. PMID:27630445

  6. Novel methods to collect meaningful data from adolescents for the development of health interventions.

    PubMed

    Hieftje, Kimberly; Duncan, Lindsay R; Fiellin, Lynn E

    2014-09-01

    Health interventions are increasingly focused on young adolescents, and as a result, discussions with this population have become a popular method in qualitative research. Traditional methods used to engage adults in discussions do not translate well to this population, who may have difficulty conceptualizing abstract thoughts and opinions and communicating them to others. As part of a larger project to develop and evaluate a video game for risk reduction and HIV prevention in young adolescents, we were seeking information and ideas from the priority audience that would help us create authentic story lines and character development in the video game. To accomplish this authenticity, we conducted in-depth interviews and focus groups with young adolescents aged 10 to 15 years and employed three novel methods: Storytelling Using Graphic Illustration, My Life, and Photo Feedback Project. These methods helped provide a thorough understanding of the adolescents' experiences and perspectives regarding their environment and future aspirations, which we translated into active components of the video game intervention. This article describes the processes we used and the valuable data we generated using these three engaging methods. These three activities are effective tools for eliciting meaningful data from young adolescents for the development of health interventions.

  7. Novel Methods to Collect Meaningful Data From Adolescents for the Development of Health Interventions

    PubMed Central

    Hieftje, Kimberly; Duncan, Lindsay R.; Fiellin, Lynn E.

    2014-01-01

    Health interventions are increasingly focused on young adolescents, and as a result, discussions with this population have become a popular method in qualitative research. Traditional methods used to engage adults in discussions do not translate well to this population, who may have difficulty conceptualizing abstract thoughts and opinions and communicating them to others. As part of a larger project to develop and evaluate a video game for risk reduction and HIV prevention in young adolescents, we were seeking information and ideas from the priority audience that would help us create authentic story lines and character development in the video game. To accomplish this authenticity, we conducted in-depth interviews and focus groups with young adolescents aged 10 to 15 years and employed three novel methods: Storytelling Using Graphic Illustration, My Life, and Photo Feedback Project. These methods helped provide a thorough understanding of the adolescents’ experiences and perspectives regarding their environment and future aspirations, which we translated into active components of the video game intervention. This article describes the processes we used and the valuable data we generated using these three engaging methods. These three activities are effective tools for eliciting meaningful data from young adolescents for the development of health interventions. PMID:24519998

  8. Incorporating Meaningful Gamification in a Blended Learning Research Methods Class: Examining Student Learning, Engagement, and Affective Outcomes

    ERIC Educational Resources Information Center

    Tan, Meng; Hew, Khe Foon

    2016-01-01

    In this study, we investigated how the use of meaningful gamification affects student learning, engagement, and affective outcomes in a short, 3-day blended learning research methods class using a combination of experimental and qualitative research methods. Twenty-two postgraduates were randomly split into two groups taught by the same…

  9. Continuous Scan, a method for performing modal testing using meaningful measurement parameters; Part I

    NASA Astrophysics Data System (ADS)

    Di Maio, D.; Ewins, D. J.

    2011-11-01

    This paper presents the first part of a work about modal testing using meaningful measurement parameters. Scanning Laser Doppler Vibrometer (SLDV) systems are becoming widely used in both industry and academia for performing vibration measurements. A reason for the success of SLDV systems can be found in their capability of measuring vibration remotely and under different environmental conditions which, when hostile, can prevent other transducers from working correctly. Hence, SLDV systems can be very practical and useful in many engineering applications. SLDV systems are used as contactless transducers measuring vibrations from a discrete number of measurement positions marked on the specimen whenever optical access to it is available. Hence, the advantages of a modal test carried out using an SLDV system over one carried out using accelerometers can be: (i) the automation of the measurements and (ii) the increase of the spatial resolution of the measured modes. This suggests that SLDV systems can be used as a practical replacement for accelerometers operating the same measurement method. The continuous scanning method is a novel approach to using contactless transducers for measuring vibrations. The most important difference between a discrete and a continuous approach is the method of measuring a vibration pattern. A discrete method measures the level of vibration at discrete positions on a structure, whereas a continuous method captures the modulation of the vibrations produced by the excited modes. This is possible when a transducer can travel across a vibrating surface. This first part of the work presents a new approach to the continuous scanning measurement method using a multi-tonal excitation waveform. The paper starts with a comparison between step and continuous scan modes to introduce the novel approach of continuous scanning with a multi-tonal excitation waveform. The objective of this first part of the work is to present and understand that measurement parameters
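
    A minimal numerical sketch of the modulation principle described above, with assumed scan and vibration frequencies and an assumed quadratic deflection shape: when the laser spot sweeps sinusoidally over a vibrating surface, the operating deflection shape (ODS) amplitude-modulates the measured velocity, producing sidebands at the vibration frequency plus or minus multiples of the scan frequency. This is an illustration of the continuous-scan idea, not the authors' processing code.

      import numpy as np

      fs, T = 4096.0, 8.0                        # sample rate [Hz] and duration [s] (assumed)
      t = np.arange(0.0, T, 1.0 / fs)
      f_vib, f_scan = 100.0, 2.0                 # vibration and scan frequencies [Hz] (assumed)

      x = np.cos(2 * np.pi * f_scan * t)         # normalised laser position in [-1, 1]
      ods = 1.0 + 0.6 * x + 0.3 * x**2           # assumed quadratic operating deflection shape
      v = ods * np.cos(2 * np.pi * f_vib * t)    # velocity signal seen by the scanning laser

      spectrum = np.abs(np.fft.rfft(v)) / len(t)
      freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
      # Sidebands at f_vib +/- n*f_scan carry the polynomial coefficients of the ODS.
      for f in (f_vib - 2 * f_scan, f_vib - f_scan, f_vib, f_vib + f_scan, f_vib + 2 * f_scan):
          k = np.argmin(np.abs(freqs - f))
          print(f"{freqs[k]:7.2f} Hz  amplitude {spectrum[k]:.3f}")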

  10. A Concurrent Mixed Methods Approach to Examining the Quantitative and Qualitative Meaningfulness of Absolute Magnitude Estimation Scales in Survey Research

    ERIC Educational Resources Information Center

    Koskey, Kristin L. K.; Stewart, Victoria C.

    2014-01-01

    This small "n" observational study used a concurrent mixed methods approach to address a void in the literature with regard to the qualitative meaningfulness of the data yielded by absolute magnitude estimation scaling (MES) used to rate subjective stimuli. We investigated whether respondents' scales progressed from less to more and…

  11. A Contextual Approach to the Assessment of Social Skills: Identifying Meaningful Behaviors for Social Competence

    ERIC Educational Resources Information Center

    Warnes, Emily D.; Sheridan, Susan M.; Geske, Jenenne; Warnes, William A.

    2005-01-01

    An exploratory study was conducted which assessed behaviors that characterize social competence in the second and fifth grades. A contextual approach was used to gather information from second- and fifth-grade children and their parents and teachers regarding the behaviors they perceived to be important for getting along well with peers. Data were…

  12. Rote versus Meaningful Learning.

    ERIC Educational Resources Information Center

    Mayer, Richard E.

    2002-01-01

    Examines the six categories that make up the cognitive process dimension of Bloom's Taxonomy Table, as well as the 19 specific cognitive processes that fit within them. After describing three learning outcomes, the paper focuses on retention versus transfer of learning and rote versus meaningful learning, discussing how teaching and assessment can…

  13. Predicting participation in meaningful activity for older adults with cancer

    PubMed Central

    Pergolotti, Mackenzi; Cutchin, Malcolm P.; Muss, Hyman B.

    2015-01-01

    Purpose: Participation in activity that is personally meaningful leads to improved emotional and physical well-being and quality of life. However, little is known about what predicts participation in meaningful activity by older adults with cancer. Methods: Seventy-one adults aged 65 years and older with a diagnosis of cancer were enrolled. All adults were evaluated with the following: a brief geriatric assessment, the Meaningful Activity Participation Assessment (MAPA), and the Possibilities for Activity Scale (PActS). The MAPA measures participation in meaningful activity, and the PActS measures what older adults believe they should and could be doing. A regression approach was used to assess the predictors of meaningful activity participation. Results: The PActS (B = .56, p < .001) was the strongest predictor of meaningful activity participation. Conclusions: What older adults with cancer feel they should and could do significantly predicted meaningful participation in activities above and beyond clinical and demographic factors. In future research, perceptions of possibilities for activity may be useful in the design of interventions targeted to improve meaningful participation in older adults with cancer. PMID:25381123
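
    A minimal sketch of the kind of hierarchical regression reported above, asking whether PActS predicts MAPA scores beyond clinical and demographic covariates. The file name and all column names (mapa, pacts, age, comorbidities, cancer_stage) are hypothetical; this is not the authors' analysis code.

      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical data set and column names.
      df = pd.read_csv("older_adults_cancer.csv")

      # Covariates-only model, then PActS added, to test prediction
      # above and beyond clinical and demographic factors.
      base = smf.ols("mapa ~ age + comorbidities + C(cancer_stage)", data=df).fit()
      full = smf.ols("mapa ~ age + comorbidities + C(cancer_stage) + pacts", data=df).fit()

      print(f"R^2, covariates only: {base.rsquared:.3f}")
      print(f"R^2, with PActS:      {full.rsquared:.3f}")
      print(f"B for PActS = {full.params['pacts']:.2f}, p = {full.pvalues['pacts']:.4f}")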

  14. Revisiting Individual Creativity Assessment: Triangulation in Subjective and Objective Assessment Methods

    ERIC Educational Resources Information Center

    Park, Namgyoo K.; Chun, Monica Youngshin; Lee, Jinju

    2016-01-01

    Compared to the significant development of creativity studies, individual creativity research has not reached a meaningful consensus regarding the most valid and reliable method for assessing individual creativity. This study revisited 2 of the most popular methods for assessing individual creativity: subjective and objective methods. This study…

  15. Validity Argument for Assessing L2 Pragmatics in Interaction Using Mixed Methods

    ERIC Educational Resources Information Center

    Youn, Soo Jung

    2015-01-01

    This study investigates the validity of assessing L2 pragmatics in interaction using mixed methods, focusing on the evaluation inference. Open role-plays that are meaningful and relevant to the stakeholders in an English for Academic Purposes context were developed for classroom assessment. For meaningful score interpretations and accurate…

  16. What Makes Learning Meaningful?

    ERIC Educational Resources Information Center

    Wilson, Arthur L.; Burket, Lee

    This document examines the work of Dewey, Kolb, Jarvis, Mezirow, Freire, Rogers, and Houle to find out what these experiential learning theorists have to say about the role experience plays in making learning meaningful. The first section addresses each writer's work for specific ideas of how experience is related to making learning meaningful,…

  17. Making Fractions Meaningful

    ERIC Educational Resources Information Center

    McCormick, Kelly K.

    2015-01-01

    To be able to support meaningful mathematical experiences, preservice elementary school teachers (PSTs) must learn mathematics in deep and meaningful ways (Ma 1999). They need to experience investigating and making sense of the mathematics they will be called on to teach. To expand their own--often limited--views of what it means to teach and…

  18. Meaningful Measurement: The Role of Assessments in Improving High School Education in the Twenty-First Century

    ERIC Educational Resources Information Center

    Pinkus, Lyndsay M., Ed.

    2009-01-01

    In the chapters presented in this volume, leading experts describe some of the assessment challenges in greater detail and provide federal recommendations on how to address them. In "College and Work Readiness as a Goal of High Schools: The Role of Standards, Assessments, and Accountability," John Tanner of the Center for Innovative Measures at…

  19. Acceptance criteria for method equivalency assessments.

    PubMed

    Chatfield, Marion J; Borman, Phil J

    2009-12-15

    Quality by design (ICH-Topic Q8) requires that process control strategy requirements are met and maintained. The challenging task of setting appropriate acceptance criteria for assessment of method equivalence is a critical component of satisfying these requirements. The use of these criteria will support changes made to methods across the product lifecycle. A method equivalence assessment is required when a change is made to a method which may pose a risk to its ability to monitor the quality of the process. Establishing appropriate acceptance criteria is a vital, but not clearly understood, prerequisite to deciding the appropriate design/sample size of the equivalency study. A number of approaches are proposed in the literature for setting acceptance criteria for equivalence which address different purposes. This perspective discusses those purposes and then provides more details on setting acceptance criteria based on patient and producer risk, e.g., the tolerance interval approach and the consideration of method or process capability. Applying these to a drug substance assay method for batch release illustrates that, for the equivalence assessment to be meaningful, a clear understanding and appraisal of the control requirements of the method is needed. Rather than a single exact algorithm, the analyst's judgment on a number of aspects is required in deciding the appropriate acceptance criteria.

  20. Meaningful Learning: A Perspective.

    ERIC Educational Resources Information Center

    Borras, Isabel

    This paper elaborates on the ways self-reflective practices that have sprung from within the postmodern discourse may conduce to meaningful learning, all without forgetting that the truth is idiosyncratic and that the highest human goals are barely teachable. Hence, rather than prescribing methodological "recipes," the paper looks at the…

  1. From Mindless to Meaningful

    ERIC Educational Resources Information Center

    Billings, Laura; Roberts, Terry

    2014-01-01

    Despite teachers' best intentions, traditional whole-class discussions sometimes end up sounding like the monotonous drone of Charlie Brown's teacher. But with careful planning, teachers can structure discussions that encourage meaningful student interaction and collaborative thinking, write Laura Billings and Terry Roberts of the…

  2. Meaningful and Purposeful Practice

    ERIC Educational Resources Information Center

    Clementi, Donna

    2014-01-01

    This article describes a graphic, designed by Clementi and Terrill, the authors of "Keys to Planning for Learning" (2013), visually representing the components that contribute to meaningful and purposeful practice in learning a world language, practice that leads to greater proficiency. The entire graphic is centered around the letter…

  3. Meaningful Responses to Literature

    ERIC Educational Resources Information Center

    Kovarik, Madeline

    2006-01-01

    If students were as engaged in reading as they are in video games, television, and sports, the world would be rife with proficient readers. Using a variety of instructional strategies, teachers can make the reading experience more meaningful, increase comprehension, and build proficiency. Mastering cognitive skills can change student reading…

  4. Miscues: Meaningful Assessment Aids Instruction

    ERIC Educational Resources Information Center

    Luft, Pamela

    2009-01-01

    LeRoy was a deaf sixth grader who used signs and his voice to communicate. Yanetta was a deaf eighth grader who had deaf parents and preferred American Sign Language (ASL). Michael was a deaf fifth grader in a suburban school who attended an oral program and used his voice exclusively to communicate. All three students struggled with reading. They…

  5. Life is pretty meaningful.

    PubMed

    Heintzelman, Samantha J; King, Laura A

    2014-09-01

    The human experience of meaning in life is widely viewed as a cornerstone of well-being and a central human motivation. Self-reports of meaning in life relate to a host of important functional outcomes. Psychologists have portrayed meaning in life as simultaneously chronically lacking in human life as well as playing an important role in survival. Examining the growing literature on meaning in life, we address the question "How meaningful is life, in general?" We review possible answers from various psychological sources, some of which anticipate that meaning in life should be low and others that it should be high. Summaries of epidemiological data and research using two self-report measures of meaning in life suggest that life is pretty meaningful. Diverse samples rate themselves significantly above the midpoint on self-reports of meaning in life. We suggest that if meaning in life plays a role in adaptation, it must be commonplace, as our analysis suggests.

  6. Methods & Strategies: Deep Assessment

    ERIC Educational Resources Information Center

    Haas, Alison; Hollimon, Shameka; Lee, Okhee

    2015-01-01

    The "Next Generation Science Standards" ("NGSS") push students to have "a deeper understanding of content" (NGSS Lead States 2013, Appendix A, p. 4). However, with the reality of high-stakes assessments that rely primarily on multiple-choice questions, how can a science teacher analyze students' written responses…

  7. Students' Meaningful Learning Orientation and Their Meaningful Understandings of Meiosis and Genetics.

    ERIC Educational Resources Information Center

    Cavallo, Ann Liberatore

    This 1-week study explored the extent to which high school students (n=140) acquired meaningful understanding of selected biological topics (meiosis and the Punnett square method) and the relationship between these topics. This study: (1) examined "mental modeling" as a technique for measuring students' meaningful understanding of the…

  8. The Retention of Meaningful Understanding of Meiosis and Genetics.

    ERIC Educational Resources Information Center

    Cavallo, Ann Liberatore

    This study investigated the retention of meaningful understanding of the biological topics of meiosis, the Punnett square method and the relations between these two topics. This study also explored the predictive influence of students' general tendency to learn meaningfully or by rote (meaningful learning orientation), prior knowledge of meiosis,…

  9. Methods for Aquatic Resource Assessment

    EPA Science Inventory

    The Methods for Aquatic Resource Assessment (MARA) project consists of three main activities in support of assessing the conditions of the nation’s aquatic resources: 1) scientific support for EPA Office of Water’s national aquatic resource surveys; 2) spatial predictions of riv...

  10. Assessment Methods in Medical Education

    ERIC Educational Resources Information Center

    Norcini, John J.; McKinley, Danette W.

    2007-01-01

    Since the 1950s, there has been rapid and extensive change in the way assessment is conducted in medical education. Several new methods of assessment have been developed and implemented over this time and they have focused on clinical skills (taking a history from a patient and performing a physical examination), communication skills, procedural…

  11. Qualitative methods for assessing risk

    SciTech Connect

    Mahn, J.A.; Hannaman, G.W.; Kryska, P.

    1995-03-01

    The purpose of this document is to describe a qualitative risk assessment process that supplements the requirements of DOE/AL 5481.1B. Although facility managers have a choice of assessing risk either quantitatively or qualitatively, trade-offs are involved in making the most appropriate choice for a given application. The results that can be obtained from a quantitative risk assessment are significantly more robust than those derived from a qualitative approach. However, the advantages derived from quantitative risk assessment are achieved at a greater expenditure of money, time, and convenience. This document provides the elements of a framework for performing a much less costly qualitative risk assessment, while retaining the best attributes of quantitative methods. The approach discussed herein will: (1) provide facility managers with the tools to prepare consistent, site-wide assessments, and (2) aid the reviewers who may be tasked to evaluate the assessments. Added cost/benefit measures of the qualitative methodology include the identification of mechanisms for optimally allocating resources for minimizing risk in an expeditious and fiscally responsible manner.

  12. Teaching Absolute Value Meaningfully

    ERIC Educational Resources Information Center

    Wade, Angela

    2012-01-01

    What is the meaning of absolute value? And why do teachers teach students how to solve absolute value equations? Absolute value is a concept introduced in first-year algebra and then reinforced in later courses. Various authors have suggested instructional methods for teaching absolute value to high school students (Wei 2005; Stallings-Roberts…

  13. Dietary assessment methods: dietary records.

    PubMed

    Ortega, Rosa M; Pérez-Rodrigo, Carmen; López-Sobaler, Ana M

    2015-02-26

    Dietary records or food diaries can be highlighted among dietary assessment methods of the current diet for their interest and validity. The dietary record is a prospective, open-ended survey method collecting data about the foods and beverages consumed over a previously specified period of time. Dietary records can be used to estimate the current diet of individuals and population groups, as well as to identify groups at risk of inadequacy. It is a dietary assessment method of interest for use in epidemiological or clinical studies. High validity and precision have been reported for the method when used following adequate procedures and considering a sufficient number of days. Thus, dietary records are often considered a reference method in validation studies. Nevertheless, the method is affected by error and has limitations, due mainly to the tendency of subjects to report food consumption close to what is socially desirable. Additional problems are related to the high burden placed on respondents. Respondents may also alter their food behavior in order to simplify the recording of food intake, and some subjects can experience difficulties in writing down the foods and beverages consumed or in describing the portion sizes. Increasing the number of days observed reduces the quality of completed diet records. The high cost of coding and processing the information collected in diet records should also be considered. One of the main advantages of the method is the registration of foods and beverages as they are consumed, thus reducing the problem of food omissions due to memory failure. Weighed food records provide more precise estimates of consumed portions. New technologies can be helpful to improve and ease the collaboration of respondents, as well as the precision of the estimates, although it would be desirable to evaluate their advantages and limitations in order to optimize implementation.

  14. LNG Safety Assessment Evaluation Methods

    SciTech Connect

    Muna, Alice Baca; LaFleur, Angela Christine

    2015-05-01

    Sandia National Laboratories evaluated published safety assessment methods across a variety of industries including Liquefied Natural Gas (LNG), hydrogen, land and marine transportation, as well as the US Department of Defense (DOD). All the methods were evaluated for their potential applicability for use in the LNG railroad application. After reviewing the documents included in this report, as well as others not included because of repetition, the Department of Energy (DOE) Hydrogen Safety Plan Checklist is most suitable to be adapted to the LNG railroad application. This report was developed to survey industries related to rail transportation for methodologies and tools that can be used by the FRA to review and evaluate safety assessments submitted by the railroad industry as a part of their implementation plans for liquefied or compressed natural gas storage (on-board or tender) and engine fueling delivery systems. The main sections of this report provide an overview of various methods found during this survey. In most cases, the reference document is quoted directly. The final section provides discussion and a recommendation for the most appropriate methodology that will allow efficient and consistent evaluations to be made. The DOE Hydrogen Safety Plan Checklist was then revised to adapt it as a methodology for the Federal Railroad Administration’s use in evaluating safety plans submitted by the railroad industry.

  15. Quality Assessment of Qualitative Evidence for Systematic Review and Synthesis: Is It Meaningful, and if So, How Should It Be Performed?

    ERIC Educational Resources Information Center

    Carroll, Christopher; Booth, Andrew

    2015-01-01

    The critical appraisal and quality assessment of primary research are key stages in systematic review and evidence synthesis. These processes are driven by the need to determine how far the primary research evidence, singly and collectively, should inform findings and, potentially, practice recommendations. Quality assessment of primary…

  16. State Capacity for Leadership: Ensuring Meaningful Higher Education Involvement in State Implementation of New Assessments Aligned with the Common Core State Standards

    ERIC Educational Resources Information Center

    National Center for Higher Education Management Systems (NJ1), 2011

    2011-01-01

    The Common Core State Standards (CCSS) and assessments aligned to them represent a significant milestone in public education reform in the U.S. Developed with consultation from higher education, the rigorous new standards and the assessments now being drafted by two consortia promise to help students reach higher levels of academic achievement and…

  17. Meaningful Use of Health Information Technology by Rural Hospitals

    ERIC Educational Resources Information Center

    McCullough, Jeffrey; Casey, Michelle; Moscovice, Ira; Burlew, Michele

    2011-01-01

    Purpose: This study examines the current status of meaningful use of health information technology (IT) in Critical Access Hospitals (CAHs), other rural, and urban US hospitals, and it discusses the potential role of Medicare payment incentives and disincentives in encouraging CAHs and other rural hospitals to achieve meaningful use. Methods: Data…

  18. Quality assessment of qualitative evidence for systematic review and synthesis: Is it meaningful, and if so, how should it be performed?

    PubMed

    Carroll, Christopher; Booth, Andrew

    2015-06-01

    The critical appraisal and quality assessment of primary research are key stages in systematic review and evidence synthesis. These processes are driven by the need to determine how far the primary research evidence, singly and collectively, should inform findings and, potentially, practice recommendations. Quality assessment of primary qualitative research remains a contested area. This article reviews recent developments in the field charting a perceptible shift from whether such quality assessment should be conducted to how it might be performed. It discusses the criteria that are used in the assessment of quality and how the findings of the process are used in synthesis. It argues that recent research indicates that sensitivity analysis offers one potentially useful means for advancing this controversial issue.

  19. Eight Steps to Meaningful Grading

    ERIC Educational Resources Information Center

    Deddeh, Heather; Main, Erin; Fulkerson, Sharon Ratzlaff

    2010-01-01

    A group of teachers at Clifford Smart Middle School in Michigan's Walled Lake Consolidated School District have broken free from traditional grading in order to embrace a more meaningful grading practice. Using standards-based grading practices, they believe their grading now accurately communicates to students and parents the student's mastery…

  20. Relationships between students' meaningful learning orientation and their understanding of genetics topics

    NASA Astrophysics Data System (ADS)

    Cavallo, Ann M. Liberatore; Schafer, Larry E.

    This study explored factors predicting the extent to which high school students (N = 140) acquired meaningful understanding of the biological topics of meiosis, the Punnett-square method, and the relationships between these topics. This study (a) examined mental modeling as a technique for measuring students' meaningful understanding of the topics, (b) measured students' predisposed, generalized tendency to learn meaningfully (meaningful learning orientation), (c) determined the extent to which students' meaningful learning orientation predicted meaningful understanding beyond that predicted by aptitude and achievement motivation, (d) experimentally tested two instructional treatments (relationships presented to students, relationships generated by students), (e) explored the relationships of meaningful learning orientation, prior knowledge, instructional treatment, and all interactions of these variables in predicting meaningful understanding. The results of correlations and multiple regressions indicated that meaningful learning orientation contributed to students' attainment of meaningful understanding independent of aptitude and achievement motivation. Meaningful learning orientation and prior knowledge interacted in unique ways for each topic to predict students' attainment of meaningful understanding. Instructional treatment had relatively little relationship to students' acquisition of meaningful understanding, except for learners midrange between meaningful and rote. These findings imply that a meaningful learning approach among students may be important, perhaps as much or more than aptitude and achievement motivation, for their acquisition of interrelated, meaningful understandings of science.

  1. Evaluation of methods for the assessment of attention while driving.

    PubMed

    Kircher, Katja; Ahlstrom, Christer

    2017-03-21

    The ability to assess the current attentional state of the driver is important for many aspects of driving, not least in the field of partial automation for transfer of control between vehicle and driver. Knowledge about the driver's attentional state is also necessary for the assessment of the effects of additional tasks on attention. The objective of this paper is to evaluate different methods that can be used to assess attention, first theoretically, and then empirically in a controlled field study and in the laboratory. Six driving instructors participated in all experimental conditions of the study, delivering within-subjects data for all tested methods. Additional participants were recruited for some of the conditions. The test route consisted of 14 km of motorway with low to moderate traffic, which was driven three times per participant per condition. The on-road conditions were: baseline, driving with eye tracking and self-paced visual occlusion, and driving while thinking aloud. The laboratory conditions were: describing how attention should be distributed on a motorway, and thinking aloud while watching a video from the baseline drive. The results show that visual occlusion, especially in combination with eye tracking, was appropriate for assessing spare capacity. The think aloud protocol was appropriate to gain insight about the driver's actual mental representation of the situation at hand. Expert judgement in the laboratory was not reliable for the assessment of drivers' attentional distribution in traffic. Across all assessment techniques, it is evident that meaningful assessment of attention in a dynamic traffic situation can only be achieved when the infrastructure layout, surrounding road users, and intended manoeuvres are taken into account. This requires advanced instrumentation of the vehicle, and subsequent data reduction, analysis and interpretation are demanding. In conclusion, driver attention assessment in real traffic is a complex task, but

  2. The Use of Qualitative Methods in Large-Scale Evaluation: Improving the Quality of the Evaluation and the Meaningfulness of the Findings

    ERIC Educational Resources Information Center

    Slayton, Julie; Llosa, Lorena

    2005-01-01

    In light of the current debate over the meaning of "scientifically based research", we argue that qualitative methods should be an essential part of large-scale program evaluations if program effectiveness is to be determined and understood. This article chronicles the challenges involved in incorporating qualitative methods into the large-scale…

  3. Sequencing of EHR adoption among US hospitals and the impact of meaningful use

    PubMed Central

    Adler-Milstein, Julia; Everson, Jordan; Lee, Shoou-Yih D

    2014-01-01

    Objective To examine whether there is a common sequence of adoption of electronic health record (EHR) functions among US hospitals, identify differences by hospital type, and assess the impact of meaningful use. Materials and methods Using 2008 American Hospital Association (AHA) Information Technology (IT) Supplement data, we calculate adoption rates of individual EHR functions, along with Loevinger homogeneity (H) coefficients, to assess the sequence of EHR adoption across hospitals. We compare adoption rates and Loevinger H coefficients for hospitals of different types to assess variation in sequencing. We qualitatively assess whether stage 1 meaningful use functions are those adopted early in the sequence. Results There is a common sequence of EHR adoption across hospitals, with moderate-to-strong homogeneity. Patient demographic and ancillary results functions are consistently adopted first, while physician notes, clinical reminders, and guidelines are adopted last. Small hospitals exhibited greater homogeneity than larger hospitals. Rural hospitals and non-teaching hospitals exhibited greater homogeneity than urban and teaching hospitals. EHR functions emphasized in stage 1 meaningful use are spread throughout the scale. Discussion Stronger homogeneity among small, rural, and non-teaching hospitals may be driven by greater reliance on vendors and less variation in the types of care they deliver. Stage 1 meaningful use is likely changing how hospitals sequence EHR adoption—in particular, by moving clinical guidelines and medication computerized provider order entry ahead in sequence. Conclusions While there is a common sequence underlying adoption of EHR functions, the degree of adherence to the sequence varies by key hospital characteristics. Stage 1 meaningful use likely alters the sequence. PMID:24853066
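
    A minimal sketch of the pairwise Loevinger H coefficient that underlies this kind of scalability analysis, assuming dichotomous adoption indicators (1 = function adopted). This is the textbook Mokken-scaling form applied to hypothetical data, not necessarily the exact computation used in the study.

```python
import numpy as np

def loevinger_h_pair(a, b):
    """Pairwise Loevinger H for two dichotomous items (1 = adopted).
    A Guttman 'error' is adopting the less commonly adopted function
    while not adopting the more commonly adopted one;
    H = 1 - observed errors / errors expected under independence.
    (Assumes neither item is adopted by all or by no hospitals.)"""
    a, b = np.asarray(a, int), np.asarray(b, int)
    if a.mean() < b.mean():          # make `a` the more widely adopted item
        a, b = b, a
    n = len(a)
    observed = np.sum((b == 1) & (a == 0))
    expected = n * b.mean() * (1 - a.mean())
    return 1.0 - observed / expected

# Hypothetical adoption of two EHR functions across 8 hospitals.
demographics = [1, 1, 1, 1, 1, 1, 0, 1]      # typically adopted early
physician_notes = [1, 1, 0, 0, 1, 0, 0, 0]   # typically adopted late
print(loevinger_h_pair(demographics, physician_notes))   # 1.0: no Guttman errors
```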

  4. Clinically meaningful performance benchmarks in MS

    PubMed Central

    Motl, Robert W.; Scagnelli, John; Pula, John H.; Sosnoff, Jacob J.; Cadavid, Diego

    2013-01-01

    Objective: Identify and validate clinically meaningful Timed 25-Foot Walk (T25FW) performance benchmarks in individuals living with multiple sclerosis (MS). Methods: Cross-sectional study of 159 MS patients first identified candidate T25FW benchmarks. To characterize the clinical meaningfulness of T25FW benchmarks, we ascertained their relationships to real-life anchors, functional independence, and physiologic measurements of gait and disease progression. Candidate T25FW benchmarks were then prospectively validated in 95 subjects using 13 measures of ambulation and cognition, patient-reported outcomes, and optical coherence tomography. Results: T25FW of 6 to 7.99 seconds was associated with a change in occupation due to MS, occupational disability, walking with a cane, and needing “some help” with instrumental activities of daily living; T25FW ≥8 seconds was associated with collecting Supplemental Security Income and government health care, walking with a walker, and inability to do instrumental activities of daily living. During prospective benchmark validation, we trichotomized data by T25FW benchmarks (<6 seconds, 6–7.99 seconds, and ≥8 seconds) and found group main effects on 12 of 13 objective and subjective measures (p < 0.05). Conclusions: Using a cross-sectional design, we identified 2 clinically meaningful T25FW benchmarks of ≥6 seconds (6–7.99) and ≥8 seconds. Longitudinal and larger studies are needed to confirm the clinical utility and relevance of these proposed T25FW benchmarks and to parse out whether there are additional benchmarks in the lower (<6 seconds) and higher (>10 seconds) ranges of performance. PMID:24174581
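
    To illustrate how the proposed benchmarks might be applied, the following minimal Python sketch trichotomizes T25FW times into the three groups reported above (<6 s, 6–7.99 s, ≥8 s); the cut-points are taken from the abstract, while the function name and example times are purely illustrative.

```python
def t25fw_group(seconds: float) -> str:
    """Assign a Timed 25-Foot Walk time (in seconds) to one of the three
    benchmark groups described in the abstract: <6 s, 6-7.99 s, >=8 s."""
    if seconds < 6.0:
        return "<6 s"
    elif seconds < 8.0:
        return "6-7.99 s"
    return ">=8 s"

# Example: trichotomize a small set of hypothetical walk times.
times = [4.8, 6.5, 7.9, 8.2, 11.0]
print([t25fw_group(t) for t in times])
# ['<6 s', '6-7.99 s', '6-7.99 s', '>=8 s', '>=8 s']
```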

  5. Enhancing Institutional Assessment Efforts through Qualitative Methods

    ERIC Educational Resources Information Center

    Van Note Chism, Nancy; Banta, Trudy W.

    2007-01-01

    Qualitative methods can do much to describe context and illuminate the why behind patterns encountered in institutional assessment. Alone, or in combination with quantitative methods, they should be the approach of choice for many of the most important assessment questions. (Contains 1 table.)

  6. Screeners and brief assessment methods.

    PubMed

    Pérez Rodrigo, Carmen; Morán Fagúndez, Luis Juan; Riobó Serván, Pilar; Aranceta Bartrina, Javier

    2015-02-26

    In the last two decades, simple, easy-to-use instruments have been developed and validated to assess specific aspects of the diet or a general profile that can be compared with a reference dietary pattern such as the Mediterranean Diet or with the recommendations of the Dietary Guidelines. Brief instruments are rapid, simple, and easy-to-use tools that can be implemented by unskilled personnel without specific training. These tools are useful in clinical settings, in Primary Health Care, and in the community: as a triage tool, as a screening tool to identify individuals or groups at risk who require further care, and even in studies investigating associations between specific aspects of the diet and health outcomes. They are also used in interventions focused on changing eating behaviors as a diagnostic tool, for self-evaluation purposes, or to provide tailored advice in web-based interventions or mobile apps. There are specific instruments for use in children, adults, the elderly, or other specific population groups.

  7. Assessment methods for the evaluation of vitiligo.

    PubMed

    Alghamdi, K M; Kumar, A; Taïeb, A; Ezzedine, K

    2012-12-01

    There is no standardized method for assessing vitiligo. In this article, we review the literature from 1981 to 2011 on different vitiligo assessment methods. We aim to classify the techniques available for vitiligo assessment as subjective, semi-objective or objective; microscopic or macroscopic; and as based on morphometry or colorimetry. Macroscopic morphological measurements include visual assessment, photography in natural or ultraviolet light, photography with computerized image analysis and tristimulus colorimetry or spectrophotometry. Non-invasive micromorphological methods include confocal laser microscopy (CLM). Subjective methods include clinical evaluation by a dermatologist and a vitiligo disease activity score. Semi-objective methods include the Vitiligo Area Scoring Index (VASI) and point-counting methods. Objective methods include software-based image analysis, tristimulus colorimetry, spectrophotometry and CLM. Morphometry is the measurement of the vitiliginous surface area, whereas colorimetry quantitatively analyses skin colour changes caused by erythema or pigment. Most methods involve morphometry, except for the chromameter method, which assesses colorimetry. Some image analysis software programs can assess both morphometry and colorimetry. The details of these programs (Corel Draw, Image Pro Plus, AutoCad and Photoshop) are discussed in the review. Reflectance confocal microscopy provides real-time images and has great potential for the non-invasive assessment of pigmentary lesions. In conclusion, there is no single best method for assessing vitiligo. This review revealed that VASI, the rule of nine and Wood's lamp are likely to be the best techniques available for assessing the degree of pigmentary lesions and measuring the extent and progression of vitiligo in the clinic and in clinical trials.

  8. Methods of assessment of antiepileptic drugs.

    PubMed Central

    Milligan, N; Richens, A

    1981-01-01

    Epilepsy is a symptom with protean manifestations and as such it is a difficult disease in which to carry out a therapeutic trial. The methods available to research workers for the assessment of new antiepileptic drugs are hampered by the fact that epilepsy is a fluctuant condition. Although it is a chronic disorder open to study using cross-over trials and within-patient comparisons, accurate assessment cannot be easily made at any one point in time. Research workers are therefore automatically placed at a time factor disadvantage and this is especially so for those searching for quick methods of evaluating new compounds. The need for a quick and reliable method of assessing a new antiepileptic drug has long been appreciated. This article will discuss the methods currently available and we will begin by considering the most commonly used method of assessment with particular reference to some of the problems involved in conducting a controlled clinical trial in epilepsy. PMID:7272157

  9. Meaningful Gamification in an Industrial/Organizational Psychology Course

    ERIC Educational Resources Information Center

    Stansbury, Jessica A.; Earnest, David R.

    2017-01-01

    Motivation and game research continue to demonstrate that the implementation of game design characteristics in the classroom can be engaging and intrinsically motivating. The present study assessed the extent to which a learning environment designed for an industrial/organizational psychology course and created with meaningful gamification elements can…

  10. Alternative Assessment.

    ERIC Educational Resources Information Center

    Stefonek, Tom; And Others

    1991-01-01

    This special double issue focuses on the issue of alternative assessment and its place in educational reform. "Alternative Assessment: A National Perspective" (T. Stefonek) emphasizes that the fundamental purposes of new assessment methods are grounded in educational goals, meaningful outcomes, and curricular and instructional programs…

  11. Personality, Assessment Methods and Academic Performance

    ERIC Educational Resources Information Center

    Furnham, Adrian; Nuygards, Sarah; Chamorro-Premuzic, Tomas

    2013-01-01

    This study examines the relationship between personality and two different academic performance (AP) assessment methods, namely exams and coursework. It aimed to examine whether the relationship between traits and AP was consistent across self-reported versus documented exam results, two different assessment techniques and across different…

  12. EMERGY METHODS: VALUABLE INTEGRATED ASSESSMENT TOOLS

    EPA Science Inventory

    NHEERL's Atlantic Ecology Division is investigating emergy methods as tools for integrated assessment in several projects evaluating environmental impacts, policies, and alternatives for remediation and intervention. Emergy accounting is a methodology that provides a quantitative...

  13. Scientific method, adversarial system, and technology assessment

    NASA Technical Reports Server (NTRS)

    Mayo, L. H.

    1975-01-01

    A basic framework is provided for the consideration of the purposes and techniques of scientific method and adversarial systems. Similarities and differences in these two techniques of inquiry are considered with reference to their relevance in the performance of assessments.

  14. An operational definition of a statistically meaningful trend.

    PubMed

    Bryhn, Andreas C; Dimberg, Peter H

    2011-04-28

    Linear trend analysis of time series is standard procedure in many scientific disciplines. If the number of data is large, a trend may be statistically significant even if data are scattered far from the trend line. This study introduces and tests a quality criterion for time trends referred to as statistical meaningfulness, which is a stricter quality criterion for trends than high statistical significance. The time series is divided into intervals and interval mean values are calculated. Thereafter, r² and p values are calculated from regressions concerning time and interval mean values. If r² ≥ 0.65 at p ≤ 0.05 in any of these regressions, then the trend is regarded as statistically meaningful. Out of ten investigated time series from different scientific disciplines, five displayed statistically meaningful trends. A Microsoft Excel application (add-in) was developed which can perform statistical meaningfulness tests and which may increase the operationality of the test. The presented method for distinguishing statistically meaningful trends should be reasonably uncomplicated for researchers with basic statistics skills and may thus be useful for determining which trends are worth analysing further, for instance with respect to causal factors. The method can also be used for determining which segments of a time trend may be particularly worthwhile to focus on.
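
    The criterion translates directly into a few lines of code. The sketch below is a minimal Python version using scipy, assuming equally sized intervals and a handful of illustrative interval counts; the thresholds (r² ≥ 0.65 at p ≤ 0.05) are the ones stated above, while the interval counts and synthetic data are illustrative assumptions.

```python
import numpy as np
from scipy.stats import linregress

def statistically_meaningful(t, y, interval_counts=(3, 4, 5)):
    """Divide the series into equally sized intervals, regress interval
    mean values on interval mean times, and flag the trend as
    'statistically meaningful' if any regression reaches
    r^2 >= 0.65 at p <= 0.05 (the criterion stated in the abstract)."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    for k in interval_counts:
        t_means = [chunk.mean() for chunk in np.array_split(t, k)]
        y_means = [chunk.mean() for chunk in np.array_split(y, k)]
        res = linregress(t_means, y_means)
        if res.rvalue ** 2 >= 0.65 and res.pvalue <= 0.05:
            return True
    return False

# Example: a noisy but steadily increasing synthetic series.
rng = np.random.default_rng(0)
t = np.arange(100)
y = 0.1 * t + rng.normal(scale=3.0, size=t.size)
print(statistically_meaningful(t, y))   # expected: True for this clear upward trend
```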

  15. Energy efficiency assessment methods and tools evaluation

    SciTech Connect

    McMordie, K.L.; Richman, E.E.; Keller, J.M.; Dixon, D.R.

    1994-08-01

    Many different methods of assessing the energy savings potential at federal installations and identifying attractive projects for capital investment have been used by the different federal agencies. These methods range from high-level estimating tools to detailed design tools, both manual and software assisted. These methods have different purposes and provide results that are used for different parts of the project identification and implementation process. Seven different assessment methods are evaluated in this study. These methods were selected by the program managers at the DoD Energy Policy Office and the DOE Federal Energy Management Program (FEMP). Each of the methods was applied to similar buildings at Bolling Air Force Base (AFB), unless it was inappropriate or the method was designed to make an installation-wide analysis rather than focusing on particular buildings. Staff at Bolling AFB controlled the collection of data.

  16. Small-Group Assessment Methods in Mathematics.

    ERIC Educational Resources Information Center

    Berry, John; Nyman, Melvin A.

    2002-01-01

    Discusses a team-oriented formal testing method used in a mathematical modeling course taught during the Alma College intensive spring term. Asks the question, If a collaborative teaching method is used, how does one assess students' acquisition of problem-solving and mathematical-thinking skills? (Author/MM)

  17. Assessment and Evaluation Methods for Access Services

    ERIC Educational Resources Information Center

    Long, Dallas

    2014-01-01

    This article serves as a primer for assessment and evaluation design by describing the range of methods commonly employed in library settings. Quantitative methods, such as counting and benchmarking measures, are useful for investigating the internal operations of an access services department in order to identify workflow inefficiencies or…

  18. Russian risk assessment methods and approaches

    SciTech Connect

    Dvorack, M.A.; Carlson, D.D.; Smith, R.E.

    1996-07-01

    One of the benefits resulting from the collapse of the Soviet Union is the increased dialogue currently taking place between American and Russian nuclear weapons scientists in various technical arenas. One of these arenas currently being investigated involves collaborative studies which illustrate how risk assessment is perceived and utilized in the Former Soviet Union (FSU). The collaborative studies indicate that, while similarities exist with respect to some methodologies, the assumptions and approaches in performing risk assessments were, and still are, somewhat different in the FSU than in the US. The purpose of this paper is to highlight the present knowledge of risk assessment methodologies and philosophies within the two largest nuclear weapons laboratories of the Former Soviet Union, Arzamas-16 and Chelyabinsk-70. Furthermore, this paper will address the relative progress of new risk assessment methodologies, such as Fuzzy Logic, within the framework of current risk assessment methods at these two institutes.

  19. Science Education and Meaningful Learning.

    ERIC Educational Resources Information Center

    Summers, M. K.

    1982-01-01

    Argues that there should be no equation between modern methods of teaching science and discovery methods, suggesting that the emphasis on discovery has resulted from confused thinking among science educators. Also, describes research-based developments promising better theoretical/practical perspectives for improved science teaching, focusing on…

  20. A method for assessing reflective journal writing.

    PubMed

    Plack, Margaret M; Driscoll, Maryanne; Blissett, Sylvene; McKenna, Raymond; Plack, Thomas P

    2005-01-01

    Reflection is widely accepted as a learning tool and is considered integral to professional practice. Journal writing is advocated in facilitating reflection, yet little is written about how to assess reflection in journals. The purpose of this study was to develop and test a method of assessing the elements of reflection in journals and to determine whether, and to what level, reflection occurs in journals. Twenty-seven physical therapy students maintained written reflective journals throughout three of their four eight-week clinical affiliations. The students were introduced to concepts of reflective practice with definitions of terms and reflective questions before their second affiliation. A coding schema was developed to assess the journals. Three raters assessed forty-three journals. The text of each journal was analyzed for evidence of nine elements of reflection, and each journal was categorized as showing no evidence of reflection, evidence of reflection, or evidence of critical reflection. Descriptive statistics were used to demonstrate evidence of reflection. Reliability between each pair of raters was assessed using percent agreement, phi coefficients, and gamma statistics. Interrater reliability of all raters was assessed using intraclass correlation coefficients (ICC[2,1]). Results showed that the raters assessed 95.3%-100% of the journals as showing at least one element of reflection. The percent agreement between rater pairs for the nine elements of reflection ranged from 65.1% to 93.0%, the phi coefficient ranged from 0.08 to 0.81, and the ICC(2,1) values used to assess reliability among the three raters on each element ranged from 0.03 to 0.72. Averaging the assessment of the three raters for the overall journal, 14.7% of the journals were assessed as showing no evidence of reflection, 43.4% as showing evidence of reflection, and 41.9% as showing evidence of critical reflection. The percent agreement between rater pairs for the overall assessment
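
    Of the reliability statistics reported, percent agreement is the simplest to reproduce. The sketch below is a minimal Python illustration of pairwise percent agreement on hypothetical 0/1 element codings; the phi coefficients and ICC(2,1) reported in the study would require additional formulas not shown here.

```python
def percent_agreement(codes_a, codes_b):
    """Share of journals (in %) on which two raters gave the same 0/1 code
    for a given element of reflection (simple pairwise agreement)."""
    assert len(codes_a) == len(codes_b)
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return 100.0 * matches / len(codes_a)

# Hypothetical codings of one reflection element across 10 journals.
rater1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater2 = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]
print(percent_agreement(rater1, rater2))  # 80.0
```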

  1. Validation of Groundwater Models: Meaningful or Meaningless?

    NASA Astrophysics Data System (ADS)

    Konikow, L. F.

    2003-12-01

    Although numerical simulation models are valuable tools for analyzing groundwater systems, their predictive accuracy is limited. People who apply groundwater flow or solute-transport models, as well as those who make decisions based on model results, naturally want assurance that a model is "valid." To many people, model validation implies some authentication of the truth or accuracy of the model. History matching is often presented as the basis for model validation. Although such model calibration is a necessary modeling step, it is simply insufficient for model validation. Because of parameter uncertainty and solution non-uniqueness, declarations of validation (or verification) of a model are not meaningful. Post-audits represent a useful means to assess the predictive accuracy of a site-specific model, but they require the existence of long-term monitoring data. Model testing may yield invalidation, but that is an opportunity to learn and to improve the conceptual and numerical models. Examples of post-audits and of the application of a solute-transport model to a radioactive waste disposal site illustrate deficiencies in model calibration, prediction, and validation.

  2. How Do Novice Art Teachers Define and Implement Meaningful Curriculum?

    ERIC Educational Resources Information Center

    Bain, Christina; Newton, Connie; Kuster, Deborah; Milbrandt, Melody

    2010-01-01

    Four researchers collaborated on this qualitative case study that examined 11 first-year novice art teachers' understanding and implementation of meaningful curriculum. Participants were selected through a criterion method sampling strategy; the subjects were employed in rural, urban, and suburban public school districts. In order to conduct a…

  3. Ukrainian Teacher Candidates Develop Dispositions of Socially Meaningful Activity

    ERIC Educational Resources Information Center

    Koshmanova, Tetyana; Ravchyna, Tetyana

    2010-01-01

    This study addresses how the method of peer mediation can be utilized by teacher educators in developing students' attitudes to care for those who are in need, how to actively participate in socially meaningful activity without any expectation of reward, and how to contribute to the democratic development of a post-conflict country via active…

  4. Making Social Studies Meaningful to Elementary Students.

    ERIC Educational Resources Information Center

    Klein, Susan

    1982-01-01

    Describes a unit on Ancient Greece designed to make social studies meaningful to fourth and fifth graders. Individual projects and group activities helped students learn about ancient Greek culture. (AM)

  5. Method and apparatus for assessing cardiovascular risk

    NASA Technical Reports Server (NTRS)

    Albrecht, Paul (Inventor); Bigger, J. Thomas (Inventor); Cohen, Richard J. (Inventor)

    1998-01-01

    The method for assessing risk of an adverse clinical event includes detecting a physiologic signal in the subject and determining from the physiologic signal a sequence of intervals corresponding to time intervals between heart beats. The long-time structure of fluctuations in the intervals over a time period of more than fifteen minutes is analyzed to assess risk of an adverse clinical event. In a preferred embodiment, the physiologic signal is an electrocardiogram and the time period is at least fifteen minutes. A preferred method for analyzing the long-time structure variability in the intervals includes computing the power spectrum and fitting the power spectrum to a power law dependence on frequency over a selected frequency range such as 10⁻⁴ to 10⁻² Hz. Characteristics of the long-time structure fluctuations in the intervals are used to assess risk of an adverse clinical event.
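
    A rough sketch of the described analysis, assuming the interbeat-interval series has already been evenly resampled (here at an illustrative 1 Hz) and using a log-log least-squares fit over the 10⁻⁴ to 10⁻² Hz band named above; the synthetic data and sampling rate are assumptions, not details from the patent. The abstract states that characteristics of this fit are then used to assess risk; the random-walk-like series here only exercises the computation.

```python
import numpy as np
from scipy.signal import periodogram

# Illustrative 24 h of RR intervals resampled at 1 Hz (hypothetical data).
fs = 1.0                                     # resampling frequency, Hz
rng = np.random.default_rng(1)
rr = 0.9 + 0.05 * np.cumsum(rng.normal(size=86_400)) / 300.0   # seconds

# Power spectrum of the (mean-removed) interval series.
f, pxx = periodogram(rr - rr.mean(), fs=fs)

# Fit log10(P) = beta * log10(f) + c over the 1e-4 to 1e-2 Hz band.
band = (f >= 1e-4) & (f <= 1e-2)
beta, c = np.polyfit(np.log10(f[band]), np.log10(pxx[band]), 1)
print(f"power-law exponent beta = {beta:.2f}")
```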

  6. Validation Methods for Direct Writing Assessment.

    ERIC Educational Resources Information Center

    Miller, M. David; Crocker, Linda

    1990-01-01

    This review of methods for validating writing assessments was conceptualized within a framework suggested by S. Messick (1989) that included five operational components of construct validation: (1) content representativeness; (2) structural fidelity; (3) nomological validity; (4) criterion-related validity; and (5) nomothetic span. (SLD)

  7. A New Method to Assess Eye Dominance

    ERIC Educational Resources Information Center

    Valle-Inclan, Fernando; Blanco, Manuel J.; Soto, David; Leiros, Luz

    2008-01-01

    People usually show a stable preference for one of their eyes when monocular viewing is required ("sighting dominance") or under dichoptic stimulation conditions ("sensory eye-dominance"). Current procedures to assess this "eye dominance" are prone to error. Here we present a new method that provides a continuous measure of eye dominance and…

  8. Methods of Assessment for Affected Family Members

    ERIC Educational Resources Information Center

    Orford, Jim; Templeton, Lorna; Velleman, Richard; Copello, Alex

    2010-01-01

    The article begins by making the point that a good assessment of the needs and circumstances of family members is important if previous neglect of affected family members is to be reversed. The methods we have used in research studies are then described. They include a lengthy semi-structured interview covering seven topic areas and standard…

  9. A classification scheme for risk assessment methods.

    SciTech Connect

    Stamp, Jason Edwin; Campbell, Philip LaRoche

    2004-08-01

    This report presents a classification scheme for risk assessment methods. This scheme, like all classification schemes, provides meaning by imposing a structure that identifies relationships. Our scheme is based on two orthogonal aspects: level of detail and approach. The resulting structure is shown in Table 1 and is explained in the body of the report. This report imposes structure on the set of risk assessment methods in order to reveal their relationships and thus optimize their usage. We present a two-dimensional structure in the form of a matrix, using three abstraction levels for the rows and three approaches for the columns. For each of the nine cells in the matrix we identify the method type by name and example. The matrix helps the user understand: (1) what to expect from a given method, (2) how it relates to other methods, and (3) how best to use it. Each cell in the matrix represents a different arrangement of strengths and weaknesses. Those arrangements shift gradually as one moves through the matrix, with each cell optimal for a particular situation. The intention of this report is to enable informed use of the methods, so that the method chosen is optimal for the situation at hand. The matrix, with type names in the cells, is introduced in Table 2 on page 13 below. Unless otherwise stated, we use the word 'method' in this report to refer to a 'risk assessment method', though oftentimes we use the full phrase. The terms 'risk assessment' and 'risk management' are close enough that we do not attempt to distinguish them in this report. The remainder of this report is organized as follows. In Section 2 we provide context for this report

  10. New method for assessing risks of email

    NASA Astrophysics Data System (ADS)

    Raja, Seyyed H.; Afrooz, Farzad

    2013-03-01

    E-mail has become a basic requirement of everyday correspondence between individuals. It is therefore important that e-mail messages, servers, and clients, and the correspondence exchanged between people, offer acceptable security so that users can trust the technology. In the information age, many financial and non-financial transactions are carried out electronically and data are exchanged via the internet, so theft and manipulation of data can impose exorbitant costs in terms of integrity as well as financial, political, economic, and cultural damage. E-mail correspondence is no exception, and it is very important. Our review found no existing method that focuses specifically on risk assessment for e-mail systems. We therefore examine assessment approaches developed for other systems, together with their strengths and weaknesses, and then apply Convery's method, originally developed for assessing network risks, to the assessment of e-mail risks. At the end of the paper we offer a dedicated table for e-mail risk assessment.

  11. A meaningful MESS (Medical Education Scholarship Support)

    PubMed Central

    Whicker, Shari A.; Engle, Deborah L.; Chudgar, Saumil; DeMeo, Stephen; Bean, Sarah M.; Narayan, Aditee P.; Grochowski, Colleen O'Connor; Nagler, Alisa

    2016-01-01

    Background Graduate medical education faculty bear the responsibility of demonstrating active research and scholarship; however, faculty who choose education-focused careers may face unique obstacles related to the lack of promotion tracks, funding, career options, and research opportunities. Our objective was to address education research and scholarship barriers by providing a collaborative peer-mentoring environment and improve the production of research and scholarly outputs. Methods We describe a Medical Education Scholarship Support (MESS) group created in 2013. MESS is an interprofessional, multidisciplinary peer-mentoring education research community that now spans multiple institutions. This group meets monthly to address education research and scholarship challenges. Through this process, we develop new knowledge, research, and scholarly products, in addition to meaningful collaborations. Results MESS originated with eight founding members, all of whom still actively participate. MESS has proven to be a sustainable unfunded local community of practice, encouraging faculty to pursue health professions education (HPE) careers and fostering scholarship. We have met our original objectives that involved maintaining 100% participant retention; developing increased knowledge in at least seven content areas; and contributing to the development of 13 peer-reviewed publications, eight professional presentations, one Masters of Education project, and one educational curriculum. Discussion The number of individuals engaged in HPE research continues to rise. The MESS model could be adapted for use at other institutions, thereby reducing barriers HPE researchers face, providing an effective framework for trainees interested in education-focused careers, and having a broader impact on the education research landscape. PMID:27476538

  12. Nondestructive methods to assess dental implant stability

    NASA Astrophysics Data System (ADS)

    Rizzo, Piervincenzo; Tabrizi, Aydin; Berhanu, Bruk; Ochs, Mark W.

    2012-04-01

    The robustness and reliability of two nondestructive evaluation methods to assess dental prosthesis stability are presented. The study addresses a growing need in the biomedical area for robust, reliable, and noninvasive methods to assess the bone interface of dental and orthopedic implants for clinical diagnosis and direct prognosis. The methods are based on the electromechanical impedance method and on the propagation of solitary waves. Nobel Biocare® 4.3 x 13 mm implants were entrenched inside bovine rib bones that were immersed in normal saline for 24 hours before testing in order to avoid dehydration and to simulate the physiologic osmolarity of the corticocancellous bone and plasma. Afterwards the bones were immersed in a solution of nitric acid to allow material degradation, inversely simulating a bone-healing process. This process was monitored by bonding a piezoceramic transducer (PZT) to the abutment and measuring the electrical admittance of the PZT over time. For comparison, the calcium loss of the bones after acid immersion was measured over time by atomic absorption spectroscopy. Moreover, a novel transducer based on the generation and detection of highly nonlinear solitary waves was used to assess the stiffness of the abutment-implant-bone system. In these experiments the PZT's conductance and some of the solitary wave parameters were found to be sensitive to the degradation of the bones and correlated with the bone calcium loss over time.

  13. Methods of geodiversity assessment and their application

    NASA Astrophysics Data System (ADS)

    Zwoliński, Zbigniew; Najwer, Alicja; Giardino, Marco

    2016-04-01

    The concept of geodiversity has rapidly gained the approval of scientists around the world (Wiedenbein 1993, Sharples 1993, Kiernan 1995, 1996, Dixon 1996, Eberhard 1997, Kostrzewski 1998, 2011, Gray 2004, 2008, 2013, Zwoliński 2004, Serrano, Ruiz-Flano 2007, Gordon et al. 2012). However, recognition of the problem is still at an early stage, and the concept is in effect not explicitly understood and defined (Najwer, Zwoliński 2014). Despite widespread use of the concept, little progress has been made in its assessment and mapping; only over roughly the last decade have methods for geodiversity assessment and its visualisation begun to be investigated, even though many have acknowledged the importance of geodiversity evaluation (Kozłowski 2004, Gray 2004, Reynard, Panizza 2005, Zouros 2007, Pereira et al. 2007, Hjort et al. 2015). Hitherto, only a few authors have taken up these methodological issues. Geodiversity maps are created for a variety of purposes and their methods are therefore quite manifold. The literature contains examples of geodiversity maps applied for geotourism purposes, based mainly on geological diversity, in order to indicate the scale of an area's tourist attractiveness (Zwoliński 2010, Serrano and Gonzalez Trueba 2011, Zwoliński and Stachowiak 2012). In some studies, geodiversity maps were created and applied to investigate the spatial or genetic relationships with the richness of particular natural environmental components (Burnett et al. 1998, Silva 2004, Jačková, Romportl 2008, Hjort et al. 2012, 2015, Mazurek et al. 2015, Najwer et al. 2014). There are also a few examples of geodiversity assessment for the purposes of geoconservation and the efficient management and planning of protected natural areas (Serrano and Gonzalez Trueba 2011, Pellitero et al. 2011, 2014, Jaskulska et al. 2013, Melelli 2014, Martinez-Grana et al. 2015). The most popular method of assessing the diversity of abiotic components of the natural

  14. Landfill mining: Developing a comprehensive assessment method.

    PubMed

    Hermann, Robert; Wolfsberger, Tanja; Pomberger, Roland; Sarc, Renato

    2016-11-01

    In Austria, the first basic technological and economic examinations of mass-waste landfills, with the purpose of recovering secondary raw materials, have been carried out by the 'LAMIS - Landfill Mining Österreich' pilot project. A main focus of its research, and the subject of this article, is the first conceptual design of a comprehensive assessment method for landfill mining plans, one that includes not only monetary factors (such as costs and proceeds) but also non-monetary ones, such as the concerns of adjoining owners or the environmental impact. Detailed literature reviews, the identification of influences and system boundaries to be included in planning landfill mining, several expert workshops, and talks with landfill operators were carried out, followed by division of the whole assessment method into a preliminary and a main assessment. The preliminary assessment is carried out with a questionnaire that rates the legal feasibility, risk, and expenditure of a landfill mining project. The results of this questionnaire are compiled in a portfolio chart that is used to recommend for or against further assessment. If a detailed main assessment is recommended, defined economic criteria are rated by net present value calculations, while ecological and socio-economic criteria are examined in a utility analysis and then transferred into a utility-net present value chart. If this chart does not support a definite statement on the feasibility of the project, the results are further examined in a cost-effectiveness analysis, in which the benefit of the particular landfill mining project per unit of capital (the utility-net present value ratio) is determined to make a final, distinct statement on the overall benefit of the project.
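
    For orientation only, the following Python sketch shows how a net present value and a utility-per-capital-unit ratio of the kind mentioned above might be computed; the cash flows, discount rate, and utility score are entirely hypothetical, and the calculation is a generic one rather than the LAMIS assessment method itself.

```python
def npv(cash_flows, rate):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical project: large up-front excavation cost, later proceeds
# from recovered secondary raw materials and reclaimed landfill volume.
cash_flows = [-5_000_000, 1_200_000, 1_500_000, 1_500_000, 1_800_000]
project_npv = npv(cash_flows, rate=0.05)

# Hypothetical utility score from a weighted utility analysis of
# ecological and socio-economic criteria (0-100 scale).
utility_score = 62.0

# A utility-per-capital-unit style ratio, used here only as a rough screening figure.
ratio = utility_score / abs(project_npv)
print(round(project_npv), round(ratio, 6))
```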

  15. Method and apparatus to assess compartment syndrome

    NASA Technical Reports Server (NTRS)

    Ueno, Toshiaki (Inventor); Hargens, Alan R. (Inventor); Yost, William T. (Inventor)

    2008-01-01

    A method and apparatus for measuring pressure buildup in a body compartment that encases muscular tissue. The method includes assessing the body compartment configuration and identifying the effect of pulsatile components on at least one compartment dimension. This process is used in preventing tissue necrosis, and in decisions of whether to perform surgery on the body compartment for prevention of Compartment Syndrome. An apparatus is used for measuring excess pressure in the body compartment having components for imparting ultrasonic waves such as a transducer, placing the transducer to impart the ultrasonic waves, capturing the reflected imparted ultrasonic waves, and converting them to electrical signals, a pulsed phase-locked loop device for assessing a body compartment configuration and producing an output signal, and means for mathematically manipulating the output signal to thereby categorize pressure build-up in the body compartment from the mathematical manipulations.

  16. Method of assessing heterogeneity in images

    SciTech Connect

    Jacob, Richard E.; Carson, James P.

    2016-08-23

    A method of assessing heterogeneity in images is disclosed. 3D images of an object are acquired. The acquired images may be filtered and masked. Iterative decomposition is performed on the masked images to obtain image subdivisions that are relatively homogeneous. Comparative analysis, such as variogram analysis or correlogram analysis, is performed of the decomposed images to determine spatial relationships between regions of the images that are relatively homogeneous.
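
    As an illustration of the comparative-analysis step, the sketch below computes an empirical semivariogram for a one-dimensional sequence of region means; this is a generic textbook variogram calculation with hypothetical values, not the patented decomposition procedure.

```python
import numpy as np

def empirical_semivariogram(values, max_lag):
    """Empirical semivariogram gamma(h) = mean of 0.5*(z_i - z_{i+h})^2
    for a 1-D sequence of (e.g., region-mean) values."""
    values = np.asarray(values, float)
    lags = np.arange(1, max_lag + 1)
    gamma = [0.5 * np.mean((values[h:] - values[:-h]) ** 2) for h in lags]
    return lags, np.array(gamma)

# Hypothetical region-mean intensities from a decomposed image.
vals = [10.2, 10.5, 10.1, 11.0, 12.4, 12.6, 12.1, 13.0]
print(empirical_semivariogram(vals, max_lag=3))
```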

  17. Lake ecosystem health assessment: indicators and methods.

    PubMed

    Xu, F L; Tao, S; Dawson, R W; Li, P G; Cao, J

    2001-09-01

    A set of ecological indicators covering structural, functional, and system-level aspects was proposed for lake ecosystem health assessment, according to the structural, functional, and system-level responses of lake ecosystems to chemical stresses including acidification, eutrophication, and copper, oil, and pesticide contamination. The structural indicators included phytoplankton cell size and biomass, zooplankton body size and biomass, species diversity, macro- and micro-zooplankton biomass, the zooplankton/phytoplankton ratio, and the macrozooplankton/microzooplankton ratio. The functional indicators encompassed the algal C assimilation ratio, resource use efficiency, community production, the gross production/respiration (P/R) ratio, the gross production/standing crop biomass (P/B) ratio, and the standing crop biomass/unit energy flow (B/E) ratio. The ecosystem-level indicators consisted of ecological buffer capacities, energy, and structural energy. Based on these indicators, a direct measurement method (DMM) and an ecological modeling method (EMM) for lake ecosystem health assessment were developed. The DMM procedures were designed to: (1) identify key indicators; (2) measure directly or calculate indirectly the selected indicators; and (3) assess ecosystem health on the basis of the indicator values. The EMM procedures were designed to: (1) determine the structure and complexity of the ecological model according to the lake's ecosystem structure; (2) establish an ecological model by designing a conceptual diagram, establishing model equations, and estimating model parameters; (3) compare the simulated values of important state variables and process rates with actual observations; (4) calculate ecosystem health indicators using the ecological model; and (5) assess lake ecosystem health according to the values of the ecological indicators. The results of a case study demonstrated that both methods provided similar results which corresponded with the

  18. Noninvasive methods for the assessment of photoageing.

    PubMed

    Wheller, Laura; Lin, Lynlee L; Chai, Eric; Sinnya, Sudipta; Soyer, H Peter; Prow, Tarl W

    2013-11-01

    Although histopathological dermal elastosis is the current gold standard for the diagnosis of photoageing, noninvasive methods for quantifying the amount of photodamage to skin are clearly preferable. This study is the first to survey five noninvasive methods of assessing photoageing (clinical examination, spectrophotometry, skin surface topography, reflectance confocal microscopy and fluorescence lifetime imaging microscopy) in the same individual. Measurements for each noninvasive method were compared across nine individuals from three participant groups ('younger', 'older' and 'photodamaged') in UV-protected volar and UV-exposed dorsal forearm skin. Overall, participants in the younger group had the lowest measures of photodamage, while those in the photodamaged group had the highest, as indicated by each modality. The five noninvasive strategies surveyed in this study may demonstrate potential as a suitable methodology for the quantification of photoageing. The advantage of such noninvasive methods is that they allow for skin visualisation in vivo and repeated assessments of the same site. The main limitation of this study was its small sample size, which may have precluded many findings of statistical significance.

  19. Quantitative methods in assessment of neurologic function.

    PubMed

    Potvin, A R; Tourtellotte, W W; Syndulko, K; Potvin, J

    1981-01-01

    Traditionally, neurologists have emphasized qualitative techniques for assessing results of clinical trials. However, in recent years qualitative evaluations have been increasingly augmented by quantitative tests for measuring neurologic functions pertaining to mental state, strength, steadiness, reactions, speed, coordination, sensation, fatigue, gait, station, and simulated activities of daily living. Quantitative tests have long been used by psychologists for evaluating asymptomatic function, assessing human information processing, and predicting proficiency in skilled tasks; however, their methodology has never been directly assessed for validity in a clinical environment. In this report, relevant contributions from the literature on asymptomatic human performance and that on clinical quantitative neurologic function are reviewed and assessed. While emphasis is focused on tests appropriate for evaluating clinical neurologic trials, evaluations of tests for reproducibility, reliability, validity, and examiner training procedures, and for effects of motivation, learning, handedness, age, and sex are also reported and interpreted. Examples of statistical strategies for data analysis, scoring systems, data reduction methods, and data display concepts are presented. Although investigative work still remains to be done, it appears that carefully selected and evaluated tests of sensory and motor function should be an essential factor for evaluating clinical trials in an objective manner.

  20. Normalization in sustainability assessment: Methods and implications

    DOE PAGES

    Pollesch, N. L.; Dale, Virginia H.

    2016-08-08

    One approach to assessing progress towards sustainability makes use of diverse indicators spanning the environmental, social, and economic dimensions of the system being studied. Given the use of multiple indicators and the inherent complexity entailed in interpreting several metrics, aggregation of sustainability indicators is a common step after indicator measures are quantified. Diverse indicators have different units of measurement, and normalization is the procedure employed to transform differing indicator measures onto similar scales or to unit-free measures. It is often difficult for stakeholders to make clear connections between specific indicator measurements and resulting aggregate scores of sustainability. Normalization can also create implicit weightings of indicator measures that are independent of actual stakeholder preference or explicit weighting. This paper explores normalization methods utilized in sustainability assessment including ratio normalization, target normalization, Z-score normalization, and unit equivalence normalization. A mathematical analysis of the impact of changes in raw indicator data measurements on an aggregate sustainability score is developed. Theoretical results are clarified through a case study of data used in assessment of progress towards bioenergy sustainability. Advantages and drawbacks associated with different normalization schemes are discussed within the context of sustainability assessment.

  1. Normalization in sustainability assessment: Methods and implications

    SciTech Connect

    Pollesch, N. L.; Dale, Virginia H.

    2016-08-08

    One approach to assessing progress towards sustainability makes use of diverse indicators spanning the environmental, social, and economic dimensions of the system being studied. Given the use of multiple indicators and the inherent complexity entailed in interpreting several metrics, aggregation of sustainability indicators is a common step after indicator measures are quantified. Diverse indicators have different units of measurement, and normalization is the procedure employed to transform differing indicator measures onto similar scales or to unit-free measures. It is often difficult for stakeholders to make clear connections between specific indicator measurements and resulting aggregate scores of sustainability. Normalization can also create implicit weightings of indicator measures that are independent of actual stakeholder preference or explicit weighting. This paper explores normalization methods utilized in sustainability assessment including ratio normalization, target normalization, Z-score normalization, and unit equivalence normalization. A mathematical analysis of the impact of changes in raw indicator data measurements on an aggregate sustainability score is developed. Theoretical results are clarified through a case study of data used in assessment of progress towards bioenergy sustainability. Advantages and drawbacks associated with different normalization schemes are discussed within the context of sustainability assessment.
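
    For concreteness, the sketch below implements common textbook forms of three of the normalization schemes named above (ratio, target, and Z-score normalization); the indicator data and target value are hypothetical, and the exact functional forms used in the paper may differ in detail.

```python
import numpy as np

def ratio_normalize(x, reference=None):
    """Divide each indicator value by a reference value (here: the maximum)."""
    x = np.asarray(x, float)
    reference = x.max() if reference is None else reference
    return x / reference

def target_normalize(x, target):
    """Express each indicator value relative to a stakeholder-defined target."""
    return np.asarray(x, float) / target

def z_score_normalize(x):
    """Center on the mean and scale by the sample standard deviation."""
    x = np.asarray(x, float)
    return (x - x.mean()) / x.std(ddof=1)

# Hypothetical indicator: soil carbon (Mg/ha) at several sites, target = 60 Mg/ha.
soil_c = [42.0, 55.0, 61.0, 48.0]
print(ratio_normalize(soil_c))
print(target_normalize(soil_c, target=60.0))
print(z_score_normalize(soil_c))
```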

  2. Environmental assessment of used oil management methods.

    PubMed

    Boughton, Bob; Horvath, Arpad

    2004-01-15

    The 1 billion gal of used oil generated in the U.S. each year are managed in three primary ways: rerefined into base oil for reuse, distilled into marine diesel oil fuel, and marketed as untreated fuel oil. Management of used oil has local, regional and global impacts. Because of the globally distributed nature of fuel markets, used oil as fuel has localized and regional impacts in many areas. In this paper, the human health and environmental tradeoffs of the management options are quantified and characterized. The goal of this study was to assess and compare the environmental impacts and benefits of each management method in a product end-of-life scenario using a life-cycle assessment (LCA) approach. A life-cycle inventory showed that 800 mg of zinc and 30 mg of lead air emissions may result from the combustion of 1 L of used oil as fuel (50-100 times that of crude-derived fuel oils). As an example, up to 136 Mg of zinc and 5 Mg of lead air emissions may be generated from combustion of over 50 M gal of California-generated used oil each year. While occurring elsewhere, these levels are significant (of the same magnitude as reported total stationary source emissions in California). An impact assessment showed that heavy metals-related toxicity dominates the comparison of management methods. Zinc and lead emissions were the primary contributors to the terrestrial and human toxicity impact potentials that were calculated to be 150 and 5 times higher, respectively, for used oil combusted as fuel than for rerefining or distillation. Low profits and weak markets increasingly drive the used oil management method selection toward the untreated fuel oil market. Instead, both the rerefining and distillation methods and associated product markets should be strongly supported because they are environmentally preferable to the combustion of unprocessed used oil as fuel.
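
    The scaling from the per-litre emission factor to the statewide total can be checked with simple arithmetic. The sketch below reproduces the order of magnitude of the zinc figure quoted above, assuming 50 million gallons are burned and the 800 mg/L factor applies throughout; the small gap to the reported 136 Mg presumably reflects the exact volume and combustion-share assumptions used in the paper.

```python
LITRES_PER_GALLON = 3.785
ZINC_MG_PER_L = 800          # emission factor from the life-cycle inventory above
used_oil_gal = 50e6          # "over 50 M gal" of California-generated used oil

zinc_mg = used_oil_gal * LITRES_PER_GALLON * ZINC_MG_PER_L
zinc_Mg = zinc_mg / 1e9      # 1 Mg (tonne) = 1e9 mg
print(f"~{zinc_Mg:.0f} Mg of zinc air emissions")   # ~151 Mg, same order as the cited 136 Mg
```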

  3. Using Meaningful Interpretation and Chunking to Enhance Memory: The Case of Chinese Character Learning

    ERIC Educational Resources Information Center

    Xu, Xiaoqiu; Padilla, Amado M.

    2013-01-01

    Learning and retaining Chinese characters are often considered to be the most challenging elements in learning Chinese as a foreign language. Applying the theory of meaningful interpretation, the chunking mnemonic technique, and the linguistic features of Chinese characters, this study examines whether the method of meaningful interpretation and…

  4. Autonomic pain: features and methods of assessment

    SciTech Connect

    Gandhavadi, B.; Rosen, J.S.; Addison, R.G.

    1982-01-01

    The distribution of pain originating in the sympathetic nervous system does not match the somatic segmental sensory distribution at the postganglionic level. The two types of distribution are separate and different. At the preganglionic level, fibers show typical segmental sensory distribution, which resembles but is not identical to somatic segmental sensory distribution. Instead, sympathetic pain has its own distribution along the vascular supply and some peripheral nerves. It cannot be called atypical in terms of somatic segmental sensory distribution. Several techniques are available to assess autonomic function in cases of chronic pain. Infrared thermography is superior to any other physiologic or pharmacologic method to assess sympathetic function. Overactivity of sympathetic function in the area of pain is the probable cause of temperature reduction in that area. Accordingly it would appear that in cases in which thermography demonstrates decreased temperature, sympathetic block or sympathectomy would provide relief from the pain.

  5. Meaningful Use of School Health Data

    ERIC Educational Resources Information Center

    Johnson, Kathleen Hoy; Bergren, Martha Dewey

    2011-01-01

    Meaningful use (MU) of Electronic Health Records (EHRs) is an important development in the safety and security of health care delivery in the United States. Advancement in the use of EHRs occurred with the passage of the American Recovery and Reinvestment Act of 2009, which provides incentives for providers to support adoption and use of EHRs.…

  6. Meaningful Learning in the Cooperative Classroom

    ERIC Educational Resources Information Center

    Sharan, Yael

    2015-01-01

    Meaningful learning is based on more than what teachers transmit; it promotes the construction of knowledge out of learners' experience, feelings and exchanges with other learners. This educational view is based on the constructivist approach to learning and the co-operative learning approach. Researchers and practitioners in various…

  7. Making Biodiversity Meaningful through Environmental Education.

    ERIC Educational Resources Information Center

    van Weelie, Daan; Wals, Arjen E. J.

    2002-01-01

    Explores the crossroads between science education and environmental education and presents a framework for tapping the potential of biodiversity for environmental education. Outlines a number of stepping stones for making biodiversity meaningful to learners. From the perspective of environmental education, the ill-defined nature of biodiversity is a…

  8. On Meaningful Measurement: Concepts, Technology and Examples.

    ERIC Educational Resources Information Center

    Cheung, K. C.

    This paper discusses how concepts and procedural skills in problem-solving tasks, as well as affects and emotions, can be subjected to meaningful measurement (MM), based on a multisource model of learning and a constructivist information-processing theory of knowing. MM refers to the quantitative measurement of conceptual and procedural knowledge…

  9. Values: The Natural Result of Meaningful Relationships.

    ERIC Educational Resources Information Center

    Beedy, Jeff; Gordon, John

    1997-01-01

    The New Hampton School (New Hampshire) uses the holistic Total Human Development Model with both students and faculty to instill principles focused on relationships as central to teaching and learning; respect and responsibility; sense of community; whole person development within the community; compassion and service; and the meaningful,…

  10. Rangeland assessment and monitoring methods guide - an interactive tool for selecting methods for assessment and monitoring

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A common concern expressed by land managers and biologists is that they do not know enough about the strengths and weaknesses of different field and remote-sensing methods for rangeland assessment and monitoring. The Methods Guide is a web-based tool and resource that provides researchers and manage...

  11. Electrophysiological methods for hearing assessment in pinnipeds

    NASA Astrophysics Data System (ADS)

    Reichmuth Kastak, Colleen; Kastak, David; Finneran, James J.; Houser, Dorian S.; Supin, Alexander

    2005-04-01

    Studies of auditory sensitivity in marine mammals generally rely on behavioral psychophysical methodologies. While these studies are the standard for hearing assessment in marine mammals, data are limited to only a few individuals representing a small proportion of species. Accumulating research on dolphin auditory physiology has resulted in the refinement of electrophysiological methods appropriate for odontocete cetaceans and an increase in available audiometric information. Electrophysiological methods have also been used with pinnipeds, but there are significant gaps in our understanding of pinniped auditory physiology that must be addressed before such approaches can be broadly applied to investigations of pinniped hearing. We are taking a bottom-up approach to developing suitable methods for evoked potential audiometry in pinnipeds, including technology transfer from studies of cetaceans and other mammals, mapping of response amplitude with respect to recording positions on the skull, characterization of responses in relationship to various stimulus types and presentation parameters, and determination of whether useful frequency-specific data can be reliably obtained using electrophysiological methods. This approach is being taken with representative pinniped species including California sea lions (Zalophus californianus), harbor seals (Phoca vitulina), and northern elephant seals (Mirounga angustirostris) using both training and chemical immobilization techniques. [Work supported by NOPP.]

  12. ALARA ASSESSMENT OF SETTLER SLUDGE SAMPLING METHODS

    SciTech Connect

    NELSEN LA

    2009-01-30

    The purpose of this assessment is to compare underwater and above-water settler sludge sampling methods to determine if the added cost of underwater sampling, for the sole purpose of worker dose reduction, is justified. Initial planning for sludge sampling included container, settler and knock-out-pot (KOP) sampling. Due to the significantly higher dose consequence of KOP sludge, a decision was made to sample KOP sludge underwater to achieve worker dose reductions. Additionally, initial plans were to utilize the underwater sampling apparatus for settler sludge. Since there are no longer plans to sample KOP sludge, the decision for underwater sampling of settler sludge needs to be revisited. The present sampling plan calls for spending an estimated $2,500,000 to design and construct a new underwater sampling system (per A21 C-PL-001 RevOE). This evaluation compares and contrasts the present method of above-water sampling with the underwater method planned by the Sludge Treatment Project (STP) and determines whether settler samples can be taken using the existing sampling cart (with potentially minor modifications) while maintaining doses to workers As Low As Reasonably Achievable (ALARA), eliminating the need for costly redesigns, testing and personnel retraining.

  13. Methods for probabilistic assessments of geologic hazards

    SciTech Connect

    Mann, C.J.

    1987-01-01

    Although risk analysis today is considered to include three separate aspects: (1) identifying sources of risk, (2) estimating probabilities quantitatively, and (3) evaluating consequences of risk, here, only estimation of probabilities for natural geologic events, processes, and phenomena is addressed. Ideally, evaluation of potential future hazards includes an objective determination of probabilities that has been derived from past occurrences of identical events or components contributing to complex processes or phenomena. In practice, however, data which would permit objective estimation of those probabilities of interest may not be adequate, or may not even exist. Another problem that arises normally, regardless of the extent of data, is that risk assessments involve estimating extreme values. Rarely are extreme values accurately predictable even when an empirical frequency distribution is established well by data. In the absence of objective methods for estimating probabilities of natural events or processes, subjective probabilities for the hazard must be established through Bayesian methods, expert opinion, or Delphi methods. Uncertainty of every probability determination must be stated for each component of an event, process, or phenomenon. These uncertainties also must be propagated through the quantitative analysis so that a realistic estimate of total uncertainty can be associated with each final probability estimate for a geologic hazard.
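
    Where objective frequencies are lacking, subjective probability distributions can still be propagated through an analysis numerically. The sketch below is a generic Monte Carlo illustration, not a method taken from the cited report: two independently uncertain component probabilities, drawn from assumed beta distributions, are combined and summarized with a credible interval.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Subjective (expert-elicited) uncertainty on two independent components,
# expressed here as illustrative beta distributions.
p_trigger = rng.beta(2, 50, n)      # annual probability of a triggering event
p_failure = rng.beta(5, 20, n)      # conditional probability of failure given the event

p_hazard = p_trigger * p_failure    # combined annual probability of the hazard

lo, med, hi = np.percentile(p_hazard, [5, 50, 95])
print(f"median {med:.4f}, 90% credible interval [{lo:.4f}, {hi:.4f}]")
```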

  14. Toward More Substantively Meaningful Automated Essay Scoring

    ERIC Educational Resources Information Center

    Ben-Simon, Anat; Bennett, Randy Elliott

    2007-01-01

    This study evaluated a "substantively driven" method for scoring NAEP writing assessments automatically. The study used variations of an existing commercial program, e-rater[R], to compare the performance of three approaches to automated essay scoring: a "brute-empirical" approach in which variables are selected and weighted solely according to…

  15. An interpolation method for stream habitat assessments

    USGS Publications Warehouse

    Sheehan, Kenneth R.; Welsh, Stuart A.

    2015-01-01

    Interpolation of stream habitat can be very useful for habitat assessment. Using a small number of habitat samples to predict the habitat of larger areas can reduce time and labor costs as long as it provides accurate estimates of habitat. The spatial correlation of stream habitat variables such as substrate and depth improves the accuracy of interpolated data. Several geographical information system interpolation methods (natural neighbor, inverse distance weighted, ordinary kriging, spline, and universal kriging) were used to predict substrate and depth within a 210.7-m2 section of a second-order stream based on 2.5% and 5.0% sampling of the total area. Depth and substrate were recorded for the entire study site and compared with the interpolated values to determine the accuracy of the predictions. In all instances, the 5% interpolations were more accurate for both depth and substrate than the 2.5% interpolations, achieving accuracies of up to 95% and 92%, respectively. Interpolations of depth based on 2.5% sampling attained accuracies of 49–92%, whereas those based on 5% sampling attained accuracies of 57–95%. Natural neighbor interpolation was more accurate than the inverse distance weighted, ordinary kriging, spline, and universal kriging approaches. Our findings demonstrate the effective use of minimal amounts of small-scale data for the interpolation of habitat over large areas of a stream channel. Use of this method will provide time and cost savings in the assessment of large sections of rivers as well as functional maps to aid the habitat-based management of aquatic species.
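
    The workflow described, interpolating sparse habitat samples and scoring the predictions against a full survey, can be sketched in a few lines. The snippet below is an illustration with synthetic depths and a hand-rolled inverse-distance-weighted interpolator, not the GIS methods (natural neighbor, kriging, spline) or the data used in the study.

      # Minimal inverse-distance-weighted (IDW) interpolation sketch for sparse
      # stream-depth samples, with a simple accuracy check against known values.
      # Synthetic data only; the study used GIS interpolation tools, not this IDW.
      import numpy as np

      def idw(xy_known, z_known, xy_query, power=2.0):
          """Predict z at query points as a distance-weighted mean of known samples."""
          preds = []
          for q in xy_query:
              d = np.linalg.norm(xy_known - q, axis=1)
              if np.any(d < 1e-9):                 # query coincides with a sample point
                  preds.append(z_known[np.argmin(d)])
                  continue
              w = 1.0 / d**power
              preds.append(np.sum(w * z_known) / np.sum(w))
          return np.array(preds)

      rng = np.random.default_rng(0)
      grid = np.array([(x, y) for x in range(20) for y in range(10)], dtype=float)
      depth = 0.3 + 0.05 * grid[:, 0] + rng.normal(0, 0.02, len(grid))   # synthetic "true" depths

      sample_idx = rng.choice(len(grid), size=int(0.05 * len(grid)), replace=False)  # 5% sample
      pred = idw(grid[sample_idx], depth[sample_idx], grid)

      accuracy = 100 * (1 - np.mean(np.abs(pred - depth) / depth))
      print(f"mean prediction accuracy: {accuracy:.1f}%")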

  16. Meaningful use and meaningful curricula: a survey of health informatics programmes in the USA.

    PubMed

    Koong, Kai S; Ngafeeson, Madison N; Liu, Lai C

    2012-01-01

    The introduction of the US government's Meaningful Use criteria carries with it many implications, including for the training curriculum of healthcare personnel. This study examines 108 health informatics degree programmes across the USA. First, the courses offered are identified and classified into generic classes. Next, these generic groupings are mapped to two important frameworks: the Learning to Manage Health Information (LMHI) academic framework; and the Meaningful Use criteria policy framework. Results suggest that while current curricula seemed acceptable in addressing the Meaningful Use Stage 1 objectives, there was insufficient evidence that these curricula could support Meaningful Use Stage 2 and Stage 3. These findings are useful to both curriculum developers and the healthcare industry. Curriculum developers in health informatics must match curricula to the emerging healthcare policy goals, and the healthcare industry must now recruit highly trained and qualified personnel to help achieve these new goals of data capture, data sharing and intelligence.

  17. Progress and challenge in meeting meaningful use at an integrated delivery network.

    PubMed

    Bowes, Watson A

    2011-01-01

    Intermountain Healthcare hospitals and providers are eligible for approximately $95 million in incentives from the Health Information Technology for Economic and Clinical Health Act (HITECH), which requires that hospitals and providers use a certified electronic health record (EHR) in a meaningful way. This paper describes our progress in readying legacy systems for certification, including measuring and filling gaps in EHR functionality. Also addressed are some of the challenges and successes in meeting meaningful use. Methods for measuring and tracking levels of clinician meaningful use behaviors, and our most recent results impacting meaningful use behaviors in a large integrated delivery network, are described. We identified 20 EHR requirements we can certify now, 16 requirements with minor issues to resolve, and 38 requirements which are still in some state of development. We also identified 6 meaningful use workflows that will require significant work to bring all of our hospitals and providers above the measure requirement.

  18. Progress and Challenge in Meeting Meaningful Use at an Integrated Delivery Network

    PubMed Central

    Bowes, Watson A.

    2011-01-01

    Intermountain Healthcare hospitals and providers are eligible for approximately $95 million in incentives from the Health Information Technology for Economic and Clinical Health Act (HITECH), which requires that hospitals and providers use a certified electronic health record (EHR) in a meaningful way. This paper describes our progress in readying legacy systems for certification, including measuring and filling gaps in EHR functionality. Also addressed are some of the challenges and successes in meeting meaningful use. Methods for measuring and tracking levels of clinician meaningful use behaviors, and our most recent results impacting meaningful use behaviors in a large integrated delivery network, are described. We identified 20 EHR requirements we can certify now, 16 requirements with minor issues to resolve, and 38 requirements which are still in some state of development. We also identified 6 meaningful use workflows that will require significant work to bring all of our hospitals and providers above the measure requirement. PMID:22195065

  19. Meaningfully Integrating Big Earth Science Data

    NASA Astrophysics Data System (ADS)

    Pebesma, E. J.; Stasch, C.

    2014-12-01

    After taking the technical hurdles to deal with big earth observation data, large challenges remain in avoiding operations that are not meaningful. Examples of this are summing things that should not be summed, or interpolating phenomena that should not be interpolated. We propose a description of data at the level of their meaning, to allow for notifying data users when meaningless operations are being executed. We present a prototypical implementation in R.

  20. Methods for regional assessment of geothermal resources

    USGS Publications Warehouse

    Muffler, P.; Cataldi, R.

    1978-01-01

    A consistent, agreed-upon terminology is prerequisite for geothermal resource assessment. Accordingly, we propose a logical, sequential subdivision of the "geothermal resource base", accepting its definition as all the thermal energy in the earth's crust under a given area, measured from mean annual temperature. That part of the resource base which is shallow enough to be tapped by production drilling is termed the "accessible resource base", and it in turn is divided into "useful" and "residual" components. The useful component (i.e. the thermal energy that could reasonably be extracted at costs competitive with other forms of energy at some specified future time) is termed the "geothermal resource". This in turn is divided into "economic" and "subeconomic" components, based on conditions existing at the time of assessment. In the format of a McKelvey diagram, this logic defines the vertical axis (degree of economic feasibility). The horizontal axis (degree of geologic assurance) contains "identified" and "undiscovered" components. "Reserve" is then designated as the identified economic resource. All categories should be expressed in units of thermal energy, with resource and reserve figures calculated at wellhead, prior to the inevitable large losses inherent in any practical thermal use or in conversion to electricity. Methods for assessing geothermal resources can be grouped into 4 classes: (a) surface thermal flux, (b) volume, (c) planar fracture and (d) magmatic heat budget. The volume method appears to be most useful because (1) it is applicable to virtually any geologic environment, (2) the required parameters can in principle be measured or estimated, (3) the inevitable errors are in part compensated and (4) the major uncertainties (recoverability and resupply) are amenable to resolution in the foreseeable future. The major weakness in all the methods rests in the estimation of how much of the accessible resource base can be extracted at some time in the future.
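
    At its simplest, the volume method favored above amounts to integrating stored heat over a reservoir volume, Q = ρc · V · (T − T_ref), and applying a recovery factor. The sketch below evaluates that relation with hypothetical parameter values (not figures from the paper) to show the order-of-magnitude character of such estimates.

      # Back-of-the-envelope "volume method" sketch for a geothermal resource,
      # using hypothetical reservoir parameters (not values from the assessment).
      rho_c = 2.7e6          # volumetric heat capacity of rock + water, J/(m^3*K)
      area = 10.0e6          # reservoir area, m^2  (10 km^2)
      thickness = 2000.0     # reservoir thickness, m
      T_res = 200.0          # mean reservoir temperature, deg C
      T_ref = 15.0           # mean annual surface temperature, deg C
      recovery = 0.1         # assumed recoverable fraction (highly uncertain)

      volume = area * thickness                          # m^3
      stored_heat = rho_c * volume * (T_res - T_ref)     # J, heat stored in this volume
      resource = recovery * stored_heat                  # J, crude wellhead resource estimate

      print(f"stored heat : {stored_heat:.2e} J")
      print(f"recoverable : {resource:.2e} J (~{resource / 3.6e15:.0f} TWh thermal)")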

  1. Evaluation of methods to assess physical activity

    NASA Astrophysics Data System (ADS)

    Leenders, Nicole Y. J. M.

    Epidemiological evidence has accumulated that demonstrates that the amount of physical activity-related energy expenditure during a week reduces the incidence of cardiovascular disease, diabetes, obesity, and all-cause mortality. To further understand the amount of daily physical activity and related energy expenditure that are necessary to maintain or improve the functional health status and quality of life, instruments that estimate total (TDEE) and physical activity-related energy expenditure (PAEE) under free-living conditions should be determined to be valid and reliable. Without evaluation of the various methods that estimate TDEE and PAEE against the doubly labeled water (DLW) method in females, there will be significant limitations on assessing the efficacy of physical activity interventions on health status in this population. A triaxial accelerometer (Tritrac-R3D (TT)), a uniaxial activity monitor (Computer Science and Applications Inc. (CSA)), a Yamax Digiwalker-500 step counter (YX-stepcounter), heart rate responses (HR method) and a 7-d Physical Activity Recall questionnaire (7-d PAR) were compared with the "criterion method" of DLW during a 7-d period in female adults. DLW-TDEE was underestimated on average by 9, 11 and 15% using the 7-d PAR, HR method and TT, respectively. The underestimation of DLW-PAEE by the 7-d PAR was 21%, compared to 47% and 67% for the TT and YX-stepcounter. Approximately 56% of the variance in DLW-PAEE·kg^-1 is explained by the registration of body movement with accelerometry. A larger proportion of the variance in DLW-PAEE·kg^-1 was explained by jointly incorporating information from the vertical and horizontal movement measured with the CSA and Tritrac-R3D (r^2 = 0.87). Although only a small amount of the variance in DLW-PAEE·kg^-1 is explained by the number of steps taken per day, because of its low cost and ease of use the Yamax stepcounter is useful in studies promoting daily walking. Thus, studies involving the
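
    The two summary statistics that carry most of the weight in comparisons like this one, mean percent bias relative to the DLW criterion and variance explained (r^2), can be reproduced on any paired data set. The sketch below uses invented numbers purely to show the calculation; it does not reproduce the study's data.

      # Generic sketch of mean percent bias vs. a criterion method (DLW) and
      # variance explained (r^2). The arrays are made-up illustrative values.
      import numpy as np

      dlw_paee = np.array([900., 1100., 750., 1300., 980., 1150.])    # kcal/day, criterion
      monitor_paee = np.array([700., 880., 560., 1000., 760., 900.])  # kcal/day, field method

      bias_pct = 100 * np.mean((monitor_paee - dlw_paee) / dlw_paee)
      r = np.corrcoef(monitor_paee, dlw_paee)[0, 1]

      print(f"mean bias vs DLW        : {bias_pct:+.1f}%")
      print(f"variance explained (r^2): {r**2:.2f}")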

  2. A reliable method for assessing rotational power.

    PubMed

    Andre, Matthew J; Fry, Andrew C; Heyrman, Melissa A; Hudy, Andrea; Holt, Brady; Roberts, Cody; Vardiman, J Phillip; Gallagher, Philip M

    2012-03-01

    Rotational core training is said to be beneficial for rotational power athletes. To date, no method has been proposed for the reliable assessment of rotational power. Therefore, our purpose was to determine the test-retest reliability of kinetic and kinematic rotational characteristics of a pulley system when performing a rotational exercise of the axial skeleton in the transverse plane, to find out if this would be a reliable tool for evaluating rotational power. Healthy, college-aged men (n = 8) and women (n = 15) reported for 3 testing sessions. The participants were seated on a box, and they held the handle with both arms extended in front of their body, starting their motion with their torso rotated toward the machine. All the participants rotated their torso forcefully until they reached 180° of rotation, and then slowly returned to the starting position, 3 times per trial, with 3 loads: 9% body weight (BW), 12% BW, and 15% BW. The repetition with the greatest power for each trial at each load was analyzed. The mean peak power (watts) for all the subjects was 20.09 ± 7.16 (9% BW), 26.17 ± 8.6 (12% BW), and 30.74 ± 11.022 (15% BW) in the first testing session and 22.3 ± 8.087 (9% BW), 28.7 ± 11.295 (12% BW), and 33.52 ± 12.965 (15% BW) in the second testing session, with intraclass correlation coefficients of 0.97 (9% BW), 0.94 (12% BW), and 0.95 (15% BW). When the participants were separated by sex, there were no significant differences between groups. Based on these results, a pulley system and an external dynamometer can be used together as a reliable research tool to assess rotational power.
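
    Test-retest reliability of the kind reported here is conventionally summarized with an intraclass correlation coefficient. The sketch below computes ICC(3,1) (two-way mixed, consistency, single measurement) from a subjects-by-sessions matrix of synthetic peak-power values; the authors' exact ICC form and data are not reproduced.

      # ICC(3,1) from a subjects x sessions matrix (synthetic peak-power values).
      import numpy as np

      power = np.array([[20.1, 22.3],    # watts, session 1 vs session 2, one row per subject
                        [26.5, 28.0],
                        [18.9, 21.4],
                        [31.2, 33.0],
                        [24.4, 25.1]])

      n, k = power.shape
      grand = power.mean()
      ss_rows = k * np.sum((power.mean(axis=1) - grand) ** 2)   # between-subjects
      ss_cols = n * np.sum((power.mean(axis=0) - grand) ** 2)   # between-sessions
      ss_total = np.sum((power - grand) ** 2)
      ss_err = ss_total - ss_rows - ss_cols

      ms_rows = ss_rows / (n - 1)
      ms_err = ss_err / ((n - 1) * (k - 1))
      icc_3_1 = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
      print(f"ICC(3,1) = {icc_3_1:.2f}")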

  3. Repositioning assessment: giving students the 'choice' of assessment methods.

    PubMed

    Garside, Joanne; Nhemachena, Jean Z Z; Williams, Julie; Topping, Annie

    2009-03-01

    Assessment is a feature of all academic courses undertaken for award in the United Kingdom (UK). The nature of the strategies that can be used to assess learning varies a great deal, from the traditional unseen examination to more student-centered innovative approaches. A review of a pre-registration nursing curriculum in preparation for re-approval by the University and the Nursing and Midwifery Council (NMC) provided an opportunity to re-appraise existing assessment strategies. Concurrently, a parallel review process was underway for a postgraduate continuing professional development (CPD) programme for registered nurses. Recognising that students have individual strengths, weaknesses, learning styles and preferences concerning mode of assessment, offering a choice of assessment was proposed as a strategy for inculcating the values of student centeredness and responsibility for learning. Although recommended in the literature (Race et al., 2005; Cowan, J., 2006. On Becoming an Innovative University Teacher: Reflection in Action. Open University Press, Maidenhead), no empirical evidence of benefit in support of this initiative was identified. This paper presents an account of the journey taken by the project team from the original idea, through navigation of the quality assurance processes associated with curriculum approval, to delivery of choice of assessment on two modules embedded in undergraduate pre-registration and post-registration CPD programmes, together with an evaluation undertaken with the students. Offering students a choice of assessment appears to be well received, and this approach has subsequently been adopted as a feature of other health and social care professional programmes offered in the institution.

  4. Healthcare BI: a tool for meaningful analysis.

    PubMed

    Rohloff, Rose

    2011-05-01

    Implementing an effective business intelligence (BI) system requires organizationwide preparation and education to allow for meaningful analysis of information. Hospital executives should take steps to ensure that: Staff entering data are proficient in how the data are to be used for decision making, and integration is based on clean data from primary sources of entry. Managers have the business acumen required for effective data analysis. Decision makers understand how multidimensional BI offers new ways of analysis that represent significant improvements over historical approaches using static reporting.

  5. Assessment Methods and Tools for Architectural Curricula

    ERIC Educational Resources Information Center

    Marriott, Christine A.

    2012-01-01

    This research explores the process of assessment within the arena of architectural education by questioning traditional assessment practices and probing into the conditions that necessitate change. As architectural educators we have opened our studios to digital technologies for the purposes of design and representation, but how do we measure and…

  6. Meaningful Understanding and Systems Thinking in Organic Chemistry: Validating Measurement and Exploring Relationships

    NASA Astrophysics Data System (ADS)

    Vachliotis, Theodoros; Salta, Katerina; Tzougraki, Chryssa

    2014-04-01

    The purpose of this study was twofold: first, to develop and validate assessment schemes for assessing 11th grade students' meaningful understanding of organic chemistry concepts, as well as their systems thinking skills in the domain; and second, to explore the relationship between the two constructs of interest based on students' performance on the applied assessment framework. For this purpose, (a) various types of objective assessment questions were developed and evaluated for assessing meaningful understanding, (b) a specific type of systemic assessment questions (SAQs) was developed and evaluated for assessing systems thinking skills, and (c) the association between students' responses on the applied assessment schemes was explored. The results indicated that properly designed objective questions can effectively capture aspects of students' meaningful understanding. It was also found that the SAQs can elicit systems thinking skills in the context of a formalistic systems thinking theoretical approach. Moreover, a significant relationship was observed between students' responses on the two assessment strategies. This research provides evidence that students' systems thinking level within a science domain is significantly related to their meaningful understanding of relative science concepts.

  7. Literature as a meaningful life laboratory.

    PubMed

    Kurakin, Dmitry

    2010-09-01

    Meaningful life is emotionally marked off. That is the general point that Johansen (IPBS: Integrative Psychological & Behavioral Science 44, 2010) makes, and it is of great importance. Fictional abstractions serve to make the point even more salient. As an example, I have examined Borges' famous fiction story. Along with Johansen's examples, it provides an informative case for exploring the symbolic mechanisms that bind meaning with emotions. This particular mode of analysis treats poetry and literature in general as a "meaningful life laboratory". The ways of explaining the emotional effect that art exercises on people, which have been disclosed within this laboratory, nevertheless divide into two significantly distinct types, which I designate "referential" and "substantive". The former appeals to something that has already been charged with emotional power, whereas the latter takes effect by means of special symbolic mechanisms creating the emotional experience within the situation. Johansen, who tends to explain the emotions exerted by art without leaving the semiotic perspective, is drawn towards the "referential" type of explanation. Drawing on discussions in the theory of metaphor and on Robert Witkin's sociological theory of the arts, the insufficiency of the "referential" explanation is demonstrated. To overcome the monopoly of "referential" explanations of emotional engagement, in literature in particular, means breaking away from a mode of reasoning that makes endless references to "something else", presupposing something already significant whose effects are merely shared.

  8. Revised Methods for Worker Risk Assessment

    EPA Pesticide Factsheets

    EPA is updating and changing the way it approaches pesticide risk assessments. This new approach will result in more comprehensive and consistent evaluation of potential risks of food use pesticides, non-food use pesticides, and occupational exposures.

  9. Travel Efficiency Assessment Method: Three Case Studies

    EPA Pesticide Factsheets

    This slide presentation summarizes three case studies EPA conducted in partnership with Boston, Kansas City, and Tucson, to assess the potential benefits of employing travel efficiency strategies in these areas.

  10. Using Corporate-Based Methods To Assess Technical Communication Programs.

    ERIC Educational Resources Information Center

    Faber, Brenton; Bekins, Linn; Karis, Bill

    2002-01-01

    Investigates methods of program assessment used by corporate learning sites and profiles value added methods as a way to both construct and evaluate academic programs in technical communication. Examines and critiques assessment methods from corporate training environments including methods employed by corporate universities and value added…

  11. A New Rapid ISTAR Assessment Method

    DTIC Science & Technology

    2004-06-01

    ...about each target group to satisfy the constraints of that CCIR. This is assessed using a deterministic ‘logic engine’, which considers each sensor on each search group in turn, assessing its capability against each target group within the constraints of the CCIR in question. Six tests are... Target posture: an important property of a target is its posture. The posture of each target group is designated by a flag defining the proportion...

  12. Meaningful Understanding and Systems Thinking in Organic Chemistry: Validating Measurement and Exploring Relationships

    ERIC Educational Resources Information Center

    Vachliotis, Theodoros; Salta, Katerina; Tzougraki, Chryssa

    2014-01-01

    The purpose of this study was dual: First, to develop and validate assessment schemes for assessing 11th grade students' meaningful understanding of organic chemistry concepts, as well as their systems thinking skills in the domain. Second, to explore the relationship between the two constructs of interest based on students' performance…

  13. Towards a mathematical theory of meaningful communication

    PubMed Central

    Corominas-Murtra, Bernat; Fortuny, Jordi; Solé, Ricard V.

    2014-01-01

    Meaning has been left outside most theoretical approaches to information in biology. Functional responses based on an appropriate interpretation of signals have been replaced by a probabilistic description of correlations between emitted and received symbols. This assumption leads to potential paradoxes, such as the presence of a maximum of information associated with a channel that creates completely wrong interpretations of the signals. Game-theoretic models of language evolution and other studies considering embodied communicating agents show that the correct (meaningful) match resulting from agent-agent exchanges is always achieved and that natural systems obviously solve the problem correctly. Inspired by the concept of the duality of the communicative sign stated by the Swiss linguist Ferdinand de Saussure, here we present a complete description of the minimal system necessary to measure the amount of information that is consistently decoded. Several consequences of our developments are investigated, such as the uselessness, for communication among autonomous agents, of a certain amount of information that is nevertheless properly transmitted. PMID:24699312
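
    The paradox referred to above, that a channel can carry maximal Shannon information while producing entirely wrong interpretations, follows from the fact that mutual information is invariant under a consistent relabeling of the received symbols. The sketch below is a generic numerical illustration of that point, not the formalism developed in the paper.

      # Mutual information is blind to a consistent mislabeling of symbols: a channel
      # that always swaps A<->B carries as many bits as a perfect one, even though
      # zero messages are interpreted correctly. Generic illustration only.
      import numpy as np

      def mutual_information(joint):
          """Mutual information (bits) from a joint probability table p(sent, received)."""
          px = joint.sum(axis=1, keepdims=True)
          py = joint.sum(axis=0, keepdims=True)
          nz = joint > 0
          return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

      perfect = np.array([[0.5, 0.0],    # 'A' sent -> 'A' received
                          [0.0, 0.5]])   # 'B' sent -> 'B' received
      swapped = np.array([[0.0, 0.5],    # 'A' sent -> 'B' received
                          [0.5, 0.0]])   # 'B' sent -> 'A' received

      for name, joint in [("perfect decoding", perfect), ("systematic swap", swapped)]:
          correct = np.trace(joint)      # probability the interpretation matches the intent
          print(f"{name:17s}: I = {mutual_information(joint):.2f} bit, "
                f"correctly decoded fraction = {correct:.2f}")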

  14. Facilitating critical discourse through "meaningful disagreement" online.

    PubMed

    Dalley-Hewer, Jayne; Clouder, Deanne Lynn; Jackson, Ann; Goodman, Simon; Bluteau, Patricia; Davies, Bernadette

    2012-11-01

    This paper is concerned with identifying ways of facilitating "meaningful disagreement" amongst students in interprofessional online discussion forums. It builds on previous research that identified a trend toward polite agreement and only limited evidence of disagreement in this setting. Given the suggestion that disagreement indicates a deeper level of engagement in group discussion and therefore leads to deeper learning, our aim was to critique the pedagogical approach adopted by analyzing whether we were promoting a particular interprofessional discourse amongst students that favored agreement and therefore limited potential learning. Agreement in this context has been conceptualized as a form of online interprofessional "netiquette" existing amongst participants. Findings suggest that creating an online context for critical discourse is challenging; however, the careful construction of learning outcomes, trigger material/resources and learning activities, as well as attention to students' stage of study and life experience, can provoke the desired effects.

  15. Towards a mathematical theory of meaningful communication

    NASA Astrophysics Data System (ADS)

    Corominas-Murtra, Bernat; Fortuny, Jordi; Solé, Ricard V.

    2014-04-01

    Meaning has been left outside most theoretical approaches to information in biology. Functional responses based on an appropriate interpretation of signals have been replaced by a probabilistic description of correlations between emitted and received symbols. This assumption leads to potential paradoxes, such as the presence of a maximum of information associated with a channel that creates completely wrong interpretations of the signals. Game-theoretic models of language evolution and other studies considering embodied communicating agents show that the correct (meaningful) match resulting from agent-agent exchanges is always achieved and that natural systems obviously solve the problem correctly. Inspired by the concept of the duality of the communicative sign stated by the Swiss linguist Ferdinand de Saussure, here we present a complete description of the minimal system necessary to measure the amount of information that is consistently decoded. Several consequences of our developments are investigated, such as the uselessness, for communication among autonomous agents, of a certain amount of information that is nevertheless properly transmitted.

  16. Streamflow Duration Assessment Method for the Pacific Northwest

    EPA Pesticide Factsheets

    The Streamflow Duration Assessment Method for the Pacific Northwest is a scientific tool developed by EPA and the U.S. Army Corps of Engineers to provide a rapid assessment framework to distinguish between ephemeral, intermittent and perennial streams.

  17. How meaningful are heritability estimates of liability?

    PubMed Central

    Morris, Nathan J.

    2013-01-01

    It is commonly acknowledged that estimates of heritability from classical twin studies have many potential shortcomings. Despite this, in the post-GWAS era, these heritability estimates have come to be a continual source of interest and controversy. While the heritability estimates of a quantitative trait are subject to a number of biases, in this article we will argue that the standard statistical approach to estimating the heritability of a binary trait relies on some additional untestable assumptions which, if violated, can lead to badly biased estimates. The ACE liability threshold model assumes at its heart that each individual has an underlying liability or propensity to acquire the binary trait (e.g., disease), and that this unobservable liability is multivariate normally distributed. We investigated a number of different scenarios violating this assumption such as the existence of a single causal diallelic gene and the existence of a dichotomous exposure. For each scenario, we found that substantial asymptotic biases can occur, which no increase in sample size can remove. Asymptotic biases as much as four times larger than the true value were observed, and numerous cases also showed large negative biases. Additionally, regions of low bias occurred for specific parameter combinations. Using simulations, we also investigated the situation where all of the assumptions of the ACE liability model are met. We found that commonly used sample sizes can lead to biased heritability estimates. Thus, even if we are willing to accept the meaningfulness of the liability construct, heritability estimates under the ACE liability threshold model may not accurately reflect the heritability of this construct. The points made in this paper should be kept in mind when considering the meaningfulness of a reported heritability estimate for any specific disease. PMID:23867980

  18. Assessment of seismic margin calculation methods

    SciTech Connect

    Kennedy, R.P.; Murray, R.C.; Ravindra, M.K.; Reed, J.W.; Stevenson, J.D.

    1989-03-01

    Seismic margin review of nuclear power plants requires that the High Confidence of Low Probability of Failure (HCLPF) capacity be calculated for certain components. The candidate methods for calculating the HCLPF capacity as recommended by the Expert Panel on Quantification of Seismic Margins are the Conservative Deterministic Failure Margin (CDFM) method and the Fragility Analysis (FA) method. The present study evaluated these two methods using some representative components in order to provide further guidance in conducting seismic margin reviews. It is concluded that either of the two methods could be used for calculating HCLPF capacities. 21 refs., 9 figs., 6 tabs.
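
    For the Fragility Analysis route mentioned above, the HCLPF capacity is commonly approximated from a component's median seismic capacity and its randomness and uncertainty log-standard deviations as HCLPF ≈ Am·exp[−1.645(βR + βU)]. The sketch below evaluates that standard relation for illustrative numbers; the components and values assessed in the report are not reproduced.

      # Common fragility-analysis approximation of HCLPF capacity:
      #   HCLPF ~= Am * exp(-1.645 * (beta_R + beta_U))
      # Illustrative numbers only; not the components assessed in the report.
      import math

      Am = 1.2        # median ground-acceleration capacity, g
      beta_R = 0.25   # log-standard deviation, randomness
      beta_U = 0.35   # log-standard deviation, uncertainty

      hclpf = Am * math.exp(-1.645 * (beta_R + beta_U))
      print(f"HCLPF capacity ~ {hclpf:.2f} g")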

  19. Methods and Strategies: The Reflective Assessment Technique

    ERIC Educational Resources Information Center

    Kennedy, Cathleen; Long, Kathy; Camins, Arthur

    2009-01-01

    Teachers often rely on student questions, their observations of students at work, and their own intuition to monitor how well students are learning. However, the authors found that teachers learn more about their students when they use the four-step Reflective Assessment Technique that draws on guided teacher reflections to inform classroom…

  20. Spiritual Assessment in Counseling: Methods and Practice

    ERIC Educational Resources Information Center

    Oakes, K. Elizabeth; Raphel, Mary M.

    2008-01-01

    Given the widely expanding professional and empirical support for integrating spirituality into counseling, the authors present a practical discussion for raising counselors' general awareness and skill in the critical area of spiritual assessment. A discussion of rationale, measurement, and clinical practice is provided along with case examples.…

  1. Peer Assessment in Small Groups: A Comparison of Methods

    ERIC Educational Resources Information Center

    Baker, Diane F.

    2008-01-01

    This article describes and evaluates several peer evaluation tools used to assess student behavior in small groups. The two most common methods of peer assessment found in the literature are rating scales and single score methods. Three peer evaluation instruments, two using a rating scale and one using a single score method, are tested in several…

  2. REVIEW OF RAPID METHODS FOR ASSESSING WETLAND CONDITION

    EPA Science Inventory

    We evaluated over 40 wetland rapid assessment methods developed for a variety of purposes for their use in the assessment of ecological integrity or ecosystem condition. Four criteria were used to screen methods: 1) the method can be used to measure condition, 2) it is truly rap...

  3. Assessment of plaque assay methods for alphaviruses.

    PubMed

    Juarez, Diana; Long, Kanya C; Aguilar, Patricia; Kochel, Tadeusz J; Halsey, Eric S

    2013-01-01

    Viruses from the Alphavirus genus are responsible for numerous arboviral diseases impacting human health throughout the world. Confirmation of acute alphavirus infection is based on viral isolation, identification of viral RNA, or a fourfold or greater increase in antibody titers between acute and convalescent samples. In convalescence, the specificity of antibodies to an alphavirus may be confirmed by plaque reduction neutralization test. To identify the best method for alphavirus and neutralizing antibody recognition, the standard solid method using a cell monolayer overlay with 0.4% agarose and the semisolid method using a cell suspension overlay with 0.6% carboxymethyl cellulose (CMC) were evaluated. Mayaro virus, Una virus, Venezuelan equine encephalitis virus (VEEV), and Western equine encephalitis virus (WEEV) were selected to be tested by both methods. The results indicate that the solid method showed consistently greater sensitivity than the semisolid method. Also, a "semisolid-variant method" using a 0.6% CMC overlay on a cell monolayer was assayed for virus titration. This method provided the same sensitivity as the solid method for VEEV and also had greater sensitivity for WEEV titration. Modifications in plaque assay conditions significantly affect results, and therefore the performance of each new assay needs to be evaluated.
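
    Whichever overlay is used, counted plaques are converted to a titer in the usual way, titer (PFU/mL) = mean plaque count / (dilution × inoculum volume). The sketch below shows that arithmetic with hypothetical counts; it is not data from the evaluation.

      # Standard plaque-count to titer conversion (hypothetical counts, any overlay).
      plaque_counts = [42, 38, 45]      # plaques per well at the counted dilution
      dilution = 1e-5                   # 10^-5 dilution of the stock
      inoculum_ml = 0.1                 # mL of diluted virus added per well

      mean_count = sum(plaque_counts) / len(plaque_counts)
      titer = mean_count / (dilution * inoculum_ml)
      print(f"titer ~ {titer:.2e} PFU/mL")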

  4. Cyber Assessment Methods For SCADA Security

    SciTech Connect

    May Robin Permann; Kenneth Rohde

    2005-06-01

    The terrorist attacks of September 11, 2001 brought to light threats and vulnerabilities that face the United States. In response, the U.S. Government is directing the effort to secure the nation's critical infrastructure by creating programs to implement the National Strategy to Secure Cyberspace (1). One part of this effort involves assessing Supervisory Control and Data Acquisition (SCADA) systems. These systems are essential to the control of critical elements of our national infrastructure, such as electric power, oil, and gas production and distribution. Since their incapacitation or destruction would have a debilitating impact on the defense or economic security of the United States, one of the main objectives of this program is to identify vulnerabilities and encourage the public and private sectors to work together to design secure control systems that resolve these weaknesses. This paper describes vulnerability assessment methodologies used in ongoing research and assessment activities designed to identify and resolve vulnerabilities so as to improve the security of the nation's critical infrastructure.

  5. Cyber Assessment Methods for SCADA Security

    SciTech Connect

    Not Available

    2005-06-01

    The terrorist attacks of September 11, 2001 brought to light threats and vulnerabilities that face the United States. In response, the U.S. Government is directing the effort to secure the nation's critical infrastructure by creating programs to implement the National Strategy to Secure Cyberspace (1). One part of this effort involves assessing Supervisory Control and Data Acquisition (SCADA) systems. These systems are essential to the control of critical elements of our national infrastructure, such as electric power, oil, and gas production and distribution. Since their incapacitation or destruction would have a debilitating impact on the defense or economic security of the United States, one of the main objectives of this program is to identify vulnerabilities and encourage the public and private sectors to work together to design secure control systems that resolve these weaknesses. This paper describes vulnerability assessment methodologies used in ongoing research and assessment activities designed to identify and resolve vulnerabilities so as to improve the security of the nation's critical infrastructure.

  6. Assessment of User Home Location Geoinference Methods

    SciTech Connect

    Harrison, Joshua J.; Bell, Eric B.; Corley, Courtney D.; Dowling, Chase P.; Cowell, Andrew J.

    2015-05-29

    This study presents an assessment of multiple approaches to determine the home and/or other important locations to a Twitter user. In this study, we present a unique approach to the problem of geotagged data sparsity in social media when performing geoinferencing tasks. Given the sparsity of explicitly geotagged Twitter data, the ability to perform accurate and reliable user geolocation from a limited number of geotagged posts has proven to be quite useful. In our survey, we have achieved accuracy rates of over 86% in matching Twitter user profile locations with their inferred home locations derived from geotagged posts.
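
    A crude baseline for the home-location task assessed here is to report the densest spatial cluster of a user's geotagged posts, for example the most frequent grid cell after rounding coordinates. The sketch below implements that baseline on made-up coordinates; it is an assumption for illustration, not one of the approaches evaluated in the study.

      # Naive home-location baseline: the most frequent ~1 km grid cell among a
      # user's geotagged posts. Illustration only; not the methods assessed here.
      from collections import Counter

      def infer_home(geotags, precision=2):
          """geotags: list of (lat, lon); precision=2 gives roughly 1 km grid cells."""
          cells = Counter((round(lat, precision), round(lon, precision)) for lat, lon in geotags)
          (cell, count), = cells.most_common(1)
          return cell, count / len(geotags)

      posts = [(42.361, -71.057), (42.362, -71.058), (42.360, -71.059),   # hypothetical data
               (40.713, -74.006), (42.3615, -71.0575)]
      home, support = infer_home(posts)
      print(f"inferred home cell {home} (supported by {support:.0%} of posts)")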

  7. Evaluation of Dynamic Methods for Earthwork Assessment

    NASA Astrophysics Data System (ADS)

    Vlček, Jozef; Ďureková, Dominika; Zgútová, Katarína

    2015-05-01

    Rapid development of road construction imposes demands for fast, high-quality methods of earthwork quality evaluation. Dynamic methods are now adopted in numerous civil engineering sectors, and evaluation of earthwork quality in particular can be sped up using dynamic equipment. This paper presents the results of parallel measurements with chosen devices for determining the level of compaction of soils. The measurements were used to develop correlations between the values obtained from the various apparatuses. The correlations show that the examined apparatuses are suitable for examining the compaction level of fine-grained soils, with consideration of the boundary conditions of the equipment used. The presented methods are quick, results can be obtained immediately after measurement, and they are thus suitable in cases where construction works have to be performed in a short period of time.
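
    Device-to-device correlations of the kind developed in the paper reduce to fitting a linear relation between paired readings. The sketch below does this for two hypothetical compaction-measuring devices; the numbers are invented and the paper's apparatuses and measurements are not reproduced.

      # Fitting a linear correlation between paired readings of two compaction
      # devices (hypothetical values, e.g., a dynamic vs. a static modulus test).
      import numpy as np

      device_a = np.array([18.0, 25.0, 31.0, 40.0, 47.0, 55.0])   # MPa, dynamic modulus
      device_b = np.array([20.0, 29.0, 35.0, 46.0, 52.0, 63.0])   # MPa, static modulus

      slope, intercept = np.polyfit(device_a, device_b, 1)
      r = np.corrcoef(device_a, device_b)[0, 1]
      print(f"device_b ~ {slope:.2f} * device_a + {intercept:.2f}  (r = {r:.3f})")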

  8. Assessment of heliostat control system methods

    SciTech Connect

    Pearson, J; Chen, B

    1986-01-01

    Automatic control and communication between the major components in solar thermal central receiver systems is critically needed for the optimal and safe operation of these systems. This report assesses novel and cost-effective approaches to the control of the solar collector field and the communication with the central plant computer/control system. The authors state that radio frequency and carrier-current communication approaches have the greatest potential to improve cost-effectiveness relative to the current dedicated control wiring approaches. In addition, based on their analysis, the authors recommend distributed control, which is becoming an industry-wide control standard, for the individual concentrators within the collector field rather than the current central computer approach. The vastly improved cost and performance of microprocessors and other solid-state electronics, which has continually and rapidly proceeded for more than five years, is the major reason for these conclusions.

  9. Assessing Institutional Effectiveness: Issues, Methods, and Management.

    ERIC Educational Resources Information Center

    Fincher, Cameron, Ed.

    This collection of 12 papers was presented at a 1987 conference at which speakers presented personal perspectives on institutional effectiveness. Papers are organized under three major headings: "Managing Quality: Methods and Outcomes,""Institutional Response," and "Special Issues." Titles include: (1) "Managing the Meaning of Institutional…

  10. STANDARDIZED ASSESSMENT METHOD (SAM) FOR RIVERINE MACROINVERTEBRATES

    EPA Science Inventory

    During the summer of 2001, twelve sites were sampled for macroinvertebrates, six each on the Great Miami and Kentucky Rivers. Sites were chosen in each river from those sampled in the 1999 methods comparison study to reflect a disturbance gradient. At each site, a total distanc...

  11. Enabling Meaningful Affiliation Searches in the ADS

    NASA Astrophysics Data System (ADS)

    Grant, C. S.; Thompson, D. M.; Chyla, R.; Holachek, A.; Accomazzi, A.; Henneken, E. A.; Kurtz, M. J.; Luker, J.; Murray, S. S.

    2015-04-01

    For many years, users have wanted to search affiliations in the ADS in order to build institutional databases and to help with author disambiguation. Although we currently provide this capability upon request, we have yet to incorporate it as part of the operational Abstract Service. This is because it cannot be used reliably, primarily because of the lack of uniform representation of the affiliation data. In an effort to make affiliation searches more meaningful, we have designed a two-tiered hierarchy of standard institutional names based on Ringgold identifiers, with the expectation that this will enable us to implement a search by institution, which will work for the vast majority of institutions. It is our intention to provide the capability of searching the ADS both by standard affiliation name and original affiliation string, as well as to enable autosuggest of affiliations as a means of helping to disambiguate author identification. Some institutions are likely to require manual work, and we encourage interested librarians to assist us in standardizing the representation of their institutions in the affiliation field.

  12. Information and Perception of Meaningful Patterns

    PubMed Central

    Del Viva, Maria M.; Punzi, Giovanni; Benedetti, Daniele

    2013-01-01

    The visual system needs to extract the most important elements of the external world from a large flux of information in a short time for survival purposes. It is widely believed that in performing this task, it operates a strong data reduction at an early stage, by creating a compact summary of relevant information that can be handled by further levels of processing. In this work we formulate a model of early vision based on a pattern-filtering architecture, partly inspired by high-speed digital data reduction in experimental high-energy physics (HEP). This allows a much stronger data reduction than models based just on redundancy reduction. We show that optimizing this model for best information preservation under tight constraints on computational resources yields surprisingly specific a-priori predictions for the shape of biologically plausible features, and for experimental observations on fast extraction of salient visual features by human observers. Interestingly, applying the same optimized model to HEP data acquisition systems based on pattern-filtering architectures leads to specific a-priori predictions for the relevant data patterns that these devices extract from their inputs. These results suggest that the limitedness of computing resources can play an important role in shaping the nature of perception, by determining what is perceived as “meaningful features” in the input data. PMID:23894422

  13. Information and perception of meaningful patterns.

    PubMed

    Del Viva, Maria M; Punzi, Giovanni; Benedetti, Daniele

    2013-01-01

    The visual system needs to extract the most important elements of the external world from a large flux of information in a short time for survival purposes. It is widely believed that in performing this task, it operates a strong data reduction at an early stage, by creating a compact summary of relevant information that can be handled by further levels of processing. In this work we formulate a model of early vision based on a pattern-filtering architecture, partly inspired by high-speed digital data reduction in experimental high-energy physics (HEP). This allows a much stronger data reduction than models based just on redundancy reduction. We show that optimizing this model for best information preservation under tight constraints on computational resources yields surprisingly specific a-priori predictions for the shape of biologically plausible features, and for experimental observations on fast extraction of salient visual features by human observers. Interestingly, applying the same optimized model to HEP data acquisition systems based on pattern-filtering architectures leads to specific a-priori predictions for the relevant data patterns that these devices extract from their inputs. These results suggest that the limitedness of computing resources can play an important role in shaping the nature of perception, by determining what is perceived as "meaningful features" in the input data.

  14. Survey of Methods to Assess Workload

    DTIC Science & Technology

    1979-08-01

    ...by A. H. Roscoe (p. 83); Chapter 12, Brain Waves and the Enhancement of Pilot Performance, by G. H. Lawrence (p. 93); Chapter 13, Pupillometric Methods of Workload Evaluation... a neurophysiological condition located in the central nervous system, more specifically in the brain-stem reticular activation system. His conceptualization of fatigue as a condition of the central nervous system is based on early studies of the role of the brain-stem reticular formation in...

  15. Qualitative Assessment of Inquiry-Based Teaching Methods

    ERIC Educational Resources Information Center

    Briggs, Michael; Long, George; Owens, Katrina

    2011-01-01

    A new approach to teaching method assessment using student focused qualitative studies and the theoretical framework of mental models is proposed. The methodology is considered specifically for the advantages it offers when applied to the assessment of inquiry-based teaching methods. The theoretical foundation of mental models is discussed, and…

  16. A Novel Method for Learner Assessment Based on Learner Annotations

    ERIC Educational Resources Information Center

    Noorbehbahani, Fakhroddin; Samani, Elaheh Biglar Beigi; Jazi, Hossein Hadian

    2013-01-01

    Assessment is one of the most essential parts of any instructive learning process which aims to evaluate a learner's knowledge about learning concepts. In this work, a new method for learner assessment based on learner annotations is presented. The proposed method exploits the M-BLEU algorithm to find the most similar reference annotations…

  17. Assessment of dental plaque by optoelectronic methods

    NASA Astrophysics Data System (ADS)

    Negrutiu, Meda-Lavinia; Sinescu, Cosmin; Bortun, Cristina Maria; Levai, Mihaela-Codrina; Topala, Florin Ionel; Crǎciunescu, Emanuela Lidia; Cojocariu, Andreea Codruta; Duma, Virgil Florin; Podoleanu, Adrian Gh.

    2016-03-01

    The formation of dental biofilm follows specific mechanisms of initial colonization on the surface, microcolony formation, development of organized three-dimensional community structures, and detachment from the surface. The structure of the plaque biofilm might restrict the penetration of antimicrobial agents, while bacteria on a surface grow slowly and display a novel phenotype; the consequence of the latter is a reduced sensitivity to inhibitors. The aim of this study was to evaluate the morphological characteristics of dental biofilm with different optoelectronic methods. The study was performed on samples from 25 patients aged between 18 and 35 years. The methods used in this study were Spectral Domain Optical Coherence Tomography (SD-OCT) working at 870 nm for in vivo evaluations and Scanning Electron Microscopy (SEM) for validations. For each patient, a sample of dental biofilm was obtained directly from the vestibular surface of the teeth. SD-OCT produced C- and B-scans that were used to generate three-dimensional (3D) reconstructions of the sample. The results were compared with the SEM evaluations. The biofilm network was dramatically destroyed after professional dental cleaning. Noninvasive OCT methods can act as a valuable tool for the 3D characterization of dental biofilms.

  18. Methods for Assessing Mitochondrial Function in Diabetes

    PubMed Central

    Kane, Daniel A.; Lanza, Ian R.; Neufer, P. Darrell

    2013-01-01

    A growing body of research is investigating the potential contribution of mitochondrial function to the etiology of type 2 diabetes. Numerous in vitro, in situ, and in vivo methodologies are available to examine various aspects of mitochondrial function, each requiring an understanding of their principles, advantages, and limitations. This review provides investigators with a critical overview of the strengths, limitations and critical experimental parameters to consider when selecting and conducting studies on mitochondrial function. In vitro (isolated mitochondria) and in situ (permeabilized cells/tissue) approaches provide direct access to the mitochondria, allowing for study of mitochondrial bioenergetics and redox function under defined substrate conditions. Several experimental parameters must be tightly controlled, including assay media, temperature, oxygen concentration, and in the case of permeabilized skeletal muscle, the contractile state of the fibers. Recently developed technology now offers the opportunity to measure oxygen consumption in intact cultured cells. Magnetic resonance spectroscopy provides the most direct way of assessing mitochondrial function in vivo with interpretations based on specific modeling approaches. The continuing rapid evolution of these technologies offers new and exciting opportunities for deciphering the potential role of mitochondrial function in the etiology and treatment of diabetes. PMID:23520284

  19. Regional method to assess offshore slope stability.

    USGS Publications Warehouse

    Lee, H.J.; Edwards, B.D.

    1986-01-01

    The slope stability of some offshore environments can be evaluated by using only conventional acoustic profiling and short-core sampling, followed by laboratory consolidation and strength testing. The test results are synthesized by using normalized-parameter techniques. The normalized data are then used to calculate the critical earthquake acceleration factors or the wave heights needed to initiate failure. These process-related parameters provide a quantitative measure of the relative stability for locations from which short cores were obtained. The method is most applicable to offshore environments of gentle relief and simple subsurface structure and is not considered a substitute for subsequent site-specific analysis. -from ASCE Publications Information

  20. MIMIC Methods for Assessing Differential Item Functioning in Polytomous Items

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Shih, Ching-Lin

    2010-01-01

    Three multiple indicators-multiple causes (MIMIC) methods, namely, the standard MIMIC method (M-ST), the MIMIC method with scale purification (M-SP), and the MIMIC method with a pure anchor (M-PA), were developed to assess differential item functioning (DIF) in polytomous items. In a series of simulations, it appeared that all three methods…

  1. Time Domain Stability Margin Assessment Method

    NASA Technical Reports Server (NTRS)

    Clements, Keith

    2017-01-01

    The baseline stability margins for NASA's Space Launch System (SLS) launch vehicle were generated via the classical approach of linearizing the system equations of motion and determining the gain and phase margins from the resulting frequency domain model. To improve the fidelity of the classical methods, the linear frequency domain approach can be extended by replacing static, memoryless nonlinearities with describing functions. This technique, however, does not address the time varying nature of the dynamics of a launch vehicle in flight. An alternative technique for the evaluation of the stability of the nonlinear launch vehicle dynamics along its trajectory is to incrementally adjust the gain and/or time delay in the time domain simulation until the system exhibits unstable behavior. This technique has the added benefit of providing a direct comparison between the time domain and frequency domain tools in support of simulation validation.
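
    The time-domain technique described, incrementally raising the loop gain (or time delay) in simulation until the response diverges, can be illustrated on a toy feedback system. The sketch below sweeps a gain multiplier on a double integrator with delayed PD feedback and reports the first multiplier that produces a growing response; it is a generic example, not the SLS vehicle models or analysis tools.

      # Toy time-domain gain-margin search: increase a gain multiplier on a
      # delayed feedback loop until the simulated response grows instead of
      # decaying. Generic example only; not the SLS models or flight software.
      import numpy as np

      def diverges(gain_mult, delay=0.1, dt=0.001, t_end=10.0):
          """Double integrator with delayed PD feedback; True if the response grows."""
          steps = int(t_end / dt)
          lag = int(delay / dt)
          x, v = 1.0, 0.0                       # initial attitude error and rate
          hist = [(x, v)] * (lag + 1)           # buffer of past states for the time delay
          xs = np.empty(steps)
          for i in range(steps):
              x_d, v_d = hist[0]                # state from 'lag' steps ago
              a = -gain_mult * (4.0 * x_d + 2.0 * v_d)   # delayed PD acceleration command
              v += a * dt
              x += v * dt
              hist.append((x, v))
              hist.pop(0)                       # keep buffer length fixed
              xs[i] = x
          early = np.max(np.abs(xs[: steps // 4]))
          late = np.max(np.abs(xs[-steps // 4:]))
          return late > early

      # Sweep the gain multiplier upward until the time-domain response diverges.
      for mult in np.arange(1.0, 20.0, 0.5):
          if diverges(mult):
              print(f"time-domain gain margin ~ {mult:.1f}x nominal")
              break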

  2. The liquefaction method for assessing paleoseismicity

    SciTech Connect

    Tuttle, M.P.

    1994-12-01

    Paleoseismicity studies expand our knowledge of seismic activity into the prehistoric period and thereby can improve our understanding of the earthquake potential of various regions. Paleoseismology is proving especially useful in eastern North America, where the recurrence interval of large earthquakes is longer than the historic record of earthquakes. Because surface traces of seismogenic faults have been difficult to identify in eastern North America, most paleoseismicity studies have employed features resulting from liquefaction. The goals of paleoliquefaction studies are to determine the recurrence intervals, magnitudes, and source areas of prehistoric earthquakes. To accomplish these goals, one must be able to identify earthquake-induced liquefaction features, determine their ages, and map their distribution. This report reviews (1) characteristics of earthquake-induced liquefaction features as well as other soft-sediment deformation structures, (2) methods for dating liquefaction features, and (3) relationships between liquefaction and the magnitude and distance of causative earthquakes. Recent studies by the author in Quebec Province, Canada and in the New Madrid seismic zone of the central United States provide the basis for this report. For additional information on the use of liquefaction features in paleoseismology see Amick et al. (1990) and Obermeier et al. (1990 and 1992).

  3. Compounding conservatisms: EPA's health risk assessment methods

    SciTech Connect

    Stackelberg, K. von; Burmaster, D.E. )

    1993-03-01

    Superfund conjures up images of hazardous waste sites, which EPA is spending billions of dollars to remediate. One of the law's most worrisome effects is that it drains enormous economic resources without returning commensurate benefits. In a Sept. 1, 1991, front page article in The New York Times, experts argued that most health dangers at Superfund sites could be eliminated for a fraction of the billions that will be spent cleaning up the 1,200 high-priority sites across the country. Even EPA has suggested that the Superfund program may receive disproportionate resources, compared with other public health programs, such as radon in houses, the diminishing ozone layer and occupational diseases. Public opinion polls over the last decade consistently have mirrored the public's vast fear of hazardous waste sites, a fear as great as that held for nuclear power plants. Fear notwithstanding, the high cost of chosen remedies at given sites may have less to do with public health goals than with the method EPA uses to translate them into acceptable contaminant concentrations in soil, groundwater and other environmental media.

  4. Interlaboratory Validation of the Leaching Environmental Assessment Framework (LEAF) Method 1313 and Method 1316

    EPA Science Inventory

    This document summarizes the results of an interlaboratory study conducted to generate precision estimates for two parallel batch leaching methods which are part of the Leaching Environmental Assessment Framework (LEAF). These methods are: (1) Method 1313: Liquid-Solid Partition...

  5. Influence of expertise on rockfall hazard assessment using empirical methods

    NASA Astrophysics Data System (ADS)

    Delonca, Adeline; Verdel, Thierry; Gunzburger, Yann

    2016-07-01

    To date, many rockfall hazard assessment methods still consider qualitative observations within their analysis. Based on this statement, knowledge and expertise are supposed to be major parameters of rockfall assessment. To test this hypothesis, an experiment was carried out to evaluate the influence of knowledge and expertise on rockfall hazard assessment. Three populations with different levels of expertise were selected: (1) students in geosciences, (2) researchers in geosciences and (3) confirmed experts. These three populations evaluated the rockfall hazard level on the same site, considering two different methods: the Laboratoire des Ponts et Chaussées (LPC) method and a method partly based on the "slope mass rating" (SMR) method. To complement the analysis, each population was also asked to complete an "a priori" assessment of the rockfall hazard without using any method. The LPC method is the most widely used method in France for official hazard mapping. It combines two main indicators: the predisposition to instability and the expected magnitude. Conversely, the SMR-based method was used as an ad hoc quantitative method to investigate the effect of quantification within a method. These procedures were applied on a test site divided into three different sectors. A statistical treatment of the results (descriptive statistical analysis, chi-square test of independence and ANOVA) shows that there is a significant influence of the method used on the rockfall hazard assessment, whatever the sector. However, there is no significant influence of the populations' level of expertise for sectors 2 and 3. For sector 1, there is a significant influence of the level of expertise, explained by the importance of the temporal probability assessment in the rockfall hazard assessment process. The SMR-based method appears highly sensitive to the "site activity" indicator and exhibits an important dispersion in its results. However, the results are more similar

  6. [Statistical prediction methods in violence risk assessment and its application].

    PubMed

    Liu, Yuan-Yuan; Hu, Jun-Mei; Yang, Min; Li, Xiao-Song

    2013-06-01

    How to improve violence risk assessment is an urgent global problem. As a necessary part of risk assessment, statistical methods have remarkable impacts and effects. In this study, prediction methods for violence risk assessment are reviewed from a statistical point of view. The application of logistic regression as an example of a multivariate statistical model, the decision tree model as an example of a data mining technique, and neural network models as an example of artificial intelligence technology are all reviewed. This study provides data intended to contribute to further research on violence risk assessment.
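
    As an illustration of the first class of models reviewed, a minimal logistic-regression risk-prediction sketch on synthetic data is shown below. The predictors, labels and performance figure are invented for illustration; no forensic instrument or real case data is represented.

      # Minimal logistic-regression sketch for a binary risk outcome on synthetic
      # data. Predictors and labels are invented; no real instrument or cases.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(1)
      n = 500
      X = np.column_stack([
          rng.normal(0, 1, n),          # e.g., a standardized clinical-scale score
          rng.integers(0, 2, n),        # e.g., a binary history indicator
          rng.normal(30, 10, n),        # e.g., age
      ])
      logit = 0.8 * X[:, 0] + 1.2 * X[:, 1] - 0.03 * X[:, 2]
      y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
      auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
      print(f"held-out AUC: {auc:.2f}")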

  7. Life is pretty meaningful and/or purposeful?: On conflations, contexts, and consequences.

    PubMed

    Hill, Patrick L; Burrow, Anthony L; Sumner, Rachel; Young, Robin K

    2015-09-01

    Comments on the original article "Life is pretty meaningful," by S. J. Heintzelman and L. A. King (see record 2014-03265-001). Heintzelman and King condense descriptive data from numerous studies to conclude that individuals tend to see life as meaningful, because average scores on the meaning and purpose in life assessments fall above the midpoint. However, in so doing, they make two contentious assumptions. The first is the expectation that scale midpoints actually reflect an average score on that construct. However, one should not interpret this metric to suggest that people generally live meaningful lives without great caution and consideration of the second assumption: the conflation of purpose and meaning in life. In response, the current authors address this second assumption and the need to develop better questions and measures for both meaning and purpose.

  8. Critical evaluation of soil contamination assessment methods for trace metals.

    PubMed

    Desaules, André

    2012-06-01

    Correctly distinguishing between natural and anthropogenic trace metal contents in soils is crucial for assessing soil contamination. A series of assessment methods is critically outlined. All methods rely on assumptions of reference values for natural content. Depending on the adopted reference values, which are based on various statistical and geochemical procedures, there is a considerable range and discrepancy in the assessed soil contamination results, as shown by the five methods applied to three weakly contaminated sites. This is a serious indication of their high methodological specificity and bias. No method with off-site reference values could identify any soil contamination for the investigated trace metals (Pb, Cu, Zn, Cd, Ni), while the specific and sensitive on-site reference methods did so for some sites. Soil profile balances are considered to produce the most plausible site-specific results, provided the numerous assumptions are realistic and the required data reliable. This highlights the dilemma between model and data uncertainty. Data uncertainty, however, has so far been a neglected issue in soil contamination assessment, and model uncertainty depends largely on site-specific, realistic assumptions about pristine natural trace metal contents. Hence, the appropriate assessment of soil contamination is a subtle optimization exercise of model versus data uncertainty and specification versus generalization. There is no general and accurate reference method, and soil contamination assessment is still rather fuzzy, with negative implications for the reliability of subsequent risk assessments.

  9. REGIONAL VULNERABILITY ASSESSMENT OF THE MID-ATLANTIC REGION: EVALUATION OF INTEGRATION METHODS AND ASSESSMENTS RESULTS

    EPA Science Inventory

    This report describes methods for quantitative regional assessment developed by the Regional Vulnerability Assessment (ReVA) program. The goal of ReVA is to develop regional-scale assessments of the magnitude, extent, distribution, and uncertainty of current and anticipated envir...

  10. The professional portfolio: an evidence-based assessment method.

    PubMed

    Byrne, Michelle; Schroeter, Kathryn; Carter, Shannon; Mower, Julie

    2009-12-01

    Competency assessment is critical for a myriad of disciplines, including medicine, law, education, and nursing. Many nurse managers and educators are responsible for nursing competency assessment, and assessment results are often used for annual reviews, promotions, and satisfying accrediting agencies' requirements. Credentialing bodies continually seek methods to measure and document the continuing competence of licensees or certificants. Many methods and frameworks for continued competency assessment exist. The portfolio process is one method to validate personal and professional accomplishments in an interactive, multidimensional manner. This article illustrates how portfolios can be used to assess competence. One specialty nursing certification board's process of creating an evidence-based portfolio for recertification or reactivation of a credential is used as an example. The theoretical background, development process, implementation, and future implications may serve as a template for other organizations in developing their own portfolio models.

  11. Meaningful Movement for Children: Stay True to Their Natures

    ERIC Educational Resources Information Center

    Baumgarten, Sam

    2006-01-01

    This article discusses meaningful movement for children. The author examines "roots" in the "physical education garden" which, when thoroughly examined, reveal tried-and-true insights about children and their natures. By revisiting these natures or characteristics, one gains a clearer picture of what is meaningful to…

  12. Career Development for Meaningful Life Work. ERIC Digest.

    ERIC Educational Resources Information Center

    Imel, Susan

    Achieving meaningful life work is a process that involves aligning one's work with one's true essence or core self. It is an ongoing process that involves self-reflection to discover the deep passions within and then exploration of how to bring those passions or interests to bear in meaningful ways at work. In response to the need to address the…

  13. Meaningful Literacy: Writing Poetry in the Language Classroom

    ERIC Educational Resources Information Center

    Hanauer, David I.

    2012-01-01

    This paper develops the concept of meaningful literacy and offers a classroom methodology--poetry writing--that manifests this approach to ESL/EFL literacy instruction. The paper is divided into three sections. The first deals with the concept of meaningful literacy learning in second and foreign language pedagogy; the second summarizes empirical…

  14. Comparison of Cognitive Assessment Methods With Heterosocially Anxious College Women.

    ERIC Educational Resources Information Center

    Myszka, Michael T.; And Others

    1986-01-01

    Investigated comparability of self-statements generated by different cognitive assessment methods; effect of an assessment delay on cognitive phenomena; and interrelationships among different cognitive variables. Subjects were heterosocially anxious women (N=64) who engaged in a conversation with a male confederate. Self-statements generated by…

  15. Models and Methods for Assessing Refugee Mental Health Needs.

    ERIC Educational Resources Information Center

    Deinard, Amos S.; And Others

    This background paper on refugee needs assessment discusses the assumptions, goals, objectives, strategies, models, and methods that the state refugee programs can consider in designing their strategies for assessing the mental health needs of refugees. It begins with a set of background assumptions about the ethnic profile of recent refugee…

  16. Uncertainty in environmental health impact assessment: quantitative methods and perspectives.

    PubMed

    Mesa-Frias, Marco; Chalabi, Zaid; Vanni, Tazio; Foss, Anna M

    2013-01-01

    Environmental health impact assessment models are subject to considerable uncertainty due to the complex associations between environmental exposures and health. Quantifying the impact of uncertainty is important if the models are used to support health policy decisions. We conducted a systematic review to identify and appraise current methods used to quantify the uncertainty in environmental health impact assessment. In the 19 studies meeting the inclusion criteria, several methods were identified. These were grouped into random sampling methods, second-order probability methods, Bayesian methods, fuzzy sets, and deterministic sensitivity analysis methods. All 19 studies addressed the uncertainty in the parameter values, but only 5 of the studies also addressed the uncertainty in the structure of the models. None of the articles reviewed considered conceptual sources of uncertainty associated with the framing assumptions or the conceptualisation of the model. Future research should attempt to broaden the way uncertainty is taken into account in environmental health impact assessments.
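
    A minimal sketch of the random-sampling (Monte Carlo) approach listed above: parameter uncertainty is propagated through a simple log-linear exposure-response model and summarized as a 95% uncertainty interval. All distributions, parameter values and the model form are illustrative assumptions.

```python
# Minimal Monte Carlo sketch of parameter uncertainty propagation for a
# hypothetical exposure-response health impact model; all values are illustrative.
import numpy as np

rng = np.random.default_rng(42)
n_draws = 10_000

exposure = rng.lognormal(mean=np.log(10.0), sigma=0.3, size=n_draws)   # ug/m3
beta = rng.normal(loc=0.006, scale=0.002, size=n_draws)                # per ug/m3
baseline_rate = rng.normal(loc=0.01, scale=0.001, size=n_draws)        # cases/person/yr
population = 100_000

# Attributable cases per year under a log-linear relative-risk model.
relative_risk = np.exp(beta * exposure)
attributable = baseline_rate * population * (1 - 1 / relative_risk)

lo, med, hi = np.percentile(attributable, [2.5, 50, 97.5])
print(f"attributable cases/yr: median {med:.0f} (95% interval {lo:.0f}-{hi:.0f})")
```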

  17. Integrating rangeland and pastureland assessment methods into a national grazingland assessment approach

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Grazingland resource allocation and decision making at the national scale need to be based on comparable metrics. However, in the USA, rangelands and pasturelands have traditionally been assessed using different methods and indicators. These differences in assessment methods limit the ability to con...

  18. Comparison of selected multi-criteria assessment methods

    NASA Astrophysics Data System (ADS)

    Krzemiński, Michał

    2016-06-01

    The article presents the results of earlier work, carried out jointly with the author, that focused on assessing how the choice of multi-criteria evaluation method and of the method used to normalize the input matrix affects the final ranking of the possible variants. These variants were also assessed using fuzzy logic. The aim of this article is to compare the results obtained.

  19. Methods of Postural Assessment Used for Sports Persons

    PubMed Central

    Singla, Deepika

    2014-01-01

    Postural defects have become very common nowadays, not only in the general population but also in sports persons. Various methods can be used to assess these postural defects, and they have evolved over many years. This paper is the first of its kind to summarize the methods of postural assessment that have been used, and can be used, to evaluate postural abnormalities in sports persons, such as visual observation, plumbline, goniometry, photographic, radiographic and photogrammetric methods, the flexiruler and electromagnetic tracking devices. We recommend that more postural evaluation studies be carried out in the future based on the photogrammetric method. PMID:24959470

  20. [Establishment of Assessment Method for Air Bacteria and Fungi Contamination].

    PubMed

    Zhang, Hua-ling; Yao, Da-jun; Zhang, Yu; Fang, Zi-liang

    2016-03-15

    To address existing problems in the assessment of airborne bacteria and fungi contamination, indoor and outdoor field concentrations of airborne bacteria and fungi measured by the impact and settlement methods were collected from the existing literature and analyzed. The chi-square goodness-of-fit test was then used to examine whether these concentration data followed a normal distribution at a significance level of α = 0.05. Combining the 3σ principle of the normal distribution with current assessment standards, suggested ranges of airborne microbial concentrations were determined. The results can provide a reference for developing assessment standards for airborne bacteria and fungi contamination in the future.
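
    The procedure described in this abstract can be outlined in code: fit a normal distribution to hypothetical concentration data, check the fit with a chi-square goodness-of-fit statistic at α = 0.05, and report a 3σ range. The data, binning and units below are invented for illustration.

```python
# Sketch on hypothetical settlement-method counts (CFU): fit a normal
# distribution, test goodness of fit with a chi-square statistic at alpha = 0.05,
# then report the mean +/- 3 sigma range.
import numpy as np
from scipy.stats import norm, chi2

rng = np.random.default_rng(1)
counts = rng.normal(loc=300, scale=60, size=200)        # hypothetical CFU data

mu, sigma = counts.mean(), counts.std(ddof=1)

# Bin the data and compare observed vs expected frequencies under N(mu, sigma).
edges = np.linspace(counts.min(), counts.max(), 9)       # 8 bins
observed, _ = np.histogram(counts, bins=edges)
expected = len(counts) * np.diff(norm.cdf(edges, mu, sigma))

chi2_stat = np.sum((observed - expected) ** 2 / expected)
dof = len(observed) - 1 - 2                              # bins - 1 - fitted params
critical = chi2.ppf(0.95, dof)
print(f"chi-square = {chi2_stat:.2f}, critical (alpha = 0.05) = {critical:.2f}")

# 3-sigma range used to suggest an assessment limit (floored at zero).
print(f"suggested range: {max(0.0, mu - 3 * sigma):.0f} to {mu + 3 * sigma:.0f} CFU")
```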

  1. Recent methods for assessing osteoporosis and fracture risk.

    PubMed

    Imai, Kazuhiro

    2014-01-01

    In the management and treatment of osteoporosis, the target is to assess fracture risk and the end-point is to prevent fractures. Traditionally, measurement of bone mineral density (BMD) by dual energy X-ray absorptiometry (DXA) has been the standard method for diagnosing osteoporosis, in addition to assessing fracture risk and therapeutic effects. Quantitative computed tomography (QCT) can quantify volumetric BMD, and cancellous bone can be measured independently of surrounding cortical bone and aortic calcification. Hip structure analysis (HSA) is a method using the DXA scan image and provides useful data for assessing hip fracture risk. Recently, new tools to assess osteoporosis and fracture risk have been developed. One of the recent advances has been the development of the FRAX (Fracture Risk Assessment Tool), which is helpful in conveying fracture risk to patients and providing treatment guidance to clinicians. Another advance is the finite element (FE) method based on data from computed tomography (CT), which is useful for assessing bone strength, fracture risk, and therapeutic effects on osteoporosis. In selecting the most appropriate drug for osteoporosis treatment, assessment by bone metabolic markers is an important factor. In this review, recent patents for assessing osteoporosis and fracture risk are discussed.

  2. Teaching Physics in a Physiologically Meaningful Manner

    ERIC Educational Resources Information Center

    Plomer, Michael; Jessen, Karsten; Rangelov, Georgi; Meyer, Michael

    2010-01-01

    The learning outcome of a physics laboratory course for medical students was examined in an interdisciplinary field study and discussed for electrical physiology ("Propagation of Excitation and Nerve Cells"). At the Ludwig-Maximilians-University of Munich (LMU), about 300 medical students at a time were assessed in two successive…

  3. Methods to assess the reliability of the interRAI Acute Care: a framework to guide clinimetric testing. Part II.

    PubMed

    Wellens, Nathalie I H; Milisen, Koen; Flamaing, Johan; Moons, Philip

    2012-08-01

    The interRAI Acute Care is a comprehensive geriatric assessment tool that provides a holistic picture of complex and frail hospitalized older persons. It is designed to support holistic care planning and to transfer patient data across settings. Its usefulness in clinical decision making depends on the extent to which clinicians can rely on the patient data as accurate and meaningful indicators of patients' functioning. Its multidimensional character, however, poses challenges for clinimetric testing, as some traditional analysis techniques cannot be applied unconditionally. The objective was to present an overview of methods for examining the reliability of the interRAI Acute Care. For each line of evidence, examples of hypotheses and research questions are listed.

  4. Exploring valid and reliable assessment methods for care management education.

    PubMed

    Gennissen, Lokke; Stammen, Lorette; Bueno-de-Mesquita, Jolien; Wieringa, Sietse; Busari, Jamiu

    2016-07-04

    Purpose: It is assumed that the use of valid and reliable assessment methods can facilitate the development of medical residents' management and leadership competencies. To justify this assertion, the perceptions of an expert panel of health care leaders were explored regarding the assessment methods used for evaluating care management (CM) development in Dutch residency programs. This paper aims to investigate how assessors and trainees value these methods and to examine any inherent benefits or shortcomings when they are applied in practice. Design/methodology/approach: A Delphi survey was conducted among members of the platform for medical leadership in The Netherlands. This panel of experts was made up of clinical educators, practitioners and residents interested in CM education. Findings: Of the respondents, 40 (55.6 per cent) and 31 (43 per cent) participated in the first and second rounds of the Delphi survey, respectively. The respondents agreed that the assessment methods currently being used to measure residents' CM competencies were weak, though feasible for use in many residency programs. Multi-source feedback (MSF, 92.1 per cent), portfolio/e-portfolio (86.8 per cent) and knowledge testing (76.3 per cent) were identified as the most commonly known assessment methods, with familiarity rates exceeding 75 per cent. Practical implications: The findings suggested that an "assessment framework" comprising MSF, portfolios, individual process improvement projects or self-reflections, and observations in clinical practice should be used to measure CM competencies in residents. Originality/value: This study reaffirms the need for objective methods to assess CM skills in post-graduate medical education, as no single assessment method stood out as the best instrument.

  5. AN APPROACH TO METHODS DEVELOPMENT FOR HUMAN EXPOSURE ASSESSMENT STUDIES

    EPA Science Inventory

    Human exposure assessment studies require methods that are rapid, cost-effective and have a high sample through-put. The development of analytical methods for exposure studies should be based on specific information for individual studies. Human exposure studies suggest that di...

  6. Methods for Assessing Honeycomb Sandwich Panel Wrinkling Failures

    NASA Technical Reports Server (NTRS)

    Zalewski, Bart F.; Dial, William B.; Bednarcyk, Brett A.

    2012-01-01

    Efficient closed-form methods for predicting the facesheet wrinkling failure mode in sandwich panels are assessed. Comparisons were made with finite element model predictions for facesheet wrinkling, and a validated closed-form method was implemented in the HyperSizer structure sizing software.

  7. A Comparison of Treatment Integrity Assessment Methods for Behavioral Intervention

    ERIC Educational Resources Information Center

    Koh, Seong A.

    2010-01-01

    The purpose of this study was to examine the similarity of outcomes from three different treatment integrity (TI) methods, and to identify the method which best corresponded to the assessment of a child's behavior. Six raters were recruited through individual contact via snowball sampling. A modified intervention component list and 19 video clips…

  8. Assessment methods for solid waste management: A literature review.

    PubMed

    Allesch, Astrid; Brunner, Paul H

    2014-06-01

    Assessment methods are common tools to support decisions regarding waste management. The objective of this review article is to provide guidance for the selection of appropriate evaluation methods. For this purpose, frequently used assessment methods are reviewed, categorised, and summarised. In total, 151 studies have been considered in view of their goals, methodologies, systems investigated, and results regarding economic, environmental, and social issues. A goal shared by all studies is the support of stakeholders. Most studies are based on life cycle assessments, multi-criteria-decision-making, cost-benefit analysis, risk assessments, and benchmarking. Approximately 40% of the reviewed articles are life cycle assessment-based; and more than 50% apply scenario analysis to identify the best waste management options. Most studies focus on municipal solid waste and consider specific environmental loadings. Economic aspects are considered by approximately 50% of the studies, and only a small number evaluate social aspects. The choice of system elements and boundaries varies significantly among the studies; thus, assessment results are sometimes contradictory. Based on the results of this review, we recommend the following considerations when assessing waste management systems: (i) a mass balance approach based on a rigid input-output analysis of the entire system, (ii) a goal-oriented evaluation of the results of the mass balance, which takes into account the intended waste management objectives; and (iii) a transparent and reproducible presentation of the methodology, data, and results.
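
    Recommendation (i) above amounts to checking that the system's mass balance closes before any evaluation is attempted. A minimal sketch of such a closure check follows; the flows and values are hypothetical, not drawn from the reviewed studies.

```python
# Minimal sketch of a mass-balance (input-output) check for a waste management
# system; flows in tonnes/year are hypothetical.
inputs = {"municipal solid waste": 120_000, "imported waste": 8_000}
outputs = {"recyclables": 35_000, "compost": 15_000, "bottom ash": 22_000,
           "flue gas losses": 48_000, "landfilled residues": 7_500}
stock_change = 0  # change in material stored within the system

total_in = sum(inputs.values())
total_out = sum(outputs.values()) + stock_change
imbalance = total_in - total_out

print(f"inputs  = {total_in} t/yr")
print(f"outputs = {total_out} t/yr")
print(f"imbalance = {imbalance} t/yr ({100 * imbalance / total_in:.1f}% of inputs)")
# A large imbalance signals missing flows or inconsistent data before any
# goal-oriented evaluation of the results is attempted.
```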

  9. New mobile methods for dietary assessment: review of image-assisted and image-based dietary assessment methods.

    PubMed

    Boushey, C J; Spoden, M; Zhu, F M; Delp, E J; Kerr, D A

    2016-12-12

    For nutrition practitioners and researchers, assessing dietary intake of children and adults with a high level of accuracy continues to be a challenge. Developments in mobile technologies have created a role for images in the assessment of dietary intake. The objective of this review was to examine peer-reviewed published papers covering development, evaluation and/or validation of image-assisted or image-based dietary assessment methods from December 2013 to January 2016. Images taken with handheld devices or wearable cameras have been used to assist traditional dietary assessment methods for portion size estimations made by dietitians (image-assisted methods). Image-assisted approaches can supplement either dietary records or 24-h dietary recalls. In recent years, image-based approaches integrating application technology for mobile devices have been developed (image-based methods). Image-based approaches aim at capturing all eating occasions by images as the primary record of dietary intake, and therefore follow the methodology of food records. The present paper reviews several image-assisted and image-based methods, their benefits and challenges; followed by details on an image-based mobile food record. Mobile technology offers a wide range of feasible options for dietary assessment, which are easier to incorporate into daily routines. The presented studies illustrate that image-assisted methods can improve the accuracy of conventional dietary assessment methods by adding eating occasion detail via pictures captured by an individual (dynamic images). All of the studies reduced underreporting with the help of images compared with results with traditional assessment methods. Studies with larger sample sizes are needed to better delineate attributes with regards to age of user, degree of error and cost.

  10. Deriving meaningful climate-effects data from social media

    NASA Astrophysics Data System (ADS)

    Fuka, M. Z.; Fuka, D. R.

    2011-12-01

    This paper presents our research on extracting meaningful climate indicator data from unsolicited observations ("tweets") made by Twitter users regarding their physical surroundings and events occurring around them. Our goal is to establish whether the existing understanding of climate indicator data collected by more traditional means could be usefully supplemented by information derived from the potentially rich but also statistically diffuse data resource represented by social media. To this end, we've initiated an ongoing effort to collect and analyze Twitter observations made on a wide variety of climate-related phenological, biological, epidemiological and meteorological phenomena. We report on our acquisition methodology and discuss in particular our rationale for selecting keywords, phrases and filters for our searches. The iterative process of assembling an inventory of hundreds of climate-related search terms has in and of itself yielded interesting and sometimes surprising insights on what is and isn't noticed and commented on via social media with respect to climate indicator phenomenology. We report some of the highlights of those analyses along with significant findings from the data acquisition to date. In conclusion, we discuss our preliminary assessment of the approach, how it can be generalized and extended for social media other than Twitter, and how the resulting data could be used to serve climate science objectives.

  11. Unit Costs Provide Basis for Meaningful Evaluation of Efficiency of TV Courses.

    ERIC Educational Resources Information Center

    Jones, Gardner; And Others

    1969-01-01

    Efficient use of television for teaching cannot be achieved without meaningful cost comparisons with conventional classroom methods. Considerable effort has been spent at the University of Michigan in developing a unit cost basis for televised filmed lectures to include not only salaries, but administrative costs, supplies, amortization of…

  12. Write Another Poem about Marigold: Meaningful Writing as a Process of Change.

    ERIC Educational Resources Information Center

    Teichmann, Sandra Gail

    1995-01-01

    Considers a process approach toward the goal of meaningful writing which may aid in positive personal change. Outlines recent criticism of contemporary poetry; argues against tradition and practice of craft in writing poetry. Proposes a means of writing centered on a method of inquiry involving elements of self-involvement, curiosity, and risk to…

  13. Ideas in Practice - Making Motion More Meaningful

    ERIC Educational Resources Information Center

    Cutchins, Malcolm A.

    1971-01-01

    Three methods of studying motion are described. A wind tunnel is utilized in demonstrating flutter. Computer graphics with an oscilloscope are used to investigate the natural modes of vibration and to track the simulated motion of missiles. (TS)

  14. A clinically meaningful theory of outcome measures in rehabilitation medicine.

    PubMed

    Massof, Robert W

    2010-01-01

    Comparative effectiveness research in rehabilitation medicine requires the development and validation of clinically meaningful and scientifically rigorous measurements of patient states and theories that explain and predict outcomes of intervention. Patient traits are latent (unobservable) variables that can be measured only by inference from observations of surrogate manifest (observable) variables. In the behavioral sciences, latent variables are analogous to intensive physical variables such as temperature and manifest variables are analogous to extensive physical variables such as distance. Although only one variable at a time can be measured, the variable can have a multidimensional structure that must be understood in order to explain disagreements among different measures of the same variable. The use of Rasch theory to measure latent trait variables can be illustrated with a balance scale metaphor that has randomly added variability in the weights of the objects being measured. Knowledge of the distribution of the randomly added variability provides the theoretical structure for estimating measures from ordinal observation scores (e.g., performance measures or rating scales) using statistical inference. In rehabilitation medicine, the latent variable of primary interest is the patient's functional ability. Functional ability can be estimated from observations of surrogate performance measures (e.g., speed and accuracy) or self-report of the difficulty the patient experiences performing specific activities. A theoretical framework borrowed from project management, called the Activity Breakdown Structure (ABS), guides the choice of activities for assessment, based on the patient's value judgments, to make the observations clinically meaningful. In the case of low vision, the functional ability measure estimated from Rasch analysis of activity difficulty ratings was discovered to be a two-dimensional variable. The two visual function dimensions are independent
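
    As a simplified illustration of the Rasch measurement idea discussed here, the sketch below uses the dichotomous Rasch model (rather than a rating-scale model) to compute success probabilities and an expected score for a person of given functional ability; the difficulty and ability values are hypothetical.

```python
# Minimal sketch of the dichotomous Rasch model; values are illustrative only.
import math

def rasch_probability(theta: float, b: float) -> float:
    """P(success) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Hypothetical activity difficulties (logits), from easy to hard tasks.
difficulties = [-1.5, -0.5, 0.0, 0.8, 1.6]

for theta in (-1.0, 0.0, 1.0):
    probs = [rasch_probability(theta, b) for b in difficulties]
    expected_score = sum(probs)
    print(f"theta = {theta:+.1f}: expected score {expected_score:.2f} of {len(difficulties)}")
```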

  15. Herbal hepatotoxicity: Challenges and pitfalls of causality assessment methods

    PubMed Central

    Teschke, Rolf; Frenzel, Christian; Schulze, Johannes; Eickhoff, Axel

    2013-01-01

    The diagnosis of herbal hepatotoxicity or herb-induced liver injury (HILI) represents a particular clinical and regulatory challenge with major pitfalls for the causality evaluation. On the day HILI is suspected in a patient, physicians should start assessing the quality of the herbal product used, optimizing the clinical data for completeness, and applying the Council for International Organizations of Medical Sciences (CIOMS) scale for initial causality assessment. This scale is structured, quantitative, liver specific, and validated for hepatotoxicity cases. Its items provide individual scores, which together yield causality levels of highly probable, probable, possible, unlikely, and excluded. After being completed with additional information, including raw data, the scale with all its items should be reported to regulatory agencies and manufacturers for further evaluation. The CIOMS scale is preferred as the tool for assessing causality in hepatotoxicity cases, compared to numerous other causality assessment methods, which are inferior on various grounds. Among these disputed methods are the Maria and Victorino scale, an insufficiently qualified, shortened version of the CIOMS scale, as well as various liver-unspecific methods such as the ad hoc causality approach, the Naranjo scale, the World Health Organization (WHO) method, and the Karch and Lasagna method. An expert panel is required for the Drug Induced Liver Injury Network method, the WHO method, and other approaches based on expert opinion, which provide retrospective analyses with a long delay and thereby prevent a timely assessment of the illness in question by the physician. In conclusion, HILI causality assessment is challenging and is best achieved by the liver-specific CIOMS scale, avoiding pitfalls commonly observed with other approaches. PMID:23704820

  16. Environmental Methods Review: Retooling Impact Assessment for the New Century

    DTIC Science & Technology

    1998-03-01

    Fragment of the report's table of contents and review text: "Assessment: Wanted-Dead or Alive!" [A. Thomas Roper and Alan L. Porter]; under PROCESSES, "Methods for EIA: Selecting a Model and Approach" [Ron D. Webster] and "Emerging Issues" [Cory H. Wilkinson]; under MODELS IN ENVIRONMENTAL IMPACT ASSESSMENT, "Selecting Computer Models and Input Parameters for Analysis of...". The accompanying text notes that Meier raises data and modeling issues with wide implications in EA and IA, and that Brown's "environmental overview" and "decision-scoping" offer exciting

  17. Testing of Raman spectroscopy method for assessment of skin implants

    NASA Astrophysics Data System (ADS)

    Timchenko, E. V.; Timchenko, P. E.; Volova, L. T.; Pershutkina, S. V.; Shalkovskaya, P. Y.

    2016-11-01

    Results of testing the Raman spectroscopy (RS) method for the assessment of skin implants are presented. Samples of rat skin material were used as the objects of study. The main spectral differences between implants prepared with different types of processing appear at the wavenumbers 1062 cm-1, 1645 cm-1, 1553 cm-1, 851 cm-1, 863 cm-1, 814 cm-1 and 1410 cm-1. Optical coefficients for the assessment of skin implants were introduced. The research results are confirmed by morphological analysis.

  18. Safety assessment and detection methods of genetically modified organisms.

    PubMed

    Xu, Rong; Zheng, Zhe; Jiao, Guanglian

    2014-01-01

    Genetically modified organisms (GMOs) are gaining importance in agriculture as well as in the production of food and feed. Along with the development of GMOs, health and food safety concerns have been raised. These concerns make it necessary to set up a strict system for the food safety assessment of GMOs. This review discusses the food safety assessment of GMOs, the current development status of safe and precise transgenic technologies, and GMO detection. Recent patents related to GMOs and their detection methods are also reviewed. This review provides an elementary introduction to how GMOs are assessed and detected.

  19. A mixed methods assessment of coping with pediatric cancer

    PubMed Central

    Alderfer, Melissa A.; Deatrick, Janet A.; Marsac, Meghan L.

    2014-01-01

    The purpose of this study was to describe child coping and parent coping assistance with cancer-related stressors during treatment. Fifteen children (aged 6-12) with cancer and their parents (N = 17) completed semi-structured interviews and self-report measures to assess coping and coping assistance. Results suggest families utilized a broad array of approach and avoidance strategies to manage cancer and its treatment. Quantitative and qualitative assessments provided complementary and unique contributions to understanding coping among children with cancer and their parents. Using a mixed methods approach to assess coping provides a richer understanding of families’ experiences, which can better inform clinical practice. PMID:24428250

  20. Portfolios: An Alternative Method of Student and Program Assessment

    PubMed Central

    Hannam, Susan E.

    1995-01-01

    The use of performance-based evaluation and alternative assessment techniques has become essential for curriculum programs seeking Commission of Accreditation of Allied Health Education Programs (CAAHEP) accreditation. In athletic training education, few assessment models exist to assess student performance over the entire course of their educational program. This article describes a model of assessment-a student athletic training portfolio of “best works.” The portfolio can serve as a method to assess student development and to assess program effectiveness. The goals of the program include purposes specific to the five NATA performance domains. In addition, four types of portfolio evidence are described: artifacts, attestations, productions, and reproductions. Quality assignments and projects completed by students as they progress through a six-semester program are identified relative to the type of evidence and the domain(s) they represent. The portfolio assists with student development, provides feedback for curriculum planning, allows for student/faculty collaboration and “coaching” of the student, and assists with job searching. This information will serve as a useful model for those athletic training programs looking for an alternative method of assessing student and program outcomes. PMID:16558359

  1. Portfolios: an alternative method of student and program assessment.

    PubMed

    Hannam, S E

    1995-10-01

    The use of performance-based evaluation and alternative assessment techniques has become essential for curriculum programs seeking Commission of Accreditation of Allied Health Education Programs (CAAHEP) accreditation. In athletic training education, few assessment models exist to assess student performance over the entire course of their educational program. This article describes a model of assessment-a student athletic training portfolio of "best works." The portfolio can serve as a method to assess student development and to assess program effectiveness. The goals of the program include purposes specific to the five NATA performance domains. In addition, four types of portfolio evidence are described: artifacts, attestations, productions, and reproductions. Quality assignments and projects completed by students as they progress through a six-semester program are identified relative to the type of evidence and the domain(s) they represent. The portfolio assists with student development, provides feedback for curriculum planning, allows for student/faculty collaboration and "coaching" of the student, and assists with job searching. This information will serve as a useful model for those athletic training programs looking for an alternative method of assessing student and program outcomes.

  2. The Teaching of Poetry as a Meaningful Genre

    ERIC Educational Resources Information Center

    Miller, Roy

    1973-01-01

    Suggests that poetry can be made meaningful and timely for students who read it in terms of such universal themes as the Seven Deadly Sins, the Four Cardinal Virtues, and the Theological Virtues. (RB)

  3. Methods for assessment of trunk stabilization, a systematic review.

    PubMed

    Maaswinkel, E; Griffioen, M; Perez, R S G M; van Dieën, J H

    2016-02-01

    Trunk stabilization is achieved differently in patients with low back pain than in healthy controls. Many methods exist to assess trunk stabilization, but not all measure the contributions of intrinsic stiffness and reflexes simultaneously. This may threaten the quality and validity of a study and might lead to misinterpretation of the results. The aim of this study was to provide a critical review of previously published methods for studying trunk stabilization in relation to low back pain (LBP). We primarily aimed to assess their construct validity, to which end we defined a theoretical framework, operationalized in a set of methodological criteria, that allows the contributions of intrinsic stiffness and reflexes to be identified simultaneously. In addition, the clinimetric properties of the methods were evaluated. A total of 133 articles were included, from which four main categories of methods were defined: upper limb (un)loading, moving platform, unloading, and loading. Fifty of the 133 selected articles complied with all the criteria of the theoretical framework, but only four articles provided information about the reliability and/or measurement error of methods to assess trunk stabilization, with test-retest reliability ranging from poor (ICC 0) to moderate (ICC 0.72). When aiming to assess trunk stabilization with system identification, we propose a perturbation method in which the trunk is studied in isolation and the perturbation is unpredictable, force controlled, directly applied to the upper body, completely known, and results in small fluctuations around the working point.

  4. Assessing subjective workload assessment - A comparison of SWAT and the NASA-bipolar methods. [Subjective Workload Assessment Technique

    NASA Technical Reports Server (NTRS)

    Vidulich, M. A.; Tsang, P. S.

    1985-01-01

    The Subjective Workload Assessment Technique (SWAT) and the NASA weighted-bipolar method used for evaluating subjective workload assessment are compared. The application of these methods to the rating of single- and dual-task trials of tracking and spatial transformation is described. The methods used to collect the ratings for the SWAT and bipolar technique are examined. Analysis of the transformation-tracking data reveal that the two assessment techniques produce similar results and both measure the differences in task difficulty. The positive and negative characteristics of each technique are analyzed.
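
    The weighting step of a bipolar-style subjective workload rating can be sketched as a weighted average of subscale ratings, with weights taken from pairwise comparisons of the dimensions. The dimension names, ratings and weights below are illustrative assumptions, not values or procedures from the study.

```python
# Hedged sketch of a weighted-rating combination: subscale ratings are combined
# using weights derived from pairwise comparisons of the dimensions.
ratings = {  # 0-100 ratings for one hypothetical dual-task trial
    "mental demand": 70, "time pressure": 55, "effort": 65,
    "performance": 40, "frustration": 30,
}
weights = {  # times each dimension was chosen in pairwise comparisons
    "mental demand": 4, "time pressure": 2, "effort": 3,
    "performance": 1, "frustration": 0,
}

overall = sum(ratings[d] * weights[d] for d in ratings) / sum(weights.values())
print(f"overall weighted workload = {overall:.1f}")
```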

  5. Assessing and evaluating multidisciplinary translational teams: a mixed methods approach.

    PubMed

    Wooten, Kevin C; Rose, Robert M; Ostir, Glenn V; Calhoun, William J; Ameredes, Bill T; Brasier, Allan R

    2014-03-01

    A case report illustrates how multidisciplinary translational teams can be assessed using outcome, process, and developmental types of evaluation using a mixed-methods approach. Types of evaluation appropriate for teams are considered in relation to relevant research questions and assessment methods. Logic models are applied to scientific projects and team development to inform choices between methods within a mixed-methods design. Use of an expert panel is reviewed, culminating in consensus ratings of 11 multidisciplinary teams and a final evaluation within a team-type taxonomy. Based on team maturation and scientific progress, teams were designated as (a) early in development, (b) traditional, (c) process focused, or (d) exemplary. Lessons learned from data reduction, use of mixed methods, and use of expert panels are explored.

  6. Assessing Regional Emissions Reductions from Travel Efficiency: Applying the Travel Efficiency Assessment Method

    EPA Pesticide Factsheets

    This presentation from the 2016 TRB Summer Conference on Transportation Planning and Air Quality summarizes the application of the Travel Efficiency Assessment Method (TEAM) which analyzed selected transportation emission reduction strategies in three case

  7. Assessment of pancreatic β-cell function: review of methods and clinical applications.

    PubMed

    Cersosimo, Eugenio; Solis-Herrera, Carolina; Trautmann, Michael E; Malloy, Jaret; Triplitt, Curtis L

    2014-01-01

    Type 2 diabetes mellitus (T2DM) is characterized by a progressive failure of pancreatic β-cell function (BCF) with insulin resistance. Once insulin over-secretion can no longer compensate for the degree of insulin resistance, hyperglycemia becomes clinically significant and deterioration of residual β-cell reserve accelerates. This pathophysiology has important therapeutic implications. Ideally, therapy should address the underlying pathology and should be started early along the spectrum of decreasing glucose tolerance in order to prevent or slow β-cell failure and reverse insulin resistance. The development of an optimal treatment strategy for each patient requires accurate diagnostic tools for evaluating the underlying state of glucose tolerance. This review focuses on the most widely used methods for measuring BCF within the context of insulin resistance and includes examples of their use in prediabetes and T2DM, with an emphasis on the most recent therapeutic options (dipeptidyl peptidase-4 inhibitors and glucagon-like peptide-1 receptor agonists). Methods of BCF measurement include the homeostasis model assessment (HOMA); oral glucose tolerance tests, intravenous glucose tolerance tests (IVGTT), and meal tolerance tests; and the hyperglycemic clamp procedure. To provide a meaningful evaluation of BCF, it is necessary to interpret all observations within the context of insulin resistance. Therefore, this review also discusses methods utilized to quantitate insulin-dependent glucose metabolism, such as the IVGTT and the euglycemic-hyperinsulinemic clamp procedures. In addition, an example is presented of a mathematical modeling approach that can use data from BCF measurements to develop a better understanding of BCF behavior and the overall status of glucose tolerance.
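
    For reference, a hedged sketch of the widely cited HOMA1 formulas follows (fasting glucose in mmol/L, fasting insulin in µU/mL); the example values are arbitrary, and the original HOMA publications should be consulted before any clinical or research use.

```python
# Hedged sketch of the commonly cited HOMA1 formulas; example values are arbitrary.
def homa_ir(glucose_mmol_l: float, insulin_uu_ml: float) -> float:
    """HOMA insulin resistance index."""
    return glucose_mmol_l * insulin_uu_ml / 22.5

def homa_b(glucose_mmol_l: float, insulin_uu_ml: float) -> float:
    """HOMA beta-cell function (%)."""
    return 20.0 * insulin_uu_ml / (glucose_mmol_l - 3.5)

# Example: fasting glucose 6.1 mmol/L, fasting insulin 12 uU/mL.
print(f"HOMA-IR = {homa_ir(6.1, 12):.2f}")
print(f"HOMA-B  = {homa_b(6.1, 12):.0f}%")
```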

  8. Assessment of Pancreatic β-Cell Function: Review of Methods and Clinical Applications

    PubMed Central

    Cersosimo, Eugenio; Solis-Herrera, Carolina; Trautmann, Michael E.; Malloy, Jaret; Triplitt, Curtis L.

    2014-01-01

    Type 2 diabetes mellitus (T2DM) is characterized by a progressive failure of pancreatic β-cell function (BCF) with insulin resistance. Once insulin over-secretion can no longer compensate for the degree of insulin resistance, hyperglycemia becomes clinically significant and deterioration of residual β-cell reserve accelerates. This pathophysiology has important therapeutic implications. Ideally, therapy should address the underlying pathology and should be started early along the spectrum of decreasing glucose tolerance in order to prevent or slow β-cell failure and reverse insulin resistance. The development of an optimal treatment strategy for each patient requires accurate diagnostic tools for evaluating the underlying state of glucose tolerance. This review focuses on the most widely used methods for measuring BCF within the context of insulin resistance and includes examples of their use in prediabetes and T2DM, with an emphasis on the most recent therapeutic options (dipeptidyl peptidase-4 inhibitors and glucagon-like peptide-1 receptor agonists). Methods of BCF measurement include the homeostasis model assessment (HOMA); oral glucose tolerance tests, intravenous glucose tolerance tests (IVGTT), and meal tolerance tests; and the hyperglycemic clamp procedure. To provide a meaningful evaluation of BCF, it is necessary to interpret all observations within the context of insulin resistance. Therefore, this review also discusses methods utilized to quantitate insulin-dependent glucose metabolism, such as the IVGTT and the euglycemic-hyperinsulinemic clamp procedures. In addition, an example is presented of a mathematical modeling approach that can use data from BCF measurements to develop a better understanding of BCF behavior and the overall status of glucose tolerance. PMID:24524730

  9. [Meaningful advanced training concepts for surgeons].

    PubMed

    Ansorg, J; Krüger, M; Vallböhmer, D

    2012-04-01

    A state of the art surgical training is crucial for the attraction of surgery as a medical profession. The German surgical community can only succeed in overcoming the shortage of young surgeons by the development of an attractive and professional training environment. Responsibility for surgical training has to be taken by the heads of department as well as by the surgical societies. Good surgical training should be deemed to be part of the corporate strategy of German hospitals and participation in external courses has to be properly funded by the hospital management. On the other hand residents are asked for commitment and flexibility and should keep records in logbooks and take part in assessment projects to gain continuing feedback on their learning progress. The surgical community is in charge of developing a structured but flexible training curriculum for each of the eight surgical training trunks. A perfect future curriculum has to reflect and cross-link local hospital training programs with a central training portfolio of a future Academy of German Surgeons, such as workshops, courses and e-learning projects. This challenge has to be dealt with in close cooperation by all surgical boards and societies. A common sense of surgery as a community in diversity is crucial for the success of this endeavour.

  10. [Assessing forest ecosystem health I. Model, method, and index system].

    PubMed

    Chen, Gao; Dai, Limin; Ji, Lanzhu; Deng, Hongbing; Hao, Zhanqing; Wang, Qingli

    2004-10-01

    Ecosystem health assessment is one of the main research topics and urgent tasks of ecosystem science in the 21st century. An operational definition of ecosystem health and a comprehensive, simple, easily applied and standardized index system, which are the foundations of ecosystem health assessment, are necessary for obtaining a simple and applicable assessment theory and method. Taking the Korean pine and broadleaved mixed forest ecosystem as an example, an original idea on ecosystem health and its assessment was put forward in this paper, based on the idea of a mode ecosystem set and the idea of forest ecosystem health. This idea can help clarify what ecosystem health is. Finally, a formula was derived for a new and effective health assessment method, health distance (HD), which is put forward here for the first time in China. At the same time, considering its characteristics in terms of status understanding and material health questions, a health index system for the Korean pine and broadleaved mixed forest ecosystem was proposed; this system treats the forest as a compound ecosystem based on the compound properties of nature, economy and society. It is concrete enough for sub-indices to be measured, and so provides the foundation for assessing the ecosystem health of the Korean pine and broadleaved mixed forest in further research.
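
    The abstract does not give the health distance (HD) formula itself. Purely as an illustration of a distance-to-reference index of this general kind, the sketch below computes a weighted distance between normalized indicator values and a reference state; the indicator names, values and weights are hypothetical, and this is not the authors' formula.

```python
# Purely illustrative: a generic weighted distance-to-reference index, NOT the
# paper's HD formula (which is not given in the abstract).
import math

indicators = {            # (current value, reference value for a healthy stand)
    "canopy cover":        (0.62, 0.85),
    "species diversity":   (1.9, 2.6),
    "soil organic matter": (3.1, 4.0),
}
weights = {"canopy cover": 0.4, "species diversity": 0.35, "soil organic matter": 0.25}

# Normalize each indicator by its reference value, then take a weighted
# Euclidean distance from the reference state (0 = identical to reference).
hd = math.sqrt(sum(
    weights[k] * (1.0 - value / reference) ** 2
    for k, (value, reference) in indicators.items()
))
print(f"health distance = {hd:.3f}")
```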

  11. [Assessment of ecosystem and its services conservation: indicators and methods].

    PubMed

    Lü, Yi-He; Zhang, Li-Wei; Wang, Jiang-Lei

    2013-05-01

    The conservation of ecosystems and their services is a frontier and hot topic in conservation ecology research. This paper reviewed the newest concepts and methods in the assessment of ecosystem and ecosystem-services conservation, focusing on the indicators and criteria for assessing the conservation status and endangerment level of ecosystems, as well as the main methods of ecosystem services assessment and conservation (including benefit transfer, systematic modeling, and quantitative indicator-based estimation). Considering the research progress and the demands of ecological conservation in China, several issues to be urgently solved were put forward: 1) formulating the indicators, criteria, and methods suitable for assessing ecosystem conservation in China; 2) developing methodologies for the quantitative assessment of ecosystem services; 3) determining the demands and optimal spatial arrangement of ecosystem and ecosystem-services conservation in China; and 4) establishing policies and incentive mechanisms for the conservation of ecosystems and their services. The resolution of these issues would provide an important guarantee for the development of ecological civilization in China.

  12. [Study on the risk assessment method of regional groundwater pollution].

    PubMed

    Yang, Yan; Yu, Yun-Jiang; Wang, Zong-Qing; Li, Ding-Long; Sun, Hong-Wei

    2013-02-01

    Based on the boundary elements of system risk assessment, a regional groundwater pollution risk assessment index system was preliminarily established, which included regional groundwater specific vulnerability assessment, assessment of the characteristics of regional pollution sources, and health risk assessment of regional featured pollutants. The three sub-evaluation systems were coupled with a multi-index comprehensive method, the risk was characterized with the spatial analysis tools of ArcMap, and a new method for evaluating regional groundwater pollution risk, suitable for different natural conditions and different types of pollution, was established. Taking Changzhou as an example, the risk of shallow groundwater pollution was studied with the new method. It was found that the groundwater vulnerability index in Changzhou is high and unevenly distributed; the distribution of pollution sources is concentrated and has a great impact on groundwater pollution risk; and, influenced by the pollutants and pollution sources, health risk values are high in the urban area of Changzhou. The pollution risk of shallow groundwater is high and unevenly distributed, concentrated to the north of the Anjia-Xuejia-Zhenglu line, in the city center and in the southeast, where human activities are more intense and pollution sources are dense.
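
    The coupling of the three sub-assessments with a multi-index comprehensive method can be illustrated as a weighted combination of normalized sub-indices for one assessment cell; the sub-index values, normalization ranges and weights below are hypothetical, not those of the study.

```python
# Hedged sketch of coupling three sub-assessments into a composite groundwater
# pollution risk score with a weighted multi-index method; values are illustrative.
def normalize(value, lo, hi):
    """Linearly rescale a sub-index to the 0-1 range."""
    return (value - lo) / (hi - lo)

# Raw sub-index values for one assessment cell (hypothetical).
vulnerability = normalize(6.8, lo=1, hi=10)       # specific vulnerability
source_load   = normalize(72, lo=0, hi=100)       # pollution-source characteristics
health_risk   = normalize(3.2e-5, lo=0, hi=1e-4)  # featured-pollutant health risk

weights = {"vulnerability": 0.4, "source_load": 0.3, "health_risk": 0.3}
composite = (weights["vulnerability"] * vulnerability
             + weights["source_load"] * source_load
             + weights["health_risk"] * health_risk)
print(f"composite risk score = {composite:.2f}")  # mapped per cell in a GIS layer
```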

  13. Method and apparatus of assessing down-hole drilling conditions

    DOEpatents

    Hall, David R.; Pixton, David S.; Johnson, Monte L.; Bartholomew, David B.; Fox, Joe

    2007-04-24

    A method and apparatus for use in assessing down-hole drilling conditions are disclosed. The apparatus includes a drill string, a plurality of sensors, a computing device, and a down-hole network. The sensors are distributed along the length of the drill string and are capable of sensing localized down-hole conditions while drilling. The computing device is coupled to at least one sensor of the plurality of sensors. The data is transmitted from the sensors to the computing device over the down-hole network. The computing device analyzes data output by the sensors and representative of the sensed localized conditions to assess the down-hole drilling conditions. The method includes sensing localized drilling conditions at a plurality of points distributed along the length of a drill string during drilling operations; transmitting data representative of the sensed localized conditions to a predetermined location; and analyzing the transmitted data to assess the down-hole drilling conditions.

  14. [Quantitative method of representative contaminants in groundwater pollution risk assessment].

    PubMed

    Wang, Jun-Jie; He, Jiang-Tao; Lu, Yan; Liu, Li-Ya; Zhang, Xiao-Liang

    2012-03-01

    In light of the problem that the stress-vulnerability assessment in groundwater pollution risk assessment lacks an effective quantitative system, a new system was proposed based on representative contaminants and their corresponding emission quantities, derived from an analysis of groundwater pollution sources. A quantitative method for the representative contaminants in this system was established by analyzing the three properties of the representative contaminants and determining the research emphasis using the analytic hierarchy process. The method was applied to the assessment of groundwater pollution risk in Beijing. The results demonstrated that the hazards of the representative contaminants depended greatly on the chosen research emphasis. There were also differences between the ranking of the three representative contaminants' hazards and the rankings of their corresponding properties. This suggests that the subjective tendency of the research emphasis has a decisive impact on the calculation results. In addition, normalizing the three properties by rank and unifying the quantified property results would amplify or attenuate the relative property characteristics of the different representative contaminants.
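
    The analytic hierarchy process step mentioned here can be sketched as follows: weights for the three contaminant properties are derived from the principal eigenvector of a pairwise comparison matrix, with a consistency check. The comparison values and property labels are hypothetical.

```python
# Hedged sketch of deriving property weights with the analytic hierarchy process;
# the pairwise comparison values are hypothetical.
import numpy as np

# Pairwise comparisons of three properties (Saaty 1-9 scale), e.g. toxicity,
# mobility, emission quantity; A[i, j] = importance of i relative to j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigenvalues, eigenvectors = np.linalg.eig(A)
k = np.argmax(eigenvalues.real)
weights = np.abs(eigenvectors[:, k].real)
weights /= weights.sum()

n = A.shape[0]
lambda_max = eigenvalues.real[k]
ci = (lambda_max - n) / (n - 1)          # consistency index
cr = ci / 0.58                           # random index RI = 0.58 for n = 3
print("weights:", np.round(weights, 3), f"CR = {cr:.3f} (acceptable if < 0.10)")
```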

  15. Reliability of marginal microleakage assessment by visual and digital methods

    PubMed Central

    de Santi Alvarenga, Fábio Augusto; Pinelli, Camila; Monteiro Loffredo, Leonor de Castro

    2015-01-01

    Objective: The aim of this study was to investigate the reliability of visual and digital methods to assess marginal microleakage in vitro. Materials and Methods: Typical Class V preparations were made in bovine teeth and filled with composite resin. After dye penetration (0.5% basic fuchsin), teeth were sectioned and the 53 obtained fragments were assessed according to visual (stereomicroscope) and digital methods (Image Tool Software®-ITS) (University of Texas Health Science Center-San Antonio Dental School, USA). Two calibrated examiners (A and B) evaluated dye penetration, by means of a stereomicroscope with ×20 magnification (scores), and by the ITS (millimeters). The intra- and inter-examiner agreement was estimated according to Kappa statistics (κ), and intraclass correlation coefficient (ρ). Results: In relation to the visual method, the intra-examiner agreement was almost perfect (κA = 0.87) and substantial (κB = 0.76), respectively to the examiner A and B. The inter-examiner agreement showed an almost perfect reliability (κ = 0.84). For the digital method, the intra-examiner agreement was almost perfect for both examiners and equal to ρ = 0.99, and so was the inter-examiner agreement value. Conclusion: Visual (stereomicroscope) and digital methods (ITS) showed high levels of intra- and inter-examiner reproducibility when marginal microleakage was assessed. PMID:25713476
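
    The agreement statistics used above can be illustrated with a minimal computation of unweighted Cohen's kappa for two raters' ordinal scores (the intraclass correlation computation is omitted here); the scores are hypothetical, not the study's data.

```python
# Minimal sketch of unweighted Cohen's kappa for two raters' ordinal
# microleakage scores (0-3); the scores are hypothetical.
from collections import Counter

rater_a = [0, 1, 1, 2, 3, 0, 2, 2, 1, 3, 0, 1]
rater_b = [0, 1, 2, 2, 3, 0, 2, 1, 1, 3, 0, 1]
n = len(rater_a)
categories = sorted(set(rater_a) | set(rater_b))

observed_agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / n

count_a, count_b = Counter(rater_a), Counter(rater_b)
expected_agreement = sum(count_a[c] * count_b[c] for c in categories) / n ** 2

kappa = (observed_agreement - expected_agreement) / (1 - expected_agreement)
print(f"Cohen's kappa = {kappa:.2f}")
```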

  16. Assessing Fit of Unidimensional Graded Response Models Using Bayesian Methods

    ERIC Educational Resources Information Center

    Zhu, Xiaowen; Stone, Clement A.

    2011-01-01

    The posterior predictive model checking method is a flexible Bayesian model-checking tool and has recently been used to assess fit of dichotomous IRT models. This paper extended previous research to polytomous IRT models. A simulation study was conducted to explore the performance of posterior predictive model checking in evaluating different…

  17. Myths and Misconceptions about Using Qualitative Methods in Assessment

    ERIC Educational Resources Information Center

    Harper, Shaun R.; Kuh, George D.

    2007-01-01

    The value of qualitative assessment approaches has been underestimated primarily because they are often juxtaposed against long-standing quantitative traditions and the widely accepted premise that the best research produces generalizable and statistically significant findings. Institutional researchers avoid qualitative methods for at least three…

  18. The Annemarie Roeper Method of Qualitative Assessment: My Journey

    ERIC Educational Resources Information Center

    Beneventi, Anne

    2016-01-01

    The Annemarie Roeper Method of Qualitative Assessment (QA) establishes an extremely rich set of procedures for revealing students' strengths as well as opportunities for the development of bright young people. This article explores the ways in which the QA process serves as a sterling example of a holistic, authentic system for recognizing…

  19. River Pollution: Part II. Biological Methods for Assessing Water Quality.

    ERIC Educational Resources Information Center

    Openshaw, Peter

    1984-01-01

    Discusses methods used in the biological assessment of river quality and such indicators of clean and polluted waters as the Trent Biotic Index, Chandler Score System, and species diversity indexes. Includes a summary of a river classification scheme based on quality criteria related to water use. (JN)

  20. Assessing Students' Writing Skills: A Comparison of Direct & Indirect Methods.

    ERIC Educational Resources Information Center

    Koffler, Stephen L.

    This research examined the results from direct and indirect writing assessments to determine the most effective method of discrimination. The New Jersey State Department of Education developed a test for ninth-grade students which was designed to measure the ability to apply writing mechanics to written text and to communicate effectively in…

  1. Using Empirical Article Analysis to Assess Research Methods Courses

    ERIC Educational Resources Information Center

    Bachiochi, Peter; Everton, Wendi; Evans, Melanie; Fugere, Madeleine; Escoto, Carlos; Letterman, Margaret; Leszczynski, Jennifer

    2011-01-01

    Developing students who can apply their knowledge of empirical research is a key outcome of the undergraduate psychology major. This learning outcome was assessed in two research methods courses by having students read and analyze a condensed empirical journal article. At the start and end of the semester, students in multiple sections of an…

  2. Methods of Assessing Bias and Fairness in Tests.

    ERIC Educational Resources Information Center

    Merz, William R.

    Several methods of assessing test item bias are described, and the concept of fair use of tests is examined. A test item is biased if individuals of equal ability have different probabilities of answering the item correctly. The following seven general procedures used to examine test items for bias are summarized and discussed: (1) analysis of…

  3. 50 CFR 270.18 - Method of imposing assessments.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    Title 50, Wildlife and Fisheries, § 270.18 - Method of imposing assessments (2010-10-01). NATIONAL MARINE FISHERIES SERVICE, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE; FISH AND SEAFOOD PROMOTION; SPECIES-SPECIFIC SEAFOOD MARKETING...

  4. A Revised Class Play Method of Peer Assessment.

    ERIC Educational Resources Information Center

    Masten, Ann S.; And Others

    1985-01-01

    The Revised Class Play (RCP) was presented as a measure of peer reputation designed to improve the assessment of social competence as well as the psychometric properties of the class play method. Administered to third through sixth graders, the measure revealed three dimensions: sociability-leadership, aggressive-disruptive, and sensitive-isolated. Data…

  5. A Comparison of Assessment Methods and Raters in Product Creativity

    ERIC Educational Resources Information Center

    Lu, Chia-Chen; Luh, Ding-Bang

    2012-01-01

    Although previous studies have attempted to use raters with different levels of experience to rate product creativity by adopting the Consensual Assessment Technique (CAT) approach, the validity of replacing the CAT with another measurement tool has not been adequately tested. This study aimed to compare raters with different levels of experience (expert vs.…

  6. Can The EQ-5D Detect Meaningful Change? A Systematic Review

    PubMed Central

    Payakachat, Nalin; Ali, Mir M.; Tilford, J. Mick

    2015-01-01

    Background The EQ-5D is one of the most frequently used generic, preference-based instruments for measuring the health utilities of patients in economic evaluations. It is recommended for health technology assessment by the National Institute for Health and Clinical Excellence. Because the EQ-5D plays such an important role in economic evaluations, useful information on its responsiveness to detect meaningful change in health status is required. Objective This study systematically reviewed and synthesized evidence on the responsiveness of the EQ-5D to detect meaningful change in health status for clinical research and economic evaluations. Methods We searched the EuroQol website, PubMed, PsychINFO, and EconLit databases to identify studies published in English from the inception of the EQ-5D until August 15, 2014 using keywords that were related to responsiveness. Studies that used only the EQ-VAS were excluded from the final analysis. Narrative synthesis was conducted to summarize evidence on the responsiveness of the EQ-5D by conditions or physiological functions. Results Of 1,401 studies, 145 were included in the narrative synthesis and categorized into 19 categories for 56 conditions. The EQ-5D was found to be responsive in 25 conditions (45%) with the magnitude of responsiveness varying from small to large depending on the condition. There was mixed evidence of responsiveness in 27 conditions (48%). Only four conditions (7%) (i.e., alcohol dependency, schizophrenia, limb reconstruction, and hearing impairment) were identified where the EQ-5D was not responsive. Conclusion The EQ-5D is an appropriate measure for economic evaluation and health technology assessment in conditions where it has demonstrated evidence of responsiveness. In conditions with mixed evidence of responsiveness, researchers should consider using the EQ-5D with other condition-specific measures to ensure appropriate estimates of effectiveness. These conditions should be a main focus for

  7. Improving Educational Assessment: A Computer-Adaptive Multiple Choice Assessment Using NRET as the Scoring Method

    ERIC Educational Resources Information Center

    Sie Hoe, Lau; Ngee Kiong, Lau; Kian Sam, Hong; Bin Usop, Hasbee

    2009-01-01

    Assessment is central to any educational process. The Number Right (NR) scoring method is a conventional scoring method for multiple-choice items, where students need to pick one option as the correct answer. One point is awarded for the correct response and zero for any other response. However, it has been heavily criticized for guessing and failure…
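    For illustration only, a minimal Python sketch of Number Right scoring together with the classical correction-for-guessing adjustment (R - W/(k-1)) that is often discussed alongside it; the answer key and responses are hypothetical, and NRET itself is not reproduced here.

    ```python
    # Number Right (NR) scoring for multiple-choice items, plus the classical
    # correction-for-guessing adjustment. Key and responses are illustrative only.

    def number_right(responses, key):
        """One point per correct response, zero otherwise."""
        return sum(1 for r, k in zip(responses, key) if r == k)

    def corrected_for_guessing(responses, key, n_options):
        """Classical formula-scoring adjustment: R - W / (k - 1)."""
        right = number_right(responses, key)
        wrong = sum(1 for r, k in zip(responses, key) if r is not None and r != k)
        return right - wrong / (n_options - 1)

    key = ["A", "C", "B", "D", "A"]
    responses = ["A", "C", "D", "D", None]           # None = omitted item
    print(number_right(responses, key))              # 3
    print(corrected_for_guessing(responses, key, 4)) # 3 - 1/3 = 2.67
    ```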

  8. Total System Performance Assessment - License Application Methods and Approach

    SciTech Connect

    J. McNeish

    2003-12-08

    ''Total System Performance Assessment-License Application (TSPA-LA) Methods and Approach'' provides the top-level method and approach for conducting the TSPA-LA model development and analyses. The method and approach is responsive to the criteria set forth in Total System Performance Assessment Integration (TSPAI) Key Technical Issues (KTIs) identified in agreements with the U.S. Nuclear Regulatory Commission, the ''Yucca Mountain Review Plan'' (YMRP), ''Final Report'' (NRC 2003 [163274]), and the NRC final rule 10 CFR Part 63 (NRC 2002 [156605]). This introductory section provides an overview of the TSPA-LA, the projected TSPA-LA documentation structure, and the goals of the document. It also provides a brief discussion of the regulatory framework, the approach to risk management of the development and analysis of the model, and the overall organization of the document. The section closes with some important conventions that are used in this document.

  9. Total System Performance Assessment-License Application Methods and Approach

    SciTech Connect

    J. McNeish

    2002-09-13

    ''Total System Performance Assessment-License Application (TSPA-LA) Methods and Approach'' provides the top-level method and approach for conducting the TSPA-LA model development and analyses. The method and approach is responsive to the criteria set forth in Total System Performance Assessment Integration (TSPAI) Key Technical Issue (KTI) agreements, the ''Yucca Mountain Review Plan'' (CNWRA 2002 [158449]), and 10 CFR Part 63. This introductory section provides an overview of the TSPA-LA, the projected TSPA-LA documentation structure, and the goals of the document. It also provides a brief discussion of the regulatory framework, the approach to risk management of the development and analysis of the model, and the overall organization of the document. The section closes with some important conventions that are utilized in this document.

  10. In Vivo Methods for the Assessment of Topical Drug Bioavailability

    PubMed Central

    Herkenne, Christophe; Alberti, Ingo; Naik, Aarti; Kalia, Yogeshvar N.; Mathy, François-Xavier; Préat, Véronique

    2007-01-01

    This paper reviews some current methods for the in vivo assessment of local cutaneous bioavailability in humans after topical drug application. After an introduction discussing the importance of local drug bioavailability assessment and the limitations of model-based predictions, the focus turns to the relevance of experimental studies. The available techniques are then reviewed in detail, with particular emphasis on the tape stripping and microdialysis methodologies. Other less developed techniques, including the skin biopsy, suction blister, follicle removal and confocal Raman spectroscopy techniques are also described. PMID:17985216

  11. Assessment of nonequilibrium radiation computation methods for hypersonic flows

    NASA Technical Reports Server (NTRS)

    Sharma, Surendra

    1993-01-01

    The present understanding of shock-layer radiation in the low density regime, as appropriate to hypersonic vehicles, is surveyed. Based on the relative importance of electron excitation and radiation transport, the hypersonic flows are divided into three groups: weakly ionized, moderately ionized, and highly ionized flows. In the light of this division, the existing laboratory and flight data are scrutinized. Finally, an assessment of the nonequilibrium radiation computation methods for the three regimes in hypersonic flows is presented. The assessment is conducted by comparing experimental data against the values predicted by the physical model.

  12. Review of near-infrared methods for wound assessment

    NASA Astrophysics Data System (ADS)

    Sowa, Michael G.; Kuo, Wen-Chuan; Ko, Alex C.-T.; Armstrong, David G.

    2016-09-01

    Wound management is a challenging and costly problem that is growing in importance as people are living longer. Instrumental methods are increasingly being relied upon to provide objective measures of wound assessment to help guide management. Technologies that employ near-infrared (NIR) light form a prominent contingent among the existing and emerging technologies. We review some of these technologies. Some are already established, such as indocyanine green fluorescence angiography, while we also speculate on others that have the potential to be clinically relevant to wound monitoring and assessment. These various NIR-based technologies address clinical wound management needs along the entire healing trajectory of a wound.

  13. Assessment of Methods for Estimating Risk to Birds from ...

    EPA Pesticide Factsheets

    The U.S. EPA Ecological Risk Assessment Support Center (ERASC) announced the release of the final report entitled, Assessment of Methods for Estimating Risk to Birds from Ingestion of Contaminated Grit Particles. This report evaluates approaches for estimating the probability of ingestion by birds of contaminated particles such as pesticide granules or lead particles (i.e. shot or bullet fragments). In addition, it presents an approach for using this information to estimate the risk of mortality to birds from ingestion of lead particles. Response to ERASC Request #16

  14. Assessing the Impact of Tutorial Services

    ERIC Educational Resources Information Center

    Ticknor, Cindy S.; Shaw, Kimberly A.; Howard, Timothy

    2014-01-01

    Many institutions struggle to develop a meaningful way to assess the effectiveness of drop-in tutorial services provided to students. This article discusses the development of a data collection system based on a visitor sign-in system that proved to be an efficient method of gathering assessment data, including frequency of visits, end-of-course…

  15. Application of geosites assessment method in geopark context

    NASA Astrophysics Data System (ADS)

    Martin, Simon; Perret, Amandine; Renau, Pierre; Cartier-Moulin, Olivier; Regolini-Bissig, Géraldine

    2014-05-01

    The regional natural park of the Monts d'Ardèche (Ardèche and Haute-Loire departments, France) is a candidate to the European Geopark Network (EGN) in 2014. The area has a wide geodiversity - with rocks from Cambrian to Pleistocene (basalt flows) - and interesting features such as phonolitic protrusions, maars and granite boulder fields. Around 115 sites were selected and documented through a geosites inventory carried out in the territory. This pre-selection was supervised by the Ardèche Geological Society and is therefore based on expert advice. In the context of the EGN candidature, these potential geosites were assessed with a simplified method. It follows the spirit of the method from the University of Lausanne (Reynard et al., 2007) and its recent developments: assessment of the scientific (central) value and of a set of additional values (ecological and cultural). As this assessment aimed to offer a management tool to the future geopark's authorities, a special focus was given to management aspects. In particular, the opportunities to use each site for education (from schools to universities) and for tourism, as well as the existence of protection and of interpretive facilities, were documented and assessed. Several interesting conclusions may be drawn from this case study: (1) expert assessment is effective when it is based on a pre-existing inventory that is well structured and documented; (2) even simplified, an assessment method is a very useful framework for expert assessment, as it focuses the discussions on the most important points and helps to balance the assessment; (3) whereas the inventory can be extensively detailed and partly academic, the assessment in the geopark context is objective-driven in order to answer management needs. The place of the geosites assessment among the three key players of a geopark construction process (i.e. the territory's managers, local geoscientists and the EGN) is also discussed. This place can be defined as the point of consensus of needs

  16. Analytical resource assessment method for continuous (unconventional) oil and gas accumulations - The "ACCESS" Method

    USGS Publications Warehouse

    Crovelli, Robert A.; revised by Charpentier, Ronald R.

    2012-01-01

    The U.S. Geological Survey (USGS) periodically assesses petroleum resources of areas within the United States and the world. The purpose of this report is to explain the development of an analytic probabilistic method and spreadsheet software system called Analytic Cell-Based Continuous Energy Spreadsheet System (ACCESS). The ACCESS method is based upon mathematical equations derived from probability theory. The ACCESS spreadsheet can be used to calculate estimates of the undeveloped oil, gas, and NGL (natural gas liquids) resources in a continuous-type assessment unit. An assessment unit is a mappable volume of rock in a total petroleum system. In this report, the geologic assessment model is defined first, the analytic probabilistic method is described second, and the spreadsheet ACCESS is described third. In this revised version of Open-File Report 00-044 , the text has been updated to reflect modifications that were made to the ACCESS program. Two versions of the program are added as appendixes.

  17. Assessment and comparison of methods for solar ultraviolet radiation measurements

    NASA Astrophysics Data System (ADS)

    Leszczynski, K.

    1995-06-01

    In this study, different methods for measuring solar ultraviolet radiation are compared. The methods included are spectroradiometric, erythemally weighted broadband, and multi-channel measurements. The comparison of the different methods is based on a literature review and on assessments of the optical characteristics of the Optronic 742 spectroradiometer of the Finnish Centre for Radiation and Nuclear Safety (STUK) and of the erythemally weighted Robertson-Berger type broadband radiometers Solar Light models 500 and 501 of the Finnish Meteorological Institute and STUK. An introduction to the sources of error in solar UV measurements and to methods for the radiometric characterization of UV radiometers, together with methods for error reduction, is presented. Reviews of experiences from worldwide UV monitoring efforts and instrumentation, as well as of the results from international UV radiometer intercomparisons, are also presented.

  18. An Observational Assessment Method for Aging Laboratory Rats

    PubMed Central

    Phillips, Pamela M; Jarema, Kimberly A; Kurtz, David M; MacPhail, Robert C

    2010-01-01

    The rapid growth of the aging human population highlights the need for laboratory animal models to study the basic biologic processes of aging and susceptibility to disease, drugs, and environmental pollutants. Methods are needed to evaluate the health of aging animals over time, particularly methods for efficiently monitoring large research colonies. Here we describe an observational assessment method that scores appearance, posture, mobility, and muscle tone on a 5-point scale that can be completed in about 1 min. A score of 1 indicates no deterioration, whereas a score of 5 indicates severe deterioration. Tests were applied to male Brown Norway rats between 12 and 36 mo of age (n = 32). The rats were participating concurrently in experiments on the behavioral effects of intermittent exposure (approximately every 4 mo) to short-acting environmental chemicals. Results demonstrated that aging-related signs of deterioration did not appear before 18 mo of age. Assessment scores and variability then increased with age. Body weights increased until approximately 24 mo, then remained stable, but decreased after 31 mo for the few remaining rats. The incidence of death increased slightly from 20 to 28 mo of age and then rose sharply; median survival age was approximately 30 mo, with a maximum of 36 mo. The results indicate that our observational assessment method supports efficient monitoring of the health of aging rats and may be useful in studies on susceptibility to diseases, drugs, and toxicants during old age. PMID:21205442

  19. Climate change and occupational heat stress: methods for assessment

    PubMed Central

    Holmér, Ingvar

    2010-01-01

    Background Presumed effects of global warming on occupational heat stress aggravate conditions in many parts of the world, in particular in developing countries. In order to assess and evaluate conditions, heat stress must be described and measured correctly. Objective Assessment of heat stress using internationally recognized methods. Design Two such methods are wet bulb globe temperature (WBGT; ISO 7243) and predicted heat strain (PHS; ISO 7933). Both methods measure relevant climatic factors and provide recommendations for limit values in terms of time when heat stress becomes imminent. The WBGT as a heat stress index is empirical and widely recognized. It requires, however, special sensors for the climatic factors that can introduce significant measurement errors if prescriptions in ISO 7243 are not followed. The PHS (ISO 7933) is based on climatic factors that can easily be measured with traditional instruments. It evaluates the conditions for heat balance in a more rational way and it applies equally to all combinations of climates. Results Analyzing similar climatic conditions with WBGT and PHS indicates that WBGT provides a more conservative assessment philosophy that allows much shorter working time than predicted with PHS. Conclusions PHS prediction of physiological strain appears to fit better with published data from warm countries. Both methods should be used and validated more extensively worldwide in order to give reliable and accurate information about the actual heat stress. PMID:21139697
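    As a concrete illustration of the first method above, a minimal Python sketch of the WBGT index as defined in ISO 7243 (outdoor and indoor forms); the input temperatures are illustrative, and the PHS model of ISO 7933 is too involved to sketch here.

    ```python
    # WBGT index per ISO 7243. Inputs: natural wet-bulb temperature (t_nwb),
    # globe temperature (t_g) and dry-bulb air temperature (t_a), in degrees C.

    def wbgt_outdoor(t_nwb: float, t_g: float, t_a: float) -> float:
        """WBGT with solar load (outdoors): 0.7*t_nwb + 0.2*t_g + 0.1*t_a."""
        return 0.7 * t_nwb + 0.2 * t_g + 0.1 * t_a

    def wbgt_indoor(t_nwb: float, t_g: float) -> float:
        """WBGT without solar load (indoors or shade): 0.7*t_nwb + 0.3*t_g."""
        return 0.7 * t_nwb + 0.3 * t_g

    # Example: a hot outdoor workplace.
    print(round(wbgt_outdoor(t_nwb=27.0, t_g=45.0, t_a=34.0), 1))  # 31.3 degC
    ```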

  20. An Assessment of Iterative Reconstruction Methods for Sparse Ultrasound Imaging

    PubMed Central

    Valente, Solivan A.; Zibetti, Marcelo V. W.; Pipa, Daniel R.; Maia, Joaquim M.; Schneider, Fabio K.

    2017-01-01

    Ultrasonic image reconstruction using inverse problems has recently appeared as an alternative to enhance ultrasound imaging over beamforming methods. This approach depends on the accuracy of the acquisition model used to represent transducers, reflectivity, and medium physics. Iterative methods, well known in general sparse signal reconstruction, are also suited for imaging. In this paper, a discrete acquisition model is assessed by solving a linear system of equations by an ℓ1-regularized least-squares minimization, where the solution sparsity may be adjusted as desired. The paper surveys 11 variants of four well-known algorithms for sparse reconstruction, and assesses their optimization parameters with the goal of finding the best approach for iterative ultrasound imaging. The strategy for the model evaluation consists of using two distinct datasets. We first generate data from a synthetic phantom that mimics real targets inside a professional ultrasound phantom device. This dataset is contaminated with Gaussian noise with an estimated SNR, and all methods are assessed by their resulting images and performances. The model and methods are then assessed with real data collected by a research ultrasound platform when scanning the same phantom device, and results are compared with beamforming. A distinct real dataset is finally used to further validate the proposed modeling. Although high computational effort is required by iterative methods, results show that the discrete model may lead to images closer to ground-truth than traditional beamforming. However, computing capabilities of current platforms need to evolve before frame rates currently delivered by ultrasound equipments are achievable. PMID:28282862
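    For orientation, a minimal Python sketch of one iterative method for the l1-regularized least-squares problem described above (plain ISTA minimizing 0.5*||Ax - y||^2 + lam*||x||_1 on a small random stand-in for the acquisition model); it is not the paper's ultrasound model nor any of its 11 surveyed variants.

    ```python
    # ISTA for l1-regularized least squares on a random stand-in acquisition model.
    import numpy as np

    rng = np.random.default_rng(1)
    m, n = 120, 400
    A = rng.standard_normal((m, n)) / np.sqrt(m)     # stand-in acquisition model
    x_true = np.zeros(n)
    x_true[rng.choice(n, 15, replace=False)] = rng.standard_normal(15)
    y = A @ x_true + 0.01 * rng.standard_normal(m)

    def ista(A, y, lam, n_iter=500):
        L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)
            z = x - grad / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        return x

    x_hat = ista(A, y, lam=0.02)
    print("nonzero coefficients recovered:", np.count_nonzero(np.abs(x_hat) > 1e-3))
    ```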

  1. CT Scan Method Accurately Assesses Humeral Head Retroversion

    PubMed Central

    Boileau, P.; Mazzoleni, N.; Walch, G.; Urien, J. P.

    2008-01-01

    Humeral head retroversion is not well described, and the literature is controversial regarding the accuracy of measurement methods and the ranges of normal values. We therefore determined normal humeral head retroversion and assessed the measurement methods. We measured retroversion in 65 cadaveric humeri, including 52 paired specimens, using four methods: radiographic, computed tomography (CT) scan, computer-assisted, and direct methods. We also assessed the distance between the humeral head central axis and the bicipital groove. CT scan methods accurately measure humeral head retroversion, while radiographic methods do not. The retroversion was 17.9° with respect to the transepicondylar axis and 21.5° with respect to the trochlear tangent axis. The difference between the right and left humeri was 8.9°. The distance between the central axis of the humeral head and the bicipital groove was 7.0 mm and was consistent between right and left humeri. Humeral head retroversion may be most accurately obtained using the patient's own anatomic landmarks or, if these are not identifiable, retroversion as measured by those landmarks on the contralateral side or by the bicipital groove. PMID:18264854

  2. Meaningful learning: theoretical support for concept-based teaching.

    PubMed

    Getha-Eby, Teresa J; Beery, Theresa; Xu, Yin; O'Brien, Beth A

    2014-09-01

    Novice nurses’ inability to transfer classroom knowledge to the bedside has been implicated in adverse patient outcomes, including death. Concept-based teaching is a pedagogy found to improve knowledge transfer. Concept-based teaching emanates from a constructivist paradigm of teaching and learning and can be implemented most effectively when the underlying theory and principles are applied. Ausubel’s theory of meaningful learning and its construct of substantive knowledge integration provides a model to help educators to understand, implement, and evaluate concept-based teaching. Contemporary findings from the fields of cognitive psychology, human development, and neurobiology provide empirical evidence of the relationship between concept-based teaching, meaningful learning, and knowledge transfer. This article describes constructivist principles and meaningful learning as they apply to nursing pedagogy.

  3. An algorithm for encryption of secret images into meaningful images

    NASA Astrophysics Data System (ADS)

    Kanso, A.; Ghebleh, M.

    2017-03-01

    Image encryption algorithms typically transform a plain image into a noise-like cipher image, whose appearance is an indication of encrypted content. Bao and Zhou [Image encryption: Generating visually meaningful encrypted images, Information Sciences 324, 2015] propose encrypting the plain image into a visually meaningful cover image. This improves security by masking existence of encrypted content. Following their approach, we propose a lossless visually meaningful image encryption scheme which improves Bao and Zhou's algorithm by making the encrypted content, i.e. distortions to the cover image, more difficult to detect. Empirical results are presented to show high quality of the resulting images and high security of the proposed algorithm. Competence of the proposed scheme is further demonstrated by means of comparison with Bao and Zhou's scheme.

  4. DREAM: a method for semi-quantitative dermal exposure assessment.

    PubMed

    Van-Wendel-de-Joode, Berna; Brouwer, Derk H; Vermeulen, Roel; Van Hemmen, Joop J; Heederik, Dick; Kromhout, Hans

    2003-01-01

    This paper describes a new method (DREAM) for structured, semi-quantitative dermal exposure assessment for chemical or biological agents that can be used in occupational hygiene or epidemiology. It is anticipated that DREAM could serve as an initial assessment of dermal exposure, amongst others, resulting in a ranking of tasks and subsequently jobs. DREAM consists of an inventory and evaluation part. Two examples of dermal exposure of workers of a car-construction company show that DREAM characterizes tasks and gives insight into exposure mechanisms, forming a basis for systematic exposure reduction. DREAM supplies estimates for exposure levels on the outside clothing layer as well as on skin, and provides insight into the distribution of dermal exposure over the body. Together with the ranking of tasks and people, this provides information for measurement strategies and helps to determine who, where and what to measure. In addition to dermal exposure assessment, the systematic description of dermal exposure pathways helps to prioritize and determine most adequate measurement strategies and methods. DREAM could be a promising approach for structured, semi-quantitative, dermal exposure assessment.

  5. Milestones: a rapid assessment method for the Clinical Competency Committee

    PubMed Central

    Nabors, Christopher; Forman, Leanne; Peterson, Stephen J.; Gennarelli, Melissa; Aronow, Wilbert S.; DeLorenzo, Lawrence; Chandy, Dipak; Ahn, Chul; Sule, Sachin; Stallings, Gary W.; Khera, Sahil; Palaniswamy, Chandrasekar; Frishman, William H.

    2016-01-01

    Introduction Educational milestones are now used to assess the developmental progress of all U.S. graduate medical residents during training. Twice annually, each program’s Clinical Competency Committee (CCC) makes these determinations and reports its findings to the Accreditation Council for Graduate Medical Education (ACGME). The ideal way to conduct the CCC is not known. After finding that deliberations reliant upon the new milestones were time intensive, our internal medicine residency program tested an approach designed to produce rapid but accurate assessments. Material and methods For this study, we modified our usual CCC process to include pre-meeting faculty ratings of resident milestones progress with in-meeting reconciliation of their ratings. Data were considered largely via standard report and presented in a pre-arranged pattern. Participants were surveyed regarding their perceptions of data management strategies and use of milestones. Reliability of competence assessments was estimated by comparing pre-/post-intervention class rank lists produced by individual committee members with a master class rank list produced by the collective CCC after full deliberation. Results Use of the study CCC approach reduced committee deliberation time from 25 min to 9 min per resident (p < 0.001). Committee members believed milestones improved their ability to identify and assess expected elements of competency development (p = 0.026). Individual committee member assessments of trainee progress agreed well with collective CCC assessments. Conclusions Modification of the clinical competency process to include pre-meeting competence ratings with in-meeting reconciliation of these ratings led to shorter deliberation times, improved evaluator satisfaction and resulted in reliable milestone assessments. PMID:28144272

  6. Sample Size for Assessing Agreement between Two Methods of Measurement by Bland-Altman Method.

    PubMed

    Lu, Meng-Jie; Zhong, Wei-Hua; Liu, Yu-Xiu; Miao, Hua-Zhang; Li, Yong-Chang; Ji, Mu-Huo

    2016-11-01

    The Bland-Altman method has been widely used for assessing agreement between two methods of measurement. However, the question of sample size estimation for this approach remains unsolved. We propose a new method of sample size estimation for Bland-Altman agreement assessment. In the Bland-Altman method, the conclusion on agreement is based on the width of the confidence interval for the limits of agreement (LOAs) in comparison to a predefined clinical agreement limit. Under the theory of statistical inference, formulae for sample size estimation are derived, which depend on the pre-determined levels of α and β, the mean and standard deviation of the differences between the two measurements, and the predefined limits. With this new method, sample sizes are calculated under different parameter settings that occur frequently in method comparison studies, and Monte-Carlo simulation is used to obtain the corresponding powers. The simulation results showed that the achieved powers coincided with the pre-determined levels of power, thus validating the correctness of the method. The sample size estimation method can be applied with the Bland-Altman method to assess agreement between two methods of measurement.
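    For context, a minimal Python sketch of the Bland-Altman quantities the paper builds on: the bias, the 95% limits of agreement, and the classical approximate standard error of a limit (sqrt(3*s^2/n), Bland and Altman 1986); the paper's own sample size formulae are not reproduced, and the data are synthetic.

    ```python
    # Bland-Altman bias, limits of agreement (LoA), and approximate LoA confidence
    # intervals on synthetic paired measurements.
    import numpy as np

    rng = np.random.default_rng(2)
    method_a = rng.normal(100, 10, size=40)
    method_b = method_a + rng.normal(1.5, 4, size=40)   # bias ~1.5, SD of diffs ~4

    d = method_b - method_a
    n = d.size
    bias = d.mean()
    sd = d.std(ddof=1)
    loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd

    # Approximate standard error of each limit of agreement (Bland & Altman 1986).
    se_loa = sd * np.sqrt(3.0 / n)
    print(f"bias = {bias:.2f}, LoA = [{loa_low:.2f}, {loa_high:.2f}]")
    print(f"95% CI half-width for each LoA ~ {1.96 * se_loa:.2f}")
    ```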

  7. Assessing Security of Supply: Three Methods Used in Finland

    NASA Astrophysics Data System (ADS)

    Sivonen, Hannu

    Public Private Partnership (PPP) has an important role in securing supply in Finland. Three methods are used in assessing the level of security of supply. First, in national expert groups, a linear mathematical model has been used. The model is based on interdependency estimates. It ranks societal functions or its more detailed components, such as items in the food supply chain, according to the effect and risk pertinent to the interdependencies. Second, the security of supply is assessed in industrial branch committees (clusters and pools) in the form of indicators. The level of security of supply is assessed against five generic factors (dimension 1) and tens of business branch specific functions (dimension 2). Third, in two thousand individual critical companies, the maturity of operational continuity management is assessed using Capability Maturity Model (CMM) in an extranet application. The pool committees and authorities obtain an anonymous summary. The assessments are used in allocating efforts for securing supply. The efforts may be new instructions, training, exercising, and in some cases, investment and regulation.

  8. Methods for Developing Emissions Scenarios for Integrated Assessment Models

    SciTech Connect

    Prinn, Ronald; Webster, Mort

    2007-08-20

    The overall objective of this research was to contribute data and methods to support the future development of new emissions scenarios for integrated assessment of climate change. Specifically, this research had two main objectives: 1. Use historical data on economic growth and energy efficiency changes, and develop probability density functions (PDFs) for the appropriate parameters for two or three commonly used integrated assessment models. 2. Using the parameter distributions developed through the first task and previous work, we will develop methods of designing multi-gas emission scenarios that usefully span the joint uncertainty space in a small number of scenarios. Results on the autonomous energy efficiency improvement (AEEI) parameter are summarized, an uncertainty analysis of elasticities of substitution is described, and the probabilistic emissions scenario approach is presented.

  9. Using the statistical analysis method to assess the landslide susceptibility

    NASA Astrophysics Data System (ADS)

    Chan, Hsun-Chuan; Chen, Bo-An; Wen, Yo-Ting

    2015-04-01

    This study assessed landslide susceptibility in the upstream watershed of the Jing-Shan River, central Taiwan. The landslide inventories from typhoons Toraji in 2001, Mindulle in 2004, Kalmaegi and Sinlaku in 2008, Morakot in 2009, and the 0719 rainfall event in 2011, established by the Taiwan Central Geological Survey, were used as landslide data. The study assesses landslide susceptibility using different statistical methods, including logistic regression, the instability index method, and the support vector machine (SVM). After evaluation, elevation, slope, slope aspect, lithology, terrain roughness, slope roughness, plan curvature, profile curvature, total curvature, and average rainfall were chosen as the landslide factors. The validity of the three established models was further examined with the receiver operating characteristic (ROC) curve. The logistic regression results showed that terrain roughness and slope roughness had the stronger impact on the susceptibility value, whereas the instability index method indicated that terrain roughness and lithology had the stronger impact. The instability index method may lead to underestimation near the river side, and it also raises a potential issue concerning the number of factor classes: increasing the number of classes may produce an excessive coefficient of variation for a factor, while decreasing it may classify a large range of nearby cells into the same susceptibility level. Finally, the ROC curve was used to discriminate among the three models. SVM was the preferred method for assessing landslide susceptibility, and it performed nearly as well as logistic regression in recognizing the medium-high and high susceptibility classes.
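    As a small illustration of two of the models named above, a Python sketch comparing logistic regression and an SVM by ROC AUC on synthetic landslide-factor data; the factor names, coefficients, and values are invented, not the Jing-Shan River dataset.

    ```python
    # Logistic regression vs. SVM for a binary landslide/no-landslide label,
    # compared by ROC AUC on synthetic factor data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(3)
    n = 2000
    X = np.column_stack([
        rng.uniform(200, 2500, n),   # elevation (m)
        rng.uniform(0, 60, n),       # slope (deg)
        rng.uniform(0, 400, n),      # event rainfall (mm)
    ])
    logit = -6 + 0.08 * X[:, 1] + 0.01 * X[:, 2]     # slope and rainfall drive risk
    y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    for name, model in [
        ("logistic regression", make_pipeline(StandardScaler(), LogisticRegression())),
        ("SVM", make_pipeline(StandardScaler(), SVC(probability=True))),
    ]:
        model.fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print(f"{name}: ROC AUC = {auc:.3f}")
    ```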

  10. Meaningful use of information technology: a local perspective.

    PubMed

    Hussain, Anwar A

    2011-05-17

    To reduce costs of care and improve quality, the federal government is stimulating adoption of health information technology through meaningful use policy. The legislation, however, is built on several assumptions that are unrealistic from a provider's perspective-the group that is expected to purchase, use, and sustain the information technology infrastructure. A viable meaningful use policy may do better to account for the realities of patient care by using a bottom-up approach for adoption rather than the current top-down strategy.

  11. Meaningful Peer Review in Radiology: A Review of Current Practices and Potential Future Directions.

    PubMed

    Moriarity, Andrew K; Hawkins, C Matthew; Geis, J Raymond; Dreyer, Keith J; Kamer, Aaron P; Khandheria, Paras; Morey, Jose; Whitfill, James; Wiggins, Richard H; Itri, Jason N

    2016-12-01

    The current practice of peer review within radiology is well developed and widely implemented compared with other medical specialties. However, there are many factors that limit current peer review practices from reducing diagnostic errors and improving patient care. The development of "meaningful peer review" requires a transition away from compliance toward quality improvement, whereby the information and insights gained facilitate education and drive systematic improvements that reduce the frequency and impact of diagnostic error. The next generation of peer review requires significant improvements in IT functionality and integration, enabling features such as anonymization, adjudication by multiple specialists, categorization and analysis of errors, tracking, feedback, and easy export into teaching files and other media that require strong partnerships with vendors. In this article, the authors assess various peer review practices, with focused discussion on current limitations and future needs for meaningful peer review in radiology.

  12. Meaningful Use Attestations among US Hospitals: The Growing Rural-Urban Divide.

    PubMed

    Sandefer, Ryan H; Marc, David T; Kleeberg, Paul

    2015-01-01

    The purpose of this study was to assess EHR Incentive Program attestations of eligible US hospitals across geography and hospital type. The proportions of attestations were compared between metropolitan, micropolitan, and rural hospitals and by whether a hospital was critical access or prospective payment system. From 2011 until December 2013, rural and critical access hospitals were attesting to meaningful use and receiving federal incentive payments at a significantly lower proportion than their urban counterparts. The data suggest that the digital divide between urban and rural hospitals that are adopting electronic health records and using the technology effectively is widening. These findings illustrate that the needs of rural hospitals currently and into the future are different than urban hospitals, and the meaningful use program does not appear to provide the resources needed to propel these rural hospitals forward.

  13. Two random repeat recall methods to assess alcohol use.

    PubMed Central

    Midanik, L T

    1993-01-01

    Two random repeat recall methods were compared with a summary measure to assess alcohol use. Subjects (n = 142) were randomly assigned to one of two groups; they were called either on 14 random days during three 30-day waves and asked about drinking yesterday, or on 2 random days during each wave and asked about drinking in the past week. Follow-up telephone interviews obtained summary measures for each wave. Random repeat methods generally obtained higher estimates. However, the high dropout rate makes questionable the feasibility of using this approach with general population samples. PMID:8498631

  14. A fast RCS accuracy assessment method for passive radar calibrators

    NASA Astrophysics Data System (ADS)

    Zhou, Yongsheng; Li, Chuanrong; Tang, Lingli; Ma, Lingling; Liu, QI

    2016-10-01

    In microwave radar radiometric calibration, the corner reflector acts as the standard reference target, but its structure is often deformed during transportation and installation, or by wind and gravity while permanently installed outdoors, which decreases the RCS accuracy and therefore the radiometric calibration accuracy. This paper proposes a fast RCS accuracy measurement method, based on a 3-D measuring instrument and RCS simulation, for tracking the characteristic variation of the corner reflector. In the first step, an RCS simulation algorithm is selected and its simulation accuracy is assessed. In the second step, the 3-D measuring instrument is selected and its measuring accuracy is evaluated. Once the accuracy of the selected RCS simulation algorithm and of the 3-D measuring instrument is sufficient for the RCS accuracy assessment, the 3-D structure of the corner reflector is obtained by the 3-D measuring instrument, and the RCSs of the measured 3-D structure and of the corresponding ideal structure are calculated with the selected RCS simulation algorithm. The final RCS accuracy is the absolute difference between the two RCS calculation results. The advantage of the proposed method is that it can easily be applied outdoors, avoiding the correlation among the plate edge-length error, plate orthogonality error, and plate curvature error. Its accuracy is higher than that of the method using the distortion equation. A measurement example is presented at the end of the paper to show the performance of the proposed method.
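    For rough orientation only, a Python sketch relating ideal triangular trihedral corner-reflector geometry to its peak RCS via the standard closed form sigma_max = 4*pi*a^4/(3*lambda^2), and expressing an RCS discrepancy in dB; the paper's method instead simulates the RCS of the measured 3-D structure, and the wavelength, edge length, and assumed deformation loss below are hypothetical.

    ```python
    # Ideal peak RCS of a triangular trihedral corner reflector and a dB-scale
    # discrepancy, as a stand-in for the "ideal structure" term in the comparison.
    import math

    def trihedral_peak_rcs(edge_m: float, wavelength_m: float) -> float:
        """Peak RCS (m^2) of an ideal triangular trihedral corner reflector."""
        return 4.0 * math.pi * edge_m**4 / (3.0 * wavelength_m**2)

    def rcs_difference_db(rcs_a_m2: float, rcs_b_m2: float) -> float:
        """Absolute RCS difference expressed in dB."""
        return abs(10.0 * math.log10(rcs_a_m2 / rcs_b_m2))

    wavelength = 0.031            # X-band example, about 9.7 GHz (assumed)
    ideal = trihedral_peak_rcs(edge_m=0.70, wavelength_m=wavelength)
    deformed = 0.92 * ideal       # assumed 8% RCS loss from plate deformation
    print(f"ideal peak RCS  = {10 * math.log10(ideal):.2f} dBsm")
    print(f"RCS discrepancy = {rcs_difference_db(deformed, ideal):.2f} dB")
    ```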

  15. Assessment of mesoscopic particle-based methods in microfluidic geometries

    NASA Astrophysics Data System (ADS)

    Zhao, Tongyang; Wang, Xiaogong; Jiang, Lei; Larson, Ronald G.

    2013-08-01

    We assess the accuracy and efficiency of two particle-based mesoscopic simulation methods, namely, Dissipative Particle Dynamics (DPD) and Stochastic Rotation Dynamics (SRD) for predicting a complex flow in a microfluidic geometry. Since both DPD and SRD use soft or weakly interacting particles to carry momentum, both methods contain unavoidable inertial effects and unphysically high fluid compressibility. To assess these effects, we compare the predictions of DPD and SRD for both an exact Stokes-flow solution and nearly exact solutions at finite Reynolds numbers from the finite element method for flow in a straight channel with periodic slip boundary conditions. This flow represents a periodic electro-osmotic flow, which is a complex flow with an analytical solution for zero Reynolds number. We find that SRD is roughly ten-fold faster than DPD in predicting the flow field, with better accuracy at low Reynolds numbers. However, SRD has more severe problems with compressibility effects than does DPD, which limits the Reynolds numbers attainable in SRD to around 25-50, while DPD can achieve Re higher than this before compressibility effects become too large. However, since the SRD method runs much faster than DPD does, we can afford to enlarge the number of grid cells in SRD to reduce the fluid compressibility at high Reynolds number. Our simulations provide a method to estimate the range of conditions for which SRD or DPD is preferable for mesoscopic simulations.

  16. Numerical methods for assessment of the ship's pollutant emissions

    NASA Astrophysics Data System (ADS)

    Jenaru, A.; Acomi, N.

    2016-08-01

    The maritime transportation sector constitutes a source of atmospheric pollution. To avoid or minimize ships' pollutant emissions, the first step is to assess them. Two methods for estimating ships' emissions are proposed in this paper. These methods prove their utility, from a practical perspective, for shipboard and shore-based management personnel. The methods were demonstrated for a product tanker on which a permanent monitoring system for pollutant emissions had previously been fitted. The values of the polluting agents in the exhaust gas were determined for the ship at shipyard delivery and were used as a starting point. Based on these values, the paper aims at numerically assessing the ship's emissions in order to determine ways of avoiding environmental pollution: an analytical method of determining the concentrations of the exhaust gas components, using the computation program MathCAD, and a graphical method of determining those concentrations, using variation diagrams of the parameters into which the results of the on-board measurements were introduced after applying pertinent correction factors. The results should be regarded as a supporting tool during the decision-making process linked to the reduction of ships' pollutant emissions.

  17. [Methods of radiological bone age assessment (author's transl)].

    PubMed

    Fendel, H

    1976-09-01

    Bone age can be assessed in different ways. Numerical methods that count the number of existing ossification centers are too inaccurate. The use of "age-of-appearance" tables gives a more accurate evaluation. In both methods, however, x-ray films of several body parts must be made; they are therefore complicated and lead to higher patient radiation exposure. Methods using the hand and wrist as a representative area of the whole skeleton are of greater value for routine bone-age assessments. The Greulich-Pyle atlas is widely used, and the atlas method is fully sufficient in the great majority of cases when certain rules are observed. More detailed information can be obtained by using the so-called "bone-by-bone" evaluation. A score system was introduced by Tanner and Whitehouse and should be used more widely than it is at present. Metrical methods give no real information about bone age but provide additional information that can be helpful in follow-up examinations at short intervals.

  18. Subjective video quality assessment methods for recognition tasks

    NASA Astrophysics Data System (ADS)

    Ford, Carolyn G.; McFarland, Mark A.; Stange, Irena W.

    2009-02-01

    To develop accurate objective measurements (models) for video quality assessment, subjective data is traditionally collected via human subject testing. The ITU has a series of Recommendations that address methodology for performing subjective tests in a rigorous manner. These methods are targeted at the entertainment application of video. However, video is often used for many applications outside of the entertainment sector, and generally this class of video is used to perform a specific task. Examples of these applications include security, public safety, remote command and control, and sign language. For these applications, video is used to recognize objects, people or events. The existing methods, developed to assess a person's perceptual opinion of quality, are not appropriate for task-based video. The Institute for Telecommunication Sciences, under a program from the Department of Homeland Security and the National Institute for Standards and Technology's Office of Law Enforcement, has developed a subjective test method to determine a person's ability to perform recognition tasks using video, thereby rating the quality according to the usefulness of the video quality within its application. This new method is presented, along with a discussion of two examples of subjective tests using this method.

  19. Assessment of nerve agent exposure: existing and emerging methods.

    PubMed

    Langenberg, Jan P; van der Schans, Marcel J; Noort, Daan

    2009-07-01

    The perceived threat of the use of nerve agents by terrorists against civilian targets implies the need for methods for point-of-care (POC) diagnosis. This review presents an overview of methods that are currently available for the assessment of exposure to nerve agents. Since these methods are mostly MS based, they require complex and expensive equipment and well-trained personnel and, consequently, they are not very suitable for rapid POC diagnosis. However, new technologies are emerging that allow, among others, immunochemical detection of acetylcholinesterase inhibited by nerve agents. Also, lab-on-a-chip methodologies are under development. It is anticipated that MS methods will be suitable for POC diagnosis within a few years, due to the miniaturization of equipment and the emergence of methodologies that enable mass spectrometric analysis with little sample pretreatment and that are potentially fieldable, such as direct analysis in real time and desorption electrospray ionization MS.

  20. Assessing the sensitivity of methods for estimating principal causal effects.

    PubMed

    Stuart, Elizabeth A; Jo, Booil

    2015-12-01

    The framework of principal stratification provides a way to think about treatment effects conditional on post-randomization variables, such as level of compliance. In particular, the complier average causal effect (CACE) - the effect of the treatment for those individuals who would comply with their treatment assignment under either treatment condition - is often of substantive interest. However, estimation of the CACE is not always straightforward, with a variety of estimation procedures and underlying assumptions, but little advice to help researchers select between methods. In this article, we discuss and examine two methods that rely on very different assumptions to estimate the CACE: a maximum likelihood ('joint') method that assumes the 'exclusion restriction' (ER), and a propensity score-based method that relies on 'principal ignorability.' We detail the assumptions underlying each approach, and assess each method's sensitivity to both its own assumptions and those of the other method using both simulated data and a motivating example. We find that the ER-based joint approach appears somewhat less sensitive to its assumptions, and that the performance of both methods is significantly improved when there are strong predictors of compliance. Interestingly, we also find that each method performs particularly well when the assumptions of the other approach are violated. These results highlight the importance of carefully selecting an estimation procedure whose assumptions are likely to be satisfied in practice and of having strong predictors of principal stratum membership.
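    For context, a minimal Python sketch of the simplest CACE estimator under the exclusion restriction, the instrumental-variables (Wald) ratio of the intention-to-treat effect to the compliance rate; this is neither the paper's maximum likelihood "joint" method nor its propensity score method, and the data are synthetic.

    ```python
    # Wald/IV estimate of the CACE: ITT effect divided by the compliance rate.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 5000
    z = rng.integers(0, 2, n)                 # randomized assignment
    complier = rng.uniform(size=n) < 0.6      # 60% would comply
    d = z * complier                          # treatment received (no always-takers)
    y = 1.0 * d + rng.normal(0, 1, n)         # true complier effect = 1.0

    itt_effect = y[z == 1].mean() - y[z == 0].mean()
    compliance = d[z == 1].mean() - d[z == 0].mean()
    cace_hat = itt_effect / compliance
    print(f"ITT = {itt_effect:.3f}, compliance = {compliance:.3f}, CACE ~ {cace_hat:.3f}")
    ```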

  1. [Procedures and methods of benefit assessments for medicines in Germany].

    PubMed

    Bekkering, G E; Kleijnen, J

    2008-12-01

    implement a scoping process to support the development of the research question. 2. To separate the work of the external experts performing the evidence assessment from that of the institute formulating recommendations. Therefore, the preliminary report as produced by external experts needs to be public, and published separately from any subsequent amendments or (draft-)reports made by the institute, which includes the institute's recommendations. 3. To implement open peer review by publishing both the comments of the reviewers and their names. Based on the legal framework, the institute must provide for adequate participation of relevant parties. These include organisations representing the interests of patients; experts of medical, pharmaceutical and health economic science and practice; the professional organisations of pharmacists and pharmaceutical companies; and experts on alternative therapies. Patients and health care professionals bring in new insights with respect to research priorities, treatment and outcomes. The relevant parties should be identified and contacted whenever the global scope of the assessment has been drafted. Subsequently, the relevant parties should be involved in defining the research question, developing the protocol and commenting on the preliminary report. To implement the involvement of relevant parties in defining the research question a scoping process is suggested. For the other phases, written comments followed by an oral discussion should be used. Finally, the relevant parties should have the right to appeal the final decision on judicial grounds. None of these steps mean that the institute would lose any part of its scientific independence. From the relevant sections of the legal framework with respect to the assessment methods, it can be concluded that: 1. The institute must ensure that the assessment is made in accordance with internationally recognised standards of evidence-based medicine (EBM). 2. The assessment is conducted in

  2. A proposed impact assessment method for genetically modified plants (AS-GMP Method)

    SciTech Connect

    Jesus-Hitzschky, Katia Regina Evaristo de; Silveira, Jose Maria F.J. da

    2009-11-15

    An essential step in the development of products based on biotechnology is an assessment of their potential economic impacts and safety, including an evaluation of the potential impact of transgenic crops, and of the practices related to their cultivation, on the environment and human or animal health. The purpose of this paper is to provide an assessment method for evaluating the impact of biotechnologies that uses quantifiable parameters and allows a comparative analysis between conventional technology and technologies using GMOs. The paper introduces a method to perform an impact analysis associated with the commercial release and use of genetically modified plants, the Assessment System GMP (AS-GMP) Method. The assessment is performed through indicators arranged according to their dimension criterion: environmental, economic, social, capability, and institutional. To perform an accurate evaluation of the GMP, specific indicators related to genetic modification are grouped into common fields: genetic insert features, GM plant features, gene flow, food/feed field, introduction of the GMP, unexpected occurrences, and specific indicators. The novelty is the possibility of including parameters specific to the biotechnology under assessment. In this case-by-case analysis, the moderation factors and indexes are parameterized to perform the assessment.

  3. Developing the RIAM method (rapid impact assessment matrix) in the context of impact significance assessment

    SciTech Connect

    Ijaes, Asko; Kuitunen, Markku T.; Jalava, Kimmo

    2010-02-15

    In this paper, the applicability of the RIAM method (rapid impact assessment matrix) is evaluated in the context of impact significance assessment. The methodological issues considered in the study are: 1) testing the possibilities of enlarging the scoring system used in the method, and 2) comparing the significance classifications of RIAM and unaided decision-making to estimate the consistency between these methods. The data consisted of projects for which funding had been applied for via the European Union's Regional Development Trust in the area of Central Finland. Cases were evaluated with respect to their environmental, social, and economic impacts using an assessment panel. The results showed that the scoring framework used in RIAM could be modified according to the problem situation at hand, which enhances its application potential. However, the changes made to criteria B did not significantly affect the final ratings, which indicates the high importance of criteria A1 (importance) and A2 (magnitude) to the overall results. The significance classes obtained by the two methods diverged notably. In general, the ratings given by RIAM tended to be smaller than those from intuitive judgement, implying that the RIAM method may be somewhat conservative in character.
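    For reference, a minimal Python sketch of the standard RIAM environmental score, ES = (A1 x A2) x (B1 + B2 + B3), as commonly described for the method (A1 importance, A2 magnitude, B1-B3 permanence, reversibility, cumulativeness); the example scores are illustrative and not taken from the study's data.

    ```python
    # Standard RIAM environmental score: product of group A criteria times the
    # sum of group B criteria. Example values are illustrative only.

    def riam_environmental_score(a1: int, a2: int, b1: int, b2: int, b3: int) -> int:
        """RIAM ES = (A1 * A2) * (B1 + B2 + B3)."""
        return (a1 * a2) * (b1 + b2 + b3)

    # Example: an important local impact (A1=2), moderately negative (A2=-2),
    # permanent (B1=3), irreversible (B2=3), cumulative (B3=3).
    es = riam_environmental_score(2, -2, 3, 3, 3)
    print(es)  # -36, which falls in a clearly negative band of RIAM's range classes
    ```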

  4. Rapid-estimation method for assessing scour at highway bridges

    USGS Publications Warehouse

    Holnbeck, Stephen R.

    1998-01-01

    A method was developed by the U.S. Geological Survey for rapid estimation of scour at highway bridges using limited site data and analytical procedures to estimate pier, abutment, and contraction scour depths. The basis for the method was a procedure recommended by the Federal Highway Administration for conducting detailed scour investigations, commonly referred to as the Level 2 method. Using pier, abutment, and contraction scour results obtained from Level 2 investigations at 122 sites in 10 States, envelope curves and graphical relations were developed that enable determination of scour-depth estimates at most bridge sites in a matter of a few hours. Rather than using complex hydraulic variables, surrogate variables more easily obtained in the field were related to calculated scour-depth data from Level 2 studies. The method was tested by having several experienced individuals apply the method in the field, and results were compared among the individuals and with previous detailed analyses performed for the sites. Results indicated that the variability in predicted scour depth among individuals applying the method generally was within an acceptable range, and that conservatively greater scour depths generally were obtained by the rapid-estimation method compared to the Level 2 method. The rapid-estimation method is considered most applicable for conducting limited-detail scour assessments and as a screening tool to determine those bridge sites that may require more detailed analysis. The method is designed to be applied only by a qualified professional possessing knowledge and experience in the fields of bridge scour, hydraulics, and flood hydrology, and having specific expertise with the Level 2 method.

  5. Edge method for on-orbit defocus assessment.

    PubMed

    Viallefont-Robinet, Françoise

    2010-09-27

    In the Earth observation domain, two classes of sensors may be distinguished: a class for which sensor performance is driven by the radiometric accuracy of the images, and a class for which sensor performance is driven by spatial resolution. In the latter case, as spatial resolution depends on the triplet constituted by the Ground Sampling Distance (GSD), Modulation Transfer Function (MTF), and Signal-to-Noise Ratio (SNR), refocusing, acting as an MTF improvement, is very important. Refocusing is not difficult in itself as long as the on-board mechanism is reliable; the difficulty is on the defocus assessment side. Some methods, such as those used for the SPOT family, rely on the ability of the satellite to image the same landscape at two focusing positions. This can be done with a bi-sensor configuration, with an adequate focal plane, or with satellite agility. A new generation of refocusing mechanism will be carried aboard Pleiades. As this mechanism will be much slower than the older generation, it will not be possible, despite the agility of the satellite, to image the same landscape at two focusing positions on the same orbit. That is why methods relying on MTF measurement with the edge method have been studied. This paper describes the methods and the work done to assess the defocus measurement accuracy in the Pleiades context.
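    As background, a minimal Python sketch of the edge-method chain alluded to above: an edge spread function is differentiated into a line spread function, whose normalized Fourier magnitude gives the MTF; the synthetic Gaussian-blurred edge and blur width are assumptions, and no Pleiades-specific processing is implied.

    ```python
    # Edge method: ESF -> LSF (derivative) -> MTF (normalized |FFT|),
    # demonstrated on a synthetic, oversampled Gaussian-blurred edge.
    import numpy as np
    from math import erf, sqrt

    x = np.linspace(-16, 16, 257)          # sample positions across the edge (pixels)
    sigma = 0.6                            # assumed Gaussian blur width (pixels)
    esf = np.array([0.5 * (1 + erf(v / (sigma * sqrt(2)))) for v in x])

    lsf = np.gradient(esf, x)              # differentiate ESF -> LSF
    lsf /= lsf.sum()                       # normalize
    mtf = np.abs(np.fft.rfft(lsf))         # |FFT| of LSF
    mtf /= mtf[0]                          # MTF(0) = 1
    freqs = np.fft.rfftfreq(lsf.size, d=x[1] - x[0])  # cycles per pixel

    # MTF at Nyquist (0.5 cycles/pixel) is the usual figure of merit for defocus.
    i_nyq = np.argmin(np.abs(freqs - 0.5))
    print(f"MTF at Nyquist ~ {mtf[i_nyq]:.3f}")
    ```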

  6. Geomorphometry-based method of landform assessment for geodiversity

    NASA Astrophysics Data System (ADS)

    Najwer, Alicja; Zwoliński, Zbigniew

    2015-04-01

    Climate variability primarily induces variations in the intensity and frequency of surface processes and, consequently, principal changes in the landscape. As a result, abiotic heterogeneity may be threatened and key elements of natural diversity may even decay. The concept of geodiversity was created recently and has rapidly gained the approval of scientists around the world; however, recognition of the problem is still at an early stage, and little progress has been made concerning its assessment and geovisualisation. Geographical Information System (GIS) tools currently provide wide possibilities for studies of the Earth's surface; very often, the main limitation of such analysis is the acquisition of geodata at an appropriate resolution. The main objective of this study was to develop a procedure for landform geodiversity assessment using geomorphometric parameters, and to compare the final maps with those resulting from the thematic-layers method. The study area consists of two distinctive valleys, characterized by diverse landscape units and a complex geological setting: Sucha Woda in the Polish part of the Tatra Mts. and Wrzosowka in the Sudetes Mts. Both valleys are located in national park areas. The basis for the assessment is a proper selection of geomorphometric parameters with reference to the definition of geodiversity. Seven factor maps were prepared for each valley: General Curvature, Topographic Openness, Potential Incoming Solar Radiation, Topographic Position Index, Topographic Wetness Index, Convergence Index, and Relative Heights. After data integration and the necessary geoinformation analysis, the next step, which involves a certain degree of subjectivity, is score classification of the input maps using an expert system and geostatistical analysis. The crucial point in generating the final maps of geodiversity by multi-criteria evaluation (MCE) with the GIS-based Weighted Sum technique is to assign appropriate weights for each factor map by
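    For illustration, a minimal Python sketch of a GIS-style weighted-sum multi-criteria evaluation in which classified factor rasters are combined with expert weights into a geodiversity index map; the factor names, weights, and tiny grids are invented, not those derived for the Sucha Woda or Wrzosowka valleys.

    ```python
    # Weighted-sum MCE: combine classified factor rasters (scores 1-5) with weights.
    import numpy as np

    rng = np.random.default_rng(5)
    shape = (4, 4)  # stand-in for a DEM-derived raster grid

    factor_maps = {
        "relative_heights":     rng.integers(1, 6, shape),
        "topographic_openness": rng.integers(1, 6, shape),
        "curvature":            rng.integers(1, 6, shape),
        "wetness_index":        rng.integers(1, 6, shape),
    }
    weights = {
        "relative_heights": 0.35,
        "topographic_openness": 0.25,
        "curvature": 0.25,
        "wetness_index": 0.15,
    }

    geodiversity = sum(weights[k] * factor_maps[k].astype(float) for k in factor_maps)
    print(np.round(geodiversity, 2))
    ```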

  7. Methods to Assess Measurement Error in Questionnaires of Sedentary Behavior

    PubMed Central

    Sampson, Joshua N; Matthews, Charles E; Freedman, Laurence; Carroll, Raymond J.; Kipnis, Victor

    2015-01-01

    Sedentary behavior has already been associated with mortality, cardiovascular disease, and cancer. Questionnaires are an affordable tool for measuring sedentary behavior in large epidemiological studies. Here, we introduce and evaluate two statistical methods for quantifying measurement error in questionnaires. Accurate estimates are needed for assessing questionnaire quality. The two methods would be applied to validation studies that measure a sedentary behavior by both questionnaire and accelerometer on multiple days. The first method fits a reduced model by assuming the accelerometer is without error, while the second method fits a more complete model that allows both measures to have error. Because accelerometers tend to be highly accurate, we show that ignoring the accelerometer's measurement error can result in more accurate estimates of measurement error in some scenarios. In this manuscript, we derive asymptotic approximations for the Mean-Squared Error of the estimated parameters from both methods, evaluate their dependence on study design and behavior characteristics, and offer an R package so investigators can make an informed choice between the two methods. We demonstrate the difference between the two methods in a recent validation study comparing Previous Day Recalls (PDR) to an accelerometer-based ActivPal. PMID:27340315
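
    The paper's estimators are not spelled out in the abstract; the toy simulation below only illustrates the idea behind the "reduced model": treat the multi-day accelerometer average as error-free truth and estimate the questionnaire's intercept (bias) and slope (attenuation) by ordinary least squares. All data-generating parameters are invented for the example.

```python
import numpy as np

# Simulated validation-study data (hypothetical): true sedentary hours T,
# questionnaire Q = 0.5 + 0.8*T + error, accelerometer A_d = T + day-level error.
rng = np.random.default_rng(1)
n, days = 200, 4
T = rng.normal(8.0, 1.5, n)
Q = 0.5 + 0.8 * T + rng.normal(0.0, 1.2, n)
A = T[:, None] + rng.normal(0.0, 0.6, (n, days))

# "Reduced model" idea: treat the multi-day accelerometer mean as error-free
# and estimate the questionnaire's bias and attenuation by least squares.
A_mean = A.mean(axis=1)
X = np.column_stack([np.ones(n), A_mean])
beta0_hat, beta1_hat = np.linalg.lstsq(X, Q, rcond=None)[0]
print(f"intercept ~ {beta0_hat:.2f}, slope ~ {beta1_hat:.2f} (true values 0.5, 0.8)")
```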

  8. Assessment methods for eating disorders and body image disorders.

    PubMed

    Túry, Ferenc; Güleç, Hayriye; Kohls, Elisabeth

    2010-12-01

    The growing interest in the treatment and research of eating disorders has stimulated the development of assessment methods, and there are now many questionnaires for evaluating behavioral and attitudinal characteristics of eating pathology. The present article sets out to review the assessment tools that are widely used in clinical practice and research. In particular, it covers self-report measures with summaries of their psychometric properties. It also presents diagnostic questionnaires based on the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, diagnostic criteria. The instruments described include screening questionnaires, measurement tools for specific eating disorder symptoms, measurement of quality of life in eating disorders, and some tools for the measurement of body image disorder, a common feature of eating disorders. There is also a discussion of distorting factors that decrease the authenticity of assessment tools. These problems arise from the definition of some constructs and from the phenomena of denial and concealment, which are frequent among eating-disordered individuals. The frequent co-occurrence of other psychopathological features (e.g., multiimpulsive symptoms) shows that other psychological phenomena should also be evaluated in line with the assessment of eating disorders.

  9. A GIS-based method for flood risk assessment

    NASA Astrophysics Data System (ADS)

    Kalogeropoulos, Kleomenis; Stathopoulos, Nikos; Psarogiannis, Athanasios; Penteris, Dimitris; Tsiakos, Chrisovalantis; Karagiannopoulou, Aikaterini; Krikigianni, Eleni; Karymbalis, Efthimios; Chalkias, Christos

    2016-04-01

    Floods are physical global hazards with negative environmental and socio-economic impacts on local and regional scales. The technological evolution of the last decades, especially in the field of geoinformatics, has offered new advantages in hydrological modelling. This study seeks to use this technology in order to quantify flood risk. The study area is an ungauged catchment, and by using mostly GIS-based hydrological and geomorphological analysis together with a GIS-based distributed Unit Hydrograph model, a series of outcomes has been produced. More specifically, this paper examined the behaviour of the Kladeos basin (Peloponnese, Greece) using real rainfall data as well as hypothetical storms. The hydrological analysis was carried out using a Digital Elevation Model of 5x5 m pixel size, while the quantitative drainage basin characteristics were calculated and studied in terms of stream order and its contribution to the flood. Unit Hydrographs are, as is known, useful when there is a lack of data, and in this work, based on the time-area method, a sequence of flood risk assessments has been made using GIS technology. Essentially, the proposed methodology estimates parameters such as discharge, flow velocity equations etc. in order to quantify flood risk. Keywords: Flood Risk Assessment Quantification; GIS; hydrological analysis; geomorphological analysis.
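
    The abstract invokes the time-area method without giving its formulation, so the sketch below is only a generic illustration of the idea: rainfall excess is convolved with a time-area histogram (the catchment area contributing to the outlet in each time step) to produce an outlet hydrograph. The storm depths and the histogram values are hypothetical, not the Kladeos data.

```python
import numpy as np

def time_area_hydrograph(excess_rain_mm, area_histogram_km2, dt_hours=1.0):
    """Route rainfall excess through a time-area histogram (a simple GIS-derived
    unit-hydrograph surrogate) and return outlet discharge in m^3/s per step."""
    excess_m = np.asarray(excess_rain_mm, dtype=float) / 1000.0       # mm -> m
    areas_m2 = np.asarray(area_histogram_km2, dtype=float) * 1e6      # km^2 -> m^2
    volume_per_step = np.convolve(excess_m, areas_m2)                 # m^3 per time step
    return volume_per_step / (dt_hours * 3600.0)                      # -> m^3/s

# Hypothetical storm (mm of rainfall excess per hour) and time-area histogram.
storm = [2.0, 6.0, 4.0, 1.0]
time_area = [5.0, 12.0, 20.0, 15.0, 8.0]          # km^2 reaching the outlet per hour
print(np.round(time_area_hydrograph(storm, time_area), 1))
```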

  10. Making "Professionalism" Meaningful to Students in Higher Education

    ERIC Educational Resources Information Center

    Wilson, Anna; Åkerlind, Gerlese; Walsh, Barbara; Stevens, Bruce; Turner, Bethany; Shield, Alison

    2013-01-01

    With rising vocational expectations of higher education, universities are increasingly promoting themselves as preparing students for future professional lives. This makes it timely to ask what makes professionalism meaningful to students. In addressing this question, we first identify aspects of professionalism that might represent appropriate…

  11. Water Habitat Study: Prediction Makes It More Meaningful.

    ERIC Educational Resources Information Center

    Glasgow, Dennis R.

    1982-01-01

    Suggests a teaching strategy for water habitat studies to help students make a meaningful connection between physiochemical data (dissolved oxygen content, pH, and water temperature) and biological specimens they collect. Involves constructing a poster and using it to make predictions. Provides sample poster. (DC)

  12. Facilitating Meaningful Discussion Groups in the Primary Grades

    ERIC Educational Resources Information Center

    Moses, Lindsey; Ogden, Meridith; Kelly, Laura Beth

    2015-01-01

    This Teaching Tips describes a yearlong process of facilitating meaningful discussion groups about literature with first-grade students in an urban Title I school. At the beginning of the year, the teacher provided explicit instruction in speaking and listening skills to support students with the social skills needed for thoughtful discussion. She…

  13. Cache-Cache Comparison for Supporting Meaningful Learning

    ERIC Educational Resources Information Center

    Wang, Jingyun; Fujino, Seiji

    2015-01-01

    The paper presents a meaningful discovery learning environment called "cache-cache comparison" for a personalized learning support system. The process of seeking hidden relations or concepts in "cache-cache comparison" is intended to encourage learners to actively locate new knowledge in their knowledge framework and check…

  14. "Meaningful Interactions" with Coleen Koester, a Classroom Teacher.

    ERIC Educational Resources Information Center

    Garcia, Lorenzo

    1995-01-01

    Describes several exercises in an elementary classroom led by Koester--a game in which differences are special, and one that simulates community living. Notes that Koester's class is diversified racially, and the exercises in her class are seen by each student as a shared lesson. Concludes with a call for meaningful interactions with and among…

  15. Using Meaningful Contexts to Promote Understanding of Pronumerals

    ERIC Educational Resources Information Center

    Linsell, Chris; Cavanagh, Michael; Tahir, Salma

    2013-01-01

    Developing a conceptual understanding of elementary algebra has been the focus of a number of recent articles in this journal. Baroudi (2006) advocated problem solving to assist students' transition from arithmetic to algebra, and Shield (2008) described the use of meaningful contexts for developing the concept of function. Samson (2011, 2012)…

  16. 42 CFR 495.210 - Meaningful EHR user attestation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Meaningful EHR user attestation. 495.210 Section 495.210 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) STANDARDS AND CERTIFICATION STANDARDS FOR THE ELECTRONIC HEALTH RECORD...

  17. 42 CFR 495.8 - Demonstration of meaningful use criteria.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Demonstration of meaningful use criteria. 495.8 Section 495.8 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) STANDARDS AND CERTIFICATION STANDARDS FOR THE ELECTRONIC HEALTH RECORD...

  18. Concept Maps: An Instructional Tool to Facilitate Meaningful Learning

    ERIC Educational Resources Information Center

    Safdar, Muhammad; Hussain, Azhar; Shah, Iqbal; Rifat, Qudsia

    2012-01-01

    This paper describes the procedure of developing an instructional tool, "concept mapping" and its effectiveness in making the material meaningful to the students. In Pakistan, the traditional way of teaching science subjects at all levels at school relies heavily on memorization. The up-to-date data obtained from qualitative and…

  19. Kilimanjaro: A Case of Meaningful Adventure and Service Learning Abroad

    ERIC Educational Resources Information Center

    Cavanaugh, Cathy; Gajer, Ewa; Mayberry, John; O'Connor, Brendan; Hargis, Jace

    2015-01-01

    This qualitative evaluation explored how female undergraduate students developed an understanding of themselves and the broader world as a result of an adventure and service learning experience in Tanzania, Africa. The project built upon theoretical frameworks regarding meaningful learning--active, constructive, intentional, and authentic--and…

  20. The Role of Meaningful Dialogue in Early Childhood Education Leadership

    ERIC Educational Resources Information Center

    Deakins, Eric

    2007-01-01

    Action research was used to study the effectiveness of Learning Organisation and Adaptive Enterprise theories for promoting organisation-wide learning and creating a more effective early childhood education organisation. This article describes the leadership steps taken to achieve shared vision via meaningful dialogue between board, management and…

  1. Comprehension for What? Preparing Students for Their Meaningful Future

    ERIC Educational Resources Information Center

    Conley, Mark W.; Wise, Antoinette

    2011-01-01

    Researchers, policymakers, and educators face a daunting task these days concerning literacy education for the here and now and literacy for the future. Even though one clings to the romantic notion that education provides the building blocks in a straight line to a meaningful future, the reality is that mixed goals and instructional messages…

  2. Creating Meaningful Inquiry in Inclusive Classrooms: Practitioners' Stories of Research

    ERIC Educational Resources Information Center

    Jones, Phyllis, Ed.; Whitehurst, Teresa, Ed.; Egerton, Jo, Ed.

    2012-01-01

    In recent years, the concept of teachers as researchers in both special and mainstream school settings has become part of our everyday language. Whilst many educational practitioners will see the need for research within their setting, many may not be familiar with the technical elements they believe are required. "Creating Meaningful Inquiry in…

  3. Types of Meaningfulness of Life and Values of Future Teachers

    ERIC Educational Resources Information Center

    Salikhova, Nailia R.

    2016-01-01

    The leading role of meaning of life in the regulation of all types of human activity underlies the relevance of this research. The goal of the paper is to identify and describe types of meaningfulness of life in future teachers, and to reveal the specificity of the values hierarchy indicative of each type. The leading approach applied in the research was…

  4. Attributes of Meaningful Learning Experiences in an Outdoor Education Program

    ERIC Educational Resources Information Center

    Taniguchi, Stacy T.; Freeman, Patti A.; Richards, A. LeGrand

    2005-01-01

    This phenomenological study sought to identify the attributes of meaningful learning experiences as found in an outdoor education program. Thirteen students in the Wilderness Writing Program at Brigham Young University were the sample of this study. Their participation in outdoor recreational activities and their reflections about their…

  5. Increasing Meaningful Assistive Technology Use in the Classrooms

    ERIC Educational Resources Information Center

    Connor, Cynthia; Beard, Lawrence A.

    2015-01-01

    Although personal technology is consistently used by students and teachers, meaningful use of technology for instruction may not be feasible without providing teachers specific training and support. One university is providing workshops, feedback through coursework, and hands-on training to teacher candidates and local area teachers. In addition,…

  6. Methods for assessing risks of dermal exposures in the workplace.

    PubMed

    McDougal, James N; Boeniger, Mark F

    2002-07-01

    The skin as a route of entry for toxic chemicals has caused increasing concern over the last decade. The assessment of systemic hazards from dermal exposures has evolved over time, often limited by the amount of experimental data available. The result is that there are many methods being used to assess safety of chemicals in the workplace. The process of assessing hazards of skin contact includes estimating the amount of substance that may end up on the skin and estimating the amount that might reach internal organs. Most times, toxicology studies by the dermal route are not available and extrapolations from other exposure routes are necessary. The hazards of particular chemicals can be expressed as "skin notations", actual exposure levels, or safe exposure times. Characterizing the risk of a specific procedure in the workplace involves determining the ratio of exposure standards to an expected exposure. The purpose of this review is to address each of the steps in the process and describe the assumptions that are part of the process. Methods are compared by describing their strengths and weaknesses. Recommendations for research in this area are also included.
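
    The review characterizes risk as a comparison between an exposure standard and an expected exposure; purely as a generic illustration (not the authors' specific procedure), the sketch below computes a screening-level dermal margin of safety from an assumed skin loading, exposed area, absorption fraction, body weight, and reference dose. All parameter values are hypothetical.

```python
def dermal_margin_of_safety(surface_loading_mg_cm2, exposed_area_cm2,
                            absorption_fraction, body_weight_kg,
                            reference_dose_mg_kg_day):
    """Generic screening-level calculation: an exposure standard (here a
    reference dose) divided by the estimated absorbed daily dose.
    Values above 1 suggest the estimated exposure is below the standard."""
    absorbed_dose = (surface_loading_mg_cm2 * exposed_area_cm2 *
                     absorption_fraction) / body_weight_kg      # mg/kg/day
    return reference_dose_mg_kg_day / absorbed_dose

# Hypothetical scenario: a solvent film on both hands over one work day.
mos = dermal_margin_of_safety(surface_loading_mg_cm2=0.05, exposed_area_cm2=840,
                              absorption_fraction=0.1, body_weight_kg=70,
                              reference_dose_mg_kg_day=0.1)
print(f"Margin of safety: {mos:.1f}")
```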

  7. Assessments of lung digestion methods for recovery of fibers.

    PubMed

    Warheit, D B; Hwang, H C; Achinko, L

    1991-04-01

    Evaluation of the pulmonary hazards associated with exposure to fibrous materials tends to be more complicated than the assessments required for particulate materials. Fibers are defined by aspect ratios, and it is generally considered that physical dimensions play an important role in the pathogenesis of fiber-related lung diseases. Several digestion techniques have been used to recover fibers from exposed lung tissue for clearance studies. Because many of the digestion fluids are corrosive (e.g., bleach, KOH), it is conceivable that the dimensions of recovered fibers are modified during tissue digestion; the digestion methods were therefore assessed to determine whether the physical dimensions of bulk samples of fibers were altered following simulated digestion processing. Aliquots of crocidolite and chrysotile asbestos, Kevlar aramid, wollastonite, polyacrylonitrile (pan)-based carbon, and glass fibers were incubated with either saline, bleach, or KOH and then filtered. Scanning electron microscopy techniques were utilized to measure the physical dimensions (i.e., lengths and diameters) of at least 160 fibers per treatment group of each fiber type. Our results showed that the lengths and diameters of glass fibers and wollastonite were altered after treatment with KOH. In addition, treatment with bleach produced a small reduction in both asbestos fiber-type diameters, and greater changes in Kevlar and wollastonite diameters and carbon fiber lengths (P less than 0.05). These results indicate that lung digestion methods should be carefully assessed for each fiber type before initiating fiber clearance studies.

  8. Assessment of a novel method for teaching veterinary parasitology.

    PubMed

    Pereira, Mary Mauldin; Yvorchuk-St Jean, Kathleen E; Wallace, Charles E; Krecek, Rosina C

    2014-01-01

    A student-centered innovative method of teaching veterinary parasitology was launched and evaluated at the Ross University School of Veterinary Medicine (RUSVM) in St. Kitts, where Parasitology is a required course for second-semester veterinary students. A novel method, named Iron Parasitology, compared lecturer-centered teaching with student-centered teaching and assessed the retention of parasitology knowledge of students in their second semester and again when they reached their seventh semester. Members of five consecutive classes chose to participate in Iron Parasitology with the opportunity to earn an additional 10 points toward their final grade by demonstrating their knowledge, communication skills, clarity of message, and creativity in the Iron Parasitology exercise. The participants and nonparticipants were assessed using seven parameters. The initial short-term study parameters used to evaluate lecturer- versus student-centered teaching were age, gender, final Parasitology course grade without Iron Parasitology, RUSVM overall grade point average (GPA), RUSVM second-semester GPA, overall GPA before RUSVM, and prerequisite GPA before RUSVM. The long-term reassessment study assessed retention of parasitology knowledge in members of the seventh-semester class who had Iron Parasitology as a tool in their second semester. These students were invited to complete a parasitology final examination during their seventh semester. There were no statistically significant differences for the parameters measured in the initial study. In addition, Iron Parasitology did not have an effect on the retention scores in the reassessment study.

  9. A qualitative method proposal to improve environmental impact assessment

    SciTech Connect

    Toro, Javier; Requena, Ignacio; Duarte, Oscar; Zamorano, Montserrat

    2013-11-15

    In environmental impact assessment, qualitative methods are used because they are versatile and easy to apply. This methodology is based on the evaluation of the strength of the impact by grading a series of qualitative attributes that can be manipulated by the evaluator. The results thus obtained are not objective, and all too often impacts are eliminated that should be mitigated with corrective measures. However, qualitative methodology can be improved if the calculation of Impact Importance is based on the characteristics of environmental factors and project activities instead of on indicators assessed by evaluators. In this sense, this paper proposes the inclusion of the vulnerability of environmental factors and the potential environmental impact of project activities. For this purpose, the study described in this paper defined Total Impact Importance and specified a quantification procedure. The results obtained in the case study of oil drilling in Colombia reflect greater objectivity in the evaluation of impacts as well as a positive correlation between impact values, the environmental characteristics at and near the project location, and the technical characteristics of project activities. -- Highlights: • The concept of vulnerability has been used to calculate impact importance in the assessment. • This paper defined Total Impact Importance and specified a quantification procedure. • The method includes the characteristics of environmental factors and project activities. • The application has shown greater objectivity in the evaluation of impacts. • A better correlation between impact values, the environment and the project has been shown.

  10. Survey methods for assessing land cover map accuracy

    USGS Publications Warehouse

    Nusser, S.M.; Klaas, E.E.

    2003-01-01

    The increasing availability of digital photographic materials has fueled efforts by agencies and organizations to generate land cover maps for states, regions, and the United States as a whole. Regardless of the information sources and classification methods used, land cover maps are subject to numerous sources of error. In order to understand the quality of the information contained in these maps, it is desirable to generate statistically valid estimates of accuracy rates describing misclassification errors. We explored a full sample survey framework for creating accuracy assessment study designs that balance statistical and operational considerations in relation to study objectives for a regional assessment of GAP land cover maps. We focused not only on appropriate sample designs and estimation approaches, but on aspects of the data collection process, such as gaining cooperation of land owners and using pixel clusters as an observation unit. The approach was tested in a pilot study to assess the accuracy of Iowa GAP land cover maps. A stratified two-stage cluster sampling design addressed sample size requirements for land covers and the need for geographic spread while minimizing operational effort. Recruitment methods used for private land owners yielded high response rates, minimizing a source of nonresponse error. Collecting data for a 9-pixel cluster centered on the sampled pixel was simple to implement, and provided better information on rarer vegetation classes as well as substantial gains in precision relative to observing data at a single pixel.
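
    The survey-weighted estimators themselves are not given in the abstract; the sketch below shows only the basic accuracy bookkeeping such an assessment starts from: overall, user's, and producer's accuracy computed from a confusion matrix of map versus reference labels (an equal-weight version that ignores the stratified two-stage design weights). The matrix counts are made up.

```python
import numpy as np

def accuracy_from_confusion(confusion):
    """Overall, user's (rows = map class), and producer's (columns = reference
    class) accuracy from a square confusion matrix of pixel counts."""
    confusion = np.asarray(confusion, dtype=float)
    overall = np.trace(confusion) / confusion.sum()
    users = np.diag(confusion) / confusion.sum(axis=1)      # commission view
    producers = np.diag(confusion) / confusion.sum(axis=0)  # omission view
    return overall, users, producers

# Hypothetical 3-class confusion matrix (e.g., forest, grassland, wetland).
cm = [[50,  4,  1],
      [ 6, 38,  3],
      [ 2,  5, 20]]
overall, users, producers = accuracy_from_confusion(cm)
print(f"overall={overall:.2f}, user's={np.round(users, 2)}, producer's={np.round(producers, 2)}")
```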

  11. A new assessment method of outdoor tobacco smoke (OTS) exposure

    NASA Astrophysics Data System (ADS)

    Cho, Hyeri; Lee, Kiyoung

    2014-04-01

    Outdoor tobacco smoke (OTS) is a concern due to its potential health effects. An assessment method for OTS exposure is needed to determine the effects of OTS and to validate outdoor smoking policies. The objective of this study was to develop a new method to assess OTS exposure. This study was conducted at 100 bus stops, including 50 centerline bus stops and 50 roadside bus stops, in Seoul, Korea. Using a real-time aerosol monitor, PM2.5 was measured for 30 min at each bus stop in two seasons. A ‘peak analysis' method was developed to assess short-term PM2.5 exposure from OTS. The 30-min average PM2.5 exposure at each bus stop was associated with season and bus stop location but not smoking activity. The PM2.5 peak occurrence rate obtained by the peak analysis method was significantly associated with season, bus stop location, observed smoking occurrence, and the number of buses servicing a route. The PM2.5 peak concentration was significantly associated with season, smoking occurrence, and the number of buses servicing a route. When a smoker was standing still at the bus stop, the magnitude of peak concentrations was significantly higher than when the smoker was walking through the bus stop. People were exposed to high short-term PM2.5 peak levels at bus stops, and the magnitude of peak concentrations was highest when a smoker was located close to the monitor. The magnitude of peak concentration was a good indicator that helped distinguish nearby OTS exposure. Further research using ‘peak analysis' is needed to measure smoking-related exposure to PM2.5 in other outdoor locations.
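
    The paper's exact peak criteria are not stated in the abstract, so the following is only a plausible sketch of a 'peak analysis': flag samples that exceed a moving-median baseline by a fixed margin and count distinct excursions. The window length, threshold, and synthetic PM2.5 record are assumptions.

```python
import numpy as np

def count_peaks(pm25, baseline_window=300, threshold_ug_m3=10.0):
    """Count distinct excursions where PM2.5 exceeds a moving-median baseline
    by more than a fixed margin; the window should exceed typical peak length."""
    pm25 = np.asarray(pm25, dtype=float)
    half = baseline_window // 2
    baseline = np.array([np.median(pm25[max(0, i - half): i + half + 1])
                         for i in range(pm25.size)])
    above = pm25 > baseline + threshold_ug_m3
    # Count rising edges: transitions from "not above" to "above".
    return int(np.sum(above[1:] & ~above[:-1]) + above[0])

# Synthetic 30-min record at 1-s resolution with two smoking-like spikes.
rng = np.random.default_rng(2)
series = rng.normal(25, 2, 1800)
series[600:660] += 40
series[1200:1230] += 60
print("Detected peaks:", count_peaks(series))
```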

  12. A new method for assessing surface solar irradiance: Heliosat-4

    NASA Astrophysics Data System (ADS)

    Qu, Z.; Oumbe, A.; Blanc, P.; Lefèvre, M.; Wald, L.; Schroedter-Homscheidt, M.; Gesell, G.

    2012-04-01

    Downwelling shortwave irradiance at the surface (SSI) is more and more often assessed by means of satellite-derived estimates of the optical properties of the atmosphere. Performances are judged satisfactory for the time being, but there is an increasing need for the assessment of the direct and diffuse components of the SSI. MINES ParisTech and the German Aerospace Center (DLR) are currently developing the Heliosat-4 method to assess the SSI and its components in a more accurate way than current practices. This method is composed of two parts: a clear-sky module based on the radiative transfer model libRadtran, and a cloud-ground module using two-stream and delta-Eddington approximations for clouds and a database of ground albedo. Advanced products derived from geostationary satellites and recent Earth Observation missions are the inputs of the Heliosat-4 method. Such products are: cloud optical depth, cloud phase, cloud type and cloud coverage from APOLLO of DLR; aerosol optical depth, aerosol type, water vapor in clear sky, and ozone from MACC products (FP7); and ground albedo from MODIS of NASA. In this communication, we briefly present Heliosat-4 and focus on its performances. The results of Heliosat-4 for the period 2004-2010 will be compared to the measurements made at five stations within the Baseline Surface Radiation Network. Extensive statistical analysis as well as case studies are performed in order to better understand Heliosat-4, to gain an in-depth view of its performance, to understand its advantages compared to existing methods, and to identify its shortcomings for future improvements. The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement no. 218793 (MACC project) and no. 283576 (MACC-II project).

  13. Diffuse optical methods for assessing breast cancer chemotherapy

    NASA Astrophysics Data System (ADS)

    Tromberg, Bruce J.

    2014-03-01

    In his talk, "Diffuse Optical Methods for Assessing Breast Cancer Chemotherapy," SPIE Fellow Bruce Tromberg (Beckman Laser Institute and Medical Clinic) describes a method combining frequency domain photon migration, essentially a method of tracking photon motion in tissue, with a NIR spectroscopy technique using 850nm LEDs. The result is a scatter corrected absorption spectra. The technique takes advantage of elevated blood and water levels and decreased lipid levels in the presence of tumors to provide a more accurate mapping of the breast, allowing more effective treatment. Tromberg's team recently completed their first full mapping of the breast and have taken the instrument from a standalone unit to a portable one suitable for travel. In addition to providing feedback to enhance breast cancer treatment, Tromberg expects that this technique will be applicable in treating other forms of cancer as well.

  14. The OnyCOE-t™ questionnaire: responsiveness and clinical meaningfulness of a patient-reported outcomes questionnaire for toenail onychomycosis

    PubMed Central

    Potter, Lori P; Mathias, Susan D; Raut, Monika; Kianifard, Farid; Tavakkol, Amir

    2006-01-01

    Background This research was conducted to confirm the validity and reliability and to assess the responsiveness and clinical meaningfulness of the OnyCOE-t™, a questionnaire specifically designed to measure patient-reported outcomes (PRO) associated with toenail onychomycosis. Methods 504 patients with toenail onychomycosis randomized to receive 12 weeks of terbinafine 250 mg/day with or without target toenail debridement in the IRON-CLAD® trial completed the OnyCOE-t™ at baseline, weeks 6, 12, 24, and 48. The OnyCOE-t™ is composed of 6 multi-item scales and 1 single-item scale. These include a 7-item Toenail Symptom assessment, which comprises both Symptom Frequency and Symptom Bothersomeness scales; an 8-item Appearance Problems scale; a 7-item Physical Activities Problems scale; a 1-item Overall Problem scale; a 7-item Stigma scale; and a 3-item Treatment Satisfaction scale. In total, 33 toenail onychomycosis-specific items are included in the OnyCOE-t™. Clinical data, in particular the percent clearing of mycotic involvement in the target toenail, and OnyCOE-t™ responses were used to evaluate the questionnaire's reliability, validity, responsiveness, and the minimally clinical important difference (MCID). Results The OnyCOE-t™ was shown to be reliable and valid. Construct validity and known groups validity were acceptable. Internal consistency reliability of multi-item scales was demonstrated by Cronbach's alpha > .84. Responsiveness was good, with the Treatment Satisfaction, Symptom Frequency, Overall Problem, and Appearance Problem scales demonstrating the most responsiveness (Guyatt's statistic of 1.72, 1.31, 1.13, and 1.11, respectively). MCID was evaluated for three different clinical measures, and indicated that approximately an 8.5-point change (on a 0 to 100 scale) was clinically meaningful based on a 25% improvement in target nail clearing. Conclusion The OnyCOE-t™ questionnaire is a unique, toenail-specific PRO questionnaire that can be
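
    As a small illustration of the internal-consistency statistic the abstract reports (Cronbach's alpha > .84), the sketch below computes alpha for a simulated 7-item scale. The simulated Likert responses are purely hypothetical and are not the OnyCOE-t data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_variances.sum() / total_variance)

# Hypothetical responses to a 7-item symptom-frequency scale (0-4 Likert scores):
# a shared latent severity plus item-level noise, clipped to the response range.
rng = np.random.default_rng(3)
trait = rng.normal(2.0, 0.8, size=(100, 1))
scores = np.clip(np.round(trait + rng.normal(0, 0.5, size=(100, 7))), 0, 4)
print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
```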

  15. Cognitive assessment in mathematics with the least squares distance method.

    PubMed

    Ma, Lin; Çetin, Emre; Green, Kathy E

    2012-01-01

    This study investigated the validation of comprehensive cognitive attributes of an eighth-grade mathematics test using the least squares distance method and compared performance on the attributes by gender and region. A sample of 5,000 students was randomly selected from the data of the 2005 Turkish national mathematics assessment of eighth-grade students. Twenty-five math items were assessed for the presence or absence of 20 cognitive attributes (content, cognitive processes, and skill). Four attributes were found to be misspecified or nonpredictive. However, the results demonstrated the validity of the cognitive attributes in terms of the revised set of 17 attributes. The girls performed similarly to the boys on the attributes. The students from the two eastern regions significantly underperformed on most attributes.

  16. Methods for assessing autophagy and autophagic cell death.

    PubMed

    Tasdemir, Ezgi; Galluzzi, Lorenzo; Maiuri, M Chiara; Criollo, Alfredo; Vitale, Ilio; Hangen, Emilie; Modjtahedi, Nazanine; Kroemer, Guido

    2008-01-01

    Autophagic (or type 2) cell death is characterized by the massive accumulation of autophagic vacuoles (autophagosomes) in the cytoplasm of cells that lack signs of apoptosis (type 1 cell death). Here we detail and critically assess a series of methods to promote and inhibit autophagy via pharmacological and genetic manipulations. We also review the techniques currently available to detect autophagy, including transmission electron microscopy, half-life assessments of long-lived proteins, detection of LC3 maturation/aggregation, fluorescence microscopy, and colocalization of mitochondrion- or endoplasmic reticulum-specific markers with lysosomal proteins. Massive autophagic vacuolization may cause cellular stress and represent a frustrated attempt of adaptation. In this case, cell death occurs with (or in spite of) autophagy. When cell death occurs through autophagy, on the contrary, the inhibition of the autophagic process should prevent cellular demise. Accordingly, we describe a strategy for discriminating cell death with autophagy from cell death through autophagy.

  17. Screening method for assessing verbal learning efficiency using the Cognistat.

    PubMed

    Fouty, H Edward; Smith, Cassandra R; Briceno, Karen Y; Brown, Katelyn D; Guzman, Daniel; Ailes, Erica L; DeVries, Christopher T; Diluccia, Christina M; McLarnan, Kristy M; Betancourt, Stephanie C; Catoe, Whitney L

    2017-04-07

    The Cognistat is a widely used neurobehavioral screening instrument that addresses functioning across multiple domains. Unlike many popular neuropsychological tests, the Cognistat does not currently assess learning efficiency for verbal material. The purpose of this study was to develop a screening method for assessing verbal learning efficiency with the Cognistat, investigate the effects of two demographic variables (age and gender) on performance, and to establish cutoff scores for impairment. Participants were 253 volunteers between the ages of 18 and 96 years. Participants were classified into two age groups: 18-64 years and 65 + years. The data revealed a significant age and gender performance difference. Implications for the present findings and for future research are presented.

  18. Falcon: automated optimization method for arbitrary assessment criteria

    DOEpatents

    Yang, Tser-Yuan; Moses, Edward I.; Hartmann-Siantar, Christine

    2001-01-01

    FALCON is a method for automatic multivariable optimization for arbitrary assessment criteria that can be applied to numerous fields where outcome simulation is combined with optimization and assessment criteria. A specific implementation of FALCON is for automatic radiation therapy treatment planning. In this application, FALCON implements dose calculations into the planning process and optimizes available beam delivery modifier parameters to determine the treatment plan that best meets clinical decision-making criteria. FALCON is described in the context of the optimization of external-beam radiation therapy and intensity modulated radiation therapy (IMRT), but the concepts could also be applied to internal (brachytherapy) radiotherapy. The radiation beams could consist of photons or any charged or uncharged particles. The concept of optimizing source distributions can be applied to complex radiography (e.g. flash x-ray or proton) to improve the imaging capabilities of facilities proposed for science-based stockpile stewardship.

  19. Comparison of Body Composition Assessment Methods in Pediatric Intestinal Failure

    PubMed Central

    Mehta, Nilesh M.; Raphael, Bram; Guteirrez, Ivan; Quinn, Nicolle; Mitchell, Paul D.; Litman, Heather J.; Jaksic, Tom; Duggan, Christopher P.

    2015-01-01

    Objectives To examine the agreement of multifrequency bioelectric impedance analysis (BIA) and anthropometry with reference methods for body composition assessment in children with intestinal failure (IF). Methods We conducted a prospective pilot study in children 14 years of age or younger with IF resulting from either short bowel syndrome (SBS) or motility disorders. Bland-Altman analysis was used to examine the agreement between BIA and deuterium dilution in measuring total body water (TBW) and lean body mass (LBM); and between BIA and dual X-ray absorptiometry (DXA) techniques in measuring LBM and FM. Fat mass (FM) and percent body fat (%BF) measurements by BIA and anthropometry were also compared in relation to those measured by deuterium dilution. Results Fifteen children with IF, median (IQR) age 7.2 (5.0, 10.0) years, 10 (67%) male, were studied. BIA and deuterium dilution were in good agreement with a mean bias (limits of agreement) of 0.9 (-3.2, 5.0) for TBW (L) and 0.1 (-5.4 to 5.6) for LBM (kg) measurements. The mean bias (limits) for FM (kg) and %BF measurements were 0.4 (-3.8, 4.6) kg and 1.7 (-16.9, 20.3)%, respectively. The limits of agreement were within 1 SD of the mean bias in 12/14 (86%) subjects for TBW and LBM, and in 11/14 (79%) for FM and %BF measurements. Mean bias (limits) for LBM (kg) and FM (kg) between BIA and DXA were 1.6 (-3.0 to 6.3) kg and -0.1 (-3.2 to 3.1) kg, respectively. Mean bias (limits) for FM (kg) and %BF between anthropometry and deuterium dilution were 0.2 (-4.2, 4.6) and -0.2 (-19.5 to 19.1), respectively. The limits of agreement were within 1 SD of the mean bias in 10/14 (71%) subjects. Conclusions In children with intestinal failure, TBW and LBM measurements by the multifrequency BIA method were in agreement with isotope dilution and DXA methods, with small mean bias. In comparison to deuterium dilution, BIA was comparable to anthropometry for FM and %BF assessments with small mean bias. However, the limits of agreement
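
    The agreement statistics quoted above are Bland-Altman quantities; as a generic illustration (with made-up paired values, not the study data), the sketch below computes the mean bias and 95% limits of agreement between two measurement methods.

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Mean bias and 95% limits of agreement between two measurement methods."""
    a, b = np.asarray(method_a, float), np.asarray(method_b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired total-body-water measurements (litres): BIA vs deuterium dilution.
bia       = [18.2, 21.5, 24.0, 19.8, 27.3, 22.1, 25.6, 20.4]
deuterium = [17.5, 22.0, 23.1, 19.0, 26.0, 21.8, 24.9, 19.6]
bias, limits = bland_altman(bia, deuterium)
print(f"bias = {bias:.2f} L, limits of agreement = ({limits[0]:.2f}, {limits[1]:.2f}) L")
```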

  20. Assessing the Accuracy of Ancestral Protein Reconstruction Methods

    PubMed Central

    Williams, Paul D; Pollock, David D; Blackburne, Benjamin P; Goldstein, Richard A

    2006-01-01

    The phylogenetic inference of ancestral protein sequences is a powerful technique for the study of molecular evolution, but any conclusions drawn from such studies are only as good as the accuracy of the reconstruction method. Every inference method leads to errors in the ancestral protein sequence, resulting in potentially misleading estimates of the ancestral protein's properties. To assess the accuracy of ancestral protein reconstruction methods, we performed computational population evolution simulations featuring near-neutral evolution under purifying selection, speciation, and divergence using an off-lattice protein model where fitness depends on the ability to be stable in a specified target structure. We were thus able to compare the thermodynamic properties of the true ancestral sequences with the properties of “ancestral sequences” inferred by maximum parsimony, maximum likelihood, and Bayesian methods. Surprisingly, we found that methods such as maximum parsimony and maximum likelihood that reconstruct a “best guess” amino acid at each position overestimate thermostability, while a Bayesian method that sometimes chooses less-probable residues from the posterior probability distribution does not. Maximum likelihood and maximum parsimony apparently tend to eliminate variants at a position that are slightly detrimental to structural stability simply because such detrimental variants are less frequent. Other properties of ancestral proteins might be similarly overestimated. This suggests that ancestral reconstruction studies require greater care to come to credible conclusions regarding functional evolution. Inferred functional patterns that mimic reconstruction bias should be reevaluated. PMID:16789817

  1. Electromechanical impedance method to assess dental implant stability

    NASA Astrophysics Data System (ADS)

    Tabrizi, Aydin; Rizzo, Piervincenzo; Ochs, Mark W.

    2012-11-01

    The stability of a dental implant is a prerequisite for supporting a load-bearing prosthesis and establishment of a functional bone-implant system. Reliable and noninvasive methods able to assess the bone interface of dental and orthopedic implants (osseointegration) are increasingly demanded for clinical diagnosis and direct prognosis. In this paper, we propose the electromechanical impedance method as a novel approach for the assessment of dental implant stability. Nobel Biocare® implants with a size of 4.3 mm diameter ×13 mm length were placed inside bovine bones that were then immersed in a solution of nitric acid to allow material degradation. The degradation simulated the inverse process of bone healing. The implant-bone systems were monitored by bonding a piezoceramic transducer (PZT) to the implants’ abutment and measuring the admittance of the PZT over time. It was found that the PZT’s admittance and the statistical features associated with its analysis are sensitive to the degradation of the bones and can be correlated to the loss of calcium measured by means of the atomic absorption spectroscopy method. The present study shows promising results and may pave the road towards an innovative approach for the noninvasive monitoring of dental implant stability and integrity.
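
    The abstract mentions statistical features of the PZT admittance without naming them; one feature commonly used in electromechanical-impedance monitoring, shown here only as an assumed example, is the root-mean-square deviation (RMSD) between a baseline signature and a later one. The synthetic conductance curves below are illustrative.

```python
import numpy as np

def rmsd_damage_index(baseline_admittance, current_admittance):
    """Root-mean-square deviation (%) between a baseline and a later admittance
    signature; larger values suggest a change at the bone-implant interface."""
    g0 = np.asarray(baseline_admittance, float)
    g1 = np.asarray(current_admittance, float)
    return 100.0 * np.sqrt(np.sum((g1 - g0) ** 2) / np.sum(g0 ** 2))

# Hypothetical PZT conductance signatures over a frequency sweep (10-100 kHz).
freqs = np.linspace(10e3, 100e3, 200)
baseline = 1e-3 * (1 + 0.2 * np.sin(freqs / 8e3))
degraded = baseline * (1 + 0.05 * np.sin(freqs / 5e3))   # slightly shifted resonances
print(f"RMSD = {rmsd_damage_index(baseline, degraded):.2f} %")
```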

  2. Methods for assessing relative importance in preference based outcome measures.

    PubMed

    Kaplan, R M; Feeny, D; Revicki, D A

    1993-12-01

    This paper reviews issues relevant to preference assessment for utility based measures of health-related quality of life. Cost/utility studies require a common measurement of health outcome, such as the quality adjusted life year (QALY). A key element in the QALY methodology is the measure of preference that estimates subjective health quality. Economists and psychologists differ on their preferred approach to preference measurement. Economists rely on utility assessment methods that formally consider economic trades. These methods include the standard gamble, time trade-off and person trade-off. However, some evidence suggests that many of the assumptions that underlie economic measurements of choice are open to challenge because human information processors do poorly at integrating complex probability information when making decisions that involve risk. Further, economic analysis assumes that choices accurately correspond to the way rational humans use information. Psychology experiments suggest that methods commonly used for economic analysis do not represent the underlying true preference continuum, and some evidence supports the use of simple rating scales. More recent research by economists attempts to integrate cognitive models, while contemporary research by psychologists considers economic models of choice. The review also suggests that differences in preference between different social groups tend to be small.

  3. New actigraphic assessment method for periodic leg movements (PLM).

    PubMed

    Kazenwadel, J; Pollmächer, T; Trenkwalder, C; Oertel, W H; Kohnen, R; Künzel, M; Krüger, H P

    1995-10-01

    A new actigraphic method by which periodic leg movements (PLM) can be measured is presented. Data acquisition and analysis were brought into line to distinguish short-lasting repetitive leg movements from random motor restlessness. The definition of PLM follows the generally accepted criteria for PLM scoring. Thirty restless legs patients, all also suffering from PLM, were investigated three times by polysomnography, including tibialis anterior surface electromyography and actigraphy. A high correlation (reliability) was found between the two methods for the number of PLM per hour spent in bed. Furthermore, the actigraph records PLM specifically: an index of random motor restlessness is not sufficient for a reliable PLM recording. In addition, periodic movements in sleep (PMS) and PLM show comparable variability in general. The actigraphic assessment of PLM, however, gives a better measure because PMS recordings may result in a substantial underestimation of PLM when sleep efficiency is reduced. This method is an ambulatory assessment tool that can also be used for screening purposes.

  4. Comparison of methods for assessing integrity of equine sperm membranes.

    PubMed

    Foster, M L; Love, C C; Varner, D D; Brinsko, S P; Hinrichs, K; Teague, S; Lacaze, K; Blanchard, T L

    2011-07-15

    Sperm membrane integrity (SMI) is thought to be an important measure of stallion sperm quality. The objective was to compare three methods for evaluating SMI: flow cytometry using SYBR-14/propidium iodide (PI) stain; an automated cell counting device using PI stain; and eosin-nigrosin stain. Raw equine semen was subjected to various treatments containing 20 to 80% seminal plasma in extender, with differing sperm concentrations, to simulate spontaneous loss of SMI. The SMI was assessed immediately, and after 1 and 2 d of cooled storage. Agreement between methods was determined according to Bland-Altman methodology. Eosin-nigrosin staining yielded higher (2%) overall mean values for SMI than did flow cytometry. Flow cytometry yielded higher (6%) overall mean values for SMI than did the automated cell counter. As percentage of membrane-damaged sperm increased, agreement of SMI measurement between methods decreased. When semen contained 50-79% membrane-intact sperm, the 95% limits of agreement between SMI determined by flow cytometry and eosin-nigrosin staining were greater (range = -26.9 to 24.3%; i.e., a 51.2% span) than for SMI determined by flow cytometry and the automated cell counter (range = -3.1 to 17.0%; 20.1% span). When sperm populations contained <50% membrane-intact sperm, the 95% limits of agreement between SMI determined by flow cytometry and eosin-nigrosin staining were greater (range = -35.9 to 19.0%; 54.9% span) than for SMI determined by flow cytometry and the automated cell counter (range = -11.6 to 28.7%; 40.3% span). We concluded that eosin-nigrosin staining assessments of percent membrane-intact sperm agreed less with flow cytometry when <80% of sperm had intact membranes, whereas automated cell counter assessments of percent membrane-intact sperm agreed less with flow cytometry when <30% of sperm had intact membranes.

  5. Data processing costs for three posture assessment methods

    PubMed Central

    2013-01-01

    Background Data processing contributes a non-trivial proportion to total research costs, but documentation of these costs is rare. This paper employed a priori cost tracking for three posture assessment methods (self-report, observation of video, and inclinometry), developed a model describing the fixed and variable cost components, and simulated additional study scenarios to demonstrate the utility of the model. Methods Trunk and shoulder postures of aircraft baggage handlers were assessed for 80 working days using all three methods. A model was developed to estimate data processing phase costs, including fixed and variable components related to study planning and administration, custom software development, training of analysts, and processing time. Results Observation of video was the most costly data processing method with total cost of € 30,630, and was 1.2-fold more costly than inclinometry (€ 26,255), and 2.5-fold more costly than self-reported data (€ 12,491). Simulated scenarios showed altering design strategy could substantially impact processing costs. This was shown for both fixed parameters, such as software development and training costs, and variable parameters, such as the number of work-shift files processed, as well as the sampling frequency for video observation. When data collection and data processing costs were combined, the cost difference between video and inclinometer methods was reduced to 7%; simulated data showed this difference could be diminished and, even, reversed at larger study sample sizes. Self-report remained substantially less costly under all design strategies, but produced alternate exposure metrics. Conclusions These findings build on the previously published data collection phase cost model by reporting costs for post-collection data processing of the same data set. Together, these models permit empirically based study planning and identification of cost-efficient study designs. PMID:24118872
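
    As a toy illustration of the fixed-plus-variable cost model described above (with invented figures, not the study's euro amounts), the sketch below shows how the ranking of two methods can flip as the number of processed work-shift files grows, which is the kind of scenario simulation the paper reports.

```python
def processing_cost(fixed_costs, cost_per_file, n_files):
    """Total data-processing cost: a fixed component (planning, software,
    analyst training) plus a variable per-file component."""
    return fixed_costs + cost_per_file * n_files

# Hypothetical cost parameters (EUR) for two methods at increasing sample sizes.
for n in (80, 200, 500):
    video = processing_cost(fixed_costs=15000, cost_per_file=180, n_files=n)
    inclinometer = processing_cost(fixed_costs=8000, cost_per_file=230, n_files=n)
    print(f"n={n}: video ~ {video} EUR, inclinometer ~ {inclinometer} EUR")
```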

  6. Research Spotlight: New method to assess coral reef health

    NASA Astrophysics Data System (ADS)

    Tretkoff, Ernie

    2011-03-01

    Coral reefs around the world are becoming stressed due to rising temperatures, ocean acidification, overfishing, and other factors. Measuring community level rates of photosynthesis, respiration, and biogenic calcification is essential to assessing the health of coral reef ecosystems because the balance between these processes determines the potential for reef growth and the export of carbon. Measurements of biological productivity have typically been made by tracing changes in dissolved oxygen in seawater as it passes over a reef. However, this is a labor-intensive and difficult method, requiring repeated measurements. (Geophysical Research Letters, doi:10.1029/2010GL046179, 2011)

  7. Comparative assessment of the methods for exchangeable acidity measuring

    NASA Astrophysics Data System (ADS)

    Vanchikova, E. V.; Shamrikova, E. V.; Bespyatykh, N. V.; Zaboeva, G. A.; Bobrova, Yu. I.; Kyz"yurova, E. V.; Grishchenko, N. V.

    2016-05-01

    A comparative assessment of the results of measuring the exchangeable acidity and its components by different methods was performed for the main mineral genetic horizons of texturally-differentiated gleyed and nongleyed soddy-podzolic and gley-podzolic soils of the Komi Republic. It was shown that the contents of all the components of exchangeable soil acidity determined by the Russian method (with potassium chloride solution as extractant, c(KCl) = 1 mol/dm3) were significantly higher than those obtained by the international method (with barium chloride solution as extractant, c(BaCl2) = 0.1 mol/dm3). The error of the estimate of the concentration of H+ ions extracted with barium chloride solution equaled 100%, and this allowed only qualitative description of this component of the soil acidity. In the case of the extraction with potassium chloride, the error of measurements was 50%. It was also shown that the use of potentiometric titration suggested by the Russian method overestimates the results of soil acidity measurement caused by the exchangeable metal ions (Al(III), Fe(III), and Mn(II)) in comparison with the atomic emission method.

  8. A Solution Quality Assessment Method for Swarm Intelligence Optimization Algorithms

    PubMed Central

    Wang, Gai-Ge; Zou, Kuansheng; Zhang, Jianhua

    2014-01-01

    Nowadays, swarm intelligence optimization has become an important optimization tool and is widely used in many fields of application. In contrast to its many successful applications, its theoretical foundation is rather weak. Therefore, there are still many problems to be solved. One problem is how to quantify the performance of an algorithm in finite time, that is, how to evaluate the solution quality obtained by an algorithm for practical problems. This greatly limits application to practical problems. A solution quality assessment method for intelligent optimization is proposed in this paper. It is an experimental analysis method based on the analysis of the search space and the characteristics of the algorithm itself. Instead of “value performance,” “ordinal performance” is used as the evaluation criterion in this method. The feasible solutions were clustered according to distance to divide the solution samples into several parts. Then, the solution space and the “good enough” set can be decomposed based on the clustering results. Last, using relevant statistical knowledge, the evaluation result can be obtained. To validate the proposed method, some intelligent algorithms such as ant colony optimization (ACO), particle swarm optimization (PSO), and the artificial fish swarm algorithm (AFS) were applied to the traveling salesman problem. Computational results indicate the feasibility of the proposed method. PMID:25013845

  9. A solution quality assessment method for swarm intelligence optimization algorithms.

    PubMed

    Zhang, Zhaojun; Wang, Gai-Ge; Zou, Kuansheng; Zhang, Jianhua

    2014-01-01

    Nowadays, swarm intelligence optimization has become an important optimization tool and is widely used in many fields of application. In contrast to its many successful applications, its theoretical foundation is rather weak. Therefore, there are still many problems to be solved. One problem is how to quantify the performance of an algorithm in finite time, that is, how to evaluate the solution quality obtained by an algorithm for practical problems. This greatly limits application to practical problems. A solution quality assessment method for intelligent optimization is proposed in this paper. It is an experimental analysis method based on the analysis of the search space and the characteristics of the algorithm itself. Instead of "value performance," "ordinal performance" is used as the evaluation criterion in this method. The feasible solutions were clustered according to distance to divide the solution samples into several parts. Then, the solution space and the "good enough" set can be decomposed based on the clustering results. Last, using relevant statistical knowledge, the evaluation result can be obtained. To validate the proposed method, some intelligent algorithms such as ant colony optimization (ACO), particle swarm optimization (PSO), and the artificial fish swarm algorithm (AFS) were applied to the traveling salesman problem. Computational results indicate the feasibility of the proposed method.

  10. Compatibility assessment of methods used for soil hydrophobicity determination

    NASA Astrophysics Data System (ADS)

    Papierowska, Ewa; Szatyłowicz, Jan; Kalisz, Barbara; Łachacz, Andrzej; Matysiak, Wojciech; Debaene, Guillaume

    2016-04-01

    Soil hydrophobicity is a global problem. The effect of hydrophobicity on the soil environment is very important, because it can cause irreversible changes in ecosystems, leading to their complete degradation. The choice of method used to determine soil hydrophobicity is not simple because there are no obvious criteria for selection. The results obtained by various methods may not be coherent and may indicate different degrees of hydrophobicity within the same soil sample. The objective of the study was to assess the compatibility between methods used to determine the hydrophobicity of selected organic and mineral-organic soils. Two groups of soil materials were examined: hydrogenic (87 soil samples) and autogenic soils (19 soil samples) collected from 41 soil profiles located in north-eastern Poland. Air-dry soil samples were used. Hydrophobicity was determined using two different methods, i.e. on the basis of wetting contact angle measurements between water and the solid phase of the soils, and with water drop penetration time tests. The value of the wetting contact angle was measured using the sessile drop method with an optical goniometer CAM 100 (KSV Instruments). The wetting contact angles were determined at room temperature (20° C) within 10 min after sample preparation using a standard procedure. In addition, water drop penetration time was measured. In order to compare the methods used for the assessment of soil hydrophobicity, an agreement-between-observers model was applied. In this model, five categories of soil hydrophobicity were proposed according to the classes used in the soil hydrophobicity classification based on the water drop penetration time test. Based on this classification, the values of the weighted kappa coefficients were calculated using SAS 9.4 (SAS Institute, 2013, Cary NC) to evaluate the relationships between the different investigated methods. The results of agreement were presented in the form of agreement charts. Research results indicated good
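
    As a small illustration of the weighted kappa statistic used above (computed here with scikit-learn rather than SAS, and on made-up class assignments), the sketch below compares hypothetical five-class hydrophobicity ratings from the two methods.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical hydrophobicity classes (0 = wettable ... 4 = extremely hydrophobic)
# assigned to the same samples by the WDPT-based and contact-angle-based methods.
rng = np.random.default_rng(4)
wdpt_class = rng.integers(0, 5, 100)
noise = rng.integers(-1, 2, 100)                       # occasional one-class disagreement
contact_angle_class = np.clip(wdpt_class + noise, 0, 4)

# Linearly weighted kappa penalizes larger class disagreements more heavily.
kappa = cohen_kappa_score(wdpt_class, contact_angle_class, weights="linear")
print(f"linearly weighted kappa: {kappa:.2f}")
```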

  11. Using different methods to assess the discomfort during car driving.

    PubMed

    Ravnik, David; Otáhal, Stanislav; Dodic Fikfak, Metoda

    2008-03-01

    This study investigated the discomfort caused by car driving. Discomfort estimates were obtained by a self-administered questionnaire, measured by different testing methods, and through the goniometry of principal angles. Data from a total of 200 non-professional drivers who completed the questionnaire were analysed. 118 subjects were analysed by goniometry, and 30 drivers were assessed using the OWAS (Ovako Working Posture Analysis), RULA (Rapid Upper Limb Assessment), and CORLETT tests. The aim of this paper was to assess the appearance of discomfort and to find some correlations with drivers' postures. Results suggest that different levels of discomfort are perceived in different body regions when driving cars. Differences appear mostly between the genders concerning discomfort. With the questionnaire and the different estimation techniques, it is possible to identify 'at risk' drivers and ensure urgent attention when necessary. It can be concluded that the questionnaire and the CORLETT test are good at predicting the location of discomfort. The Borg CR10 scale is a good indicator of the level of discomfort, while OWAS and RULA can appraise body posture to predict the appearance of discomfort. According to the goniometry data, the driver's posture could be one of the contributing factors in the appearance of discomfort.

  12. Medical Imaging Image Quality Assessment with Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Michail, C. M.; Karpetas, G. E.; Fountos, G. P.; Kalyvas, N. I.; Martini, Niki; Koukou, Vaia; Valais, I. G.; Kandarakis, I. S.

    2015-09-01

    The aim of the present study was to assess the image quality of PET scanners through a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed using the GATE MC package, and reconstructed images were obtained with the STIR software for tomographic image reconstruction, with cluster computing. The PET scanner simulated in this study was the GE DiscoveryST. A plane source, consisting of a TLC plate, was simulated as a layer of silica gel on aluminum (Al) foil substrates, immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the Modulation Transfer Function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed by the maximum likelihood estimation (MLE)-OSMAPOSL algorithm. OSMAPOSL reconstruction was assessed by using various subsets (3 to 21) and iterations (1 to 20), as well as various beta (hyper) parameter values. MTF values were found to increase up to the 12th iteration and to remain almost constant thereafter. The MTF improves when lower beta values are used. The simulated PET evaluation method based on the TLC plane source can also be useful in research for the further development of PET and SPECT scanners through GATE simulations.

  13. Cumulative Risk Assessment Toolbox: Methods and Approaches for the Practitioner

    PubMed Central

    MacDonell, Margaret M.; Haroun, Lynne A.; Teuschler, Linda K.; Rice, Glenn E.; Hertzberg, Richard C.; Butler, James P.; Chang, Young-Soo; Clark, Shanna L.; Johns, Alan P.; Perry, Camarie S.; Garcia, Shannon S.; Jacobi, John H.; Scofield, Marcienne A.

    2013-01-01

    The historical approach to assessing health risks of environmental chemicals has been to evaluate them one at a time. In fact, we are exposed every day to a wide variety of chemicals and are increasingly aware of potential health implications. Although considerable progress has been made in the science underlying risk assessments for real-world exposures, implementation has lagged because many practitioners are unaware of methods and tools available to support these analyses. To address this issue, the US Environmental Protection Agency developed a toolbox of cumulative risk resources for contaminated sites, as part of a resource document that was published in 2007. This paper highlights information for nearly 80 resources from the toolbox and provides selected updates, with practical notes for cumulative risk applications. Resources are organized according to the main elements of the assessment process: (1) planning, scoping, and problem formulation; (2) environmental fate and transport; (3) exposure analysis extending to human factors; (4) toxicity analysis; and (5) risk and uncertainty characterization, including presentation of results. In addition to providing online access, plans for the toolbox include addressing nonchemical stressors and applications beyond contaminated sites and further strengthening resource accessibility to support evolving analyses for cumulative risk and sustainable communities. PMID:23762048

  14. Fuzzy pattern recognition method for assessing soil erosion.

    PubMed

    Saadatpour, Motahareh; Afshar, Abbas; Afshar, Mohammad Hadi

    2011-09-01

    In this paper a PSIAC-based multi-parameter fuzzy pattern recognition (MPFPR) model is proposed and applied for classifying and ranking the potential soil erosion (PSE). In this approach, a standard value matrix is used to define the membership degrees of each catchment to each class, and the feature values are used for alternative ranking. The characteristic of PSE for each class is expressed by linguistic variables. The proposed method is straightforward, easy to understand, and very practical, and its results may easily be interpreted. To assess the performance of the model, the results of the PSIAC-based MPFPR and the original PSIAC method are interpreted and compared with the observed data. It is shown that the proposed approach reflects the fuzzy nature of soil erosion more efficiently and is quite robust for application in real-world cases.

  15. Thermography as a quantitative imaging method for assessing postoperative inflammation

    PubMed Central

    Christensen, J; Matzen, LH; Vaeth, M; Schou, S; Wenzel, A

    2012-01-01

    Objective To assess differences in skin temperature between the operated and control side of the face after mandibular third molar surgery using thermography. Methods 127 patients had 1 mandibular third molar removed. Before the surgery, standardized thermograms were taken of both sides of the patient's face using a Flir ThermaCam™ E320 (Precisions Teknik AB, Halmstad, Sweden). The imaging procedure was repeated 2 days and 7 days after surgery. A region of interest including the third molar region was marked on each image. The mean temperature within each region of interest was calculated. The differences between sides and over time were assessed using paired t-tests. Results No significant difference was found between the operated side and the control side either before or 7 days after surgery (p > 0.3). The temperature of the operated side (mean: 32.39 °C, range: 28.9–35.3 °C) was higher than that of the control side (mean: 32.06 °C, range: 28.5–35.0 °C) 2 days after surgery [0.33 °C, 95% confidence interval (CI): 0.22–0.44 °C, p < 0.001]. No significant difference was found between the pre-operative and the 7-day post-operative temperature (p > 0.1). After 2 days, the operated side was not significantly different from the temperature pre-operatively (p = 0.12), whereas the control side had a lower temperature (0.57 °C, 95% CI: 0.29–0.86 °C, p < 0.001). Conclusions Thermography seems useful for quantitative assessment of inflammation between the intervention side and the control side after surgical removal of mandibular third molars. However, thermography cannot be used to assess absolute temperature changes due to normal variations in skin temperature over time. PMID:22752326
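
    The paired comparison reported above reduces to a paired t-test on the two sets of ROI mean temperatures. A minimal sketch with hypothetical values (the study's analysis was of course run on the full 127-patient dataset):

```python
import numpy as np
from scipy import stats

# Hypothetical mean ROI temperatures (°C) for operated and control sides, day 2
operated = np.array([32.8, 31.9, 33.1, 32.4, 32.6, 31.7, 32.9, 32.2])
control  = np.array([32.3, 31.6, 32.7, 32.1, 32.2, 31.5, 32.4, 31.9])

diff = operated - control
t, p = stats.ttest_rel(operated, control)          # paired t-test

# 95% confidence interval for the mean side-to-side difference
ci = stats.t.interval(0.95, df=diff.size - 1,
                      loc=diff.mean(), scale=stats.sem(diff))
print(f"mean difference = {diff.mean():.2f} °C, t = {t:.2f}, p = {p:.4f}, 95% CI = {ci}")
```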

  16. Passive sampling methods for contaminated sediments: Risk assessment and management

    PubMed Central

    Greenberg, Marc S; Chapman, Peter M; Allan, Ian J; Anderson, Kim A; Apitz, Sabine E; Beegan, Chris; Bridges, Todd S; Brown, Steve S; Cargill, John G; McCulloch, Megan C; Menzie, Charles A; Shine, James P; Parkerton, Thomas F

    2014-01-01

    This paper details how activity-based passive sampling methods (PSMs), which provide information on bioavailability in terms of freely dissolved contaminant concentrations (Cfree), can be used to better inform risk management decision making at multiple points in the process of assessing and managing contaminated sediment sites. PSMs can increase certainty in site investigation and management, because Cfree is a better predictor of bioavailability than total bulk sediment concentration (Ctotal) for 4 key endpoints included in conceptual site models (benthic organism toxicity, bioaccumulation, sediment flux, and water column exposures). The use of passive sampling devices (PSDs) presents challenges with respect to representative sampling for estimating average concentrations and other metrics relevant for exposure and risk assessment. These challenges can be addressed by designing studies that account for sources of variation associated with PSMs and considering appropriate spatial scales to meet study objectives. Possible applications of PSMs include: quantifying spatial and temporal trends in bioavailable contaminants, identifying and evaluating contaminant source contributions, calibrating site-specific models, and improving weight-of-evidence based decision frameworks. PSM data can be used to assist in delineating sediment management zones based on likelihood of exposure effects, monitor remedy effectiveness, and evaluate risk reduction after sediment treatment, disposal, or beneficial reuse after management actions. Examples are provided illustrating why PSMs and freely dissolved contaminant concentrations (Cfree) should be incorporated into contaminated sediment investigations and study designs to better focus on and understand contaminant bioavailability, more accurately estimate exposure to sediment-associated contaminants, and better inform risk management decisions. Research and communication needs for encouraging broader use are discussed. Integr

  17. Interactive Rapid Dose Assessment Model (IRDAM): reactor-accident assessment methods. Vol. 2

    SciTech Connect

    Poeton, R.W.; Moeller, M.P.; Laughlin, G.J.; Desrosiers, A.E.

    1983-05-01

    As part of the continuing emphasis on emergency preparedness, the US Nuclear Regulatory Commission (NRC) sponsored the development of a rapid dose assessment system by Pacific Northwest Laboratory (PNL). This system, the Interactive Rapid Dose Assessment Model (IRDAM) is a micro-computer based program for rapidly assessing the radiological impact of accidents at nuclear power plants. This document describes the technical bases for IRDAM including methods, models and assumptions used in calculations. IRDAM calculates whole body (5-cm depth) and infant thyroid doses at six fixed downwind distances between 500 and 20,000 meters. Radionuclides considered primarily consist of noble gases and radioiodines. In order to provide a rapid assessment capability consistent with the capacity of the Osborne-1 computer, certain simplifying approximations and assumptions are made. These are described, along with default values (assumptions used in the absence of specific input) in the text of this document. Two companion volumes to this one provide additional information on IRDAM. The user's Guide (NUREG/CR-3012, Volume 1) describes the setup and operation of equipment necessary to run IRDAM. Scenarios for Comparing Dose Assessment Models (NUREG/CR-3012, Volume 3) provides the results of calculations made by IRDAM and other models for specific accident scenarios.

  18. Employing the radiological and nuclear risk assessment methods (RNRAM) for assessing radiological and nuclear detection architectures

    SciTech Connect

    Brigantic, Robert T.; Eddy, Ryan R.

    2014-03-20

    The United States Department of Homeland Security’s Domestic Nuclear Detection Office (DNDO) is charged with implementing domestic nuclear detection efforts to protect the U.S. from radiological and nuclear threats. DNDO is also responsible for coordinating the development of the Global Nuclear Detection Architecture (GNDA). DNDO utilizes a unique risk analysis tool to conduct a holistic risk assessment of the GNDA known as the Radiological and Nuclear Risk Assessment Methods (RNRAM). The capabilities of this tool are used to support internal DNDO analyses, and the tool has also been used by other entities such as the International Atomic Energy Agency. The model uses a probabilistic risk assessment methodology and includes the ability to conduct a risk assessment of the effectiveness of layered architectures in the GNDA against an attack by an intelligent, adaptive adversary. This paper overviews the basic structure, capabilities, and use of RNRAM as used to assess different architectures and how various risk components are calculated through a series of interconnected modules. Also highlighted is the flexible structure of RNRAM, which can accommodate new modules in order to examine a variety of threat detection architectures and concepts.

  19. Novel Method for Border Irregularity Assessment in Dermoscopic Color Images

    PubMed Central

    Jaworek-Korjakowska, Joanna

    2015-01-01

    Background. One of the most important lesion features predicting malignancy is border irregularity. Accurate assessment of irregular borders is clinically important due to their significantly different occurrence in benign and malignant skin lesions. Method. In this research, we present a new approach for the detection of border irregularities, one of the major parameters in a widely used diagnostic algorithm, the ABCD rule of dermoscopy. The proposed work is focused on designing an efficient automatic algorithm containing the following steps: image enhancement, lesion segmentation, borderline calculation, and irregularities detection. The challenge lies in determining the exact borderline. To solve this problem we have implemented a new method based on lesion rotation and borderline division. Results. The algorithm has been tested on 350 dermoscopic images and achieved an accuracy of 92%, indicating that the proposed computational approach captures most of the irregularities and provides reliable information for effective skin mole examination. Compared to the state of the art, we obtained improved classification results. Conclusions. The current study suggests that a computer-aided system is a practical tool for dermoscopic image assessment and could be recommended for both research and clinical applications. The proposed algorithm can be applied in different fields of medical image analysis including, for example, CT and MRI images. PMID:26604980
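
    The paper's own borderline method (lesion rotation and borderline division) is not reproduced here, but the kind of quantity involved can be illustrated with a much simpler circularity-based irregularity index computed on a binary lesion mask; the mask below is synthetic and scikit-image is an assumed dependency.

```python
import numpy as np
from skimage import measure

def irregularity_index(mask):
    """Simple border irregularity proxy: perimeter^2 / (4*pi*area).
    Equals roughly 1 for a disc and grows as the border becomes more irregular.
    This is NOT the rotation/borderline-division method of the paper."""
    props = measure.regionprops(mask.astype(int))[0]
    return props.perimeter ** 2 / (4.0 * np.pi * props.area)

# Hypothetical binary lesion mask (1 = lesion pixel)
mask = np.zeros((64, 64), dtype=np.uint8)
rr, cc = np.ogrid[:64, :64]
mask[(rr - 32) ** 2 + (cc - 32) ** 2 < 20 ** 2] = 1   # roughly circular lesion
print(f"irregularity index: {irregularity_index(mask):.2f}")
```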

  20. Comparison of 3 methods of selenium assessment in cattle.

    PubMed Central

    Waldner, C; Campbell, J; Jim, G K; Guichon, P T; Booker, C

    1998-01-01

    Three tests are routinely done to assess blood selenium status in cattle: serum selenium, whole blood selenium, and glutathione peroxidase. The objective of this study was to compare the various analytical methods for determining blood selenium status in groups of mature cows and beef calves. Twenty to 30 blood samples per herd were collected from 8 beef herds in central Alberta and 1 dairy herd in Alberta twice a year from the spring of 1992 through the fall of 1995, and once from 185 spring calves in 2 beef herds in Saskatchewan. Serum and whole blood samples were submitted to 1 laboratory and whole blood samples were submitted to a 2nd laboratory. Samples for glutathione peroxidase determinations were submitted to a 3rd laboratory. Pearson's correlation coefficients and Cohen's kappa were calculated for each possible comparison among the different measures. The best agreement was observed between serum and whole blood analysis within Laboratory A. The remaining comparisons reflected poor agreement. Comparison of herd-level assessments resulted in better agreement than comparison of individual sample results among laboratories and procedures for all combinations tested. Serum selenium analysis was the only laboratory procedure for which external reference material was utilized. Serum selenium, whole blood selenium, and glutathione peroxidase measure different compartments of the blood selenium pool. The time frame of interest, supplementation practices, and the stability of recent dietary intake determine the optimum assessment method for individual animals or herds. Serum or whole blood selenium status is more consistently measured at the herd level than for individual samples. PMID:9559213

  1. Methods for assessing pre-induction cervical ripening

    PubMed Central

    Ezebialu, Ifeanyichukwu U; Eke, Ahizechukwu C; Eleje, George U; Nwachukwu, Chukwuemeka E

    2015-01-01

    Background Induction of labour is the artificial initiation of labour in a pregnant woman after the age of fetal viability but without any objective evidence of active phase labour and with intact fetal membranes. The need for induction of labour may arise due to a problem in the mother, her fetus or both, and the procedure may be carried out at or before term. Obstetricians have long known that for this to be successful, it is important that the uterine cervix (the neck of the womb) has favourable characteristics in terms of readiness to go into the labour state. Objectives To compare Bishop score with any other method for assessing pre-induction cervical ripening in women admitted for induction of labour. Search methods We searched the Cochrane Pregnancy and Childbirth Group's Trials Register (31 March 2015) and reference lists of retrieved studies to identify randomised controlled trials (RCTs). Selection criteria All RCTs comparing Bishop score with any other methods of pre-induction cervical assessment in women admitted for induction of labour. Cluster-RCTs were eligible for inclusion but none were identified. Quasi-RCTs and studies using a cross-over design were not eligible for inclusion. Studies published in abstract form were eligible for inclusion if they provided sufficient information. Comparisons could include the following. Bishop score versus transvaginal ultrasound (TVUS). Bishop score versus Insulin-like growth factor binding protein-1 (IGFBP-1). Bishop score versus vaginal fetal fibronectin (fFN). However, we only identified data for a comparison of Bishop score versus TVUS. Data collection and analysis Two review authors independently assessed the trials for inclusion, extracted the data and assessed trial quality. Data were checked for accuracy. Main results We included two trials that recruited a total of 234 women. The overall risk of bias was low for the two studies. Both studies compared Bishop score with TVUS. The two included studies did

  2. Improving Academic Program Assessment: A Mixed Methods Study

    ERIC Educational Resources Information Center

    Rodgers, Megan; Grays, Makayla P.; Fulcher, Keston H.; Jurich, Daniel P.

    2013-01-01

    Starting with the premise that better assessment leads to more informed decisions about student learning, we investigated the factors that lead to assessment improvement. We used "meta-assessment" (i.e., evaluating the assessment process) to identify academic programs in which the assessment process had improved over a two-year period.…

  3. A hierarchical network modeling method for railway tunnels safety assessment

    NASA Astrophysics Data System (ADS)

    Zhou, Jin; Xu, Weixiang; Guo, Xin; Liu, Xumin

    2017-02-01

    Using network theory to model risk-related knowledge on accidents is regarded as potentially very helpful in risk management. A large amount of defect detection data for railway tunnels is collected every autumn in China, and it is extremely important to discover the regularities hidden in this database. In this paper, based on network theories and by using data mining techniques, a new method is proposed for mining risk-related regularities to support risk management in railway tunnel projects. A hierarchical network (HN) model which takes into account the tunnel structures, tunnel defects, potential failures and accidents is established. An improved Apriori algorithm is designed to rapidly and effectively mine correlations between tunnel structures and tunnel defects. Then an algorithm is presented in order to mine the risk-related regularities table (RRT) from the frequent patterns. At last, a safety assessment method is proposed by considering the actual defects and the possible risks of defects gained from the RRT. This method can not only generate quantitative risk results but also reveal the key defects and critical risks of defects. This paper is a further development of accident causation network modeling methods, which can provide guidance for specific maintenance measures.
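
    The paper designs an improved Apriori algorithm; the sketch below instead uses the standard Apriori implementation from mlxtend on a small hypothetical table of inspection records, only to show the kind of structure-defect regularities being mined.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Hypothetical tunnel inspection records: each row is one tunnel section,
# columns are one-hot flags for structure attributes and observed defects.
records = pd.DataFrame({
    "lining_type_A":    [1, 1, 0, 1, 0, 1],
    "high_groundwater": [1, 0, 1, 1, 1, 0],
    "crack":            [1, 1, 0, 1, 0, 1],
    "leakage":          [1, 0, 1, 1, 1, 0],
}, dtype=bool)

# Frequent co-occurrence patterns between structure attributes and defects
frequent = apriori(records, min_support=0.5, use_colnames=True)

# Regularities of the form "structure attribute -> defect"
rules = association_rules(frequent, metric="confidence", min_threshold=0.8)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```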

  4. Point shear wave elastography method for assessing liver stiffness

    PubMed Central

    Ferraioli, Giovanna; Tinelli, Carmine; Lissandrin, Raffaella; Zicchetti, Mabel; Dal Bello, Barbara; Filice, Gaetano; Filice, Carlo

    2014-01-01

    AIM: To estimate the validity of the point shear-wave elastography method by evaluating its reproducibility and accuracy for assessing liver stiffness. METHODS: This was a single-center, cross-sectional study. Consecutive patients with chronic viral hepatitis scheduled for liver biopsy (LB) (Group 1) and healthy volunteers (Group 2) were studied. In each subject 10 consecutive point shear-wave elastography (PSWE) measurements were performed using the iU22 ultrasound system (Philips Medical Systems, Bothell, WA, United States). Patients in Group 1 underwent PSWE, transient elastography (TE) using FibroScan (Echosens, Paris, France) and ultrasound-assisted LB. For the assessment of PSWE reproducibility two expert raters (rater 1 and rater 2) independently performed the examinations. The performance of PSWE was compared to that of TE using LB as a reference standard. Fibrosis was staged according to the METAVIR scoring system. Receiver operating characteristic curve analyses were performed to calculate the area under the receiver operating characteristic curve (AUC) for F ≥ 2, F ≥ 3 and F = 4. The intraobserver and interobserver reproducibility of PSWE were assessed by calculating Lin’s concordance correlation coefficient. RESULTS: To assess the performance of PSWE, 134 consecutive patients in Group 1 were studied. The median values of PSWE and TE (in kilopascals) were 4.7 (IQR = 3.8-5.4) and 5.5 (IQR = 4.7-6.5), respectively, in patients at the F0-F1 stage and 3.5 (IQR = 3.2-4.0) and 4.4 (IQR = 3.5-4.9), respectively, in the healthy volunteers in Group 2 (P < 10⁻⁵). In the univariate analysis, the PSWE and TE values showed a high correlation with the fibrosis stage; low correlations with the degree of necroinflammation, aspartate aminotransferase and gamma-glutamyl transferase (GGT); and a moderate negative correlation with the platelet count. A multiple regression analysis confirmed the correlations of both PSWE and TE with fibrosis stage and GGT but not with
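
    Lin's concordance correlation coefficient used for the reproducibility assessment can be computed directly from its definition; the sketch below applies it to hypothetical pairs of stiffness measurements from the two raters.

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two sets of measurements."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical stiffness values (kPa) measured by rater 1 and rater 2
rater1 = [4.7, 5.1, 3.8, 6.2, 5.4, 4.9, 7.3, 3.5]
rater2 = [4.9, 5.0, 4.0, 6.0, 5.6, 4.7, 7.1, 3.6]
print(f"Lin's CCC = {lins_ccc(rater1, rater2):.3f}")
```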

  5. Concept Mapping Using Cmap Tools to Enhance Meaningful Learning

    NASA Astrophysics Data System (ADS)

    Cañas, Alberto J.; Novak, Joseph D.

    Concept maps are graphical tools that have been used in all facets of education and training for organizing and representing knowledge. When learners build concept maps, meaningful learning is facilitated. Computer-based concept mapping software such as CmapTools have further extended the use of concept mapping and greatly enhanced the potential of the tool, facilitating the implementation of a concept map-centered learning environment. In this chapter, we briefly present concept mapping and its theoretical foundation, and illustrate how it can lead to an improved learning environment when it is combined with CmapTools and the Internet. We present the nationwide “Proyecto Conéctate al Conocimiento” in Panama as an example of how concept mapping, together with technology, can be adopted by hundreds of schools as a means to enhance meaningful learning.

  6. In search of meaningfulness: nostalgia as an antidote to boredom.

    PubMed

    van Tilburg, Wijnand A P; Igou, Eric R; Sedikides, Constantine

    2013-06-01

    We formulated, tested, and supported, in 6 studies, a theoretical model according to which individuals use nostalgia as a way to reinject meaningfulness in their lives when they experience boredom. Studies 1-3 established that induced boredom causes increases in nostalgia when participants have the opportunity to revert to their past. Studies 4 and 5 examined search for meaning as a mediator of the effect of boredom on nostalgia. Specifically, Study 4 showed that search for meaning mediates the effect of state boredom on nostalgic memory content, whereas Study 5 demonstrated that search for meaning mediates the effect of dispositional boredom on dispositional nostalgia. Finally, Study 6 examined the meaning reestablishment potential of nostalgia during boredom: Nostalgia mediates the effect of boredom on sense of meaningfulness and presence of meaning in one's life. Nostalgia counteracts the meaninglessness that individuals experience when they are bored.

  7. A quantitative assessment method for Ascaris eggs on hands.

    PubMed

    Jeandron, Aurelie; Ensink, Jeroen H J; Thamsborg, Stig M; Dalsgaard, Anders; Sengupta, Mita E

    2014-01-01

    The importance of hands in the transmission of soil transmitted helminths, especially Ascaris and Trichuris infections, is under-researched. This is partly because of the absence of a reliable method to quantify the number of eggs on hands. Therefore, the aim of this study was to develop a method to assess the number of Ascaris eggs on hands and determine the egg recovery rate of the method. Under laboratory conditions, hands were seeded with a known number of Ascaris eggs, air dried and washed in a plastic bag retaining the washing water, in order to determine recovery rates of eggs for four different detergents (cationic [benzethonium chloride 0.1% and cetylpyridinium chloride CPC 0.1%], anionic [7X 1% - quadrafos, glycol ether, and dioctyl sulfoccinate sodium salt] and non-ionic [Tween80 0.1% -polyethylene glycol sorbitan monooleate]) and two egg detection methods (McMaster technique and FLOTAC). A modified concentration McMaster technique showed the highest egg recovery rate from bags. Two of the four diluted detergents (benzethonium chloride 0.1% and 7X 1%) also showed a higher egg recovery rate and were then compared with de-ionized water for recovery of helminth eggs from hands. The highest recovery rate (95.6%) was achieved with a hand rinse performed with 7X 1%. Washing hands with de-ionized water resulted in an egg recovery rate of 82.7%. This washing method performed with a low concentration of detergent offers potential for quantitative investigation of contamination of hands with Ascaris eggs and of their role in human infection. Follow-up studies are needed that validate the hand washing method under field conditions, e.g. including people of different age, lower levels of contamination and various levels of hand cleanliness.

  8. A comparison of geoengineering methods: assessment of precipitation side effects

    NASA Astrophysics Data System (ADS)

    Jackson, L. S.; Crook, J. A.; Osprey, S. M.; Forster, P.

    2014-12-01

    Intentional modification of Earth's climate by geoengineering can restore global mean temperature in climate model simulations but is expected to cause regional inequalities in temperature change and shifts in precipitation which may depend on the geoengineering method employed. In simulations of twenty-first century climate using the UKMO HadGEM2 climate model, we have assessed the effectiveness of two regional scale geoengineering methods (crop and desert albedo modification) and four large scale geoengineering methods (ocean albedo modification, marine cloud brightening by sea salt, cirrus cloud thinning and stratospheric sulphur). We projected anthropogenic emissions based on RCP4.5, applied geoengineering from 2020 to 2069 and quantified the impact on temperature and precipitation for 2040-2059 compared to a no-geoengineering control simulation. We found forcing for crop albedo modification was largely insignificant (-0.3 ± 0.3 Wm-2). Desert albedo modification had a catastrophic impact on tropical precipitation drying the Amazon, the Sahel, India and China. Of the large scale geoengineering simulations, only stratospheric sulphur and ocean albedo modification were potentially scalable to temporarily return global mean temperature to the late twentieth century climate. Cirrus cloud thinning was the only method that increased global mean precipitation (+0.7%) while in other respects the four methods were remarkable in the consistency of their precipitation response to geoengineering compared to the control simulation (Figure 1). Over land, precipitation reduced less (between -0.5% and +1.8%) than global precipitation (between -3.8% and +0.7%). A northward shift in tropical precipitation over the Atlantic and eastern Pacific was found for all four methods, likely driven by cloud rapid adjustments and changes in atmospheric circulation. After geoengineering, during 2080-2099, significant differences in maritime tropical precipitation persisted despite regional

  9. Solubility assessment of 232Th from various types of soil in Malaysia using USP and DIN In Vitro digestion method

    NASA Astrophysics Data System (ADS)

    Rashid, Nur Shahidah Abdul; Perama, Yasmin Mohd Idris; Salih, Fitri Hakeem Mohd; Sarmani, Sukiman; Majid, Amran Ab.; Siong, Khoo Kok

    2016-11-01

    The overall results of the study showed that the concentrations of the 232Th radionuclide obtained with the DIN digestion method were 0.0015 mg/kg - 0.0554 mg/kg during the gastric phase and 0.0015 mg/kg - 0.0139 mg/kg during the intestinal phase. With the USP digestion method, the concentrations were 0.0877 mg/kg - 0.4964 mg/kg during the gastric phase and 0.0207 mg/kg - 0.2291 mg/kg during the intestinal phase. The measurements in the various soil types indicate some elevation of the 232Th concentration in some soils compared with UNSCEAR reference values, which may result from previous mining activity in the surrounding area; the levels are nevertheless considered to be safe. In general, the 232Th concentrations obtained with the in vitro extraction technique are considered to be safe. Through natural processes, ingested thorium is transferred to living beings via different pathways and needs to be monitored in order to assess possible hazards. Environmental studies are generally carried out to trace the pathways by which radionuclides and radiotoxic elements reach living organisms. Environmental monitoring and meaningful interpretation of data on man-made pollution are more complicated without adequate knowledge of the natural abundance of radioactive elements in the environment.

  10. Implications of the CMS final meaningful use information technology rule.

    PubMed

    Thompson, Tamar; Harolds, Jay A

    2010-11-01

    The importance of Health Information Technology and the Electronic Medical Record was discussed in the October issue of Clinical Nuclear Medicine (Henkin and Harolds, Clin Nucl Med. 2010;35). Since that article was written, the Final Rule has been issued on what constitutes the meaningful use of certain information technology, such that it would qualify for an incentive payment. Since billions of dollars will be used for such payments, the issuance of the Final Rule has been eagerly anticipated.

  11. Assessment of Proper Bonding Methods and Mechanical Characterization FPGA CQFPs

    NASA Technical Reports Server (NTRS)

    Davis, Milton C.

    2008-01-01

    This presentation discusses fractured leads on a field-programmable gate array (FPGA) during flight vibration. Actions taken to determine the root cause and resolution of the failure include finite element analysis (FEA), vibration testing, and scanning electron microscopy with X-ray microanalysis and energy dispersive spectrometry (SEM/EDS) failure assessment. Bonding methods for surface mount parts are assessed, including critical analysis and assessment of random fatigue damage. Regarding the ceramic quad flat pack (CQFP) lead fracture, after disassembling the attitude control electronics (ACE) configuration, photographs showed six leads cracked on the FPGA RTSX72SU-1 CQ208B package located on the RWIC card. An identical package (FPGA RTSX32SU-1 CQ208B) mounted on the RWIC did not result in cracked pins due to vibration. FPGA lead failure theories include workmanship issues in the lead-forming, a material defect in the leads of the FPGA packages, and insecure mounting of the board in the card guides, among other theories. Studies were conducted using simple calculations to determine the response and fatigue life of the package. Shorter packages exhibited more response when loaded by out-of-plane displacement of the PCB, while taller packages exhibited more response when loaded by in-plane acceleration of the PCB. Additionally, underfill did not reduce stress in the leads due to out-of-plane PCB loading or component twisting as much as corner bonding did. The combination of corner bonding and underfill is best for addressing the mechanical and thermal spacecraft (S/C) environment. Test results of bonded parts showed reduced (damped) amplitude, slightly shifted peaks at the unbonded natural frequency, and an additional response at the bonded frequency. Stress due to PCB out-of-plane loading was decreased in the corners when only a corner bond was used. Future work may address CQFP fatigue assessment, including the investigation of the discrepancy in predicted fatigue damage, as well as

  12. A simple method for assessing intestinal inflammation in Crohn's disease

    PubMed Central

    Tibble, J; Teahon, K; Thjodleifsson, B; Roseth, A; Sigthorsson, G; Bridger, S; Foster, R; Sherwood, R; Fagerhol, M; Bjarnason, I

    2000-01-01

    BACKGROUND AND AIMS—Assessing the presence and degree of intestinal inflammation objectively, simply, and reliably is a significant problem in gastroenterology. We assessed faecal excretion of calprotectin, a stable neutrophil specific marker, as an index of intestinal inflammation and its potential use as a screening test to discriminate between patients with Crohn's disease and those with irritable bowel syndrome.
METHODS—The validity of faecal calprotectin as a marker of intestinal inflammation was assessed in 22 patients with Crohn's disease (35 studies) by comparing faecal excretions and concentrations with the four day faecal excretion of 111indium-labelled white cells. A cross sectional study assessed the sensitivity of faecal calprotectin concentration for the detection of established Crohn's disease (n=116). A prospective study assessed the value of faecal calprotectin in discriminating between patients with Crohn's disease and irritable bowel syndrome in 220 patients referred to a gastroenterology clinic.
RESULTS—Four day faecal excretion of 111indium (median 8.7%; 95% confidence interval (CI) 7-17%; normal <1.0%) correlated significantly (p<0.0001) with daily (median ranged from 39 to 47 mg; normal <3 mg; r=0.76-0.82) and four day faecal calprotectin excretion (median 101 mg; 95% CI 45-168 mg; normal <11 mg; r=0.80) and single stool calprotectin concentrations (median 118 mg/l; 95% CI 36-175 mg/l; normal <10 mg/l; r=0.70) in patients with Crohn's disease. The cross sectional study showed a sensitivity of 96% for calprotectin in discriminating between normal subjects (2 mg/l; 95% CI 2-3 mg/l) and those with Crohn's disease (91 mg/l; 95% CI 59-105 mg/l). With a cut off point of 30 mg/l faecal calprotectin has 100% sensitivity and 97% specificity in discriminating between active Crohn's disease and irritable bowel syndrome.
CONCLUSION—The calprotectin method may be a useful adjuvant for discriminating between patients with Crohn's disease and
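
    The diagnostic figures quoted for the 30 mg/l cut-off come down to a simple 2 × 2 classification. The sketch below shows the arithmetic on hypothetical calprotectin values; it does not use the study data.

```python
import numpy as np

def sens_spec(values, has_disease, cutoff):
    """Sensitivity and specificity of `values >= cutoff` for predicting disease."""
    values = np.asarray(values, float)
    has_disease = np.asarray(has_disease, bool)
    positive = values >= cutoff
    tp = np.sum(positive & has_disease)
    fn = np.sum(~positive & has_disease)
    tn = np.sum(~positive & ~has_disease)
    fp = np.sum(positive & ~has_disease)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical faecal calprotectin concentrations (mg/l) and diagnoses
calprotectin = [120, 85, 45, 32, 150, 8, 12, 25, 5, 18]
crohns =       [1,   1,  1,  1,  1,   0, 0,  0,  0, 0]
sens, spec = sens_spec(calprotectin, crohns, cutoff=30)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
```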

  13. Alternative Assessment Methods Based on Categorizations, Supporting Technologies, and a Model for Betterment

    ERIC Educational Resources Information Center

    Ben-Jacob, Marion G.; Ben-Jacob, Tyler E.

    2014-01-01

    This paper explores alternative assessment methods from the perspective of categorizations. It addresses the technologies that support assessment. It discusses initial, formative, and summative assessment, as well as objective and subjective assessment, and formal and informal assessment. It approaches each category of assessment from the…

  14. Comparison of methods for assessing thyroid function in nonthyroidal illness

    SciTech Connect

    Melmed, S.; Geola, F.L.; Reed, A.W.; Pekary, A.E.; Park, J.; Hershman, J.M.

    1982-02-01

    Various tests of thyroid function were studied in sick patients with nonthyroidal illness (NTI) in order to determine the utility of each test for differentiating these patients from a group with hypothyroidism. We evaluated each test in 22 healthy volunteers who served as controls, 20 patients with hypothyroidism, 14 patients admitted to a medical intensive care unit whose serum T4 was less than 5 µg/dl, 13 patients with chronic liver disease, 32 patients on chronic hemodialysis for renal failure, 13 ambulatory oncology patients receiving chemotherapy, 16 pregnant women, 7 women on estrogens, and 20 hyperthyroid patients. On all samples, we measured serum T4, the free T4 index by several methods, free T4 by equilibrium dialysis, free T4 calculated from thyronine-binding globulin (TBG) RIA, free T4 by three commercial kits (Gammacoat, Immophase, and Liquisol), T3, rT3, and TSH (by 3 different RIAs). Although all of the methods used for measuring free T4 (including the free T4 index, free T4 by dialysis, free T4 assessed by TBG, and free T4 assessed by the 3 commercial kits) were excellent for the diagnosis of hypothyroidism, hyperthyroidism, and euthyroidism in the presence of high TBG, none of these methods showed that free T4 was consistently normal in patients with NTI; with each method, a number of NTI patients had subnormal values. In the NTI groups, free T4 measured by dialysis and the free T4 index generally correlated significantly with the commercial free T4 methods. Serum rT3 was elevated or normal in NTI patients and low in hypothyroid subjects. Serum TSH provided the most reliable differentiation between patients with primary hypothyroidism and those with NTI and low serum T4 levels.

  15. A comparison of radiological risk assessment methods for environmental restoration

    SciTech Connect

    Dunning, D.E. Jr.; Peterson, J.M.

    1993-09-01

    Evaluation of risks to human health from exposure to ionizing radiation at radioactively contaminated sites is an integral part of the decision-making process for determining the need for remediation and selecting remedial actions that may be required. At sites regulated under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), a target risk range of 10⁻⁴ to 10⁻⁶ incremental cancer incidence over a lifetime is specified by the US Environmental Protection Agency (EPA) as generally acceptable, based on the reasonable maximum exposure to any individual under current and future land use scenarios. Two primary methods currently being used in conducting radiological risk assessments at CERCLA sites are compared in this analysis. Under the first method, the radiation dose equivalent (i.e., Sv or rem) to the receptors of interest over the appropriate period of exposure is estimated and multiplied by a risk factor (cancer risk/Sv). Alternatively, incremental cancer risk can be estimated by combining the EPA's cancer slope factors (previously termed potency factors) for radionuclides with estimates of radionuclide intake by ingestion and inhalation, as well as radionuclide concentrations in soil that contribute to external dose. The comparison of the two methods has demonstrated that resulting estimates of lifetime incremental cancer risk under these different methods may differ significantly, even when all other exposure assumptions are held constant, with the magnitude of the discrepancy depending upon the dominant radionuclides and exposure pathways for the site. The basis for these discrepancies, the advantages and disadvantages of each method, and the significance of the discrepant results for environmental restoration decisions are presented.
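
    The two estimation routes compared in this analysis can be sketched in a few lines. All numbers below are illustrative placeholders, not EPA dose conversion or slope-factor values, and no site data are implied.

```python
# Sketch of the two risk-estimation routes compared in the paper, using
# illustrative placeholder numbers (NOT actual EPA slope factors or site data).

# Method 1: dose equivalent x risk factor
dose_sv = 0.002                      # lifetime dose equivalent (Sv), hypothetical
risk_per_sv = 5.0e-2                 # cancer risk factor per Sv, hypothetical
risk_dose_method = dose_sv * risk_per_sv

# Method 2: radionuclide intake x cancer slope factor
intake_pci = 2.0e4                   # lifetime intake (pCi), hypothetical
slope_factor = 1.5e-8                # cancer risk per pCi ingested, hypothetical
risk_slope_method = intake_pci * slope_factor

print(f"dose-based risk estimate:         {risk_dose_method:.1e}")
print(f"slope-factor-based risk estimate: {risk_slope_method:.1e}")
print(f"ratio (dose / slope-factor):      {risk_dose_method / risk_slope_method:.2f}")
```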

  16. Dental age assessment among Tunisian children using the Demirjian method

    PubMed Central

    Aissaoui, Abir; Salem, Nidhal Haj; Mougou, Meryam; Maatouk, Fethi; Chadly, Ali

    2016-01-01

    Context: Since the Demirjian system of estimating dental maturity was first described, many researchers from different countries have tested its accuracy in diverse populations. Some of these studies have pointed out a need to determine population-specific standards. Aim: The aim of this study is to evaluate the suitability of Demirjian's method for dental age assessment in Tunisian children. Materials and Methods: This is a prospective study previously approved by the Research Ethics Local Committee of the University Hospital Fattouma Bourguiba of Monastir (Tunisia). Panoramic radiographs of 280 healthy Tunisian children aged 2.8–16.5 years were examined with the Demirjian method and scored by three trained observers. Statistical Analysis Used: Dental age was compared to chronological age using the analysis of variance (ANOVA) test. Cohen's kappa test was performed to calculate the intra- and inter-examiner agreements. Results: Underestimation was seen in children aged between 9 and 16 years, and the range of accuracy varied from −0.02 to 3 years. The advancement in dental age as determined by the Demirjian system when compared to chronological age ranged from 0.3 to 1.32 years for young males and from 0.26 to 1.37 years for young females (ages ranging from 3 to 8 years). Conclusions: The standards provided by Demirjian for French-Canadian children may not be suitable for Tunisian children. Each population of children may need its own specific standard for an accurate estimation of chronological age. PMID:27051223

  17. Probabilistic seismic hazard assessment of Italy using kernel estimation methods

    NASA Astrophysics Data System (ADS)

    Zuccolo, Elisa; Corigliano, Mirko; Lai, Carlo G.

    2013-07-01

    A representation of seismic hazard is proposed for Italy based on the zone-free approach developed by Woo (BSSA 86(2):353-362, 1996a), which is based on a kernel estimation method governed by concepts of fractal geometry and self-organized seismicity, not requiring the definition of seismogenic zoning. The purpose is to assess the influence of seismogenic zoning on the results obtained for the probabilistic seismic hazard analysis (PSHA) of Italy using the standard Cornell's method. The hazard has been estimated for outcropping rock site conditions in terms of maps and uniform hazard spectra for a selected site, with 10 % probability of exceedance in 50 years. Both spectral acceleration and spectral displacement have been considered as ground motion parameters. Differences in the results of PSHA between the two methods are compared and discussed. The analysis shows that, in areas such as Italy, characterized by a reliable earthquake catalog and in which faults are generally not easily identifiable, a zone-free approach can be considered a valuable tool to address epistemic uncertainty within a logic tree framework.
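
    Woo's kernel estimation uses magnitude-dependent bandwidths derived from the catalogue; the sketch below is a strongly simplified, fixed-bandwidth Gaussian kernel estimate of epicentre density, intended only to convey the zone-free idea of smoothing observed seismicity instead of defining source zones.

```python
import numpy as np

def kernel_rate(epicentres, grid_xy, bandwidth_km):
    """Smoothed activity-rate surface from epicentre locations (events/area),
    using a fixed-bandwidth 2-D Gaussian kernel. Woo's method additionally
    makes the bandwidth magnitude dependent; this sketch does not."""
    epicentres = np.asarray(epicentres, float)       # shape (n_events, 2), km
    grid_xy = np.asarray(grid_xy, float)             # shape (n_nodes, 2), km
    d2 = ((grid_xy[:, None, :] - epicentres[None, :, :]) ** 2).sum(axis=2)
    k = np.exp(-d2 / (2 * bandwidth_km ** 2)) / (2 * np.pi * bandwidth_km ** 2)
    return k.sum(axis=1)                             # events per km^2 at each node

# Hypothetical catalogue and evaluation grid (projected coordinates in km)
events = [(10, 12), (11, 14), (40, 35), (42, 33), (41, 36)]
grid = [(10, 12), (25, 25), (41, 34)]
print(kernel_rate(events, grid, bandwidth_km=15.0))
```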

  18. Video quality assessment method motivated by human visual perception

    NASA Astrophysics Data System (ADS)

    He, Meiling; Jiang, Gangyi; Yu, Mei; Song, Yang; Peng, Zongju; Shao, Feng

    2016-11-01

    Research on video quality assessment (VQA) plays a crucial role in improving the efficiency of video coding and the performance of video processing. It is well acknowledged that the motion energy model generates motion energy responses in the middle temporal area by simulating the receptive field of neurons in V1 for the motion perception of the human visual system. Motivated by this biological evidence for visual motion perception, a VQA method is proposed in this paper which comprises a motion perception quality index and a spatial index. To be more specific, the motion energy model is applied to evaluate the temporal distortion severity of each frequency component generated from a difference-of-Gaussian filter bank, which produces the motion perception quality index, and the gradient similarity measure is used to evaluate the spatial distortion of the video sequence to obtain the spatial quality index. The experimental results on the LIVE, CSIQ, and IVP video databases demonstrate that the random forests regression technique trained on the generated quality indices corresponds closely to human visual perception and offers significant improvements over comparable well-performing methods. The proposed method has higher consistency with subjective perception and higher generalization capability.
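
    The spatial index described above relies on a gradient similarity measure. The sketch below computes a generic GSM-style similarity map between a reference and a distorted frame using Sobel gradients; it is not necessarily the exact index or constants used by the authors, and the frames are synthetic.

```python
import numpy as np
from scipy import ndimage

def gradient_similarity(ref, dist, c=0.01):
    """Per-pixel gradient similarity between a reference and a distorted frame,
    pooled by averaging. Generic GSM-style measure, not the authors' exact index."""
    def grad_mag(img):
        gx = ndimage.sobel(img, axis=1, mode="reflect")
        gy = ndimage.sobel(img, axis=0, mode="reflect")
        return np.hypot(gx, gy)
    g1, g2 = grad_mag(np.asarray(ref, float)), grad_mag(np.asarray(dist, float))
    sim_map = (2 * g1 * g2 + c) / (g1 ** 2 + g2 ** 2 + c)
    return sim_map.mean()

# Hypothetical frames with intensities in [0, 1]
rng = np.random.default_rng(0)
reference = rng.random((64, 64))
distorted = np.clip(reference + rng.normal(0, 0.05, (64, 64)), 0, 1)
print(f"spatial quality index: {gradient_similarity(reference, distorted):.3f}")
```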

  19. Object relations and real life relationships: a cross method assessment.

    PubMed

    Handelzalts, Jonathan E; Fisher, Shimrit; Naot, Rachel

    2014-04-01

    This study examines the relationship between the psychoanalytic concept of object relations and real life behavior of being in an intimate relationship among heterosexual women. In a multi-method approach we used two different measures; the self-report Bell Object Relations and Reality Testing Inventory (BORRTI; Bell, Billington & Becker, 1986) and the performance based Thematic Apperception Test (TAT) Social Cognition & Object Relations Scale- Global Rating Method SCORS-G (Westen, 1995) to measure the object relations of 60 women. The Alienation subscale of the BORRTI and understanding of social causality subscale of the SCORS-G explained 34.8% of variance of the intimate relationship variable. Thus, women involved in a romantic relationship reported lower rates of alienation on the BORRTI and produced TAT narratives that were more adaptive with regard to understanding of social causality as measured by the SCORS-G than those not currently in a relationship. Results are discussed with reference to the relationship between object relations and real life measures of healthy individuals and in light of the need for a multi-method approach of assessment.

  20. Assessment of intercalibration methods for satellite microwave humidity sounders

    NASA Astrophysics Data System (ADS)

    John, Viju O.; Allan, Richard P.; Bell, William; Buehler, Stefan A.; Kottayil, Ajil

    2013-05-01

    Three methods for intercalibrating humidity sounding channels are compared to assess their merits and demerits. The methods use the following: (1) natural targets (Antarctica and tropical oceans), (2) zonal average brightness temperatures, and (3) simultaneous nadir overpasses (SNOs). Advanced Microwave Sounding Unit-B instruments onboard the polar-orbiting NOAA 15 and NOAA 16 satellites are used as examples. Antarctica is shown to be useful for identifying some of the instrument problems but less promising for intercalibrating humidity sounders due to the large diurnal variations there. Owing to smaller diurnal cycles over tropical oceans, these are found to be a good target for estimating intersatellite biases. Estimated biases are more resistant to diurnal differences when data from ascending and descending passes are combined. Biases estimated from zonal-averaged brightness temperatures show large seasonal and latitude dependence, which could have resulted from diurnal cycle aliasing and scene-radiance dependence of the biases. This method may not be the best for channels with significant surface contributions. We have also tested the impact of clouds on the estimated biases and found that it is not significant, at least for tropical ocean estimates. Biases estimated from SNOs are the least influenced by diurnal cycle aliasing and cloud impacts. However, SNOs cover only a relatively small part of the dynamic range of observed brightness temperatures.
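
    Once collocated SNO pairs have been selected, the intersatellite bias is essentially the mean brightness-temperature difference over those pairs. The sketch below uses hypothetical values and omits the time, distance and viewing-angle screening that real SNO processing requires.

```python
import numpy as np

# Hypothetical collocated (SNO) brightness temperatures (K) for one humidity
# channel on NOAA 15 and NOAA 16; real SNO processing also screens for time,
# distance and viewing-angle mismatches, which is omitted here.
tb_noaa15 = np.array([245.1, 247.8, 250.2, 243.6, 248.9, 246.4])
tb_noaa16 = np.array([245.6, 248.1, 250.9, 244.0, 249.5, 246.8])

diff = tb_noaa16 - tb_noaa15
bias = diff.mean()
stderr = diff.std(ddof=1) / np.sqrt(diff.size)
print(f"NOAA 16 - NOAA 15 intersatellite bias: {bias:.2f} K (+/- {stderr:.2f} K)")
```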

  1. Assessments of lung digestion methods for recovery of fibers

    SciTech Connect

    Warheit, D.B.; Hwang, H.C.; Achinko, L. )

    1991-04-01

    Evaluation of the pulmonary hazards associated with exposure to fibrous materials tends to be more complicated than assessments required for particulate materials. Fibers are defined by aspect ratios, and it is generally considered that physical dimensions play an important role in the pathogenesis of fiber-related lung diseases. Several digestion techniques have been used to recover fibers from exposed lung tissue for clearance studies. Because many of the digestion fluids are corrosive (e.g., bleach, KOH), it is conceivable that the dimensions of recovered fibers are modified during the tissue digestion process, thus creating erroneous data. Accordingly, the authors evaluated two lung digestion methods to assess whether the physical dimensions of bulk samples of fibers were altered following simulated digestion processing. Aliquots of crocidolite and chrysotile asbestos, Kevlar aramid, wollastonite, polyacrylonitrile (PAN)-based carbon, and glass fibers were incubated with either saline, bleach, or KOH and then filtered. Scanning electron microscopy techniques were utilized to measure the physical dimensions (i.e., lengths and diameters) of at least 160 fibers per treatment group of each fiber type. Their results showed that the lengths and diameters of glass fibers and wollastonite were altered after treatment with KOH. In addition, treatment with bleach produced a small reduction in both asbestos fiber-type diameters, and greater changes in Kevlar and wollastonite diameters and carbon fiber lengths.

  2. Ultrasonic Apparatus and Method to Assess Compartment Syndrome

    NASA Technical Reports Server (NTRS)

    Yost, William T. (Inventor); Ueno, Toshiaki (Inventor); Hargens, Alan R. (Inventor)

    2009-01-01

    A process and apparatus for measuring pressure buildup in a body compartment that encases muscular tissue. The method includes assessing the body compartment configuration and identifying the effect of pulsatile components on compartment dimensions and muscle tissue characteristics. This process is used in preventing tissue necrosis, and in decisions about whether to perform surgery on the body compartment for prevention of Compartment Syndrome. An apparatus is used for measuring pressure build-up in the body compartment having components for imparting ultrasonic waves such as a transducer, placing the transducer to impart the ultrasonic waves, capturing the imparted ultrasonic waves, mathematically manipulating the captured ultrasonic waves and categorizing pressure build-up in the body compartment from the mathematical manipulations.

  3. Reduced-reference image quality assessment using moment method

    NASA Astrophysics Data System (ADS)

    Yang, Diwei; Shen, Yuantong; Shen, Yongluo; Li, Hongwei

    2016-10-01

    Reduced-reference image quality assessment (RR IQA) aims to evaluate the perceptual quality of a distorted image through partial information about the corresponding reference image. In this paper, a novel RR IQA metric is proposed using the moment method. We claim that the first and second moments of the wavelet coefficients of natural images change in an approximately regular way, that this regularity is disturbed by different types of distortions, and that the disturbance is relevant to human perception of quality. We measure the difference in these statistical parameters between the reference and distorted images to predict the visual quality degradation. The introduced IQA metric is easy to implement and has relatively low computational complexity. The experimental results on the Laboratory for Image and Video Engineering (LIVE) and Tampere Image Database (TID) image databases indicate that the proposed metric has good predictive performance.
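
    The idea of comparing low-order moments of wavelet coefficients between the reference and distorted images can be sketched as follows; PyWavelets is an assumed dependency, the wavelet and decomposition level are arbitrary choices, and the images are synthetic, so this illustrates the approach rather than the authors' exact parameterisation.

```python
import numpy as np
import pywt

def subband_moments(img, wavelet="db2", level=3):
    """First and second moments of wavelet detail coefficients, per subband."""
    coeffs = pywt.wavedec2(np.asarray(img, float), wavelet, level=level)
    moments = []
    for detail in coeffs[1:]:                      # skip the approximation band
        for band in detail:                        # (horizontal, vertical, diagonal)
            band = np.abs(band).ravel()
            moments.append((band.mean(), (band ** 2).mean()))
    return np.array(moments)

# RR IQA idea: the sender transmits only the reference moments; the receiver
# compares them against the distorted image's moments.
rng = np.random.default_rng(1)
reference = rng.random((128, 128))
distorted = np.clip(reference + rng.normal(0, 0.1, (128, 128)), 0, 1)
difference = np.abs(subband_moments(reference) - subband_moments(distorted)).mean()
print(f"moment-difference score (larger = more distorted): {difference:.4f}")
```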

  4. Performance Assessment Method for a Forged Fingerprint Detection Algorithm

    NASA Astrophysics Data System (ADS)

    Shin, Yong Nyuo; Jun, In-Kyung; Kim, Hyun; Shin, Woochang

    The threat of invasion of privacy and of the illegal appropriation of information both increase with the expansion of the biometrics service environment to open systems. However, while certificates or smart cards can easily be cancelled and reissued if found to be missing, there is no way to recover the unique biometric information of an individual following a security breach. With the recognition that this threat factor may disrupt the large-scale civil service operations approaching implementation, such as electronic ID cards and e-Government systems, many agencies and vendors around the world continue to develop forged fingerprint detection technology, but no objective performance assessment method has, to date, been reported. Therefore, in this paper, we propose a methodology designed to evaluate the objective performance of the forged fingerprint detection technology that is currently attracting a great deal of attention.

  5. Methods for assessing the quality of runoff from Minnesota peatlands

    SciTech Connect

    Clausen, J.C.

    1981-01-01

    The quality of runoff from large, undisturbed peatlands in Minnesota is characterized, and sampling results from a number of bogs (referred to as a multiple watershed approach) were used to assess the effects of peat mining on the quality of bog runoff. Runoff from 45 natural peatlands and one mined bog was sampled five times in 1979-80 and analyzed for 34 water quality characteristics. Peatland watersheds were classified as bog, transition, or fen, based upon both water quality and watershed characteristics. Alternative classification methods were based on frequency distributions, cluster analysis, discriminant analysis, and principal component analysis results. A multiple watershed approach was used as a basis for drawing inferences regarding the quality of runoff from a representative sample of natural bogs and a mined bog. The multiple watershed technique applied provides an alternative to long-term paired watershed experiments in evaluating the effects of land use activities on the quality of runoff from peatlands in Minnesota.

  6. Comparative Assessment of Advanced Gas Hydrate Production Methods

    SciTech Connect

    M. D. White; B. P. McGrail; S. K. Wurstner

    2009-06-30

    Displacing natural gas and petroleum with carbon dioxide is a proven technology for producing conventional geologic hydrocarbon reservoirs, and producing additional yields from abandoned or partially produced petroleum reservoirs. Extending this concept to natural gas hydrate production offers the potential to enhance gas hydrate recovery with concomitant permanent geologic sequestration. Numerical simulation was used to assess a suite of carbon dioxide injection techniques for producing gas hydrates from a variety of geologic deposit types. Secondary hydrate formation was found to inhibit contact of the injected CO2 regardless of injectate phase state, thus diminishing the exchange rate due to pore clogging and hydrate zone bypass of the injected fluids. Additional work is needed to develop methods of artificially introducing high-permeability pathways in gas hydrate zones if injection of CO2 in either gas, liquid, or micro-emulsion form is to be more effective in enhancing gas hydrate production rates.

  7. Methods to assess lipid accumulation in cancer cells.

    PubMed

    Sikkeland, Jørgen; Jin, Yang; Saatcioglu, Fahri

    2014-01-01

    Oncogenesis and tumor progression are associated with significant alterations in cellular metabolism. One metabolic pathway that is commonly deregulated in malignant cells is de novo lipogenesis. Lipogenesis is indeed highly upregulated in several types of cancer, a phenomenon that is linked to tumor progression and poor prognosis. Steroid hormones play an essential role in the growth of a variety of cancers and have been shown to increase the expression and activity of several lipogenic factors, including fatty acid synthase and sterol regulatory element-binding proteins. Such an altered gene expression profile promotes lipid biogenesis and may result in the accumulation of neutral lipids, which become visible as cytoplasmic lipid droplets. By using breast and prostate cancer cells exposed to steroid hormones as a model, here we describe methods for the direct qualitative and quantitative assessment of neutral lipid accumulation in malignant cells.

  8. In vivo imaging methods to assess glaucomatous optic neuropathy

    PubMed Central

    Fortune, Brad

    2015-01-01

    The goal of this review is to summarize the most common imaging methods currently applied for in vivo assessment of ocular structure in animal models of experimental glaucoma with an emphasis on translational relevance to clinical studies of the human disease. The most common techniques in current use include optical coherence tomography and scanning laser ophthalmoscopy. In reviewing the application of these and other imaging modalities to study glaucomatous optic neuropathy, this article is organized into three major sections: 1) imaging the optic nerve head, 2) imaging the retinal nerve fiber layer and 3) imaging retinal ganglion cell soma and dendrites. The article concludes with a brief section on possible future directions. PMID:26048475

  9. Assessment of methods for hydrogen production using concentrated solar energy

    SciTech Connect

    Glatzmaier, G.; Blake, D.; Showalter, S.

    1998-01-01

    The purpose of this work was to assess methods for hydrogen production using concentrated solar energy. The results of this work can be used to guide future work in the application of concentrated solar energy to hydrogen production. Specifically, the objectives were to: (1) determine the cost of hydrogen produced from methods that use concentrated solar thermal energy, and (2) compare these costs to those of hydrogen produced by electrolysis using photovoltaics and wind energy as the electricity source. This project had the following scope of work: (1) perform cost analysis on ambient temperature electrolysis using the 10 MWe dish-Stirling and 200 MWe power tower technologies, using two cases of projected costs (years 2010 and 2020) for the dish-Stirling system and two cases (years 2010 and 2020) for the power tower; (2) perform cost analysis on high temperature electrolysis using the 200 MWe power tower technology and projected costs for the year 2020; and (3) identify and describe the key technical issues for high temperature thermal dissociation and the thermochemical cycles.

  10. Organisational impact: Definition and assessment methods for medical devices.

    PubMed

    Roussel, Christophe; Carbonneil, Cédric; Audry, Antoine

    2016-02-01

    Health technology assessment (HTA) is a rapidly developing area, and the value of taking non-clinical fields into consideration is growing. Although the health-economic aspect is commonly recognised, the evaluation of organisational impact has not been studied nearly as much. The goal of this work was to provide a definition of organisational impact in the medical device sector by defining its contours and exploring the evaluation methods specific to this field. Following an analysis of the literature concerning the impact of technologies on organisations as well as the medical literature, and after reviewing the relevant regulatory texts, the group of experts identified 12 types of organisational impact. A number of medical devices were carefully screened using the criteria grid, which proved to be operational and to differentiate properly. From the analysis of practice and of the methods described, the group was then able to derive a few guidelines for successfully evaluating organisational impact. This work shows that taking organisational impact into consideration may be critical alongside the other criteria currently favoured (clinical and economic). What remains is to give this factor a role in the decision-making process, one that meets the economic efficiency principle.

  11. Gingival Biotype Assessment in a Healthy Periodontium: Transgingival Probing Method

    PubMed Central

    Manjunath, R. G. Shiva; Sarkar, Arijit

    2015-01-01

Background Gingival biotype is the thickness of the gingiva in the faciopalatal dimension. It has a significant impact on the outcome of restorative, regenerative and implant therapy, and a direct correlation has been suggested with susceptibility to gingival recession following any surgical procedure. Therefore, the study aimed to assess gingival biotype in different age groups of males and females using the transgingival probing method. Materials and Methods Gingival thickness (GT) was evaluated in 336 patients, including males and females of different age groups. Biotype was classified based on the transparency of the periodontal probe through the gingival margin while probing the buccal sulcus. The final data collected were then used for statistical analysis. Results A significant difference was found between males and females, with males showing a thick biotype. Of the total sample, 76.9% of males showed a thick biotype compared to 13.3% of females, a statistically significant difference. Conclusion This was probably one of the few attempts to correlate gingival biotype with different age groups in males and females. A clearly thick gingiva was found in more than two-thirds of the male subjects, whereas the majority of female subjects showed a thin biotype. It was also seen that in females, the gingival biotype varies with age, unlike in males. PMID:26155566
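    A minimal sketch of the male-versus-female comparison reported above. The abstract gives only percentages, so the group sizes and cell counts below are assumptions chosen to reproduce them approximately, and the chi-square test stands in for whatever test the authors applied.

    ```python
    from scipy.stats import chi2_contingency

    # Assumed split of the 336 subjects (the abstract reports only percentages);
    # rows = [male, female], columns = [thick biotype, thin biotype].
    table = [[130, 39],     # ~76.9% of 169 males with thick biotype
             [22, 145]]     # ~13.2% of 167 females with thick biotype
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi-square = {chi2:.1f}, p = {p:.3g}")
    ```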

  12. Quantitative assessment of gene expression network module-validation methods.

    PubMed

    Li, Bing; Zhang, Yingying; Yu, Yanan; Wang, Pengqian; Wang, Yongcheng; Wang, Zhong; Wang, Yongyan

    2015-10-16

Validation of pluripotent modules in diverse networks holds enormous potential for systems biology and network pharmacology. An arising challenge is how to assess the accuracy of discovering all potential modules from multi-omic networks and validating their architectural characteristics based on innovative computational methods beyond function enrichment and biological validation. To chart progress in this domain, we systematically divided the existing Computational Validation Approaches based on Modular Architecture (CVAMA) into topology-based approaches (TBA) and statistics-based approaches (SBA). We compared the available module-validation methods on 11 gene expression datasets; partially consistent results, in the form of homogeneous models, were obtained within each individual approach, whereas contradictory results were found between TBA and SBA. The TBA Zsummary statistic had a higher Validation Success Ratio (VSR) (51%) and a higher Fluctuation Ratio (FR) (80.92%), whereas the SBA approximately unbiased (AU) p-value had a lower VSR (12.3%) and a lower FR (45.84%). The gray-area simulation study revealed a consistent result for these two models and indicated a lower Variation Ratio (VR) (8.10%) of TBA at 6 simulated levels. Despite facing many novel challenges and evidence limitations, CVAMA may offer novel insights into modular networks.
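    A minimal sketch of how a Validation Success Ratio of this kind could be tallied, assuming the Zsummary > 2 convention often used for topology-based module preservation; the paper's exact criteria, module sets and datasets are not given in the abstract, so everything below is illustrative.

    ```python
    import numpy as np

    def validation_success_ratio(passed):
        """Fraction of (dataset, module) tests that pass a validation criterion."""
        return float(np.mean(passed))

    # Synthetic Zsummary scores for 20 modules evaluated on 11 datasets (assumed).
    rng = np.random.default_rng(3)
    zsummary = rng.normal(loc=2.2, scale=1.0, size=(11, 20))

    # Assumed criterion for the topology-based approach: Zsummary > 2.
    vsr_tba = validation_success_ratio(zsummary > 2.0)
    print(f"TBA validation success ratio: {vsr_tba:.2f}")
    ```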

  13. Advanced criticality assessment method for sewer pipeline assets.

    PubMed

    Syachrani, S; Jeong, H D; Chung, C S

    2013-01-01

    For effective management of water and wastewater infrastructure, the United States Environmental Protection Agency (US-EPA) has long emphasized the significant role of risk in prioritizing and optimizing asset management decisions. High risk assets are defined as assets with a high probability of failure (e.g. soon to fail, old, poor condition) and high consequences of failure (e.g. environmental impact, high expense, safety concerns, social disruption). In practice, the consequences of failure are often estimated by experts through a Delphi method. However, the estimation of the probability of failure has been challenging as it requires the thorough analysis of the historical condition assessment data, repair and replacement records, and other factors influencing the deterioration of the asset. The most common predictor in estimating the probability of failure is calendar age. However, a simple reliance on calendar age as a basis for estimating the asset's deterioration pattern completely ignores the different aging characteristics influenced by various operational and environmental conditions. This paper introduces a new approach of using 'real age' in estimating the probability of failure. Unlike the traditional calendar age method, the real age represents the adjusted age based on the unique operational and environmental conditions of the asset. Depending on the individual deterioration pattern, the real age could be higher or lower than its calendar age. Using the concept of real age, the probability of failure of an asset can be more accurately estimated.
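    A minimal sketch of the 'real age' idea described above, assuming multiplicative condition factors and an illustrative Weibull failure model; neither the factor values nor the Weibull form are taken from the paper.

    ```python
    import math

    def real_age(calendar_age, condition_factors):
        """Adjust calendar age by multiplicative factors encoding operational and
        environmental conditions (all factor values here are assumptions)."""
        factor = 1.0
        for f in condition_factors.values():
            factor *= f
        return calendar_age * factor

    def probability_of_failure(age, shape=2.5, scale=80.0):
        """Illustrative Weibull CDF; the paper does not prescribe this form."""
        return 1.0 - math.exp(-((age / scale) ** shape))

    pipe = {"high_groundwater": 1.2, "corrosive_soil": 1.3, "low_flow": 0.9}
    cal_age = 40.0
    adj_age = real_age(cal_age, pipe)
    print(f"calendar-age PoF: {probability_of_failure(cal_age):.2f}")
    print(f"real-age PoF:     {probability_of_failure(adj_age):.2f}")
    ```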

  14. Irritable bowel syndrome: methods, mechanisms, and pathophysiology. Methods to assess visceral hypersensitivity in irritable bowel syndrome.

    PubMed

    Keszthelyi, D; Troost, F J; Masclee, A A

    2012-07-15

    Irritable bowel syndrome (IBS) is a common functional gastrointestinal disorder, characterized by recurrent abdominal pain or discomfort in combination with disturbed bowel habits in the absence of identifiable organic cause. Visceral hypersensitivity has emerged as a key hypothesis in explaining the painful symptoms in IBS and has been proposed as a "biological hallmark" for the condition. Current techniques of assessing visceral perception include the computerized barostat using rectal distensions, registering responses induced by sensory stimuli including the flexor reflex and cerebral evoked potentials, as well as brain imaging modalities such as functional magnetic resonance imaging and positron emission tomography. These methods have provided further insight into alterations in pain processing in IBS, although the most optimal method and condition remain to be established. In an attempt to give an overview of these methods, a literature search in the electronic databases PubMed and MEDLINE was executed using the search terms "assessment of visceral pain/visceral nociception/visceral hypersensitivity" and "irritable bowel syndrome." Both original articles and review articles were considered for data extraction. This review aims to discuss currently used modalities in assessing visceral perception, along with advantages and limitations, and aims also to define future directions for methodological aspects in visceral pain research. Although novel paradigms such as brain imaging and neurophysiological recordings have been introduced in the study of visceral pain, confirmative studies are warranted to establish their robustness and clinical relevance. Therefore, subjective verbal reporting following rectal distension currently remains the best-validated technique in assessing visceral perception in IBS.

  15. Critical Assessment of Correction Methods for Fisheye Lens Distortion

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Tian, C.; Huang, Y.

    2016-06-01

A fisheye lens is widely used to create a wide panoramic or hemispherical image. It is an ultra wide-angle lens that produces strong visual distortion. The distortion modeling and estimation of the fisheye lens are the crucial step for fisheye lens calibration and image rectification in computer vision and close-range photography. There are two kinds of distortion: radial and tangential distortion. Radial distortion is large for fisheye imaging and critical for the subsequent image processing. Although many researchers have developed calibration algorithms for the radial distortion of fisheye lenses, quantitative evaluation of the correction performance has remained a challenge. This is the first paper that intuitively and objectively evaluates the performance of five different calibration algorithms. Up-to-date research on fisheye lens calibration is comprehensively reviewed to identify the research need. To differentiate their performance in terms of precision and ease of use, five methods are then tested using a diverse set of actual images of a checkerboard taken at Wuhan University, China under varying lighting conditions, shadows, and shooting angles. The rational function model, which was generally used for wide-angle lens correction, outperforms the other methods. However, the one-parameter division model is easy for practical use without compromising too much precision, because it depends on the linear structure in the image and requires no preceding calibration. It is a tradeoff between correction precision and ease of use. By critically assessing the strengths and limitations of the existing algorithms, the paper provides valuable insight and guidelines for future practice and algorithm development that are important for fisheye lens calibration. It is promising for the optimal design of lens correction models that are suitable for the millions of portable imaging devices.
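    A minimal sketch of the standard one-parameter division model mentioned above, p_u = c + (p_d - c) / (1 + k * r_d^2); the distortion centre and the value of k below are made-up values for illustration, not parameters estimated in the paper.

    ```python
    import numpy as np

    def undistort_division(points_d, center, k):
        """Map distorted image points to undistorted points with the
        one-parameter division model: p_u = c + (p_d - c) / (1 + k * r_d**2)."""
        d = points_d - center                     # offsets from distortion centre
        r2 = np.sum(d**2, axis=1, keepdims=True)  # squared radial distance
        return center + d / (1.0 + k * r2)

    # Illustrative use with made-up values (k and the centre are assumptions):
    pts = np.array([[1200.0, 800.0], [300.0, 150.0]])
    print(undistort_division(pts, center=np.array([960.0, 540.0]), k=-1e-7))
    ```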

  16. Basic theory and methods of dosimetry for use in risk assessment of genotoxic chemicals. Final report

    SciTech Connect

    Ehrenberg, L.; Granath, F.

    1992-12-31

This project is designed to be theoretical, with limited experimental input. The work then would have to be directed towards an identification of problems, with an emphasis on the potential ability of molecular/biochemical methods to reach a solution, rather than aiming at solutions of the problems. In addition, the work is dependent on experimental work within parallel projects. Initially, projects running at this laboratory were strongly tied up with practical matters, such as the development of monitoring methods for specific exposures, with limited resources for basic research. As sketched in section 4 of the scientific report below, the meaningfulness of molecular/biochemical methods and their potential contribution to the problem of risk estimation has to be seen against a broad overview of this problem and current efforts to solve it. This overview, given as a brief summary in section 3, shows the necessity of combining different fields of research, holding them together by strictly quantitative aspects.

  17. Development of Probabilistic Methods to Assess Meteotsunami Hazards

    NASA Astrophysics Data System (ADS)

    Geist, E. L.; Ten Brink, U. S.

    2014-12-01

    A probabilistic method to assess the hazard from meteotsunamis is developed from both probabilistic tsunami hazard analysis (PTHA) and probabilistic storm-surge forecasting. Meteotsunamis are unusual sea level events, generated when the speed of an atmospheric pressure or wind disturbance is comparable to the phase speed of long waves in the ocean. A general aggregation equation, similar to that used in PTHA, incorporates different meteotsunami sources. A historical record of 116 pressure disturbances recorded between 2000 and 2013 by the U.S. Automated Surface Observing Stations (ASOS) along the U.S. East Coast is used to establish a continuous analytic distribution of each source parameter as well as the overall Poisson rate of occurrence. Initially, atmospheric parameters are considered independently such that the joint probability distribution is given by the product of each marginal distribution. The probabilistic equations are implemented using a Monte Carlo scheme, where a synthetic catalog of pressure disturbances is compiled by sampling the parameter distributions. For each entry in the catalog, ocean wave amplitudes are computed using a finite-difference hydrodynamic model that solves for the linearized long-wave equations. Aggregation of the results from the Monte Carlo scheme results in a meteotsunami hazard curve that plots the annualized rate of exceedance with respect to maximum event amplitude for a particular location along the coast. Results from using 20 synthetic catalogs of 116 events each, resampled from the parent parameter distributions, yield mean and quantile hazard curves. An example is presented for four Mid-Atlantic sites using ASOS data in which only atmospheric pressure disturbances from squall lines and derechos are considered. Results indicate that site-to-site variations among meteotsunami hazard curves are related to the geometry and width of the adjacent continental shelf. The new hazard analysis of meteotsunamis is important for
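    A minimal sketch of the Monte Carlo aggregation described above, with assumed parameter distributions and a placeholder amplitude function standing in for the finite-difference hydrodynamic model; only the overall structure (sample a synthetic catalogue, compute amplitudes, convert exceedance counts to an annual rate) follows the abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed (illustrative) marginal distributions for the pressure-disturbance
    # parameters; the study fits these to the ASOS catalogue instead.
    def sample_catalog(n):
        amp_hpa = rng.lognormal(mean=0.5, sigma=0.4, size=n)   # pressure jump
        speed = rng.normal(loc=22.0, scale=5.0, size=n)        # m/s
        return amp_hpa, speed

    def wave_amplitude(amp_hpa, speed):
        # Placeholder transfer function standing in for the hydrodynamic model
        # used in the study (pure assumption).
        resonance = np.exp(-((speed - 22.0) / 8.0) ** 2)       # Proudman-like peak
        return 0.05 * amp_hpa * (1.0 + 3.0 * resonance)        # metres

    n_events, years = 116, 14.0
    rate = n_events / years                                    # Poisson rate (events/yr)
    amp, spd = sample_catalog(n_events)
    eta = wave_amplitude(amp, spd)

    # Hazard curve: annual rate of exceeding each amplitude threshold.
    thresholds = np.linspace(0.05, 1.0, 20)
    exceed_rate = np.array([rate * np.mean(eta > t) for t in thresholds])
    for t, r in zip(thresholds, exceed_rate):
        print(f"{t:4.2f} m : {r:5.2f} /yr")
    ```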

  18. Use of the Attribute Hierarchy Method for Development of Student Cognitive Models and Diagnostic Assessments in Geoscience Education

    NASA Astrophysics Data System (ADS)

    Corrigan, S.; Brodsky, L. M.; Loper, S.; Brown, N.; Curley, J.; Baker, J.; Goss, M.; Castek, J.; Barber, J.

    2010-12-01

    There is a recognized need to better understand student learning in the geosciences (Stofflet, 1994; Zalles, Quallmalz, Gobert and Pallant, 2007). Educators, cognitive psychologists and practicing scientists have also called for instructional approaches that support deep conceptual development (Manduca, Mogk and Stillings, 2004, Libarkin and Kurdziel, 2006). In both cases there is an important role for educational measures that can generate descriptions of how student understanding develops over time and inform instruction. The presenters will suggest one way of responding to these needs by describing the Attribute Hierarchy Method (AHM) of assessment (Leighton, Gierl and Hunka, 2004; Gierl, Cui, Wang and Zhou, 2008) as enacted in a large-scale earth science curriculum development project funded by the Bill and Melinda Gates Foundation. The AHM is one approach to criterion referenced, diagnostic assessment that ties measure design to cognitive models of student learning in order to support justified inferences about students’ understanding and the knowledge required for continued development. The Attribute Hierarchy Method bears potential for researchers and practitioners interested in learning progressions and solves many problems associated with making meaningful, justified inferences about students’ understanding based on their assessment performances. The process followed to design and develop the project’s cognitive models as well as a description of how they are used in subsequent assessment task design will be emphasized in order to demonstrate how the AHM may be applied in the context of geoscience education. Results from over twenty student cognitive interviews, and two hypothesized cognitive models -- one describing a student pathway for understanding rock formation and a second describing a student pathway for increasingly sophisticated use of maps and models in the geosciences - are also described. Sample assessment items will be provided as

  19. Evaluation of Current Assessment Methods in Engineering Entrepreneurship Education

    ERIC Educational Resources Information Center

    Purzer, Senay; Fila, Nicholas; Nataraja, Kavin

    2016-01-01

    Quality assessment is an essential component of education that allows educators to support student learning and improve educational programs. The purpose of this study is to evaluate the current state of assessment in engineering entrepreneurship education. We identified 52 assessment instruments covered in 29 journal articles and conference…

  20. Measuring meaningful learning in the undergraduate chemistry laboratory

    NASA Astrophysics Data System (ADS)

    Galloway, Kelli R.

The undergraduate chemistry laboratory has been an essential component of chemistry education for over a century. The literature includes reports on investigations of singular aspects of laboratory learning and attempts to measure the efficacy of reformed laboratory curricula, as well as studies of faculty goals for laboratory learning, which found common goals among instructors for students to learn laboratory skills, techniques, experimental design, and to develop critical thinking skills. These findings are important for improving teaching and learning in the undergraduate chemistry laboratory, but research is needed to connect the faculty goals to student perceptions. This study was designed to explore students' ideas about learning in the undergraduate chemistry laboratory. Novak's Theory of Meaningful Learning was used as a guide for the data collection and analysis choices in this research. Novak's theory states that in order for meaningful learning to occur, the cognitive, affective, and psychomotor domains must be integrated. The psychomotor domain is inherent in the chemistry laboratory, but the extent to which the cognitive and affective domains are integrated is unknown. For meaningful learning to occur in the laboratory, students must actively integrate both the cognitive and affective domains into the "doing" of their laboratory work. The Meaningful Learning in the Laboratory Instrument (MLLI) was designed to measure students' cognitive and affective expectations and experiences within the context of conducting experiments in the undergraduate chemistry laboratory. Evidence for the validity and reliability of the data generated by the MLLI was collected from multiple quantitative studies: a one-semester study at one university, a one-semester study at 15 colleges and universities across the United States, and a longitudinal study in which the MLLI was administered 6 times during two years of general and organic chemistry laboratory courses. Results from

  1. Qualitative Insights from a Canadian Multi-Institutional Research Study: In Search of Meaningful E-Learning

    ERIC Educational Resources Information Center

    Carter, Lorraine M.; Salyers, Vince; Myers, Sue; Hipfner, Carol; Hoffart, Caroline; MacLean, Christa; White, Kathy; Matus, Theresa; Forssman, Vivian; Barrett, Penelope

    2014-01-01

    This paper reports the qualitative findings of a mixed methods research study conducted at three Canadian post-secondary institutions. Called the Meaningful E-learning or MEL project, the study was an exploration of the teaching and learning experiences of faculty and students as well as their perceptions of the benefits and challenges of…

  2. Parental Functioning in Families of Children with ADHD: Evidence for Behavioral Parent Training and Importance of Clinically Meaningful Change

    ERIC Educational Resources Information Center

    Gerdes, Alyson C.; Haack, Lauren M.; Schneider, Brian W.

    2012-01-01

    Objective/Method: Statistically significant and clinically meaningful effects of behavioral parent training on parental functioning were examined for 20 children with ADHD and their parents who had successfully completed a psychosocial treatment for ADHD. Results/Conclusion: Findings suggest that behavioral parent training resulted in…

  3. Developing a Spiritual Assessment Toolbox: A Discussion of the Strengths and Limitations of Five Different Assessment Methods

    ERIC Educational Resources Information Center

    Hodge, David R.

    2005-01-01

    Increasingly, social workers are being called on to conduct spiritual assessments, yet few assessment methods have appeared in academic literature. This article reviews five complementary assessment approaches that have recently been developed to highlight different facets of clients' spiritual lives. Specifically, one verbal model, spiritual…

  4. Using slides to test for changes in crown defoliation assessment methods. Part I: Visual assessment of slides.

    PubMed

    Dobbertin, Matthias; Hug, Christian; Mizoue, Nobuya

    2004-11-01

In this study we used photographs of tree crowns to test whether the assessment methods for tree defoliation in Switzerland have changed over time. We randomly selected 24 series of slides of Norway spruce with field assessments made between 1986 and 1995. The slides were randomly arranged and assessed by three experts without prior knowledge of the year when the slide was taken or of the tree number. Defoliation was assessed using the Swiss reference photo guide. Although the correlations between the field assessments and slide assessments were high (Spearman's rank correlation coefficient ranged between 0.79 and 0.83), we found significant differences between field and slide assessments (4.3% to 9% underprediction by the slide assessors) and between the slide assessments. However, no significant trends in field assessment methods could be detected. When the mean differences between field and slide assessments were subtracted, field assessors in some years consistently underpredicted (1990, 1992) or overpredicted defoliation (1987, 1991). Defoliation tended to be overpredicted in slides taken against the light, and underpredicted for trees with more than 25% crown overlap. We conclude that slide series can be used to detect changes in assessment methods. However, potential observer bias calls for more objective methods of assessment.
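    A minimal sketch of the paired field-versus-slide comparison described above; the defoliation scores are invented, and the paired t-test stands in for whatever test the authors used to compare mean differences.

    ```python
    import numpy as np
    from scipy import stats

    # Illustrative paired scores (percent defoliation); the real data were the
    # 24 slide series assessed both in the field and from slides.
    field = np.array([10, 25, 30, 45, 20, 15, 35, 50, 40, 5])
    slide = np.array([8, 20, 28, 38, 18, 12, 30, 44, 36, 5])

    rho, p_rho = stats.spearmanr(field, slide)
    t, p_t = stats.ttest_rel(field, slide)          # tests the mean field-slide offset
    print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3f})")
    print(f"mean difference = {np.mean(field - slide):.1f} points (p = {p_t:.3f})")
    ```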

  5. 76 FR 71934 - Tobacco Transition Payment Program; Availability of Current Assessment Methods Determination...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-21

    ... Farm Service Agency Tobacco Transition Payment Program; Availability of Current Assessment Methods... regarding two consolidated determinations with respect to the current methods used to calculate manufacturer... matter, that the continued use of current procedure to calculate manufacturer and importer assessments...

  6. Assessment of Methods for the Intracellular Blockade of GABAA Receptors

    PubMed Central

    Atherton, Laura A.; Burnell, Erica S.; Mellor, Jack R.

    2016-01-01

    Selective blockade of inhibitory synaptic transmission onto specific neurons is a useful tool for dissecting the excitatory and inhibitory synaptic components of ongoing network activity. To achieve this, intracellular recording with a patch solution capable of blocking GABAA receptors has advantages over other manipulations, such as pharmacological application of GABAergic antagonists or optogenetic inhibition of populations of interneurones, in that the majority of inhibitory transmission is unaffected and hence the remaining network activity preserved. Here, we assess three previously described methods to block inhibition: intracellular application of the molecules picrotoxin, 4,4’-dinitro-stilbene-2,2’-disulphonic acid (DNDS) and 4,4’-diisothiocyanostilbene-2,2’-disulphonic acid (DIDS). DNDS and picrotoxin were both found to be ineffective at blocking evoked, monosynaptic inhibitory postsynaptic currents (IPSCs) onto mouse CA1 pyramidal cells. An intracellular solution containing DIDS and caesium fluoride, but lacking nucleotides ATP and GTP, was effective at decreasing the amplitude of IPSCs. However, this effect was found to be independent of DIDS, and the absence of intracellular nucleotides, and was instead due to the presence of fluoride ions in this intracellular solution, which also blocked spontaneously occurring IPSCs during hippocampal sharp waves. Critically, intracellular fluoride ions also caused a decrease in both spontaneous and evoked excitatory synaptic currents and precluded the inclusion of nucleotides in the intracellular solution. Therefore, of the methods tested, only fluoride ions were effective for intracellular blockade of IPSCs but this approach has additional cellular effects reducing its selectivity and utility. PMID:27501143

  7. 76 FR 4345 - A Method To Assess Climate-Relevant Decisions: Application in the Chesapeake Bay

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-25

    ... AGENCY A Method To Assess Climate-Relevant Decisions: Application in the Chesapeake Bay AGENCY... review draft document titled, ``A Method to Assess Climate-Relevant Decisions: Application in the.../conferences/peerreview/register-chesapeake.htm . The draft ``A Method to Assess Climate-Relevant...

  8. Using nursing clinical decision support systems to achieve meaningful use.

    PubMed

    Harrison, Roberta L; Lyerla, Frank

    2012-07-01

    The Health Information Technology and Clinical Health Act (one component of the American Recovery and Reinvestment Act) is responsible for providing incentive payments to hospitals and eligible providers in an effort to support the adoption of electronic health records. Future penalties are planned for electronic health record noncompliance. In order to receive incentives and avoid penalties, hospitals and eligible providers must demonstrate "meaningful use" of their electronic health records. One of the meaningful-use objectives established by the Centers for Medicare & Medicaid Services involves the use of a clinical decision support rule that addresses a hospital-defined, high-priority condition. This article describes the Plan-Do-Study-Act process for creating and implementing a nursing clinical decision support system designed to improve guideline adherence for hypoglycemia management. This project identifies hypoglycemia management as the high-priority area. However, other facilities with different high-priority conditions may find the process presented in this article useful for implementing additional clinical decision support rules geared toward improving outcomes and meeting federal mandates.

  9. HDMR methods to assess reliability in slope stability analyses

    NASA Astrophysics Data System (ADS)

    Kozubal, Janusz; Pula, Wojciech; Vessia, Giovanna

    2014-05-01

-soil masses) resulting in sliding mechanisms have been investigated in this study. The reliability index values drawn from the HDMR method have been compared with conventional approaches such as neural networks: the efficiency of HDMR is shown in the case studied. References Chowdhury R., Rao B.N. and Prasad A.M. 2009. High-dimensional model representation for structural reliability analysis. Commun. Numer. Meth. Engng, 25: 301-337. Chowdhury R. and Rao B. 2010. Probabilistic Stability Assessment of Slopes Using High Dimensional Model Representation. Computers and Geotechnics, 37: 876-884.

  10. Current Development in Elderly Comprehensive Assessment and Research Methods

    PubMed Central

    Jiang, Shantong; Li, Pingping

    2016-01-01

    Comprehensive geriatric assessment (CGA) is a core and an essential part of the comprehensive care of the aging population. CGA uses specific tools to summarize elderly status in several domains that may influence the general health and outcomes of diseases of elderly patients, including assessment of medical, physical, psychological, mental, nutritional, cognitive, social, economic, and environmental status. Here, in this paper, we review different assessment tools used in elderly patients with chronic diseases. The development of comprehensive assessment tools and single assessment tools specially used in a dimension of CGA was discussed. CGA provides substantial insight into the comprehensive management of elderly patients. Developing concise and effective assessment instruments is helpful to carry out CGA widely to create a higher clinical value. PMID:27042661

  11. Making Each Other’s Daily Life: Nurse Assistants’ Experiences and Knowledge on Developing a Meaningful Daily Life in Nursing Homes

    PubMed Central

    James, Inger; Fredriksson, Carin; Wahlström, Catrin; Kihlgren, Annica; Blomberg, Karin

    2014-01-01

Background: In a larger action research project, guidelines were generated for how a meaningful daily life could be developed for older persons. In this study, we focused on the nurse assistants’ (NAs) perspectives, as their knowledge is essential for a well-functioning team and quality of care. The aim was to learn from NAs’ experiences and knowledge about how to develop a meaningful daily life for older persons in nursing homes and the meaning NAs ascribe to their work. Methods: The project is based on Participatory and Appreciative Action and Reflection. Data were generated through interviews, participating observations and informal conversations with 27 NAs working in nursing homes in Sweden, and a thematic analysis was used. Result: NAs developed a meaningful daily life by sensing and finding the “right” way of being (Theme 1). They sense and read the older person in order to judge how the person was feeling (Theme 2). They adapt to the older person (Theme 3) and share their daily life (Theme 4). NAs use emotional involvement to develop a meaningful daily life for the older person and meaning in their own work (Theme 5), ultimately making each other’s daily lives meaningful. Conclusion: It was obvious that NAs based the development of a meaningful daily life on different forms of knowledge: theoretical and practical knowledge, and practical wisdom, all of which are intertwined. These results could be used within the team to constitute a meaningful daily life for older persons in nursing homes. PMID:25246997

  12. Proactive interference does not meaningfully distort visual working memory capacity estimates in the canonical change detection task.

    PubMed

    Lin, Po-Han; Luck, Steven J

    2012-01-01

    The change detection task has become a standard method for estimating the storage capacity of visual working memory. Most researchers assume that this task isolates the properties of an active short-term storage system that can be dissociated from long-term memory systems. However, long-term memory storage may influence performance on this task. In particular, memory traces from previous trials may create proactive interference that sometimes leads to errors, thereby reducing estimated capacity. Consequently, the capacity of visual working memory may be higher than is usually thought, and correlations between capacity and other measures of cognition may reflect individual differences in proactive interference rather than individual differences in the capacity of the short-term storage system. Indeed, previous research has shown that change detection performance can be influenced by proactive interference under some conditions. The purpose of the present study was to determine whether the canonical version of the change detection task - in which the to-be-remembered information consists of simple, briefly presented features - is influenced by proactive interference. Two experiments were conducted using methods that ordinarily produce substantial evidence of proactive interference, but no proactive interference was observed. Thus, the canonical version of the change detection task can be used to assess visual working memory capacity with no meaningful influence of proactive interference.
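    The capacity estimate referred to here is typically computed from hit and false-alarm rates. A minimal sketch of the Cowan K formula commonly paired with the change detection task follows; the abstract itself does not name the estimator, so this is an assumption about standard practice, and the numbers are illustrative.

    ```python
    def cowan_k(hit_rate, false_alarm_rate, set_size):
        """Cowan's K estimate of working-memory capacity, commonly used with
        change detection: K = N * (H - FA)."""
        return set_size * (hit_rate - false_alarm_rate)

    # Illustrative numbers only.
    print(cowan_k(hit_rate=0.85, false_alarm_rate=0.15, set_size=4))  # -> 2.8
    ```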

  13. Comparative assessment of life cycle assessment methods used for personal computers.

    PubMed

    Yao, Marissa A; Higgs, Tim G; Cullen, Michael J; Stewart, Scott; Brady, Todd A

    2010-10-01

This article begins with a summary of findings from commonly cited life cycle assessments (LCA) of Information and Communication Technology (ICT) products. While differing conclusions regarding environmental impact are expected across product segments (mobile phones, personal computers, servers, etc.), significant variation and conflicting conclusions are observed even within product segments such as the desktop Personal Computer (PC). This lack of consistent conclusions and accurate data limits the effectiveness of LCA in influencing policy and product design decisions. From 1997 to 2010, the majority of published studies focusing on the PC concluded that the use phase contributes most to the life cycle energy demand of PC products, with a handful of studies suggesting that the manufacturing phase of the PC has the largest impact. The purpose of this article is to critically review these studies in order to analyze sources of uncertainty, including factors that extend beyond data quality to the models and assumptions used. These findings suggest existing methods to combine process-based LCA data with product price data and remaining value adjustments are not reliable for conducting life cycle assessments of PC products. Recommendations are provided to assist future LCA work.

  14. Historical overview of diet assessment and food consumption surveys in Spain: assessment methods and applications.

    PubMed

    Morán Fagúndez, Luis Juan; Rivera Torres, Alejandra; González Sánchez, María Eugenia; de Torres Aured, Mari Lourdes; López-Pardo Martínez, Mercedes; Irles Rocamora, José Antonio

    2015-02-26

Food consumption assessment methods are used in nutrition and health population surveys and are the basis for the development of guidelines, nutritional recommendations and health plans. The study of these issues is one of the major tasks of research and health policy in developed countries. Major advances in this area have been made nationally since 1940, both in the reliability of the data and in the standardization of studies, which is a necessary condition for comparing changes over time. In this article the history and application of different dietary surveys, dietary histories and food frequency records are analyzed. Besides information from surveys conducted at a national level, the main data currently available for public health planning in nutrition come from nutritional analysis of household budget surveys and food balance sheets, based on data provided by the Ministry of Agriculture.

  15. A Comparison of Direct and Indirect Writing Assessment Methods.

    ERIC Educational Resources Information Center

    Stiggins, Richard J.

    1982-01-01

    Compares direct and indirect writing assessment strategies and contrasts them in terms of the relationship each has to specific classroom decision-making situations, the components of writing assessed, practical testing matters, characteristics of test exercises, test scoring procedures, and procedures for determining test quality. (HOD)

  16. AN OVERVIEW OF DATA INTEGRATION METHODS FOR REGIONAL ASSESSMENT

    EPA Science Inventory

One of the goals of the EPA's Regional Vulnerability Assessment (ReVA) project is to take diverse environmental data and develop objective criteria to evaluate environmental risk assessments at the regional scale. The data include (but are not limited to) variables for forests, ...

  17. New Testing Methods to Assess Technical Problem-Solving Ability.

    ERIC Educational Resources Information Center

    Hambleton, Ronald K.; And Others

    Tests to assess problem-solving ability being provided for the Air Force are described, and some details on the development and validation of these computer-administered diagnostic achievement tests are discussed. Three measurement approaches were employed: (1) sequential problem solving; (2) context-free assessment of fundamental skills and…

  18. Predicting Optimal Preference Assessment Methods for Individuals with Developmental Disabilities

    ERIC Educational Resources Information Center

    Thomson, Kendra M.; Czarnecki, Diana; Martin, Toby L.; Yu, C. T.; Martin, Garry L.

    2007-01-01

    The single-stimulus (SS) preference assessment procedure has been described as more appropriate than the paired stimulus (PS) procedure for "lower functioning" individuals, but this guideline's vagueness limits its usefulness. We administered the SS and PS preference assessment procedures with food items to seven individuals with severe…

  19. Effects of Rater Characteristics and Scoring Methods on Speaking Assessment

    ERIC Educational Resources Information Center

    Matsugu, Sawako

    2013-01-01

    Understanding the sources of variance in speaking assessment is important in Japan where society's high demand for English speaking skills is growing. Three challenges threaten fair assessment of speaking. First, in Japanese university speaking courses, teachers are typically the only raters, but teachers' knowledge of their students may unfairly…

  20. New methods for assessing the fascinating nature of nature experiences.

    PubMed

    Joye, Yannick; Pals, Roos; Steg, Linda; Evans, Ben Lewis

    2013-01-01

In recent years, numerous environmental psychology studies have demonstrated that contact with nature, as opposed to urban settings, can improve an individual's mood, can lead to increased levels of vitality, and can offer an opportunity to recover from stress. According to Attention Restoration Theory (ART), the restorative potential of natural environments lies in the fact that nature can replenish depleted attentional resources. This replenishment takes place, in part, because nature is deemed to be a source of fascination, with fascination being described as having an "attentional", an "affective" and an "effort" dimension. However, the claim that fascination with nature involves these three dimensions is to a large extent based on intuition or derived from introspection-based measurement methods, such as self-reports. In three studies, we aimed to assess more objectively whether these three dimensions indeed apply to experiences related to natural environments, before any (attentional) depletion has taken place. The instruments used were: (a) the affect misattribution procedure (Study 1), (b) the dot probe paradigm (Study 2) and (c) a cognitively effortful task (Study 3), aimed respectively at verifying the affective, attentional and effort dimensions of fascination. Overall, the results provide objective evidence for the claims made within the ART framework that natural, as opposed to urban, settings are affectively positive (cf. the affective dimension) and that people have an attentional bias to natural (rather than urban) environments (cf. the attentional dimension). The results regarding the effort dimension are less straightforward, and suggest that this dimension only becomes important in sufficiently difficult cognitive tasks.

  1. Assessing groundwater quality for irrigation using indicator kriging method

    NASA Astrophysics Data System (ADS)

    Delbari, Masoomeh; Amiri, Meysam; Motlagh, Masoud Bahraini

    2016-11-01

    One of the key parameters influencing sprinkler irrigation performance is water quality. In this study, the spatial variability of groundwater quality parameters (EC, SAR, Na+, Cl-, HCO3 - and pH) was investigated by geostatistical methods and the most suitable areas for implementation of sprinkler irrigation systems in terms of water quality are determined. The study was performed in Fasa county of Fars province using 91 water samples. Results indicated that all parameters are moderately to strongly spatially correlated over the study area. The spatial distribution of pH and HCO3 - was mapped using ordinary kriging. The probability of concentrations of EC, SAR, Na+ and Cl- exceeding a threshold limit in groundwater was obtained using indicator kriging (IK). The experimental indicator semivariograms were often fitted well by a spherical model for SAR, EC, Na+ and Cl-. For HCO3 - and pH, an exponential model was fitted to the experimental semivariograms. Probability maps showed that the risk of EC, SAR, Na+ and Cl- exceeding the given critical threshold is higher in lower half of the study area. The most proper agricultural lands for sprinkler irrigation implementation were identified by evaluating all probability maps. The suitable areas for sprinkler irrigation design were determined to be 25,240 hectares, which is about 34 percent of total agricultural lands and are located in northern and eastern parts. Overall the results of this study showed that IK is an appropriate approach for risk assessment of groundwater pollution, which is useful for a proper groundwater resources management.
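    A minimal sketch of the indicator kriging step described above, assuming the pykrige package (not necessarily the software used in the study) and synthetic sample locations and EC values standing in for the 91 groundwater samples; the EC threshold is also an assumption.

    ```python
    import numpy as np
    from pykrige.ok import OrdinaryKriging  # assumed tooling, not the study's software

    # Synthetic sample data standing in for the 91 groundwater samples (assumption).
    rng = np.random.default_rng(1)
    x, y = rng.uniform(0, 50, 91), rng.uniform(0, 50, 91)
    ec = rng.lognormal(mean=0.8, sigma=0.6, size=91)   # dS/m, illustrative

    # Indicator transform: 1 where EC exceeds an irrigation threshold, else 0.
    threshold = 3.0                                    # assumed critical EC value
    indicator = (ec > threshold).astype(float)

    # Ordinary kriging of the indicator variable with a spherical variogram,
    # as in the study; the kriged surface estimates P(EC > threshold).
    ok = OrdinaryKriging(x, y, indicator, variogram_model="spherical")
    gx, gy = np.linspace(0, 50, 60), np.linspace(0, 50, 60)
    prob, variance = ok.execute("grid", gx, gy)
    prob = np.clip(prob, 0.0, 1.0)                     # probabilities stay in [0, 1]
    print("share of area exceeding threshold:", float((prob > 0.5).mean()))
    ```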

  2. Ecosystem assessment methods for cumulative effects at the regional scale

    SciTech Connect

    Hunsaker, C.T.

    1989-01-01

    Environmental issues such as nonpoint-source pollution, acid rain, reduced biodiversity, land use change, and climate change have widespread ecological impacts and require an integrated assessment approach. Since 1978, the implementing regulations for the National Environmental Policy Act (NEPA) have required assessment of potential cumulative environmental impacts. Current environmental issues have encouraged ecologists to improve their understanding of ecosystem process and function at several spatial scales. However, management activities usually occur at the local scale, and there is little consideration of the potential impacts to the environmental quality of a region. This paper proposes that regional ecological risk assessment provides a useful approach for assisting scientists in accomplishing the task of assessing cumulative impacts. Critical issues such as spatial heterogeneity, boundary definition, and data aggregation are discussed. Examples from an assessment of acidic deposition effects on fish in Adirondack lakes illustrate the importance of integrated data bases, associated modeling efforts, and boundary definition at the regional scale.

  3. Assessment of methods for mapping snow cover from MODIS

    NASA Astrophysics Data System (ADS)

    Rittger, Karl; Painter, Thomas H.; Dozier, Jeff

    2013-01-01

    Characterization of snow is critical for understanding Earth’s water and energy cycles. Maps of snow from MODIS have seen growing use in investigations of climate, hydrology, and glaciology, but the lack of rigorous validation of different snow mapping methods compromises these studies. We examine three widely used MODIS snow products: the “binary” (i.e., snow yes/no) global snow maps that were among the initial MODIS standard products; a more recent standard MODIS fractional snow product; and another fractional snow product, MODSCAG, based on spectral mixture analysis. We compare them to maps of snow obtained from Landsat ETM+ data, whose 30 m spatial resolution provides nearly 300 samples within a 500 m MODIS nadir pixel. The assessment uses 172 images spanning a range of snow and vegetation conditions, including the Colorado Rocky Mountains, the Upper Rio Grande, California’s Sierra Nevada, and the Nepal Himalaya. MOD10A1 binary and fractional fail to retrieve snow in the transitional periods during accumulation and melt while MODSCAG consistently maintains its retrieval ability during these periods. Averaged over all regions, the RMSE for MOD10A1 fractional is 0.23, whereas the MODSCAG RMSE is 0.10. MODSCAG performs the most consistently through accumulation, mid-winter and melt, with median differences ranging from -0.16 to 0.04 while differences for MOD10A1 fractional range from -0.34 to 0.35. MODSCAG maintains its performance over all land cover classes and throughout a larger range of land surface properties. Characterizing snow cover by spectral mixing is more accurate than empirical methods based on the normalized difference snow index, both for identifying where snow is and is not and for estimating the fractional snow cover within a sensor’s instantaneous field-of-view. Determining the fractional value is particularly important during spring and summer melt in mountainous terrain, where large variations in snow, vegetation and soil occur over
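    A minimal sketch of the validation arithmetic described above: aggregate a 30 m binary Landsat snow map into roughly 500 m reference fractions by block averaging and compare a fractional retrieval against them with RMSE. The scene below is synthetic; real inputs would be ETM+ maps and MODIS retrievals.

    ```python
    import numpy as np

    def reference_fraction(landsat_binary, block=17):
        """Aggregate a 30 m binary snow map to coarse-pixel fractions by block
        averaging (about 17 x 17 Landsat samples per ~500 m MODIS pixel)."""
        h, w = landsat_binary.shape
        h, w = h - h % block, w - w % block
        b = landsat_binary[:h, :w].reshape(h // block, block, w // block, block)
        return b.mean(axis=(1, 3))

    def rmse(pred, ref):
        return float(np.sqrt(np.mean((pred - ref) ** 2)))

    # Illustrative synthetic scene.
    rng = np.random.default_rng(2)
    landsat = (rng.random((510, 510)) < 0.4).astype(float)
    ref = reference_fraction(landsat)
    modis_fraction = np.clip(ref + rng.normal(0, 0.1, ref.shape), 0, 1)
    print("RMSE:", rmse(modis_fraction, ref))
    ```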

  4. A semi-probabilistic assessment method for flow slides

    NASA Astrophysics Data System (ADS)

    van den Ham, G.; Mastbergen, D.; de Groot, M.

    2013-12-01

Flow slides in submerged slopes in non-lithified sandy and silty sediments form a major threat to flood defences along (estuary) coastlines and riverbanks in the Netherlands. Such flow slides may result in failure of levees and structures, eventually leading to flooding of the hinterland. Flow slide is a complex failure mechanism that includes both soil mechanical and hydraulic features. Two important sub-mechanisms are static liquefaction and breach flow. Static liquefaction entails the sudden loss of strength of loosely packed saturated sand or silt, resulting in a collapse of the sand body. Breach flow is a more superficial process, involving the upslope retrogression of a locally steep part of the slope, which generates a turbulent sand-water mixture flow along the sand surface of the underwater slope. Both mechanisms need a trigger, e.g. local steepening of the slope by erosion or slip failure. Although a breach flow slide generally takes more time than a liquefaction flow slide, both mechanisms result in a flowing sand-water mixture that eventually resediments on a very gentle slope. Therefore, in the analysis of historical flow slides it is often not clear to what extent static soil liquefaction and/or breach flow played a role. In current Dutch practice, the prediction of levee failure due to flow sliding is based either on simple but conservative empirical rules derived from documented historical flow slides, in which the distinction between the mentioned sub-mechanisms is disregarded, or on rather complex physics-based models describing mechanisms such as static liquefaction or breach flow. We present how both approaches can be combined into a practical, probabilistic method for assessing dike failure due to flow sliding, accounting for uncertainties in the main influencing factors. The method has recently been implemented in the so-called Dike Analysis Module (DAM). DAM is a platform for performing semi-automatic stability analyses on a large number

  5. Structural assessment of roof decking using visual inspection methods

    SciTech Connect

    Giller, R.A.; McCoy, R.M.; Wagenblast, G.R.

    1993-10-01

The Hanford Site has approximately 1,100 buildings, some of which date back to the early 1940s. The roof on these buildings provides a weather-resisting cover as well as the load-resisting structure. Past experience has been that these roof structures may have structural modifications, the weather-resisting membrane may have been replaced several times, and the members may experience some type of material degradation. This material degradation has progressed to the point of causing the collapse of some roof deck members. The intent of the Hanford Site Central Engineering roof assessment effort is to provide an expedient structural assessment of the large number of buildings at the Hanford Site. This assessment is made by qualified structural inspectors following the "Preliminary Assessment" procedures given in the American Society of Civil Engineers (ASCE) Standard ASCE 11-90. This roof assessment effort does not provide a total qualification of the roof for the design or in-place loads. The inspection does provide a reasonable estimate of the roof loading capacity to determine if personnel access restrictions are needed. A document search and a visual walkdown inspection provide the initial screening to identify modifications and components having questionable structural integrity. The structural assessment consists of baseline dead and live load stress calculations for all roofing components based on original design material strengths. The results of these assessments are documented in a final report in retrievable form so that future inspections will have comparative information.

  6. When Educational Material Is Delivered: A Mixed Methods Content Validation Study of the Information Assessment Method

    PubMed Central

    2017-01-01

    Background The Information Assessment Method (IAM) allows clinicians to report the cognitive impact, clinical relevance, intention to use, and expected patient health benefits associated with clinical information received by email. More than 15,000 Canadian physicians and pharmacists use the IAM in continuing education programs. In addition, information providers can use IAM ratings and feedback comments from clinicians to improve their products. Objective Our general objective was to validate the IAM questionnaire for the delivery of educational material (ecological and logical content validity). Our specific objectives were to measure the relevance and evaluate the representativeness of IAM items for assessing information received by email. Methods A 3-part mixed methods study was conducted (convergent design). In part 1 (quantitative longitudinal study), the relevance of IAM items was measured. Participants were 5596 physician members of the Canadian Medical Association who used the IAM. A total of 234,196 ratings were collected in 2012. The relevance of IAM items with respect to their main construct was calculated using descriptive statistics (relevance ratio R). In part 2 (qualitative descriptive study), the representativeness of IAM items was evaluated. A total of 15 family physicians completed semistructured face-to-face interviews. For each construct, we evaluated the representativeness of IAM items using a deductive-inductive thematic qualitative data analysis. In part 3 (mixing quantitative and qualitative parts), results from quantitative and qualitative analyses were reviewed, juxtaposed in a table, discussed with experts, and integrated. Thus, our final results are derived from the views of users (ecological content validation) and experts (logical content validation). Results Of the 23 IAM items, 21 were validated for content, while 2 were removed. In part 1 (quantitative results), 21 items were deemed relevant, while 2 items were deemed not relevant

  7. Are there meaningful individual differences in temporal inconsistency in self-reported personality?

    PubMed Central

    Soubelet, Andrea; Salthouse, Timothy A.; Oishi, Shigehiro

    2014-01-01

    The current project had three goals. The first was to examine whether it is meaningful to refer to across-time variability in self-reported personality as an individual differences characteristic. The second was to investigate whether negative affect was associated with variability in self-reported personality, while controlling for mean levels, and correcting for measurement errors. The third goal was to examine whether variability in self-reported personality would be larger among young adults than among older adults, and whether the relation of variability with negative affect would be stronger at older ages than at younger ages. Two moderately large samples of participants completed the International Item Pool Personality questionnaire assessing the Big Five personality dimensions either twice or thrice, in addition to several measures of negative affect. Results were consistent with the hypothesis that within-person variability in self-reported personality is a meaningful individual difference characteristic. Some people exhibited greater across-time variability than others after removing measurement error, and people who showed temporal instability in one trait also exhibited temporal instability across the other four traits. However, temporal variability was not related to negative affect, and there was no evidence that either temporal variability or its association with negative affect varied with age. PMID:25132698

  8. Are there meaningful individual differences in temporal inconsistency in self-reported personality?

    PubMed

    Soubelet, Andrea; Salthouse, Timothy A; Oishi, Shigehiro

    2014-11-01

    The current project had three goals. The first was to examine whether it is meaningful to refer to across-time variability in self-reported personality as an individual differences characteristic. The second was to investigate whether negative affect was associated with variability in self-reported personality, while controlling for mean levels, and correcting for measurement errors. The third goal was to examine whether variability in self-reported personality would be larger among young adults than among older adults, and whether the relation of variability with negative affect would be stronger at older ages than at younger ages. Two moderately large samples of participants completed the International Item Pool Personality questionnaire assessing the Big Five personality dimensions either twice or thrice, in addition to several measures of negative affect. Results were consistent with the hypothesis that within-person variability in self-reported personality is a meaningful individual difference characteristic. Some people exhibited greater across-time variability than others after removing measurement error, and people who showed temporal instability in one trait also exhibited temporal instability across the other four traits. However, temporal variability was not related to negative affect, and there was no evidence that either temporal variability or its association with negative affect varied with age.

  9. Application of Watershed Ecological Risk Assessment Methods to Watershed Management

    EPA Science Inventory

    Watersheds are frequently used to study and manage environmental resources because hydrologic boundaries define the flow of contaminants and other stressors. Ecological assessments of watersheds are complex because watersheds typically overlap multiple jurisdictional boundaries,...

  10. Clinical assessment in dental education: a new method.

    PubMed

    Tennant, M; Scriva, J

    2000-06-01

    Among the many challenges that face modern dental schools is the development of appropriate assessment systems. The more litigious nature of modern education makes it important that the systems developed are transparent and can withstand the processes of legal challenge. Coupled with this demand for robust assessment is a growing demand from universities and health providers for dental schools to keep rigorous records of student clinical productivity. This brief review outlines a system developed at the School of Oral Health Sciences at the University of Western Australia. The system integrates both qualitative and quantitative assessment and uses criterion-based assessment as its foundation. Detailed analysis and real-time reporting mechanisms using a suite of personally written software tools is now possible. The system provides both students and staff with effective data to enhance the learning process.

  11. Assessment of Automated Measurement and Verification (M&V) Methods

    SciTech Connect

    Granderson, Jessica; Touzani, Samir; Custodio, Claudine; Sohn, Michael; Fernandes, Samuel; Jump, David

    2015-07-01

    This report documents the application of a general statistical methodology to assess the accuracy of baseline energy models, focusing on its application to Measurement and Verification (M&V) of whole-building energy savings.

  12. FIFRA Peer Review: Proposed Risk Assessment Methods Process

    EPA Pesticide Factsheets

    From September 11-14, 2012, EPA participated in a Federal Insecticide, Fungicide and Rodenticide Act Scientific Advisory Panel (SAP) meeting on a proposed pollinator risk assessment framework for determining the potential risks of pesticides to honey bees.

  13. The measurement of water scarcity: Defining a meaningful indicator.

    PubMed

    Damkjaer, Simon; Taylor, Richard

    2017-03-15

Metrics of water scarcity and stress have evolved over the last three decades from simple threshold indicators to holistic measures characterising human environments and freshwater sustainability. Metrics commonly estimate renewable freshwater resources using mean annual river runoff, which masks hydrological variability, and subjectively quantify socio-economic conditions characterising adaptive capacity. There is a marked absence of research evaluating whether these metrics of water scarcity are meaningful. We argue that measurement of water scarcity should (1) be redefined physically in terms of the freshwater storage required to address imbalances in intra- and inter-annual fluxes of freshwater supply and demand; (2) abandon subjective quantifications of human environments; and (3) be used to inform participatory decision-making processes that explore a wide range of options for addressing freshwater storage requirements beyond dams, including use of renewable groundwater, soil water and trading in virtual water. Further, we outline a conceptual framework redefining water scarcity in terms of freshwater storage.

  14. Meaningful interactions can enhance visual discrimination of human agents.

    PubMed

    Neri, Peter; Luu, Jennifer Y; Levi, Dennis M

    2006-09-01

    The ability to interpret and predict other people's actions is highly evolved in humans and is believed to play a central role in their cognitive behavior. However, there is no direct evidence that this ability confers a tangible benefit to sensory processing. Our quantitative behavioral experiments show that visual discrimination of a human agent is influenced by the presence of a second agent. This effect depended on whether the two agents interacted (by fighting or dancing) in a meaningful synchronized fashion that allowed the actions of one agent to serve as predictors for the expected actions of the other agent, even though synchronization was irrelevant to the visual discrimination task. Our results demonstrate that action understanding has a pervasive impact on the human ability to extract visual information from the actions of other humans, providing quantitative evidence of its significance for sensory performance.

  15. Comparing Yes/No Angoff and Bookmark Standard Setting Methods in the Context of English Assessment

    ERIC Educational Resources Information Center

    Hsieh, Mingchuan

    2013-01-01

    The Yes/No Angoff and Bookmark methods for setting standards on educational assessments are currently two of the most popular standard-setting approaches. However, there is no research into the comparability of these two methods in the context of language assessment. This study compared results from the Yes/No Angoff and Bookmark methods as applied to…

  16. Friends in the Classroom: A Comparison between Two Methods for the Assessment of Students' Friendship Networks

    ERIC Educational Resources Information Center

    Pijl, Sip Jan; Koster, Marloes; Hannink, Anne; Stratingh, Anna

    2011-01-01

    One of the methods used most often to assess students' friendships and friendship networks is the reciprocal nomination method. However, an often heard complaint is that this technique produces rather negative outcomes. This study compares the reciprocal nomination method with another method to assess students' friendships and friendship networks:…

  17. Review of Methods Related to Assessing Human Performance in Nuclear Power Plant Control Room Simulations

    SciTech Connect

    Katya L Le Blanc; Ronald L Boring; David I Gertman

    2001-11-01

    With the increased use of digital systems in Nuclear Power Plant (NPP) control rooms comes a need to thoroughly understand the human performance issues associated with digital systems. A common way to evaluate human performance is to test operators and crews in NPP control room simulators. However, it is often challenging to characterize human performance in meaningful ways when measuring performance in NPP control room simulations. A review of the literature in NPP simulator studies reveals a variety of ways to measure human performance in NPP control room simulations including direct observation, automated computer logging, recordings from physiological equipment, self-report techniques, protocol analysis and structured debriefs, and application of model-based evaluation. These methods and the particular measures used are summarized and evaluated.

  18. Meaningful Engagement to Enhance Diversity: Broadened Impact Actualized

    NASA Astrophysics Data System (ADS)

    Whitney, V. W.; Pyrtle, A. J.

    2008-12-01

    The MS PHD'S Professional Development Program was established by and for UR/US populations to facilitate increased and sustained participation within the Earth system science community. MS PHD'S is jointly funded by NSF and NASA. Fourteen (14) minority Earth system scientists served as Program Mentors and one hundred fifteen (115) minority and non-minority scientists served as Meeting Mentors to student participants. Representatives from fifty-six (56) agencies and institutions provided support and exposure to MS PHD'S student participants. Two hundred fifty-eight (258) highly qualified UR/US students completed on-line applications to participate in the MS PHD'S Professional Development Program. Because of funding limitations, slightly fewer than 50% of the applicants were selected to participate. One hundred twenty-six (126) undergraduate and graduate students from 26 states and Puerto Rico participated in the MS PHD'S program. Sixty-eight (68) MS PHD'S student participants self-identified as African American; thirty-four (34) as Puerto Rican; nine (9) as Hispanic/Mexican American; ten (10) as Native American; and one (1) each as African, Asian, Pacific Islander, Hispanic and Multi-Ethnic. During the five-year span of MS PHD'S programming, sixteen (16) student participants completed BS degrees, twelve (12) completed MS degrees and ten (10) completed doctoral degrees. How did MS PHD'S establish meaningful engagement to enhance diversity within the Earth system science community? This case study reveals replicable processes and constructs to enhance the quality of meaningful collaboration and engagement. In addition, the study addresses frequently asked questions (FAQs) on outreach, recruitment, engagement, retention and success of students from underrepresented populations within diversity-focused programs.

  19. Dynamics of Boolean networks controlled by biologically meaningful functions.

    PubMed

    Raeymaekers, L

    2002-10-07

    The remarkably stable dynamics displayed by randomly constructed Boolean networks is one of the most striking examples of the spontaneous emergence of self-organization in model systems composed of many interacting elements (Kauffman, S., J. Theor. Biol. 22, 437-467, 1969; The Origins of Order, Oxford University Press, Oxford, 1993). The dynamics of such networks is most stable for a connectivity of two inputs per element, and stability decreases dramatically as the number of connections increases. Whereas the simplicity of this model system allows the tracing of the dynamical trajectories, it leaves out many features of real biological connections. For instance, the dynamics has been studied in detail only for networks constructed by allowing all theoretically possible Boolean rules, whereas only a subset of them makes sense in the material world. This paper analyses the effect on the dynamics of using only Boolean functions which are meaningful in a biological sense. This analysis is particularly relevant for nets with more than two inputs per element, because biological networks generally appear to be more extensively interconnected. Sets of the meaningful functions were assembled for up to four inputs per element. The use of these rules results in a smaller number of distinct attractors of shorter length, with relatively little sensitivity to the size of the network and to the number of inputs per element. Shifting the activator/inhibitor ratio away from the expected value of 50% further enhances the stability. This effect is more pronounced for networks consisting of a majority of activators than for networks with a corresponding majority of inhibitors, indicating that the former allow the evolution of larger genetic networks. The data further support the usefulness of logical networks as a conceptual framework for the understanding of real-world phenomena.
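
    To make the model system described above concrete, the following sketch simulates a small synchronous random Boolean network and locates an attractor by iterating until a state repeats; the wiring, rules and network size are invented for illustration, and this is not the author's code or the restricted biologically meaningful rule set.

      import random

      # Hypothetical sketch of a synchronous random Boolean network with K = 2 inputs
      # per element; wiring, truth tables and size are invented for illustration.
      random.seed(1)
      N, K = 8, 2
      inputs = [random.sample(range(N), K) for _ in range(N)]                    # wiring
      rules = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]  # truth tables

      def step(state):
          # Synchronous update: each element applies its rule to its two inputs.
          return tuple(rules[i][2 * state[inputs[i][0]] + state[inputs[i][1]]]
                       for i in range(N))

      state = tuple(random.randint(0, 1) for _ in range(N))
      seen, t = {}, 0
      while state not in seen:      # iterate until a state repeats: an attractor is reached
          seen[state] = t
          state, t = step(state), t + 1

      print("transient length:", seen[state])
      print("attractor length:", t - seen[state])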

  20. Relative Contributions of Three Descriptive Methods: Implications for Behavioral Assessment

    ERIC Educational Resources Information Center

    Pence, Sacha T.; Roscoe, Eileen M.; Bourret, Jason C.; Ahearn, William H.

    2009-01-01

    This study compared the outcomes of three descriptive analysis methods--the ABC method, the conditional probability method, and the conditional and background probability method--to each other and to the results obtained from functional analyses. Six individuals who had been diagnosed with developmental delays and exhibited problem behavior…
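
    As a rough, hypothetical illustration of the probability-based descriptive methods named above (the event times and the 10-second window are invented, and this is not the study's analysis), the conditional probability of a consequence given the behavior can be contrasted with its background probability:

      # Hypothetical sketch (invented event times, not the study's records): compare the
      # probability of a consequence within 10 s of the behavior against the background
      # probability of the consequence within any 10-s window.
      seconds = range(600)
      behavior_times = [t for t in seconds if t % 47 == 0]
      attention_times = [t for t in seconds if t % 31 == 5]
      window = 10

      def followed_by_attention(t):
          return any(t < a <= t + window for a in attention_times)

      conditional_p = sum(map(followed_by_attention, behavior_times)) / len(behavior_times)
      background_p = sum(map(followed_by_attention, seconds)) / len(seconds)
      print(f"P(attention | behavior) = {conditional_p:.2f}, background P = {background_p:.2f}")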

  1. The Patient-Centered Medical Home and Meaningful Use: a challenge for better care.

    PubMed

    Coffin, Janis; Duffie, Carla; Furno, Megan

    2014-01-01

    This article discusses and illustrates the alignment between the National Committee for Quality Assurance's Patient-Centered Medical Home and Meaningful Use. Beyond the areas of overlap, it also examines the distinct requirements of each program. With impending deadlines for Meaningful Use and potential penalties being imposed, this article provides a layout of dates, stages, and incentive payments and penalties for Meaningful Use, and discusses how obtaining Patient-Centered Medical Home recognition could be beneficial to achieving Meaningful Use.

  2. Methods of failure and reliability assessment for mechanical heart pumps.

    PubMed

    Patel, Sonna M; Allaire, Paul E; Wood, Houston G; Throckmorton, Amy L; Tribble, Curt G; Olsen, Don B

    2005-01-01

    Artificial blood pumps are today's most promising bridge-to-recovery (BTR), bridge-to-transplant (BTT), and destination therapy solutions for patients suffering from intractable congestive heart failure (CHF). Due to an increased need for effective, reliable, and safe long-term artificial blood pumps, each new design must undergo failure and reliability testing, an important step prior to approval by the United States Food and Drug Administration (FDA) for clinical testing and commercial use. The FDA has established no specific standards or protocols for these testing procedures, and only limited recommendations are provided by the scientific community for testing an overall blood pump system and its individual components. Product development of any medical device must follow a systematic and logical approach. As the most critical aspects of the design phase, failure and reliability assessments aid in the successful evaluation and preparation of medical devices prior to clinical application. The extent of testing, associated costs, and lengthy time durations to execute these experiments justify the need for an early evaluation of failure and reliability. During the design stages of blood pump development, a failure modes and effects analysis (FMEA) should be completed to provide a concise evaluation of the occurrence and frequency of failures and their effects on the overall support system. Following this analysis, testing of any pump typically involves four sequential processes: performance and reliability testing in simple hydraulic or mock circulatory loops, acute and chronic animal experiments, human error analysis, and ultimately, clinical testing. This article presents recommendations for failure and reliability testing based on the National Institutes of Health (NIH), Society for Thoracic Surgeons (STS) and American Society for Artificial Internal Organs (ASAIO), American National Standards Institute (ANSI), the Association for Advancement of
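
    As a minimal, hypothetical illustration of the FMEA step recommended above (the failure modes and ratings are invented, not drawn from any pump programme), a risk priority number can be tabulated for each candidate failure mode:

      # Hypothetical FMEA sketch: risk priority number (RPN) = severity * occurrence *
      # detectability, each rated 1-10; the failure modes and ratings are invented.
      failure_modes = [
          ("bearing wear",             8, 4, 5),
          ("controller fault",         9, 2, 3),
          ("percutaneous lead damage", 7, 5, 4),
      ]

      for name, sev, occ, det in sorted(failure_modes, key=lambda m: -(m[1] * m[2] * m[3])):
          print(f"{name:28s} RPN = {sev * occ * det}")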

  3. Finding consensus on frailty assessment in acute care through Delphi method

    PubMed Central

    2016-01-01

    Objective We seek to address gaps in knowledge and agreement around optimal frailty assessment in the acute medical care setting. Frailty is a common term describing older persons who are at increased risk of developing multimorbidity, disability, institutionalisation and death. Consensus has not been reached on the practical implementation of this concept to clinically assess and manage older persons in the acute care setting. Design Modified Delphi, via electronic questionnaire. Questions included ranking items that best recognise frailty, optimal timing, location and contextual elements of a successful tool. Intraclass correlation coefficients were used for overall levels of agreement, with consensus and stability tested by 2-way ANOVA with absolute agreement and Fisher's exact test. Participants A panel of national experts (academics, front-line clinicians and specialist charities) were invited to participate via electronic correspondence. Results Variables reflecting accumulated deficit and high resource usage were perceived by participants as the most useful indicators of frailty in the acute care setting. The Acute Medical Unit and Care of the Older Persons Ward were perceived as optimum settings for frailty assessment. ‘Clinically meaningful and relevant’, ‘simple (easy to use)’ and ‘accessible by multidisciplinary team’ were perceived as characteristics of a successful frailty assessment tool in the acute care setting. No agreement was reached on optimal timing, number of variables and organisational structures. Conclusions This study is a first step in developing consensus for a clinically relevant frailty assessment model for the acute care setting, providing content validation and illuminating contextual requirements. Testing on clinical data sets is a research priority. PMID:27742633
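
    For readers unfamiliar with the agreement statistic mentioned above, the sketch below computes a two-way random-effects, absolute-agreement intraclass correlation coefficient, ICC(2,1), from a small invented rating matrix; the data and the specific ICC form are assumptions for illustration, not the study's analysis.

      import numpy as np

      # Hypothetical ratings: rows = items, columns = panellists (scores 1-9, invented).
      X = np.array([[7, 8, 7, 6],
                    [5, 5, 6, 5],
                    [9, 8, 9, 9],
                    [3, 4, 2, 3],
                    [6, 6, 7, 6]], dtype=float)
      n, k = X.shape
      grand = X.mean()
      MSR = k * ((X.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # between-items mean square
      MSC = n * ((X.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # between-raters mean square
      MSE = ((X - X.mean(axis=1, keepdims=True) - X.mean(axis=0) + grand) ** 2).sum() / ((n - 1) * (k - 1))

      # Two-way random effects, absolute agreement, single rater: ICC(2,1).
      icc_2_1 = (MSR - MSE) / (MSR + (k - 1) * MSE + k * (MSC - MSE) / n)
      print(f"ICC(2,1) = {icc_2_1:.2f}")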

  4. Quantitative risk assessment methods for cancer and noncancer effects.

    PubMed

    Baynes, Ronald E

    2012-01-01

    Human health risk assessments have evolved from the more qualitative approaches to more quantitative approaches in the past decade. This has been facilitated by the improvement in computer hardware and software capability and novel computational approaches being slowly recognized by regulatory agencies. These events have helped reduce the reliance on experimental animals as well as better utilization of published animal toxicology data in deriving quantitative toxicity indices that may be useful for risk management purposes. This chapter briefly describes some of the approaches as described in the guidance documents from several of the regulatory agencies as it pertains to hazard identification and dose-response assessment of a chemical. These approaches are contrasted with more novel computational approaches that provide a better grasp of the uncertainty often associated with chemical risk assessments.

  5. A Faculty Team Works to Create Content Linkages among Various Courses to Increase Meaningful Learning of Targeted Concepts of Microbiology

    PubMed Central

    Marbach-Ad, Gili; Briken, Volker; Frauwirth, Kenneth; Gao, Lian-Yong; Hutcheson, Steven W.; Joseph, Sam W.; Mosser, David; Parent, Beth; Shields, Patricia; Song, Wenxia; Stein, Daniel C.; Swanson, Karen; Thompson, Katerina V.; Yuan, Robert

    2007-01-01

    As research faculty with expertise in the area of host–pathogen interactions (HPI), we used a research group model to effect our professional development as scientific educators. We have established a working hypothesis: The implementation of a curriculum that forms bridges between our seven HPI courses allows our students to achieve deep and meaningful learning of HPI concepts. Working collaboratively, we identified common learning goals, and we chose two microorganisms to serve as anchors for student learning. We instituted variations of published active-learning methods to engage students in research-oriented learning. In parallel, we are developing an assessment tool. The value of this work is in the development of a teaching model that successfully allowed faculty who already work collaboratively in the research area of HPI to apply a “research group approach” to further scientific teaching initiatives at a research university. We achieved results that could not be accomplished by even the most dedicated instructor working in isolation. PMID:17548877

  6. DO TIE LABORATORY BASED ASSESSMENT METHODS REALLY PREDICT FIELD EFFECTS?

    EPA Science Inventory

    Sediment Toxicity Identification and Evaluation (TIE) methods have been developed for both porewaters and whole sediments. These relatively simple laboratory methods are designed to identify specific toxicants or classes of toxicants in sediments; however, the question of whethe...

  7. A comparison of three methods of assessing renal function.

    PubMed

    Macleod, M A; Houston, A S

    1981-01-01

    In a study of 100 patients referred for assessment of renal function, a comparison is made between gamma camera computer-assisted displays of 99mTc-DTPA(Sn) nephrograms, intravenous urography (IVU) and triple-probe 131I-Hippuran renography. The computer-processed data display renal morphology and activity/time curves, which are deconvolved to enable quantitative assessment of glomerular function. Pattern recognition techniques for feature extraction are employed to facilitate classification of the curves. It is concluded that the computer-processed data give better results both in the recognition of morphological defects and in the indication and measurement of renal damage.

  8. Test method improves motor bearing wear assessment at Calvert Cliffs

    SciTech Connect

    Gradin, L.P.; Cartwright, W.B.; Burstein, N.M.

    1994-06-01

    This article describes how motor current signature analysis is helping plant maintenance engineers assess the condition of inaccessible motors during plant operation. At Baltimore Gas & Electric Co.'s Calvert Cliffs Nuclear Power Plant, maintenance activities are based on reliability-centered maintenance (RCM) concepts and guided by a non-intrusive condition evaluation (NICE) policy wherever achievable. One technique that fits these criteria is motor current signature analysis (MCSA). The new technique has helped plant maintenance personnel assess the condition of relatively inaccessible containment cooling fan motors inside reactor containment during normal plant operation.
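
    The following sketch gives a rough sense of how motor current signature analysis inspects the stator-current spectrum for fault-related sidebands; the synthetic signal, supply frequency and thresholds are assumptions for illustration and do not reflect plant data or any vendor's implementation.

      import numpy as np

      # Hypothetical MCSA sketch: look for sidebands around the 60 Hz supply line in the
      # stator-current spectrum; the signal is synthetic, not plant data.
      fs = 5000
      t = np.arange(0, 10, 1 / fs)
      current = np.sin(2 * np.pi * 60 * t) + 0.01 * np.sin(2 * np.pi * 54 * t)  # injected "fault" sideband

      spectrum = np.abs(np.fft.rfft(current * np.hanning(t.size)))
      freqs = np.fft.rfftfreq(t.size, 1 / fs)

      search = (freqs > 40) & (freqs < 80) & (np.abs(freqs - 60) > 2)  # exclude the supply line itself
      f_side = freqs[search][np.argmax(spectrum[search])]
      print(f"dominant sideband near {f_side:.1f} Hz")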

  9. Efavirenz does not meaningfully affect the single dose pharmacokinetics of 1200 mg raltegravir.

    PubMed

    Krishna, Rajesh; East, Lilly; Larson, Patrick; Siringhaus, Tara; Herpok, Lisa; Bethel-Brown, Crystal; Manthos, Helen; Brejda, John; Gartner, Michael

    2016-12-01

    Raltegravir is a human immunodeficiency virus (HIV)-1 integrase strand transfer inhibitor currently marketed at a dose of 400 mg twice daily (BID). Raltegravir for a once-daily regimen (QD) at a dose of 1200 mg (2 × 600 mg) is under development and offers a new treatment option for HIV-1 infected treatment-naive subjects. Since raltegravir is eliminated mainly by metabolism via a UDP-glucuronosyltransferase (UGT) 1A1-mediated glucuronidation pathway, co-administration of UGT1A1 inducers may alter plasma levels of raltegravir. Efavirenz, a UGT1A1 inducer, was used to assess the impact of altered UGT activity on a 1200 mg QD dose of raltegravir. An open-label, randomized, 2-period fixed-sequence Phase 1 study was performed in adult healthy male and female subjects (non-childbearing potential) ≥19 and ≤55 years of age, with a body mass index (BMI) ≥18.5 and ≤32.0 kg/m(2). Subjects (n = 21) received a single oral dose of 1200 mg raltegravir at bedtime on an empty stomach on Day 1 in Period 1. After a washout period of at least 7 days, subjects received oral doses of 600 mg efavirenz QD at bedtime for 14 consecutive days in Period 2. Subjects received a single oral dose of 1200 mg raltegravir co-administered with 600 mg efavirenz on Day 12 of Period 2. Pharmacokinetic (PK) samples were collected for 72 hours following raltegravir dosing and analyzed using a validated bioanalytical method to quantify raltegravir plasma concentrations. PK parameters were estimated using non-compartmental analysis. Administration of single 1200 mg oral doses of raltegravir alone and co-administered with multiple oral doses of efavirenz was generally well tolerated in healthy subjects. Co-administration with efavirenz yielded geometric mean ratios (GMRs) and their associated 90% confidence intervals (90% CIs) for raltegravir AUC0-∞, Cmax, and C24 of 0.86 (0.73, 1.01), 0.91 (0.70, 1.17), and 0.94 (0.76, 1.17), respectively. The results show that efavirenz
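
    For readers unfamiliar with the non-compartmental quantities reported above, this hedged sketch shows how a trapezoidal AUC and a geometric mean ratio with its 90% confidence interval are typically derived; the concentration-time values and paired AUCs are invented, not trial data.

      import math

      # Hypothetical sketch: trapezoidal AUC and a geometric mean ratio (GMR) with 90% CI.
      times = [0, 0.5, 1, 2, 4, 8, 12, 24]              # h
      conc  = [0, 4.1, 6.3, 5.0, 3.2, 1.4, 0.6, 0.1]    # mg/L, invented values

      auc = sum((c1 + c2) / 2 * (t2 - t1)
                for (t1, c1), (t2, c2) in zip(zip(times, conc), zip(times[1:], conc[1:])))

      # Invented paired AUCs: test (with the inducer) vs reference (alone), same subjects.
      ref  = [28.1, 31.4, 25.9, 30.2, 27.5]
      test = [24.0, 27.9, 23.1, 26.5, 24.8]
      diffs = [math.log(t) - math.log(r) for t, r in zip(test, ref)]
      mean = sum(diffs) / len(diffs)
      sd = (sum((d - mean) ** 2 for d in diffs) / (len(diffs) - 1)) ** 0.5
      half = 2.132 * sd / len(diffs) ** 0.5             # t(0.95, df=4) for a two-sided 90% CI

      print(f"AUC(0-24 h) = {auc:.1f} mg*h/L")
      print(f"GMR = {math.exp(mean):.2f}, 90% CI ({math.exp(mean - half):.2f}, {math.exp(mean + half):.2f})")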

  10. Plant disease severity assessment - How rater bias, assessment method and experimental design affect hypothesis testing and resource use efficiency

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The impact of rater bias and assessment method on hypothesis testing was studied for different experimental designs for plant disease assessment using balanced and unbalanced data sets. Data sets with the same number of replicate estimates for each of two treatments are termed ‘balanced’, and those ...

  11. RECOVERY ACT - Methods for Decision under Technological Change Uncertainty and Risk Assessment for Integrated Assessment of Climate Change

    SciTech Connect

    Webster, Mort D.

    2015-11-30

    This report presents the final outcomes and products of the project as performed both at the Massachusetts Institute of Technology and subsequently at Pennsylvania State University. The research project can be divided into three main components: methodology development for decision-making under uncertainty, improving the resolution of the electricity sector to improve integrated assessment, and application of these methods to integrated assessment.

  12. RECOVERY ACT - Methods for Decision under Technological Change Uncertainty and Risk Assessment for Integrated Assessment of Climate Change

    SciTech Connect

    Webster, Mort David

    2015-03-10

    This report presents the final outcomes and products of the project as performed at the Massachusetts Institute of Technology. The research project consists of three main components: methodology development for decision-making under uncertainty, improving the resolution of the electricity sector to improve integrated assessment, and application of these methods to integrated assessment. Results in each area are described in the report.

  13. The Adult Asperger Assessment (AAA): A Diagnostic Method

    ERIC Educational Resources Information Center

    Baron-Cohen, Simon; Wheelwright, Sally; Robinson, Janine; Woodbury-Smith, Marc

    2005-01-01

    At the present time there are a large number of adults who have "suspected" Asperger syndrome (AS). In this paper we describe a new instrument, the Adult Asperger Assessment (AAA), developed in our clinic for adults with AS. The need for a new instrument relevant to the diagnosis of AS in adulthood arises because existing instruments are designed…

  14. Paper Trail: One Method of Information Literacy Assessment

    ERIC Educational Resources Information Center

    Nutefall, Jennifer

    2004-01-01

    Assessing students' information literacy skills can be difficult depending on the involvement of the librarian in a course. To overcome this, librarians created an assignment called the Paper Trail, where students wrote a short essay about their research process and reflected on what they would do differently. Through reviewing and grading these…

  15. Out of This World Genetics: A Fun, Simple Assessment Method.

    ERIC Educational Resources Information Center

    Nelson, Julie M.

    2002-01-01

    Presents a science activity in genetics that explains concepts such as dominant and recessive traits, monohybrid and dihybrid crosses, Punnett squares, and Mendel's Laws of Segregation and Independent Assortment. Uses the activity as an assessment tool to measure students' fundamental understanding. (YDS)

  16. Engine non-containment: UK risk assessment methods

    NASA Technical Reports Server (NTRS)

    Wallin, J. C.

    1977-01-01

    More realistic guideline data must be developed for use in aircraft design in order to comply with recent changes in British civil airworthiness requirements. Unrealistically pessimistic results were obtained when the methodology developed during the Concorde SST certification program was extended to assess catastrophic risks resulting from uncontained engine rotors.

  17. A Brief Method for Conducting a Negative-Reinforcement Assessment.

    ERIC Educational Resources Information Center

    Zarcone, Jennifer R.; Crosland, Kimberly; Fisher, Wayne W.; Worsdell, April S.; Herman, Kelly

    1999-01-01

    A brief negative-reinforcement assessment was conducted with five children (ages 4 to 14) with developmental disabilities with severe destructive behavior. Children were trained to engage in an escape response and were presented with a variety of stimuli. For each child, several stimuli were identified that may serve as effective negative…

  18. Assessment of in silico methods to estimate aquatic species sensitivity

    EPA Science Inventory

    Determining the sensitivity of a diversity of species to environmental contaminants continues to be a significant challenge in ecological risk assessment because toxicity data are generally limited to a few standard species. In many cases, QSAR models are used to estimate toxici...

  19. Calibration with confidence: a principled method for panel assessment

    PubMed Central

    MacKay, R. S.; Low, R. J.; Parker, S.

    2017-01-01

    Frequently, a set of objects has to be evaluated by a panel of assessors, but not every object is assessed by every assessor. A problem facing such panels is how to take into account different standards among panel members and varying levels of confidence in their scores. Here, a mathematically based algorithm is developed to calibrate the scores of such assessors, addressing both of these issues. The algorithm is based on the connectivity of the graph of assessors and objects evaluated, incorporating declared confidences as weights on its edges. If the graph is sufficiently well connected, relative standards can be inferred by comparing how assessors rate objects they assess in common, weighted by the levels of confidence of each assessment. By removing these biases, ‘true’ values are inferred for all the objects. Reliability estimates for the resulting values are obtained. The algorithm is tested in two case studies: one by computer simulation and another based on realistic evaluation data. The process is compared to the simple averaging procedure in widespread use, and to Fisher's additive incomplete block analysis. It is anticipated that the algorithm will prove useful in a wide variety of situations such as evaluation of the quality of research submitted to national assessment exercises; appraisal of grant proposals submitted to funding panels; ranking of job applicants; and judgement of performances on degree courses wherein candidates can choose from lists of options. PMID:28386432
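
    A minimal additive-bias sketch in the spirit of the approach described above (not the authors' algorithm): scores are modelled as an object value plus an assessor offset, weighted by declared confidences, and solved by weighted least squares over the assessor-object graph. The data and the sum-to-zero constraint are assumptions for illustration.

      import numpy as np

      # Hedged sketch: model each score as object value + assessor offset, weight the
      # equations by declared confidence, and solve by weighted least squares.
      # Data are invented: (assessor, object, score, confidence weight).
      scores = [(0, 0, 7.0, 1.0), (0, 1, 5.0, 0.5), (1, 1, 6.5, 1.0),
                (1, 2, 8.0, 1.0), (2, 2, 7.0, 0.5), (2, 0, 6.0, 1.0)]
      n_obj, n_ass = 3, 3

      rows, rhs, wts = [], [], []
      for a, o, s, c in scores:
          row = np.zeros(n_obj + n_ass)
          row[o] = 1.0               # coefficient of the object's "true" value
          row[n_obj + a] = 1.0       # coefficient of the assessor's offset (bias)
          rows.append(row); rhs.append(s); wts.append(c)
      # Gauge constraint: assessor offsets sum to zero (fixes the overall level).
      rows.append(np.r_[np.zeros(n_obj), np.ones(n_ass)]); rhs.append(0.0); wts.append(10.0)

      A, b, w = np.array(rows), np.array(rhs), np.sqrt(np.array(wts))
      x, *_ = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)
      print("inferred object values:", np.round(x[:n_obj], 2))
      print("inferred assessor offsets:", np.round(x[n_obj:], 2))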

  20. METHODS FOR ASSESSMENT OF URBAN WET-WEATHER FLOW IMPACTS

    EPA Science Inventory

    During the past decade, it has become apparent during numerous receiving water assessment studies that no one single approach (e.g., chemical-specific criteria, benthic microorganisms, or habitat surveys) can routinely be used to accurately determine or predict ecosystem health a...

  1. [The method of quantitative assessment of dentition aesthetic parameters].

    PubMed

    Ryakhovsky, A N; Kalacheva, Ya A

    2016-01-01

    This article describes the formula for calculating the aesthetic index of treatment outcome. The formula was derived on the basis of regression equations showing the dependence of visual assessment on the value of aesthetic violations. The formula can be used for objective quantitative evaluation of the aesthetics of the teeth when smiling, before and after dental treatment.

  2. "Portfolios" as a method of assessment in medical education.

    PubMed

    Haldane, Thea

    2014-01-01

    Portfolios are increasingly used in postgraduate medical education and in gastroenterology training as an assessment tool, as documentation of competence, a database of procedure experience (for example endoscopy experience) and for revalidation purposes. In this paper the educational theory behind their use is described and the evidence for their use is discussed.

  3. Probing meaningfulness of oscillatory EEG components with bootstrapping, label noise and reduced training sets.

    PubMed

    Castaño-Candamil, Sebastián; Meinel, Andreas; Dähne, Sven; Tangermann, Michael

    2015-01-01

    As oscillatory components of the Electroencephalogram (EEG) and other electrophysiological signals may co-modulate in power with a target variable of interest (e.g. reaction time), data-driven supervised methods have been developed to automatically identify such components based on labeled example trials. Under conditions of challenging signal-to-noise ratio, high-dimensional data and small training sets, however, these methods may overfit to meaningless solutions. Examples are spatial filtering methods like Common Spatial Patterns (CSP) and Source Power Comodulation (SPoC). It is difficult for the practitioner to tell apart meaningful from arbitrary, random components. We propose three approaches to probe the robustness of extracted oscillatory components and show their application to both simulated data and EEG data recorded during a visually cued hand motor reaction time task.
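
    One simple way to probe such robustness, sketched below with synthetic data, is a label-permutation test that compares the fit obtained on the true target variable with fits obtained after shuffling it; this schematic stand-in uses a plain correlation rather than the CSP/SPoC objectives and is not the authors' implementation.

      import numpy as np

      # Schematic robustness probe with synthetic data (not the authors' implementation):
      # compare the fit on the true target against fits on shuffled (noise) labels.
      rng = np.random.default_rng(0)
      n_trials = 120
      target = rng.normal(size=n_trials)                    # e.g. reaction times
      power = 0.6 * target + rng.normal(size=n_trials)      # component band power

      def fit_quality(y):
          return abs(np.corrcoef(power, y)[0, 1])           # stand-in for a SPoC-like objective

      true_fit = fit_quality(target)
      null_fits = np.array([fit_quality(rng.permutation(target)) for _ in range(1000)])
      p = (np.sum(null_fits >= true_fit) + 1) / (null_fits.size + 1)
      print(f"true fit {true_fit:.2f}, permutation p = {p:.3f}")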

  4. The potential for new methods to assess human reproductive genotoxicity

    SciTech Connect

    Mendelsohn, M.L.

    1987-09-01

    The immediate prospects are not good for practical methods for measuring the human heritable mutation rate. The methods discussed here range from speculative to impractical, and at best are sensitive enough only for large numbers of subjects. Given the rapid development of DNA methods and the current status of two-dimensional gel electrophoresis, there is some hope that the intermediate prospects may be better. In contrast, the prospects for useful cellular-based male germinal methods seem more promising and immediate. Effective specific locus methods for sperm are already conceivable and may be practical in a few years. Obviously such methods will not predict heritable effects definitively, but they will provide direct information on reproductive genotoxicity and should contribute significantly to many current medical and environmental situations where genetic damage is suspected. 22 refs.

  5. Fostering Self-Reflection and Meaningful Learning: Earth Science Professional Development for Middle School Science Teachers

    NASA Astrophysics Data System (ADS)

    Monet, Julie A.; Etkina, Eugenia

    2008-10-01

    This paper describes the analysis of teachers’ journal reflections during an inquiry-based professional development program. As a part of their learning experience, participants reflected on what they learned and how they learned. Progress in subject matter and pedagogical content knowledge was assessed through surveys and pre- and posttests. We found that teachers have difficulties reflecting on their learning and posing meaningful questions. The teachers who could describe how they reasoned from evidence to understand a concept had the highest learning gains. In contrast, those teachers who seldom or never described learning a concept by reasoning from evidence showed the smallest learning gains. This analysis suggests that learning to reflect on one’s learning should be an integral part of teachers’ professional development experiences.

  6. Peer Assessment in Group Projects Accounting for Assessor Reliability by an Iterative Method

    ERIC Educational Resources Information Center

    Ko, Sung-Seok

    2014-01-01

    This study proposes an advanced method to factor in the contributions of individual group members engaged in an integrated group project using peer assessment procedures. Conway et al. proposed the Individual Weight Factor (IWF) method for peer assessment which has been extensively developed over the years. However, most methods associated with…

  7. Assessing Key Assumptions of Network Meta-Analysis: A Review of Methods

    ERIC Educational Resources Information Center

    Donegan, Sarah; Williamson, Paula; D'Alessandro, Umberto; Tudur Smith, Catrin

    2013-01-01

    Background: Homogeneity and consistency assumptions underlie network meta-analysis (NMA). Methods exist to assess the assumptions but they are rarely and poorly applied. We review and illustrate methods to assess homogeneity and consistency. Methods: Eligible articles focussed on indirect comparison or NMA methodology. Articles were sought by…

  8. Assessing Computational Methods of Cis-Regulatory Module Prediction

    PubMed Central

    Su, Jing; Teichmann, Sarah A.; Down, Thomas A.

    2010-01-01

    Computational methods attempting to identify instances of cis-regulatory modules (CRMs) in the genome face a challenging problem of searching for potentially interacting transcription factor binding sites while knowledge of the specific interactions involved remains limited. Without a comprehensive comparison of their performance, the reliability and accuracy of these tools remains unclear. Faced with a large number of different tools that address this problem, we summarized and categorized them based on search strategy and input data requirements. Twelve representative methods were chosen and applied to predict CRMs from the Drosophila CRM database REDfly, and across the human ENCODE regions. Our results show that the optimal choice of method varies depending on species and composition of the sequences in question. When discriminating CRMs from non-coding regions, those methods considering evolutionary conservation have a stronger predictive power than methods designed to be run on a single genome. Different CRM representations and search strategies rely on different CRM properties, and different methods can complement one another. For example, some favour homotypical clusters of binding sites, while others perform best on short CRMs. Furthermore, most methods appear to be sensitive to the composition and structure of the genome to which they are applied. We analyze the principal features that distinguish the methods that performed well, identify weaknesses leading to poor performance, and provide a guide for users. We also propose key considerations for the development and evaluation of future CRM-prediction methods. PMID:21152003

  9. Sacrifice: an ethical dimension of caring that makes suffering meaningful.

    PubMed

    Helin, Kaija; Lindström, Unni A

    2003-07-01

    This article is intended to raise the question of whether sacrifice can be regarded as constituting a deep ethical structure in the relationship between patient and carer. The significance of sacrifice in a patient-carer relationship cannot, however, be fully understood from the standpoint of the consistently utilitarian ethic that characterizes today's ethical discourse. Deontological ethics, with its universal principles, also does not provide a suitable point of departure. Ethical recommendations and codices are important and serve as general sources of knowledge when making decisions, but they should be supplemented by an ethic that takes into consideration contextual and situational factors that make every encounter between patient and carer unique. Caring science research literature presents, on the whole, general agreement on the importance of responsibility and devotion with regard to sense of duty, warmth and genuine engagement in caring. That sacrifice may also constitute an important ethical element in the patient-carer relationship is, however, a contradictory and little-considered theme. Caring literature that deals with sacrifice/self-sacrifice indicates contradictory import. It is nevertheless interesting to notice that both the negative and the positive aspects bring out the importance of the concept for the professional character of caring. The tradition of ideas in medieval Christian mysticism, with reference to Lévinas' ethic of responsibility, offers a deeper perspective in which the meaningfulness of sacrifice in the caring relationship can be sought. The theme of sacrifice is not of interest merely as a carer's ethical outlook; sacrifice can also be understood as a potential process of transformation towards health. The instinctive or conscious experience of sacrifice on the part of the individual patient can, on a symbolic level, be regarded as analogous to the cultic or religious sacrifice aiming at atonement. Sacrifice appears to the patient as an act of

  10. The influence of digital filter type, amplitude normalisation method, and co-contraction algorithm on clinically relevant surface electromyography data during clinical movement assessments.

    PubMed

    Devaprakash, Daniel; Weir, Gillian J; Dunne, James J; Alderson, Jacqueline A; Donnelly, Cyril J

    2016-12-01

    There is a large and growing body of surface electromyography (sEMG) research using laboratory-specific signal processing procedures (i.e., digital filter type and amplitude normalisation protocols) and data analysis methods (i.e., co-contraction algorithms) to acquire practically meaningful information from these data. As a result, the ability to compare sEMG results between studies is, and continues to be, challenging. The aim of this study was to determine if digital filter type, amplitude normalisation method, and co-contraction algorithm could influence the practical or clinical interpretation of processed sEMG data. Sixteen elite female athletes were recruited. During data collection, sEMG data were recorded from nine lower limb muscles while completing a series of calibration and clinical movement assessment trials (running and sidestepping). Three analyses were conducted: (1) signal processing with two different digital filter types (Butterworth or critically damped), (2) three amplitude normalisation methods, and (3) three co-contraction ratio algorithms. Results showed the choice of digital filter did not influence the clinical interpretation of sEMG; however, the choice of amplitude normalisation method and co-contraction algorithm did influence the clinical interpretation of the running and sidestepping tasks. Care is recommended when choosing amplitude normalisation methods and co-contraction algorithms if researchers/clinicians are interested in comparing sEMG data between studies.
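
    To make the processing choices studied above concrete, here is a hedged sketch of one common sEMG pipeline; the 20-450 Hz Butterworth band-pass, 6 Hz envelope cut-off, within-trial peak normalisation and min/max co-contraction ratio are assumptions for illustration, not the specific options evaluated in the study.

      import numpy as np
      from scipy.signal import butter, filtfilt

      # Hedged sketch of one common sEMG pipeline; cut-offs, normalisation reference and
      # co-contraction formula are assumptions, not the study's evaluated choices.
      fs = 2000
      t = np.arange(0, 2, 1 / fs)
      rng = np.random.default_rng(0)
      raw_flexor = rng.normal(size=t.size) * (1.0 + np.sin(2 * np.pi * t) ** 2)  # synthetic sEMG
      raw_extensor = rng.normal(size=t.size) * 0.6

      def linear_envelope(x, fs, band=(20, 450), env_cut=6):
          b, a = butter(4, np.array(band) / (fs / 2), "bandpass")  # zero-lag Butterworth band-pass
          x = filtfilt(b, a, x)
          b, a = butter(4, env_cut / (fs / 2), "low")              # low-pass the rectified signal
          return np.clip(filtfilt(b, a, np.abs(x)), 0.0, None)

      flex = linear_envelope(raw_flexor, fs)
      ext = linear_envelope(raw_extensor, fs)
      flex /= flex.max()   # amplitude normalisation (here: peak of the trial itself)
      ext /= ext.max()

      ccr = np.mean(np.minimum(flex, ext) / (np.maximum(flex, ext) + 1e-9))  # co-contraction ratio
      print(f"mean co-contraction ratio: {ccr:.2f}")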

  11. Sustaining “Meaningful Use” of Health Information Technology in Low-Resource Practices

    PubMed Central

    Green, Lee A.; Potworowski, Georges; Day, Anya; May-Gentile, Rachelle; Vibbert, Danielle; Maki, Bruce; Kiesel, Leslie

    2015-01-01

    PURPOSE The implementation of electronic health records (EHRs) has been extensively studied, but their maintenance once implemented has not. The Regional Extension Center (REC) program provides implementation assistance to priority practices—those with limited financial, technical, and organizational resources—but the assistance is time limited. Our objective was to identify potential barriers to maintenance of meaningful use of EHRs in priority primary care practices, using a qualitative observational study of federally qualified health centers (FQHCs) and priority practices in Michigan. METHODS We conducted cognitive task analysis (CTA) interviews and direct observations of health information technology implementation in FQHCs. In addition, we conducted semistructured interviews with implementation specialists serving priority practices to detect emergent themes relevant to maintenance. RESULTS Maintaining EHR technology will require ongoing expert technical support indefinitely beyond implementation to address upgrades and security needs. Maintaining meaningful use for quality improvement will require ongoing support for leadership and change management. Priority practices not associated with larger systems lack access to the necessary technical expertise, financial resources, and leverage with vendors to continue alone. Rural priority practices are particularly challenged, because expertise is often not available locally. CONCLUSIONS Priority practices, especially in rural areas, are at high risk for falling on the wrong side of a “digital divide” as payers and regulators enact increasing expectations for EHR use and information management. For those without affiliation to maintain the necessary expert staff, ongoing support will be needed for those practices to remain viable. PMID:25583887

  12. System Maturity and Architecture Assessment Methods, Processes, and Tools

    DTIC Science & Technology

    2012-03-02

    For a detailed description of the SRL methodology see Sauser, B., J.E. Ramirez-Marquez, D. Nowicki, A. ... and Ramirez-Marquez 2009; Magnaye, Sauser et al. 2010). Although there are guidelines and tools to support the assessment process (Nolte, Kennedy ... employ these metrics (Tan, Sauser et al. 2011). Graettinger et al. (Graettinger, Garcia et al. 2002) report that approaches for readiness level

  13. Research on Utilization of Assessment Results and Methods

    DTIC Science & Technology

    1974-06-01

    tailored K programs based upon assessment results. Conclusions. The data resulting from this subtask will (1) contribute to curriculum planning designed ... Officer Advanced Course. Data for OCS students were insufficient for reliable analysis. For each of the above groups, mean scores were computed on 26 ... on identified deficiencies of student groups. Since such data result from systematic evaluations made within controlled environments, they can be used

  14. Method and system for dynamic probabilistic risk assessment

    NASA Technical Reports Server (NTRS)

    Dugan, Joanne Bechta (Inventor); Xu, Hong (Inventor)

    2013-01-01

    The DEFT methodology, system, and computer-readable medium extend the applicability of the PRA (Probabilistic Risk Assessment) methodology to computer-based systems by allowing DFT (Dynamic Fault Tree) nodes as pivot nodes in the Event Tree (ET) model. DEFT includes a mathematical model and solution algorithm, and supports all common PRA analysis functions and cutsets. Additional capabilities enabled by the DFT include modularization, phased mission analysis, sequence dependencies, and imperfect coverage.

  15. New method to assess manual lymph drainage using lymphoscintigraphy.

    PubMed

    de Godoy, José Maria Pereira; Iozzi, Adriana Joaquim; Azevedo, Walter Ferreira; Godoy, Maria de Fátima Guerreiro

    2012-08-27

    The aim of this study was to describe a new variation of the technique to evaluate lymph drainage utilizing lymphoscintigraphy. An LS scan marks the route of lymphatic vessels and may be used to assess both manual lymph drainage and lymph drainage performed using apparatuses. This evaluation may be dynamic, collecting images whilst performing lymph drainage, or static, with scans before and after the lymph drainage procedure.

  16. A brief method for conducting a negative-reinforcement assessment.

    PubMed

    Zarcone, J R; Crosland, K; Fisher, W W; Worsdell, A S; Herman, K

    1999-01-01

    A brief negative-reinforcement assessment was conducted with developmentally disabled children with severe destructive behavior. Five children were trained to engage in a simple escape response (e.g., a hand clap). Then each child was presented with a variety of stimuli or tasks that ranged on a scale from preferred to nonpreferred, based on parent ranking. The participant received a brief break from the stimuli or task, contingent on each escape response. For one child, an avoidance contingency was also implemented in which he could engage in the response to avoid the presentation of stimuli. Results showed that for each child, several stimuli were identified that may serve as effective negative reinforcers. Results also indicated that the procedure did not elicit any negative side effects for four children and low rates of destructive behavior for the fifth child. For one child, the results of the negative-reinforcement assessment were used to develop an effective treatment for destructive behavior. Additional applications of the reinforcement assessment to treatment interventions are discussed, as well as limitations of the procedure.

  17. Assessment of gene order computing methods for Alzheimer's disease

    PubMed Central

    2013-01-01

    Background Computational genomics of Alzheimer disease (AD), the most common form of senile dementia, is a nascent field in AD research. The field includes AD gene clustering by computing gene order, which generates higher-quality gene clustering patterns than most other clustering methods. However, there are few available gene order computing methods, such as Genetic Algorithm (GA) and Ant Colony Optimization (ACO). Further, their performance in gene order computation using AD microarray data is not known. We thus set forth to evaluate the performance of current gene order computing methods with different distance formulas, and to identify additional features associated with gene order computation. Methods Using different distance formulas (Pearson distance, Euclidean distance, and the squared Euclidean distance) and other conditions, gene orders were calculated by the ACO and GA (including standard GA and improved GA) methods, respectively. The qualities of the gene orders were compared, and new features from the calculated gene orders were identified. Results Compared to the GA methods tested in this study, ACO fits the AD microarray data the best when calculating gene order. In addition, the following features were revealed: different distance formulas generated a different quality of gene order, and the commonly used Pearson distance was not the best distance formula when used with both GA and ACO methods for AD microarray data. Conclusion Compared with Pearson distance and Euclidean distance, the squared Euclidean distance generated the best-quality gene order computed by the GA and ACO methods. PMID:23369541
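
    The distance formulas compared above can be written down directly; the two invented expression profiles below simply illustrate how the Pearson, Euclidean and squared Euclidean distances differ for the same pair of genes.

      import numpy as np

      # Two invented expression profiles; compare the distance formulas named above.
      g1 = np.array([2.0, 3.1, 4.2, 3.9, 5.0])
      g2 = np.array([1.1, 2.2, 3.0, 3.1, 4.2])

      pearson_dist = 1 - np.corrcoef(g1, g2)[0, 1]     # 1 minus the correlation coefficient
      euclidean = np.linalg.norm(g1 - g2)
      squared_euclidean = euclidean ** 2
      print(f"Pearson {pearson_dist:.3f}, Euclidean {euclidean:.3f}, squared Euclidean {squared_euclidean:.3f}")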

  18. An observational assessment method for aging laboratory rats

    EPA Science Inventory

    The growth of the aging population highlights the need for laboratory animal models to study the basic biological processes of aging and susceptibility to toxic chemicals and disease. Methods to evaluate the health of aging animals over time are needed, especially efficient methods for...

  19. Methods of the Quality of Higher Education Social Assessment

    ERIC Educational Resources Information Center

    Artushina, Irina; Troyan, Vladimir

    2007-01-01

    This article describes the methodological problems of ranking Russian universities by the independent ranking agency RatER. The main features of this ranking system are public assessments of the quality of higher education in Russia. The results of the sociological study were processed using original statistical methods. The procedure…

  20. Advances in validation, risk and uncertainty assessment of bioanalytical methods.

    PubMed

    Rozet, E; Marini, R D; Ziemons, E; Boulanger, B; Hubert, Ph

    2011-06-25

    Bioanalytical method validation is a mandatory step to evaluate the ability of developed methods to provide accurate results for their routine application, in order to trust the critical decisions that will be made with them. Even if several guidelines exist to help perform bioanalytical method validations, there is still a need to clarify the meaning and interpretation of bioanalytical method validation criteria and methodology. Yet, different interpretations can be made of the validation guidelines as well as of the definitions of the validation criteria. This leads to diverse experimental designs implemented to try to fulfil these criteria. Finally, different decision methodologies can also be interpreted from these guidelines. Therefore, the risk that a validated bioanalytical method may be unfit for its future purpose will depend on the analysts' personal interpretation of these guidelines. The objective of this review is thus to discuss and highlight several essential aspects of method validation, not restricted to chromatographic methods but also covering ligand-binding assays, owing to their increasing role in the biopharmaceutical industry. The points that will be reviewed are the common validation criteria, which are selectivity, standard curve, trueness, precision, accuracy, limits of quantification and range, dilutional integrity and analyte stability. Definitions, methodology, experimental design and decision criteria are reviewed. Two other points closely connected to method validation are also examined: incurred sample reproducibility testing and measurement uncertainty, as they are highly linked to the reliability of bioanalytical results. Their additional implementation is foreseen to strongly reduce the risk of having validated a bioanalytical method unfit for its purpose.

  1. Method for assessing motor insulation on operating motors

    DOEpatents

    Kueck, J.D.; Otaduy, P.J.

    1997-03-18

    A method for monitoring the condition of electrical-motor-driven devices is disclosed. The method is achieved by monitoring electrical variables associated with the functioning of an operating motor, applying these electrical variables to a three phase equivalent circuit and determining non-symmetrical faults in the operating motor based upon symmetrical components analysis techniques. 15 figs.

  2. Method for assessing motor insulation on operating motors

    DOEpatents

    Kueck, John D.; Otaduy, Pedro J.

    1997-01-01

    A method for monitoring the condition of electrical-motor-driven devices. The method is achieved by monitoring electrical variables associated with the functioning of an operating motor, applying these electrical variables to a three phase equivalent circuit and determining non-symmetrical faults in the operating motor based upon symmetrical components analysis techniques.
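
    As a hedged illustration of the symmetrical-components analysis referred to in both patent abstracts (the phasor values are invented, and the fault-detection thresholds used in practice are not shown), the Fortescue decomposition of three-phase currents into zero-, positive- and negative-sequence components can be computed as follows:

      import cmath

      # Hedged sketch: Fortescue decomposition of three-phase current phasors into
      # zero-, positive- and negative-sequence components (phasor values invented).
      a = cmath.exp(2j * cmath.pi / 3)                 # 120-degree rotation operator

      Ia = 10.0 * cmath.exp(0j)                        # slightly unbalanced phase currents (A)
      Ib = 9.2 * cmath.exp(-2j * cmath.pi / 3)
      Ic = 10.5 * cmath.exp(2j * cmath.pi / 3)

      I0 = (Ia + Ib + Ic) / 3                          # zero sequence
      I1 = (Ia + a * Ib + a * a * Ic) / 3              # positive sequence
      I2 = (Ia + a * a * Ib + a * Ic) / 3              # negative sequence (grows with asymmetry)

      print(f"|I1| = {abs(I1):.2f} A, |I2| = {abs(I2):.2f} A, unbalance = {abs(I2) / abs(I1):.1%}")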

  3. Measurement Properties of Indirect Assessment Methods for Functional Behavioral Assessment: A Review of Research

    ERIC Educational Resources Information Center

    Floyd, Randy G.; Phaneuf, Robin L.; Wilczynski, Susan M.

    2005-01-01

    Indirect assessment instruments used during functional behavioral assessment, such as rating scales, interviews, and self-report instruments, represent the least intrusive techniques for acquiring information about the function of problem behavior. This article provides criteria for examining the measurement properties of these instruments…

  4. An investigation of the reliability of Rapid Upper Limb Assessment (RULA) as a method of assessment of children's computing posture.

    PubMed

    Dockrell, Sara; O'Grady, Eleanor; Bennett, Kathleen; Mullarkey, Clare; Mc Connell, Rachel; Ruddy, Rachel; Twomey, Seamus; Flannery, Colleen

    2012-05-01

    Rapid Upper Limb Assessment (RULA) is a quick observation method of posture analysis. RULA has been used to assess children's computer-related posture, but the reliability of RULA on a paediatric population has not been established. The purpose of this study was to investigate the inter-rater and intra-rater reliability of the use of RULA with children. Video recordings of 24 school children were independently viewed by six trained raters who assessed their postures using RULA, on two separate occasions. RULA demonstrated higher intra-rater reliability than inter-rater reliability although both were moderate to good. RULA was more reliable when used for assessing the older children (8-12 years) than with the younger children (4-7 years). RULA may prove useful as part of an ergonomic assessment, but its level of reliability warrants caution for its sole use when assessing children, and in particular, younger children.

  5. Method selection for sustainability assessments: The case of recovery of resources from waste water.

    PubMed

    Zijp, M C; Waaijers-van der Loop, S L; Heijungs, R; Broeren, M L M; Peeters, R; Van Nieuwenhuijzen, A; Shen, L; Heugens, E H W; Posthuma, L

    2017-04-05

    Sustainability assessments provide scientific support in decision procedures towards sustainable solutions. However, in order to contribute to identifying and choosing sustainable solutions, the sustainability assessment has to fit the decision context. Two complicating factors exist. First, different stakeholders tend to have different views on what a sustainability assessment should encompass. Second, a plethora of sustainability assessment methods exists, owing to the multi-dimensional character of the concept, and different methods provide different representations of sustainability. Based on a literature review, we present a protocol to facilitate method selection together with stakeholders. The protocol guides the exploration of i) the decision context, ii) the different views of stakeholders and iii) the selection of pertinent assessment methods. In addition, we present an online tool for method selection. This tool identifies assessment methods that meet the specifications obtained with the protocol, and currently contains characteristics of 30 sustainability assessment methods. The utility of the protocol and the tool is tested in a case study on the recovery of resources from domestic waste water. In several iterations, a combination of methods was selected, followed by execution of the selected sustainability assessment methods. The assessment results can be used in the first phase of the decision procedure that leads to a strategic choice for sustainable resource recovery from waste water in the Netherlands.

  6. The assessment methods of laryngeal muscle activity in muscle tension dysphonia: a review.

    PubMed

    Khoddami, Seyyedeh Maryam; Nakhostin Ansari, Noureddin; Izadi, Farzad; Talebian Moghadam, Saeed

    2013-11-04

    The purpose of this paper is to review the methods used for the assessment of muscular tension dysphonia (MTD). MTD is a functional voice disorder associated with abnormal laryngeal muscle activity. Various assessment methods are available in the literature to evaluate laryngeal hyperfunction. The case history, laryngoscopy, and palpation are clinical methods for the assessment of patients with MTD. Radiography and surface electromyography (EMG) are objective methods that provide physiological information about MTD. Recent studies show that surface EMG can be an effective tool for assessing muscular tension in MTD.

  7. Comparative assessment of bioanalytical method validation guidelines for pharmaceutical industry.

    PubMed

    Kadian, Naveen; Raju, Kanumuri Siva Rama; Rashid, Mamunur; Malik, Mohd Yaseen; Taneja, Isha; Wahajuddin, Muhammad

    2016-07-15

    The concepts, importance, and application of bioanalytical method validation have been discussed for a long time, and validation of bioanalytical methods is widely accepted as pivotal before they are taken into routine use. The United States Food and Drug Administration (USFDA) guidelines issued in 2001 have been referenced by every bioanalytical method validation guideline released since, be it from the European Medicines Agency (EMA) in Europe, the National Health Surveillance Agency (ANVISA) in Brazil, the Ministry of Health, Labour and Welfare (MHLW) in Japan, or any other authority. After 12 years, the USFDA released its new draft guideline for comments in 2013, which covers the latest parameters and topics encountered in bioanalytical method validation and moves towards the harmonization of bioanalytical method validation across the globe. Even though the regulatory agencies are in general agreement, significant variations exist in acceptance criteria and methodology. The present review highlights the variations, similarities and comparisons between bioanalytical method validation guidelines issued by major regulatory authorities worldwide. Additionally, other evaluation parameters such as matrix effect and incurred sample reanalysis, including other stability aspects, are discussed to provide ease of access when designing a bioanalytical method and its validation complying with the majority of drug authority guidelines.

  8. Cost Template for Meaningful Activity Intervention for Mild Cognitive Impairment

    PubMed Central

    Yueh-Feng Lu, Yvonne; Bakas, Tamilyn; Haase, Joan E.

    2013-01-01

    Purpose To describe and compare cost estimates for a pilot study of the Daily Enhancement of Meaningful Activity (DEMA) intervention for persons with mild cognitive impairment (PwMCI)-caregiver dyads. Background The increasing complexity of the health care system and rising health care costs have forced nurse scientists to find ways to effectively improve health care quality and control costs, but no studies have examined costs for new programs that target PwMCI-caregiver dyads. Description of the project Pilot study data were used to develop a cost template and calculate the cost of implementing DEMA. Outcomes Mean cost per dyad was estimated to be $1,327.97 in the clinical setting, compared with $1,069.06 if a telephone delivery mode had been used for four of the six face-to-face sessions. This difference was largely due to transportation-related expenses and staff costs. Implications DEMA should be evaluated further with larger and more diverse samples as a technology-delivered health promotion program that could reduce costs. PMID:23392066

  9. Redefining meaningful age groups in the context of disease.

    PubMed

    Geifman, Nophar; Cohen, Raphael; Rubin, Eitan

    2013-12-01

    Age is an important factor when considering phenotypic changes in health and disease. Currently, the use of age information in medicine is somewhat simplistic, with ages commonly being grouped into a small number of crude ranges reflecting the major stages of development and aging, such as childhood or adolescence. Here, we investigate the possibility of redefining age groups using the recently developed Age-Phenome Knowledge-base (APK) that holds over 35,000 literature-derived entries describing relationships between age and phenotype. Clustering of APK data suggests 13 new, partially overlapping, age groups. The diseases that define these groups suggest that the proposed divisions are biologically meaningful. We further show that the number of different age ranges that should be considered depends on the type of disease being evaluated. This finding was further strengthened by similar results obtained from clinical blood measurement data. The grouping of diseases that share a similar pattern of disease-related reports directly mirrors, in some cases, medical knowledge of disease-age relationships. In other cases, our results may be used to generate new and reasonable hypotheses regarding links between diseases.

  10. Individual olfactory perception reveals meaningful nonolfactory genetic information.

    PubMed

    Secundo, Lavi; Snitz, Kobi; Weissler, Kineret; Pinchover, Liron; Shoenfeld, Yehuda; Loewenthal, Ron; Agmon-Levin, Nancy; Frumin, Idan; Bar-Zvi, Dana; Shushan, Sagit; Sobel, Noam

    2015-07-14

    Each person expresses a potentially unique subset of ∼400 different olfactory receptor subtypes. Given that the receptors we express partially determine the odors we smell, it follows that each person may have a unique nose; to capture this, we devised a sensitive test of olfactory perception we termed the "olfactory fingerprint." Olfactory fingerprints relied on matrices of perceived odorant similarity derived from descriptors applied to the odorants. We initially fingerprinted 89 individuals using 28 odors and 54 descriptors. We found that each person had a unique olfactory fingerprint (P < 10^-10), which was odor specific but descriptor independent. We could identify individuals from this pool using randomly selected sets of 7 odors and 11 descriptors alone. Extrapolating from these data, we determined that using 34 odors and 35 descriptors we could individually identify each of the 7 billion people on earth. Olfactory perception, however, fluctuates over time, calling into question our proposed perceptual readout of presumably stable genetic makeup. To test whether fingerprints remain informative despite this temporal fluctuation, building on the linkage between olfactory receptors and HLA, we hypothesized that olfactory perception may relate to HLA. We obtained olfactory fingerprints and HLA typing for 130 individuals, and found that olfactory fingerprint matching using only four odorants was significantly related to HLA matching (P < 10^-4), such that olfactory fingerprints can save 32% of HLA tests in a population screen (P < 10^-6). In conclusion, a precise measure of olfactory perception reveals meaningful nonolfactory genetic information.

  11. A method to assess the bacterial content of refrigerated meat.

    PubMed Central

    Perez de Castro, B; Asensio, M A; Sanz, B; Ordoñez, J A

    1988-01-01

    A new method has been developed to estimate the levels of gram-negative bacteria on refrigerated meat. The method is based on the aminopeptidase activity of these bacteria, which cleaves L-alanine-p-nitroanilide to yield p-nitroaniline, which is easily determined spectrophotometrically. This method allows the determination of levels around 10^6 to 10^7 CFU cm^-2 in about 3 h. Because of the yellow color of p-nitroaniline, bacterial loads around 10^7 CFU cm^-2 develop a color intense enough to be detected with the naked eye. PMID:3415222

  12. Methods to assess sensitivity of optical coherence tomography systems

    PubMed Central

    Agrawal, Anant; Pfefer, T. Joshua; Woolliams, Peter D.; Tomlins, Peter H.; Nehmetallah, George

    2017-01-01

    Measuring the sensitivity of an optical coherence tomography (OCT) system determines the minimum sample reflectivity it can detect and provides a figure of merit for system optimization and comparison. The published literature lacks a detailed description of OCT sensitivity measurement procedures. Here we describe a commonly-used measurement method and introduce two new phantom-based methods, which also offer a means to directly visualize low reflectivity conditions relevant to biological tissue. We provide quantitative results for the three methods from different OCT system configurations and discuss the methods’ advantages and disadvantages. PMID:28270992

  13. Use of scientometrics to assess nuclear and other analytical methods

    SciTech Connect

    Lyon, W.S.

    1986-01-01

    Scientometrics involves the use of quantitative methods to investigate science viewed as an information process. Scientometric studies can be useful in ascertaining which methods have been most employed for various analytical determinations, as well as for predicting which methods will continue to be used in the immediate future and which appear to be losing favor with the analytical community. Published papers in the technical literature are the primary source materials for scientometric studies; statistical methods and computer techniques are the tools. Recent studies have included growth and trends in prompt nuclear analysis; the impact of research published in a technical journal; and the institutional and national representation of speakers and topics at several IAEA conferences, at Modern Trends in Activation Analysis conferences, and at other non-nuclear-oriented conferences. Attempts have also been made to predict the future growth of various topics and techniques. 13 refs., 4 figs., 17 tabs.

  14. Caries assessment: establishing mathematical link of clinical and benchtop method

    NASA Astrophysics Data System (ADS)

    Amaechi, Bennett T.

    2009-02-01

    It is well established that the development of new technologies for early detection and quantitative monitoring of dental caries at its early stage could provide health and economic benefits ranging from timely preventive interventions to reduction of the time required for clinical trials of anti-caries agents. However, the new technologies currently used in the clinical setting cannot assess and monitor caries using the actual mineral concentration within the lesion, while laboratory-based microcomputed tomography (MCT) has been shown to possess this capability. We therefore envision that establishing mathematical equations relating the measurements of each clinical technology to those of MCT will enable the mineral concentration of lesions detected and assessed in clinical practice to be extrapolated from the equations, which will facilitate preventive care in dentistry and lower treatment cost. We utilized MCT and two prominent clinical caries assessment devices (Quantitative Light-induced Fluorescence [QLF] and Diagnodent) to longitudinally monitor the development of caries in a continuous-flow mixed-organism biofilm model (artificial mouth), and then used the collected data to establish mathematical equations relating the measurements of each clinical technology to those of MCT. A linear correlation was observed between the measurements of MCT and those of QLF and Diagnodent. Thus the mineral density in a carious lesion detected and measured using QLF or Diagnodent can be extrapolated using the developed equations. This highlights the usefulness of MCT for monitoring the progress of an early caries lesion being treated with therapeutic agents in clinical practice or trials.

  15. Anti-aging cosmetics and its efficacy assessment methods

    NASA Astrophysics Data System (ADS)

    Li, Xiang

    2015-07-01

    The mechanisms of skin aging, the active ingredients used in anti-aging cosmetics, and the evaluation methods for anti-aging cosmetics are summarized in this paper. The mechanisms of skin aging are introduced in terms of both intrinsic and extrinsic factors. The anti-aging active ingredients are classified according to their mechanism of action, and various evaluation methods, such as human evaluation and in vitro evaluation, are also summarized.

  16. Signal Processing Methods for Liquid Rocket Engine Combustion Stability Assessments

    NASA Technical Reports Server (NTRS)

    Kenny, R. Jeremy; Lee, Erik; Hulka, James R.; Casiano, Matthew

    2011-01-01

    The J2X Gas Generator engine design specifications include dynamic, spontaneous, and broadband combustion stability requirements. These requirements are verified empirically based on high-frequency chamber pressure measurements and analyses. Dynamic stability is determined with the dynamic pressure response due to an artificial perturbation of the combustion chamber pressure (bomb testing), and spontaneous and broadband stability are determined from the dynamic pressure responses during steady operation starting at specified power levels. J2X Workhorse Gas Generator testing included bomb tests with multiple hardware configurations and operating conditions, including a configuration used explicitly for engine verification test series. This work covers signal processing techniques developed at Marshall Space Flight Center (MSFC) to help assess engine design stability requirements. Dynamic stability assessments were performed following both the CPIA 655 guidelines and an MSFC in-house developed statistically based approach. The statistical approach was developed to better verify when the dynamic pressure amplitudes corresponding to a particular frequency returned to pre-bomb characteristics. This was accomplished by first determining the statistical characteristics of the pre-bomb dynamic levels. The pre-bomb statistical characterization provided 95% coverage bounds; these bounds were used as a quantitative measure to determine when the post-bomb signal returned to pre-bomb conditions. The time for post-bomb levels to acceptably return to pre-bomb levels was compared to the dominant frequency-dependent time recommended by CPIA 655. Results for multiple test configurations, including stable and unstable configurations, were reviewed. Spontaneous stability was assessed using two processes: 1) characterization of the ratio of the peak response amplitudes to the excited chamber acoustic mode amplitudes and 2) characterization of the variability of the peak response
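
    The statistical recovery check described above can be sketched in a few lines. The snippet below is a minimal illustration rather than the MSFC tool: it assumes a band-limited pressure amplitude envelope sampled at a fixed rate, takes percentile-based 95% coverage bounds from the pre-bomb segment, and reports when the post-bomb envelope first stays inside those bounds for a chosen dwell time. The resulting recovery time can then be compared with the frequency-dependent time recommended by CPIA 655.

```python
import numpy as np

def recovery_time(pre_bomb, post_bomb, dt, dwell_s=0.1):
    """Estimate when post-bomb amplitudes return to pre-bomb behaviour.

    pre_bomb, post_bomb : 1-D arrays of band-limited pressure amplitudes
    dt                  : sample spacing in seconds
    dwell_s             : how long the signal must stay inside the bounds
    """
    # 95% coverage bounds of the pre-bomb amplitude distribution
    lo, hi = np.percentile(pre_bomb, [2.5, 97.5])

    inside = (post_bomb >= lo) & (post_bomb <= hi)
    need = max(1, int(round(dwell_s / dt)))

    run = 0
    for i, ok in enumerate(inside):
        run = run + 1 if ok else 0
        if run >= need:
            # start of the first run that stays inside the bounds long enough
            return (i - need + 1) * dt
    return None  # never recovered within the analysed window
```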

  17. Assessing Internet energy intensity: A review of methods and results

    SciTech Connect

    Coroama, Vlad C.; Hilty, Lorenz M.

    2014-02-15

    Assessing the average energy intensity of Internet transmissions is a complex task that has been a controversial subject of discussion. Estimates published over the last decade diverge by up to four orders of magnitude — from 0.0064 kilowatt-hours per gigabyte (kWh/GB) to 136 kWh/GB. This article presents a review of the methodological approaches used so far in such assessments: i) top–down analyses based on estimates of the overall Internet energy consumption and the overall Internet traffic, whereby average energy intensity is calculated by dividing energy by traffic for a given period of time, ii) model-based approaches that model all components needed to sustain an amount of Internet traffic, and iii) bottom–up approaches based on case studies and generalization of the results. Our analysis of the existing studies shows that the large spread of results is mainly caused by two factors: a) the year of reference of the analysis, which has significant influence due to efficiency gains in electronic equipment, and b) whether end devices such as personal computers or servers are included within the system boundary or not. For an overall assessment of the energy needed to perform a specific task involving the Internet, it is necessary to account for the types of end devices needed for the task, while the energy needed for data transmission can be added based on a generic estimate of Internet energy intensity for a given year. Separating the Internet as a data transmission system from the end devices leads to more accurate models and to results that are more informative for decision makers, because end devices and the networking equipment of the Internet usually belong to different spheres of control. -- Highlights: • Assessments of the energy intensity of the Internet differ by a factor of 20,000. • We review top–down, model-based, and bottom–up estimates from literature. • Main divergence factors are the year studied and the inclusion of end devices

  18. Apparatus and Method for Assessing Vestibulo-Ocular Function

    NASA Technical Reports Server (NTRS)

    Shelhamer, Mark J. (Inventor)

    2015-01-01

    A system for assessing vestibulo-ocular function includes a motion sensor system adapted to be coupled to a user's head; a data processing system configured to communicate with the motion sensor system to receive the head-motion signals; a visual display system configured to communicate with the data processing system to receive image signals from the data processing system; and a gain control device arranged to be operated by the user and to communicate gain adjustment signals to the data processing system.

  19. Assessing the reliability of nondestructive evaluation methods for damage characterization

    NASA Astrophysics Data System (ADS)

    Aldrin, John C.; Annis, Charles; Sabbagh, Harold A.; Knopp, Jeremy S.; Lindgren, Eric A.

    2014-02-01

    A comprehensive approach to NDE characterization error evaluation is presented that follows the framework of the `ahat-versus-a' model evaluation process for probability of detection (POD) assessment. Before characterization error model building is performed, an intermediate step must evaluate the presence and frequency of several possible classes of poor characterization results. A case study is introduced based on estimation of the length, depth, and width of surface-breaking cracks using bolt hole eddy current (BHEC) NDE. This study highlights the importance of engineering and statistical expertise in the model-building process to ensure all key effects and possible interactions are addressed.

  20. The Assessment of Experimental Methods of Serial Number Restoration

    NASA Astrophysics Data System (ADS)

    Argo, Mackenzie

    Serial number restoration is a common and successful process of revealing obliterated serial numbers on firearms. In a crime laboratory setting, obliterated serial numbers are commonly processed in order to tie a person to a crime scene or to provide an investigative lead for officers. Currently, serial numbers are restored using a chemical etchant method that can continue to eat away at the metal of the firearm even after the examination is complete; it can also take several hours and may provide an examiner with only a partial number. Other nondestructive options exist, but little to no literature is available on them. The purpose of this study was to investigate new methods for nondestructive serial number restoration and to compare them to the traditional chemical method. Metal bars of premeasured obliteration depths and different compositions were examined using three proposed experimental methods: near-infrared imaging, cold frost, and scanning acoustic microscopy. Results did not indicate a significant difference in the median number of visible digits recovered by each of the three proposed methods compared to the traditional chemical method. Significant effects were found, however, for the composition utilized and the depth of obliteration, indicating that firearm composition and obliteration depth affect serial number restoration.

  1. An assessment of precipitation adjustment and feedback computation methods

    NASA Astrophysics Data System (ADS)

    Richardson, T. B.; Samset, B. H.; Andrews, T.; Myhre, G.; Forster, P. M.

    2016-10-01

    The precipitation adjustment and feedback framework is a useful tool for understanding global and regional precipitation changes. However, there is no definitive method for making the decomposition. In this study we highlight important differences which arise in results due to methodological choices. The responses to five different forcing agents (CO2, CH4, SO4, black carbon, and solar insolation) are analyzed using global climate model simulations. Three decomposition methods are compared: using fixed sea surface temperature experiments (fSST), regressing transient climate change after an abrupt forcing (regression), and separating based on timescale using the first year of coupled simulations (YR1). The YR1 method is found to incorporate significant SST-driven feedbacks into the adjustment and is therefore not suitable for making the decomposition. Globally, the regression and fSST methods produce generally consistent results; however, the regression values are dependent on the number of years analyzed and have considerably larger uncertainties. Regionally, there are substantial differences between methods. The pattern of change calculated using regression reverses sign in many regions as the number of years analyzed increases. This makes it difficult to establish what effects are included in the decomposition. The fSST method provides a more clear-cut separation in terms of what physical drivers are included in each component. The fSST results are less affected by methodological choices and exhibit much less variability. We find that the precipitation adjustment is weakly affected by the choice of SST climatology.
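
    For the regression method mentioned above, a Gregory-style decomposition can be sketched as follows: annual-mean precipitation change after an abrupt forcing is regressed against annual-mean surface temperature change, the intercept at zero warming being read as the rapid adjustment and the slope as the SST-mediated feedback. This is a generic sketch under those assumptions, not a reproduction of the study's processing.

```python
import numpy as np

def regression_decomposition(delta_T, delta_P):
    """Split a precipitation response into adjustment and feedback.

    delta_T, delta_P : annual global means of temperature and precipitation
                       change relative to the control run, for years 1..N
                       after an abrupt forcing is applied.
    Returns (adjustment, feedback): the intercept at delta_T = 0 and the slope.
    """
    slope, intercept = np.polyfit(np.asarray(delta_T, dtype=float),
                                  np.asarray(delta_P, dtype=float), 1)
    return intercept, slope
```

    The fSST method instead diagnoses the adjustment directly from fixed-sea-surface-temperature experiments, which is why the abstract describes it as the cleaner separation.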

  2. Real-time Continuous Assessment Method for Mental and Physiological Condition using Heart Rate Variability

    NASA Astrophysics Data System (ADS)

    Yoshida, Yutaka; Yokoyama, Kiyoko; Ishii, Naohiro

    It is necessary to monitor daily health condition in order to prevent stress syndrome. In this study, a method is proposed for assessing mental and physiological condition, such as work stress or relaxation, using heart rate variability in real time and continuously. The instantaneous heart rate (HR) and the ratio of the number of extreme points (NEP) to the number of heart beats were calculated for assessing mental and physiological condition. In this method, 20 heart beats are used to calculate these indexes, which are updated at every beat interval. Three conditions (sitting at rest, performing mental arithmetic, and watching a relaxation movie) were assessed using the proposed algorithm. The assessment accuracies were 71.9% and 55.8% when performing mental arithmetic and watching the relaxation movie, respectively. Because the mental and physiological condition is assessed using only the 20 most recent heart beats, this method can be regarded as a real-time assessment method.
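
    A rough sketch of the sliding 20-beat computation is given below. It assumes RR intervals in seconds are already available from a beat detector, and it reads "number of extreme points" as the count of local maxima and minima of the RR series inside the window; that reading, and the function name, are assumptions rather than the authors' implementation.

```python
import numpy as np

def hr_and_nep_ratio(rr_intervals_s, window=20):
    """Beat-by-beat indexes from RR intervals (in seconds).

    For every beat once `window` beats are available, return the
    instantaneous heart rate and the ratio of extreme points (local
    maxima and minima of the RR series) to the number of beats.
    """
    rr = np.asarray(rr_intervals_s, dtype=float)
    out = []
    for i in range(window, len(rr) + 1):
        seg = rr[i - window:i]
        hr = 60.0 / seg[-1]                      # bpm from the latest interval
        interior = seg[1:-1]
        maxima = (interior > seg[:-2]) & (interior > seg[2:])
        minima = (interior < seg[:-2]) & (interior < seg[2:])
        out.append((hr, (maxima.sum() + minima.sum()) / window))
    return out
```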

  3. Self-Assessment and Dialogue as Tools for Appreciating Diversity

    ERIC Educational Resources Information Center

    O'Neal, Gwenelle S.

    2012-01-01

    As social work educators continue to examine methods and techniques to provide meaningful knowledge about racism and discrimination, the role of self-assessment and dialogue should also be explored. This teaching note presents a tool for students and educators to use in considering literature on discrimination and increasing awareness of…

  4. Principals' Use of Assessment Data to Drive Student Academic Achievement

    ERIC Educational Resources Information Center

    Henry, Stephanie Stewart

    2011-01-01

    Principals are expected to use data to develop a clear vision, stimulate meaningful dialogue among stakeholders, create interventions for struggling students, and improve school programs. The purpose of this three-phase, sequential, mixed-methods study was to examine how principals use summative and formative assessment data to improve academic…

  5. Participation in health impact assessment: objectives, methods and core values.

    PubMed Central

    Wright, John; Parry, Jayne; Mathers, Jonathan

    2005-01-01

    Health impact assessment (HIA) is a multidisciplinary aid to decision-making that assesses the impact of policy on public health and on health inequalities. Its purpose is to assist decision-makers to maximize health gains and to reduce inequalities. The 1999 Gothenburg Consensus Paper (GCP) provides researchers with a rationale for establishing community participation as a core value of HIA. According to the GCP, participation in HIA empowers people within the decision-making process and redresses the democratic deficit between government and society. Participation in HIA generates a sense that health and decision-making is community-owned, and the personal experiences of citizens become integral to the formulation of policy. However, the participatory and empowering dimensions of HIA may prove difficult to operationalize. In this review of the participation strategies adopted in key applications of HIA in the United Kingdom, we found that HIA's aim of influencing decision-making creates tension between its participatory and knowledge-gathering dimensions. Accordingly, researchers have decreased the participatory dimension of HIA by reducing the importance attached to the community's experience of empowerment, ownership and democracy, while enlarging its knowledge-gathering dimension by giving pre-eminence to "expert" and "research-generated" evidence. Recent applications of HIA offer a serviceable rationale for participation as a means of information gathering and it is no longer tenable to uphold HIA as a means of empowering communities and advancing the aims of participatory democracy. PMID:15682250

  6. A general method for the quantitative assessment of mineral pigments.

    PubMed

    Ares, M C Zurita; Fernández, J M

    2016-01-01

    A general method for the estimation of mineral pigment contents in different bases is proposed using a single set of calibration curves (one for each pigment) calculated for a white standard base, so that elaborating calibration patterns for each base in use is not necessary. The method can be used with different bases, and its validity has been proved even in strongly tinted bases. The method consists of a novel procedure that combines diffuse reflectance spectroscopy, second derivatives, and the Kubelka-Munk function. This technique has proved to be at least one order of magnitude more sensitive than X-ray diffraction for colored compounds, since it allowed the determination of the pigment amount in colored samples containing 0.5 wt% of pigment that was not detected by X-ray diffraction. The method can be used to estimate the concentration of mineral pigments in a wide variety of either natural or artificial materials, since it does not require the calculation of a pattern for each pigment in every base. This fact could have important industrial consequences, as the proposed method is more convenient, faster, and cheaper.
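
    A minimal sketch of the ingredients named above (the Kubelka-Munk remission function, a numerical second derivative, and a linear read-off from a calibration curve) is shown below. The calibration slope and intercept are assumed to come from the white standard base, and the characteristic band position of the pigment is assumed known; both are placeholders rather than values from the paper.

```python
import numpy as np

def kubelka_munk(reflectance):
    """Kubelka-Munk remission function F(R) = (1 - R)^2 / (2R)."""
    r = np.clip(np.asarray(reflectance, dtype=float), 1e-6, 1.0)
    return (1.0 - r) ** 2 / (2.0 * r)

def second_derivative(spectrum, wavelengths_nm):
    """Numerical second derivative of a spectrum with respect to wavelength."""
    first = np.gradient(spectrum, wavelengths_nm)
    return np.gradient(first, wavelengths_nm)

def estimate_content(sample_R, wavelengths_nm, band_nm, slope, intercept):
    """Read the pigment content (wt%) off a linear calibration curve."""
    d2 = second_derivative(kubelka_munk(sample_R), wavelengths_nm)
    amplitude = d2[np.argmin(np.abs(wavelengths_nm - band_nm))]
    return slope * amplitude + intercept
```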

  7. USING AN INTENSIVE ASSESSMENT METHOD TO CALIBRATE A RAPID WETLAND ASSESSMENT METHOD: AN EXAMPLE FROM NANTICOKE BASIN, DELAWARE AND MARYLAND, USA

    EPA Science Inventory

    The development of rapid assessment methods has become a priority for many organizations that want to report on the condition of wetlands at larger scales requiring many sampling sites. To have faith in these rapid methods, however, requires that they be verified with more compr...

  8. A Method for Evaluating Competency in Assessment and Management of Suicide Risk

    ERIC Educational Resources Information Center

    Hung, Erick K.; Binder, Renee L.; Fordwood, Samantha R.; Hall, Stephen E.; Cramer, Robert J.; McNiel, Dale E.

    2012-01-01

    Objective: Although health professionals increasingly are expected to be able to assess and manage patients' risk for suicide, few methods are available to evaluate this competency. This report describes development of a competency-assessment instrument for suicide risk-assessment (CAI-S), and evaluates its use in an objective structured clinical…

  9. Comparing Assessment Methods as Predictors of Student Learning in an Undergraduate Mathematics Course

    ERIC Educational Resources Information Center

    Shorter, Nichole A.; Young, Cynthia Y.

    2011-01-01

    This experiment was designed to determine which assessment method: continuous assessment (in the form of daily in-class quizzes), cumulative assessment (in the form of online homework), or project-based learning, best predicted student learning (dependent upon post-test grades) in an undergraduate mathematics course. Participants included 117…

  10. Assessing Autonomous Learning in Research Methods Courses: Implementing the Student-Driven Research Project

    ERIC Educational Resources Information Center

    Vandiver, Donna M.; Walsh, Jeffrey A.

    2010-01-01

    As empirical assessments of teaching strategies increase in many disciplines and across many different courses, a paucity of such assessment seems to exist in courses devoted to social science research methods. This lack of assessment and evaluation impedes progress in developing successful teaching pedagogy. The teaching-learning issue addressed…

  11. PWSCC Assessment by Using Extended Finite Element Method

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Jun; Lee, Sang-Hwan; Chang, Yoon-Suk

    2015-12-01

    The head penetration nozzle of the control rod driving mechanism (CRDM) is known to be susceptible to primary water stress corrosion cracking (PWSCC) due to welding-induced residual stress. In particular, the J-groove dissimilar metal weld regions have received much attention in previous studies. However, even though several advanced techniques such as the weight function and finite element alternating methods have been introduced to predict the occurrence of PWSCC, difficulties remain with respect to applicability and efficiency. In this study, the extended finite element method (XFEM), which allows convenient crack modeling by enriching degrees of freedom (DOF) with special displacement functions, was employed to evaluate the structural integrity of the CRDM head penetration nozzle. The resulting stress intensity factors of surface cracks were verified against those suggested in the American Society of Mechanical Engineers (ASME) code to establish the reliability of the proposed method. The detailed results from the FE analyses are fully discussed in the manuscript.

  12. Methods of Assessing and Achieving Normality Applied to Environmental Data

    PubMed

    Mateu

    1997-09-01

    It has been recognized for a long time that data transformation methods capable of achieving normality of distributions can play a crucial role in statistical analysis, especially for the efficient application of techniques such as analysis of variance and multiple regression analysis. Normality is a basic assumption in many of the statistical methods used in the environmental sciences and is very often neglected. In this paper several techniques to test the normality of distributions are proposed and analyzed. Confidence intervals and nonparametric tests are used and discussed. Basic and Box-Cox transformations are the suggested methods to achieve normal variables. Finally, we develop an application to environmental data with atmospheric parameters and SO2 and particle concentrations. Results show that the analyzed transformations work well and are very useful for achieving normal distributions. KEY WORDS: Normal distribution; Kurtosis; Skewness; Confidence intervals; Box-Cox transformations; Nonparametric tests
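
    As a short illustration of that workflow, the snippet below tests normality before and after a Box-Cox transformation on simulated, skewed concentration data; the data are synthetic stand-ins, not the atmospheric measurements analysed in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
conc = rng.lognormal(mean=1.0, sigma=0.6, size=200)   # skewed, concentration-like data

w_raw, p_raw = stats.shapiro(conc)                    # normality before transformation
transformed, lam = stats.boxcox(conc)                 # maximum-likelihood Box-Cox
w_bc, p_bc = stats.shapiro(transformed)

print(f"Shapiro-Wilk p-value: raw {p_raw:.3g}, Box-Cox (lambda={lam:.2f}) {p_bc:.3g}")
print("skewness:", round(stats.skew(conc), 2), "->", round(stats.skew(transformed), 2))
```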

  13. Sequential sampling: a novel method in farm animal welfare assessment.

    PubMed

    Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J

    2016-02-01

    Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. For the third scheme, an overall
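
    The "basic" two-stage scheme might be simulated along the lines below. The pass/fail threshold and the early-stopping margin are assumptions made for illustration, since the decision rules are specified in the full paper rather than in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

def two_stage_classification(herd_lame, wq_n, threshold=0.15, margin=0.05):
    """Illustrative two-stage ('basic') sequential sampling scheme.

    herd_lame : boolean array-like, lameness status of every cow in the herd
    wq_n      : Welfare Quality sample size for this herd size
    Returns ('pass' or 'fail', number of cows actually scored).
    """
    herd_lame = np.asarray(herd_lame, dtype=bool)
    half = wq_n // 2
    idx = rng.choice(len(herd_lame), size=2 * half, replace=False)
    first, second = herd_lame[idx[:half]], herd_lame[idx[half:]]

    p1 = first.mean()
    # stop early only when the first-stage estimate is clearly away from the threshold
    if p1 <= threshold - margin:
        return "pass", half
    if p1 >= threshold + margin:
        return "fail", half

    p = np.concatenate([first, second]).mean()
    return ("fail" if p >= threshold else "pass"), 2 * half
```

    Running such a simulation many times per farm, as the authors did 100 000 times per scheme, gives the accuracy and average sample size used to compare against the fixed-size Welfare Quality scheme.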

  14. Method for in vitro assessment of straylight from intraocular lenses

    PubMed Central

    Łabuz, Grzegorz; Vargas-Martín, Fernando; van den Berg, Thomas J.T.P.; López-Gil, Norberto

    2015-01-01

    Ocular straylight has been measured by means of psychophysical methods over the years. This approach gives a functional parameter that allows a direct comparison with optically defined light scattering and the point-spread function. This is of particular importance when the effect of intraocular lenses (IOLs) on postoperative straylight is sought. An optical system for straylight measurements of IOLs was adapted to a commercial device (C-Quant, Oculus) that employs such a psychophysical method. The proposed modifications were validated using light-scattering filters and several sample IOLs. The measurements were performed by three observers to prove that the results are independent of the straylight of the observer's own eye. Other applications are also discussed. PMID:26601008

  15. An automated method for fibrin clot permeability assessment.

    PubMed

    Ząbczyk, Michał; Piłat, Adam; Awsiuk, Magdalena; Undas, Anetta

    2015-01-01

    The fibrin clot permeability coefficient (Ks) is a useful measure of the porosity of the fibrin network, which is determined by a number of genetic and environmental factors. Currently available methods to evaluate Ks are time-consuming, require constant supervision, and provide only one parameter. We present an automated method in which drops are weighed individually, buffer is dosed by a pump, and a well-defined clot washing step is controlled by the software. A linear association between drop mass and dripping time allows the measurement time to be halved. In 40 healthy individuals, Ks, the number of drops required to reach the plateau (DTP), the time to achieve the plateau (TTP), and the DTP/TTP ratio (DTR) were calculated. There was a positive association between Ks values (r = 0.69, P < 0.0001) evaluated using the manual method [median of 4.17 (3.60-5.18) ·10⁻⁹ cm²] and the automated method [median of 4.35 (3.74-5.38) ·10⁻⁹ cm²]. The correlation was stronger (r = 0.85, P < 0.001) in clots with a DTP of 7 or less (n = 12). DTP was associated with total homocysteine (tHcy) (r = 0.35, P < 0.05) and activated partial thromboplastin time (APTT) (r = -0.34, P < 0.05); TTP with Ks (r = -0.55, P < 0.01 for the manual method and r = -0.44, P < 0.01 for the automated method) and DTP (r = 0.75, P < 0.0001); and DTR with Ks (r = 0.70, P < 0.0001 for the manual method and r = 0.76, P < 0.0001 for the automated method), fibrinogen (r = -0.58, P < 0.0001), and C-reactive protein (CRP) (r = -0.47, P < 0.01). The automated method might be a suitable tool for research and clinical use and may offer additional parameters describing fibrin clot structure.

  16. Points on the curve: An analysis of methods for assessing the shape of vertebrate claws.

    PubMed

    Tinius, Alexander; Patrick Russell, Anthony

    2017-02-01

    The form of amniote claws has been extensively investigated, often with inferences about ecological association being drawn from studies of their geometry. Various methods have been used to quantify differences in the geometry of claws, but rarely have the underlying assumptions of such methods been addressed. Here, we use one set of bird claws and apply six methods (five that have been previously used, and a new one) that are tasked with comparing their shape. In doing so, we compare the (1) ability of these methods to represent the shape of the claw; (2) validity of the assumptions made about underlying claw geometry; (3) their ability to be applied unambiguously; and (4) their ability to differentiate between predetermined functional clusters. We find that of the six methods considered only the geometric morphometric approach reveals differences in the shapes of bird claws. Our comparison shows that geometry-based methods can provide a general estimate of the degree of curvature of claw arcs, but are unable to differentiate between shapes. Of all of the geometry-based approaches, we conclude that the adjusted version of the Zani (2000) method is the most useful because it can be applied without ambiguity, and provides a reliable estimate of claw curvature. The three landmarks that define that method (tip and base of the claw arc, plus the intersection between said claw arc and a line drawn perpendicular from the midpoint of tip and claw base) do not all bear biological significance, but relatively clearly circumscribe the length-to-height ratio of the claw, which relates to its curvature. Overall, our comparisons reveal that the shape of avian claws does not differ significantly between climbing and perching birds, and that the utilization of preordained functional clusters in comparative data analysis can hinder the discovery of meaningful differences in claw shape. J. Morphol. 278:150-169, 2017. © 2016 Wiley Periodicals,Inc.
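
    For readers unfamiliar with the three-landmark measurement, the sketch below shows one way an arc angle and a length-to-height ratio could be computed by treating the claw arc as a circular segment; the coordinates in the example are invented, and this is a geometric illustration rather than the adjusted Zani (2000) protocol itself.

```python
import math

def claw_curvature(tip, base, arc_mid):
    """Curvature estimate from three 2-D claw landmarks.

    tip, base : ends of the claw arc
    arc_mid   : intersection of the arc with the perpendicular drawn from
                the midpoint of the tip-base chord
    Returns (arc angle in degrees, length-to-height ratio of the claw).
    """
    c = math.dist(tip, base)                          # chord length
    mid = ((tip[0] + base[0]) / 2, (tip[1] + base[1]) / 2)
    h = math.dist(arc_mid, mid)                       # chord height (sagitta)
    r = c * c / (8 * h) + h / 2                       # radius of the circle through the landmarks
    theta = math.degrees(2 * math.asin(c / (2 * r)))  # arc angle subtended by the claw
    return theta, c / h

# Invented landmark coordinates (arbitrary units)
print(claw_curvature((0.0, 0.0), (10.0, 0.0), (5.0, 3.0)))
```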

  17. [Numerical flow simulation: A new method for assessing nasal breathing].

    PubMed

    Hildebrandt, T; Osman, J; Goubergrits, L

    2016-08-01

    The current options for objective assessment of nasal breathing are limited. The maximum they can determine is the total nasal resistance. Possibilities to analyze the endonasal airstream are lacking. In contrast, numerical flow simulation is able to provide detailed information of the flow field within the nasal cavity. Thus, it has the potential to analyze the nasal airstream of an individual patient in a comprehensive manner and only a computed tomography (CT) scan of the paranasal sinuses is required. The clinical application is still limited due to the necessary technical and personnel resources. In particular, a statistically based referential characterization of normal nasal breathing does not yet exist in order to be able to compare and classify the simulation results.

  18. Human Rights Impact Assessment: A Method for Healthy Policymaking.

    PubMed

    MacNaughton, Gillian

    2015-06-11

    Two decades ago, Lawrence Gostin and Jonathan Mann developed a methodology for human rights impact assessment (HRIA) of proposed public health policies. This article looks back over the last 20 years to examine the development of HRIA in the health field and consider the progress that has been made since Gostin and Mann published their pioneering article. Health-related HRIA has advanced substantially in three ways. First, the content of the right to health has been delineated in greater detail through domestic and international laws and policies. Second, the UN human rights mechanisms have recommended that governments undertake HRIAs and have issued guidelines and methodologies for doing so. Third, nongovernmental organizations and international organizations have developed HRIA tools and carried out case studies to demonstrate their feasibility. In this light, the article concludes by recognizing the substantial progress that has been made in HRIA over the last 20 years and by considering some challenges that remain for health-related HRIA.

  19. Stratovolcano stability assessment methods and results from Citlaltepetl, Mexico

    USGS Publications Warehouse

    Zimbelman, D.R.; Watters, R.J.; Firth, I.R.; Breit, G.N.; Carrasco-Nunez, Gerardo

    2004-01-01

    Citlaltépetl volcano is the easternmost stratovolcano in the Trans-Mexican Volcanic Belt. Situated within 110 km of Veracruz, it has experienced two major collapse events and, subsequent to its last collapse, rebuilt a massive, symmetrical summit cone. To enhance hazard mitigation efforts we assess the stability of Citlaltépetl's summit cone, the area thought most likely to fail during a potential massive collapse event. Through geologic mapping, alteration mineralogy, geotechnical studies, and stability modeling we provide important constraints on the likelihood, location, and size of a potential collapse event. The volcano's summit cone is young, highly fractured, and hydrothermally altered. Fractures are most abundant within 5–20-m wide zones defined by multiple parallel to subparallel fractures. Alteration is most pervasive within the fracture systems and includes acid sulfate, advanced argillic, argillic, and silicification ranks. Fractured and altered rocks both have significantly reduced rock strengths, representing likely bounding surfaces for future collapse events. The fracture systems and altered rock masses occur non-uniformly, as an orthogonal set with N–S and E–W trends. Because these surfaces occur non-uniformly, hazards associated with collapse are unevenly distributed about the volcano. Depending on uncertainties in bounding surfaces, but constrained by detailed field studies, potential failure volumes are estimated to range between 0.04–0.5 km3. Stability modeling was used to assess potential edifice failure events. Modeled failure of the outer portion of the cone initially occurs as an "intact block" bounded by steeply dipping joints and outwardly dipping flow contacts. As collapse progresses, more of the inner cone fails and the outer "intact" block transforms into a collection of smaller blocks. Eventually, a steep face develops in the uppermost and central portion of the cone. This modeled failure morphology mimics collapse

  20. The Methods Behind 2015 Informatics Capacity and Needs Assessment Study

    PubMed Central

    2016-01-01

    The 2015 Informatics Needs and Capacity of Local Health Departments (LHDs) survey is the most recent comprehensive source of quantitative data on LHD informatics. Conducted by the National Association of County & City Health Officials (NACCHO), this is the third nationally representative quantitative study of LHD informatics since 2009. The previous 2 comprehensive quantitative assessments were conducted by NACCHO in 2009-2010 and 2011. Given that public health informatics is rapidly evolving, the 2015 Informatics survey is a much-needed country-wide assessment of the current informatics needs and capacities of LHDs. This article outlines detailed methodology used in the 2015 Informatics survey, including instrument development, pretesting, sampling design and sample size, survey administration, and sampling weights. A 9-member advisory committee representing federal, state, and local health agency representatives guided the design and implementation of this study. The survey instrument was organized into 6 topic areas: demographics, physical infrastructure, skills and capacity available, public health workforce development needs, electronic health records, and health information exchange. The instrument was pretested with a sample of 20 LHDs and subsequently pilot-tested with 30 LHDs. The survey was administered via the Qualtrics survey software to the sample of 650 LHDs, selected using stratified random sampling. The survey was fielded for approximately 8 weeks and 324 usable responses were received, constituting a response rate of 50%. Statistical weights were developed to account for 3 factors: (a) disproportionate response rate by population size (using 7 population strata), (b) oversampling of LHDs with larger population sizes, and (c) sampling rather than a census approach. PMID:27684627

  1. Laboratory methods for assessing and licensing influenza vaccines for poultry.

    PubMed

    Swayne, David E

    2014-01-01

    Avian influenza vaccines for poultry are based on hemagglutinin proteins, and protection is specific to the vaccine subtype. Over 113 billion doses have been used between 2002 and 2010 for high pathogenicity avian influenza control. No universal vaccines are currently available. The majority of avian influenza vaccines are inactivated whole influenza viruses that are grown in embryonating eggs, inactivated, emulsified in oil adjuvant systems, and injected into chickens. Live virus-vectored vaccines such as recombinant viruses of fowl pox, Newcastle disease, herpesvirus of turkeys and duck enteritis containing inserts of avian influenza virus hemagglutinin genes have been used on a more limited basis. In studies to evaluate vaccine efficacy and potency, the protocol design and its implementation should address the biosafety level needed for the work, provide information required for approval by Institutional Biosafety and Animal Care Committees, contain information on seed strain selection, provide needed information on animal subjects and their relevant parameters, and address the selection and use of challenge viruses. Various metrics have been used to directly measure vaccine induced protection. These include prevention of death, clinical signs, and lesions; prevention of decreases in egg production and alterations in egg quality; quantification of the reduction in virus replication and shedding from the respiratory tract and gastrointestinal tracts; and prevention of contact transmission in in vivo poultry experiments. In addition, indirect measures of vaccine potency and protection can be developed and validated against the direct measures and include serological assays in vaccinated poultry and assessment of the content of hemagglutinin antigen in the vaccine. These indirect assessments of protection are useful in determining if vaccine batches have a consistent ability to protect. For adequate potency, vaccines should contain 50 mean protective doses of

  2. Assessment of Entrepreneurial Territorial Attractiveness by the Ranking Method

    ERIC Educational Resources Information Center

    Gavrilova, Marina A.; Shepelev, Victor M.; Kosyakova, Inessa V.; Belikova, Lyudmila F.; Chistik, Olga F.

    2016-01-01

    The relevance of the researched problem is caused by existence of differentiation in development of separate regional units (urban districts and municipalities) within the region. The aim of this article is to offer a method, which determines the level of differentiation in development of various components of the region, and also in producing a…

  3. Understanding and Assessing: Bibliometrics as a Method of Measuring Interdisciplinarity

    ERIC Educational Resources Information Center

    Feller, Irwin

    2005-01-01

    This article presents the author's critique of "Measurement of Central Aspects of Scientific Research: Performance, Interdisciplinarity, Structure," by Anthony F. J. van Raan. The author states that van Raan's article provides an excellent, if tightly compressed, introduction to key findings and innovative methods of the accumulating and…

  4. Assessing Affective Constructs in Reading: A Mixed Methods Study

    ERIC Educational Resources Information Center

    Conradi, Kristin

    2011-01-01

    Research investigating affective dimensions in reading has long been plagued by vaguely defined constructs and, consequently, by an array of potentially problematic instruments designed to measure them. This mixed-methods study investigated the relationship among three popular group-administered instruments intended to tap affective constructs in…

  5. A Comparative Analysis of Several Methods of Assessing Item Bias.

    ERIC Educational Resources Information Center

    Ironson, Gail H.

    Four statistical methods for identifying biased test items were used with data from two ethnic groups (1,691 black and 1,794 white high school seniors). The data were responses to 150 items in five subtests including two traditional tests (reading and mathematics) and three nontraditional tests (picture number test of associative memory, letter…

  6. Weighting Methods for Assessing Policy Effects Mediated by Peer Change

    ERIC Educational Resources Information Center

    Hong, Guanglei; Nomi, Takako

    2012-01-01

    This study introduces a new set of weighting procedures for revealing the mediation mechanism in multi-level settings. These methods are illustrated through an investigation of whether the impact of a system-wide policy change on student outcomes is mediated by policy-induced peer composition change. When the policy changed not only…

  7. Methods of assessing structural integrity for space shuttle vehicles

    NASA Technical Reports Server (NTRS)

    Anderson, R. E.; Stuckenberg, F. H.

    1971-01-01

    A detailed description and evaluation of nondestructive evaluation (NDE) methods are given which have application to space shuttle vehicles. Appropriate NDE design data is presented in twelve specifications in an appendix. Recommendations for NDE development work for the space shuttle program are presented.

  8. RIVERINE ASSESSMENT USING MACROINVERTEBRATES: ALL METHODS ARE NOT CREATED EQUAL

    EPA Science Inventory

    In 1999, we compared six benthic macroinvertebrate field sampling methods for nonwadeable streams based on those developed for three major programs (EMAP-SW, NAWQA, and Ohio EPA), at each of sixty sites across four tributaries to the Ohio River. Water chemistry samples and physi...

  9. EXPOSURE ASSESSMENT METHODS DEVELOPMENT PILOTS FOR THE NATIONAL CHILDREN'S STUDY

    EPA Science Inventory

    Accurate exposure classification tools are needed to link exposure with health effects. EPA began methods development pilot studies in 2000 to address general questions about exposures and outcome measures. Selected pilot studies are highlighted in this poster. The “Literature Re...

  10. Generalized Bootstrap Method for Assessment of Uncertainty in Semivariogram Inference

    USGS Publications Warehouse

    Olea, R.A.; Pardo-Iguzquiza, E.

    2011-01-01

    The semivariogram and its related function, the covariance, play a central role in classical geostatistics for modeling the average continuity of spatially correlated attributes. Whereas all methods are formulated in terms of the true semivariogram, in practice what can be used are estimated semivariograms and models based on samples. A generalized form of the bootstrap method to properly model spatially correlated data is used to advance knowledge about the reliability of empirical semivariograms and semivariogram models based on a single sample. Among several methods available to generate spatially correlated resamples, we selected a method based on the LU decomposition and used several examples to illustrate the approach. The first one is a synthetic, isotropic, exhaustive sample following a normal distribution; the second example is also synthetic but follows a non-Gaussian random field; and a third, empirical sample consists of actual raingauge measurements. Results show wider confidence intervals than those found previously by others with inadequate application of the bootstrap. Also, even for the Gaussian example, distributions for estimated semivariogram values and model parameters are positively skewed. In this sense, bootstrap percentile confidence intervals, which are not centered around the empirical semivariogram and do not require distributional assumptions for their construction, provide an achieved coverage similar to the nominal coverage. The latter cannot be achieved by symmetrical confidence intervals based on the standard error, regardless of whether the standard error is estimated from a parametric equation or from the bootstrap. © 2010 International Association for Mathematical Geosciences.
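
    The LU-based (here Cholesky) generation of spatially correlated resamples can be sketched as follows, assuming an exponential covariance model whose sill and range would in practice be fitted to the empirical semivariogram; the model choice and its parameters are placeholders.

```python
import numpy as np

def correlated_resamples(coords, sill, practical_range, n_sims, seed=None):
    """Draw spatially correlated Gaussian resamples at the sample locations.

    coords : (n, 2) array of sample coordinates
    sill, practical_range : parameters of an assumed exponential covariance model
    """
    rng = np.random.default_rng(seed)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    cov = sill * np.exp(-3.0 * d / practical_range)            # exponential model
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(coords)))  # lower-triangular factor
    return (L @ rng.standard_normal((len(coords), n_sims))).T  # (n_sims, n) resamples
```

    Re-estimating the semivariogram from each resample and collecting the estimates yields the bootstrap distribution from which percentile confidence intervals are read.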

  11. Soil water content assessment: seasonal effects on the triangle method

    NASA Astrophysics Data System (ADS)

    Maltese, A.; Capodici, F.; Ciraolo, G.; La Loggia, G.; Cammalleri, C.

    2016-10-01

    Among indirect estimations of the soil water content in the upper layer, the "triangle method" is based on the relationship between the optical and thermal features sensed via Earth observation. These features are controlled by the water content at the surface and within the root zone, but also by meteorological forcing including air temperature, humidity, and solar radiation. Night- and day-time MODIS composite land-surface temperature (LST) allowed applying the thermal admittance version of the method; by taking into account the thermal admittance of the soil, this version was previously found to achieve high accuracy in estimating the soil water content at high spatial resolution within a short time period (a single irrigation season). In this study, the method has been applied to a long time series to analyse the seasonal influence of the meteorological forcing on the triangle method index (or temperature vegetation index, TVX). The Imera Meridionale hydrological basin (≈2000 km², Sicily) was chosen to test the method over a decade-long time series, since its climate varies during the year from arid to temperate; the climate is arid for roughly 3-7 months (from April-May to August-October) depending on altitude. The temporal analysis reveals that NDVI and LST pairs move circularly within the optical-thermal diachronic feature space, and the boundaries of the triangle move accordingly during the seasons. Results suggest that the contribution of soil water content fluctuations needs to be isolated from other environmental stress factors, or at least that the conceptual meaning of TVX has to be interpreted more carefully.
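
    As background for the index itself, the basic (non-admittance) triangle approach is often reduced to normalizing each pixel's LST between a warm "dry" edge and a cold "wet" edge estimated per NDVI bin. The sketch below shows that generic form only; it is not the thermal-admittance variant used in the study.

```python
import numpy as np

def triangle_moisture_index(ndvi, lst, n_bins=20):
    """Generic soil-moisture proxy from the NDVI-LST triangle.

    For each NDVI bin the dry and wet edges are taken as the bin's maximum
    and minimum LST; the index is 1 at the wet edge and 0 at the dry edge.
    """
    ndvi, lst = np.ravel(ndvi).astype(float), np.ravel(lst).astype(float)
    bins = np.linspace(np.nanmin(ndvi), np.nanmax(ndvi), n_bins + 1)
    which = np.clip(np.digitize(ndvi, bins) - 1, 0, n_bins - 1)
    index = np.full_like(lst, np.nan)
    for b in range(n_bins):
        sel = which == b
        if sel.sum() < 3:
            continue                                  # too few pixels to define the edges
        t_dry, t_wet = np.nanmax(lst[sel]), np.nanmin(lst[sel])
        if t_dry > t_wet:
            index[sel] = (t_dry - lst[sel]) / (t_dry - t_wet)
    return index
```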

  12. The socio-economic impact of noise: a method for assessing noise annoyance.

    PubMed

    Gjestland, Truls

    2007-01-01

    Norwegian authorities have developed and adopted a method for assessing the magnitude of noise impact on a community in quantitative terms. The method takes into account all levels of noise annoyance experienced by all the residents in an area and transforms these data into a single quantity that can also be expressed in monetary terms. This method is contrary to other commonly used assessment methods where only a certain fraction of the impacted people, e.g. those "highly annoyed," is considered.

  13. Comparing assessment methods as predictors of student learning in an undergraduate mathematics course

    NASA Astrophysics Data System (ADS)

    Shorter, Nichole A.; Young, Cynthia Y.

    2011-12-01

    This experiment was designed to determine which assessment method: continuous assessment (in the form of daily in-class quizzes), cumulative assessment (in the form of online homework), or project-based learning, best predicted student learning (dependent upon post-test grades) in an undergraduate mathematics course. Participants included 117 university-level undergraduate freshmen enrolled in a course titled 'Mathematics for Calculus'. A stepwise regression model was formulated to model the relationship between the predictor variables (the continuous assessment, cumulative assessment, and project scores) versus the outcome variable (the post-test scores). Results indicated that ultimately the continuous assessment scores best predicted students' post-test scores.
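
    A forward stepwise selection of the kind described might look like the sketch below; the data frame and column names are hypothetical stand-ins for the three assessment scores and the post-test grades.

```python
import statsmodels.api as sm

def forward_stepwise(df, predictors, outcome, alpha=0.05):
    """Forward stepwise OLS on a pandas DataFrame: at each step add the
    remaining predictor with the smallest p-value, while it stays below alpha."""
    chosen, remaining = [], list(predictors)
    while remaining:
        pvals = {}
        for cand in remaining:
            X = sm.add_constant(df[chosen + [cand]])
            pvals[cand] = sm.OLS(df[outcome], X).fit().pvalues[cand]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:
            break
        chosen.append(best)
        remaining.remove(best)
    return sm.OLS(df[outcome], sm.add_constant(df[chosen])).fit()

# Hypothetical usage with made-up column names:
# model = forward_stepwise(scores, ["quiz_avg", "homework_avg", "project"], "post_test")
# print(model.summary())
```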

  14. Evaluation of a clinical simulation-based assessment method for EHR-platforms.

    PubMed

    Jensen, Sanne; Rasmussen, Stine Loft; Lyng, Karen Marie

    2014-01-01

    In a procurement process, the assessment of issues like human factors and the interaction between technology and end-users can be challenging. In a large public procurement of an electronic health record platform (EHR-platform) in Denmark, a clinical simulation-based method for assessing and comparing human factor issues was developed and evaluated. This paper describes the evaluation of the method and its advantages and disadvantages. Our findings showed that clinical simulation is beneficial for assessing user satisfaction, usefulness, and patient safety, although it is resource demanding. The method made it possible to assess qualitative topics during the procurement, and it provides an excellent basis for user involvement.

  15. Effects of Node-Link Mapping on Non-Science Majors' Meaningful Learning and Conceptual Change in a Life-Science Survey Lecture Course

    ERIC Educational Resources Information Center

    Park-Martinez, Jayne Irene

    2011-01-01

    The purpose of this study was to assess the effects of node-link mapping on students' meaningful learning and conceptual change in a 1-semester introductory life-science course. This study used node-link mapping to integrate and apply the National Research Council's (NRC, 2005) three principles of human learning: engaging students' prior…

  16. The assessment of human skin biomatrixes using raman spectroscopy method

    NASA Astrophysics Data System (ADS)

    Timchenko, E. V.; Timchenko, P. E.; Volova, L. T.; Dolgushkin, D. A.; Shalkovskaya, P. Y.; Pershutkina, S. V.; Nefedova, I. F.

    2017-01-01

    The results of an analysis of implants made of human skin by the Raman scattering method are presented. The main spectral differences between bioimplants manufactured by various methods are shown at wavenumbers 1062 cm^-1, 1645 cm^-1, 1260 cm^-1, 850 cm^-1 and 863 cm^-1, corresponding to components that are important for implant quality: glycosaminoglycans (GAGs), amide I, amide III, the asymmetric C-O-S vibration of glycosaminoglycans, tyrosine, the C-C stretching of the proline ring, and ribose. A two-dimensional analysis of optical coefficients was carried out, providing a means to control the quality of cutaneous implants during manufacturing, together with a detailed analysis of the Raman scattering spectra.

  17. Energy performance assessment with empirical methods: application of energy signature

    NASA Astrophysics Data System (ADS)

    Belussi, L.; Danza, L.; Meroni, I.; Salamone, F.

    2015-03-01

    Energy efficiency and the reduction of building consumption are deeply felt issues at both the Italian and the international level. The recent regulatory framework sets stringent limits on the energy performance of buildings. Pending the full adoption of these principles, several methods have been developed to address the problem of building energy consumption, among which the simplified energy audit is intended to identify any anomalies in the building system, to provide helpful tips for energy refurbishment, and to raise end users' awareness. The Energy Signature is an operational tool of these methodologies: an evaluation method in which energy consumption is correlated with climatic variables, representing the actual energy behaviour of the building. In addition to that purpose, the Energy Signature can be used as an empirical tool to determine the real performance of the technical elements. The latter aspect is illustrated in this article.
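
    A minimal energy-signature fit is sketched below, assuming daily consumption data and a heating-degree-day formulation with an assumed base temperature; in practice the heating and non-heating seasons are usually treated separately, and the choice of base temperature matters.

```python
import numpy as np

def energy_signature(outdoor_temp_c, energy_kwh_per_day, base_temp_c=15.0):
    """Fit a simple heating-season energy signature.

    Daily consumption is regressed against heating degree days below an
    assumed base temperature: the slope approximates the building's overall
    heat-loss behaviour, the intercept its weather-independent base load.
    """
    temp = np.asarray(outdoor_temp_c, dtype=float)
    energy = np.asarray(energy_kwh_per_day, dtype=float)
    hdd = np.clip(base_temp_c - temp, 0.0, None)      # heating degree days per day
    slope, intercept = np.polyfit(hdd, energy, 1)
    residual_std = float(np.std(energy - (slope * hdd + intercept)))
    return slope, intercept, residual_std
```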

  18. A Novel Method to Assess Incompleteness of Mammography Reports

    PubMed Central

    Gimenez, Francisco J.; Wu, Yirong; Burnside, Elizabeth S.; Rubin, Daniel L.

    2014-01-01

    Mammography has been shown to improve outcomes of women with breast cancer, but it is subject to inter-reader variability. One well-documented source of such variability is in the content of mammography reports. The mammography report is of crucial importance, since it documents the radiologist’s imaging observations, interpretation of those observations in terms of likelihood of malignancy, and suggested patient management. In this paper, we define an incompleteness score to measure how incomplete the information content is in the mammography report and provide an algorithm to calculate this metric. We then show that the incompleteness score can be used to predict errors in interpretation. This method has 82.6% accuracy at predicting errors in interpretation and can possibly reduce total diagnostic errors by up to 21.7%. Such a method can easily be modified to suit other domains that depend on quality reporting. PMID:25954448

  19. A novel method to assess incompleteness of mammography reports.

    PubMed

    Gimenez, Francisco J; Wu, Yirong; Burnside, Elizabeth S; Rubin, Daniel L

    2014-01-01

    Mammography has been shown to improve outcomes of women with breast cancer, but it is subject to inter-reader variability. One well-documented source of such variability is in the content of mammography reports. The mammography report is of crucial importance, since it documents the radiologist's imaging observations, interpretation of those observations in terms of likelihood of malignancy, and suggested patient management. In this paper, we define an incompleteness score to measure how incomplete the information content is in the mammography report and provide an algorithm to calculate this metric. We then show that the incompleteness score can be used to predict errors in interpretation. This method has 82.6% accuracy at predicting errors in interpretation and can possibly reduce total diagnostic errors by up to 21.7%. Such a method can easily be modified to suit other domains that depend on quality reporting.
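
    The abstract does not spell out how the incompleteness score is computed. As a toy stand-in for the idea, one could measure the fraction of expected descriptor categories that a report never mentions; the categories and keywords below are purely illustrative and are not the authors' content model.

```python
# Toy illustration only: fraction of expected descriptor categories missing
# from a report.  Categories and keywords are invented for this sketch.
EXPECTED = {
    "breast density": ["density", "fibroglandular"],
    "mass shape": ["oval", "round", "irregular"],
    "mass margins": ["circumscribed", "obscured", "spiculated"],
    "calcifications": ["calcification"],
    "overall assessment": ["bi-rads", "birads"],
}

def incompleteness_score(report_text):
    text = report_text.lower()
    missing = [cat for cat, keywords in EXPECTED.items()
               if not any(k in text for k in keywords)]
    return len(missing) / len(EXPECTED), missing

print(incompleteness_score("Irregular mass with spiculated margins. BI-RADS 4."))
# -> (0.4, ['breast density', 'calcifications'])
```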

  20. Vulnerability Assessment Using a Fuzzy Logic Based Method

    DTIC Science & Technology

    1993-12-01

    ...evaluating computer security vulnerabilities is very labor intensive. To help ease this workload, this thesis presents two automated methods possibly...