Science.gov

Sample records for objective evaluation framework

  1. Framework for objective evaluation of privacy filters

    NASA Astrophysics Data System (ADS)

    Korshunov, Pavel; Melle, Andrea; Dugelay, Jean-Luc; Ebrahimi, Touradj

    2013-09-01

    Extensive adoption of video surveillance, affecting many aspects of our daily lives, alarms the public about the increasing invasion of personal privacy. To address these concerns, many tools have been proposed for the protection of personal privacy in image and video. However, little is understood regarding the effectiveness of such tools and especially their impact on the underlying surveillance tasks, leading to a tradeoff between the preservation of privacy offered by these tools and the intelligibility of activities under video surveillance. In this paper, we investigate this privacy-intelligibility tradeoff by proposing an objective framework for the evaluation of privacy filters. We apply the proposed framework to a use case where the privacy of people is protected by obscuring faces, assuming an automated video surveillance system. We used several popular privacy protection filters, such as blurring, pixelization, and masking, and applied them with varying strengths to people's faces from different public datasets of video surveillance footage. The accuracy of a face detection algorithm was used as a measure of intelligibility (a face should be detected to perform a surveillance task), and the accuracy of a face recognition algorithm as a measure of privacy (a specific person should not be identified). Under these conditions, after application of an ideal privacy protection tool, an obfuscated face would be visible as a face but would not be correctly identified by the recognition algorithm. The experiments demonstrate that, in general, an increase in the strength of the privacy filters under consideration leads to an increase in privacy (i.e., a reduction in recognition accuracy) and to a decrease in intelligibility (i.e., a reduction in detection accuracy). Masking also proves to be the most favorable filter across all tested datasets.
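
    The abstract gives no code, but the evaluation loop it describes is simple: apply a filter at increasing strengths, then measure face detection accuracy (intelligibility) and face recognition accuracy (privacy loss). The sketch below is a minimal, hypothetical illustration of that loop in Python; the detect_faces and recognize_face callables and the sample loader are placeholders rather than components from the paper, and only the three filters use real OpenCV calls.

    import cv2
    import numpy as np

    def blur(face, strength):
        k = 2 * strength + 1                             # odd Gaussian kernel size
        return cv2.GaussianBlur(face, (k, k), 0)

    def pixelize(face, strength):
        h, w = face.shape[:2]
        s = max(1, strength)
        small = cv2.resize(face, (max(1, w // s), max(1, h // s)))
        return cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)

    def mask(face, strength):
        return np.zeros_like(face)                       # full masking ignores strength

    def evaluate(filter_fn, strengths, samples, detect_faces, recognize_face):
        """samples: list of (face_image, identity) pairs; the loader, detector,
        and recognizer are hypothetical stand-ins, not components from the paper."""
        results = []
        for s in strengths:
            detected = identified = 0
            for face, identity in samples:
                obscured = filter_fn(face, s)
                if detect_faces(obscured):               # intelligibility: still seen as a face
                    detected += 1
                if recognize_face(obscured) == identity: # privacy loss: still identifiable
                    identified += 1
            n = len(samples)
            results.append((s, detected / n, identified / n))
        return results                                   # (strength, detection rate, recognition rate)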

  2. The Sloan-C Pillars and Boundary Objects As a Framework for Evaluating Blended Learning

    ERIC Educational Resources Information Center

    Laumakis, Mark; Graham, Charles; Dziuban, Chuck

    2009-01-01

    The authors contend that blended learning represents a boundary object: a construct that brings together constituencies from a variety of backgrounds, with each of these cohorts defining the object somewhat differently. The Sloan-C Pillars (learning effectiveness, access, cost effectiveness, student satisfaction, and faculty satisfaction) provide…

  3. A quality assurance framework for the fully automated and objective evaluation of image quality in cone-beam computed tomography

    SciTech Connect

    Steiding, Christian; Kolditz, Daniel; Kalender, Willi A.

    2014-03-15

    Purpose: Thousands of cone-beam computed tomography (CBCT) scanners for vascular, maxillofacial, neurological, and body imaging are in clinical use today, but there is no consensus on uniform acceptance and constancy testing for image quality (IQ) and dose yet. The authors developed a quality assurance (QA) framework for fully automated and time-efficient performance evaluation of these systems. In addition, the dependence of objective Fourier-based IQ metrics on direction and position in 3D volumes was investigated for CBCT. Methods: The authors designed a dedicated QA phantom 10 cm in length consisting of five compartments, each with a diameter of 10 cm, and an optional extension ring 16 cm in diameter. A homogeneous section of water-equivalent material allows measuring CT value accuracy, image noise and uniformity, and multidimensional global and local noise power spectra (NPS). For the quantitative determination of 3D high-contrast spatial resolution, the modulation transfer function (MTF) of centrally and peripherally positioned aluminum spheres was computed from edge profiles. Additional in-plane and axial resolution patterns were used to assess resolution qualitatively. The characterization of low-contrast detectability as well as CT value linearity and artifact behavior was tested by utilizing sections with soft-tissue-equivalent and metallic inserts. For an automated QA procedure, a phantom detection algorithm was implemented. All tests used in the dedicated QA program were initially verified in simulation studies and experimentally confirmed on a clinical dental CBCT system. Results: The automated IQ evaluation of volume data sets of the dental CBCT system was achieved with the proposed phantom requiring only one scan for the determination of all desired parameters. Typically, less than 5 min were needed for phantom set-up, scanning, and data analysis. Quantitative evaluation of system performance over time by comparison to previous examinations was also
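
    The abstract states that 3D high-contrast resolution was characterized by computing the modulation transfer function (MTF) from edge profiles of aluminum spheres. The sketch below shows the generic edge-profile route to an MTF (differentiate the edge spread function, window, Fourier transform, normalize); it is a textbook procedure offered for context, not the authors' implementation, and extraction of the edge profile from the sphere is assumed to have been done already.

    import numpy as np

    def mtf_from_esf(esf, pixel_spacing_mm):
        """esf: 1-D edge spread function sampled at uniform spacing (mm)."""
        lsf = np.gradient(esf)                                  # line spread function = d(ESF)/dx
        lsf = lsf * np.hanning(lsf.size)                        # window to suppress noise in the tails
        mtf = np.abs(np.fft.rfft(lsf))
        mtf = mtf / mtf[0]                                      # normalize to 1 at zero frequency
        freqs = np.fft.rfftfreq(lsf.size, d=pixel_spacing_mm)   # spatial frequency in cycles/mm
        return freqs, mtf

    # Example: the frequency at which modulation first drops below 10%, a common resolution figure.
    # freqs, mtf = mtf_from_esf(esf, 0.15)
    # f10 = freqs[np.argmax(mtf < 0.10)]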

  4. Evaluation of maritime object detection methods for full motion video applications using the PASCAL VOC Challenge framework

    NASA Astrophysics Data System (ADS)

    Jaszewski, Martin; Parameswaran, Shibin; Hallenborg, Eric; Bagnall, Bryan

    2015-03-01

    We present an initial target detection performance evaluation system for the RAPid Image Exploitation Resource (RAPIER) Full Motion Video (RFMV) maritime target tracking software. We test and evaluate four statistical target detection methods using 30 Hz full motion video from aerial platforms. Using appropriate algorithm performance criteria inspired by the PASCAL Visual Object Classes (VOC) Challenge, we address the tradeoffs between detection fidelity and computational speed/throughput.
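
    The evaluation criteria are described only as being inspired by the PASCAL VOC Challenge. For context, the standard VOC matching rule counts a detection as a true positive when its intersection-over-union (IoU) with a still-unmatched ground-truth box is at least 0.5, with detections processed in order of decreasing confidence. The sketch below illustrates that rule; the box format (x0, y0, x1, y1), the 0.5 threshold, and the function names are standard VOC conventions and placeholders, not details taken from this paper.

    def iou(a, b):
        """Intersection over union of two boxes given as (x0, y0, x1, y1)."""
        ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
        ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter) if inter > 0 else 0.0

    def match_detections(detections, ground_truth, thresh=0.5):
        """detections: boxes sorted by decreasing confidence. Returns (TP, FP, FN)."""
        matched, tp = set(), 0
        for det in detections:
            best, best_iou = None, thresh
            for i, gt in enumerate(ground_truth):
                if i in matched:
                    continue
                overlap = iou(det, gt)
                if overlap >= best_iou:
                    best, best_iou = i, overlap
            if best is not None:
                matched.add(best)
                tp += 1
        return tp, len(detections) - tp, len(ground_truth) - tp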

  5. Object-oriented Geographic Information System Framework

    Energy Science and Technology Software Center (ESTSC)

    2003-03-01

    JeoViewer is an intelligent object-oriented geographic information system (GIS) framework written in Java that provides transparent linkage to any object’s data, behaviors, and optimized spatial geometry representation. Tools are provided for typical GIS functionality, data ingestion, data export, and integration with other frameworks. The primary difference between JeoViewer and traditional GIS systems is that traditional GIS systems offer static views of geo-spatial data while JeoViewer can be dynamically coupled to models and live data streams which dynamically change the state of the object which can be immediately represented in JeoViewer. Additionally, JeoViewer’s object-oriented paradigm provides a more natural representation of spatial data. A rich layer hierarchy allows arbitrary grouping of objects based on any relationship as well as the traditional GIS vertical ordering of objects. JeoViewer can run as a standalone product, extended with additional analysis functionality, or embedded in another framework.

  6. Object-oriented Geographic Information System Framework

    SciTech Connect

    Lurie, Gordon

    2003-03-01

    JeoViewer is an intelligent object-oriented geographic information system (GIS) framework written in Java that provides transparent linkage to any object’s data, behaviors, and optimized spatial geometry representation. Tools are provided for typical GIS functionality, data ingestion, data export, and integration with other frameworks. The primary difference between JeoViewer and traditional GIS systems is that traditional GIS systems offer static views of geo-spatial data while JeoViewer can be dynamically coupled to models and live data streams which dynamically change the state of the object which can be immediately represented in JeoViewer. Additionally, JeoViewer’s object-oriented paradigm provides a more natural representation of spatial data. A rich layer hierarchy allows arbitrary grouping of objects based on any relationship as well as the traditional GIS vertical ordering of objects. JeoViewer can run as a standalone product, extended with additional analysis functionality, or embedded in another framework.

  7. Performance Objectives: Foundation for Evaluation

    ERIC Educational Resources Information Center

    McKinney, Floyd L.; Mannebach, Alfred J.

    1970-01-01

    Only when agricultural educators and others evaluate agricultural education programs on the basis of students' performance in relation to valid and realistic performance objectives will progress be made in educational program improvement. (Authors)

  8. Evaluation Framework for NASA's Educational Outreach Programs

    NASA Technical Reports Server (NTRS)

    Berg, Rick; Booker, Angela; Linde, Charlotte; Preston, Connie

    1999-01-01

    The objective of the proposed work is to develop an evaluation framework for NASA's educational outreach efforts. We focus on public (rather than technical or scientific) dissemination efforts, specifically on Internet-based outreach sites for children. The outcome of this work is to propose both methods and criteria for evaluation, which would enable NASA to do a more analytic evaluation of its outreach efforts. The proposed framework is based on IRL's ethnographic and video-based observational methods, which allow us to analyze how these sites are actually used.

  9. Evaluation Framework for Search Instruments

    SciTech Connect

    Warren, Glen A.; Smith, Leon E.; Cooper, Matthew W.; Kaye, William R.

    2005-10-23

    A framework for quantitatively evaluating current and proposed gamma-ray search instrument designs has been developed. The framework is designed to generate a large library of “virtual neighborhoods” that can be used to test and evaluate nearly any gamma-ray sensor type. Calculating nuisance-source emissions and combining various sources to create a large number of random virtual scenes places a significant computational burden on the development of the framework. To reduce this burden, a number of radiation transport simplifications have been made which maintain the essential physics ingredients for the quantitative assessment of search instruments while significantly reducing computational times. The various components of the framework, from the simulation and benchmarking of nuisance source emissions to the computational engine for generating the gigabytes of simulated search scenes, are discussed.

  10. Sequentially Executed Model Evaluation Framework

    SciTech Connect

    2014-02-14

    Provides a message passing framework between generic input, model and output drivers, and specifies an API for developing such drivers. Also provides batch and real-time controllers which step the model and I/O through the time domain (or other discrete domain), and sample I/O drivers. This is a library framework, and does not, itself, solve any problems or execute any modeling. The SeMe framework aids in development of models which operate on sequential information, such as time-series, where evaluation is based on prior results combined with new data for this iteration. Has applications in quality monitoring, and was developed as part of the CANARY-EDS software, where real-time water quality data is being analyzed for anomalies.

  11. Sequentially Executed Model Evaluation Framework

    Energy Science and Technology Software Center (ESTSC)

    2014-02-14

    Provides a message passing framework between generic input, model and output drivers, and specifies an API for developing such drivers. Also provides batch and real-time controllers which step the model and I/O through the time domain (or other discrete domain), and sample I/O drivers. This is a library framework, and does not, itself, solve any problems or execute any modeling. The SeMe framework aids in development of models which operate on sequential information, such as time-series, where evaluation is based on prior results combined with new data for this iteration. Has applications in quality monitoring, and was developed as part of the CANARY-EDS software, where real-time water quality data is being analyzed for anomalies.

  12. Rethinking modeling framework design: object modeling system 3.0

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Object Modeling System (OMS) is a framework for environmental model development, data provisioning, testing, validation, and deployment. It provides a bridge for transferring technology from the research organization to the program delivery agency. The framework provides a consistent and efficie...

  13. Matrix evaluation of science objectives

    NASA Technical Reports Server (NTRS)

    Wessen, Randii R.

    1994-01-01

    The most fundamental objective of all robotic planetary spacecraft is to return science data. To accomplish this, a spacecraft is fabricated and built, software is planned and coded, and a ground system is designed and implemented. However, the quantitative analysis required to determine how the collection of science data drives ground system capabilities has received very little attention. This paper defines a process by which science objectives can be quantitatively evaluated. By applying it to the Cassini Mission to Saturn, this paper further illustrates the power of this technique. The results show which science objectives drive specific ground system capabilities. In addition, this process can assist system engineers and scientists in the selection of the science payload during pre-project mission planning; ground system designers during ground system development and implementation; and operations personnel during mission operations.

  14. Sequentially Executed Model Evaluation Framework

    SciTech Connect

    2015-10-20

    Provides a message passing framework between generic input, model and output drivers, and specifies an API for developing such drivers. Also provides batch and real-time controllers which step the model and I/O through the time domain (or other discrete domain), and sample I/O drivers. This is a library framework, and does not, itself, solve any problems or execute any modeling. The SeMe framework aids in development of models which operate on sequential information, such as time-series, where evaluation is based on prior results combined with new data for this iteration. Has applications in quality monitoring, and was developed as part of the CANARY-EDS software, where real-time water quality data is being analyzed for anomalies.
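
    As context for the driver API this record describes (generic input, model, and output drivers stepped by batch or real-time controllers), the sketch below shows one hypothetical way such a pattern can look in Python. The class and method names are invented for illustration; the actual SeMe API is not shown in this record.

    class InputDriver:
        def read(self, step):                  # return the new data for this step, or None at the end
            raise NotImplementedError

    class OutputDriver:
        def write(self, step, result):
            raise NotImplementedError

    class SequentialModel:
        def evaluate(self, state, new_data):   # combine prior results with new data for this iteration
            raise NotImplementedError

    class BatchController:
        """Steps the model and I/O drivers through a discrete (e.g., time) domain."""
        def __init__(self, source, model, sink):
            self.source, self.model, self.sink = source, model, sink

        def run(self, initial_state=None):
            state, step = initial_state, 0
            while True:
                data = self.source.read(step)
                if data is None:
                    return state
                state = self.model.evaluate(state, data)
                self.sink.write(step, state)
                step += 1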

  15. Sequentially Executed Model Evaluation Framework

    Energy Science and Technology Software Center (ESTSC)

    2015-10-20

    Provides a message passing framework between generic input, model and output drivers, and specifies an API for developing such drivers. Also provides batch and real-time controllers which step the model and I/O through the time domain (or other discrete domain), and sample I/O drivers. This is a library framework, and does not, itself, solve any problems or execute any modeling. The SeMe framework aids in development of models which operate on sequential information, such as time-series, where evaluation is based on prior results combined with new data for this iteration. Has applications in quality monitoring, and was developed as part of the CANARY-EDS software, where real-time water quality data is being analyzed for anomalies.

  16. A Configurable, Object-Oriented, Transportation System Software Framework

    SciTech Connect

    Kelly, Suzanne M.; Myre, John W.; Price, Mark H.; Russell, Eric D.; Scott, Dan W.

    2000-08-01

    The Transportation Surety Center, 6300, has been conducting continuing research into and development of information systems for the Configurable Transportation Security and Information Management System (CTSS) project, an Object-Oriented Framework approach that uses Component-Based Software Development to facilitate rapid deployment of new systems while improving software cost containment, development reliability, compatibility, and extensibility. The direction has been to develop a Fleet Management System (FMS) framework using object-oriented technology. The goal for the current development is to provide a software and hardware environment that will demonstrate and support object-oriented development common to the FMS Central Command Center and Vehicle domains.

  17. Making Just Tenure and Promotion Decisions Using the Objective Knowledge Growth Framework

    ERIC Educational Resources Information Center

    Chitpin, Stephanie

    2015-01-01

    Purpose: The purpose of this paper is to utilize the Objective Knowledge Growth Framework (OKGF) to promote a better understanding of tenure and promotion evaluation processes. Design/Methodology/Approach: A scenario is created to illustrate the concept of using OKGF. Findings: The framework aims to support decision makers in identifying the…

  18. Framework for Development of Object-Oriented Software

    NASA Technical Reports Server (NTRS)

    Perez-Poveda, Gus; Ciavarella, Tony; Nieten, Dan

    2004-01-01

    The Real-Time Control (RTC) Application Framework is a high-level software framework written in C++ that supports the rapid design and implementation of object-oriented application programs. This framework provides built-in functionality that solves common software development problems within distributed client-server, multi-threaded, and embedded programming environments. When using the RTC Framework to develop software for a specific domain, designers and implementers can focus entirely on the details of the domain-specific software rather than on creating custom solutions, utilities, and frameworks for the complexities of the programming environment. The RTC Framework was originally developed as part of a Space Shuttle Launch Processing System (LPS) replacement project called Checkout and Launch Control System (CLCS). As a result of the framework's development, CLCS software development time was reduced by 66 percent. The framework is generic enough for developing applications outside of the launch-processing system domain. Other applicable high-level domains include command and control systems and simulation/training systems.

  19. Organizing and Typing Persistent Objects Within an Object-Oriented Framework

    NASA Technical Reports Server (NTRS)

    Madany, Peter W.; Campbell, Roy H.

    1991-01-01

    Conventional operating systems provide little or no direct support for the services required for an efficient persistent object system implementation. We have built a persistent object scheme using a customization and extension of an object-oriented operating system called Choices. Choices includes a framework for the storage of persistent data that is suited to the construction of both conventional file systems and persistent object systems. In this paper we describe three areas in which persistent object support differs from file system support: storage organization, storage management, and typing. Persistent object systems must support various sizes of objects efficiently. Customizable containers, which are themselves persistent objects and can be nested, support a wide range of object sizes in Choices. Collections of persistent objects that are accessed as an aggregate and collections of light-weight persistent objects can be clustered in containers that are nested within containers for larger objects. Automated garbage collection schemes are added to storage management and have a major impact on persistent object applications. The Choices persistent object store provides extensible sets of persistent object types. The store contains not only the data for persistent objects but also the names of the classes to which they belong and the code for the operation of the classes. Besides presenting persistent object storage organization, storage management, and typing, this paper discusses how persistent objects are named and used within the Choices persistent data/file system framework.

  20. Evaluation Framework for Telemedicine Using the Logical Framework Approach and a Fishbone Diagram

    PubMed Central

    2015-01-01

    Objectives Technological advances using telemedicine and telehealth are growing in healthcare fields, but the evaluation framework for them is inconsistent and limited. This paper suggests a comprehensive evaluation framework for telemedicine system implementation and will support related stakeholders' decision-making by promoting general understanding, and resolving arguments and controversies. Methods This study focused on developing a comprehensive evaluation framework by summarizing themes across the range of evaluation techniques and organized foundational evaluation frameworks generally applicable through studies and cases of diverse telemedicine. Evaluation factors related to aspects of information technology; the evaluation of satisfaction of service providers and consumers, cost, quality, and information security are organized using the fishbone diagram. Results It was not easy to develop a monitoring and evaluation framework for telemedicine since evaluation frameworks for telemedicine are very complex with many potential inputs, activities, outputs, outcomes, and stakeholders. A conceptual framework was developed that incorporates the key dimensions that need to be considered in the evaluation of telehealth implementation for a formal structured approach to the evaluation of a service. The suggested framework consists of six major dimensions and the subsequent branches for each dimension. Conclusions To implement telemedicine and telehealth services, stakeholders should make decisions based on sufficient evidence in quality and safety measured by the comprehensive evaluation framework. Further work would be valuable in applying more comprehensive evaluations to verify and improve the comprehensive framework across a variety of contexts with more factors and participant group dimensions. PMID:26618028

  1. Etomica: an object-oriented framework for molecular simulation.

    PubMed

    Schultz, Andrew J; Kofke, David A

    2015-03-30

    We describe the design of an object-oriented library of software components that are suitable for constructing simulations of systems of interacting particles. The emphasis of the discussion is on the general design of the components and how they interact, and less on details of the programming interface or its implementation. Example code is provided as an aid to understanding object-oriented programming structures and to demonstrate how the framework is applied. PMID:25565378

  2. A Sustainable Evaluation Framework and Its Application

    ERIC Educational Resources Information Center

    Powell, Robert B.; Stern, Marc J.; Ardoin, Nicole

    2006-01-01

    This article presents a framework for developing internally sustainable evaluation systems for environmental education organizations, although the framework can be applied to other types of organizations. The authors developed a sustainable evaluation framework (SEF) with the intent of creating an evaluation system that could be self-administered…

  3. Design of single object model of software reuse framework

    NASA Astrophysics Data System (ADS)

    Yan, Liu

    2011-12-01

    To fully realize the reuse potential of the software reuse framework, this paper analyzes in detail the single object model introduced in the article "The overall design of software reuse framework" and organizes it into three modes: an add/delete/modify mode, a check mode, and an integrated search/scroll/display mode. Each mode corresponds to its own interface design template, class design, and database design concept. This modeling approach helps developers organize their work and speeds up development; even non-specialists can complete the development task easily.

  4. A unified framework for optimal multiple video object bit allocation

    NASA Astrophysics Data System (ADS)

    Chen, Zhenzhong; Ngan, King Ngi

    2005-07-01

    MPEG-4 supports object-level video coding. It is a challenge to design an optimal bit allocation strategy which considers not only how to distribute bits among multiple video objects (MVOs) but also how to achieve optimization between the texture and shape information. In this paper, we present a unified framework for optimal multiple video object bit allocation in MPEG-4. We combine the rate-distortion (R-D) models for the texture and shape information of arbitrarily shaped video objects to develop the joint texture-shape rate-distortion models. The dynamic programming (DP) technique is applied to optimize the bit allocation for the multiple video objects. The simulation results demonstrate that the proposed joint texture-shape optimization algorithm outperforms the MPEG-4 verification model on the decoded picture quality.
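
    The abstract names the ingredients (per-object R-D models plus dynamic programming) without giving the algorithm. The sketch below is a generic DP over tabulated rate-distortion operating points, one table per object, choosing one point per object to minimize total distortion under a total bit budget. It only illustrates the DP idea; the joint texture-shape R-D models and the authors' exact formulation are not reproduced here, and integer per-point rates are assumed.

    def allocate_bits(rd_tables, budget):
        """rd_tables: one list per object of (rate_in_bits, distortion) pairs, rates integer.
        budget: total bits available. Returns (total_distortion, chosen_point_indices)."""
        INF = float("inf")
        # dp[b] = (lowest total distortion achievable using exactly b bits so far, choices)
        dp = [(0.0, [])] + [(INF, None)] * budget
        for table in rd_tables:
            new_dp = [(INF, None)] * (budget + 1)
            for b in range(budget + 1):
                if dp[b][0] == INF:
                    continue
                for k, (rate, dist) in enumerate(table):
                    nb = b + rate
                    if nb <= budget and dp[b][0] + dist < new_dp[nb][0]:
                        new_dp[nb] = (dp[b][0] + dist, dp[b][1] + [k])
            dp = new_dp
        return min(dp, key=lambda entry: entry[0])    # (INF, None) if the budget is infeasible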

  5. An Object Oriented, Finite Element Framework for Linear Wave Equations

    SciTech Connect

    Koning, J M

    2004-08-12

    This dissertation documents an object oriented framework which can be used to solve any linear wave equation. The linear wave equations are expressed in the differential forms language. This differential forms expression allows a strict discrete interpretation of the system. The framework is implemented using the Galerkin Finite Element Method to define the discrete differential forms and operators. Finite element basis functions including standard scalar Nodal and vector Nedelec basis functions are used to implement the discrete differential forms resulting in a mixed finite element system. Discretizations of scalar and vector wave equations in the time and frequency domains will be demonstrated in both differential forms and vector calculus notation. This framework conserves energy, maintains physical continuity, is valid on unstructured grids, is conditionally stable, and is second-order accurate. Examples including linear electrodynamics, acoustics, elasticity and magnetohydrodynamics are demonstrated.
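
    For context, the kind of Galerkin weak form that underlies such a vector wave equation discretization with Nedelec edge elements is shown below. This is a textbook electromagnetic formulation given as an example, not an equation taken from the dissertation.

    % Textbook Galerkin weak form for the electromagnetic vector wave equation,
    % of the kind discretized with Nedelec (edge) elements; illustrative only.
    \text{Find } \mathbf{E}_h \in V_h \subset H(\mathrm{curl};\Omega)
    \text{ such that, for all } \mathbf{v}_h \in V_h,
    \quad
    \frac{d^2}{dt^2}\int_\Omega \varepsilon\,\mathbf{E}_h\cdot\mathbf{v}_h\,d\Omega
    + \int_\Omega \mu^{-1}\,(\nabla\times\mathbf{E}_h)\cdot(\nabla\times\mathbf{v}_h)\,d\Omega
    = -\frac{d}{dt}\int_\Omega \mathbf{J}\cdot\mathbf{v}_h\,d\Omega .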

  6. Tecolote: An Object-Oriented Framework for Hydrodynamics Physics

    SciTech Connect

    Holian, K.S.; Ankeny, L.A.; Clancy, S.P.; Hall, J.H.; Marshall, J.C.; McNamara, G.R.; Painter, J.W.; Zander, M.E.

    1997-12-31

    Tecolote is an object-oriented framework for both developing and accessing a variety of hydrodynamics models. It is written in C++, and is in turn built on another framework - Parallel Object-Oriented Methods and Applications (POOMA). The Tecolote framework is meant to provide modules (or building blocks) to put together hydrodynamics applications that can encompass a wide variety of physics models, numerical solution options, and underlying data storage schemes, although with only those modules activated at runtime that are necessary. Tecolote has been designed to separate physics from computer science, as much as humanly possible. The POOMA framework provides fields in C++ to Tecolote that are analogous to Fortran-90-like arrays in the way that they are used, but that, in addition, have underlying load balancing, message passing, and a special scheme for compact data storage. The POOMA fields can also have unique meshes associated with them that can allow more options than just the normal regularly-spaced Cartesian mesh. They also permit one, two, and three dimensions to be immediately accessible to the code developer and code user.

  7. A unified framework of unsupervised subjective optimized bit allocation for multiple video object coding

    NASA Astrophysics Data System (ADS)

    Chen, Zhenzhong; Han, Junwei; Ngan, King Ngi

    2005-10-01

    MPEG-4 treats a scene as a composition of several objects or so-called video object planes (VOPs) that are separately encoded and decoded. Such a flexible video coding framework makes it possible to code different video objects with different distortion scales. It is necessary to analyze the priority of the video objects according to their semantic importance, intrinsic properties and psycho-visual characteristics such that the bit budget can be distributed properly to video objects to improve the perceptual quality of the compressed video. This paper aims to provide an automatic video object priority definition method based on an object-level visual attention model and further propose an optimization framework for video object bit allocation. One significant contribution of this work is that the human visual system characteristics are incorporated into the video coding optimization process. Another advantage is that the priority of the video object can be obtained automatically instead of fixing weighting factors before encoding or relying on user interactivity. To evaluate the performance of the proposed approach, we compare it with traditional verification model bit allocation and the optimal multiple video object bit allocation algorithms. Comparing with traditional bit allocation algorithms, the objective quality of the object with higher priority is significantly improved under this framework. These results demonstrate the usefulness of this unsupervised subjective quality lifting framework.

  8. Toward Objectivity in Faculty Evaluation

    ERIC Educational Resources Information Center

    Elmore, H. W.

    2008-01-01

    The productivity of faculty members often figures prominently in annual evaluations, post-tenure reviews, and decisions about tenure, promotion, merit pay, release time, awards, and other kinds of recognition. Yet the procedures and instruments that institutions use to assess productivity and merit vary, leaving little that unifies the evaluation…

  9. Evaluation of Frameworks for HSCT Design Optimization

    NASA Technical Reports Server (NTRS)

    Krishnan, Ramki

    1998-01-01

    This report is an evaluation of engineering frameworks that could be used to augment, supplement, or replace the existing FIDO 3.5 (Framework for Interdisciplinary Design and Optimization Version 3.5) framework. The report begins with the motivation for this effort, followed by a description of an "ideal" multidisciplinary design and optimization (MDO) framework. The discussion then turns to how each candidate framework stacks up against this ideal. This report ends with recommendations as to the "best" frameworks that should be down-selected for detailed review.

  10. An Object Oriented Framework for Customizable Physics Education Software

    NASA Astrophysics Data System (ADS)

    Aslan, Hürol

    2007-04-01

    The learning improvements resulting from employing computers in physics education may be insufficient to convince many instructors to follow the trend leading to learner-centered education models. The instructors who are not willing to invest time and resources to create their own educational software will be able to contribute to the digital learning environment of the future if they can easily produce interactive simulations accompanied by their own explanations. Software developers should design according to the needs of the instructors in order to help more students benefit from the educational potential of computers. This paper describes how the capabilities of object-oriented languages can be utilized to produce customizable educational applets that can reach across language barriers and can be modified without writing additional code. The methods outlined here form the framework of an ongoing programming project intended to help instructors create interactive educational materials with still graphics and animations.

  11. Faunus: An object oriented framework for molecular simulation

    PubMed Central

    Lund, Mikael; Trulsson, Martin; Persson, Björn

    2008-01-01

    Background We present a C++ class library for Monte Carlo simulation of molecular systems, including proteins in solution. The design is generic and highly modular, enabling multiple developers to easily implement additional features. The statistical mechanical methods are documented by extensive use of code comments that – subsequently – are collected to automatically build a web-based manual. Results We show how an object oriented design can be used to create an intuitively appealing coding framework for molecular simulation. This is exemplified in a minimalistic C++ program that can calculate protein protonation states. We further discuss performance issues related to high level coding abstraction. Conclusion C++ and the Standard Template Library (STL) provide a high-performance platform for generic molecular modeling. Automatic generation of code documentation from inline comments has proven particularly useful in that no separate manual needs to be maintained. PMID:18241331

  12. Overture: An Object-Oriented Framework for Overlapping Grid Applications

    SciTech Connect

    Henshaw, W.D.

    2002-04-04

    The Overture framework is an object-oriented environment for solving partial differential equations on overlapping grids. We describe some of the tools in Overture that can be used to generate grids and solve partial differential equations (PDEs). Overture contains a collection of C++ classes that can be used to write PDE solvers either at a high level or at a lower level for efficiency. There are also a number of tools provided with Overture that can be used with no programming effort. These tools include capabilities to: repair computer-aided-design (CAD) geometries and build global surface triangulations; generate surface and volume grids with hyperbolic grid generation; generate composite overlapping grids; generate hybrid (unstructured) grids; and solve particular PDEs such as the incompressible and compressible Navier-Stokes equations.

  13. A Synoptic Framework for School Program Evaluation.

    ERIC Educational Resources Information Center

    Maher, Charles A.

    1978-01-01

    A broad-based conceptual scheme, termed the Synoptic Evaluation Framework (SEF), provides school program evaluation services. The SEF allows for the design and conduct of a range of program evaluation studies within the school context, and has been used as a basis for training doctoral school psychology students in program evaluation. (Author)

  14. Frameworks for evaluating health research capacity strengthening: a qualitative study

    PubMed Central

    2013-01-01

    Background Health research capacity strengthening (RCS) projects are often complex and hard to evaluate. In order to inform health RCS evaluation efforts, we aimed to describe and compare key characteristics of existing health RCS evaluation frameworks: their process of development, purpose, target users, structure, content and coverage of important evaluation issues. A secondary objective was to explore what use had been made of the ESSENCE framework, which attempts to address one such issue: harmonising the evaluation requirements of different funders. Methods We identified and analysed health RCS evaluation frameworks published by seven funding agencies between 2004 and 2012, using a mixed methods approach involving structured qualitative analyses of documents, a stakeholder survey and consultations with key contacts in health RCS funding agencies. Results The frameworks were intended for use predominantly by the organisations themselves, and most were oriented primarily towards funders’ internal organisational performance requirements. The frameworks made limited reference to theories that specifically concern RCS. Generic devices, such as logical frameworks, were typically used to document activities, outputs and outcomes, but with little emphasis on exploring underlying assumptions or contextual constraints. Usage of the ESSENCE framework appeared limited. Conclusions We believe that there is scope for improving frameworks through the incorporation of more accessible information about how to do evaluation in practice; greater involvement of stakeholders, following evaluation capacity building principles; greater emphasis on explaining underlying rationales of frameworks; and structuring frameworks so that they separate generic and project-specific aspects of health RCS evaluation. The third and fourth of these improvements might assist harmonisation. PMID:24330628

  15. The MEPPP Framework: A Framework for Monitoring and Evaluating Participatory Planning Processes

    NASA Astrophysics Data System (ADS)

    Hassenforder, Emeline; Pittock, Jamie; Barreteau, Olivier; Daniell, Katherine Anne; Ferrand, Nils

    2016-01-01

    Evaluating participatory processes, participatory planning processes especially, can be challenging. Due to their complexity, these processes require a specific approach to evaluation. This paper proposes a framework for evaluating projects that have adopted a participatory planning approach: the monitoring and evaluation of participatory planning processes (MEPPP) framework. The MEPPP framework is applied to one case study, a participatory planning process in the Rwenzori region in Uganda. We suggest that this example can serve as a guideline for researchers and practitioners to set up the monitoring and evaluation of their participatory planning process of interest by following six main phases: (1) description of the case, (2) clarification of the M&E viewpoint(s) and definition of the M&E objective(s), (3) identification of the context, process and outputs/outcomes analytical variables, (4) development of the M&E methods and data collection, (5) data analysis, and (6) sharing of the M&E results. Results of the application of the MEPPP framework in Uganda demonstrate the ability of the framework to tackle the complexity of participatory planning processes. Strengths and limitations of the MEPPP framework are also discussed.

  16. Tecolote: An object-oriented framework for physics development

    SciTech Connect

    Marshall, J.; Ankeny, L.; Clancy, S.

    1998-12-31

    The authors describe a C++ physics development environment, called the Tecolote Framework, which allows model developers to work more efficiently and accurately. This Framework contains a variety of meshes, operators, and parallel fields, as well as an input/output (I/O) subsystem and graphics capabilities. Model developers can inherit Tecolote's generic model interface and use the Framework's high-level field and operator components to write parallel physics equations. New Tecolote models are easily registered with the Framework, and they can be built and called directly from the input file, which greatly expedites model installation. In the process of developing an extensible and robust framework, they have found appealing solutions to some of the serious problems they encounter when parallelizing and extending the older codes. They also discuss memory and performance issues for a large hydrodynamics application built in this Framework.

  17. Evaluating the Learning in Learning Objects

    ERIC Educational Resources Information Center

    Kay, Robin H.; Knaack, Liesel

    2007-01-01

    A comprehensive review of the literature on the evaluation of learning objects revealed a number of problem areas, including emphasizing technology ahead of learning, an absence of reliability and validity estimates, over-reliance on informal descriptive data, a tendency to embrace general impressions of learning objects rather than focusing on…

  18. REEF: Retainable Evaluator Execution Framework

    PubMed Central

    Weimer, Markus; Chen, Yingda; Chun, Byung-Gon; Condie, Tyson; Curino, Carlo; Douglas, Chris; Lee, Yunseong; Majestro, Tony; Malkhi, Dahlia; Matusevych, Sergiy; Myers, Brandon; Narayanamurthy, Shravan; Ramakrishnan, Raghu; Rao, Sriram; Sears, Russell; Sezgin, Beysim; Wang, Julia

    2015-01-01

    Resource Managers like Apache YARN have emerged as a critical layer in the cloud computing system stack, but the developer abstractions for leasing cluster resources and instantiating application logic are very low-level. This flexibility comes at a high cost in terms of developer effort, as each application must repeatedly tackle the same challenges (e.g., fault-tolerance, task scheduling and coordination) and re-implement common mechanisms (e.g., caching, bulk-data transfers). This paper presents REEF, a development framework that provides a control-plane for scheduling and coordinating task-level (data-plane) work on cluster resources obtained from a Resource Manager. REEF provides mechanisms that facilitate resource re-use for data caching, and state management abstractions that greatly ease the development of elastic data processing work-flows on cloud platforms that support a Resource Manager service. REEF is being used to develop several commercial offerings such as the Azure Stream Analytics service. Furthermore, we demonstrate REEF development of a distributed shell application, a machine learning algorithm, and a port of the CORFU [4] system. REEF is also currently an Apache Incubator project that has attracted contributors from several institutions. PMID:26819493

  19. FOSE: a framework for open science evaluation

    PubMed Central

    Walther, Alexander; van den Bosch, Jasper J. F.

    2012-01-01

    Pre-publication peer review of scientific literature in its present state suffers from a lack of evaluation validity and transparency to the community. Inspired by social networks, we propose a framework for the open exchange of post-publication evaluation to complement the current system. We first formulate a number of necessary conditions that should be met by any design dedicated to perform open scientific evaluation. To introduce our framework, we provide a basic data standard and communication protocol. We argue for the superiority of a provider-independent framework, over a few isolated implementations, which allows the collection and analysis of open evaluation content across a wide range of diverse providers like scientific journals, research institutions, social networks, publishers' websites, and more. Furthermore, we describe how its technical implementation can be achieved by using existing web standards and technology. Finally, we illustrate this with a set of examples and discuss further potential. PMID:22754522

  20. Framework for Evaluation of Equity Initiatives

    ERIC Educational Resources Information Center

    Bexley, Emmaline; Harris, Kerri-Lee; James, Richard

    2010-01-01

    The Framework for Evaluation of Equity Initiatives has been prepared to support the Go8 Equity Strategy. Its purpose is to assist Group of Eight (Go8) universities to evaluate the effectiveness of their equity initiatives and interventions in the context of federal policies and the distinctive missions and responsibilities of the individual Go8…

  1. A Generalized Decision Framework Using Multi-objective Optimization for Water Resources Planning

    NASA Astrophysics Data System (ADS)

    Basdekas, L.; Stewart, N.; Triana, E.

    2013-12-01

    Colorado Springs Utilities (CSU) is currently engaged in an Integrated Water Resource Plan (IWRP) to address the complex planning scenarios, across multiple time scales, currently faced by CSU. The modeling framework developed for the IWRP uses a flexible data-centered Decision Support System (DSS) with a MODSIM-based modeling system to represent the operation of the current CSU raw water system coupled with a state-of-the-art multi-objective optimization algorithm. Three basic components are required for the framework, which can be implemented for planning horizons ranging from seasonal to interdecadal. First, a water resources system model is required that is capable of reasonable system simulation to resolve performance metrics at the appropriate temporal and spatial scales of interest. The system model should be an existing simulation model, or one developed during the planning process with stakeholders, so that 'buy-in' has already been achieved. Second, a hydrologic scenario tool(s) capable of generating a range of plausible inflows for the planning period of interest is required. This may include paleo informed or climate change informed sequences. Third, a multi-objective optimization model that can be wrapped around the system simulation model is required. The new generation of multi-objective optimization models do not require parameterization which greatly reduces problem complexity. Bridging the gap between research and practice will be evident as we use a case study from CSU's planning process to demonstrate this framework with specific competing water management objectives. Careful formulation of objective functions, choice of decision variables, and system constraints will be discussed. Rather than treating results as theoretically Pareto optimal in a planning process, we use the powerful multi-objective optimization models as tools to more efficiently and effectively move out of the inferior decision space. The use of this framework will help CSU
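
    The record describes wrapping a multi-objective optimizer around a system simulation model so that competing water management objectives can be traded off. As a hedged, generic illustration of that wrapper idea (not CSU's DSS, the MODSIM coupling, or the specific optimizer used in the IWRP), the sketch below evaluates candidate decision vectors through a stand-in simulate() callable and keeps the non-dominated (Pareto) set.

    def dominates(a, b):
        """True if objective vector a is at least as good as b everywhere and
        strictly better somewhere (all objectives minimized)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(candidates, simulate):
        """candidates: iterable of decision-variable vectors.
        simulate: callable mapping a decision vector to a tuple of objective values
        (a placeholder for the system simulation model)."""
        evaluated = [(c, simulate(c)) for c in candidates]
        front = []
        for c, obj in evaluated:
            if not any(dominates(other, obj) for _, other in evaluated if other is not obj):
                front.append((c, obj))
        return front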

  2. Proposal of a Framework for Internet Based Licensing of Learning Objects

    ERIC Educational Resources Information Center

    Santos, Osvaldo A.; Ramos, Fernando M. S.

    2004-01-01

    This paper presents a proposal of a framework whose main objective is to manage the delivery and rendering of learning objects in a digital rights controlled environment. The framework is based on a digital licensing scheme that requires each learning object to have the proper license in order to be rendered by a trusted player. A conceptual model…

  3. Object-oriented data analysis framework for neutron scattering experiments

    NASA Astrophysics Data System (ADS)

    Suzuki, Jiro; Nakatani, Takeshi; Ohhara, Takashi; Inamura, Yasuhiro; Yonemura, Masao; Morishima, Takahiro; Aoyagi, Tetsuo; Manabe, Atsushi; Otomo, Toshiya

    2009-02-01

    The Materials and Life Science Facility (MLF) of the Japan Proton Accelerator Research Complex (J-PARC) is one of the facilities providing the highest-intensity pulsed neutron and muon beams. The MLF computing environment design group organizes the computing environments of the MLF and its instruments. It is important that the computing environment be provided by the facility side, because meta-data formats, analysis functions, and the overall data analysis strategy should be shared among the many instruments in the MLF. The C++ class library named Manyo-lib is a software framework for developing data reduction and analysis software. The framework is composed of the class library for data reduction and analysis operators, network-distributed data processing modules, and data containers. The class library is wrapped by a Python interface created by SWIG. All classes of the framework can be called from the Python language, and Manyo-lib cooperates with the data acquisition and data-visualization components through the MLF-platform, a user interface unified across the MLF, which runs on Python. Raw data in the event-data format obtained by the data acquisition systems are converted into histogram-format data by Manyo-lib with high performance, and data reduction and analysis are performed with user-application software developed on top of Manyo-lib. We enforce standardization of data containers with Manyo-lib, and many additional fundamental data containers have been designed and developed. Experimental and analysis data in the data containers can be converted into NeXus files. Manyo-lib is the standard framework for developing analysis software in the MLF, and prototypes of data-analysis software for each instrument are being developed by the instrument teams.
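
    The abstract mentions that raw event-format data are converted into histogram-format data before reduction and analysis. As a simple illustration of that conversion idea only (NumPy-based, not the Manyo-lib data containers or their API, and with invented parameter names), one detector pixel's time-of-flight event list can be binned as follows.

    import numpy as np

    def events_to_histogram(tof_events_us, bin_width_us, t_min_us, t_max_us):
        """tof_events_us: 1-D array of time-of-flight values (microseconds) for one
        detector pixel. Returns (bin_edges, counts)."""
        edges = np.arange(t_min_us, t_max_us + bin_width_us, bin_width_us)
        counts, edges = np.histogram(tof_events_us, bins=edges)
        return edges, counts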

  4. Adapting the Mathematical Task Framework to Design Online Didactic Objects

    ERIC Educational Resources Information Center

    Bowers, Janet; Bezuk, Nadine; Aguilar, Karen

    2011-01-01

    Designing didactic objects involves imagining how students can conceive of specific mathematical topics and then imagining what types of classroom discussions could support these mental constructions. This study investigated whether it was possible to design Java applets that might serve as didactic objects to support online learning where…

  5. Toward Advancing Nano-Object Count Metrology: A Best Practice Framework

    PubMed Central

    Boyko, Volodymyr; Meyers, Greg; Voetz, Matthias; Wohlleben, Wendel

    2013-01-01

    Background: A movement among international agencies and policy makers to classify industrial materials by their number content of sub–100-nm particles could have broad implications for the development of sustainable nanotechnologies. Objectives: Here we highlight current particle size metrology challenges faced by the chemical industry due to these emerging number percent content thresholds, provide a suggested best-practice framework for nano-object identification, and identify research needs as a path forward. Discussion: Harmonized methods for identifying nanomaterials by size and count for many real-world samples do not currently exist. Although particle size remains the sole discriminating factor for classifying a material as “nano,” inconsistencies in size metrology will continue to confound policy and decision making. Moreover, there are concerns that the casting of a wide net with still-unproven metrology methods may stifle the development and judicious implementation of sustainable nanotechnologies. Based on the current state of the art, we propose a tiered approach for evaluating materials. To enable future risk-based refinements of these emerging definitions, we recommend that this framework also be considered in environmental and human health research involving the implications of nanomaterials. Conclusion: Substantial scientific scrutiny is needed in the area of nanomaterial metrology to establish best practices and to develop suitable methods before implementing definitions based solely on number percent nano-object content for regulatory purposes. Strong cooperation between industry, academia, and research institutions will be required to fully develop and implement detailed frameworks for nanomaterial identification with respect to emerging count-based metrics. Citation: Brown SC, Boyko V, Meyers G, Voetz M, Wohlleben W. 2013. Toward advancing nano-object count metrology: a best practice framework. Environ Health Perspect 121:1282–1291;

  6. PICO: An Object-Oriented Framework for Branch and Bound

    SciTech Connect

    Eckstein, Jonathan; Hart, William E.; Phillips, Cynthia A.

    2000-12-01

    This report describes the design of PICO, a C++ framework for implementing general parallel branch-and-bound algorithms. The PICO framework provides a mechanism for the efficient implementation of a wide range of branch-and-bound methods on an equally wide range of parallel computing platforms. We first discuss the basic architecture of PICO, including the application class hierarchy and the package's serial and parallel layers. We next describe the design of the serial layer, and its central notion of manipulating subproblem states. Then, we discuss the design of the parallel layer, which includes flexible processor clustering and communication rates, various load balancing mechanisms, and a non-preemptive task scheduler running on each processor. We describe the application of the package to a branch-and-bound method for mixed integer programming, along with computational results on the ASCI Red massively parallel computer. Finally we describe the application of the branch-and-bound mixed-integer programming code to a resource constrained project scheduling problem for Pantex.
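
    PICO itself is a C++ framework and its classes are not shown in this abstract; to illustrate the serial-layer notion of manipulating subproblem states, the sketch below (in Python, for consistency with the other sketches on this page) runs a best-first branch-and-bound loop on a toy 0/1 knapsack. Each heap entry is a subproblem state carrying a bound, a partial-assignment depth, the accumulated value, and the remaining capacity; positive item weights are assumed, and nothing here is PICO code.

    import heapq

    def knapsack_bnb(values, weights, capacity):
        """Toy best-first branch and bound (maximization); assumes positive weights."""
        n = len(values)

        def bound(i, value, room):
            # Fractional relaxation of the remaining items gives an upper bound.
            for v, w in sorted(zip(values[i:], weights[i:]),
                               key=lambda p: p[0] / p[1], reverse=True):
                if w <= room:
                    value, room = value + v, room - w
                else:
                    return value + v * room / w
            return value

        best = 0
        # Subproblem state: (-upper_bound, next_item_index, value_so_far, room_left)
        heap = [(-bound(0, 0, capacity), 0, 0, capacity)]
        while heap:
            neg_ub, i, value, room = heapq.heappop(heap)
            if -neg_ub <= best:                  # bound cannot beat the incumbent: prune
                continue
            if i == n:
                best = max(best, value)          # leaf: update the incumbent
                continue
            if weights[i] <= room:               # branch 1: include item i
                heapq.heappush(heap, (-bound(i + 1, value + values[i], room - weights[i]),
                                      i + 1, value + values[i], room - weights[i]))
            heapq.heappush(heap, (-bound(i + 1, value, room),  # branch 2: exclude item i
                                  i + 1, value, room))
        return best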

  7. Optimal object association in the Dempster-Shafer framework.

    PubMed

    Denoux, Thierry; El Zoghby, Nicole; Cherfaoui, Véronique; Jouglet, Antoine

    2014-12-01

    Object association is a crucial step in target tracking and data fusion applications. This task can be formalized as the search for a relation between two sets (e.g., a set of tracks and a set of observations) in such a way that each object in one set is matched with at most one object in the other set. In this paper, this problem is tackled using the formalism of belief functions. Evidence about the possible association of each object pair, usually obtained by comparing the values of some attributes, is modeled by a Dempster-Shafer mass function defined in the frame of all possible relations. These mass functions are combined using Dempster's rule, and the relation with maximal plausibility is found by solving an integer linear programming problem. This problem is shown to be equivalent to a linear assignment problem, which can be solved in polynomial time using, for example, the Hungarian algorithm. This method is demonstrated using simulated and real data. The 3-D extension of this problem (with three object sets) is also formalized and is shown to be NP-hard. PMID:24801831
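
    The abstract reduces the maximum-plausibility relation to a linear assignment problem solvable by the Hungarian algorithm. The sketch below illustrates only that final step, assuming the pairwise plausibilities have already been computed upstream by combining the attribute-based mass functions with Dempster's rule (not reproduced here); it uses SciPy's Hungarian-algorithm solver, and handling objects that should remain unmatched would additionally require dummy rows or columns.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def associate(plausibility, floor=1e-12):
        """plausibility: (n_tracks, n_observations) array of pairwise plausibilities in (0, 1].
        Maximizing the product of plausibilities = minimizing the sum of -log plausibilities,
        i.e., a linear assignment problem. Returns (track_index, observation_index) pairs."""
        cost = -np.log(np.clip(plausibility, floor, 1.0))
        rows, cols = linear_sum_assignment(cost)
        return list(zip(rows.tolist(), cols.tolist()))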

  8. A Framework for Assessing Computer Competence: Defining Objectives.

    ERIC Educational Resources Information Center

    National Assessment of Educational Progress, Princeton, NJ.

    Computer skills objectives have been developed for the 1986 National Assessment of Educational Progress (NAEP). These items will be administered to a large number of American students aged 9, 13, and 17 in grades 3, 7, and 11. For this first national assessment of computer skills, it was necessary to consider the existing expertise of school…

  9. Evaluation framework for carotid bifurcation lumen segmentation and stenosis grading.

    PubMed

    Hameeteman, K; Zuluaga, M A; Freiman, M; Joskowicz, L; Cuisenaire, O; Valencia, L Flórez; Gülsün, M A; Krissian, K; Mille, J; Wong, W C K; Orkisz, M; Tek, H; Hoyos, M Hernández; Benmansour, F; Chung, A C S; Rozie, S; van Gils, M; van den Borne, L; Sosna, J; Berman, P; Cohen, N; Douek, P C; Sánchez, I; Aissat, M; Schaap, M; Metz, C T; Krestin, G P; van der Lugt, A; Niessen, W J; van Walsum, T

    2011-08-01

    This paper describes an evaluation framework that allows a standardized and objective quantitative comparison of carotid artery lumen segmentation and stenosis grading algorithms. We describe the data repository comprising 56 multi-center, multi-vendor CTA datasets, their acquisition, the creation of the reference standard and the evaluation measures. This framework has been introduced at the MICCAI 2009 workshop 3D Segmentation in the Clinic: A Grand Challenge III, and we compare the results of eight teams that participated. These results show that automated segmentation of the vessel lumen is possible with a precision that is comparable to manual annotation. The framework is open for new submissions through the website http://cls2009.bigr.nl. PMID:21419689
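
    The specific evaluation measures are not listed in this abstract; as a representative example of the kind of overlap measure commonly used to score lumen segmentations against a reference standard (not necessarily the measure used in this challenge), a Dice coefficient can be computed as follows.

    import numpy as np

    def dice(segmentation, reference):
        """segmentation, reference: boolean voxel masks of identical shape."""
        seg = np.asarray(segmentation, dtype=bool)
        ref = np.asarray(reference, dtype=bool)
        overlap = np.logical_and(seg, ref).sum()
        total = seg.sum() + ref.sum()
        return 1.0 if total == 0 else 2.0 * overlap / total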

  10. Adapting the mathematical task framework to design online didactic objects

    NASA Astrophysics Data System (ADS)

    Bowers, Janet; Bezuk, Nadine; Aguilar, Karen

    2011-06-01

    Designing didactic objects involves imagining how students can conceive of specific mathematical topics and then imagining what types of classroom discussions could support these mental constructions. This study investigated whether it was possible to design Java applets that might serve as didactic objects to support online learning where 'discussions' are broadly defined as the conversations students have with themselves as they interact with the dynamic mathematical representations on the screen. Eighty-four pre-service elementary teachers enrolled in hybrid mathematics courses were asked to interact with a series of applets designed to support their understanding of qualitative graphing. The results of the surveys indicate that various design features of the applets did in fact cause perturbations and opportunities for resolutions that enabled the users to 'discuss' their learning by reflecting on their in-class discussions and online activities. The discussion includes four design features for guiding future applet creation.

  11. Depth perception in the framework of General Object Constancy.

    PubMed

    Qian, Jiehui; Petrov, Yury

    2013-01-01

    Size constancy is a well-known example of perceptual stabilization accounting for the effect of viewing distance on retinal image size. In a recent study (Qian & Petrov, 2012), we demonstrated a similar stabilization mechanism for contrast perception and suggested that the brain accounts for effects of perceived distance on various other object features in a similar way, a hypothesis that we called General Object Constancy. Here we report a new illusion of depth further supporting this hypothesis. Pairs of disks moved across the screen in a pattern of radial optic flow. A pair comprised a small black disk floating in front of a large white disk, creating the percept of a pencil tip viewed head on. As these "pencils" moved away, they appeared to grow in contrast, in diameter, and also appeared to be getting "sharper." The contrast and size illusions replicated our previous findings, while the depth gradient (sharpness) illusion revealed a depth constancy phenomenon. We discovered that depth and size constancies were related, e.g., the two illusions were strongly correlated across observers. Whereas the illusory diameter increase could not be canceled by any degree of depth modulation, decreasing the diameter of the "pencils" during optic flow motion (thus increasing their disparity gradient) weakened the illusory depth gradient increase. This paradoxical result, as well as our other results, is explained by the General Object Constancy model: Besides using the same scaling factor to account for size, contrast, and depth variations with distance, the brain uses the apparent object size to additionally scale contrast and depth signals. PMID:24023274

  12. Community health program evaluation using accreditation as a framework.

    PubMed

    Severance, Janet Hahn

    2009-03-01

    Increasingly, health system leaders seek to determine whether community health interventions make a difference to individuals in the community. However, community health improvement is difficult to measure, and health system staff may not be familiar with evaluation research methods. Health care organizations can improve their evaluation efforts relatively easily by building on what they already know: the Joint Commission accreditation process. Viewed through the lens of accreditation, community health evaluation becomes more approachable. This article provides a framework for practical approaches to program planning, evaluation, and sustainability. Joint Commission accreditation functions (chapters) are similar to health program goals. Standards are similar to program objectives. Elements of performance are similar to activities or methods. Scoring comparisons are similar to measures. PMID:19116229

  13. A framework for qualitative reasoning about solid objects

    NASA Technical Reports Server (NTRS)

    Davis, E.

    1987-01-01

    Predicting the behavior of a qualitatively described system of solid objects requires a combination of geometrical, temporal, and physical reasoning. Methods based upon formulating and solving differential equations are not adequate for robust prediction, since the behavior of a system over extended time may be much simpler than its behavior over local time. A first-order logic, in which one can state simple physical problems and derive their solution deductively, without recourse to solving the differential equations, is discussed. This logic is substantially more expressive and powerful than any previous AI representational system in this domain.

  14. A Framework for Realistic Modeling and Display of Object Surface Appearance

    NASA Astrophysics Data System (ADS)

    Darling, Benjamin A.

    With advances in screen and video hardware technology, the type of content presented on computers has progressed from text and simple shapes to high-resolution photographs, photorealistic renderings, and high-definition video. At the same time, there have been significant advances in the area of content capture, with the development of devices and methods for creating rich digital representations of real-world objects. Unlike photo or video capture, which provide a fixed record of the light in a scene, these new technologies provide information on the underlying properties of the objects, allowing their appearance to be simulated for novel lighting and viewing conditions. These capabilities provide an opportunity to continue the computer display progression, from high-fidelity image presentations to digital surrogates that recreate the experience of directly viewing objects in the real world. In this dissertation, a framework was developed for representing objects with complex color, gloss, and texture properties and displaying them onscreen to appear as if they are part of the real-world environment. At its core, there is a conceptual shift from a traditional image-based display workflow to an object-based one. Instead of presenting the stored patterns of light from a scene, the objective is to reproduce the appearance attributes of a stored object by simulating its dynamic patterns of light for the real viewing and lighting geometry. This is accomplished using a computational approach where the physical light sources are modeled and the observer and display screen are actively tracked. Surface colors are calculated for the real spectral composition of the illumination with a custom multispectral rendering pipeline. In a set of experiments, the accuracy of color and gloss reproduction was evaluated by measuring the screen directly with a spectroradiometer. Gloss reproduction was assessed by comparing gonio measurements of the screen output to measurements of the

  15. Evaluating risk propensity using an objective instrument.

    PubMed

    Sueiro Abad, Manuel J; Sánchez-Iglesias, Ivan; Moncayo de Tella, Alejandra

    2011-05-01

    Risk propensity is the stable tendency to choose options with a lower probability of success, but greater rewards. Its evaluation has been approached from various perspectives: from self-report questionnaires to objective tests. Self-report questionnaires have often been criticized due to interference from voluntary and involuntary biases, in addition to their lack of predictive value. Objective tests, on the other hand, require resources that make them difficult to administer to large samples. This paper presents an easy-to-administer, 30-item risk propensity test. Each item is itself an objective test describing a hypothetical situation in which the subject must choose between three options, each with a different gain function but equivalent in expected value. To assess its psychometric fit, the questionnaire was administered to 222 subjects, and we performed a test of its reliability as well as exploratory factor analysis. The results supported a three-factor model of risk (Sports and Gambling, Long-term Plans, and Loss Management). After making the necessary adjustments and incorporating a global factor of risk propensity, confirmatory factor analysis was done, revealing that the data exhibited adequate goodness of fit. PMID:21568196

  16. A Framework for the Objective Assessment of Registration Accuracy

    PubMed Central

    Simonetti, Flavio; Foroni, Roberto Israel

    2014-01-01

    Validation and accuracy assessment are the main bottlenecks preventing the adoption of image processing algorithms in clinical practice. In the classical approach, a posteriori analysis is performed through objective metrics. In this work, a different approach based on Petri nets is proposed. The basic idea consists in predicting the accuracy of a given pipeline based on the identification and characterization of the sources of inaccuracy. The concept is demonstrated on a case study: intrasubject rigid and affine registration of magnetic resonance images. Both synthetic and real data are considered. While synthetic data allow the benchmarking of the performance with respect to the ground truth, real data enable assessment of the robustness of the methodology in real contexts as well as determination of the suitability of using synthetic data in the training phase. Results revealed a higher correlation and a lower dispersion among the metrics for simulated data, while the opposite trend was observed for pathologic ones. Results show that the proposed model not only provides a good prediction performance but also leads to the optimization of the end-to-end chain in terms of accuracy and robustness, setting the ground for its generalization to different and more complex scenarios. PMID:24659997

  17. Objective evaluation of generic drug information.

    PubMed

    Iijima, Hisashi; Kamei, Miwako; Koshimizu, Toshimasa; Shiragami, Makoto

    2004-06-01

    Pharmacists active in health care venues need to be able to evaluate generic drugs in terms of effectiveness, safety, and economy to ensure that they are used appropriately. As part of the ongoing study of these factors, we carried out an objective evaluation of information provided for generics. A minimum of 20 commercially available products was considered for each pharmaceutical ingredient. The information subjected to evaluation consisted of the text of drug package inserts and information noted on interview forms. Using our own criteria for evaluating drug information, we attempted to quantify the amounts of information provided. Then, based on the numerical values obtained, we calculated information quantities with reference to drug prices to study the relationship between prices and available information for original drugs and their later-developed, generic equivalents. A total of 14 different pharmaceutical ingredients (327 product items) were considered, with the information quantity for generics amounting to between 27.9 ± 17.8% and 46.3 ± 21.4% (mean ± SD) of that for the original drugs. Examined on the basis of individual pharmaceutical companies, the corresponding ratio ranged from 15.1 ± 7.8% to 62.4 ± 6.4% (mean ± SD). For generics, the relationship between drug price (expressed against a value of 1.0 for original drugs) and information quantity (Qua(i)) came to between 0.79 ± 0.46% and 1.90 ± 0.79% (mean ± SD). These results clearly point to the importance of evaluating information quantity for generic drugs on a maker-by-maker basis. PMID:15170069

  18. An evaluation framework for comparing geocoding systems

    PubMed Central

    2013-01-01

    Background Geocoding, the process of converting textual information describing a location into one or more digital geographic representations, is a routine task performed at large organizations and government agencies across the globe. In a health context, this task is often a fundamental first step performed prior to all operations that take place in a spatially-based health study. As such, the quality of the geocoding system used within these agencies is of paramount concern to the agency (the producer) and researchers or policy-makers who wish to use these data (consumers). However, geocoding systems are continually evolving, with new products regularly coming on the market. Agencies must develop and use criteria across a number of axes when faced with decisions about building, buying, or maintaining any particular geocoding system. To date, published criteria have focused on one or more aspects of geocode quality without taking a holistic view of a geocoding system’s role within a large organization. The primary purpose of this study is to develop and test an evaluation framework to assist a large organization in determining which geocoding systems will meet its operational needs. Methods A geocoding platform evaluation framework is derived through an examination of prior literature on geocoding accuracy. The framework developed extends commonly used geocoding metrics to take into account the specific concerns of large organizations for which geocoding is a fundamental operational capability tightly-knit into its core mission of processing health data records. A case study is performed to evaluate the strengths and weaknesses of five geocoding platforms currently available in the Australian geospatial marketplace. Results The evaluation framework developed in this research proved successful in differentiating between key capabilities of geocoding systems that are important in the context of a large organization with significant investments in geocoding

  19. An evaluation framework for participatory modelling

    NASA Astrophysics Data System (ADS)

    Krueger, T.; Inman, A.; Chilvers, J.

    2012-04-01

    Strong arguments for participatory modelling in hydrology can be made on substantive, instrumental and normative grounds. These arguments have led to increasingly diverse groups of stakeholders (here anyone affecting or affected by an issue) getting involved in hydrological research and the management of water resources. In fact, participation has become a requirement of many research grants, programs, plans and policies. However, evidence of beneficial outcomes of participation as suggested by the arguments is difficult to generate and therefore rare. This is because outcomes are diverse, distributed, often tacit, and take time to emerge. In this paper we develop an evaluation framework for participatory modelling focussed on learning outcomes. Learning encompasses many of the potential benefits of participation, such as better models through diversity of knowledge and scrutiny, stakeholder empowerment, greater trust in models and ownership of subsequent decisions, individual moral development, reflexivity, relationships, social capital, institutional change, resilience and sustainability. Based on the theories of experiential, transformative and social learning, complemented by practitioner experience, our framework examines if, when and how learning has occurred. Special emphasis is placed on the role of models as learning catalysts. We map the distribution of learning between stakeholders, scientists (as a subgroup of stakeholders) and models. And we analyse what type of learning has occurred: instrumental learning (broadly cognitive enhancement) and/or communicative learning (change in interpreting meanings, intentions and values associated with actions and activities; group dynamics). We demonstrate how our framework can be translated into a questionnaire-based survey conducted with stakeholders and scientists at key stages of the participatory process, and show preliminary insights from applying the framework within a rural pollution management situation in

  20. Borg: an auto-adaptive many-objective evolutionary computing framework.

    PubMed

    Hadka, David; Reed, Patrick

    2013-01-01

    This study introduces the Borg multi-objective evolutionary algorithm (MOEA) for many-objective, multimodal optimization. The Borg MOEA combines ε-dominance, a measure of convergence speed named ε-progress, randomized restarts, and auto-adaptive multioperator recombination into a unified optimization framework. A comparative study on 33 instances of 18 test problems from the DTLZ, WFG, and CEC 2009 test suites demonstrates that Borg meets or exceeds the performance of six state-of-the-art MOEAs on the majority of the tested problems. The performance for each test problem is evaluated using a 1,000-point Latin hypercube sampling of each algorithm's feasible parameterization space. The statistical performance of every sampled MOEA parameterization is evaluated using 50 replicate random seed trials. The Borg MOEA is not a single algorithm; instead it represents a class of algorithms whose operators are adaptively selected based on the problem. The adaptive discovery of key operators is of particular importance for benchmarking how variation operators enhance search for complex many-objective problems. PMID:22385134
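
    The ε-dominance archive at the heart of this class of algorithms can be illustrated compactly. The sketch below is not the Borg implementation itself; it only shows, for an assumed two-objective minimization problem and a hypothetical ε value, how objective vectors are snapped to ε-boxes and compared.

        # Minimal sketch of epsilon-box dominance, the archive acceptance test named in the
        # abstract; function names and the two-objective example are illustrative only.
        import numpy as np

        def eps_box(objectives, eps):
            """Map an objective vector (minimization) to its epsilon-box index."""
            return np.floor(np.asarray(objectives) / eps).astype(int)

        def eps_dominates(a, b, eps):
            """True if a epsilon-dominates b: all boxes <=, at least one strictly <."""
            box_a, box_b = eps_box(a, eps), eps_box(b, eps)
            return bool(np.all(box_a <= box_b) and np.any(box_a < box_b))

        # Two candidate solutions, eps = 0.1 per objective: boxes (1, 3) vs (2, 4)
        print(eps_dominates([0.12, 0.31], [0.28, 0.44], eps=0.1))  # True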

  1. PEAPOL (Program Evaluation at the Performance Objective Level) Outside Evaluation.

    ERIC Educational Resources Information Center

    Auvil, Mary S.

    In evaluating this pilot project, which developed a computer system for assessing student progress and cost effectiveness as related to achievement of performance objectives, interviews were conducted with project participants, including project staff, school administrators, and the auto shop instructors. Project documents were reviewed and a…

  2. Spectrum of Objectivity-Credibility in Evaluation.

    ERIC Educational Resources Information Center

    Ahn, Unhai R.; And Others

    Evaluation roles used in the Department of Research and Development in the Cincinnati Public Schools are identified and described. These include: project evaluator, local-school evaluator, independent program evaluator, external evaluator and external auditors. The merits of each evaluation role will be discussed as to its relationship with…

  3. Objective evaluation of cutaneous thermal sensitivity

    NASA Technical Reports Server (NTRS)

    Vanbeaumont, W.

    1972-01-01

    The possibility of obtaining reliable and objective quantitative responses was investigated under conditions where only temperature changes in localized cutaneous areas evoked measurable changes in remote sudomotor activity. Both male and female subjects were studied to evaluate sex differences in thermal sensitivity. The results discussed include: sweat rate responses to contralateral cooling, comparison of sweat rate responses between men and women to contralateral cooling, influence of the menstrual cycle on the sweat rate responses to contralateral cooling, comparison of threshold of sweating responses between men and women, and correlation of latency to threshold for whole body sweating. It is concluded that the quantitative aspects of the reflex response are affected by both the density and activation of receptors as well as the rate of heat loss; men responded 8-10% more frequently than women to thermode cooling, the magnitude of responses being greater for men; and women responded 7-9% more frequently to thermode cooling on day 1 of menstruation, as compared to day 15.

  4. Report on objective ride quality evaluation

    NASA Technical Reports Server (NTRS)

    Wambold, J. C.; Park, W. H.

    1974-01-01

    The correlation of absorbed power as an objective ride measure to the subjective evaluation for the bus data was investigated. For some individual bus rides the correlations were poor, but when a sufficient number of rides was used to give a reasonable sample base, an excellent correlation was obtained. The following logarithmic function was derived: S = 1.7245 ln (39.6849 AP), where S = the subjective rating of the ride; and AP = the absorbed power in watts. A six-degree-of-freedom method developed for aircraft data was completed. Preliminary correlation of absorbed power with ISO standards further enhances the bus ride and absorbed power correlation numbers since the AP values obtained are of the same order of magnitude for both correlations. While it would then appear that one could just use ISO standards, there is no way to add the effect of three degrees of freedom. The absorbed power provides a method of adding the effects due to the three major directions plus the pitch and roll.
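
    The derived relation can be evaluated directly. The minimal sketch below assumes AP is expressed in watts and S is the dimensionless subjective rating; the absorbed-power values are illustrative and not taken from the study.

        # Minimal sketch evaluating the derived ride-rating relation S = 1.7245 ln(39.6849 * AP);
        # AP is absorbed power in watts, S the subjective ride rating reported in the abstract.
        import math

        def subjective_rating(absorbed_power_watts):
            return 1.7245 * math.log(39.6849 * absorbed_power_watts)

        for ap in (0.05, 0.5, 2.0):  # illustrative absorbed-power values, not from the study
            print(f"AP = {ap:4.2f} W  ->  S = {subjective_rating(ap):.2f}")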

  5. ROSE: The Design of a General Tool for the Independent Optimization of Object-Oriented Frameworks

    SciTech Connect

    Davis, K.; Philip, B.; Quinlan, D.

    1999-05-18

    ROSE represents a programmable preprocessor for the highly aggressive optimization of C++ object-oriented frameworks. A fundamental feature of ROSE is that it preserves the semantics, the implicit meaning, of the object-oriented framework's abstractions throughout the optimization process, permitting the framework's abstractions to be recognized and optimizations to capitalize upon the added value of the framework's true meaning. In contrast, a C++ compiler only sees the semantics of the C++ language and thus is severely limited in what optimizations it can introduce. The use of the semantics of the framework's abstractions avoids program analysis that would be incapable of recapturing the framework's full semantics from those of the C++ language implementation of the application or framework. For example, no level of program analysis within the C++ compiler could be expected to recognize the use of adaptive mesh refinement and introduce optimizations based upon such information. Since ROSE is programmable, additional specialized program analysis is possible which then complements the semantics of the framework's abstractions. Enabling an optimization mechanism to use the high level semantics of the framework's abstractions together with a programmable level of program analysis (e.g. dependence analysis), at the level of the framework's abstractions, allows for the design of high performance object-oriented frameworks with uniquely tailored sophisticated optimizations far beyond the limits of contemporary serial FORTRAN 77, C or C++ language compiler technology. In short, faster, more highly aggressive optimizations are possible. The resulting optimizations are literally driven by the framework's definition of its abstractions. Since the abstractions within a framework are of third party design the optimizations are similarly of third party design, specifically independent of the compiler and the applications that use the framework. The interface to ROSE is

  6. Cryptographic framework for document-objects resulting from multiparty collaborative transactions.

    PubMed

    Goh, A

    2000-01-01

    Multiparty transactional frameworks--i.e. Electronic Data Interchange (EDI) or Health Level (HL) 7--often result in composite documents which can be accurately modelled using hyperlinked document-objects. The structural complexity arising from multiauthor involvement and transaction-specific sequencing would be poorly handled by conventional digital signature schemes based on a single evaluation of a one-way hash function and asymmetric cryptography. In this paper we outline the generation of structure-specific authentication hash-trees for the authentication of transactional document-objects, followed by asymmetric signature generation on the hash-tree value. Server-side multi-client signature verification would probably constitute the single most compute-intensive task, hence the motivation for our usage of the Rabin signature protocol which results in significantly reduced verification workloads compared to the more commonly applied Rivest-Shamir-Adleman (RSA) protocol. Data privacy is handled via symmetric encryption of message traffic using session-specific keys obtained through key-negotiation mechanisms based on discrete-logarithm cryptography. Individual client-to-server channels can be secured using a double key-pair variation of Diffie-Hellman (DH) key negotiation, usage of which also enables bidirectional node authentication. The reciprocal server-to-client multicast channel is secured through Burmester-Desmedt (BD) key-negotiation which enjoys significant advantages over the usual multiparty extensions to the DH protocol. The implementation of hash-tree signatures and bi/multidirectional key negotiation results in a comprehensive cryptographic framework for multiparty document-objects satisfying both authentication and data privacy requirements. PMID:11187485
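
    The structure-specific hash-tree construction can be sketched in a few lines. The example below is a generic Merkle-style tree, not the authors' exact scheme: SHA-256 and the pairing of sibling nodes are assumptions, and the asymmetric (Rabin) signature over the root value is omitted.

        # Minimal sketch of an authentication hash-tree over document-object fragments;
        # SHA-256 and the pairing scheme are assumptions, and the asymmetric (Rabin)
        # signature over the root value is omitted here.
        import hashlib

        def h(data: bytes) -> bytes:
            return hashlib.sha256(data).digest()

        def hash_tree_root(fragments):
            """Hash each document-object fragment, then fold pairs upward to a single root."""
            level = [h(f) for f in fragments]
            while len(level) > 1:
                if len(level) % 2:                 # duplicate last node on odd-sized levels
                    level.append(level[-1])
                level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
            return level[0]

        root = hash_tree_root([b"order-form", b"lab-report", b"physician-note"])
        print(root.hex())  # this value would then be signed with the signer's private key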

  7. A Multi-Component Model for Assessing Learning Objects: The Learning Object Evaluation Metric (LOEM)

    ERIC Educational Resources Information Center

    Kay, Robin H.; Knaack, Liesel

    2008-01-01

    While discussion of the criteria needed to assess learning objects has been extensive, a formal, systematic model for evaluation has yet to be thoroughly tested. The purpose of the following study was to develop and assess a multi-component model for evaluating learning objects. The Learning Object Evaluation Metric (LOEM) was developed from a…

  8. A Framework for the Flexible Content Packaging of Learning Objects and Learning Designs

    ERIC Educational Resources Information Center

    Lukasiak, Jason; Agostinho, Shirley; Burnett, Ian; Drury, Gerrard; Goodes, Jason; Bennett, Sue; Lockyer, Lori; Harper, Barry

    2004-01-01

    This paper presents a platform-independent method for packaging learning objects and learning designs. The method, entitled a Smart Learning Design Framework, is based on the MPEG-21 standard, and uses IEEE Learning Object Metadata (LOM) to provide bibliographic, technical, and pedagogical descriptors for the retrieval and description of learning…

  9. Adult Roles & Functions. Objective Based Evaluation System.

    ERIC Educational Resources Information Center

    West Virginia State Vocational Curriculum Lab., Cedar Lakes.

    This book of objective-based test items is designed to be used with the Adult Roles and Functions curriculum for a non-laboratory home economic course for grades eleven and twelve. It contains item banks for each cognitive objective in the curriculum. In addition, there is a form for the table of specifications to be developed for each unit. This…

  10. Objectives, Evaluation, and the Improvement of Education

    ERIC Educational Resources Information Center

    Anderson, Lorin W.

    2005-01-01

    For five years, from 1995 until 2000, a group of eight educators and researchers met twice annually in Syracuse, NY, for the purpose of revising Bloom's Taxonomy. Based in part on the structure of educational objectives, in part on advances in cognitive psychology, and in part on numerous other attempts to classify educational objectives that were…

  11. Design of a Model Execution Framework: Repetitive Object-Oriented Simulation Environment (ROSE)

    NASA Technical Reports Server (NTRS)

    Gray, Justin S.; Briggs, Jeffery L.

    2008-01-01

    The ROSE framework was designed to facilitate complex system analyses. It completely divorces the model execution process from the model itself. By doing so, ROSE frees the modeler to develop a library of standard modeling processes such as Design of Experiments, optimizers, parameter studies, and sensitivity studies, which can then be applied to any of their available models. The ROSE framework accomplishes this by means of a well-defined API and object structure. Both the API and object structure are presented here with enough detail to implement ROSE in any object-oriented language or modeling tool.

  12. An efficient two-objective automatic SAR image segmentation framework using artificial immune system

    NASA Astrophysics Data System (ADS)

    Yang, Dongdong; Niu, Ruican; Fei, Rong; Jiang, Qiaoyong; Li, Hongye; Cao, Zijian

    2015-12-01

    Here, an efficient multi-objective automatic segmentation framework (MASF) is formulated and applied to synthetic aperture radar (SAR) image unsupervised classification. In the framework, three important issues are presented: (1) two reasonable image preprocessing techniques, including spatial filtering and a watershed operator, are discussed at the initial stage of the framework; (2) an efficient immune multi-objective optimization algorithm with uniform clone, adaptive selection by online nondominated solutions, and dynamic deletion in diversity maintenance is proposed; and (3) two very simple but very efficient conflicting clustering validity indices are incorporated into the framework and simultaneously optimized. Two simulated SAR datasets and two complicated real images are used to quantitatively validate its effectiveness. In addition, four other state-of-the-art image segmentation methods are employed for comparison.

  13. A Framework for Geographic Object-Based Image Analysis (GEOBIA) based on geographic ontology

    NASA Astrophysics Data System (ADS)

    Gu, H. Y.; Li, H. T.; Yan, L.; Lu, X. J.

    2015-06-01

    GEOBIA (Geographic Object-Based Image Analysis) is not only a hot topic in current remote sensing and geographical research; it is also considered a new paradigm in remote sensing and GIScience. The lack of a systematic approach designed to conceptualize and formalize the class definitions makes GEOBIA a highly subjective and difficult method to reproduce. This paper aims to put forward a framework for GEOBIA based on geographic ontology theory that can faithfully reproduce the chain "Geographic entities - Image objects - Geographic objects". It consists of three steps: first, geographical entities are described by a geographic ontology; second, a semantic network model is built based on OWL (ontology web language); finally, geographical objects are classified with decision rules or other classifiers. A case study of a farmland ontology was conducted to illustrate the framework. The strength of this framework is that it provides interpretation strategies and a global framework for GEOBIA that are objective, comprehensive, and universal, which avoids inconsistencies caused by different experts' experience and provides an objective model for image analysis.

  14. Evaluation in a Management by Objectives System.

    ERIC Educational Resources Information Center

    Goddu, Roland

    Management and supervision in a management by objectives system do not focus on the quality or efficiency of a list of activities. Rather, the manager and supervisor validate progress in reaching agreed outcomes. The implementation of a management and supervision by results approach requires (a) agreement on a statement of mission; (b) agreement…

  15. A Systematic Evaluation of Learning Objects for Secondary School Students

    ERIC Educational Resources Information Center

    Kay, Robin

    2007-01-01

    Empirical research evaluating the effectiveness of learning objects is noticeably absent. No formal research has been done on the use of learning objects in secondary schools. The purpose of this study was to evaluate the use of learning objects by high school students. The evaluation metric used to assess benefits and quality of learning objects…

  16. A framework for evaluation of technology transfer programs. Volume 2

    SciTech Connect

    Not Available

    1993-07-01

    The objective of this volume is to describe a framework with which DOE can develop a program-specific methodology to evaluate its technology transfer efforts. This approach could also be applied to an integrated private sector technology transfer organization. Several benefits will be realized from the application of this work. While the immediate effect will be to assist program managers in evaluating and improving program performance, the ultimate benefits will accrue to the producing industry, the states, and the nation in the form of sustained or increased domestic oil production. This benefit depends also, of course, on the effectiveness of the technology being transferred. The managers of the Technology Transfer program, and the larger federal oil and gas R&D programs, will be provided with a means to design and assess the effectiveness of program efforts as they are developed, tested and performed. The framework allows deficiencies in critical aspects of the program to be quickly identified, allowing for timely corrections and improvements. The actual process of developing the evaluation also gives the staff of the Oil R&D Program or Technology Transfer subprogram the opportunity to become oriented to the overall program goals. The structure and focus imposed by the evaluation paradigm will guide program staff in selecting activities which are consistent with achieving the goals of the overall R&D program.

  17. An Evaluation of the Effects of Experimenter Control of Objects on Individuals' Engagement in Object Stereotypy

    ERIC Educational Resources Information Center

    Stangeland, Lindsay A.; Smith, Dean P.; Rapp, John T.

    2012-01-01

    In two experiments, the authors evaluated the extent to which (a) individuals preferred engaging in object stereotypy versus observing an experimenter while the experimenter engaged in object stereotypy and (b) an experimenter's engagement in object stereotypy decreased the participants' engagement in object stereotypy. Results of Experiment 1…

  18. Quantile equivalence to evaluate compliance with habitat management objectives

    USGS Publications Warehouse

    Cade, Brian S.; Johnson, Pamela R.

    2011-01-01

    Equivalence estimated with linear quantile regression was used to evaluate compliance with habitat management objectives at Arapaho National Wildlife Refuge based on monitoring data collected in upland (5,781 ha; n = 511 transects) and riparian and meadow (2,856 ha; n = 389 transects) habitats from 2005 to 2008. Quantiles were used because the management objectives specified proportions of the habitat area that needed to comply with vegetation criteria. The linear model was used to obtain estimates that were averaged across 4 y. The equivalence testing framework allowed us to interpret confidence intervals for estimated proportions with respect to intervals of vegetative criteria (equivalence regions) in either a liberal, benefit-of-doubt or conservative, fail-safe approach associated with minimizing alternative risks. Simple Boolean conditional arguments were used to combine the quantile equivalence results for individual vegetation components into a joint statement for the multivariable management objectives. For example, management objective 2A required at least 809 ha of upland habitat with a shrub composition ≥0.70 sagebrush (Artemisia spp.), 20–30% canopy cover of sagebrush ≥25 cm in height, ≥20% canopy cover of grasses, and ≥10% canopy cover of forbs on average over 4 y. Shrub composition and canopy cover of grass each were readily met on >3,000 ha under either conservative or liberal interpretations of sampling variability. However, there were only 809–1,214 ha (conservative to liberal) with ≥10% forb canopy cover and 405–1,098 ha with 20–30% canopy cover of sagebrush ≥25 cm in height. Only 91–180 ha of uplands simultaneously met criteria for all four components, primarily because canopy cover of sagebrush and forbs was inversely related when considered at the spatial scale (30 m) of a sample transect. We demonstrate how the quantile equivalence analyses also can help refine the numerical specification of habitat objectives and explore
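
    A rough sketch of the equivalence-style check described above is given below: a quantile of a vegetation variable is estimated with a bootstrap confidence interval and compared with an equivalence region under liberal (benefit-of-doubt) and conservative (fail-safe) readings. The data, quantile level, and interval bounds are illustrative, not the refuge data or the authors' linear quantile regression model.

        # Illustrative equivalence-style check: bootstrap CI of a quantile vs. an equivalence region.
        import numpy as np

        rng = np.random.default_rng(0)
        sagebrush_cover = rng.normal(25, 6, size=389)        # simulated % canopy cover per transect

        def bootstrap_quantile_ci(x, q, n_boot=2000, alpha=0.10):
            boots = [np.quantile(rng.choice(x, size=x.size, replace=True), q) for _ in range(n_boot)]
            return np.quantile(boots, [alpha / 2, 1 - alpha / 2])

        lo, hi = bootstrap_quantile_ci(sagebrush_cover, q=0.5)
        equiv_region = (20.0, 30.0)                           # management criterion: 20-30% cover

        # Liberal: CI overlaps the region; conservative: CI falls entirely inside it
        liberal = hi >= equiv_region[0] and lo <= equiv_region[1]
        conservative = lo >= equiv_region[0] and hi <= equiv_region[1]
        print(liberal, conservative)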

  19. ORCHESTRA: an object-oriented framework for implementing chemical equilibrium models.

    PubMed

    Meeussen, Johannes C L

    2003-03-15

    This work presents a new object-oriented structure for chemical equilibrium calculations that is used in the modeling framework ORCHESTRA (Objects Representing CHEmical Speciation and TRAnsport). In contrast to standard chemical equilibrium algorithms, such as MINEQL, MINTEQ2A, PHREEQC, and ECOSAT, model equations are not hard-coded in the source code, but instead all equations are defined in text format and read by the ORCHESTRA calculation kernel at run time. This makes model definitions easily accessible and extendible by users. Furthermore, it results in a very compact and efficient calculation kernel that is easy to use as a submodel within mass transport or kinetic models. Finally, the object-oriented structure of the chemical model definitions makes it possible to define a framework for implementing chemical models. This framework consists of three basic object types (entities, reactions, and phases) that form the building blocks from which other chemical models are composed. The hierarchical approach ensures consistent and compact model definitions and is illustrated here by discussing the implementation of a number of commonly used chemical models such as aqueous complexation, activity correction, precipitation, surface complexation, ion exchange, and several more sophisticated adsorption models including electrostatic interactions, NICA, and CD-MUSIC. The ORCHESTRA framework is electronically available from www.macaulay.ac.uk/ORCHESTRA. PMID:12680672

  20. A classification framework for content-based extraction of biomedical objects from hierarchically decomposed images

    NASA Astrophysics Data System (ADS)

    Thies, Christian; Schmidt Borreda, Marcel; Seidl, Thomas; Lehmann, Thomas M.

    2006-03-01

    Multiscale analysis provides a complete hierarchical partitioning of images into visually plausible regions. Each of them is formally characterized by a feature vector describing shape, texture and scale properties. Consequently, object extraction becomes a classification of the feature vectors. Classifiers are trained by relevant and irrelevant regions labeled as object and remaining partitions, respectively. A trained classifier is applicable to yet uncategorized partitionings to identify the corresponding region's classes. Such an approach enables retrieval of a-priori unknown objects within a point-and-click interface. In this work, the classification pipeline consists of a framework for data selection, feature selection, classifier training, classification of testing data, and evaluation. According to the no-free-lunch-theorem of supervised learning, the appropriate classification pipeline is determined experimentally. Therefore, each of the steps is varied by state-of-the-art methods and the respective classification quality is measured. Selection of training data from the ground truth is supported by bootstrapping, variance pooling, virtual training data, and cross validation. Feature selection for dimension reduction is performed by linear discriminant analysis, principal component analysis, and greedy selection. Competing classifiers are k-nearest-neighbor, Bayesian classifier, and the support vector machine. Quality is measured by precision and recall to reflect the retrieval task. A set of 105 hand radiographs from clinical routine serves as ground truth, where the metacarpal bones have been labeled manually. In total, 368 out of 39,017 regions are identified as relevant. Initial experiments on feature selection with the support vector machine yielded recall, precision, and F-measure of 0.58, 0.67, and 0.62, respectively.
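
    One configuration of such a pipeline can be sketched with standard tools. The example below uses synthetic feature vectors and a PCA-plus-SVM combination scored by precision and recall; it is only one of the many combinations the study compares, not the authors' exact setup.

        # Illustrative region-classification pipeline: dimension reduction + classifier,
        # scored with precision/recall. Data are synthetic stand-ins for region feature vectors.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.decomposition import PCA
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import precision_score, recall_score, f1_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(2000, 40))                  # region feature vectors (shape/texture/scale)
        y = (X[:, :3].sum(axis=1) > 1.5).astype(int)     # 1 = relevant region, 0 = remaining partitions

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
        clf = make_pipeline(PCA(n_components=10), SVC(kernel="rbf", C=1.0))
        clf.fit(X_tr, y_tr)
        pred = clf.predict(X_te)
        print(precision_score(y_te, pred), recall_score(y_te, pred), f1_score(y_te, pred))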

  1. A Framework for Including Family Health Spillovers in Economic Evaluation

    PubMed Central

    Al-Janabi, Hareth; van Exel, Job; Brouwer, Werner; Coast, Joanna

    2016-01-01

    Health care interventions may affect the health of patients’ family networks. It has been suggested that these “health spillovers” should be included in economic evaluation, but there is not a systematic method for doing this. In this article, we develop a framework for including health spillovers in economic evaluation. We focus on extra-welfarist economic evaluations where the objective is to maximize health benefits from a health care budget (the “health care perspective”). Our framework involves adapting the conventional cost-effectiveness decision rule to include 2 multiplier effects to internalize the spillover effects. These multiplier effects express the ratio of total health effects (for patients and their family networks) to patient health effects. One multiplier effect is specified for health benefit generated from providing a new intervention, one for health benefit displaced by funding this intervention. We show that using multiplier effects to internalize health spillovers could change the optimal funding decisions and generate additional health benefits to society. PMID:26377370
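
    As a hedged sketch of how the adapted decision rule might look, write \Delta h_p for the patient health gain of a new intervention, \Delta c for its incremental cost, k for the cost-effectiveness threshold, and m_g and m_d for the gain-side and displacement-side multiplier effects (ratios of total to patient health effects); this notation is assumed here for illustration, not taken verbatim from the article. The conventional health-care-perspective rule funds the intervention when \Delta h_p > \Delta c / k, whereas internalizing spillovers compares spillover-weighted gains with spillover-weighted displaced health:

        \[ m_g \, \Delta h_p \;>\; m_d \, \frac{\Delta c}{k} . \]

    Under this reading, spillovers can change the funding decision whenever m_g and m_d differ, since the same patient-level gain then buys a different amount of total (family-inclusive) health than it displaces.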

  2. A Framework Relating Outcomes Based Education and the Taxonomy of Educational Objectives.

    ERIC Educational Resources Information Center

    Andrich, David

    2002-01-01

    Articulates a framework that can place the Outcomes Based Education movement in a historical context and by so doing advance its discourse on assessment. Reviews the development and structure of the Taxonomy of Educational Objectives (B. Bloom et al., 1956) and the structure of Student Outcome Statements in Western Australia and explores the…

  3. Objective evaluation of slanted edge charts

    NASA Astrophysics Data System (ADS)

    Hornung, Harvey

    2015-01-01

    Camera objective characterization methodologies are widely used in the digital camera industry. Most objective characterization systems rely on a chart with specific patterns; a software algorithm measures a degradation or difference between the captured image and the chart itself. The Spatial Frequency Response (SFR) method, which is part of the ISO 12233 standard, is now very commonly used in the imaging industry; it is a very convenient way to measure a camera's modulation transfer function (MTF). The SFR algorithm can measure frequencies beyond the Nyquist frequency thanks to super-resolution, so it does provide useful information on aliasing and can provide modulation for frequencies between half Nyquist and Nyquist on all color channels of a color sensor with a Bayer pattern. The measurement process relies on a chart that is simple to manufacture: a straight transition from a bright reflectance to a dark one (black and white for instance), while a sine chart requires handling precise shades of gray, which can also create all sorts of issues with printers that rely on half-toning. However, no technology can create a perfect edge, so it is important to assess the quality of the chart and understand how it affects the accuracy of the measurement. In this article, I describe a protocol to characterize the MTF of a slanted edge chart, using a high-resolution flatbed scanner. The main idea is to use the RAW output of the scanner as a high-resolution micro-densitometer; since the signal is linear, it is suitable for measuring the chart MTF using the SFR algorithm. The scanner needs to be calibrated in sharpness: the scanner MTF is measured with a calibrated sine chart and inverted to compensate for the modulation loss from the scanner. Then the true chart MTF is computed. This article compares measured MTF from commercial charts and charts printed on printers, and also compares how the contrast of the edge (using different shades of gray) can affect the chart MTF
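
    The scanner-compensation step lends itself to a short numerical sketch: the chart MTF is recovered by dividing the MTF measured from the scanned edge by the scanner's own MTF obtained from the calibrated sine chart. The Gaussian-shaped curves below are placeholders, not measured data.

        # Sketch of the scanner-compensation step: chart MTF = measured MTF / scanner MTF.
        import numpy as np

        freqs = np.linspace(0.0, 0.5, 26)                    # spatial frequency, cycles/pixel
        mtf_measured = np.exp(-(freqs / 0.32) ** 2)          # SFR result on the scanned edge (stand-in)
        mtf_scanner = np.exp(-(freqs / 0.45) ** 2)           # scanner MTF from the calibrated sine chart

        mtf_chart = np.where(mtf_scanner > 0.05,             # avoid amplifying noise where MTF is tiny
                             mtf_measured / mtf_scanner, np.nan)
        print(np.round(mtf_chart[:6], 3))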

  4. Rubrics for Evaluating Open Education Resource (OER) Objects

    ERIC Educational Resources Information Center

    Achieve, Inc., 2011

    2011-01-01

    The rubrics presented in this report represent an evaluation system for objects found within Open Education Resources. An object could include images, applets, lessons, units, assessments and more. For the purpose of this evaluation, any component that can exist as a stand-alone qualifies as an object. The rubrics in this packet can be applied…

  5. Designing a Workable Framework for Evaluating Distance Language Instruction

    ERIC Educational Resources Information Center

    Madyarov, Irshat

    2009-01-01

    Teaching foreign languages at distance is now becoming widespread; so is the need for evaluating online language courses. This article discusses an example of a framework that was applied to evaluate an online English as a foreign language (EFL) course at a Middle Eastern university. The development of the framework investigated areas of interest…

  6. Checklist for Evaluating SREB-SCORE Learning Objects

    ERIC Educational Resources Information Center

    Southern Regional Education Board (SREB), 2007

    2007-01-01

    This checklist is based on "Evaluation Criteria for SREB-SCORE Learning Objects" and is designed to help schools and colleges determine the quality and effectiveness of learning objects. It is suggested that each learning object be rated to the extent to which it meets the criteria and the SREB-SCORE definition of a learning object. A learning…

  7. Toward Fortran 77 performance from object-oriented C++ scientific frameworks

    SciTech Connect

    Bassetti, F.; Davis, K.; Quinlan, D.

    1998-12-01

    The use of object-oriented C++ frameworks has significantly simplified the development of numerous complex parallel scientific applications at Los Alamos National Laboratory and elsewhere. In spite of considerable use of, and commitment to, these frameworks, concerns about performance are nonetheless a significant issue; performance very close to that of FORTRAN 77 with message passing must be realized before the acceptance and use of such frameworks will be truly widespread. This paper identifies the primary source of inefficiency in using C or C++ for numerical codes with stencil- or stencil-like operations, and demonstrates two solutions--one portable, one not--to give genuine FORTRAN 77 performance.

  8. BioInt: an integrative biological object-oriented application framework and interpreter.

    PubMed

    Desai, Sanket; Burra, Prasad

    2015-01-01

    BioInt, a biological programming application framework and interpreter, is an attempt to equip researchers with seamless integration, efficient extraction and effortless analysis of the data from various biological databases and algorithms. Based on the type of biological data, algorithms and related functionalities, a biology-specific framework was developed which has nine modules. The modules are a compilation of numerous reusable BioADTs. This software ecosystem containing more than 450 biological objects underneath the interpreter makes it flexible, integrative and comprehensive. Similar to Python, BioInt eliminates the compilation and linking steps, cutting the time significantly. The researcher can write the scripts using available BioADTs (following C++ syntax) and execute them interactively or use it as a command-line application. It has features that enable automation, extension of the framework with new/external BioADTs/libraries and deployment of complex work flows. PMID:26561020

  9. On the Evaluation of Higher-Order Science Instructional Objectives

    ERIC Educational Resources Information Center

    Hambleton, Ronald K.; Sheehan, Daniel S.

    1977-01-01

    Advocates the use of a free-sort categorization technique for evaluation of higher-order science instructional objectives. An explanation and demonstration of the use of the evaluation technique with 284 ninth-grade science students is provided. (CP)

  10. Addressing the real-time synchronization requirements of multimedia in an object-oriented framework

    NASA Astrophysics Data System (ADS)

    Papathomas, Michael; Blair, Gordon S.; Coulson, Geoff; Robin, Philippe

    1995-03-01

    It is now recognized that object-oriented techniques are well suited to the design and implementation of multimedia applications. Objects may be used to encapsulate the great variety of hardware devices used in such applications and to abstract over the details of low level interfaces. Furthermore, complex media processing algorithms, such as compression/decompression, may be encapsulated within objects making them easier to reuse across applications. Real-time synchronization is also an essential aspect of multimedia which arises from the inherently temporal properties of media such as audio and video. In this paper, we propose a set of programming abstractions and an approach to address real-time synchronization requirements in an object-oriented framework. In our approach, active objects encapsulate media processing activities. Real-time synchronization is maintained by reactive objects that control the execution of media processing objects. A key advantage of our approach is that it allows the separation of synchronization from the behavior of objects. Both objects and synchronization specifications may be reused in different contexts. In addition, the approach enables the specification of real-time synchronization in a high-level notation that has proven well suited to this task.

  11. Implementation of Two-Dimensional Polycrystalline Grains in Object Oriented Micromagnetic Framework

    PubMed Central

    Lau, J. W.; McMichael, R. D.; Donahue, M. J.

    2009-01-01

    In response to the growing need for a more accurate micromagnetic model to understand switching phenomena in nanoscale magnets, we developed the capability to simulate two-dimensional polycrystalline grains using the Object Oriented Micromagnetic Framework (OOMMF). This addition allows users full flexibility in determining the magnetocrystalline anisotropy and axes in each grain as well as the inter- and intragranular exchange coupling strength. PMID:27504213

  12. An advanced object-based software framework for complex ecosystem modeling and simulation

    SciTech Connect

    Sydelko, P. J.; Dolph, J. E.; Majerus, K. A.; Taxon, T. N.

    2000-06-29

    Military land managers and decision makers face an ever increasing challenge to balance maximum flexibility for the mission with a diverse set of multiple land use, social, political, and economic goals. In addition, these goals encompass environmental requirements for maintaining ecosystem health and sustainability over the long term. Spatiotemporal modeling and simulation in support of adaptive ecosystem management can be best accomplished through a dynamic, integrated, and flexible approach that incorporates scientific and technological components into a comprehensive ecosystem modeling framework. The Integrated Dynamic Landscape Analysis and Modeling System (IDLAMS) integrates ecological models and decision support techniques through a geographic information system (GIS)-based backbone. Recently, an object-oriented (OO) architectural framework was developed for IDLAMS (OO-IDLAMS). This OO-IDLAMS Prototype was built upon and leverages from the Dynamic Information Architecture System (DIAS) developed by Argonne National Laboratory. DIAS is an object-based architectural framework that affords a more integrated, dynamic, and flexible approach to comprehensive ecosystem modeling than was possible with the GIS-based integration approach of the original IDLAMS. The flexibility, dynamics, and interoperability demonstrated through this case study of an object-oriented approach have the potential to provide key technology solutions for many of the military's multiple-use goals and needs for integrated natural resource planning and ecosystem management.

  13. User's Manual for the Object User Interface (OUI): An Environmental Resource Modeling Framework

    USGS Publications Warehouse

    Markstrom, Steven L.; Koczot, Kathryn M.

    2008-01-01

    The Object User Interface is a computer application that provides a framework for coupling environmental-resource models and for managing associated temporal and spatial data. The Object User Interface is designed to be easily extensible to incorporate models and data interfaces defined by the user. Additionally, the Object User Interface is highly configurable through the use of a user-modifiable, text-based control file that is written in the eXtensible Markup Language. The Object User Interface user's manual provides (1) installation instructions, (2) an overview of the graphical user interface, (3) a description of the software tools, (4) a project example, and (5) specifications for user configuration and extension.

  14. Evaluating Learning Objects across Boundaries: The Semantics of Localization

    ERIC Educational Resources Information Center

    Li, Jerry Z.; Nesbit, John C.; Richards, Griff

    2006-01-01

    Learning object repositories and evaluation tools have the potential to serve as sites for interaction among different cultures and communities of practice. This article outlines Web-based learning object evaluation tools that we have developed, describes our current efforts to extend those tools to a wider range of user communities, and considers…

  15. Mathematical model of bisubject qualimetric arbitrary objects evaluation

    NASA Astrophysics Data System (ADS)

    Morozova, A.

    2016-04-01

    An analytical basis and a formalization process are developed for the information spaces of a mathematical model for the bisubject qualimetric evaluation of arbitrary objects. The model is applicable to control problems in both technical and socio-economic systems, where objects are evaluated using systems of parameters generated by different subjects, taking into account their performance and decision-making priorities.

  16. Contributions to Objective Measurement and Evaluation of Trainee Competency.

    ERIC Educational Resources Information Center

    Moonan, William J.

    The purpose of this paper is to lay a basis for and discuss the components of a system, called COMET, designed to objectively measure and evaluate the competency of trainees in military training enterprises. COMET is an acronym for "Computerized Objective Measurement and Evaluation of Trainees." These goals will be accomplished by: (a) describing…

  17. Evaluating the Use of Learning Objects for Secondary School Science

    ERIC Educational Resources Information Center

    Kay, Robin; Knaack, Liesel

    2007-01-01

    A learning object is an interactive web-based tool that supports learning by enhancing, amplifying, and guiding the cognitive processes of a learner. To date, no formal research has been done on the use of learning objects in secondary school science classrooms. The purpose of this study was to evaluate the use of learning objects developed for…

  18. A Framework for Outreach Evaluation Plans

    ERIC Educational Resources Information Center

    Raven, Neil

    2015-01-01

    Much importance is now placed upon the evaluation of outreach interventions by higher education institutions (HEIs). Accompanying this focus are requests that HEIs prepare evaluation plans. Yet, whilst some now have plans in place, others do not. One of the challenges for those preparing such documents is that official guidance is not prescriptive…

  19. An Ethical Framework for Evaluating Experimental Technology.

    PubMed

    van de Poel, Ibo

    2016-06-01

    How are we to appraise new technological developments that may bring revolutionary social changes? Currently this is often done by trying to predict or anticipate social consequences and to use these as a basis for moral and regulatory appraisal. Such an approach can, however, not deal with the uncertainties and unknowns that are inherent in social changes induced by technological development. An alternative approach is proposed that conceives of the introduction of new technologies into society as a social experiment. An ethical framework for the acceptability of such experiments is developed based on the bioethical principles for experiments with human subjects: non-maleficence, beneficence, respect for autonomy, and justice. This provides a handle for the moral and regulatory assessment of new technologies and their impact on society. PMID:26573302

  20. Multi-object segmentation framework using deformable models for medical imaging analysis.

    PubMed

    Namías, Rafael; D'Amato, Juan Pablo; Del Fresno, Mariana; Vénere, Marcelo; Pirró, Nicola; Bellemare, Marc-Emmanuel

    2016-08-01

    Segmenting structures of interest in medical images is an important step in different tasks such as visualization, quantitative analysis, simulation, and image-guided surgery, among several other clinical applications. Numerous segmentation methods have been developed in the past three decades for extraction of anatomical or functional structures in medical imaging. Deformable models, which include the active contour models or snakes, are among the most popular methods for image segmentation, combining several desirable features such as inherent connectivity and smoothness. Even though different approaches have been proposed and significant work has been dedicated to the improvement of such algorithms, there are still challenging research directions, such as the simultaneous extraction of multiple objects and the integration of individual techniques. This paper presents a novel open-source framework called deformable model array (DMA) for the segmentation of multiple and complex structures of interest in different imaging modalities. While most active contour algorithms can extract one region at a time, DMA allows integrating several deformable models to deal with multiple segmentation scenarios. Moreover, it is possible to consider any existing explicit deformable model formulation and even to incorporate new active contour methods, allowing selection of a suitable combination in different conditions. The framework also introduces a control module that coordinates the cooperative evolution of the snakes and is able to solve interaction issues toward the segmentation goal. Thus, DMA can implement complex object and multi-object segmentations in both 2D and 3D using the contextual information derived from the model interaction. These are important features for several medical image analysis tasks in which different but related objects need to be simultaneously extracted. Experimental results on both computed tomography and magnetic resonance imaging show that the proposed
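
    The "array of deformable models" idea can be illustrated with standard tools, as in the toy sketch below: several snakes, one per target object, are evolved over the same synthetic image with scikit-image. The cooperative control module described in the paper is not reproduced here; each contour simply runs independently.

        # Toy multi-object active-contour example: one snake per object, evolved independently.
        import numpy as np
        from skimage.draw import disk
        from skimage.filters import gaussian
        from skimage.segmentation import active_contour

        img = np.zeros((200, 200))
        for center in [(60, 60), (140, 140)]:                # two bright objects to segment
            img[disk(center, 30)] = 1.0
        img = gaussian(img, sigma=3)

        def circle(center, radius, n=120):
            t = np.linspace(0, 2 * np.pi, n)
            return np.column_stack([center[0] + radius * np.sin(t), center[1] + radius * np.cos(t)])

        snakes = [active_contour(img, circle(c, 45), alpha=0.015, beta=10, gamma=0.001)
                  for c in [(60, 60), (140, 140)]]           # one deformable model per object
        print([s.shape for s in snakes])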

  1. Decision analysis framework for evaluating CTBT seismic verification options

    SciTech Connect

    Judd, B.R.; Strait, R.S.; Younker, L.W.

    1986-09-01

    This report describes a decision analysis framework for evaluating seismic verification options for a Comprehensive Test Ban Treaty (CTBT). In addition to providing policy makers with insights into the relative merits of different options, the framework is intended to assist in formulating and evaluating political decisions - such as responses to evidence of violations - and in setting research priorities related to the options. To provide these broad analytical capabilities to decision makers, the framework incorporates a wide variety of issues. These include seismic monitoring capabilities, evasion possibilities, evidence produced by seismic systems, US response to the evidence, the dependence between US and Soviet decision-making, and the relative values of possible outcomes to the US and the Soviet Union. An added benefit of the framework is its potential use to improve communication about these CTBT verification issues among US experts and decision makers. The framework has been implemented on a portable microcomputer to facilitate this communication through demonstration and rapid evaluation of alternative judgments and policy choices. The report presents the framework and its application in four parts. The first part describes the decision analysis framework and the types of analytical results produced. In the second part, the framework is used to evaluate representative seismic verification options. The third part describes the results of sensitivity analyses that determine the relative importance of the uncertainties or subjective judgments that influence the evaluation of the options. The fourth (and final) part summarizes conclusions and presents implications of the sample analytical results for further research and for policy-making related to CTBT verification. The fourth section also describes the next steps in the development and use of the decision analysis framework.

  2. Framework for Evaluating Educational Systemic Initiatives.

    ERIC Educational Resources Information Center

    Ikegulu, T. Nelson

    This paper describes the implementation of the Holistic Systemic Evaluation (HSE), a component of an Education Systemic Initiative's strategic management. The HSE provides general guidance for the implementation and continual improvement of an Education Systemic Initiative Reform (ESIR). The implementation of the education system initiative plan:…

  3. Evaluation Framework for Dependable Mobile Learning Scenarios

    ERIC Educational Resources Information Center

    Bensassi, Manel; Laroussi, Mona

    2014-01-01

    The goal of the dependability analysis is to predict inconsistencies and to reveal ambiguities and incompleteness in the designed learning scenario. Evaluation, in traditional learning design, is generally planned after the execution of the scenario. In mobile learning, this stage becomes too difficult and expensive to apply due to the complexity…

  4. Evaluation Framework for Collaborative Educational Virtual Environments

    ERIC Educational Resources Information Center

    Tsiatsos, Thrasyvoulos; Konstantinidis, Andreas; Pomportsis, Andreas

    2010-01-01

    In this paper we will focus on a specific category of Collaborative Virtual Environments that aims to support Collaborative Learning. We call these environments Collaborative Educational Virtual Environments. Our aim is to analyze the evaluation process through the study of relevant bibliography and by doing so reveal the existing research gap…

  5. Generalized multiple kernel framework for multiclass geospatial objects detection in high-resolution remote sensing images

    NASA Astrophysics Data System (ADS)

    Li, Xiangjuan; Sun, Xian; Sun, Hao; Li, Yu; Wang, Hongqi

    2012-01-01

    Multiclass geospatial objects detection within complex environments is a challenging problem in remote sensing. In this paper we propose a novel, generalized kernel-based learning framework for the purpose of enhanced object detection. The work makes two novel contributions. (1) Multisource information, including shape, feature points, and appearance, was extracted to give a comprehensive representation of the objects. We improved a shape descriptor and introduced a two-level spatial pyramid to represent appearance, both global and local. Therefore, basis kernels were formed, one for each feature. (2) In order to illustrate the effect of each kind of feature on each pyramid level, a generalized and weighted combination method was first used to combine all of the levels and then the features. The weights and the classifier model are learned within the support vector machine framework to obtain a balance between all basis kernels. This classifier was transformed into a powerful detector by using a sliding window. The reported results are for detection on high-resolution remote-sensing images. This study demonstrates that the proposed generalized and weighted combination of kernels can yield better performance compared with a traditional single-kernel classifier and other combination methods.
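
    A minimal sketch of the weighted combination of basis kernels is shown below, with one kernel per feature type fed to a precomputed-kernel SVM. The feature blocks and fixed weights are placeholders; in the paper the weights are learned inside the SVM framework rather than hand-set.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Placeholder "shape" and "appearance" feature blocks for 40 training samples.
X_shape = rng.normal(size=(40, 16))
X_appear = rng.normal(size=(40, 32))
y = rng.integers(0, 2, size=40)

def rbf_kernel(A, B, gamma=0.1):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# One basis kernel per feature type, combined with fixed (hand-set) weights.
w_shape, w_appear = 0.4, 0.6
K_train = w_shape * rbf_kernel(X_shape, X_shape) + \
          w_appear * rbf_kernel(X_appear, X_appear)

clf = SVC(kernel="precomputed").fit(K_train, y)
print("training accuracy:", clf.score(K_train, y))
```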

  6. More performance results and implementation of an object oriented track reconstruction model in different OO frameworks

    NASA Astrophysics Data System (ADS)

    Gaines, Irwin; Qian, Sijin

    2001-08-01

    This is an update of the report about an Object Oriented (OO) track reconstruction model, which was presented at the previous AIHENP'99 in Crete, Greece. The OO model for the Kalman filtering method has been designed for high energy physics experiments at high luminosity hadron colliders. It has been coded in the C++ programming language and successfully implemented into a few different OO computing environments of the CMS and ATLAS experiments at the future Large Hadron Collider at CERN. We shall report: (1) more performance results; (2) the implementation of the OO model into the new software OO framework "Athena" of the ATLAS experiment and some upgrades of the OO model itself.

  7. A Computational Framework for Quantitative Evaluation of Movement during Rehabilitation

    NASA Astrophysics Data System (ADS)

    Chen, Yinpeng; Duff, Margaret; Lehrer, Nicole; Sundaram, Hari; He, Jiping; Wolf, Steven L.; Rikakis, Thanassis

    2011-06-01

    This paper presents a novel generalized computational framework for quantitative kinematic evaluation of movement in a rehabilitation clinic setting. The framework integrates clinical knowledge and computational data-driven analysis in a systematic manner. The framework provides three key benefits to rehabilitation: (a) the resulting continuous normalized measure allows the clinician to monitor movement quality on a fine scale and easily compare impairments across participants, (b) the framework reveals the effect of individual movement components on the composite movement performance, helping the clinician decide the training foci, and (c) the evaluation runs in real time, which allows the clinician to constantly track a patient's progress and make appropriate adaptations to the therapy protocol. The creation of such an evaluation is difficult because of the sparse amount of recorded clinical observations, the high dimensionality of movement, and high variation in subjects' performance. We address these issues by modeling the evaluation function as a linear combination of multiple normalized kinematic attributes y = Σwiφi(xi) and estimating the attribute normalization function φi(·) by integrating distributions of idealized movement and deviated movement. The weights wi are derived from a therapist's pairwise comparisons using a modified RankSVM algorithm. We have applied this framework to evaluate upper limb movement for stroke survivors with excellent results: the evaluation results are highly correlated with the therapist's observations.
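
    The composite evaluation y = Σ wi φi(xi) can be sketched as follows, assuming a Gaussian-shaped normalization function centered on an idealized movement. The attribute names, normalization parameters, and weights are illustrative, not the clinically derived values from the paper.

```python
import numpy as np

def phi(x, ideal_mean, ideal_std):
    """Map an attribute to [0, 1]: 1 near the idealized movement, 0 far from it."""
    z = (x - ideal_mean) / ideal_std
    return float(np.exp(-0.5 * z ** 2))

attributes = {            # observed value, idealized mean, idealized std, weight
    "trajectory_error":   (0.08, 0.02, 0.05, 0.5),
    "peak_speed":         (0.60, 0.80, 0.20, 0.3),
    "movement_duration":  (1.90, 1.50, 0.40, 0.2),
}

# Composite score y = sum_i w_i * phi_i(x_i); weights sum to 1, so y is in [0, 1].
y = sum(w * phi(x, mu, sd) for x, mu, sd, w in attributes.values())
print(f"composite movement quality: {y:.3f}")
```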

  8. Learning Objects and Virtual Learning Environments Technical Evaluation Criteria

    ERIC Educational Resources Information Center

    Kurilovas, Eugenijus; Dagiene, Valentina

    2009-01-01

    The main scientific problems investigated in this article deal with technical evaluation of quality attributes of the main components of e-Learning systems (referred to here as DLEs--Digital Libraries of Educational Resources and Services), i.e., Learning Objects (LOs) and Virtual Learning Environments (VLEs). The main research object of the work is…

  9. Holistic Evaluation of ESL Compositions: Can It Be Validated Objectively?

    ERIC Educational Resources Information Center

    Homburg, Taco Justus

    1984-01-01

    Discusses the relationship between subjective evaluation and objective measures of ESL writing proficiency. Objective measures used in this study accounted for 84 percent of the variance of subjective grades assigned to a sample of compositions. Author suggests that training readers can be important to the reliability and the validity of the…

  10. Quality in Learning Objects: Evaluating Compliance with Metadata Standards

    NASA Astrophysics Data System (ADS)

    Vidal, C. Christian; Segura, N. Alejandra; Campos, S. Pedro; Sánchez-Alonso, Salvador

    Ensuring a certain level of quality of learning objects used in e-learning is crucial to increase the chances of success of automated systems in recommending or finding these resources. This paper aims to present a proposal for implementation of a quality model for learning objects based on the ISO 9126 international standard for the evaluation of software quality. Feature indicators associated with the conformance sub-characteristic are defined. Some instruments for feature evaluation are suggested, which allow collecting expert opinion on evaluation items. Other quality model features are evaluated using only the information from the metadata, using semantic web technologies. Finally, we propose an ontology-based application that allows automatic evaluation of a quality feature. The IEEE LOM metadata standard was used in the experimentation, and the results showed that most of the learning objects analyzed do not comply with the standard.
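
    A minimal sketch of an automated conformance check is shown below: it verifies that a handful of IEEE LOM elements are present in a metadata record. The choice of "required" element paths, the namespace handling, and the sample record are illustrative assumptions, not the paper's quality model.

```python
import xml.etree.ElementTree as ET

# Paths treated as "required" here are an illustrative subset, not a normative list.
LOM_NS = {"lom": "http://ltsc.ieee.org/xsd/LOM"}
REQUIRED = [
    "lom:general/lom:title",
    "lom:general/lom:language",
    "lom:technical/lom:format",
    "lom:rights/lom:copyrightAndOtherRestrictions",
]

sample = """<lom xmlns="http://ltsc.ieee.org/xsd/LOM">
  <general><title><string>Intro to fractions</string></title>
           <language>en</language></general>
  <technical><format>text/html</format></technical>
</lom>"""

root = ET.fromstring(sample)
missing = [path for path in REQUIRED if root.find(path, LOM_NS) is None]
print("compliant" if not missing else f"missing elements: {missing}")
```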

  11. The ventral visual pathway: An expanded neural framework for the processing of object quality

    PubMed Central

    Kravitz, Dwight J.; Saleem, Kadharbatcha S.; Baker, Chris I.; Ungerleider, Leslie G.; Mishkin, Mortimer

    2012-01-01

    Since the original characterization of the ventral visual pathway, our knowledge of its neuroanatomy, functional properties, and extrinsic targets has grown considerably. Here we synthesize this recent evidence and propose that the ventral pathway is best understood as a recurrent occipitotemporal network containing neural representations of object quality both utilized and constrained by at least six distinct cortical and subcortical systems. Each system serves its own specialized behavioral, cognitive, or affective function, collectively providing the raison d’être for the ventral visual pathway. This expanded framework contrasts with the depiction of the ventral visual pathway as a largely serial staged hierarchy that culminates in singular object representations for utilization mainly by ventrolateral prefrontal cortex and, more parsimoniously than this account, incorporates attentional, contextual, and feedback effects. PMID:23265839

  12. The ventral visual pathway: an expanded neural framework for the processing of object quality.

    PubMed

    Kravitz, Dwight J; Saleem, Kadharbatcha S; Baker, Chris I; Ungerleider, Leslie G; Mishkin, Mortimer

    2013-01-01

    Since the original characterization of the ventral visual pathway, our knowledge of its neuroanatomy, functional properties, and extrinsic targets has grown considerably. Here we synthesize this recent evidence and propose that the ventral pathway is best understood as a recurrent occipitotemporal network containing neural representations of object quality both utilized and constrained by at least six distinct cortical and subcortical systems. Each system serves its own specialized behavioral, cognitive, or affective function, collectively providing the raison d'être for the ventral visual pathway. This expanded framework contrasts with the depiction of the ventral visual pathway as a largely serial staged hierarchy culminating in singular object representations and more parsimoniously incorporates attentional, contextual, and feedback effects. PMID:23265839

  13. Static-dynamic multi-scale structural damage identification in a multi-objective framework

    NASA Astrophysics Data System (ADS)

    Perera, Ricardo; Marin, Roberto; Ruiz, Antonio

    2013-03-01

    Although either static or dynamic measurements have been used for model updating in a damage identification procedure, when a generally valid and accurate model is sought, different types of measurements should be combined. While modal characteristics give information about the global response of structures, static measurements are more concerned with the local response. Their combination would allow different scale levels to be considered in the detection through the simultaneous optimization of several objectives. In this work, a damage identification methodology is presented that combines static and dynamic measurements within a model updating procedure posed in a multi-objective framework and solved using evolutionary algorithms. Unlike other global-local multi-stage procedures developed in the past, the proposed method is solved as a simplified one-stage procedure.
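
    The multi-objective idea can be sketched as follows: candidate damage-parameter sets are scored against a static residual and a dynamic residual, and the non-dominated (Pareto) set is retained. The residual functions and candidate sampling are synthetic placeholders, and a simple dominance filter stands in for the evolutionary algorithm used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
candidates = rng.uniform(0.0, 1.0, size=(200, 3))   # e.g. stiffness reduction factors

def static_residual(theta):
    # Placeholder for the mismatch between measured and predicted static response.
    return float(np.sum((theta - 0.3) ** 2))

def dynamic_residual(theta):
    # Placeholder for the mismatch between measured and predicted modal data.
    return float(np.sum((theta - 0.5) ** 2))

objectives = np.array([[static_residual(t), dynamic_residual(t)] for t in candidates])

def non_dominated(F):
    keep = []
    for i, fi in enumerate(F):
        dominated = any(np.all(fj <= fi) and np.any(fj < fi)
                        for j, fj in enumerate(F) if j != i)
        if not dominated:
            keep.append(i)
    return keep

front = non_dominated(objectives)
print(f"{len(front)} non-dominated candidates out of {len(candidates)}")
```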

  14. A New Object-Oriented MODFLOW Framework for Coupling Multiple Hydrologic Models

    NASA Astrophysics Data System (ADS)

    Langevin, C.; Hughes, J. D.; Panday, S. M.; Banta, E. R.; Niswonger, R. G.

    2014-12-01

    MODFLOW is a popular open-source groundwater flow model distributed by the U.S. Geological Survey. For 30 years, the MODFLOW program has been widely used by academic researchers, private consultants, and government scientists to accurately, reliably, and efficiently simulate groundwater flow. With time, growing interest in surface and groundwater interactions, local refinement with nested and unstructured grids, karst groundwater flow, solute transport, and saltwater intrusion, has led to the development of numerous MODFLOW versions. Although these MODFLOW versions are often based on the core version (presently MODFLOW-2005), there are often incompatibilities that restrict their use with one another. In many cases, development of these alternative versions has been challenging due to the underlying MODFLOW structure, which was designed for simulation with a single groundwater flow model using a rectilinear grid. A new object-oriented framework is being developed for MODFLOW to provide a platform for supporting multiple models and multiple types of models within the same simulation. In the new design, any number of numerical models can be tightly coupled at the matrix level by adding them to the same numerical solution, or they can be iteratively coupled until there is convergence between them. Transfer of information between models is isolated to exchange objects, which allow models to be developed and used independently. For existing MODFLOW users, this means that the program can function in the same way it always has for a single groundwater flow model. Within this new framework, a regional-scale groundwater model may be coupled with multiple local-scale groundwater models. Or, a surface water flow model can be coupled to multiple groundwater flow models. The framework naturally allows for the simulation of solute transport. Presently, unstructured control-volume finite-difference models have been implemented in the framework for three-dimensional groundwater
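
    The role of exchange objects can be illustrated with a minimal iterative-coupling sketch: two toy models are solved repeatedly, passing an interface flux through an exchange object until their solutions stop changing. The toy models, conductances, and convergence rule are illustrative placeholders, not MODFLOW components.

```python
class ToyModel:
    def __init__(self, name, source, conductance):
        self.name = name
        self.source = source
        self.conductance = conductance
        self.head = 0.0

    def solve(self, boundary_flux):
        # Stand-in for a numerical solve: head responds to the model's own
        # source plus whatever flux the exchange delivers from the other model.
        self.head = (self.source + boundary_flux) / self.conductance
        return self.head

class Exchange:
    """Transfers information between two models without them knowing each other."""
    def __init__(self, model_a, model_b, link_conductance=0.5):
        self.a, self.b, self.c = model_a, model_b, link_conductance

    def flux_into(self, receiver, giver):
        return self.c * (giver.head - receiver.head)

regional = ToyModel("regional", source=10.0, conductance=2.0)
local = ToyModel("local", source=1.0, conductance=1.0)
exchange = Exchange(regional, local)

# Iterative coupling: repeat until the two models agree to a tight tolerance.
for iteration in range(100):
    prev = (regional.head, local.head)
    regional.solve(exchange.flux_into(regional, local))
    local.solve(exchange.flux_into(local, regional))
    if max(abs(regional.head - prev[0]), abs(local.head - prev[1])) < 1e-8:
        break

print(f"converged after {iteration + 1} iterations:",
      round(regional.head, 3), round(local.head, 3))
```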

  15. Change detection of built-up land: A framework of combining pixel-based detection and object-based recognition

    NASA Astrophysics Data System (ADS)

    Xiao, Pengfeng; Zhang, Xueliang; Wang, Dongguang; Yuan, Min; Feng, Xuezhi; Kelly, Maggi

    2016-09-01

    This study proposed a new framework that combines pixel-level change detection and object-level recognition to detect changes of built-up land from high-spatial-resolution remote sensing images. First, an adaptive differencing method was designed to detect changes at the pixel level based on both spectral and textural features. Next, the changed pixels were subjected to a set of morphological operations to improve the completeness and to generate changed objects, achieving the transition of change detection from the pixel level to the object level. The changed objects were further recognised through the difference of morphological building index in two phases to indicate changed objects on built-up land. The transformation from changed pixels to changed objects makes the proposed framework distinct from both the pixel-based and the object-based change detection methods. Compared with the pixel-based methods, the proposed framework can improve the change detection capability through the transformation and successive recognition of objects. Compared with the object-based method, the proposed framework avoids the issue of multitemporal segmentation and can generate changed objects directly from changed pixels. The experimental results show the effectiveness of the transformation from changed pixels to changed objects and of the subsequent object-based recognition in improving detection accuracy, which justifies the application potential of the proposed change detection framework.
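
    The pixel-to-object transition can be sketched as follows: threshold a change map, clean it with morphological operations, and label the connected components as candidate changed objects. The synthetic change map and the simple size filter standing in for the morphological building index are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
change_magnitude = rng.random((200, 200))
change_magnitude[50:90, 60:120] += 1.0        # a synthetic "new building"

changed_pixels = change_magnitude > 0.9                        # pixel-level detection
cleaned = ndimage.binary_opening(changed_pixels, np.ones((3, 3)))  # drop isolated pixels
cleaned = ndimage.binary_closing(cleaned, np.ones((5, 5)))         # fill small gaps

labels, n_objects = ndimage.label(cleaned)                      # changed objects
sizes = ndimage.sum(cleaned, labels, index=range(1, n_objects + 1))
built_up_candidates = [i + 1 for i, s in enumerate(sizes) if s > 200]
print(f"{n_objects} changed objects, {len(built_up_candidates)} large enough to keep")
```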

  16. A Framework for Evaluation and Use of Automated Scoring

    ERIC Educational Resources Information Center

    Williamson, David M.; Xi, Xiaoming; Breyer, F. Jay

    2012-01-01

    A framework for evaluation and use of automated scoring of constructed-response tasks is provided that entails both evaluation of automated scoring as well as guidelines for implementation and maintenance in the context of constantly evolving technologies. Consideration of validity issues and challenges associated with automated scoring are…

  17. Framework for the Evaluation of an IT Project Portfolio

    ERIC Educational Resources Information Center

    Tai, W. T.

    2010-01-01

    The basis for evaluating projects in an organizational IT project portfolio includes complexity factors, arguments/criteria, and procedures, with various implications. The purpose of this research was to develop a conceptual framework for IT project proposal evaluation. The research involved using a heuristic roadmap and the mind-mapping method to…

  18. Long-term Science Data Curation Using a Digital Object Model and Open-Source Frameworks

    NASA Astrophysics Data System (ADS)

    Pan, J.; Lenhardt, W.; Wilson, B. E.; Palanisamy, G.; Cook, R. B.

    2010-12-01

    Scientific digital content, including Earth Science observations and model output, has become more heterogeneous in format and more distributed across the Internet. In addition, data and metadata are becoming necessarily linked internally and externally on the Web. As a result, such content has become more difficult for providers to manage and preserve and for users to locate, understand, and consume. Specifically, it is increasingly harder to deliver relevant metadata and data processing lineage information along with the actual content consistently. Readme files, data quality information, production provenance, and other descriptive metadata are often separated in the storage level as well as in the data search and retrieval interfaces available to a user. Critical archival metadata, such as auditing trails and integrity checks, are often even more difficult for users to access, if they exist at all. We investigate the use of several open-source software frameworks to address these challenges. We use Fedora Commons Framework and its digital object abstraction as the repository, Drupal CMS as the user-interface, and the Islandora module as the connector from Drupal to Fedora Repository. With the digital object model, metadata of data description and data provenance can be associated with data content in a formal manner, so are external references and other arbitrary auxiliary information. Changes are formally audited on an object, and digital contents are versioned and have checksums automatically computed. Further, relationships among objects are formally expressed with RDF triples. Data replication, recovery, metadata export are supported with standard protocols, such as OAI-PMH. We provide a tentative comparative analysis of the chosen software stack with the Open Archival Information System (OAIS) reference model, along with our initial results with the existing terrestrial ecology data collections at NASA’s ORNL Distributed Active Archive Center for
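
    The use of formal relationships among digital objects can be sketched with a few RDF triples, for example using the rdflib library. The namespace URIs, identifiers, and property names below are illustrative placeholders and are not the Fedora Commons object model or API.

```python
from rdflib import Graph, Namespace, Literal

# Hypothetical namespaces for repository objects and their relations.
REPO = Namespace("http://example.org/repository/")
REL = Namespace("http://example.org/relations#")

g = Graph()
dataset = REPO["dataset-42"]
readme = REPO["dataset-42/readme"]
provenance = REPO["dataset-42/provenance"]

# Relationships among objects expressed as RDF triples.
g.add((readme, REL["describes"], dataset))
g.add((provenance, REL["isProvenanceFor"], dataset))
g.add((dataset, REL["hasChecksum"], Literal("sha1:9f2c0e")))

print(g.serialize(format="turtle"))
```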

  19. Extending the infoway benefits evaluation framework for health information systems.

    PubMed

    Lau, Francis

    2009-01-01

    A proposal is made that extends the current Canada Health Infoway Benefits Evaluation (BE) Framework for Health Information Systems (HIS) being deployed in Canada. The current BE framework takes a micro view of HIS quality, use and impact at the local level whereas the extended framework takes into account the broader socio-organizational and contextual aspects known as the meso and macro views of HIS deployment. The meso view addresses the people, organization, network and implementation dimensions. The macro view focuses on the contextual dimensions of technology standard, funding/incentive, legislation/policy and professional practice. Validation of this extended BE framework is being planned through a comparative review of recent HIS evaluation literature, a Delphi-consensus process with HIS experts and users, and multiple validation studies with recent HIS implementation projects in British Columbia. PMID:19380969

  20. POET (parallel object-oriented environment and toolkit) and frameworks for scientific distributed computing

    SciTech Connect

    Armstrong, R.; Cheung, A.

    1997-01-01

    Frameworks for parallel computing have recently become popular as a means for preserving parallel algorithms as reusable components. Frameworks for parallel computing in general, and POET in particular, focus on finding ways to orchestrate and facilitate cooperation between components that implement the parallel algorithms. Since performance is a key requirement for POET applications, CORBA or CORBA-like systems are eschewed for a SPMD message-passing architecture common to the world of distributed-parallel computing. Though the system is written in C++ for portability, the behavior of POET is more like a classical framework, such as Smalltalk. POET seeks to be a general platform for scientific parallel algorithm components which can be modified, linked, mixed and matched to a user's specification. The purpose of this work is to identify a means for parallel code reuse and to make parallel computing more accessible to scientists whose expertise is outside the field of parallel computing. The POET framework provides two things: (1) an object model for parallel components that allows cooperation without being restrictive; (2) services that allow components to access and manage user data and message-passing facilities, etc. This work has evolved through application of a series of real distributed-parallel scientific problems. The paper focuses on what is required for parallel components to cooperate and at the same time remain "black boxes" that users can drop into the frame without having to know the exquisite details of message-passing, data layout, etc. The paper walks through a specific example of a chemically reacting flow application. The example is implemented in POET and the authors identify component cooperation, usability and reusability in an anecdotal fashion.

  1. An Object-Oriented Finite Element Framework for Multiphysics Phase Field Simulations

    SciTech Connect

    Michael R Tonks; Derek R Gaston; Paul C Millett; David Andrs; Paul Talbot

    2012-01-01

    The phase field approach is a powerful and popular method for modeling microstructure evolution. In this work, advanced numerical tools are used to create a phase field framework that facilitates rapid model development. This framework, called MARMOT, is based on Idaho National Laboratory's finite element Multiphysics Object-Oriented Simulation Environment. In MARMOT, the system of phase field partial differential equations (PDEs) is solved simultaneously with PDEs describing additional physics, such as solid mechanics and heat conduction, using the Jacobian-Free Newton Krylov Method. An object-oriented architecture is created by taking advantage of commonalities in phase field models to facilitate development of new models with very little written code. In addition, MARMOT provides access to mesh and time step adaptivity, reducing the cost for performing simulations with large disparities in both spatial and temporal scales. In this work, phase separation simulations are used to show the numerical performance of MARMOT. Deformation-induced grain growth and void growth simulations are included to demonstrate the multiphysics capability.

  2. Web of Objects Based Ambient Assisted Living Framework for Emergency Psychiatric State Prediction.

    PubMed

    Alam, Md Golam Rabiul; Abedin, Sarder Fakhrul; Al Ameen, Moshaddique; Hong, Choong Seon

    2016-01-01

    Ambient assisted living can facilitate optimum health and wellness by aiding physical, mental and social well-being. In this paper, patients' psychiatric symptoms are collected through lightweight biosensors and web-based psychiatric screening scales in a smart home environment and then analyzed through machine learning algorithms to provide ambient intelligence in a psychiatric emergency. The psychiatric states are modeled through a Hidden Markov Model (HMM), and the model parameters are estimated using a Viterbi path counting and scalable Stochastic Variational Inference (SVI)-based training algorithm. The most likely psychiatric state sequence of the corresponding observation sequence is determined, and an emergency psychiatric state is predicted through the proposed algorithm. Moreover, to enable personalized psychiatric emergency care, a web-of-objects-based service framework is proposed for a smart-home environment. In this framework, the biosensor observations and the psychiatric rating scales are objectified and virtualized in the web space. Then, the web of objects of sensor observations and psychiatric rating scores is used to assess the dweller's mental health status and to predict an emergency psychiatric state. The proposed psychiatric state prediction algorithm reported 83.03 percent prediction accuracy in an empirical performance study. PMID:27608023
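
    Determining the most likely state sequence of an HMM is typically done with the Viterbi algorithm, sketched below in log space. The state set, observation symbols, and probability tables are illustrative placeholders, not the trained parameters reported in the paper.

```python
import numpy as np

states = ["stable", "at-risk", "emergency"]
start = np.array([0.7, 0.25, 0.05])
trans = np.array([[0.85, 0.12, 0.03],
                  [0.20, 0.65, 0.15],
                  [0.05, 0.35, 0.60]])
# Observation symbols: 0 = normal screening score, 1 = elevated, 2 = severe.
emit = np.array([[0.80, 0.15, 0.05],
                 [0.30, 0.55, 0.15],
                 [0.05, 0.35, 0.60]])

def viterbi(obs):
    n, m = len(obs), len(states)
    delta = np.zeros((n, m))          # best log-probability ending in each state
    psi = np.zeros((n, m), dtype=int) # backpointers
    delta[0] = np.log(start) + np.log(emit[:, obs[0]])
    for t in range(1, n):
        scores = delta[t - 1][:, None] + np.log(trans)   # (prev state, next state)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + np.log(emit[:, obs[t]])
    path = [int(delta[-1].argmax())]
    for t in range(n - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi([0, 1, 1, 2, 2]))
```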

  3. Supervised Evaluation of Image Segmentation and Object Proposal Techniques.

    PubMed

    Pont-Tuset, Jordi; Marques, Ferran

    2016-07-01

    This paper tackles the supervised evaluation of image segmentation and object proposal algorithms. It surveys, structures, and deduplicates the measures used to compare both segmentation results and object proposals with a ground truth database; and proposes a new measure: the precision-recall for objects and parts. To compare the quality of these measures, eight state-of-the-art object proposal techniques are analyzed and two quantitative meta-measures involving nine state-of-the-art segmentation methods are presented. The meta-measures consist of assuming some plausible hypotheses about the results and assessing how well each measure reflects these hypotheses. Based on the experiments performed, this paper proposes the tandem of precision-recall curves for boundaries and for objects-and-parts as the tool of choice for the supervised evaluation of image segmentation. We make the datasets and code of all the measures publicly available. PMID:26415155
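
    The basic precision-recall bookkeeping behind such measures can be sketched with two binary masks; the paper's precision-recall for objects and parts is considerably richer, so the example below only shows the pixel-wise case with synthetic masks.

```python
import numpy as np

# Synthetic ground-truth and predicted regions.
ground_truth = np.zeros((100, 100), dtype=bool)
ground_truth[20:60, 20:60] = True
prediction = np.zeros((100, 100), dtype=bool)
prediction[30:70, 25:65] = True

tp = np.logical_and(prediction, ground_truth).sum()   # true positive pixels
precision = tp / prediction.sum()
recall = tp / ground_truth.sum()
f_measure = 2 * precision * recall / (precision + recall)
print(f"precision={precision:.3f} recall={recall:.3f} F={f_measure:.3f}")
```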

  4. Framework for dynamic background modeling and shadow suppression for moving object segmentation in complex wavelet domain

    NASA Astrophysics Data System (ADS)

    Kushwaha, Alok Kumar Singh; Srivastava, Rajeev

    2015-09-01

    Moving object segmentation using change detection in the wavelet domain under continuous variation of lighting conditions is a challenging problem in video surveillance systems. Several methods have been proposed in the literature for change detection in the wavelet domain for moving object segmentation with static backgrounds, but the problem has not been addressed effectively for dynamic background changes. The methods proposed in the literature suffer from various problems, such as ghostlike appearance, object shadows, and noise. To deal with these issues, a framework for dynamic background modeling and shadow suppression under rapidly changing illumination conditions for moving object segmentation in the complex wavelet domain is proposed. The proposed method consists of eight steps applied to each video frame: wavelet decomposition of the frame using the complex wavelet transform; change detection on the detail coefficients (LH, HL, and HH); improved Gaussian-mixture-based dynamic background modeling on the approximation coefficients (LL subband); cast shadow suppression; soft thresholding for noise removal; strong edge detection; inverse wavelet transformation for reconstruction; and, finally, a closing morphology operator. A comparative analysis of the proposed method is presented both qualitatively and quantitatively with other standard methods available in the literature for six datasets in terms of various performance measures. Experimental results demonstrate the efficacy of the proposed method.

  5. Quality framework proposal for Component Material Evaluation (CME) projects.

    SciTech Connect

    Christensen, Naomi G.; Arfman, John F.; Limary, Siviengxay

    2008-09-01

    This report proposes the first stage of a Quality Framework approach that can be used to evaluate and document Component Material Evaluation (CME) projects. The first stage of the Quality Framework defines two tools that will be used to evaluate a CME project. The first tool is used to decompose a CME project into its essential elements. These elements can then be evaluated for inherent quality by looking at the subelements that impact their level of quality maturity or rigor. Quality Readiness Levels (QRLs) are used to evaluate project elements for inherent quality. The Framework provides guidance for the Principal Investigator (PI) and stakeholders on CME project prerequisites that help to ensure the proper level of confidence in the deliverable given its intended use. The Framework also provides a roadmap that defines when and how the Framework tools should be applied. Use of these tools allows the Principal Investigator (PI) and stakeholders to understand what elements the project will use to execute the work, the inherent quality of those elements, which of them are critical to the project and why, and the risks associated with the project's elements.

  6. An object-based interaction framework for the operation of multiple field robots

    NASA Astrophysics Data System (ADS)

    Jones, Henry Lee, II

    Today's field robots, such as the Sojourner Mars rover or the Predator unmanned aerial vehicle, work alone to accomplish dirty, dull, or dangerous missions. Plans for the next generation of robotic systems call for multiple field robots to conduct these missions cooperatively under the direction of a single operator. This research examines the role of the operator in multiple-robot missions and creates a human-robot interaction framework that supports this role---a vital step toward the successful deployment of these future robots. In a typical user-centered approach to the development of a human-robot interaction framework, the work practices of the robot operator would be observed, characterized, and integrated into the design. Unfortunately, there are no settings where one can study the operator of multiple robots at work because no such systems have been deployed. As an alternative, this research incorporated a surrogate setting that could be used to inform the early interaction design of multiple-robot systems. Police Special Weapons and Tactics (SWAT) teams were chosen as this setting, and an ethnographic study of SWAT commanders was conducted. Concepts from the interdisciplinary study of geographically distributed work, including common ground, shared mental models, and information sharing, were used to understand and characterize the ethnographic observations. Using lessons learned from the surrogate setting, an implementation of a new human-robot interaction framework was demonstrated on the Micro Autonomous Rovers (MAR) platform in the Aerospace Robotics Laboratory at Stanford University. This interaction framework, which is based on the sensing and manipulation of physical objects by the robots, was derived from the finding that references to physical objects serve as an essential communication and coordination tool for SWAT commanders. A human-computer interface that utilizes direct manipulation techniques and three-dimensional computer graphics was

  7. A framework for evaluating and utilizing medical terminology mappings.

    PubMed

    Hussain, Sajjad; Sun, Hong; Sinaci, Anil; Erturkmen, Gokce Banu Laleci; Mead, Charles; Gray, Alasdair J G; McGuinness, Deborah L; Prud'Hommeaux, Eric; Daniel, Christel; Forsberg, Kerstin

    2014-01-01

    Use of medical terminologies and mappings across them are considered to be crucial pre-requisites for achieving interoperable eHealth applications. Built upon the outcomes of several research projects, we introduce a framework for evaluating and utilizing terminology mappings that offers a platform for i) performing various mapping strategies, ii) representing terminology mappings together with their provenance information, and iii) enabling terminology reasoning for inferring both new and erroneous mappings. We present the results of the introduced framework from the SALUS project, where we evaluated the quality of both existing and inferred terminology mappings among standard terminologies. PMID:25160255

  8. Integrated test evaluation decision framework for the Yucca Mountain Project

    SciTech Connect

    Judd, B.R.; Hoxie, D.T.; Mattson, S.R.; Younker, J.L.

    1993-12-31

    An Integrated Test Evaluation decision framework and computer model were developed to help prioritize site-characterization tests at Yucca Mountain. An initial application of the framework evaluated studies described in the Department of Energy's Site Characterization Plan. Priorities were based on the ability of tests to detect unsuitable site conditions, to demonstrate compliance with regulatory requirements, and to build confidence and support within the scientific community. Testing costs were also considered. Results showed priorities to depend most on estimates of the abilities of tests to build scientific confidence and least on estimates of the ability to detect unsuitable site conditions.

  9. A framework to support preceptors' evaluation and development of new nurses' clinical judgment.

    PubMed

    Nielsen, Ann; Lasater, Kathie; Stock, Mary

    2016-07-01

    In today's complex, fast-paced world of hospital nursing, new graduate nurses do not have well-developed clinical judgment skills. Nurse preceptors are charged with bridging the gap between new graduates' learning in school and their autonomous practice as RNs. In one large, urban medical center in the U.S., a clinical judgment model and rubric were used as a framework for a new evaluation and orientation process. Preceptors of new graduate nurses who had used the former and new processes described their experiences using the framework. The findings indicated that having a structured framework provided objective ways to evaluate and help develop new graduate nurses' clinical judgment. It is hypothesized that academic clinical supervisors may find such a framework useful to prepare students for transition to practice. PMID:27428698

  10. Evaluating genomic tests from bench to bedside: a practical framework

    PubMed Central

    2012-01-01

    The development of genomic tests is one of the most significant technological advances in medical testing in recent decades. As these tests become increasingly available, so does the need for a pragmatic framework to evaluate the evidence base and evidence gaps in order to facilitate informed decision-making. In this article we describe such a framework that can provide a common language and benchmarks for different stakeholders of genomic testing. Each stakeholder can use this framework to specify their respective thresholds for decision-making, depending on their perspective and particular needs. This framework is applicable across a broad range of test applications and can be helpful in the application and communication of a regulatory science for genomic testing. Our framework builds upon existing work and incorporates principles familiar to researchers involved in medical testing (both diagnostic and prognostic) generally, as well as those involved in genomic testing. This framework is organized around six phases in the development of genomic tests beginning with marker identification and ending with population impact, and highlights the important knowledge gaps that need to be filled in establishing the clinical relevance of a test. Our framework focuses on the clinical appropriateness of the four main dimensions of test research questions (population/setting, intervention/index test, comparators/reference test, and outcomes) rather than prescribing a hierarchy of study designs that should be used to address each phase. PMID:23078403

  11. A framework for developing objective and measurable recovery criteria for threatened and endangered species.

    PubMed

    Himes Boor, Gina K

    2014-02-01

    For species listed under the U.S. Endangered Species Act (ESA), the U.S. Fish and Wildlife Service and National Marine Fisheries Service are tasked with writing recovery plans that include "objective, measurable criteria" that define when a species is no longer at risk of extinction, but neither the act itself nor agency guidelines provide an explicit definition of objective, measurable criteria. Past reviews of recovery plans, including one published in 2012, show that many criteria lack quantitative metrics with clear biological rationale and are not meeting the measurable and objective mandate. I reviewed how objective, measurable criteria have been defined implicitly and explicitly in peer-reviewed literature, the ESA, other U.S. statutes, and legal decisions. Based on a synthesis of these sources, I propose the following 6 standards be used as minimum requirements for objective, measurable criteria: contain a quantitative threshold with calculable units, stipulate a timeframe over which they must be met, explicitly define the spatial extent or population to which they apply, specify a sampling procedure that includes sample size, specify a statistical significance level, and include justification by providing scientific evidence that the criteria define a species whose extinction risk has been reduced to the desired level. To meet these 6 standards, I suggest that recovery plans be explicitly guided by and organized around a population viability modeling framework even if data or agency resources are too limited to complete a viability model. When data and resources are available, recovery criteria can be developed from the population viability model results, but when data and resources are insufficient for model implementation, extinction risk thresholds can be used as criteria. A recovery-planning approach centered on viability modeling will also yield appropriately focused data-acquisition and monitoring plans and will facilitate a seamless transition

  12. Real-time framework for tensor-based image enhancement for object classification

    NASA Astrophysics Data System (ADS)

    Cyganek, Bogusław; Smołka, Bogdan

    2016-04-01

    In many practical situations, visual pattern recognition is heavily burdened by the low quality of input images due to noise and geometrical distortions, as well as the low quality of the acquisition hardware. However, although there are image quality improvement techniques, such as nonlinear filtering, only a few attempts reported in the literature try to build these enhancement methods into a complete chain for multi-dimensional object recognition, such as color video or hyperspectral images. In this work we propose a joint multilinear signal filtering and classification system built upon the multi-dimensional (tensor) approach. Tensor filtering is performed by projecting the multi-dimensional input signal into the tensor subspace spanned by the best-rank tensor decomposition method. Object classification, in turn, is done by constructing a tensor subspace based on the Higher-Order Singular Value Decomposition method applied to the prototype patterns. In the experiments we show that the proposed chain achieves high object recognition accuracy in real time, even from poor-quality prototypes. Even more importantly, the proposed framework allows unified classification of signals of any dimensions, such as color images or video sequences, which are exemplars of 3D and 4D tensors, respectively. The paper also discusses some practical issues related to the implementation of the key components of the proposed system.

  13. A Global Hypothesis Verification Framework for 3D Object Recognition in Clutter.

    PubMed

    Aldoma, Aitor; Tombari, Federico; Stefano, Luigi Di; Vincze, Markus

    2016-07-01

    Pipelines to recognize 3D objects despite clutter and occlusions usually end up with a final verification stage whereby recognition hypotheses are validated or dismissed based on how well they explain sensor measurements. Unlike previous work, we propose a Global Hypothesis Verification (GHV) approach which regards all hypotheses jointly so as to account for mutual interactions. GHV provides a principled framework to tackle the complexity of our visual world by leveraging a plurality of recognition paradigms and cues. Accordingly, we present a 3D object recognition pipeline deploying both global and local 3D features as well as shape and color. Thereby, and facilitated by the robustness of the verification process, diverse object hypotheses can be gathered and weak hypotheses need not be suppressed too early to trade sensitivity for specificity. Experiments demonstrate the effectiveness of our proposal, which significantly improves over the state of the art and attains ideal performance (no false negatives, no false positives) on three out of the six most relevant and challenging benchmark datasets. PMID:26485476

  14. FACET: an object-oriented software framework for modeling complex social behavior patterns

    SciTech Connect

    Dolph, J. E.; Christiansen, J. H.; Sydelko, P. J.

    2000-06-30

    The Framework for Addressing Cooperative Extended Transactions (FACET) is a flexible, object-oriented architecture for implementing models of dynamic behavior of multiple individuals, or agents, in a simulation. These agents can be human (individuals or organizations) or animal and may exhibit any type of organized social behavior that can be logically articulated. FACET was developed by Argonne National Laboratory's (ANL) Decision and Information Sciences Division (DIS) out of the need to integrate societal processes into natural system simulations. The FACET architecture includes generic software components that provide the agents with various mechanisms for interaction, such as step sequencing and logic, resource management, conflict resolution, and preemptive event handling. FACET components provide a rich environment within which patterns of behavior can be captured in a highly expressive manner. Interactions among agents in FACET are represented by Course of Action (COA) object-based models. Each COA contains a directed graph of individual actions, which represents any known pattern of social behavior. The agents' behavior in a FACET COA, in turn, influences the natural landscape objects in a simulation (i.e., vegetation, soil, and habitat) by updating their states. The modular design of the FACET architecture provides the flexibility to create multiple and varied simulation scenarios by changing social behavior patterns, without disrupting the natural process models. This paper describes the FACET architecture and presents several examples of FACET models that have been developed to assess the effects of anthropogenic influences on the dynamics of the natural environment.

  15. Semantic framework for mapping object-oriented model to semantic web languages.

    PubMed

    Ježek, Petr; Mouček, Roman

    2015-01-01

    The article deals with and discusses two main approaches to building semantic structures for electrophysiological metadata. These are the use of conventional data structures, repositories, and programming languages on the one hand and the use of formal representations of ontologies, known from knowledge representation, such as description logics or semantic web languages, on the other hand. Although knowledge engineering offers languages supporting richer semantic means of expression and technologically advanced approaches, conventional data structures and repositories are still popular among developers, administrators, and users because of their simplicity, overall intelligibility, and lower demands on technical equipment. The choice of conventional data resources and repositories, however, raises the question of how and where to add semantics that cannot be naturally expressed using them. As one possible solution, this semantics can be added into the structures of the programming language that accesses and processes the underlying data. To support this idea we introduced a software prototype that enables its users to add semantically richer expressions into Java object-oriented code. This approach does not burden users with additional demands on the programming environment since reflective Java annotations are used as an entry point for these expressions. Moreover, the additional semantics need not be written by the programmer directly in the code; it can be collected from non-programmers using a graphical user interface. A mapping that allows the transformation of the semantically enriched Java code into the Semantic Web language OWL was proposed and implemented in a library named the Semantic Framework. This approach was validated by the integration of the Semantic Framework into the EEG/ERP Portal and by the subsequent registration of the EEG/ERP Portal in the Neuroscience Information Framework. PMID:25762923

  16. Semantic framework for mapping object-oriented model to semantic web languages

    PubMed Central

    Ježek, Petr; Mouček, Roman

    2015-01-01

    The article deals with and discusses two main approaches to building semantic structures for electrophysiological metadata. These are the use of conventional data structures, repositories, and programming languages on the one hand and the use of formal representations of ontologies, known from knowledge representation, such as description logics or semantic web languages, on the other hand. Although knowledge engineering offers languages supporting richer semantic means of expression and technologically advanced approaches, conventional data structures and repositories are still popular among developers, administrators, and users because of their simplicity, overall intelligibility, and lower demands on technical equipment. The choice of conventional data resources and repositories, however, raises the question of how and where to add semantics that cannot be naturally expressed using them. As one possible solution, this semantics can be added into the structures of the programming language that accesses and processes the underlying data. To support this idea we introduced a software prototype that enables its users to add semantically richer expressions into Java object-oriented code. This approach does not burden users with additional demands on the programming environment since reflective Java annotations are used as an entry point for these expressions. Moreover, the additional semantics need not be written by the programmer directly in the code; it can be collected from non-programmers using a graphical user interface. A mapping that allows the transformation of the semantically enriched Java code into the Semantic Web language OWL was proposed and implemented in a library named the Semantic Framework. This approach was validated by the integration of the Semantic Framework into the EEG/ERP Portal and by the subsequent registration of the EEG/ERP Portal in the Neuroscience Information Framework. PMID:25762923

  17. The effect of object-valence relations on automatic evaluation.

    PubMed

    Moran, Tal; Bar-Anan, Yoav

    2013-01-01

    Two experiments tested the effect of co-occurrence of a target object with affective stimuli on automatic evaluation of the target when the relation between the target and the affective stimuli suggests that they have opposite valence. Participants learned about targets that ended an unpleasant noise or a pleasant music. The valence of such targets is opposite to the valence of the affective stimuli that co-occur with them. Participants reported preference for targets that ended noise over targets that ended music, but automatic evaluation measures revealed the opposite preference. This suggests that automatic evaluation is sensitive to co-occurrence between stimuli more than to the relation between the stimuli, and that relational information has a stronger influence on deliberate evaluation than on automatic evaluation. These conclusions support the associative-propositional evaluation model (Gawronski & Bodenhausen, 2006), and add evidence regarding the sensitivity of the evaluative-conditioning effect to relational information. PMID:23072334

  18. MObIUS (Massive Object Integrated Universal Store): A Survey Toward a More General Framework

    SciTech Connect

    Sirp, J K; Brugger, S T

    2004-06-07

    General frameworks for distributed computing are slowly evolving out of Grid, Peer Architecture, and Web Services. The following results from a summer-long survey of distributed computing practices reveal three things. One, Legion and Cactus-G have achieved the most in terms of providing an all-purpose application environment. Two, extending a local programming environment to operate in a highly distributed fashion can be facilitated with toolkits like Globus. Three, building a new system from the ground up could be realized, in part, by using some of the following components: an Object Oriented Database, Tapestry, JXTA, BOINC, Globus, component architecture technology, XML and related libraries, Condor-G, Proteus, and ParMETIS.

  19. Development of an Object-Oriented Turbomachinery Analysis Code within the NPSS Framework

    NASA Technical Reports Server (NTRS)

    Jones, Scott M.

    2014-01-01

    During the preliminary or conceptual design phase of an aircraft engine, the turbomachinery designer needs to estimate the effects of a large number of design parameters, such as flow size, stage count, blade count, and radial position, on the weight and efficiency of a turbomachine. Computer codes are invariably used to perform this task; however, such codes are often very old, written in outdated languages with arcane input files, and rarely adaptable to new architectures or unconventional layouts. Given the need to perform these kinds of preliminary design trades, a modern 2-D turbomachinery design and analysis code has been written using the Numerical Propulsion System Simulation (NPSS) framework. This paper discusses the development of the governing equations and the structure of the primary objects used in OTAC.

  20. Evaluation of the virtual learning object "Diagnostic reasoning in nursing applied to preterm newborns".

    PubMed

    Góes, Fernanda dos Santos Nogueira de; Fonseca, Luciana Mara Monti; Furtado, Maria Cândida de Carvalho; Leite, Adriana Moraes; Scochi, Carmen Gracinda Silvan

    2011-01-01

    The potential use of computer technology in teaching and continuing education in nursing motivated the development of this study to evaluate the virtual learning object, "Diagnostic Reasoning in Nursing Applied to Preterm Newborns", at an intermediate neonatal care unit. This descriptive study evaluates the appearance and content of the virtual object concerning aspects related to presentation, organization, usability and overall impression. Experts from the fields of computer technology (12) and nursing (31) participated in the evaluation process. Each sub-item of the instrument was assessed on a Likert scale and blank space was provided for comments/suggestions. All items were positively evaluated by over 80% of the experts, except for the 'informational density' criterion in the evaluation performed by computer technology experts. The developed product is considered adequate for teaching nursing students and for continuing education in diagnostic reasoning for the development of Nursing Diagnoses for preterm newborns, within the problem-posing pedagogical framework. PMID:21876941

  1. Object-oriented fault tree evaluation program for quantitative analyses

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, F. A.; Koen, B. V.

    1988-01-01

    Object-oriented programming can be combined with fault tree techniques to give a significantly improved environment for evaluating the safety and reliability of large complex systems for space missions. Deep knowledge about system components and interactions, available from reliability studies and other sources, can be described using objects that make up a knowledge base. This knowledge base can be interrogated throughout the design process, during system testing, and during operation, and can be easily modified to reflect design changes in order to maintain a consistent information source. An object-oriented environment for reliability assessment has been developed on a Texas Instruments (TI) Explorer LISP workstation. The program, which directly evaluates system fault trees, utilizes the object-oriented extension to LISP called Flavors that is available on the Explorer. The object representation of a fault tree facilitates the storage and retrieval of information associated with each event in the tree, including tree structural information and intermediate results obtained during the tree reduction process. Reliability data associated with each basic event are stored in the fault tree objects. The object-oriented environment on the Explorer also includes a graphical tree editor which was modified to display and edit the fault trees.
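
    The object representation of a fault tree can be sketched as follows: basic events and gates are objects, and each object evaluates its own probability (assuming independent basic events). The event names and probabilities are illustrative; the original system was written in LISP with Flavors, so this Python sketch only mirrors the structure.

```python
class BasicEvent:
    def __init__(self, name, probability):
        self.name = name
        self.probability = probability

    def evaluate(self):
        return self.probability

class AndGate:
    def __init__(self, name, children):
        self.name, self.children = name, children

    def evaluate(self):
        # All children must fail: product of child probabilities.
        p = 1.0
        for child in self.children:
            p *= child.evaluate()
        return p

class OrGate:
    def __init__(self, name, children):
        self.name, self.children = name, children

    def evaluate(self):
        # At least one child fails: complement of "none fail".
        p_none = 1.0
        for child in self.children:
            p_none *= (1.0 - child.evaluate())
        return 1.0 - p_none

# Hypothetical top event: loss of cooling = (pump A AND pump B fail) OR valve stuck.
tree = OrGate("loss_of_cooling", [
    AndGate("both_pumps_fail", [BasicEvent("pump_A", 0.01),
                                BasicEvent("pump_B", 0.01)]),
    BasicEvent("valve_stuck", 0.001),
])
print(f"P(top event) = {tree.evaluate():.6f}")
```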

  2. Objective evaluation of oral presentation skills using Inertial Measurement Units.

    PubMed

    Sessa, Salvatore; Kong, Weisheng; Zhang, Di; Cosentino, Sarah; Manawadu, Udara; Kawasaki, Motoji; Thomas, George Thuruthel; Suzuki, Tomohiro; Tsumura, Ryosuke; Takanishi, Atsuo

    2015-01-01

    Oral presentation is considered one of the skills most sought after by companies, professional organizations, and program accreditation agencies. However, both the learning process and the evaluation of this skill are time-demanding and complex tasks that require dedication and experience. Furthermore, the role of the instructor is fundamental during the presentation assessment. The instructor needs to consider several verbal and nonverbal communication cues sent in parallel, and this kind of evaluation is often subjective. Even though oral presentation rubrics try to standardize the evaluation, they are not an optimal solution because they do not provide the presenter with real-time feedback. In this paper, we describe a system for behavioral monitoring during presentations. We propose an ecological measurement system based on Inertial Measurement Units to objectively evaluate the presenter's posture through quantitative parameters. The system can be used to provide real-time feedback to presenters unobtrusively. PMID:26736952

  3. Development and application of a modular watershed-scale hydrologic model using the object modeling system: runoff response evaluation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This study reports on: 1) the integration of the European J2K model (an object-oriented, modular hydrological system for fully distributed simulation of the water balance in large watersheds) under the Object Modeling System (OMS) environmental modeling framework; and 2) evaluation of OMS-J2K perfor...

  4. Methodology Evaluation Framework for Component-Based System Development.

    ERIC Educational Resources Information Center

    Dahanayake, Ajantha; Sol, Henk; Stojanovic, Zoran

    2003-01-01

    Explains component-based development (CBD) for distributed information systems and presents an evaluation framework, which highlights the extent to which a methodology is component oriented. Compares prominent CBD methods, discusses ways of modeling, and suggests that this is a first step towards a components-oriented systems development…

  5. Evaluating QR Code Case Studies Using a Mobile Learning Framework

    ERIC Educational Resources Information Center

    Rikala, Jenni

    2014-01-01

    The aim of this study was to evaluate the feasibility of Quick Response (QR) codes and mobile devices in the context of Finnish basic education. The feasibility was analyzed through a mobile learning framework, which includes the core characteristics of mobile learning. The study is part of a larger research where the aim is to develop a…

  6. The AgMIP Framework to Evaluate Agricultural Pathways

    NASA Technical Reports Server (NTRS)

    Ruane, Alex

    2015-01-01

    This talk will describe the community and research framework that AgMIP has built to enable evidence-based adaptation investment. We provide expertise on the ground and connect various disciplines in order to allow specific adaptations to be evaluated for their biophysical and socio-economic ramifications.

  7. ESDORA: A Data Archive Infrastructure Using Digital Object Model and Open Source Frameworks

    NASA Astrophysics Data System (ADS)

    Shrestha, Biva; Pan, Jerry; Green, Jim; Palanisamy, Giriprakash; Wei, Yaxing; Lenhardt, W.; Cook, R. Bob; Wilson, B. E.; Leggott, M.

    2011-12-01

    There is an array of challenges associated with preserving, managing, and using contemporary scientific data. Large volume, multiple formats and data services, and the lack of a coherent mechanism for metadata/data management are some of the common issues across data centers. It is often difficult to preserve the data history and lineage information, along with other descriptive metadata, hindering the true science value for the archived data products. In this project, we use digital object abstraction architecture as the information/knowledge framework to address these challenges. We have used the following open-source frameworks: Fedora-Commons Repository, Drupal Content Management System, Islandora (Drupal Module) and Apache Solr Search Engine. The system is an active archive infrastructure for Earth Science data resources, which includes ingestion, archiving, distribution, and discovery functionalities. We use an ingestion workflow to ingest the data and metadata, where many different aspects of data descriptions (including structured and non-structured metadata) are reviewed. The data and metadata are published after being reviewed multiple times; they are staged during the review phase. Each digital object is encoded in XML for long-term preservation of the content and relations among the digital items. The software architecture provides a flexible, modularized framework for adding pluggable user-oriented functionality. Solr is used to enable word search as well as faceted search. A home-grown spatial search module is plugged in to allow the user to make a spatial selection in a map view. An RDF semantic store within the Fedora-Commons Repository is used for storing information on data lineage, dissemination services, and text-based metadata. We use the semantic notion "isViewerFor" to register internally or externally referenced URLs, which are rendered within the same web browser when possible. With appropriate mapping of content into digital objects, many

  8. Formative Evaluation of Lectures; An Application of Stake's Evaluation Framework.

    ERIC Educational Resources Information Center

    Westphal, Walter W.; And Others

    The problem of major concern to the Physics Education Evaluation Project (P.E.E.P.) involved the improvement of university physics teaching and learning. The present paper describes instruments and procedures developed for systematic formative evaluation of physics lectures. The data was drawn from two sections of a first year university physics…

  9. Object-Oriented Algorithm For Evaluation Of Fault Trees

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, F. A.; Koen, B. V.

    1992-01-01

    Algorithm for direct evaluation of fault trees incorporates techniques of object-oriented programming. Reduces number of calls needed to solve trees with repeated events. Provides significantly improved software environment for such computations as quantitative analyses of safety and reliability of complicated systems of equipment (e.g., spacecraft or factories).
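
    A minimal sketch of the caching idea behind such an object-oriented evaluator is shown below: each node object memoizes its computed probability, so a basic event shared by several gates is evaluated only once. Class names are illustrative, the gate formulas assume independent inputs, and the sketch demonstrates the call-reduction mechanism rather than an exact treatment of repeated events.

    ```python
    # Sketch of an object-oriented fault-tree evaluator that caches results so
    # shared (repeated) events are computed only once. Names are illustrative.

    class Node:
        def __init__(self):
            self._cache = None

        def probability(self):
            if self._cache is None:          # evaluate each object only once
                self._cache = self._evaluate()
            return self._cache

    class BasicEvent(Node):
        def __init__(self, p):
            super().__init__()
            self.p = p

        def _evaluate(self):
            return self.p

    class Gate(Node):
        def __init__(self, kind, children):
            super().__init__()
            self.kind = kind                 # "AND" or "OR"
            self.children = children

        def _evaluate(self):
            probs = [c.probability() for c in self.children]
            if self.kind == "AND":           # all inputs must fail
                out = 1.0
                for p in probs:
                    out *= p
                return out
            out = 1.0                        # OR: at least one input fails
            for p in probs:
                out *= (1.0 - p)
            return 1.0 - out

    # A repeated event shared by two gates is evaluated a single time.
    pump = BasicEvent(0.01)
    top = Gate("OR", [Gate("AND", [pump, BasicEvent(0.05)]), pump])
    print(round(top.probability(), 6))
    ```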

  10. Metadata Evaluation: The Road toward Meeting Our Objectives (SIG LAN)

    ERIC Educational Resources Information Center

    Cuddy, Colleen

    2000-01-01

    Lists the three presentation titles that were part of this technical session on metadata evaluation, including "An Update on the Dublin Core" (Stuart Weibel); "Metadata Development Update" (Kathleen Burnett and Jeong-Mee Lee); and "Metadata Standards for Discovery and Retrieval of Learning Objects" (Stuart Sutton). (LRW)

  11. Training Objectives, Transfer, Validation and Evaluation: A Sri Lankan Study

    ERIC Educational Resources Information Center

    Wickramasinghe, Vathsala M.

    2006-01-01

    Using a stratified random sample, this paper examines the training practices of setting objectives, transfer, validation and evaluation in Sri Lanka. The paper further sets out to compare those practices across local, foreign and joint-venture companies based on the assumption that there may be significant differences across companies of different…

  12. Methodology for Evaluating Quality and Reusability of Learning Objects

    ERIC Educational Resources Information Center

    Kurilovas, Eugenijus; Bireniene, Virginija; Serikoviene, Silvija

    2011-01-01

    The aim of the paper is to present the scientific model and several methods for the expert evaluation of quality of learning objects (LOs), paying special attention to LOs reusability level. The activities of eQNet Quality Network for a European Learning Resource Exchange (LRE) aimed to improve reusability of LOs of European Schoolnet's LRE…

  13. Evaluating cloud retrieval algorithms with the ARM BBHRP framework

    SciTech Connect

    Mlawer, E.; Dunn, M.; Shippert, T.; Troyan, D.; Johnson, K. L.; Miller, M. A.; Delamere, J.; Turner, D. D.; Jensen, M. P.; Flynn, C.; Shupe, M.; Comstock, J.; Long, C. N.; Clough, S. T.; Sivaraman, C.; Khaiyer, M.; Xie, S.; Rutan, D.; Minnis, P.

    2008-03-10

    Climate and weather prediction models require accurate calculations of vertical profiles of radiative heating. Although heating rate calculations cannot be directly validated due to the lack of corresponding observations, surface and top-of-atmosphere measurements can indirectly establish the quality of computed heating rates through validation of the calculated irradiances at the atmospheric boundaries. The ARM Broadband Heating Rate Profile (BBHRP) project, a collaboration of all the working groups in the program, was designed with these heating rate validations as a key objective. Given the large dependence of radiative heating rates on cloud properties, a critical component of BBHRP radiative closure analyses has been the evaluation of cloud microphysical retrieval algorithms. This evaluation is an important step in establishing the necessary confidence in the continuous profiles of computed radiative heating rates produced by BBHRP at the ARM Climate Research Facility (ACRF) sites that are needed for modeling studies. This poster details the continued effort to evaluate cloud property retrieval algorithms within the BBHRP framework, a key focus of the project this year. A requirement for the computation of accurate heating rate profiles is a robust cloud microphysical product that captures the occurrence, height, and phase of clouds above each ACRF site. Various approaches to retrieve the microphysical properties of liquid, ice, and mixed-phase clouds have been processed in BBHRP for the ACRF Southern Great Plains (SGP) and the North Slope of Alaska (NSA) sites. These retrieval methods span a range of assumptions concerning the parameterization of cloud location, particle density, size, shape, and involve different measurement sources. We will present the radiative closure results from several different retrieval approaches for the SGP site, including those from Microbase, the current 'reference' retrieval approach in BBHRP. At the NSA, mixed-phase clouds and

  14. Objective climate classification as a framework for assessing projected climate change in High Mountain Asia

    NASA Astrophysics Data System (ADS)

    Forsythe, Nathan; Fowler, Hayley; Pritchard, David; Blenkinsop, Stephen

    2016-04-01

    This study builds upon foundational work by Forsythe et al (2015, doi: 10.5194/esd-6-311-2015) which used principal component analysis (PCA) and k-means clustering to derive objective present climate classifications over High Mountain Asia and adjacent regions (60E to 100E, 20N to 40N) based on global meteorological reanalyses' estimates of the drivers of water resources availability and variability (precipitation, surface shortwave radiation, daily mean near surface air temperature and its diurnal range). This study refines Forsythe et al (2015) by testing the potential for spatially disaggregating coarse global reanalyses (and climate model outputs) using iterative classification and regression processing to achieve a 5km (0.05 decimal degree) horizontal resolution in order to better capture the severe topographic range and gradients of the HMA domain. This spatial refinement should allow for better intercomparability of resultant classifications derived from datasets with different native resolutions. This intercomparability is critical because the second stage of this study assesses climate change projections from a range of regional climate model experiments - UK Hadley Centre RQUMP 25km South Asia perturbed physics ensemble, CORDEX South Asia domain and (pending dataset availability) NextData EC-Earth 15km high resolution HMA domain - using derived objective classifications as a framework for aggregation. By establishing sub-regional units of relative homogeneity, the objective classification approach allows a twofold assessment of projected future climate scenarios, i.e. change can be quantified not only as perturbation of key variables (e.g. precipitation, temperature, etc) but also in terms of the spatial descriptors (areal extent, surface elevation range and mean, latitudinal and longitudinal bounds) of the identified climate zones. It is expected that this novel approach, and in particular the very high target spatial resolution, will yield important insights into the
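
    A minimal sketch of the classification step described above (standardize the climate descriptors, reduce them with PCA, then cluster grid cells with k-means) is given below; the array shape, number of retained components, and cluster count are illustrative assumptions, not values from the study.

    ```python
    # Illustrative sketch: objective climate classification of grid cells via
    # PCA followed by k-means. Input shape, components, and k are assumptions.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # rows = grid cells, columns = climate descriptors
    # (e.g. precipitation, shortwave radiation, mean temperature, diurnal range)
    X = rng.normal(size=(5000, 4))

    X_std = StandardScaler().fit_transform(X)        # put variables on one scale
    pcs = PCA(n_components=3).fit_transform(X_std)   # retain leading components
    labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(pcs)

    print(np.bincount(labels))                       # cells per climate class
    ```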

  15. An Evaluation of Database Solutions to Spatial Object Association

    SciTech Connect

    Kumar, V S; Kurc, T; Saltz, J; Abdulla, G M; Kohn, S; Matarazzo, C

    2008-06-24

    Object association is a common problem encountered in many applications. Spatial object association, also referred to as crossmatch of spatial datasets, is the problem of identifying and comparing objects in two datasets based on their positions in a common spatial coordinate system--one of the datasets may correspond to a catalog of objects observed over time in a multi-dimensional domain; the other dataset may consist of objects observed in a snapshot of the domain at a time point. The use of database management systems to solve the object association problem provides portability across different platforms and also greater flexibility. Increasing dataset sizes in today's applications, however, have made object association a data/compute-intensive problem that requires targeted optimizations for efficient execution. In this work, we investigate how database-based crossmatch algorithms can be deployed on different database system architectures and evaluate the deployments to understand the impact of architectural choices on crossmatch performance and associated trade-offs. We investigate the execution of two crossmatch algorithms on (1) a parallel database system with active disk style processing capabilities, (2) a high-throughput network database (MySQL Cluster), and (3) shared-nothing databases with replication. We have conducted our study in the context of a large-scale astronomy application with real use-case scenarios.
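
    As a rough illustration of the underlying crossmatch operation (independent of any particular database architecture), the sketch below matches each object in one catalog to its nearest neighbour in another and keeps pairs within a tolerance radius; the positions and the radius are synthetic placeholders.

    ```python
    # Minimal positional crossmatch sketch: for each object in catalog B, find
    # the nearest object in catalog A and keep pairs within a match radius.
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(1)
    cat_a = rng.uniform(0, 10, size=(100000, 2))                  # (x, y), catalog A
    cat_b = cat_a[:500] + rng.normal(scale=1e-3, size=(500, 2))   # perturbed copies

    tree = cKDTree(cat_a)
    dist, idx = tree.query(cat_b, k=1)             # nearest A neighbour of each B
    radius = 0.01
    matches = [(i, int(j)) for i, (d, j) in enumerate(zip(dist, idx)) if d <= radius]
    print(len(matches), "matched pairs within radius", radius)
    ```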

  16. Training snakes to find object boundaries and evaluating them

    NASA Astrophysics Data System (ADS)

    Fenster, Samuel D.; Kender, John R.

    2000-08-01

    We describe how to teach deformable models (snakes) to find object boundaries based on user-specified criteria, and we present a method for evaluating which criteria work best. These methods prove indispensable in abdominal CT images. Further work is needed in heart ultrasound images. The methods apply in any domain with consistent image conditions characterizing object boundaries, for which automated identification is nontrivial, perhaps due to interfering detail. A traditional strongest-edge-seeking snake fails to find an object's boundary when the strongest nearby image edges are not the ones sought. But we show how to instead learn, from training data, the relation between a shape and any image feature, as the probability distribution (PDF) of a function of image and shape. An important but neglected task has always been to select image qualities to guide a model. Because success depends on the relation of objective function (PDF) output to shape correctness, it is evaluated using a sampling of ground truth, a random model of the range of shapes tried during optimization, and a measure of shape closeness. The test results are evaluated for incidence of 'false positives' (scoring better than ground truth) versus incorrectness, and for the objective function's monotonicity with respect to incorrectness. Monotonicity is measured using correlation coefficient and using the newly introduced distance from closest increasing function. Domain-dependent choices must be tested. We analyze several Gaussian models fitting image intensity and perpendicular gradient at the object boundary, as well as the traditional sum of gradient magnitudes. The latter model is found inadequate in our domains; some of the former succeed.

  17. An Evaluative Review of Simulated Dynamic Smart 3d Objects

    NASA Astrophysics Data System (ADS)

    Romeijn, H.; Sheth, F.; Pettit, C. J.

    2012-07-01

    Three-dimensional (3D) modelling of plants can be an asset for creating agricultural based visualisation products. The continuum of 3D plants models ranges from static to dynamic objects, also known as smart 3D objects. There is an increasing requirement for smarter simulated 3D objects that are attributed mathematically and/or from biological inputs. A systematic approach to plant simulation offers significant advantages to applications in agricultural research, particularly in simulating plant behaviour and the influences of external environmental factors. Approaches to 3D plant visualisation range from the use of photographed, billboarded images of plants to more advanced procedural models that come closer to simulating realistic virtual plants. However, few programs model physical reactions of plants to external factors and even fewer are able to grow plants based on mathematical and/or biological parameters. In this paper, we undertake an evaluation of plant-based object simulation programs currently available, with a focus upon the components and techniques involved in producing these objects. Through an analytical review process we consider the strengths and weaknesses of several program packages, the features and use of these programs and the possible opportunities in deploying these for creating smart 3D plant-based objects to support agricultural research and natural resource management. In creating smart 3D objects the model needs to be informed by both plant physiology and phenology. Expert knowledge will frame the parameters and procedures that will attribute the object and allow the simulation of dynamic virtual plants. Ultimately, biologically smart 3D virtual plants that react to changes within an environment could be an effective medium to visually represent landscapes and communicate land management scenarios and practices to planners and decision-makers.

  18. Assessing Vital Signs: Applying Two Participatory Evaluation Frameworks to the Evaluation of a College of Nursing

    ERIC Educational Resources Information Center

    Connors, Susan C.; Magilvy, Joan K.

    2011-01-01

    Evaluation research has been in progress to clarify the concept of participatory evaluation and to assess its impact. Recently, two theoretical frameworks have been offered--Daigneault and Jacob's participatory evaluation measurement index and Champagne and Smits' model of practical participatory evaluation. In this case report, we apply these…

  19. An ecosystem evaluation framework for global seamount conservation and management.

    PubMed

    Taranto, Gerald H; Kvile, Kristina Ø; Pitcher, Tony J; Morato, Telmo

    2012-01-01

    In the last twenty years, several global targets for protection of marine biodiversity have been adopted but have failed. The Convention on Biological Diversity (CBD) aims at preserving 10% of all the marine biomes by 2020. For achieving this goal, ecologically or biologically significant areas (EBSA) have to be identified in all biogeographic regions. However, the methodologies for identifying the best suitable areas are still to be agreed. Here, we propose a framework for applying the CBD criteria to locate potential ecologically or biologically significant seamount areas based on the best information currently available. The framework combines the likelihood of a seamount constituting an EBSA and its level of human impact and can be used at global, regional and local scales. This methodology allows the classification of individual seamounts into four major portfolio conservation categories which can help optimize management efforts toward the protection of the most suitable areas. The framework was tested against 1000 dummy seamounts and satisfactorily assigned seamounts to proper EBSA and threats categories. Additionally, the framework was applied to eight case study seamounts that were included in three out of four portfolio categories: areas highly likely to be identified as EBSA with high degree of threat; areas highly likely to be EBSA with low degree of threat; and areas with a low likelihood of being EBSA with high degree of threat. This framework will allow managers to identify seamount EBSAs and to prioritize their policies in terms of protecting undisturbed areas, disturbed areas for recovery of habitats and species, or both based on their management objectives. It also identifies seamount EBSAs and threats considering different ecological groups in both pelagic and benthic communities. Therefore, this framework may represent an important tool to mitigate seamount biodiversity loss and to achieve the 2020 CBD goals. PMID:22905190

  20. An Ecosystem Evaluation Framework for Global Seamount Conservation and Management

    PubMed Central

    Taranto, Gerald H.; Kvile, Kristina Ø.; Pitcher, Tony J.; Morato, Telmo

    2012-01-01

    In the last twenty years, several global targets for protection of marine biodiversity have been adopted but have failed. The Convention on Biological Diversity (CBD) aims at preserving 10% of all the marine biomes by 2020. For achieving this goal, ecologically or biologically significant areas (EBSA) have to be identified in all biogeographic regions. However, the methodologies for identifying the best suitable areas are still to be agreed. Here, we propose a framework for applying the CBD criteria to locate potential ecologically or biologically significant seamount areas based on the best information currently available. The framework combines the likelihood of a seamount constituting an EBSA and its level of human impact and can be used at global, regional and local scales. This methodology allows the classification of individual seamounts into four major portfolio conservation categories which can help optimize management efforts toward the protection of the most suitable areas. The framework was tested against 1000 dummy seamounts and satisfactorily assigned seamounts to proper EBSA and threats categories. Additionally, the framework was applied to eight case study seamounts that were included in three out of four portfolio categories: areas highly likely to be identified as EBSA with high degree of threat; areas highly likely to be EBSA with low degree of threat; and areas with a low likelihood of being EBSA with high degree of threat. This framework will allow managers to identify seamount EBSAs and to prioritize their policies in terms of protecting undisturbed areas, disturbed areas for recovery of habitats and species, or both based on their management objectives. It also identifies seamount EBSAs and threats considering different ecological groups in both pelagic and benthic communities. Therefore, this framework may represent an important tool to mitigate seamount biodiversity loss and to achieve the 2020 CBD goals. PMID:22905190

  1. Object-oriented framework for rapid development of image analysis applications

    NASA Astrophysics Data System (ADS)

    Liang, Weidong; Zhang, Xiangmin; Sonka, Milan

    1997-04-01

    Image analysis applications are usually composed of a set of graphic objects, a set of image processing algorithms, and a graphic user interface (GUI). Typically, developing an image analysis application is time-consuming and the developed programs are hard to maintain. We have developed a framework called IMANAL that aims at reducing the development costs by improving system maintainability, design change flexibility, component reusability, and human-computer interaction. IMANAL decomposes an image analysis application into three models: a data model, a process model, and a GUI model. The three models as well as the collaboration among them are standardized into a unified system architecture. A new application can be developed rapidly by customizing task- specific building blocks within the unified architecture. IMANAL maintains a class library of more than 100,000 lines of C/C++ code that are highly reusable for creating the three above-mentioned models. Software components from other sources such as Khoros can also be easily included in the applications. IMANAL was used for development of image analysis applications utilizing a variety of medical images such as x-ray coronary angiography, intracardiac, intravascular and brachial ultrasound, and pulmonary CT. In all the above listed applications, the development overhead is removed and the developer is able to fully focus on the image analysis algorithms. IMANAL has proven to be a useful tool for image analysis research as well as the prototype development tool for commercial image analysis applications.

  2. Attitudes as Object-Evaluation Associations of Varying Strength

    PubMed Central

    Fazio, Russell H.

    2009-01-01

    Historical developments regarding the attitude concept are reviewed, and set the stage for consideration of a theoretical perspective that views attitude, not as a hypothetical construct, but as evaluative knowledge. A model of attitudes as object-evaluation associations of varying strength is summarized, along with research supporting the model’s contention that at least some attitudes are represented in memory and activated automatically upon the individual’s encountering the attitude object. The implications of the theoretical perspective for a number of recent discussions related to the attitude concept are elaborated. Among these issues are the notion of attitudes as “constructions,” the presumed malleability of automatically-activated attitudes, correspondence between implicit and explicit measures of attitude, and postulated dual or multiple attitudes. PMID:19424447

  3. Blogging for Evaluating Objectives in an International Nursing Course.

    PubMed

    Strang, Sharon; Knopp, Andrea; Schubert, Carolyn

    2015-01-01

    Nursing educators need to adapt to meet students in new technological spaces and in the increasingly global environment. This article provides background on blogging as an educational tool and the use of a blogging assignment for evaluation of course objectives in an international graduate nursing course. The blog is a part of a study-abroad experience in Kenya, where graduate nursing students learn about Kenyan culture and work in the health care system. PMID:25888104

  4. A portfolio evaluation framework for air transportation improvement projects

    NASA Astrophysics Data System (ADS)

    Baik, Hyeoncheol

    This thesis explores the application of portfolio theory to the Air Transportation System (ATS) improvement. The ATS relies on complexly related resources and different stakeholder groups. Moreover, demand for air travel is significantly increasing relative to capacity of air transportation. In this environment, improving the ATS is challenging. Many projects, defined here as technologies or initiatives, have been proposed for improvement, and some have been demonstrated in practice. However, there is no clear understanding of how well these projects work in different conditions nor of how they interact with each other or with existing systems. These limitations make it difficult to develop good project combinations, or portfolios, that maximize improvement. To help address this gap, a framework for identifying good portfolios is proposed. The framework can be applied to individual projects or portfolios of projects. Projects or portfolios are evaluated using four different groups of factors (effectiveness, time-to-implement, scope of applicability, and stakeholder impacts). Portfolios are also evaluated in terms of interaction-determining factors (prerequisites, co-requisites, limiting factors, and amplifying factors) because, while a given project might work well in isolation, interdependencies between projects or with existing systems could result in lower overall performance in combination. Ways to communicate a portfolio to decision makers are also introduced. The framework is unique because (1) it allows using a variety of available data, and (2) it covers diverse benefit metrics. For demonstrating the framework, an application to ground delay management projects serves as a case study. The portfolio evaluation approach introduced in this thesis can aid decision makers and researchers at universities and aviation agencies such as the Federal Aviation Administration (FAA), the National Aeronautics and Space Administration (NASA), and the Department of Defense (DoD), in

  5. Evaluating the inverse reasoning account of object discovery.

    PubMed

    Carroll, Christopher D; Kemp, Charles

    2015-06-01

    People routinely make inferences about unobserved objects. A hotel guest with welts on his arms, for example, will often worry about bed bugs. The discovery of unobserved objects almost always involves a backward inference from some observed effects (e.g., welts) to unobserved causes (e.g., bed bugs). The inverse reasoning account, which is typically formalized as Bayesian inference, posits that the strength of a backward inference is closely connected to the strength of the corresponding forward inference from the unobserved causes to the observed effects. We evaluated the inverse reasoning account of object discovery in three experiments where participants were asked to discover the unobserved "attractors" and "repellers" that controlled a "particle" moving within an arena. Experiments 1 and 2 showed that participants often failed to provide the best explanations for various particle motions, even when the best explanations were simple and when participants enthusiastically endorsed these explanations when presented with them. This failure demonstrates that object discovery is critically dependent on the processes that support hypothesis generation-processes that the inverse reasoning account does not explain. Experiment 3 demonstrated that people sometimes generate explanations that are invalid even according to their own forward inferences, suggesting that the psychological processes that support forward and backward inference are less intertwined than the inverse reasoning account suggests. The experimental findings support an alternative account of object discovery in which people rely on heuristics to generate possible explanations. PMID:25824861
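
    The toy calculation below illustrates the backward inference that the inverse reasoning account formalizes: posterior belief in each candidate cause is proportional to its prior times the forward likelihood of the observed effect. All probabilities are invented for illustration.

    ```python
    # Toy sketch of the inverse-reasoning (Bayesian) account: the strength of a
    # backward inference from an observed effect to an unobserved cause follows
    # from the forward likelihoods and the priors. All numbers are illustrative.
    priors = {"bed bugs": 0.02, "allergy": 0.10, "no cause": 0.88}
    likelihood_of_welts = {"bed bugs": 0.90, "allergy": 0.50, "no cause": 0.01}

    unnormalized = {c: priors[c] * likelihood_of_welts[c] for c in priors}
    total = sum(unnormalized.values())
    posterior = {c: v / total for c, v in unnormalized.items()}

    for cause, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
        print(f"P({cause} | welts) = {p:.3f}")
    ```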

  6. A Patient-Centered Framework for Evaluating Digital Maturity of Health Services: A Systematic Review

    PubMed Central

    Callahan, Ryan; Darzi, Ara; Mayer, Erik

    2016-01-01

    Background Digital maturity is the extent to which digital technologies are used as enablers to deliver a high-quality health service. Extensive literature exists about how to assess the components of digital maturity, but it has not been used to design a comprehensive framework for evaluation. Consequently, the measurement systems that do exist are limited to evaluating digital programs within one service or care setting, meaning that digital maturity evaluation is not accounting for the needs of patients across their care pathways. Objective The objective of our study was to identify the best methods and metrics for evaluating digital maturity and to create a novel, evidence-based tool for evaluating digital maturity across patient care pathways. Methods We systematically reviewed the literature to find the best methods and metrics for evaluating digital maturity. We searched the PubMed database for all papers relevant to digital maturity evaluation. Papers were selected if they provided insight into how to appraise digital systems within the health service and if they indicated the factors that constitute or facilitate digital maturity. Papers were analyzed to identify methodology for evaluating digital maturity and indicators of digitally mature systems. We then used the resulting information about methodology to design an evaluation framework. Following that, the indicators of digital maturity were extracted and grouped into increasing levels of maturity and operationalized as metrics within the evaluation framework. Results We identified 28 papers as relevant to evaluating digital maturity, from which we derived 5 themes. The first theme concerned general evaluation methodology for constructing the framework (7 papers). The following 4 themes were the increasing levels of digital maturity: resources and ability (6 papers), usage (7 papers), interoperability (3 papers), and impact (5 papers). The framework includes metrics for each of these levels at each

  7. A multi-objective optimization framework to model 3D river and landscape evolution processes

    NASA Astrophysics Data System (ADS)

    Bizzi, Simone; Castelletti, Andrea; Cominola, Andrea; Mason, Emanuele; Paik, Kyungrock

    2013-04-01

    Water and sediment interactions shape hillslopes, regulate soil erosion and sedimentation, and organize river networks. Landscape evolution and river organization occur at various spatial and temporal scales, and their understanding and modelling are highly complex. The idea of a least action principle governing river network evolution has been proposed many times as a simpler approach than others existing in the literature. These theories assume that river networks, as observed in nature, self-organize and act on soil transportation in order to satisfy a particular "optimality" criterion. Accordingly, river and landscape weathering can be simulated by solving an optimization problem, where the choice of the criterion to be optimized becomes the initial assumption. The comparison between natural river networks and optimized ones verifies the correctness of this initial assumption. Yet, various criteria have been proposed in the literature and there is no consensus on which is better able to explain river network features observed in nature, such as network branching and river bed profile: each one is able to reproduce some river features through simplified modelling of the natural processes, but it fails to characterize the whole complexity (3D and its dynamics) of the natural processes. Some of the criteria formulated in the literature partly conflict: the reason is that their formulations rely on mathematical and theoretical simplifications of the natural system that are suitable for specific spatial and temporal scales but fail to represent the whole set of processes characterizing landscape evolution. In an attempt to address some of these scientific questions, we tested the suitability of using a multi-objective optimization framework to describe river and landscape evolution in a 3D spatial domain. A synthetic landscape is used for this purpose. Multiple, alternative river network evolutions, corresponding to as many tradeoffs between the different and partly

  8. Evaluating metal-organic frameworks for natural gas storage

    SciTech Connect

    Mason, JA; Veenstra, M; Long, JR

    2014-01-01

    Metal-organic frameworks have received significant attention as a new class of adsorbents for natural gas storage; however, inconsistencies in reporting high-pressure adsorption data and a lack of comparative studies have made it challenging to evaluate both new and existing materials. Here, we briefly discuss high-pressure adsorption measurements and review efforts to develop metal-organic frameworks with high methane storage capacities. To illustrate the most important properties for evaluating adsorbents for natural gas storage and for designing a next generation of improved materials, six metal-organic frameworks and an activated carbon, with a range of surface areas, pore structures, and surface chemistries representative of the most promising adsorbents for methane storage, are evaluated in detail. High-pressure methane adsorption isotherms are used to compare gravimetric and volumetric capacities, isosteric heats of adsorption, and usable storage capacities. Additionally, the relative importance of increasing volumetric capacity, rather than gravimetric capacity, for extending the driving range of natural gas vehicles is highlighted. Other important systems-level factors, such as thermal management, mechanical properties, and the effects of impurities, are also considered, and potential materials synthesis contributions to improving performance in a complete adsorbed natural gas system are discussed.
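
    As an illustration of the usable (deliverable) capacity comparison discussed above, the sketch below interpolates a methane isotherm and subtracts the uptake at a minimum delivery pressure from the uptake at the storage pressure; the isotherm points are placeholders, and the 65/5.8 bar pressures are commonly used reference conditions assumed here rather than values taken from the paper.

    ```python
    # Illustrative calculation of usable (deliverable) methane capacity from a
    # high-pressure adsorption isotherm: uptake at the storage pressure minus
    # uptake at the minimum delivery pressure. Isotherm points are made up.
    import numpy as np

    pressure_bar = np.array([1, 5, 10, 20, 35, 50, 65, 80])         # bar
    uptake_cc_cc = np.array([20, 60, 95, 140, 175, 200, 220, 235])  # cm3(STP)/cm3

    p_storage, p_delivery = 65.0, 5.8    # assumed reference pressures
    usable = (np.interp(p_storage, pressure_bar, uptake_cc_cc)
              - np.interp(p_delivery, pressure_bar, uptake_cc_cc))
    print(f"usable volumetric capacity ~ {usable:.0f} cm3(STP)/cm3")
    ```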

  9. A novel objective evaluation method for trunk function

    PubMed Central

    Kinoshita, Kazuaki; Hashimoto, Masashi; Ishida, Kazunari; Yoneda, Yuki; Naka, Yuta; Kitanishi, Hideyuki; Oyagi, Hirotaka; Hoshino, Yuichi; Shibanuma, Nao

    2015-01-01

    [Purpose] To investigate whether an objective evaluation method for trunk function, namely the “trunk righting test”, is reproducible and reliable by testing on different observers (from experienced to beginners) and by confirming the test-retest reliability. [Subjects] Five healthy subjects were evaluated in this correlation study. [Methods] A handheld dynamometer was used in the assessments. The motor task was a trunk righting motion by moving the part with the sensor pad 10 cm outward from the original position. During measurement, the posture was held at maximum effort for 5 s. Measurement was repeated three times. Interexaminer reproducibility was examined in two physical therapists with 1 year of experience and one physical therapist with 7 years of experience. The measured values were evaluated for reliability by using intraclass correlation coefficients (ICC 1.1) and interclass correlation coefficients (ICC 2.1). [Results] The test-retest reliability ICC 1.1 and ICC 2.1 were all high. The ICC 1.1 was >0.90. The ICC 2.1 was 0.93. [Conclusion] We developed the trunk righting test as a novel objective evaluation method for trunk function. As the study included inexperienced therapists, the results suggest that the trunk righting test could be used in the clinic, independent of the experience of the therapists. PMID:26157279
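
    The sketch below shows how ICC(1,1) and ICC(2,1) values of the kind reported above can be computed from a subjects-by-raters matrix using the standard Shrout and Fleiss mean-square formulas; the rating matrix is invented and is not data from the study.

    ```python
    # Sketch of the standard Shrout & Fleiss intraclass correlation formulas
    # often used for test-retest (ICC 1,1) and inter-rater (ICC 2,1) reliability.
    import numpy as np

    ratings = np.array([[55.0, 53.0, 57.0],    # rows: subjects, cols: raters
                        [48.0, 50.0, 49.0],
                        [62.0, 60.0, 63.0],
                        [40.0, 42.0, 41.0],
                        [51.0, 49.0, 52.0]])
    n, k = ratings.shape

    grand = ratings.mean()
    subj_means = ratings.mean(axis=1)
    rater_means = ratings.mean(axis=0)

    bms = k * np.sum((subj_means - grand) ** 2) / (n - 1)           # between subjects
    jms = n * np.sum((rater_means - grand) ** 2) / (k - 1)          # between raters
    wms = np.sum((ratings - subj_means[:, None]) ** 2) / (n * (k - 1))
    ems = (np.sum((ratings - grand) ** 2)
           - (n - 1) * bms - (k - 1) * jms) / ((n - 1) * (k - 1))   # residual

    icc_1_1 = (bms - wms) / (bms + (k - 1) * wms)
    icc_2_1 = (bms - ems) / (bms + (k - 1) * ems + k * (jms - ems) / n)
    print(f"ICC(1,1) = {icc_1_1:.2f}, ICC(2,1) = {icc_2_1:.2f}")
    ```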

  10. Improvement of the R-SWAT-FME framework to support multiple variables and multi-objective functions

    USGS Publications Warehouse

    Wu, Yiping; Liu, Shu-Guang

    2014-01-01

    Application of numerical models is a common practice in the environmental field for investigation and prediction of natural and anthropogenic processes. However, process knowledge, parameter identifiability, sensitivity, and uncertainty analyses are still a challenge for large and complex mathematical models such as the hydrological/water quality model, Soil and Water Assessment Tool (SWAT). In this study, the previously developed R-language SWAT-Flexible Modeling Environment (R-SWAT-FME) was improved to support multiple model variables and objectives at multiple time steps (i.e., daily, monthly, and annually). This expansion is significant because there is usually more than one variable (e.g., water, nutrients, and pesticides) of interest for environmental models like SWAT. To further facilitate its easy use, we also simplified its application requirements without compromising its merits, such as the user-friendly interface. To evaluate the performance of the improved framework, we used a case study focusing on both streamflow and nitrate nitrogen in the Upper Iowa River Basin (above Marengo) in the United States. Results indicated that the R-SWAT-FME performs well and is comparable to the built-in auto-calibration tool in multi-objective model calibration. Overall, the enhanced R-SWAT-FME can be useful for the SWAT community, and the methods we used can also be valuable for wrapping potential R packages with other environmental models.
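
    As a rough illustration of a multi-variable objective of the kind such a framework can expose to a calibrator, the sketch below computes the Nash-Sutcliffe efficiency separately for streamflow and nitrate and reports both values; the series are random placeholders, and the metric choice is an assumption rather than the framework's prescribed objective.

    ```python
    # Illustrative multi-variable objective: Nash-Sutcliffe efficiency (NSE)
    # computed for two variables (streamflow, nitrate). Data are placeholders.
    import numpy as np

    def nse(sim, obs):
        return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    rng = np.random.default_rng(3)
    obs_flow = rng.gamma(2.0, 5.0, 365)
    sim_flow = obs_flow + rng.normal(0, 1.0, 365)
    obs_no3 = rng.gamma(1.5, 0.8, 365)
    sim_no3 = obs_no3 + rng.normal(0, 0.2, 365)

    objectives = {"flow_NSE": nse(sim_flow, obs_flow),
                  "nitrate_NSE": nse(sim_no3, obs_no3)}
    print({k: round(v, 3) for k, v in objectives.items()})
    ```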

  11. MRBrainS Challenge: Online Evaluation Framework for Brain Image Segmentation in 3T MRI Scans

    PubMed Central

    Mendrik, Adriënne M.; Vincken, Koen L.; Kuijf, Hugo J.; Breeuwer, Marcel; Bouvy, Willem H.; de Bresser, Jeroen; Alansary, Amir; de Bruijne, Marleen; Carass, Aaron; El-Baz, Ayman; Jog, Amod; Katyal, Ranveer; Khan, Ali R.; van der Lijn, Fedde; Mahmood, Qaiser; Mukherjee, Ryan; van Opbroek, Annegreet; Paneri, Sahil; Pereira, Sérgio; Rajchl, Martin; Sarikaya, Duygu; Smedby, Örjan; Silva, Carlos A.; Vrooman, Henri A.; Vyas, Saurabh; Wang, Chunliang; Zhao, Liang; Biessels, Geert Jan; Viergever, Max A.

    2015-01-01

    Many methods have been proposed for tissue segmentation in brain MRI scans. The multitude of methods proposed complicates the choice of one method above others. We have therefore established the MRBrainS online evaluation framework for evaluating (semi)automatic algorithms that segment gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) on 3T brain MRI scans of elderly subjects (65–80 y). Participants apply their algorithms to the provided data, after which their results are evaluated and ranked. Full manual segmentations of GM, WM, and CSF are available for all scans and used as the reference standard. Five datasets are provided for training and fifteen for testing. The evaluated methods are ranked based on their overall performance to segment GM, WM, and CSF and evaluated using three evaluation metrics (Dice, H95, and AVD) and the results are published on the MRBrainS13 website. We present the results of eleven segmentation algorithms that participated in the MRBrainS13 challenge workshop at MICCAI, where the framework was launched, and three commonly used freeware packages: FreeSurfer, FSL, and SPM. The MRBrainS evaluation framework provides an objective and direct comparison of all evaluated algorithms and can aid in selecting the best performing method for the segmentation goal at hand. PMID:26759553
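
    Two of the named metrics are straightforward to illustrate directly: the sketch below computes the Dice overlap and the absolute volume difference (AVD) for a pair of binary masks, while the 95th-percentile Hausdorff distance (H95) would additionally require surface-distance computations. The masks are random placeholders, not challenge data.

    ```python
    # Sketch of two of the named evaluation metrics for a binary segmentation:
    # the Dice overlap coefficient and the absolute volume difference (AVD, %).
    import numpy as np

    rng = np.random.default_rng(2)
    reference = rng.random((64, 64, 64)) > 0.5     # manual "ground truth" mask
    automatic = rng.random((64, 64, 64)) > 0.5     # algorithm output mask

    intersection = np.logical_and(reference, automatic).sum()
    dice = 2.0 * intersection / (reference.sum() + automatic.sum())
    avd = abs(int(automatic.sum()) - int(reference.sum())) / reference.sum() * 100

    print(f"Dice = {dice:.3f}, AVD = {avd:.1f}%")
    ```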

  12. Genetic Counseling Milestones: A Framework for Student Competency Evaluation.

    PubMed

    Guy, Carrie

    2016-08-01

    Graduate medical education has recently increased focus on the development of medical specialty competency milestones to provide a targeted tool for medical resident evaluation. Milestones provide developmental assessment of the attainment of competencies over the course of an educational program. An educational framework is described to explore the development of Genetic Counseling Milestones for the evaluation of the development of genetic counseling competencies by genetic counseling students. The development of Genetic Counseling Milestones may provide a valuable tool to assess genetic counseling students across all program activities. Historical educational context, current practices, and potential benefits and challenges in the development of Genetic Counseling Milestones are discussed. PMID:26462934

  13. The challenge of evaluating complex interventions: a framework for evaluating media advocacy.

    PubMed

    Stead, Martine; Hastings, Gerard; Eadie, Douglas

    2002-06-01

    New health promotion and public health approaches such as media advocacy pose particular evaluation challenges. Evaluation is important to provide feedback to media advocacy practitioners on how to enhance their efforts, and to funders and researchers seeking to assess media advocacy's effectiveness as a health promotion strategy. The media advocacy evaluation literature contains some examples of promising evaluation approaches but is still evolving. A comprehensive framework for the evaluation of media advocacy is presented. Building on existing approaches to evaluation in media advocacy and on current thinking regarding evaluation in health promotion, it proposes a series of indicators and research methods for evaluating media advocacy at the levels of formative, process and outcome evaluation. The framework can be used to encourage strategic reflection on the media advocacy process, to guide evaluation of specific interventions, and to demonstrate to funders the importance and complexity of evaluation in this promising field. PMID:12120850

  14. Benefit From Directional Microphone Hearing Aids: Objective and Subjective Evaluations

    PubMed Central

    Park, Hee-Sung; Jin, Sun Hwa; Choi, Ji Eun; Cho, Yang-Sun; Hong, Sung Hwa

    2015-01-01

    Objectives The aims of this study were to find and compare the effect of directional (DIR) processing of two different hearing aids via both subjective and objective methods, to determine the association between the results of the subjective and objective evaluations, and to find out individual predictive factors influencing the DIR benefit. Methods Twenty-six hearing aid users, each fitted unilaterally with two different experimental hearing aids, performed a modified Korean Hearing in Noise Test (K-HINT) in three DIR conditions: omnidirectional (OMNI) mode, OMNI plus noise reduction feature, and fixed DIR mode. In order to determine the DIR benefit within each hearing aid and compare performance of the DIR processing between hearing aids, a subjective questionnaire was administered covering the speech quality (SQ) and discomfort in noise (DN) domains. Correlation analysis of factors influencing DIR benefit was accomplished. Results Benefits from switching OMNI mode to DIR mode within both hearing aids in K-HINT were about 2.8 (standard deviation, 3.5) and 2.1 dB SNR (signal-to-noise ratio; SD, 2.5), but no significant difference in K-HINT results was found between OMNI mode and OMNI mode plus the noise reduction algorithm. The subjective evaluation resulted in better SQ and DN scores in DIR mode than in OMNI mode. However, the difference in scores on both SQ and DN between the two hearing aids with DIR mode was not statistically significant. No individual factor significantly affected the subjective or objective DIR benefits. Conclusion DIR benefit was found not only in the objective measurement performed in the laboratory but also in the subjective questionnaires, but the subjective results failed to show a significant correlation with the DIR benefit obtained in the K-HINT. Factors influencing individual variation in perceptual DIR benefit were still hard to explain. PMID:26330918

  15. A flexible object-based software framework for modeling complex systems with interacting natural and societal processes.

    SciTech Connect

    Christiansen, J. H.

    2000-06-15

    The Dynamic Information Architecture System (DIAS) is a flexible, extensible, object-based framework for developing and maintaining complex multidisciplinary simulations. The DIAS infrastructure makes it feasible to build and manipulate complex simulation scenarios in which many thousands of objects can interact via dozens to hundreds of concurrent dynamic processes. The flexibility and extensibility of the DIAS software infrastructure stem mainly from (1) the abstraction of object behaviors, (2) the encapsulation and formalization of model functionality, and (3) the mutability of domain object contents. DIAS simulation objects are inherently capable of highly flexible and heterogeneous spatial realizations. Geospatial graphical representation of DIAS simulation objects is addressed via the GeoViewer, an object-based GIS toolkit application developed at ANL. DIAS simulation capabilities have been extended by inclusion of societal process models generated by the Framework for Addressing Cooperative Extended Transactions (FACET), another object-based framework developed at Argonne National Laboratory. By using FACET models to implement societal behaviors of individuals and organizations within larger DIAS-based natural systems simulations, it has become possible to conveniently address a broad range of issues involving interaction and feedback among natural and societal processes. Example DIAS application areas discussed in this paper include a dynamic virtual oceanic environment, detailed simulation of clinical, physiological, and logistical aspects of health care delivery, and studies of agricultural sustainability of urban centers under environmental stress in ancient Mesopotamia.

  16. Marine monitoring: Its shortcomings and mismatch with the EU Water Framework Directive's objectives.

    PubMed

    de Jonge, V N; Elliott, M; Brauer, V S

    2006-01-01

    The main goal of the EU Water Framework Directive (WFD) is to achieve good ecological status across European surface waters by 2015 and as such, it offers the opportunity and thus the challenge to improve the protection of our coastal systems. It is the main example for Europe's increasing desire to conserve aquatic ecosystems. Ironically, since c. 1975 the increasing adoption of EU directives has been accompanied by a decreasing interest of, for example, the Dutch government in assessing the quality of its coastal and marine ecosystems. The surveillance and monitoring started in the Netherlands in 1971 have declined since the 1980s, resulting in a 35% reduction in sampling stations. Given this decline and various interruptions, the remaining data series are considered insufficient for purposes other than trend analysis and compliance. The Dutch marine managers have apparently chosen a minimal (cost-effective) approach despite the WFD implicitly requiring the incorporation of the system's 'ecological complexity' in indices used to evaluate the ecological status of highly variable systems such as transitional and coastal waters. These indices should include both the community structure and system functioning, and to make this really cost-effective a new monitoring strategy is required with a tailor-made programme. Since the adoption of the WFD in 2000 and the launching of the European Marine Strategy in 2002 (and the recently proposed Marine Framework Directive) we suggest reviewing national monitoring programmes in order to integrate water quality monitoring and biological monitoring and change from 'station oriented monitoring' to 'basin or system oriented monitoring' in combination with specific 'cause-effect' studies for highly dynamic coastal systems. Progress will be made if the collected information is integrated and aggregated in valuable tools such as structure- and functioning-oriented computer simulation models and Decision Support Systems. The development of ecological indices

  17. Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation

    NASA Technical Reports Server (NTRS)

    Afjeh, Abdollah A.; Reed, John A.

    2003-01-01

    The following reports are presented on this project: a first-year progress report on Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; a second-year progress report on Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; An Extensible, Interchangeable and Sharable Database Model for Improving Multidisciplinary Aircraft Design; Interactive, Secure Web-enabled Aircraft Engine Simulation Using XML Databinding Integration; and Improving the Aircraft Design Process Using Web-based Modeling and Simulation.

  18. Objective evaluation of the visual acuity in human eyes

    NASA Astrophysics Data System (ADS)

    Rosales, M. A.; López-Olazagasti, E.; Ramírez-Zavaleta, G.; Varillas, G.; Tepichín, E.

    2009-08-01

    Traditionally, the quality of human vision is evaluated by a subjective test in which the examiner asks the patient to read a series of characters of different sizes, located at a certain distance from the patient. Typically, we need to ensure a subtended visual angle of 5 minutes of arc, which implies an object 8.8 mm high located at 6 meters (normal or 20/20 visual acuity). These characters constitute what is known as the Snellen chart, universally used to evaluate the spatial resolution of the human eye. This process of character identification is carried out by means of the eye-brain system, giving an evaluation of subjective visual performance. In this work we consider the eye as an isolated image-forming system and show that it is possible to separate the function of the eye from that of the brain in this process. By knowing the impulse response of the eye's optical system, we can obtain, in advance, the image of the entire Snellen chart at once. From this information, we obtain the objective performance of the eye as the optical system under test. This type of result might help to detect anomalous conditions of human vision, such as the so-called "cerebral myopia".
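
    The sketch below illustrates the isolated-optics idea: convolving an optotype image with the eye's point-spread function predicts the retinal image independently of any neural processing. The Gaussian point-spread function used here is a synthetic stand-in for a measured one.

    ```python
    # Conceptual sketch: convolve an optotype (letter chart) image with the
    # eye's point-spread function to predict the retinal image, independent of
    # brain processing. The Gaussian PSF is a placeholder for a measured PSF.
    import numpy as np
    from scipy.signal import fftconvolve

    chart = np.zeros((128, 128))
    chart[40:90, 60:68] = 1.0                      # crude vertical-bar "optotype"

    y, x = np.mgrid[-15:16, -15:16]
    sigma = 2.5                                    # blur width, arbitrary units
    psf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    psf /= psf.sum()                               # normalise to unit energy

    retinal_image = fftconvolve(chart, psf, mode="same")
    print("max contrast before/after blur:", chart.max(), round(retinal_image.max(), 3))
    ```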

  19. Approach to proliferation risk assessment based on multiple objective analysis framework

    SciTech Connect

    Andrianov, A.; Kuptsov, I.

    2013-07-01

    An approach to the assessment of proliferation risk using the methods of multi-criteria decision making and multi-objective optimization is presented. The approach makes it possible to take into account the specific features of the national nuclear infrastructure and possible proliferation strategies (motivations, intentions, and capabilities). Three examples of applying the approach are shown. First, the approach has been used to evaluate the attractiveness of HEU (highly enriched uranium) production scenarios at a clandestine enrichment facility using centrifuge enrichment technology. Secondly, the approach has been applied to assess the attractiveness of scenarios for undeclared production of plutonium or HEU by theft of materials circulating in nuclear fuel cycle facilities and thermal reactors. Thirdly, the approach has been used to perform a comparative analysis of the structures of developing nuclear power systems based on different types of nuclear fuel cycles, the analysis being based on indicators of proliferation risk.

  20. An Economic Evaluation Framework for Assessing Renewable Energy Projects

    SciTech Connect

    Omitaomu, Olufemi A; Badiru, Adedeji B

    2012-01-01

    It is becoming increasingly imperative to integrate renewable energy, such as solar and wind, into electricity generation due to increased regulations on air and water pollution and a sociopolitical desire to develop more clean energy sources. This increased spotlight on renewable energy requires evaluating competing projects using either conventional economic analysis techniques or other economics-based models and approaches in order to select a subset of the projects to be funded. Even then, there are reasons to suspect that techniques applied to renewable energy projects may result in decisions that will reject viable projects due to the use of a limited number of quantifiable and tangible attributes about the projects. This paper presents a framework for economic evaluation of renewable energy projects. The framework is based on a systems approach in which the processes within the entire network of the system, from generation to consumption, are accounted for. Furthermore, the framework uses the concept of fuzzy systems to calculate the value of information under conditions of uncertainty.

  1. Electronic immunization data collection systems: application of an evaluation framework

    PubMed Central

    2014-01-01

    Background Evaluating the features and performance of health information systems can serve to strengthen the systems themselves as well as to guide other organizations in the process of designing and implementing surveillance tools. We adapted an evaluation framework in order to assess electronic immunization data collection systems, and applied it in two Ontario public health units. Methods The Centers for Disease Control and Prevention’s Guidelines for Evaluating Public Health Surveillance Systems are broad in nature and serve as an organizational tool to guide the development of comprehensive evaluation materials. Based on these Guidelines, and informed by other evaluation resources and input from stakeholders in the public health community, we applied an evaluation framework to two examples of immunization data collection and examined several system attributes: simplicity, flexibility, data quality, timeliness, and acceptability. Data collection approaches included key informant interviews, logic and completeness assessments, client surveys, and on-site observations. Results Both evaluated systems allow high-quality immunization data to be collected, analyzed, and applied in a rapid fashion. However, neither system is currently able to link to other providers’ immunization data or provincial data sources, limiting the comprehensiveness of coverage assessments. We recommended that both organizations explore possibilities for external data linkage and collaborate with other jurisdictions to promote a provincial immunization repository or data sharing platform. Conclusions Electronic systems such as the ones described in this paper allow immunization data to be collected, analyzed, and applied in a rapid fashion, and represent the infostructure required to establish a population-based immunization registry, critical for comprehensively assessing vaccine coverage. PMID:24423014

  2. 78 FR 52933 - Strengthening the Operating Framework and Furthering the Objectives of Coalition for Accelerating...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-27

    ... Objectives of Coalition for Accelerating Standards and Therapies Initiative (U24) AGENCY: Food and Drug... Objectives The CFAST Initiative aims to accelerate clinical research and medical product development...

  3. A software engineering perspective on environmental modeling framework design: The object modeling system

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The environmental modeling community has historically been concerned with the proliferation of models and the effort associated with collective model development tasks (e.g., code generation, data provisioning and transformation, etc.). Environmental modeling frameworks (EMFs) have been developed to...

  4. An Evaluation Framework for Obesity Prevention Policy Interventions

    PubMed Central

    Sommers, Janice; Vu, Maihan; Jernigan, Jan; Payne, Gayle; Thompson, Diane; Heiser, Claire; Farris, Rosanne; Ammerman, Alice

    2012-01-01

    As the emphasis on preventing obesity has grown, so have calls for interventions that extend beyond individual behaviors and address changes in environments and policies. Despite the need for policy action, little is known about policy approaches that are most effective at preventing obesity. The Centers for Disease Control and Prevention (CDC) and others are funding the implementation and evaluation of new obesity prevention policies, presenting a distinct opportunity to learn from these practice-based initiatives and build the body of evidence-based approaches. However, contributions from this policy activity are limited by the incomplete and inconsistent evaluation data collected on policy processes and outcomes. We present a framework developed by the CDC-funded Center of Excellence for Training and Research Translation that public health practitioners can use to evaluate policy interventions and identify the practice-based evidence needed to fill the gaps in effective policy approaches to obesity prevention. PMID:22742594

  5. Evaluation of a performance appraisal framework for radiation therapists in planning and simulation

    SciTech Connect

    Becker, Jillian; Bridge, Pete; Brown, Elizabeth; Lusk, Ryan; Ferrari-Anderson, Janet

    2015-06-15

    Constantly evolving technology and techniques within radiation therapy require practitioners to maintain a continuous approach to professional development and training. Systems of performance appraisal and adoption of regular feedback mechanisms are vital to support this development yet frequently lack structure and rely on informal peer support. A Radiation Therapy Performance Appraisal Framework (RT-PAF) for radiation therapists in planning and simulation was developed to define expectations of practice and promote a supportive and objective culture of performance and skills appraisal. Evaluation of the framework was conducted via an anonymous online survey tool. Nine peer reviewers and fourteen recipients provided feedback on its effectiveness and the challenges and limitations of the approach. Findings from the evaluation were positive and suggested that both groups gained benefit from and expressed a strong interest in embedding the approach more routinely. Respondents identified common challenges related to the limited ability to implement suggested development strategies; this was strongly associated with time and rostering issues. This framework successfully defined expectations for practice and provided a fair and objective feedback process that focussed on skills development. It empowered staff to maintain their skills and reach their professional potential. Management support, particularly in regard to provision of protected time was highlighted as critical to the framework's ongoing success. The demonstrated benefits arising in terms of staff satisfaction and development highlight the importance of this commitment to the modern radiation therapy workforce.

  6. Method and apparatus for evaluating multilayer objects for imperfections

    NASA Technical Reports Server (NTRS)

    Heyman, Joseph S. (Inventor); Abedin, Nurul (Inventor); Sun, Kuen J. (Inventor)

    1997-01-01

    A multilayer object having multiple layers arranged in a stacking direction is evaluated for imperfections such as voids, delaminations and microcracks. First, an acoustic wave is transmitted into the object in the stacking direction via an appropriate transducer/waveguide combination. The wave propagates through the multilayer object and is received by another transducer/waveguide combination preferably located on the same surface as the transmitting combination. The received acoustic wave is correlated with the presence or absence of imperfections by, e.g., generating pulse echo signals indicative of the received acoustic wave, wherein the successive signals form distinct groups over time. The respective peak amplitudes of each group are sampled and curve fit to an exponential curve, wherein a substantial fit of approximately 80-90% indicates an absence of imperfections and a significant deviation indicates the presence of imperfections. Alternatively, the time interval between distinct groups can be measured, wherein equal intervals indicate the absence of imperfections and unequal intervals indicate the presence of imperfections.
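
    The sketch below illustrates the described amplitude check: successive echo-group peak amplitudes are fit to an exponential decay and the goodness of fit is compared against a threshold in the 80-90% range mentioned above. The sample amplitudes and the exact threshold are illustrative assumptions.

    ```python
    # Sketch of the peak-amplitude check: fit successive echo-group amplitudes
    # to an exponential decay and use the goodness of fit as an indicator.
    import numpy as np
    from scipy.optimize import curve_fit

    def decay(n, a, b):
        return a * np.exp(-b * n)

    echo_index = np.arange(1, 7)
    peak_amp = np.array([1.00, 0.61, 0.38, 0.24, 0.15, 0.09])   # measured peaks

    params, _ = curve_fit(decay, echo_index, peak_amp, p0=(1.0, 0.5))
    ss_res = np.sum((peak_amp - decay(echo_index, *params)) ** 2)
    ss_tot = np.sum((peak_amp - peak_amp.mean()) ** 2)
    r_squared = 1 - ss_res / ss_tot

    print(f"R^2 = {r_squared:.3f} ->",
          "no imperfection indicated" if r_squared >= 0.85 else "possible imperfection")
    ```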

  7. Method and Apparatus for Evaluating Multilayer Objects for Imperfections

    NASA Technical Reports Server (NTRS)

    Heyman, Joseph S. (Inventor); Abedin, Nurul (Inventor); Sun, Kuen J. (Inventor)

    1999-01-01

    A multilayer object having multiple layers arranged in a stacking direction is evaluated for imperfections such as voids, delaminations and microcracks. First, an acoustic wave is transmitted into the object in the stacking direction via an appropriate transducer/waveguide combination. The wave propagates through the multilayer object and is received by another transducer/waveguide combination preferably located on the same surface as the transmitting combination. The received acoustic wave is correlated with the presence or absence of imperfections by, e.g., generating pulse echo signals indicative of the received acoustic wave, wherein the successive signals form distinct groups over time. The respective peak amplitudes of each group are sampled and curve fit to an exponential curve, wherein a substantial fit of approximately 80-90% indicates an absence of imperfections and a significant deviation indicates the presence of imperfections. Alternatively, the time interval between distinct groups can be measured, wherein equal intervals indicate the absence of imperfections and unequal intervals indicate the presence of imperfections.

  8. Developing a monitoring and evaluation framework to integrate and formalize the informal waste and recycling sector: the case of the Philippine National Framework Plan.

    PubMed

    Serrona, Kevin Roy B; Yu, Jeongsoo; Aguinaldo, Emelita; Florece, Leonardo M

    2014-09-01

    The Philippines has been making inroads in solid waste management with the enactment and implementation of the Republic Act 9003 or the Ecological Waste Management Act of 2000. Said legislation has had tremendous influence in terms of how the national and local government units confront the challenges of waste management in urban and rural areas using the reduce, reuse, recycle and recovery framework or 4Rs. One of the sectors needing assistance is the informal waste sector whose aspiration is legal recognition of their rank and integration of their waste recovery activities in mainstream waste management. To realize this, the Philippine National Solid Waste Management Commission initiated the formulation of the National Framework Plan for the Informal Waste Sector, which stipulates approaches, strategies and methodologies to concretely involve the said sector in different spheres of local waste management, such as collection, recycling and disposal. What needs to be fleshed out is the monitoring and evaluation component in order to gauge qualitative and quantitative achievements vis-a-vis the Framework Plan. In the process of providing an enabling environment for the informal waste sector, progress has to be monitored and verified qualitatively and quantitatively and measured against activities, outputs, objectives and goals. Using the Framework Plan as the reference, this article developed monitoring and evaluation indicators using the logical framework approach in project management. The primary objective is to institutionalize monitoring and evaluation, not just in informal waste sector plans, but in any waste management initiatives to ensure that envisaged goals are achieved. PMID:25052014

  9. A Conceptual Framework for Evaluation of Public Health and Primary Care System Performance in Iran

    PubMed Central

    Jahanmehr, Nader; Rashidian, Arash; Khosravi, Ardeshir; Farzadfar, Farshad; Shariati, Mohammad; Majdzadeh, Reza; Sari, Ali Akbari; Mesdaghinia, Alireza

    2015-01-01

    Introduction: The main objective of this study was to design a conceptual framework, according to the policies and priorities of the ministry of health, to evaluate provincial public health and primary care performance and to assess their share in the overall health impacts of the community. Methods: We used several tools and techniques, including systems thinking, a literature review to identify relevant attributes of health system performance frameworks, and interviews with key stakeholders. The PubMed, Scopus, Web of Science, and Google Scholar databases and two specialized databases of Persian-language literature (IranMedex and SID) were searched using the main terms and keywords. Following decision-making and collective agreement among the different stakeholders, 51 core indicators were chosen from among 602 obtained indicators in a four-stage process for the monitoring and evaluation of Health Deputies. Results: We proposed a conceptual framework that identifies the performance area of Health Deputies among the other determinants of health and introduces a results chain for performance consisting of Input, Process, Output and Outcome indicators. We also proposed 5 dimensions for measuring the performance of Health Deputies: efficiency, effectiveness, equity, access and improvement of health status. Conclusion: The proposed conceptual framework clearly illustrates the Health Deputies' success in achieving the best health results and consequences in the country. The commitment of the ministry of health and the Health Deputies at the University of Medical Sciences is essential for full implementation of this framework and provision of the annual performance report. PMID:25946937

  10. A common evaluation framework for the African Health Initiative

    PubMed Central

    2013-01-01

    Background The African Health Initiative includes highly diverse partnerships in five countries (Ghana, Mozambique, Rwanda, Tanzania, and Zambia), each of which is working to improve population health by strengthening health systems and to evaluate the results. One aim of the Initiative is to generate cross-site learning that can inform implementation in the five partnerships during the project period and identify lessons that may be generalizable to other countries in the region. Collaborators in the Initiative developed a common evaluation framework as a basis for this cross-site learning. Methods This paper describes the components of the framework; this includes the conceptual model, core metrics to be measured in all sites, and standard guidelines for reporting on the implementation of partnership activities and contextual factors that may affect implementation, or the results it produces. We also describe the systems that have been put in place for data management, data quality assessments, and cross-site analysis of results. Results and conclusions The conceptual model for the Initiative highlights points in the causal chain between health system strengthening activities and health impact where evidence produced by the partnerships can contribute to learning. This model represents an important advance over its predecessors by including contextual factors and implementation strength as potential determinants, and explicitly including equity as a component of both outcomes and impact. Specific measurement challenges include the prospective documentation of program implementation and contextual factors. Methodological issues addressed in the development of the framework include the aggregation of data collected using different methods and the challenge of evaluating a complex set of interventions being improved over time based on continuous monitoring and intermediate results. PMID:23819778

  11. Objective Fidelity Evaluation in Multisensory Virtual Environments: Auditory Cue Fidelity in Flight Simulation

    PubMed Central

    Meyer, Georg F.; Wong, Li Ting; Timson, Emma; Perfect, Philip; White, Mark D.

    2012-01-01

    We argue that objective fidelity evaluation of virtual environments, such as flight simulation, should be human-performance-centred and task-specific rather than measure the match between simulation and physical reality. We show how principled experimental paradigms and behavioural models to quantify human performance in simulated environments, which have emerged from research in multisensory perception, provide a framework for the objective evaluation of the contribution of individual cues to human performance measures of fidelity. We present three examples in a flight simulation environment as a case study: Experiment 1: Detection and categorisation of auditory and kinematic motion cues; Experiment 2: Performance evaluation in a target-tracking task; Experiment 3: Transferrable learning of auditory motion cues. We show how the contribution of individual cues to human performance can be robustly evaluated for each task and that the contribution is highly task dependent. The same auditory cues that can be discriminated and are optimally integrated in experiment 1 do not contribute to target-tracking performance in an in-flight refuelling simulation without training (experiment 2). In experiment 3, however, we demonstrate that the auditory cue leads to significant, transferrable performance improvements with training. We conclude that objective fidelity evaluation requires a task-specific analysis of the contribution of individual cues. PMID:22957068

  12. A Framework and Model for Evaluating Clinical Decision Support Architectures

    PubMed Central

    Wright, Adam; Sittig, Dean F.

    2008-01-01

    In this paper, we develop a four-phase model for evaluating architectures for clinical decision support that focuses on: defining a set of desirable features for a decision support architecture; building a proof-of-concept prototype; demonstrating that the architecture is useful by showing that it can be integrated with existing decision support systems; and comparing its coverage to that of other architectures. We apply this framework to several well-known decision support architectures, including Arden Syntax, GLIF, SEBASTIAN and SAGE. PMID:18462999

  13. Identifying Anomalous Citations for Objective Evaluation of Scholarly Article Impact.

    PubMed

    Bai, Xiaomei; Xia, Feng; Lee, Ivan; Zhang, Jun; Ning, Zhaolong

    2016-01-01

    Evaluating the impact of a scholarly article is of great significance and has attracted considerable attention. Although citation-based evaluation approaches have been widely used, they face limitations, e.g. in identifying anomalous citation patterns. This negligence inevitably introduces unfairness and inaccuracy into article impact evaluation. In this study, in order to discover anomalous citations and ensure the fairness and accuracy of research outcome evaluation, we investigate the citation relationships between articles using the following factors: collaboration times, the time span of collaboration, citing times and the time span of citing, to weaken Conflict of Interest (COI) relationships in the citation network. Meanwhile, we study a special kind of COI, namely the suspected COI relationship. Based on the COI relationship, we further propose the COIRank algorithm, an innovative scheme for accurately assessing the impact of an article. Our method distinguishes citation strength, and utilizes the PageRank and HITS algorithms to rank scholarly articles comprehensively. The experiments are conducted on the American Physical Society (APS) dataset. We find that about 80.88% of the 26,366 articles examined contain citations contributed by co-authors, and 75.55% of these articles are cited by authors belonging to the same affiliation, indicating that COI and suspected COI should not be ignored when evaluating the impact of scientific papers objectively. Moreover, our experimental results demonstrate that the COIRank algorithm significantly outperforms state-of-the-art solutions. The validity of our approach is verified using the probability of Recommendation Intensity. PMID:27606817
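
    The abstract gives only the outline of COIRank (down-weighting COI citations, then combining PageRank and HITS), so the sketch below illustrates just the first ingredient: a PageRank computed over a citation graph whose edge weights are reduced when the citing and cited papers share an author. The toy graph, the 0.3 down-weight, and the field names are illustrative assumptions, not the published algorithm.

    ```python
    # Illustrative sketch of one ingredient of a COI-aware ranking: PageRank over a
    # citation graph whose edges are down-weighted when citing and cited papers
    # share an author. This is NOT the COIRank algorithm itself (which also uses
    # HITS and finer-grained COI factors); weights and data are assumptions.
    import networkx as nx

    papers = {
        "p1": {"authors": {"alice", "bob"}},
        "p2": {"authors": {"alice"}},        # shares an author with p1
        "p3": {"authors": {"carol"}},
    }
    citations = [("p2", "p1"), ("p3", "p1"), ("p3", "p2")]  # (citing, cited)

    G = nx.DiGraph()
    for citing, cited in citations:
        shared = papers[citing]["authors"] & papers[cited]["authors"]
        weight = 0.3 if shared else 1.0      # assumed down-weight for COI citations
        G.add_edge(citing, cited, weight=weight)

    scores = nx.pagerank(G, alpha=0.85, weight="weight")
    print(scores)
    ```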

  14. An evaluation framework and comparative analysis of the widely used first programming languages.

    PubMed

    Farooq, Muhammad Shoaib; Khan, Sher Afzal; Ahmad, Farooq; Islam, Saeed; Abid, Adnan

    2014-01-01

    Computer programming is at the core of the computer science curriculum. Several programming languages have been used to teach the first course in computer programming, and such languages are referred to as the first programming language (FPL). The pool of programming languages has been evolving with the development of new languages, and from this pool different languages have been used as the FPL at different times. Though the selection of an appropriate FPL is very important, it has remained a controversial issue given the many available choices. Many efforts have been made to design a good FPL; however, there is no adequate way to evaluate and compare the existing languages so as to find the most suitable FPL. In this article, we have proposed a framework to evaluate the existing imperative and object-oriented languages for their suitability as an appropriate FPL. Furthermore, based on the proposed framework, we have devised a customizable scoring function to compute a quantitative suitability score for a language, which reflects its conformance to the proposed framework. Lastly, we have also evaluated the conformance of the widely used FPLs to the proposed framework, and have computed their suitability scores. PMID:24586449
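
    The article's actual scoring function is not reproduced in this abstract, so the sketch below only illustrates the general idea of a customizable weighted-sum suitability score over evaluation criteria; the criteria, weights, and ratings are invented for illustration.

    ```python
    # Generic sketch of a customizable suitability score for a first programming
    # language (FPL). Criteria, weights, and ratings are illustrative assumptions,
    # not the values used in the cited article.
    def suitability_score(ratings, weights):
        """Weighted-sum score in [0, 1]; ratings and weights keyed by criterion."""
        total_weight = sum(weights.values())
        return sum(weights[c] * ratings[c] for c in weights) / total_weight

    weights = {"readability": 3, "simple_io": 2, "error_messages": 2, "tooling": 1}
    python_ratings = {"readability": 0.9, "simple_io": 0.9, "error_messages": 0.7, "tooling": 0.8}
    cpp_ratings    = {"readability": 0.5, "simple_io": 0.6, "error_messages": 0.4, "tooling": 0.8}

    print("Python:", round(suitability_score(python_ratings, weights), 2))
    print("C++:   ", round(suitability_score(cpp_ratings, weights), 2))
    ```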

  15. An Evaluation Framework and Comparative Analysis of the Widely Used First Programming Languages

    PubMed Central

    Farooq, Muhammad Shoaib; Khan, Sher Afzal; Ahmad, Farooq; Islam, Saeed; Abid, Adnan

    2014-01-01

    Computer programming is at the core of the computer science curriculum. Several programming languages have been used to teach the first course in computer programming, and such languages are referred to as the first programming language (FPL). The pool of programming languages has been evolving with the development of new languages, and from this pool different languages have been used as the FPL at different times. Though the selection of an appropriate FPL is very important, it has remained a controversial issue given the many available choices. Many efforts have been made to design a good FPL; however, there is no adequate way to evaluate and compare the existing languages so as to find the most suitable FPL. In this article, we have proposed a framework to evaluate the existing imperative and object-oriented languages for their suitability as an appropriate FPL. Furthermore, based on the proposed framework, we have devised a customizable scoring function to compute a quantitative suitability score for a language, which reflects its conformance to the proposed framework. Lastly, we have also evaluated the conformance of the widely used FPLs to the proposed framework, and have computed their suitability scores. PMID:24586449

  16. Evaluation in Cross-Cultural Contexts: Proposing a Framework for International Education and Training Project Evaluations.

    ERIC Educational Resources Information Center

    bin Yahya, Ismail; And Others

    This paper focuses on the need for increased sensitivity and responsiveness in international education and training project evaluations, particularly those in Third World countries. A conceptual-theoretical framework for designing and developing models appropriate for evaluating education and training projects in non-Western cultures is presented.…

  17. A Framework of Multi Objectives Negotiation for Dynamic Supply Chain Model

    NASA Astrophysics Data System (ADS)

    Chai, Jia Yee; Sakaguchi, Tatsuhiko; Shirase, Keiichi

    Trends of globalization and advances in Information Technology (IT) have created opportunities for collaborative manufacturing across national borders. A dynamic supply chain utilizes these advances to enable more flexibility in business cooperation. This research proposes a concurrent decision-making framework for a three-echelon dynamic supply chain model. The dynamic supply chain is formed by autonomous negotiation among agents based on a multi-agent approach. Instead of generating negotiation aspects (such as amount, price and due date) arbitrarily, this framework proposes to utilize the information available at the operational level of an organization in order to generate realistic negotiation aspects. The effectiveness of the proposed model is demonstrated by various case studies.

  18. Towards a Framework for Evaluating and Comparing Diagnosis Algorithms

    NASA Technical Reports Server (NTRS)

    Kurtoglu, Tolga; Narasimhan, Sriram; Poll, Scott; Garcia, David; Kuhn, Lukas; deKleer, Johan; vanGemund, Arjan; Feldman, Alexander

    2009-01-01

    Diagnostic inference involves the detection of anomalous system behavior and the identification of its cause, possibly down to a failed unit or to a parameter of a failed unit. Traditional approaches to solving this problem include expert/rule-based, model-based, and data-driven methods. Each approach (and the various techniques within each approach) uses different representations of the knowledge required to perform the diagnosis. The sensor data is expected to be combined with these internal representations to produce the diagnosis result. In spite of the availability of various diagnosis technologies, there have been only minimal efforts to develop a standardized software framework to run, evaluate, and compare different diagnosis technologies on the same system. This paper presents a framework that defines a standardized representation of the system knowledge, the sensor data, and the form of the diagnosis results, and provides a run-time architecture that can execute diagnosis algorithms, send sensor data to the algorithms at appropriate time steps from a variety of sources (including the actual physical system), and collect the resulting diagnoses. We also define a set of metrics that can be used to evaluate and compare the performance of the algorithms, and provide software to calculate the metrics.
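
    A metrics layer of the kind described above typically scores a diagnosis algorithm's output against a known fault-injection log. The sketch below computes two plausible metrics, detection delay and false-alarm count, for a single scenario; the metric definitions, field names, and data are assumptions rather than the framework's standardized metric set.

    ```python
    # Illustrative scoring of a diagnosis algorithm against a fault-injection log:
    # detection delay and false-alarm count for one scenario. Metric definitions,
    # field names, and data are assumptions; the cited framework defines its own
    # standardized metric set.
    injected_fault = {"component": "valve_3", "time": 120.0}          # ground truth
    diagnoses = [                                                      # algorithm output
        {"time": 80.0,  "candidates": {"sensor_7"}},                   # before the fault
        {"time": 131.5, "candidates": {"valve_3", "pump_1"}},
    ]

    detection_time = next((d["time"] for d in diagnoses
                           if d["time"] >= injected_fault["time"]
                           and injected_fault["component"] in d["candidates"]), None)
    detection_delay = None if detection_time is None else detection_time - injected_fault["time"]
    false_alarms = sum(1 for d in diagnoses if d["time"] < injected_fault["time"])

    print("detection delay (s):", detection_delay)   # 11.5
    print("false alarms:", false_alarms)             # 1
    ```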

  19. Contrast matching techniques for digital subtraction radiography: an objective evaluation.

    PubMed Central

    Likar, B.; Bernard, R.; Pernus, F.

    1996-01-01

    Digital subtraction radiography (DSR) enables the detection of subtle early detrimental effects of periodontal disease as well as the evaluation of the effects of therapy. However, the differences between two radiographs due to alignment and contrast errors must be kept to a minimum. In the present in vitro study we test the efficacy of three basic contrast correction methods in reducing the contrast mismatches which can adversely affect a subtracted image. The ODTF (Optical Density Thickness Function) method, which is based on a function relating grey level values of the aluminium wedge image and the corresponding thickness of the wedge, induced less contrast correction error than the CDF (Cumulative Density Function) and the LSQA (Least Square Quadratic Approximation) methods. Moreover, CDF, ODTF, and LSQA functions obtained from the reference structure density distribution may be applied for objective contrast enhancement and for standardisation of image quality, while the ODTF function also allows bone change volume estimations. PMID:8947675
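
    Of the three correction methods compared, the CDF approach is essentially cumulative-histogram matching of the follow-up radiograph to the reference image. The sketch below shows that generic idea with numpy; it is an illustration only, not the study's implementation of the CDF, ODTF, or LSQA methods.

    ```python
    # Generic cumulative-distribution (CDF) contrast matching of a follow-up image
    # to a reference image, as background to the comparison above. Illustrative
    # only; not the study's exact CDF, ODTF, or LSQA implementations.
    import numpy as np

    def match_contrast_cdf(follow_up, reference, levels=256):
        """Remap follow_up grey levels so its histogram CDF matches reference's."""
        f = np.asarray(follow_up).ravel()
        r = np.asarray(reference).ravel()
        f_hist, _ = np.histogram(f, bins=levels, range=(0, levels))
        r_hist, _ = np.histogram(r, bins=levels, range=(0, levels))
        f_cdf = np.cumsum(f_hist) / f.size
        r_cdf = np.cumsum(r_hist) / r.size
        # For each grey level in the follow-up, find the reference level with the
        # closest cumulative probability.
        mapping = np.searchsorted(r_cdf, f_cdf).clip(0, levels - 1)
        return mapping[np.asarray(follow_up)]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        reference = rng.integers(0, 256, (64, 64))
        follow_up = (reference * 0.7 + 30).astype(int)   # simulated contrast mismatch
        corrected = match_contrast_cdf(follow_up, reference)
        print(abs(corrected.mean() - reference.mean()))  # small after matching
    ```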

  20. Objective evaluation of dextromethorphan and glaucine as antitussive agents.

    PubMed Central

    Rühle, K H; Criscuolo, D; Dieterich, H A; Köhler, D; Riedel, G

    1984-01-01

    Twenty-four inpatients affected by chronic cough completed a single-dose double-blind cross-over study of placebo, glaucine 30 mg and dextromethorphan 30 mg. The study was carried out using a balanced incomplete block design, each patient receiving two of the three experimental treatments. Objective evaluation of cough was ensured by means of a writing cough recorder. Coughs after dextromethorphan and glaucine were fewer than coughs after placebo; however, only glaucine was significantly different from placebo in reducing coughs. Treatments were well tolerated: clinical results included a reduction in pulse rate after both dextromethorphan and glaucine, and a large number of patients reporting side effects after dextromethorphan administration. PMID:6375709

  1. Objective evaluation of dextromethorphan and glaucine as antitussive agents.

    PubMed

    Rühle, K H; Criscuolo, D; Dieterich, H A; Köhler, D; Riedel, G

    1984-05-01

    Twenty-four inpatients affected by chronic cough completed a single-dose double-blind cross-over study of placebo, glaucine 30 mg and dextromethorphan 30 mg. The study was carried out using a balanced incomplete block design, each patient receiving two of the three experimental treatments. Objective evaluation of cough was ensured by means of a writing cough recorder. Coughs after dextromethorphan and glaucine were fewer than coughs after placebo; however, only glaucine was significantly different from placebo in reducing coughs. Treatments were well tolerated: clinical results included a reduction in pulse rate after both dextromethorphan and glaucine, and a large number of patients reporting side effects after dextromethorphan administration. PMID:6375709

  2. Objective evaluation of insert material for diabetic and athletic footwear.

    PubMed

    Brodsky, J W; Kourosh, S; Stills, M; Mooney, V

    1988-12-01

    Five of the most commonly used materials for shoe inserts (soft Plastazote, medium Pelite, PPT, Spenco, and Sorbothane) were objectively evaluated in the laboratory to characterize their behavior in three specific functions that correspond to clinical use: (1) the effect on the materials of repeated compression; (2) the effect of a combination of repetitive shear and compression; and (3) the force-distribution (force-attenuation) properties of these materials, both when new and after repeated compression. The last function represents a model for relief of pressure beneath plantar bony prominences, a topic of special concern for the insensitive foot. All materials were effective in reducing the force transmitted over the simulated bony prominence, with a rank order of effectiveness. Other factors considered were the amount and rate of permanent deformation, offset by considerations of enhanced moldability, when comparing the neoprene and urethane materials with the polyethylene foams. The ideal insert represents a combination of materials to achieve both durability and moldability. PMID:3229697

  3. OpenSMOKE++: An object-oriented framework for the numerical modeling of reactive systems with detailed kinetic mechanisms

    NASA Astrophysics Data System (ADS)

    Cuoci, A.; Frassoldati, A.; Faravelli, T.; Ranzi, E.

    2015-07-01

    OpenSMOKE++ is a general framework for numerical simulations of reacting systems with detailed kinetic mechanisms, including thousands of chemical species and reactions. The framework is entirely written in object-oriented C++ and can be easily extended and customized by the user for specific systems, without having to modify the core functionality of the program. The OpenSMOKE++ framework can handle simulations of ideal chemical reactors (plug-flow, batch, and jet stirred reactors), shock-tubes and rapid compression machines, and can be easily incorporated into multi-dimensional CFD codes for the modeling of reacting flows. OpenSMOKE++ provides useful numerical tools such as sensitivity and rate-of-production analyses, needed to recognize the main chemical paths and to interpret the numerical results from a kinetic point of view. Since simulations involving large kinetic mechanisms are very time consuming, OpenSMOKE++ adopts advanced numerical techniques able to reduce the computational cost without sacrificing the accuracy and robustness of the calculations. In the present paper we give a detailed description of the framework features, the numerical models available, and the implementation of the code. The possibility of coupling the OpenSMOKE++ functionality with existing numerical codes is discussed. The computational performance of the framework is presented, and the capabilities of OpenSMOKE++ in terms of integration of stiff ODE systems are discussed and analyzed with special emphasis. Some examples demonstrating the ability of the OpenSMOKE++ framework to successfully manage large kinetic mechanisms are finally presented.
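
    OpenSMOKE++ itself is a C++ framework and its API is not shown here. As a language-neutral illustration of the kind of stiff kinetic ODE problem such a framework integrates, the sketch below solves the classic Robertson benchmark with a BDF integrator in scipy; the species and rate constants follow that standard benchmark, not an OpenSMOKE++ input file.

    ```python
    # Illustration of the kind of stiff kinetic ODE system a framework such as
    # OpenSMOKE++ integrates (here the classic Robertson benchmark, solved with a
    # BDF method in scipy; this is not the OpenSMOKE++ C++ API).
    import numpy as np
    from scipy.integrate import solve_ivp

    def robertson(t, y):
        a, b, c = y
        return [
            -0.04 * a + 1.0e4 * b * c,
            0.04 * a - 1.0e4 * b * c - 3.0e7 * b * b,
            3.0e7 * b * b,
        ]

    sol = solve_ivp(robertson, (0.0, 1.0e5), [1.0, 0.0, 0.0],
                    method="BDF", rtol=1e-8, atol=1e-10)
    print("final composition:", sol.y[:, -1])   # fractions of the three species
    ```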

  4. A Conceptual Framework for Evaluating Outpatient Electronic Prescribing Systems Based on Their Functional Capabilities

    PubMed Central

    Bell, Douglas S.; Cretin, Shan; Marken, Richard S.; Landman, Adam B.

    2004-01-01

    Objective: Electronic prescribing (e-prescribing) may substantially improve health care quality and efficiency, but the available systems are complex and their heterogeneity makes comparing and evaluating them a challenge. The authors aimed to develop a conceptual framework for anticipating the effects of alternative designs for outpatient e-prescribing systems. Design: Based on a literature review and on telephone interviews with e-prescribing vendors, the authors identified distinct e-prescribing functional capabilities and developed a conceptual framework for evaluating e-prescribing systems' potential effects based on their capabilities. Analyses of two commercial e-prescribing systems are presented as examples of applying the conceptual framework. Measurements: Major e-prescribing functional capabilities identified and the availability of evidence to support their specific effects. Results: The proposed framework for evaluating e-prescribing systems is organized using a process model of medication management. Fourteen e-prescribing functional capabilities are identified within the model. Evidence is identified to support eight specific effects for six of the functional capabilities. The evidence also shows that a functional capability with generally positive effects can be implemented in a way that creates unintended hazards. Applying the framework involves identifying an e-prescribing system's functional capabilities within the process model and then assessing the effects that could be expected from each capability in the proposed clinical environment. Conclusion: The proposed conceptual framework supports the integration of available evidence in considering the full range of effects from e-prescribing design alternatives. More research is needed into the effects of specific e-prescribing functional alternatives. Until more is known, e-prescribing initiatives should include provisions to monitor for unintended hazards. PMID:14527975

  5. Defining competency-based evaluation objectives in family medicine

    PubMed Central

    Allen, Tim; Brailovsky, Carlos; Rainsberry, Paul; Lawrence, Katherine; Crichton, Tom; Carpentier, Marie-Pierre; Visser, Shaun

    2011-01-01

    Abstract Objective To develop a definition of competence in family medicine sufficient to guide a review of Certification examinations by the Board of Examiners of the College of Family Physicians of Canada. Design Delphi analysis of responses to a 4-question postal survey. Setting Canadian family practice. Participants A total of 302 family physicians who have served as examiners for the College of Family Physicians of Canada’s Certification examination. Methods A survey comprising 4 short-answer questions was mailed to the 302 participating family physicians asking them to list elements that define competence in family medicine among newly certified family physicians beginning independent practice. Two expert groups used a modified Delphi consensus process to analyze responses and generate 2 basic components of this definition of competence: first, the problems that a newly practising family physician should be competent to handle; second, the qualities, behaviour, and skills that characterize competence at the start of independent practice. Main findings Response rate was 54%; total number of elements among all responses was 5077, for an average 31 per respondent. Of the elements, 2676 were topics or clinical situations to be dealt with; the other 2401 were skills, behaviour patterns, or qualities, without reference to a specific clinical problem. The expert groups identified 6 essential skills, the phases of the clinical encounter, and 99 priority topics as the descriptors used by the respondents. More than 20% of respondents cited 30 of the topics. Conclusion Family physicians define the domain of competence in family medicine in terms of 6 essential skills, the phases of the clinical encounter, and priority topics. This survey represents the first level of definition of evaluation objectives in family medicine. Definition of the interactions among these elements will permit these objectives to become detailed enough to effectively guide assessment. PMID

  6. A scoping review about conference objectives and evaluative practices: how do we get more out of them?

    PubMed Central

    2012-01-01

    Large multi-day conferences have often been criticized as ineffective ways to improve social outcomes and to influence policy or practice. Unfortunately, many conference evaluations have also been inadequate in determining the impact of a conference on its associated social sector, with little evidence gathered or analyzed to substantiate or refute these criticisms. The aim of this scoping review is to investigate and report stakeholders’ objectives for planning or participating in large multi-day conferences and how these objectives are being evaluated. We conducted a scoping review supplemented by a small number of key informant interviews. Eight bibliographic databases were systematically searched to identify papers describing conference objectives and/or evaluations. We developed a conference evaluation framework based on theoretical models and empirical findings, which structured the descriptive synthesis of the data. We identified 3,073 potential papers for review, of which 44 were included in this study. Our evaluation framework connects five key elements in planning a conference and its evaluation (number in brackets refers to number of themes identified): conference objectives (8), purpose of evaluation (7), evaluation methods (5), indicators of success (9) and theories/models (8). Further analysis of indicators of success identified three categories of indicators with differing scopes (i.e. immediate, prospective or follow-up) as well as empirical links between the purpose of evaluations and these indicators. Conference objectives and evaluations were largely correlated with the type of conference (i.e. academic, political/governmental or business) but diverse overall. While much can be done to improve the quality and usefulness of conference evaluations, there are innovative assessments that are currently being utilized by some conferences and warrant further investigation. This review provides conference evaluators and organizers a simple resource to

  7. A Proposed Framework for Global Leadership Education: Learning Objectives and Curricula

    ERIC Educational Resources Information Center

    Brown, LeAnn M.; Whitaker, Brett L.; Brungardt, Curtis L.

    2012-01-01

    Many traditional leadership education paradigms are challenged by the transformational nature of globalization and are limited in application in diverse and complex contexts. In order to address these issues, a new framework must be adopted within higher education leadership programs to educate the next generation of global leaders. This paper…

  8. Prospective Secondary Teachers Repositioning by Designing, Implementing and Testing Mathematics Learning Objects: A Conceptual Framework

    ERIC Educational Resources Information Center

    Mgombelo, Joyce R.; Buteau, Chantal

    2009-01-01

    This article describes a conceptual framework developed to illuminate how prospective teachers' learning experiences are shaped by didactic-sensitive activities in departments of mathematics. We draw from the experiences of prospective teachers in the Department of Mathematics at our institution in designing, implementing (i.e. computer…

  9. Formulating a Curriculum Framework for Bible Study: Creating Course Objectives for Bible Curriculum in Jewish Schools

    ERIC Educational Resources Information Center

    Kohn, Eli; Goldstein, Gabriel

    2008-01-01

    Bible teachers worldwide lack a shared language with which to describe expectations of what pupils will learn at various stages of their schooling. This article attempts to provide such a language. It defines a framework, formulated with the assistance of twenty-five Bible teachers in Jewish schools in the United Kingdom. It is hoped that this article will…

  10. An Object-Oriented Course Framework for Developing Adaptive Learning Systems

    ERIC Educational Resources Information Center

    Tseng, Shian-Shyong; Su, Jun-Ming; Hwang, Gwo-Jen; Hwang, Gwo-Haur; Tsai, Chin-Chung; Tsai, Chang-Jiun

    2008-01-01

    The popularity of web-based learning systems has encouraged researchers to pay attention to several new issues. One of the most important issues is the development of new techniques to provide personalized teaching materials. Although several frameworks or methods have been proposed, it remains a challenging issue to design an easy-to-realize…

  11. Helping Students Identify Base Words in Indonesian--Linking Learning Objects in an ICLL Framework

    ERIC Educational Resources Information Center

    Colman, Ingrid; Davison, Janine

    2008-01-01

    For students of Indonesian, learning to identify base words is very important, but can often be quite tricky. This article describes how one of the authors used interactive digital content from The Le@rning Federation (TLF) together with an extensive range of offline activities within an intercultural language learning (ICLL) framework. It helps…

  12. The WellingTONNE Challenge Toolkit: Using the RE-AIM Framework to Evaluate a Community Resource Promoting Healthy Lifestyle Behaviours

    ERIC Educational Resources Information Center

    Caperchione, Cristina; Coulson, Fiona

    2010-01-01

    Objective: The RE-AIM framework has been recognized as a tool to evaluate the adoption, delivery, and sustainability of an intervention, and estimate its potential public health impact. In this study four dimensions of the RE-AIM framework (adoption, implementation, effectiveness, and maintenance) were used to evaluate the WellingTONNE Challenge…

  13. Action semantics: A unifying conceptual framework for the selective use of multimodal and modality-specific object knowledge.

    PubMed

    van Elk, Michiel; van Schie, Hein; Bekkering, Harold

    2014-06-01

    Our capacity to use tools and objects is often considered one of the hallmarks of the human species. Many objects greatly extend our bodily capabilities to act in the physical world, such as when using a hammer or a saw. In addition, humans have the remarkable capability to use objects in a flexible fashion and to combine multiple objects in complex actions. We prepare coffee, cook dinner and drive our car. In this review we propose that humans have developed declarative and procedural knowledge, i.e. action semantics that enables us to use objects in a meaningful way. A state-of-the-art review of research on object use is provided, involving behavioral, developmental, neuropsychological and neuroimaging studies. We show that research in each of these domains is characterized by similar discussions regarding (1) the role of object affordances, (2) the relation between goals and means in object use and (3) the functional and neural organization of action semantics. We propose a novel conceptual framework of action semantics to address these issues and to integrate the previous findings. We argue that action semantics entails both multimodal object representations and modality-specific sub-systems, involving manipulation knowledge, functional knowledge and representations of the sensory and proprioceptive consequences of object use. Furthermore, we argue that action semantics are hierarchically organized and selectively activated and used depending on the action intention of the actor and the current task context. Our framework presents an integrative account of multiple findings and perspectives on object use that may guide future studies in this interdisciplinary domain. PMID:24461373

  14. Action semantics: A unifying conceptual framework for the selective use of multimodal and modality-specific object knowledge

    NASA Astrophysics Data System (ADS)

    van Elk, Michiel; van Schie, Hein; Bekkering, Harold

    2014-06-01

    Our capacity to use tools and objects is often considered one of the hallmarks of the human species. Many objects greatly extend our bodily capabilities to act in the physical world, such as when using a hammer or a saw. In addition, humans have the remarkable capability to use objects in a flexible fashion and to combine multiple objects in complex actions. We prepare coffee, cook dinner and drive our car. In this review we propose that humans have developed declarative and procedural knowledge, i.e. action semantics that enables us to use objects in a meaningful way. A state-of-the-art review of research on object use is provided, involving behavioral, developmental, neuropsychological and neuroimaging studies. We show that research in each of these domains is characterized by similar discussions regarding (1) the role of object affordances, (2) the relation between goals and means in object use and (3) the functional and neural organization of action semantics. We propose a novel conceptual framework of action semantics to address these issues and to integrate the previous findings. We argue that action semantics entails both multimodal object representations and modality-specific sub-systems, involving manipulation knowledge, functional knowledge and representations of the sensory and proprioceptive consequences of object use. Furthermore, we argue that action semantics are hierarchically organized and selectively activated and used depending on the action intention of the actor and the current task context. Our framework presents an integrative account of multiple findings and perspectives on object use that may guide future studies in this interdisciplinary domain.

  15. Expected frontiers: Incorporating weather uncertainty into a policy analysis using an integrated bi-level multi-objective optimization framework

    EPA Science Inventory

    Weather is the main driver in both plant use of nutrients and fate and transport of nutrients in the environment. In previous work, we evaluated a green tax for control of agricultural nutrients in a bi-level optimization framework that linked deterministic models. In this study,...

  16. Evaluation of translucency of monolithic zirconia and framework zirconia materials

    PubMed Central

    Tuncel, İlkin; Üşümez, Aslıhan

    2016-01-01

    PURPOSE The opacity of zirconia is an esthetic disadvantage that hinders achieving natural and shade-matched restorations. The aim of this study was to evaluate the translucency of non-colored and colored framework zirconia and monolithic zirconia. MATERIALS AND METHODS The three groups tested were: non-colored framework zirconia, colored framework zirconia with the A3 shade according to Vita Classic Scale, and monolithic zirconia (n=5). The specimens were fabricated in the dimensions of 15×12×0.5 mm. A spectrophotometer was used to measure the contrast ratio, which is indicative of translucency. Three measurements were made to obtain the contrast ratios of the materials over a white background (L*w) and a black background (L*b). The data were analyzed using the one-way analysis of variance and Tukey HSD tests. One specimen from each group was chosen for scanning electron microscope analysis. The determined areas of the SEM images were divided by the number of grains in order to calculate the mean grain size. RESULTS Statistically significant differences were observed among all groups (P<.05). Non-colored zirconia had the highest translucency with a contrast ratio of 0.75, while monolithic zirconia had the lowest translucency with a contrast ratio of 0.8. The mean grain sizes of the non-colored, colored, and monolithic zirconia were 233, 256, and 361 nm, respectively. CONCLUSION The translucency of the zirconia was affected by the coloring procedure and the grain size. Although monolithic zirconia may not be the best esthetic material for the anterior region, it may serve as an alternative in the posterior region for the bilayered zirconia restorations. PMID:27350851
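
    The contrast-ratio measure used above can be stated explicitly: the specimen's reading over a black backing divided by its reading over a white backing, so 1.0 indicates an opaque material and lower values indicate greater translucency. A minimal sketch follows; the example readings are invented, and treating the two background readings this way is an assumption based on the abstract's notation.

    ```python
    # Minimal sketch of the contrast-ratio translucency measure described above:
    # reading of the specimen over a black backing divided by its reading over a
    # white backing; 1.0 = opaque, lower = more translucent.
    # Example readings are invented for illustration.
    def contrast_ratio(reading_over_black, reading_over_white):
        return reading_over_black / reading_over_white

    non_colored = contrast_ratio(60.0, 80.0)   # ~0.75, highest translucency in the study
    monolithic  = contrast_ratio(64.0, 80.0)   # ~0.80, lowest translucency in the study
    print(round(non_colored, 2), round(monolithic, 2))
    ```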

  17. Virtual Workshop Environment (VWE): A Taxonomy and Service Oriented Architecture (SOA) Framework for Modularized Virtual Learning Environments (VLE)--Applying the Learning Object Concept to the VLE

    ERIC Educational Resources Information Center

    Paulsson, Fredrik; Naeve, Ambjorn

    2006-01-01

    Based on existing Learning Object taxonomies, this article suggests an alternative Learning Object taxonomy, combined with a general Service Oriented Architecture (SOA) framework, aiming to transfer the modularized concept of Learning Objects to modularized Virtual Learning Environments. The taxonomy and SOA-framework exposes a need for a clearer…

  18. Nanoparticle risk management and cost evaluation: a general framework

    NASA Astrophysics Data System (ADS)

    Fleury, Dominique; Bomfim, João A. S.; Metz, Sébastien; Bouillard, Jacques X.; Brignon, Jean-Marc

    2011-07-01

    Industrial production of nano-objects has been growing fast during the last decade, and a wide range of products containing nanoparticles (NPs) is offered to the public in various markets (automotive, electronics, textiles...). The difficulty of monitoring the presence of nano-objects in any medium is a major obstacle to controlling the risk associated with the production stage. It is therefore very difficult to assess the efficiency of prevention and mitigation solutions, which potentially leads to overestimating the level of the protection barriers that are recommended. The extra costs of adding nano-objects to the process, especially those of nanosafety, must be estimated and optimized to ensure the competitiveness of the future production lines and associated products. The risk management and cost evaluation methods presented herein have been designed for application in a pilot production line of injection-moulded nanocomposites.

  19. Framework and criteria for program evaluation in the Office of Conservation and Renewable Energy

    SciTech Connect

    Not Available

    1981-04-30

    This study addresses the development of a framework and generic criteria for conducting program evaluation in the Office of Conservation and Renewable Energy. The evaluation process is intended to provide the Assistant Secretary with comprehensive and consistent evaluation data for management decisions regarding policy and strategy, crosscutting energy impacts and resource allocation and justification. The study defines evaluation objectives, identifies basic information requirements (criteria), and identifies a process for collecting evaluation results at the basic program level, integrating the results, and summarizing information upward through the CE organization to the Assistant Secretary. Methods are described by which initial criteria were tested, analyzed, and refined for CE program applicability. General guidelines pertaining to evaluation and the Sunset Review requirements are examined and various types, designs, and models for evaluation are identified. Existing CE evaluation reports are reviewed and comments on their adequacy for meeting current needs are provided. An inventory and status survey of CE program evaluation activities is presented, as are issues, findings, and recommendations pertaining to CE evaluation and Sunset Review requirements. Also, sources of data for use in evaluation and the Sunset Review response are identified. An inventory of CE evaluation-related documents and reports is provided.

  20. A Framework for the Evaluation of Internet-based Diabetes Management

    PubMed Central

    Kidd, Michael

    2002-01-01

    Background While still in its infancy, Internet-based diabetes management shows great promise for growth. However, the following aspects must be considered: what are the key metrics for the evaluation of a diabetes management site? how should these sites grow in the future and what services should they offer? Objectives To examine the needs of the patient and the health care professional in an Internet-based diabetes management solution and how these needs are translated into services offered. Methods An evaluation framework was constructed based on a literature review that identified the requirements for an Internet-based diabetes management solution. The requirements were grouped into 5 categories: Monitoring, Information, Personalization, Communication, and Technology. Two of the market leaders (myDiabetes and LifeMasters) were selected and were evaluated with the framework. The Web sites were evaluated independently by 5 raters using the evaluation framework. All evaluations were performed from November 1, 2001 through December 15, 2001. Results The agreement level between raters ranged from 60% to 100%. The multi-rater reliability (kappa) was 0.75 for myDiabetes and 0.65 for LifeMasters, indicating substantial agreement. The results of the evaluations indicate that LifeMasters is a more-complete solution than myDiabetes in all dimensions except Information, where both sites were equivalent. LifeMasters satisfied 32 evaluation criteria while myDiabetes satisfied 24 evaluation criteria, out of a possible 40 in the framework. Conclusions The framework is based on the recognition that the management of diabetes via the Internet is based on several integrated dimensions: Monitoring, Information, Personalization, Communication, and Technology. A successful diabetes management system should efficiently integrate all dimensions. The evaluation found that LifeMasters is successful in integrating the health care professional in the management of diabetes and that My
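
    The multi-rater reliability reported above is not attributed to a specific kappa variant; with five raters, Fleiss' kappa is a plausible choice, sketched below on an invented ratings matrix (rows are evaluation criteria, columns are rating categories, entries count how many raters chose each category).

    ```python
    # Fleiss' kappa for multi-rater agreement, a plausible statistic for the
    # five-rater evaluation described above (the abstract does not name the kappa
    # variant used). The ratings matrix is invented: rows are evaluation criteria,
    # columns are categories ("satisfied", "not satisfied"), entries count how many
    # of the five raters chose that category.
    import numpy as np

    def fleiss_kappa(counts):
        counts = np.asarray(counts, dtype=float)
        n_items, _ = counts.shape
        n_raters = counts[0].sum()
        p_j = counts.sum(axis=0) / (n_items * n_raters)           # category proportions
        P_i = (np.sum(counts**2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
        P_bar, P_e = P_i.mean(), np.sum(p_j**2)
        return (P_bar - P_e) / (1.0 - P_e)

    ratings = [[5, 0], [0, 5], [5, 0], [0, 5], [4, 1], [1, 4]]     # 6 criteria, 5 raters
    print(round(fleiss_kappa(ratings), 2))                         # ~0.73
    ```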

  1. Toward a multi-objective decision support framework to support regulations of unconventional oil and gas development

    NASA Astrophysics Data System (ADS)

    Alongi, M.; Howard, C.; Kasprzyk, J. R.; Ryan, J. N.

    2015-12-01

    Unconventional oil and gas development (UOGD) using hydraulic fracturing and horizontal drilling has recently fostered an unprecedented acceleration in energy development. Regulations seek to protect the environmental quality of areas surrounding UOGD while maintaining economic benefits. One such regulation is a setback distance, which dictates the minimum proximity between an oil and gas well and an object such as a residential or commercial building, property line, or water source. In general, most setback regulations have been strongly politically motivated, without a clear scientific basis for understanding the relationship between the setback distance and various performance outcomes. This presentation discusses a new decision support framework for setback regulations, developed as part of a large NSF-funded sustainability research network (SRN) on UOGD. The goal of the decision support framework is to integrate a wide array of scientific information from the SRN into a coherent framework that can help inform policy regarding UOGD. The decision support framework employs multiobjective evolutionary algorithm (MOEA) optimization coupled with simulation models of air quality and other performance-based outcomes of UOGD. The results of the MOEA optimization runs are quantitative tradeoff curves among different objectives. For example, one such curve could show air pollution concentrations versus estimates of energy development profits for different levels of setback distance. Our results will also inform policy-relevant discussions surrounding UOGD, such as comparing single- and multi-well pads, as well as regulations on the density of well development over a spatial area.
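
    The SRN framework couples an MOEA with full simulation models; as a toy stand-in for the tradeoff curves it produces, the sketch below enumerates candidate setback distances, scores each against two invented objectives (an exposure proxy to minimize and a profit proxy to maximize), and keeps the non-dominated set. The objective functions and constants are purely illustrative assumptions, not SRN results.

    ```python
    # Toy illustration of the setback-distance tradeoff described above: enumerate
    # candidate setbacks, score two invented objectives, and keep the Pareto
    # (non-dominated) set. Objective functions and constants are assumptions, not
    # results from the SRN's simulation models or its MOEA.
    import math

    def exposure_proxy(setback_m):          # lower is better (invented decay model)
        return math.exp(-setback_m / 400.0)

    def profit_proxy(setback_m):            # higher is better (invented siting-loss model)
        return max(0.0, 1.0 - setback_m / 2000.0)

    candidates = [(d, exposure_proxy(d), profit_proxy(d)) for d in range(0, 2001, 100)]

    def dominates(a, b):
        """a dominates b if it is no worse in both objectives and better in one."""
        return (a[1] <= b[1] and a[2] >= b[2]) and (a[1] < b[1] or a[2] > b[2])

    pareto = [c for c in candidates if not any(dominates(o, c) for o in candidates)]
    for setback, exposure, profit in pareto:
        print(f"setback {setback:4d} m  exposure {exposure:.3f}  profit {profit:.2f}")
    ```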

  2. Optimizing transformations of stencil operations for parallel object-oriented scientific frameworks on cache-based architectures

    SciTech Connect

    Bassetti, F.; Davis, K.; Quinlan, D.

    1998-12-31

    High-performance scientific computing relies increasingly on high-level large-scale object-oriented software frameworks to manage both algorithmic complexity and the complexities of parallelism: distributed data management, process management, inter-process communication, and load balancing. This encapsulation of data management, together with the prescribed semantics of a typical fundamental component of such object-oriented frameworks--a parallel or serial array-class library--provides an opportunity for increasingly sophisticated compile-time optimization techniques. This paper describes two optimizing transformations suitable for certain classes of numerical algorithms, one for reducing the cost of inter-processor communication, and one for improving cache utilization; demonstrates and analyzes the resulting performance gains; and indicates how these transformations are being automated.

  3. Temporal locality optimizations for stencil operations for parallel object-oriented scientific frameworks on cache-based architectures

    SciTech Connect

    Bassetti, F.; Davis, K.; Quinlan, D.

    1998-12-01

    High-performance scientific computing relies increasingly on high-level large-scale object-oriented software frameworks to manage both algorithmic complexity and the complexities of parallelism: distributed data management, process management, inter-process communication, and load balancing. This encapsulation of data management, together with the prescribed semantics of a typical fundamental component of such object-oriented frameworks--a parallel or serial array-class library--provides an opportunity for increasingly sophisticated compile-time optimization techniques. This paper describes a technique for introducing cache blocking suitable for certain classes of numerical algorithms, demonstrates and analyzes the resulting performance gains, and indicates how this optimization transformation is being automated.
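
    The cache-blocking transformation described in this and the preceding record is, structurally, loop tiling of a stencil sweep. The sketch below contrasts a naive 5-point Jacobi update with a tiled version; it is written in Python only to show the loop restructuring, since the cache benefit materializes when the same transformation is applied inside a compiled array-class library of the kind these papers target.

    ```python
    # Structural illustration of the cache-blocking (loop tiling) transformation
    # discussed above, applied to a 5-point Jacobi stencil. Written in Python only
    # to show the loop restructuring; the performance benefit arises when the same
    # transformation is applied in a compiled array-class library.
    import numpy as np

    def jacobi_naive(u, out):
        n, m = u.shape
        for i in range(1, n - 1):
            for j in range(1, m - 1):
                out[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1])

    def jacobi_blocked(u, out, block=64):
        n, m = u.shape
        for ii in range(1, n - 1, block):          # iterate over cache-sized tiles
            for jj in range(1, m - 1, block):
                for i in range(ii, min(ii + block, n - 1)):
                    for j in range(jj, min(jj + block, m - 1)):
                        out[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1])

    u = np.random.rand(256, 256)
    a, b = np.zeros_like(u), np.zeros_like(u)
    jacobi_naive(u, a)
    jacobi_blocked(u, b)
    print(np.allclose(a, b))   # True: tiling changes traversal order, not results
    ```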

  4. Collaborative evaluation within a framework of stakeholder-oriented evaluation approaches.

    PubMed

    O'Sullivan, Rita G

    2012-11-01

    Collaborative Evaluation systematically invites and engages stakeholders in program evaluation planning and implementation. Unlike "distanced" evaluation approaches, which reject stakeholder participation as evaluation team members, Collaborative Evaluation assumes that active, on-going engagement between evaluators and program staff results in stronger evaluation designs, enhanced data collection and analysis, and results that stakeholders understand and use. Among similar "participant-oriented" evaluation approaches (Fitzpatrick, Sanders, & Worthen, 2011), Collaborative Evaluation distinguishes itself in that it uses a sliding scale for levels of collaboration. This means that different program evaluations will experience different levels of collaborative activity. The sliding scale is applied as the evaluator considers each program's evaluation needs, readiness, and resources. While Collaborative Evaluation is a term widely used in evaluation, its meaning varies considerably. Often used interchangeably with participatory and/or empowerment evaluation, the terms can be used to mean different things, which can be confusing. The articles use a comparative Collaborative Evaluation Framework to highlight how, from a theoretical perspective, Collaborative Evaluation distinguishes itself from other participatory evaluation approaches. PMID:22364849

  5. a New Framework for Object-Based Image Analysis Based on Segmentation Scale Space and Random Forest Classifier

    NASA Astrophysics Data System (ADS)

    Hadavand, A.; Saadatseresht, M.; Homayouni, S.

    2015-12-01

    In this paper a new object-based framework is developed for automated scale selection in image segmentation. The quality of image objects has an important impact on further analyses. Due to the strong dependency of segmentation results on the scale parameter, choosing the best value for this parameter for each class becomes a main challenge in object-based image analysis. We propose a new framework which employs a pixel-based land cover map to estimate the initial scale dedicated to each class. These scales are used to build a segmentation scale space (SSS), a hierarchy of image objects. Optimization of the SSS with respect to NDVI and DSM values in each super-object is used to obtain the best scale in local regions of the image scene. The optimized SSS segmentations are finally classified to produce the final land cover map. A very high resolution aerial image and a digital surface model provided by the ISPRS 2D semantic labelling dataset are used in our experiments. The result of our proposed method is comparable to that of the ESP tool, a well-known method for estimating the segmentation scale, and marginally improved the overall accuracy of classification from 79% to 80%.
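
    The final classification step described above, a random forest over per-object features, can be sketched with scikit-learn. The feature choices (mean NDVI, mean DSM height, mean brightness per segment) and the synthetic data below are assumptions standing in for the paper's ISPRS benchmark objects.

    ```python
    # Sketch of the per-object random forest classification step described above.
    # Features (mean NDVI, mean DSM height, mean brightness per image object) and
    # the synthetic data are assumptions; the paper uses objects derived from the
    # ISPRS 2D semantic labelling benchmark, so the printed accuracy here is only
    # a placeholder.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(42)
    n_objects = 500
    X = np.column_stack([
        rng.uniform(-0.2, 0.9, n_objects),    # mean NDVI per image object
        rng.uniform(0.0, 30.0, n_objects),    # mean DSM height per object (m)
        rng.uniform(0.0, 255.0, n_objects),   # mean brightness per object
    ])
    y = rng.integers(0, 5, n_objects)         # 5 land-cover classes (synthetic labels)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
    print("object-level accuracy:", round(accuracy_score(y_test, clf.predict(X_test)), 2))
    ```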

  6. Polarization distance: a framework for modelling object detection by polarization vision systems

    PubMed Central

    How, Martin J.; Marshall, N. Justin

    2014-01-01

    The discrimination of polarized light is widespread in the natural world. Its use for specific, large-field tasks, such as navigation and the detection of water bodies, has been well documented. Some species of cephalopod and crustacean have polarization receptors distributed across the whole visual field and are thought to use polarized light cues for object detection. Both object-based polarization vision systems and large field detectors rely, at least initially, on an orthogonal, two-channel receptor organization. This may increase to three-directional analysis at subsequent interneuronal levels. In object-based and some of the large-field tasks, the dominant e-vector detection axes are often aligned (through eye, head and body stabilization mechanisms) horizontally and vertically relative to the outside world. We develop Bernard and Wehner's 1977 model of polarization receptor dynamics to apply it to the detection and discrimination of polarized objects against differently polarized backgrounds. We propose a measure of ‘polarization distance’ (roughly analogous to ‘colour distance’) for estimating the discriminability of objects in polarized light, and conclude that horizontal/vertical arrays are optimally designed for detecting differences in the degree, and not the e-vector axis, of polarized light under natural conditions. PMID:24352940

  7. Conceptual framework for standard economic evaluation of physical activity programs in primary prevention.

    PubMed

    Wolfenstetter, Silke B

    2011-12-01

    Economic evaluations of primary prevention physical activity programs have gained importance because of scarce resources in health-care systems. A concept for economic evaluation should be based on the efficacy of physical activity, the standard methods of economic evaluation and the aims of public health. Previous publications have examined only parts of these components and have not developed a comprehensive conceptual framework; it is the objective of this article to develop such a framework. The derived method should aid decision makers and staff members of intervention programs in reviewing and conducting an economic evaluation. A literature search of articles was done using six electronic databases. Referenced works on standard methods and more comprehensive approaches for the evaluation of preventive programs were studied. The newly developed conceptual framework for economic evaluation includes: (1) the type of physical activity program; (2) features of the selected study population; (3) the outcome dimension, comprising exercise efficacy, reach, recruitment, response rate, maintenance, compliance and adverse health effects plus the social impact; and (4) the cost dimension, consisting of program development costs, program implementation costs (including implementation, recruitment, program and participants' time costs) and savings resulting from the health effects of the intervention. Cost-effectiveness also depends on the methodology, such as the chosen perspective, data collection, valuation methods and discounting. If an intervention is not considered cost-effective, it is necessary to check each dimension to find possible failures in order to learn for future interventions. A more detailed economic evaluation is of utmost importance for improved comparability and transferability. PMID:21773728

  8. The Effect of Instructional Objectives and General Objectives on Student Self-Evaluation of Psychomotor Performance in Power Mechanics.

    ERIC Educational Resources Information Center

    Janeczko, Robert John

    The major purpose of this study was to ascertain the relative effects of student exposure to instructional objectives upon student self-evaluation of psychomotor activities in a college-level power mechanics course. A randomized posttest-only control group design was used with two different approaches to the statement of the objectives. Four…

  9. Game Object Model Version II: A Theoretical Framework for Educational Game Development

    ERIC Educational Resources Information Center

    Amory, Alan

    2007-01-01

    Complex computer and video games may provide a vehicle, based on appropriate theoretical concepts, to transform the educational landscape. Building on the original game object model (GOM), a new, more detailed model is developed to support concepts that educational computer games should: be relevant, explorative, emotive, engaging, and include…

  10. iLOG: A Framework for Automatic Annotation of Learning Objects with Empirical Usage Metadata

    ERIC Educational Resources Information Center

    Miller, L. D.; Soh, Leen-Kiat; Samal, Ashok; Nugent, Gwen

    2012-01-01

    Learning objects (LOs) are digital or non-digital entities used for learning, education or training, commonly stored in repositories searchable by their associated metadata. Unfortunately, based on the current standards, such metadata is often missing or incorrectly entered, making search difficult or impossible. In this paper, we investigate…

  11. WV R-EMAP STUDY: MULTIPLE-OBJECTIVE SAMPLING DESIGN FRAMEWORK

    EPA Science Inventory

    A multi-objective sampling design has been implemented through Regional Monitoring and Assessment Program (R-EMAP) support of a cooperative agreement with the state of West Virginia. Goals of the project include: 1) development and testing of a temperature-adjusted fish IBI for t...

  12. Smart Objects, Dumb Archives: A User-Centric, Layered Digital Library Framework.

    ERIC Educational Resources Information Center

    Maly, Kurt; Nelson, Michael L.; Zubair, Mohammad

    1999-01-01

    Discusses digital libraries, interoperability, and interfaces to access them, and proposes one universal protocol for communication for simple archives based on the hypertext transfer protocol (http). Describes the creation of a special class of digital objects called buckets, archives based on a NASA collection, and a set of digital library…

  13. A framework for evaluation of flood management strategies.

    PubMed

    Hansson, K; Danielson, M; Ekenberg, L

    2008-02-01

    The resulting impact of disasters on society depends on the affected country's economic strength prior to the disaster. The larger the disaster and the smaller the economy, the more significant the impact. This is most clearly seen in developing countries, where weak economies become even weaker afterwards. Deliberate strategies for sharing the losses from hazardous events may aid a country or a community in using scarce prevention and mitigation resources efficiently, and thus in being better prepared for the effects of a disaster. Nevertheless, many governments lack an adequate institutional system for applying cost-effective and reliable technologies for disaster prevention, early warnings, and mitigation. Modelling by event analyses and strategy models is one way of planning ahead, but these models have so far not been linked together. An approach to this problem was taken during a large study in Hungary, the Tisza case study, where a number of policy strategies for the spreading of flood loss were formulated. In these strategies, a set of parameters of particular interest was extracted from interviews with stakeholders in the region. However, that study focused on emerging economies and, in particular, on insurance strategies. The scope is now extended to a functional framework also for developing countries, which in general have a higher degree of vulnerability. The paper takes northern Vietnam as an example of a developing region. We identify important parameters and discuss their importance for flood strategy formulations. Based on the policy strategies in the Tisza case, we extract data from the strategies and propose a framework for loss spread in developing and emerging economies. The parameter set can straightforwardly be included in a simulation and decision model for policy formulation and evaluation, taking multiple stakeholders into account. PMID:17292530

  14. Judgment under uncertainty; a probabilistic evaluation framework for decision-making about sanitation systems in low-income countries.

    PubMed

    Malekpour, Shirin; Langeveld, Jeroen; Letema, Sammy; Clemens, François; van Lier, Jules B

    2013-03-30

    This paper introduces the probabilistic evaluation framework, to enable transparent and objective decision-making in technology selection for sanitation solutions in low-income countries. The probabilistic framework recognizes the often poor quality of the available data for evaluations. Within this framework, evaluations are based on the probabilities that the expected outcomes occur in practice, considering the uncertainties in evaluation parameters. Consequently, the outcome of an evaluation is not a single point estimate but a range of possible outcomes. A first trial application of this framework to the evaluation of sanitation options in the Nyalenda settlement in Kisumu, Kenya, showed how the range of values that an evaluation parameter may take in practice would influence the evaluation outcomes. In addition, as the probabilistic evaluation requires various site-specific data, sensitivity analysis was performed to determine the influence of the quality of each data set on the evaluation outcomes. Based on that, data collection activities could be (re)directed, in a trade-off between the required investments in those activities and the resolution of the decisions that are to be made. PMID:23416987
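
    The paper's computational details are not reproduced here; the sketch below is a generic Monte Carlo illustration of the idea: uncertain evaluation parameters are drawn from assumed distributions, the evaluation score becomes a distribution rather than a point estimate, and a crude sensitivity measure (rank correlation of each input with the output) suggests where better data would sharpen the decision. Parameter names, ranges and the scoring function are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000  # Monte Carlo draws

# Uncertain evaluation parameters with assumed ranges (illustrative only)
water_use = rng.uniform(15, 40, n)             # litres/person/day required by the option
capital_cost = rng.normal(120, 25, n)          # USD/household
households_served = rng.triangular(500, 800, 1200, n)

# A hypothetical evaluation score: households served per unit cost,
# penalised by water demand (stand-in for a real multi-criteria model).
score = households_served / (capital_cost * (1 + water_use / 100))

print(f"median score = {np.median(score):.2f}, "
      f"90% interval = [{np.percentile(score, 5):.2f}, {np.percentile(score, 95):.2f}]")

# Crude sensitivity: Spearman rank correlation of each input with the score
def spearman(x, y):
    rx, ry = np.argsort(np.argsort(x)), np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

for name, values in [("water_use", water_use),
                     ("capital_cost", capital_cost),
                     ("households_served", households_served)]:
    print(f"sensitivity of score to {name}: {spearman(values, score):+.2f}")
```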

  15. A tiered assessment framework to evaluate human health risk of contaminated sediment.

    PubMed

    Greenfield, Ben K; Melwani, Aroon R; Bay, Steven M

    2015-07-01

    For sediment contaminated with bioaccumulative pollutants (e.g., PCBs and organochlorine pesticides), human consumption of seafood that contains bioaccumulated sediment-derived contaminants is a well-established exposure pathway. Historically, regulation and management of this bioaccumulation pathway has focused on site-specific risk assessment. The state of California (United States) is supporting the development of a consistent and quantitative sediment assessment framework to aid in interpreting a narrative objective to protect human health. The conceptual basis of this framework focuses on 2 key questions: 1) do observed pollutant concentrations in seafood from a given site pose unacceptable health risks to human consumers? and 2) is sediment contamination at a site a significant contributor to seafood contamination? The first question is evaluated by interpreting seafood tissue concentrations at the site, based on health risk calculations. The second question is evaluated by interpreting site-specific sediment chemistry data using a food web bioaccumulation model. The assessment framework includes 3 tiers (screening assessment, site assessment, and refined site assessment), which enables the assessment to match variations in data availability, site complexity, and study objectives. The second and third tiers use a stochastic simulation approach, incorporating information on the variability and uncertainty of key parameters, such as seafood contaminant concentration and consumption rate by humans. The framework incorporates site-specific values for sensitive parameters and statewide values for difficult-to-obtain or less sensitive parameters. The proposed approach advances risk assessment policy by incorporating local data into a consistent region-wide problem formulation, applying best available science in a streamlined fashion. PMID:25641876
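
    The stochastic tiers described above propagate variability in seafood contaminant concentration and consumption rate into a risk estimate. The sketch below is a generic Monte Carlo version of that kind of calculation, using a standard average-daily-dose equation; the distributions, slope factor and risk threshold are hypothetical and do not reproduce the California framework's actual inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Hypothetical input distributions (not the framework's actual values)
tissue_conc = rng.lognormal(mean=np.log(20), sigma=0.5, size=n)  # ng/g contaminant in fillet
consumption = rng.lognormal(mean=np.log(30), sigma=0.7, size=n)  # g seafood per day
body_weight = rng.normal(75, 12, size=n)                         # kg

# Generic average daily dose (mg/kg-day): concentration x intake / body weight
dose = (tissue_conc * 1e-6) * consumption / body_weight

cancer_slope_factor = 2.0     # (mg/kg-day)^-1, assumed for illustration
risk = dose * cancer_slope_factor

threshold = 1e-5              # example acceptable-risk level (assumption)
print(f"median risk = {np.median(risk):.1e}")
print(f"P(risk > {threshold:g}) = {np.mean(risk > threshold):.2%}")
```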

  16. Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation

    NASA Technical Reports Server (NTRS)

    Afjeh, Abdollah A.; Reed, John A.

    2003-01-01

    This research is aimed at developing a new and advanced simulation framework that will significantly improve the overall efficiency of aerospace systems design and development. This objective will be accomplished through an innovative integration of object-oriented and Web-based technologies with both new and proven simulation methodologies. The basic approach involves three major areas of research: (1) aerospace system and component representation using a hierarchical object-oriented component model, which enables the use of multimodels and enforces component interoperability; (2) a collaborative software environment that streamlines the process of developing, sharing and integrating aerospace design and analysis models; and (3) development of a distributed infrastructure which enables Web-based exchange of models to simplify the collaborative design process, and to support computationally intensive aerospace design and analysis processes. Research for the first year dealt with the design of the basic architecture and supporting infrastructure, an initial implementation of that design, and a demonstration of its application to an example aircraft engine system simulation.
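
    As a toy illustration of what a hierarchical, interoperable component model of this kind might look like, the sketch below composes two engine components into a composite that steps them in series. The class names, port names and transfer relations are invented for illustration and are not the framework's actual API.

```python
from abc import ABC, abstractmethod

class Component(ABC):
    """A simulation component exposing named output ports; composites nest components."""
    def __init__(self, name):
        self.name = name
        self.outputs = {}

    @abstractmethod
    def step(self, inputs):
        """Advance one step; returns a dict of output-port values."""

class Compressor(Component):
    def step(self, inputs):
        self.outputs = {"pressure": inputs["pressure"] * 8.0,
                        "temperature": inputs["temperature"] * 2.5}
        return self.outputs

class Burner(Component):
    def step(self, inputs):
        self.outputs = {"pressure": inputs["pressure"] * 0.95,
                        "temperature": inputs["temperature"] + 900.0}
        return self.outputs

class Engine(Component):
    """Composite component: parts connected in series, each output feeding the next input."""
    def __init__(self, name, parts):
        super().__init__(name)
        self.parts = parts

    def step(self, inputs):
        state = inputs
        for part in self.parts:
            state = part.step(state)
        self.outputs = state
        return state

engine = Engine("demo", [Compressor("comp"), Burner("burn")])
print(engine.step({"pressure": 101.3, "temperature": 288.0}))
```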

  17. Incremental online object learning in a vehicular radar-vision fusion framework

    SciTech Connect

    Ji, Zhengping; Weng, Juyang; Luciw, Matthew; Zeng, Shuqing

    2010-10-19

    In this paper, we propose an object learning system that incorporates sensory information from an automotive radar system and a video camera. The radar system provides coarse attention, focusing visual analysis on relatively small areas within the image plane. The attended visual areas are coded and learned by a 3-layer neural network utilizing what is called in-place learning, where every neuron is responsible for the learning of its own signal processing characteristics within its connected network environment, through inhibitory and excitatory connections with other neurons. The modeled bottom-up, lateral, and top-down connections in the network enable sensory sparse coding, unsupervised learning and supervised learning to occur concurrently. The presented work is applied to learn two types of encountered objects in multiple outdoor driving settings. Cross-validation results show an overall recognition accuracy above 95% for the radar-attended window images. In comparison with the uncoded representation and purely unsupervised learning (without top-down connections), the proposed network improves the recognition rate by 15.93% and 6.35%, respectively. The proposed system also compares favorably with other learning algorithms. The result indicates that our learning system is the only one to fit all the challenging criteria for the development of an incremental and online object learning system.

  18. A framework for sustainable invasive species management: environmental, social and economic objectives

    USGS Publications Warehouse

    Larson, Diane L.; Phillips-Mao, Laura; Quiram, Gina; Sharpe, Leah; Stark, Rebecca; Sugita, Shinya; Weiler, Annie

    2011-01-01

    Applying the concept of sustainability to invasive species management (ISM) is challenging but necessary, given the increasing rates of invasion and the high costs of invasion impacts and control. To be sustainable, ISM must address environmental, social, and economic factors (or "pillars") that influence the causes, impacts, and control of invasive species across multiple spatial and temporal scales. Although these pillars are generally acknowledged, their implementation is often limited by insufficient control options and significant economic and political constraints. In this paper, we outline specific objectives in each of these three "pillars" that, if incorporated into a management plan, will improve the plan's likelihood of sustainability. We then examine three case studies that illustrate how these objectives can be effectively implemented. Each pillar reinforces the others, such that the inclusion of even a few of the outlined objectives will lead to more effective management that achieves ecological goals, while generating social support and long-term funding to maintain projects to completion. We encourage agency directors and policy-makers to consider sustainability principles when developing funding schemes, management agendas, and policy.

  19. Object Creation and Human Factors Evaluation for Virtual Environments

    NASA Technical Reports Server (NTRS)

    Lindsey, Patricia F.

    1998-01-01

    The main objective of this project is to provide test objects for simulated environments utilized by the recently established Army/NASA Virtual Innovations Lab (ANVIL) at Marshall Space Flight Center, Huntsville, AL. The objective of the ANVIL lab is to provide virtual reality (VR) models and environments and to provide visualization and manipulation methods for the purpose of training and testing. Visualization equipment used in the ANVIL lab includes head-mounted and boom-mounted immersive virtual reality display devices. Objects in the environment are manipulated using a data glove, hand controller, or mouse. These simulated objects are solid or surfaced three-dimensional models. They may be viewed or manipulated from any location within the environment and may be viewed on-screen or via immersive VR. The objects are created using various CAD modeling packages and are converted into the virtual environment using dVise. This enables the object or environment to be viewed from any angle or distance for training or testing purposes.

  20. Comparative analysis of toxicological evaluations for dermal exposure performed under two different EU regulatory frameworks.

    PubMed

    Westerholm, Emma; Schenk, Linda

    2014-02-01

    Dermal exposure to chemicals is highly relevant in relation to the use of cosmetic products, both in consumers and in individuals exposed occupationally. Regulatory frameworks exist within the EU to limit the dermal exposure of the general population and workers to chemicals in general, as well as to limit the use of certain substances in cosmetic products. The objective of the study was to investigate and compare toxicological evaluations of dermal exposure performed under current regulatory frameworks. The publicly disseminated hazard information under the respective regulatory frameworks was compiled and compared for the five substances resorcinol, p-phenylenediamine, p-aminophenol, N-phenyl-p-phenylenediamine, and diethylene glycol monoethyl ether. A low consistency between evaluations was observed with respect to data coverage and cited dose descriptors. No systematic differences across all five substances were identified from the viewpoint of dermal hazard assessment. The critical effect and corresponding systemic effect dose descriptor were identical for two substances and differed somewhat (by a factor of 2-2.5) for two others. For N-phenyl-p-phenylenediamine a critical effect was only identified under REACH. PMID:24269627

  1. Smart Objects, Dumb Archives: A User-Centric, Layered Digital Library Framework

    NASA Technical Reports Server (NTRS)

    Maly, Kurt; Nelson, Michael L.; Zubair, Mohammad

    1999-01-01

    Currently, there exist a large number of superb digital libraries, all of which are, unfortunately, vertically integrated and all presenting a monolithic interface to their users. Ideally, a user would want to locate resources from a variety of digital libraries dealing with only one interface. A number of approaches to this interoperability issue exist, including: defining a universal protocol for all libraries to adhere to, or developing mechanisms to translate between protocols. The approach we illustrate in this paper is to push down the level of universal protocols to one for digital object communication and one for communication with simple archives. This approach creates the opportunity for digital library service providers to create digital libraries tailored to the needs of user communities, drawing from available archives and individual publishers who adhere to this standard. We have created a reference implementation based on the hypertext transfer protocol (http), with the protocols being derived from the Dienst protocol. We have created a special class of digital objects called buckets and a number of archives based on a NASA collection and NSF-funded projects. Starting from NCSTRL, we have developed a set of digital library services called NCSTRL+ and have created digital libraries for researchers, educators and students that can each draw on all the archives and individually created buckets.

  2. TRENCADIS - secure architecture to share and manage DICOM objects in an ontological framework based on OGSA.

    PubMed

    Blanquer, Ignacio; Hernandez, Vicente; Segrelles, Damià; Torres, Erik

    2007-01-01

    Today most European healthcare centers use the digital format for their databases of images. TRENCADIS is a software architecture comprising a set of services as a solution for interconnecting, managing and sharing selected parts of medical DICOM data for the development of training and decision support tools. The organization of the distributed information in virtual repositories is based on semantic criteria. Different groups of researchers could organize themselves to propose a Virtual Organization (VO). These VOs will be interested in specific target areas, and will share information concerning each area. Although the private part of the information to be shared will be removed, special considerations will be taken into account to avoid access by non-authorized users. This paper describes the security model implemented as part of TRENCADIS. The paper is organized as follows: first, the problem and our motivations are introduced. Section 1 defines the objectives. Section 2 presents an overview of the existing proposals per objective. Section 3 outlines the overall architecture. Section 4 describes how TRENCADIS is architected to realize the security goals discussed in the previous sections. The different security services and components of the infrastructure are briefly explained, as well as the exposed interfaces. Finally, Section 5 concludes and gives some remarks on our future work. PMID:17476054

  3. European Healthy Cities evaluation: conceptual framework and methodology.

    PubMed

    de Leeuw, Evelyne; Green, Geoff; Dyakova, Mariana; Spanswick, Lucy; Palmer, Nicola

    2015-06-01

    This paper presents the methodology, programme logic and conceptual framework that drove the evaluation of the Fifth Phase of the WHO European Healthy Cities Network. In total, 99 cities were designated progressively over the life of the phase (2009-14). The paper establishes the values, systems and aspirations that these cities sign up for, as foundations for the selection of methodology. We assert that a realist synthesis methodology, driven by a wide range of qualitative and quantitative methods, is the most appropriate perspective from which to address the wide geopolitical, demographic, population and health diversities of these cities. The paper outlines the rationale for a structured multiple case study approach, the deployment of a comprehensive questionnaire, data mining through existing databases including Eurostat, and analysis of the management information generation tools used throughout the period. Response rates were considered extremely high for this type of research. Non-response analyses are described, which show that the data are representative of cities across the spectrum of diversity. This paper provides a foundation for further analysis on specific areas of interest presented in this supplement. PMID:26069320

  4. Developing Evaluation Capacity in Extension 4-H Field Faculty: A Framework for Success

    ERIC Educational Resources Information Center

    Arnold, Mary E.

    2006-01-01

    Developing evaluation capacity in organizations is a complex and multifaceted task. This article outlines a framework for building evaluation capacity. The framework is based on four strategic methods for teaching evaluation: (a) using logic models for sound program planning, (b) providing one-on-one help, (c) facilitating small-team collaborative…

  5. A user-centred evaluation framework for the Sealife semantic web browsers

    PubMed Central

    Oliver, Helen; Diallo, Gayo; de Quincey, Ed; Alexopoulou, Dimitra; Habermann, Bianca; Kostkova, Patty; Schroeder, Michael; Jupp, Simon; Khelif, Khaled; Stevens, Robert; Jawaheer, Gawesh; Madle, Gemma

    2009-01-01

    Background Semantically-enriched browsing has enhanced the browsing experience by providing contextualised, dynamically generated Web content and quicker access to searched-for information. However, adoption of Semantic Web technologies is limited, and user perception outside the IT domain remains sceptical. Furthermore, little attention has been given to evaluating semantic browsers with real users to demonstrate the enhancements and obtain valuable feedback. The Sealife project investigates semantic browsing and its application to the life science domain. Sealife's main objective is to develop the notion of context-based information integration by extending three existing Semantic Web browsers (SWBs) to link the existing Web to the eScience infrastructure. Methods This paper describes a user-centred evaluation framework that was developed to evaluate the Sealife SWBs and that elicited feedback on users' perceptions of ease of use and information findability. Three sources of data were analysed: i) web server logs; ii) user questionnaires; and iii) semi-structured interviews; comparisons were made between each browser and a control system. Results The evaluation framework successfully elicited users' perceptions of the three distinct SWBs. The results indicate that the browser with the most mature and polished interface was rated higher for usability, and semantic links were used by the users of all three browsers. Conclusion Confirmation or contradiction of our original hypotheses with relation to SWBs is detailed, along with observations of implementation issues. PMID:19796398

  6. Loop transformations for performance and message latency hiding in parallel object-oriented frameworks

    SciTech Connect

    Bassetti, F.; Davis, K.; Quinlan, D.

    1998-09-01

    Application codes reliably achieve performance far less than the advertised capabilities of existing architectures, and this problem is worsening with increasingly-parallel machines. For large-scale numerical applications, stencil operations often impose the greater part of the computational cost, and the primary sources of inefficiency are the costs of message passing and poor cache utilization. This paper proposes and demonstrates optimizations for stencil and stencil-like computations for both serial and parallel environments that ameliorate these sources of inefficiency. Additionally, the authors argue that when stencil-like computations are encoded at a high level using object-oriented parallel array class libraries these optimizations, which are beyond the capability of compilers, may be automated.
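
    As a small illustration of the kind of loop transformation at issue (not the authors' actual implementation), the sketch below contrasts a naive 5-point Jacobi stencil sweep with a tiled (cache-blocked) version. In Python the benefit is only illustrative; the transformation matters in compiled array-class code, where the tiled traversal reuses grid points while they are still in cache.

```python
import numpy as np

def stencil_naive(a):
    """One Jacobi sweep of a 5-point stencil over the interior of a 2-D grid."""
    out = a.copy()
    for i in range(1, a.shape[0] - 1):
        for j in range(1, a.shape[1] - 1):
            out[i, j] = 0.25 * (a[i-1, j] + a[i+1, j] + a[i, j-1] + a[i, j+1])
    return out

def stencil_tiled(a, tile=64):
    """Same sweep with loop tiling: iterate over cache-sized blocks so that the
    points of a block (and their halo) are reused before being evicted."""
    out = a.copy()
    n, m = a.shape
    for ii in range(1, n - 1, tile):
        for jj in range(1, m - 1, tile):
            for i in range(ii, min(ii + tile, n - 1)):
                for j in range(jj, min(jj + tile, m - 1)):
                    out[i, j] = 0.25 * (a[i-1, j] + a[i+1, j] + a[i, j-1] + a[i, j+1])
    return out

a = np.random.default_rng(1).random((256, 256))
assert np.allclose(stencil_naive(a), stencil_tiled(a))  # same result, different traversal order
```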

  7. A software framework for microarray and gene expression object model (MAGE-OM) array design annotation

    PubMed Central

    Qureshi, Matloob; Ivens, Alasdair

    2008-01-01

    Background The MIAME and MAGE-OM standards defined by the MGED society provide a specification and implementation of a software infrastructure to facilitate the submission and sharing of data from microarray studies via public repositories. However, although the MAGE object model is flexible enough to support different annotation strategies, the annotation of array descriptions can be complex. Results We have developed a graphical Java-based application (Adamant) to assist with submission of Microarray designs to public repositories. Output of the application is fully compliant with the standards prescribed by the various public data repositories. Conclusion Adamant will allow researchers to annotate and submit their own array designs to public repositories without requiring programming expertise, knowledge of the MAGE-OM or XML. The application has been used to submit a number of ArrayDesigns to the Array Express database. PMID:18366695

  8. Object-based "dynamic cover types" - a new framework for monitoring landscape-level ecosystem change

    NASA Astrophysics Data System (ADS)

    Dronova, I.; Wang, L.; Gong, P.; Zhong, L.

    2012-12-01

    Traditional analyses of ecosystem change with remote sensing data often focus on transitions between 'static' landscape cover types. However, in dynamic landscapes with frequent disturbance, long-term surface trends may be obscured by intermediate shorter-term variation. Availability of high-quality remote sensing data is often inconsistent among change periods, which contributes to the uncertainty in change detection among 'static' classes. Alternatively, we propose Dynamic Cover Types (DCTs) to characterize highly variable areas based on their nested change regimes shaped by climate, phenology and disturbance. We define DCTs as sequences of surface transformations that have distinct temporal trajectories observable across landscapes within a given change period. To illustrate and test this concept, we combined multispectral and microwave satellite imagery to classify DCTs for a large, complex, seasonally inundated freshwater wetland in China in 2007-2008. Instead of using pixels, we mapped DCTs using object-based image analysis and supervised machine-learning algorithms to characterize common change types based on their spatial and temporal context. Spatial distributions of mapped DCTs simultaneously reflected several key drivers of wetland change, including broad-scale changes in submersion times, vegetation phenology and prevalence of plant cover, and localized fine-scale disturbance. We further examined DCT response to a hypothetical scenario of a warmer, wetter early spring by substituting spring 2008 images with 2007 ones. In this comparison, the strongest response was detected from DCTs that were closely associated with the water body and represented critical habitat for wintering migratory waterbirds in this area. Results indicate that object-based dynamic class boundaries may provide useful spatial units to highlight characteristic types of landscape change for environmental research, ecosystem monitoring and management considerations.

  9. Objective Situation Awareness Measurement Based on Performance Self-Evaluation

    NASA Technical Reports Server (NTRS)

    DeMaio, Joe

    1998-01-01

    The research was conducted in support of the NASA Safe All-Weather Flight Operations for Rotorcraft (SAFOR) program. The purpose of the work was to investigate the utility of two measurement tools developed by the British Defense Evaluation Research Agency. These tools were a subjective workload assessment scale, the DRA Workload Scale, and a situation awareness measurement tool. The situation awareness tool uses a comparison of the crew's self-evaluation of performance against actual performance in order to determine what information the crew attended to during the performance. These two measurement tools were evaluated in the context of a test of an innovative approach to alerting the crew by way of a helmet-mounted display. The situation assessment data are reported here. The performance self-evaluation metric of situation awareness was found to be highly effective. It was used to evaluate situation awareness on a tank reconnaissance task, a tactical navigation task, and a stylized task used to evaluate handling qualities. Using the self-evaluation metric, it was possible to evaluate situation awareness without exact knowledge of the relevant information in some cases, and to identify information to which the crew attended, or failed to attend, in others.

  10. Evaluation Manual for CIP Courses: Objectives and Implementation Procedures.

    ERIC Educational Resources Information Center

    Siri, Carmen

    This manual has been designed to guide courses on potato production sponsored by the International Potato Center (CIP) in Lima (Peru). It describes the CIP Course Evaluation System that is presently used and provides guidelines on how to use feedback more effectively for improving training. CIP evaluations are largely formative. The CIP focuses on…

  11. The Disposal Systems Evaluation Framework for DOE-NE

    SciTech Connect

    Blink, J A; Greenberg, H R; Halsey, W G; Jove-Colon, C; Nutt, W M; Sutton, M

    2010-12-15

    The Used Fuel Disposition (UFD) Campaign within DOE-NE is evaluating storage and disposal options for a range of waste forms and a range of geologic environments. For each waste form and geologic environment combination, there are multiple options for repository conceptual design. The Disposal Systems Evaluation Framework (DSEF) is being developed to formalize the development and documentation of options for each waste form and environment combination. The DSEF is being implemented in two parts. One part is an Excel workbook with multiple sheets. This workbook is designed to be user friendly, such that anyone within the UFD Campaign can use it as a guide to develop and document repository conceptual designs that respect thermal, geometric, and other constraints. The other part is an Access relational database file that will be centrally maintained to document the ensemble of conceptual designs developed with individual implementations of the Excel workbook. The DSEF Excel workbook includes sheets for waste form, environment, geometric constraints, engineered barrier system (EBS) design, thermal, performance assessment (PA), materials, cost, and fuel cycle system impacts. Each of these sheets guides the user through the process of developing internally consistent design options, and documenting the thought process. The sheets interact with each other to transfer information and identify inconsistencies to the user. In some cases, the sheets are stand-alone, and in other cases (such as PA), the sheets refer the user to another tool, with the user being responsible to transfer summary results into the DSEF sheet. Finally, the DSEF includes three top-level sheets: inputs & results, interface parameters, and knowledge management (references). These sheets enable users and reviewers to see the overall picture on only a few summary sheets, while developing the design option systematically using the detailed sheets. The DSEF Access relational database file collects the key

  12. Evaluation Primary School Students' Achievement of Objectives in English Lessons

    ERIC Educational Resources Information Center

    Erkan, Senem Seda Sahenk

    2015-01-01

    The problem statement of this survey is "How far are the specific objectives of English courses achieved by the primary students (4-5 grades) recently in Istanbul?" "Does the first stage state primary school students' achievement level of the specific English courses differ according to students' personal characteristics? Survey…

  13. Evaluating Object-Based Image Analysis on Glacial Micromorphology

    NASA Astrophysics Data System (ADS)

    Chin, K. S.; Sjogren, D. B.

    2007-12-01

    Micromorphology has recently been applied more often in analyzing glacial sediments at a microscopic level. It provides additional information and details that may help to explain glacial processes in areas where macro-scale observations cannot yield sufficient information. However, the process of interpreting thin sections has been very subjective, and reaching general consensus about glacial processes is difficult. Remote sensing technology is increasingly helpful in the development and advancement of many sciences; the concepts of object cognition that lie behind this technology, used in other fields such as landscape ecology, can be applied to micromorphology. Similar to what has been done in landscape ecology in the past, automating the process of interpreting objects in glacial sediments may potentially simplify the process and decrease its subjectivity. Definiens Professional 5 is an object-based image analysis program that imitates human cognitive methods; it is used in this study to identify objects apart from background matrices in multiple thin section images of glacial sediments. The program's initial results showed that more work is needed for better results, but overall the software produced promising results. The method is repeatable and continues to generate consistent results with no bias or ambiguity, so the application of this method to micromorphology and similar areas will be valuable.

  14. Evaluating the Use of Learning Objects for Improving Calculus Readiness

    ERIC Educational Resources Information Center

    Kay, Robin; Kletskin, Ilona

    2010-01-01

    Pre-calculus concepts such as working with functions and solving equations are essential for students to explore limits, rates of change, and integrals. Yet many students have a weak understanding of these key concepts which impedes performance in their first year university Calculus course. A series of online learning objects was developed to…

  15. Hubba: hub objects analyzer--a framework of interactome hubs identification for network biology.

    PubMed

    Lin, Chung-Yen; Chin, Chia-Hao; Wu, Hsin-Hung; Chen, Shu-Hwa; Ho, Chin-Wen; Ko, Ming-Tat

    2008-07-01

    One major task in the post-genome era is to reconstruct proteomic and genomic interacting networks using high-throughput experiment data. Identifying essential nodes/hubs in these interactomes is a way to decipher the critical keys inside biochemical pathways or complex networks. These essential nodes/hubs may serve as potential drug targets for developing novel therapies for human diseases, such as cancer or infectious diseases caused by emerging pathogens. Hub Objects Analyzer (Hubba) is a web-based service for exploring important nodes in an interactome network generated from specific small- or large-scale experimental methods, based on graph theory. Two characteristic analysis algorithms, Maximum Neighborhood Component (MNC) and Density of Maximum Neighborhood Component (DMNC), are developed for exploring and identifying hubs/essential nodes in interactome networks. Users can submit their own interaction data in PSI format (Proteomics Standards Initiative, versions 2.5 and 1.0), tab format, or tab format with weight values. Users will receive an email notification when the calculation is complete, in minutes or hours depending on the size of the submitted dataset. The Hubba result includes a rank given by a composite index, a graph of the network showing the relationships among these hubs, and links for retrieving output files. The proposed method (DMNC || MNC) can be applied to discover previously unrecognized hubs in existing datasets. For example, most of the high-ranked Hubba hubs (80% in the top 10 hub list, and >70% in the top 40 hub list) from the yeast protein interactome data (Y2H experiment) are reported as essential proteins. Since the analysis methods of Hubba are based on topology, it can also be used on other kinds of networks to explore essential nodes, such as networks in yeast, rat, mouse and human. The website of Hubba is freely available at http://hub.iis.sinica.edu.tw/Hubba. PMID:18503085
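
    The abstract names the two scoring algorithms but not their formulas. The sketch below shows a common reading of MNC, the size of the largest connected component of the subgraph induced by a node's neighbours, computed with networkx on a toy graph; the density-based variant (DMNC) is omitted because its exact normalisation is not given in the text.

```python
import networkx as nx

def mnc(graph, node):
    """Maximum Neighborhood Component: size of the largest connected component
    of the subgraph induced by the neighbours of `node` (node itself excluded).
    Follows a common description of MNC, assumed here for illustration."""
    neighbours = list(graph.neighbors(node))
    if not neighbours:
        return 0
    sub = graph.subgraph(neighbours)
    return max(len(component) for component in nx.connected_components(sub))

# Tiny toy interactome
g = nx.Graph([("A", "B"), ("A", "C"), ("B", "C"), ("A", "D"), ("D", "E")])
ranking = sorted(g.nodes, key=lambda v: mnc(g, v), reverse=True)
print({v: mnc(g, v) for v in ranking})
```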

  16. A Conceptual Framework for How Evaluators Make Everyday Practice Decisions

    ERIC Educational Resources Information Center

    Kundin, Delia M.

    2010-01-01

    How do evaluators make decisions about how to approach an evaluation in their everyday practice? What are the bases for evaluators' approach choices? In what ways do evaluators think about evaluation models? The evaluation literature remains unclear about what specific information evaluators consider when making decisions in response to everyday…

  17. Evaluating the objective structured long examination record for nurse education.

    PubMed

    Traynor, Marian; Galanouli, Despina; Rice, Billiejoan; Lynn, Fiona

    2016-06-23

    An objective structured long examination record (OSLER) is a modification of the long-case clinical examination and is mainly used in medical education. This study aims to obtain nursing students' views of the OSLER compared with the objective structured clinical examination (OSCE), which is used to assess discrete clinical skills. A sample of third-year undergraduate nursing students (n=21) volunteered to participate from a cohort of 230 students. Participants undertook the OSLER under examination conditions. Pre- and post-test questionnaires gathered the students' views on the assessments, and these were analysed from a mainly qualitative perspective. Teachers' and simulated patients' views were also used for data triangulation. The findings indicate that the OSLER ensures more holistic assessment of a student's clinical skills, particularly essential skills such as communication, and that the OSLER, together with the OSCE, should be used to supplement the assessment of clinical competence in nursing education. PMID:27345072

  18. A Psychometric Evaluation of an Advanced Pharmacy Practice Experience Clinical Competency Framework

    PubMed Central

    Doty, Randell E.; Nemire, Ruth E.

    2015-01-01

    Objective. To assess the psychometric properties of the clinical competency framework known as the System of Universal Clinical Competency Evaluation in the Sunshine State (SUCCESS), including its internal consistency and content, construct, and criterion validity. Methods. Sub-competency items within each hypothesized competency pair were subjected to principal components factor analysis to demonstrate convergent and discriminant validity. Varimax rotation was conducted for each competency pair (eg, competency 1 vs competency 2, competency 1 vs competency 3, competency 2 vs competency 3). Internal consistency was evaluated using Cronbach alpha. Results. Of the initial 78 pairings, 44 (56%) demonstrated convergent and discriminant validity. Five pairs of competencies were unidimensional. Of the 34 pairs where at least 1 competency was multidimensional, most (91%) were from competencies 7, 11, and 12, indicating modifications were warranted in those competencies. After reconfiguring the competencies, 76 (94%) of the 81 pairs resulted in 2 factors as required. A unidimensional factor emerged when all 13 of the competencies were entered into a factor analysis. The internal consistency of all of the competencies was satisfactory. Conclusion. Psychometric evaluation shows the SUCCESS framework demonstrates adequate reliability and validity for most competencies. However, it also provides guidance where improvements are needed as part of a continuous quality improvement program. PMID:25861100
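
    A minimal sketch of the internal-consistency step described above: Cronbach's alpha computed from a respondents-by-items score matrix. The data are synthetic, and the factor-analytic steps with varimax rotation are not reproduced here.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_var / total_var)

# Synthetic ratings: 40 students rated on 6 sub-competency items (1-5 scale)
rng = np.random.default_rng(7)
ability = rng.normal(3.5, 0.6, size=(40, 1))
items = np.clip(np.round(ability + rng.normal(0, 0.5, size=(40, 6))), 1, 5)

print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")
```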

  19. The Aerosol Modeling Testbed: A community tool to objectively evaluate aerosol process modules

    SciTech Connect

    Fast, Jerome D.; Gustafson, William I.; Chapman, Elaine G.; Easter, Richard C.; Rishel, Jeremy P.; Zaveri, Rahul A.; Grell, Georg; Barth, Mary

    2011-03-02

    This study describes a new modeling paradigm that significantly advances how the third activity is conducted while also fully exploiting data and findings from the first two activities. The Aerosol Modeling Testbed (AMT) is a computational framework for the atmospheric sciences community that streamlines the process of testing and evaluating aerosol process modules over a wide range of spatial and temporal scales. The AMT consists of a fully coupled meteorology-chemistry-aerosol model and a suite of tools to evaluate the performance of aerosol process modules via comparison with a wide range of field measurements. The philosophy of the AMT is to systematically and objectively evaluate aerosol process modules over local to regional spatial scales that are compatible with the measurement strategies of most field campaigns. The performance of new treatments can then be quantified and compared to existing treatments before they are incorporated into regional and global climate models. Since the AMT is a community tool, it also provides a means of enhancing collaboration and coordination among aerosol modelers.

  20. RISE Evaluation and Development System: Student Learning Objectives Handbook

    ERIC Educational Resources Information Center

    Indiana Department of Education, 2016

    2016-01-01

    With the help of teachers and leaders throughout the state, the Indiana Department of Education has developed an optional model teacher evaluation system named RISE. Whether corporations choose to adopt RISE or a model of their own, the department's goal is to assist corporations in developing or adopting models that both comply with IC 20-28-11.5…

  1. Behavioral Treatment of Menopausal Hot Flashes: Evaluation by Objective Methods.

    ERIC Educational Resources Information Center

    Germaine, Leonard M.; Freedman, Robert R.

    1984-01-01

    Used latency to hot flash onset under heat stress to evaluate the effects of relaxation treatment or a control procedure in 14 menopausal women. Following treatment, the latency to hot flash onset during heat stress was increased in relaxation subjects. Reported symptom frequency was significantly reduced in relaxation subjects. (BH)

  2. An Application of the Impact Evaluation Process for Designing a Performance Measurement and Evaluation Framework in K-12 Environments

    ERIC Educational Resources Information Center

    Guerra-Lopez, Ingrid; Toker, Sacip

    2012-01-01

    This article illustrates the application of the Impact Evaluation Process for the design of a performance measurement and evaluation framework for an urban high school. One of the key aims of this framework is to enhance decision-making by providing timely feedback about the effectiveness of various performance improvement interventions. The…

  3. Evaluation methods for retrieving information from interferograms of biomedical objects

    NASA Astrophysics Data System (ADS)

    Podbielska, Halina; Rottenkolber, Matthias

    1996-04-01

    Interferograms in the form of fringe patterns can be produced in two-beam interferometers, holographic or speckle interferometers, in setups realizing moire techniques, or in deflectometers. Optical metrology based on the principle of interference can be applied as a testing tool in biomedical research. By analyzing the fringe pattern images, information about the shape or mechanical behavior of the object under study can be retrieved. Here, some of the techniques for creating fringe pattern images are presented, along with methods of analysis; intensity-based analysis as well as phase measurement methods are mentioned. Applications of interferometric methods, especially in the fields of experimental orthopedics, endoscopy and ophthalmology, are pointed out.

  4. A Framework for the Evaluation of CASE Tool Learnability in Educational Environments

    ERIC Educational Resources Information Center

    Senapathi, Mali

    2005-01-01

    The aim of the research is to derive a framework for the evaluation of Computer Aided Software Engineering (CASE) tool learnability in educational environments. Drawing from the literature of Human Computer Interaction and educational research, a framework for evaluating CASE tool learnability in educational environments is derived. The two main…

  5. Emerging technologies with potential for objectively evaluating speech recognition skills.

    PubMed

    Rawool, Vishakha Waman

    2016-01-01

    Work-related exposure to noise and other ototoxins can cause damage to the cochlea, synapses between the inner hair cells, the auditory nerve fibers, and higher auditory pathways, leading to difficulties in recognizing speech. Procedures designed to determine speech recognition scores (SRS) in an objective manner can be helpful in disability compensation cases where the worker claims to have poor speech perception due to exposure to noise or ototoxins. Such measures can also be helpful in determining SRS in individuals who cannot provide reliable responses to speech stimuli, including patients with Alzheimer's disease, traumatic brain injuries, and infants with and without hearing loss. Cost-effective neural monitoring hardware and software is being rapidly refined due to the high demand for neurogaming (games involving the use of brain-computer interfaces), health, and other applications. More specifically, two related advances in neuro-technology include relative ease in recording neural activity and availability of sophisticated analysing techniques. These techniques are reviewed in the current article and their applications for developing objective SRS procedures are proposed. Issues related to neuroaudioethics (ethics related to collection of neural data evoked by auditory stimuli including speech) and neurosecurity (preservation of a person's neural mechanisms and free will) are also discussed. PMID:26807789

  6. Evaluation of the Performance of Routine Information System Management (PRISM) framework: evidence from Uganda

    PubMed Central

    2010-01-01

    Background Sound policy, resource allocation and day-to-day management decisions in the health sector require timely information from routine health information systems (RHIS). In most low- and middle-income countries, the RHIS is viewed as being inadequate in providing quality data and continuous information that can be used to help improve health system performance. In addition, there is limited evidence on the effectiveness of RHIS strengthening interventions in improving data quality and use. The purpose of this study is to evaluate the usefulness of the newly developed Performance of Routine Information System Management (PRISM) framework, which consists of a conceptual framework and associated data collection and analysis tools to assess, design, strengthen and evaluate RHIS. The specific objectives of the study are: a) to assess the reliability and validity of the PRISM instruments and b) to assess the validity of the PRISM conceptual framework. Methods Facility- and worker-level data were collected from 110 health care facilities in twelve districts in Uganda in 2004 and 2007 using records reviews, structured interviews and self-administered questionnaires. The analysis procedures include Cronbach's alpha to assess internal consistency of selected instruments, test-retest analysis to assess the reliability and sensitivity of the instruments, and bivariate and multivariate statistical techniques to assess validity of the PRISM instruments and conceptual framework. Results Cronbach's alpha analysis suggests high reliability (0.7 or greater) for the indices measuring a promotion of a culture of information, RHIS tasks self-efficacy and motivation. The study results also suggest that a promotion of a culture of information influences RHIS tasks self-efficacy, RHIS tasks competence and motivation, and that self-efficacy and the presence of RHIS staff have a direct influence on the use of RHIS information, a key aspect of RHIS performance. Conclusions The study

  7. Objective and subjective evaluation of the acoustic comfort in classrooms.

    PubMed

    Zannin, Paulo Henrique Trombetta; Marcon, Carolina Reich

    2007-09-01

    The acoustic comfort of classrooms in a Brazilian public school has been evaluated through interviews with 62 teachers and 464 pupils, and through measurements of background noise, reverberation time, and sound insulation. The acoustic measurements revealed the poor acoustic quality of the classrooms. Results show that teachers and pupils consider the noise generated in neighboring classrooms, and the voice of the teacher there, to be the main sources of annoyance inside the classroom. Acoustic simulations led to the suggestion of placing perforated plywood on the ceiling to reduce reverberation time and increase the acoustic comfort of the classrooms. PMID:17202022
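
    The abstract reports the simulation result only qualitatively. As a back-of-the-envelope illustration of why an absorptive ceiling shortens reverberation time, the sketch below applies the classical Sabine formula, RT60 = 0.161 V / A, to a hypothetical classroom before and after adding a perforated-panel ceiling; all dimensions and absorption coefficients are assumed and are not taken from the study.

```python
# Sabine estimate: RT60 = 0.161 * V / A, with A the total absorption in sabins (m^2)
volume = 7.0 * 8.0 * 3.0          # hypothetical classroom volume, m^3

# Rough absorption budget as (area_m2, absorption_coefficient) pairs - assumed values
surfaces_before = [(56.0, 0.02),  # bare concrete ceiling
                   (56.0, 0.05),  # floor
                   (90.0, 0.03),  # walls and windows
                   (25.0, 0.40)]  # occupants and furnishings

# Replace only the ceiling entry with an absorptive perforated panel
surfaces_after = [(56.0, 0.60) if i == 0 else s
                  for i, s in enumerate(surfaces_before)]

def rt60(surfaces, v):
    a = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * v / a

print(f"RT60 before treatment: {rt60(surfaces_before, volume):.2f} s")
print(f"RT60 after treatment:  {rt60(surfaces_after, volume):.2f} s")
```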

  8. A scalable portable object-oriented framework for parallel multisensor data-fusion applications in HPC systems

    NASA Astrophysics Data System (ADS)

    Gupta, Pankaj; Prasad, Guru

    2004-04-01

    Multi-sensor data fusion is the synergistic integration of multiple data sets. Data fusion includes processes for aligning, associating and combining data and information in estimating and predicting the state of objects, their relationships, and characterizing situations and their significance. The combination of complex data sets and the need for real-time data storage and retrieval compounds the data fusion problem. The systematic development and use of data fusion techniques are particularly critical in applications requiring massive, diverse, ambiguous, and time-critical data. Such conditions are characteristic of new emerging requirements, e.g., network-centric and information-centric warfare, low-intensity conflicts such as special operations, counter-narcotics, antiterrorism, information operations and CALOW (Conventional Arms, Limited Objectives Warfare), and economic and political intelligence. In this paper, Aximetric presents a novel, scalable, object-oriented, metamodel framework for a parallel, cluster-based data-fusion engine on High Performance Computing (HPC) systems. The data-clustering algorithms provide a fast, scalable technique to sift through massive, complex data sets coming through multiple streams in real time. The load-balancing algorithm provides the capability to evenly distribute the workload among processors on the fly and achieve real-time scalability. The proposed data-fusion engine exploits unique data structures for fast storage, retrieval and interactive visualization of the multiple data streams.
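
    The abstract mentions a load-balancing algorithm that distributes work evenly among processors on the fly; the engine's actual algorithm is not described, so the sketch below shows one generic greedy approach (longest-processing-time first) purely as an illustration of the idea, with hypothetical per-batch costs.

```python
import heapq

def balance(task_costs, n_workers):
    """Greedy longest-processing-time-first assignment of task costs to workers.
    Returns per-worker total load and the assignment; illustrative stand-in only."""
    heap = [(0.0, w) for w in range(n_workers)]   # (current load, worker id)
    heapq.heapify(heap)
    assignment = {w: [] for w in range(n_workers)}
    for cost in sorted(task_costs, reverse=True):  # heaviest tasks first
        load, w = heapq.heappop(heap)              # least-loaded worker so far
        assignment[w].append(cost)
        heapq.heappush(heap, (load + cost, w))
    loads = {w: sum(ts) for w, ts in assignment.items()}
    return loads, assignment

# e.g. per-batch processing costs estimated from incoming sensor streams (hypothetical)
loads, _ = balance([9.0, 7.5, 6.0, 4.2, 4.0, 3.3, 1.1, 0.9], n_workers=3)
print(loads)
```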

  9. Objective evaluation of the morphology of human epididymal sperm heads.

    PubMed

    Soler, C; Pérez-Sánchez, F; Schulze, H; Bergmann, M; Oberpenning, F; Yeung, C; Cooper, T G

    2000-04-01

    Spermatozoa were obtained from nine epididymal regions of six epididymides taken from five men undergoing castration for prostatic carcinoma (53-76 years) and from one man with testicular cancer (38 years). Spermatozoa were obtained by mincing tissue in phosphate-buffered saline, making air-dried smears and staining with Hemacolor. The percentage of sperm heads categorised subjectively as normal (of uniform shape) or otherwise was calculated for each region. This confirmed that grossly swollen sperm heads (previously shown to be artefacts) were only present in proximal regions of the duct. A computer-aided sperm morphology analyser (Sperm Class Analyzer®) was used to provide objective measurements of sperm head area, perimeter, length and width of the uniform sperm heads and revealed that there was a gradual and statistically significant decline in sperm head size upon maturation, as occurs in other species. There was no significant difference between the morphometric parameters of spermatozoa obtained from the distal cauda epididymis and those obtained from the ejaculates of young normozoospermic patients. PMID:10762433

  10. The conceptual framework of the International Tobacco Control (ITC) Policy Evaluation Project

    PubMed Central

    Fong, G T; Cummings, K M; Borland, R; Hastings, G; Hyland, A; Giovino, G A; Hammond, D; Thompson, M E

    2006-01-01

    This paper describes the conceptual model that underlies the International Tobacco Control Policy Evaluation Project (ITC Project), whose mission is to measure the psychosocial and behavioural impact of key policies of the Framework Convention on Tobacco Control (FCTC) among adult smokers, and in some countries, among adult non‐smokers and among youth. The evaluation framework utilises multiple country controls, a longitudinal design, and a pre‐specified, theory‐driven conceptual model to test hypotheses about the anticipated effects of specific policies. The ITC Project consists of parallel prospective cohort surveys of representative samples of adult smokers currently in nine countries (inhabited by over 45% of the world's smokers), with other countries being added in the future. Collectively, the ITC Surveys constitute the first‐ever international cohort study of tobacco use. The conceptual model of the ITC Project draws on the psychosocial and health communication literature and assumes that tobacco control policies influence tobacco related behaviours through a causal chain of psychological events, with some variables more closely related to the policy itself (policy‐specific variables) and other variables that are more downstream from the policy, which have been identified by health behaviour and social psychological theories as being important causal precursors of behaviour (psychosocial mediators). We discuss the objectives of the ITC Project and its potential for building the evidence base for the FCTC. PMID:16754944

  11. Orchestration in Learning Technology Research: Evaluation of a Conceptual Framework

    ERIC Educational Resources Information Center

    Prieto, Luis P.; Dimitriadis, Yannis; Asensio-Pérez, Juan I.; Looi, Chee-Kit

    2015-01-01

    The term "orchestrating learning" is being used increasingly often, referring to the coordination activities performed while applying learning technologies to authentic settings. However, there is little consensus about how this notion should be conceptualised, and what aspects it entails. In this paper, a conceptual framework for…

  12. Defining competency-based evaluation objectives in family medicine

    PubMed Central

    Lawrence, Kathrine; Allen, Tim; Brailovsky, Carlos; Crichton, Tom; Bethune, Cheri; Donoff, Michel; Laughlin, Tom; Wetmore, Stephen; Carpentier, Marie-Pierre; Visser, Shaun

    2011-01-01

    Abstract Objective To develop key features for priority topics previously identified by the College of Family Physicians of Canada that, together with skill dimensions and phases of the clinical encounter, broadly describe competence in family medicine. Design Modified nominal group methodology, which was used to develop key features for each priority topic through an iterative process. Setting The College of Family Physicians of Canada. Participants An expert group of 7 family physicians and 1 educational consultant, all of whom had experience in assessing competence in family medicine. Group members represented the Canadian family medicine context with respect to region, sex, language, community type, and experience. Methods The group used a modified Delphi process to derive a detailed operational definition of competence, using multiple iterations until consensus was achieved for the items under discussion. The group met 3 to 4 times a year from 2000 to 2007. Main findings The group analyzed 99 topics and generated 773 key features. There were 2 to 20 (average 7.8) key features per topic; 63% of the key features focused on the diagnostic phase of the clinical encounter. Conclusion This project expands previous descriptions of the process of generating key features for assessment, and removes this process from the context of written examinations. A key-features analysis of topics focuses on higher-order cognitive processes of clinical competence. The project did not define all the skill dimensions of competence to the same degree, but it clearly identified those requiring further definition. This work generates part of a discipline-specific, competency-based definition of family medicine for assessment purposes. It limits the domain for assessment purposes, which is an advantage for the teaching and assessment of learners. A validation study on the content of this work would ensure that it truly reflects competence in family medicine. PMID:21998245

  13. The objective evaluation of obstructive pulmonary diseases with spirometry.

    PubMed

    Ozkaya, Sevket; Dirican, Adem; Tuna, Tibel

    2016-01-01

    Airway obstruction is variable in asthma, while it is progressive and persistent in chronic bronchitis and emphysema. However, some of the patients presenting with symptoms of chronic airway diseases have clinical features of both asthma and COPD. The group with "Asthma-COPD Overlap Syndrome" (ACOS) phenotype was characterized by definitely irreversible airway obstruction accompanied by symptoms and signs of reversibility. In this study, we aimed to classify obstructive airway diseases by clinical, radiological, and pulmonary function tests. Patients at Samsun Medical Park Hospital Chest Diseases outpatient clinic were evaluated between January 2013 and April 2016, and a total of 235 patients were included in this study. Mean age of the patients was 55.3±14.5 (15-88) years, and the male/female ratio was 45/190. The baseline pulmonary function test results of the patients were as follows: mean forced vital capacity (FVC) values 2,825±1,108 (710-6,870) mL and 74.3±22.4 (24-155)%, forced expiratory volume in 1 second (FEV1) values 1,789±774 (480-4,810) mL and 58.1±20.0 (20-130)%, FEV1/FVC values 62.5±6.8 (39-70)%. Reversibility criteria following bronchodilator treatment were present in 107 (45.5%) patients. We specified five subgroups for patients according to their clinical, radiological, and pulmonary test findings, namely Group 1 (asthma), Group 2 (ACOS), Group 3 (chronic bronchitis), and Group 4 (emphysema). Additionally, a group of patients who had clinical and spirometric features of both asthma and chronic bronchitis in association with underlying emphysema (emphysema with chronic bronchitis and emphysema with asthma) was defined as the undifferentiated obstruction (UNDO) group. Number and percentage distribution of patients by groups were 58 (24.7%) in the asthma group, 70 (29.8%) in the ACOS group, 61 (26%) in the chronic bronchitis group, 32 (13.6%) in the emphysema group, and 14 (6%) in the UNDO group. In conclusion, in our study, the types of

  14. The objective evaluation of obstructive pulmonary diseases with spirometry

    PubMed Central

    Ozkaya, Sevket; Dirican, Adem; Tuna, Tibel

    2016-01-01

    Airway obstruction is variable in asthma, while it is progressive and persistent in chronic bronchitis and emphysema. However, some of the patients presenting with symptoms of chronic airway diseases have clinical features of both asthma and COPD. The group with “Asthma–COPD Overlap Syndrome” (ACOS) phenotype was characterized by definitely irreversible airway obstruction accompanied by symptoms and signs of reversibility. In this study, we aimed to classify obstructive airway diseases by clinical, radiological, and pulmonary function tests. Patients at Samsun Medical Park Hospital Chest Diseases outpatient clinic were evaluated between January 2013 and April 2016, and a total of 235 patients were included in this study. Mean age of the patients was 55.3±14.5 (15–88) years, and the male/female ratio was 45/190. The baseline pulmonary function test results of the patients were as follows: mean forced vital capacity (FVC) values 2,825±1,108 (710–6,870) mL and 74.3±22.4 (24–155)%, forced expiratory volume in 1 second (FEV1) values 1,789±774 (480–4,810) mL and 58.1±20.0 (20–130)%, FEV1/FVC values 62.5±6.8 (39–70)%. Reversibility criteria following bronchodilator treatment were present in 107 (45.5%) patients. We specified five subgroups for patients according to their clinical, radiological, and pulmonary test findings, namely Group 1 (asthma), Group 2 (ACOS), Group 3 (chronic bronchitis), and Group 4 (emphysema). Additionally, a group of patients who had clinical and spirometric features of both asthma and chronic bronchitis in association with underlying emphysema (emphysema with chronic bronchitis and emphysema with asthma) was defined as the undifferentiated obstruction (UNDO) group. Number and percentage distribution of patients by groups were 58 (24.7%) in the asthma group, 70 (29.8%) in the ACOS group, 61 (26%) in the chronic bronchitis group, 32 (13.6%) in the emphysema group, and 14 (6%) in the UNDO group. In conclusion, in our study
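
    To make the grouping logic above concrete, here is a minimal rule-based sketch in Python. The thresholds and decision rules (FEV1/FVC below 70%, the use of CT emphysema findings and bronchitis symptoms) are illustrative assumptions made only for this listing, not the authors' published classification criteria, and the function name `classify_obstruction` is hypothetical.

```python
# Minimal sketch: rule-based grouping of obstructive airway disease from spirometry,
# loosely following the abstract's five subgroups. Thresholds and group logic are
# illustrative assumptions, not the study's published criteria.

def classify_obstruction(fev1_fvc_pct, reversible, emphysema_on_ct, chronic_bronchitis_symptoms):
    """Return one of: 'asthma', 'ACOS', 'chronic bronchitis', 'emphysema', 'UNDO'."""
    if fev1_fvc_pct >= 70:
        return "no fixed obstruction"          # outside the study's obstructive groups
    if emphysema_on_ct and (reversible or chronic_bronchitis_symptoms):
        return "UNDO"                          # undifferentiated obstruction
    if reversible and not emphysema_on_ct:
        # reversibility on top of fixed baseline obstruction -> overlap phenotype
        return "ACOS" if chronic_bronchitis_symptoms else "asthma"
    if emphysema_on_ct:
        return "emphysema"
    return "chronic bronchitis"

print(classify_obstruction(62, reversible=True, emphysema_on_ct=False,
                           chronic_bronchitis_symptoms=False))   # -> asthma
```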

  15. Measuring the Impact of a Moving Target: Towards a Dynamic Framework for Evaluating Collaborative Adaptive Interactive Technologies

    PubMed Central

    Witteman, Holly; Bender, Jacqueline L; Urowitz, Sara; Wiljer, David; Jadad, Alejandro R

    2009-01-01

    Background Website evaluation is a key issue for researchers, organizations, and others responsible for designing, maintaining, endorsing, approving, and/or assessing the use and impact of interventions designed to influence health and health services. Traditionally, these evaluations have included elements such as content credibility, interface usability, and overall design aesthetics. With the emergence of collaborative, adaptive, and interactive ("Web 2.0") technologies such as wikis and other forms of social networking applications, these metrics may no longer be sufficient to adequately assess the quality, use or impact of a health website. Collaborative, adaptive, interactive applications support different ways for people to interact with health information on the Web, including the potential for increased user participation in the design, creation, and maintenance of such sites. Objective We propose a framework that addresses how to evaluate collaborative, adaptive, and interactive applications. Methods In this paper, we conducted a comprehensive review of a variety of databases using terminology related to this area. Results We present a review of evaluation frameworks and also propose a framework that incorporates collaborative, adaptive, and interactive technologies, grounded in evaluation theory. Conclusion This framework can be applied by researchers who wish to compare Web-based interventions, by non-profit organizations and clinical groups that aim to provide health information and support about a particular health concern via the Web, and by agencies making decisions about funding grants that are interested in the role of social networks and collaborative, adaptive, and interactive technologies to improve health and the health system. PMID:19632973

  16. Objective Evaluation of Sensor Web Modeling and Data System Architectures

    NASA Astrophysics Data System (ADS)

    Seablom, M. S.; Atlas, R. M.; Ardizzone, J.; Kemp, E. M.; Talabac, S.

    2013-12-01

    We discuss the recent development of an end-to-end simulator designed to quantitatively assess the scientific value of incorporating model- and event-driven "sensor web" capabilities into future NASA Earth Science missions. The intent is to provide an objective analysis tool for performing engineering and scientific trade studies in which new technologies are introduced. In the case study presented here we focus on meteorological applications in which a numerical model is used to intelligently schedule data collection by space-based assets. Sensor web observing systems that enable dynamic targeting by various observing platforms have the potential to significantly improve our ability to monitor, understand, and predict the evolution of rapidly evolving, transient, or variable meteorological events. The use case focuses on landfalling hurricanes and was selected due to the obvious societal impact and the ongoing need to improve warning times. Although hurricane track prediction has improved over the past several decades, further improvement is necessary in the prediction of hurricane intensity. We selected a combination of future observing platforms to apply sensor web measurement techniques: global 3D lidar winds, next-generation scatterometer ocean vector winds, and high resolution cloud motion vectors from GOES-R. Targeting of the assets by a numerical model would allow the spacecraft to change its attitude by performing a roll maneuver to enable off-nadir measurements to be acquired. In this study, synthetic measurements were derived through Observing System Simulation Experiments (OSSEs) and enabled in part through the Doppler Lidar Simulation Model developed by Simpson Weather Associates. We describe the capabilities of the simulator through three different sensor web configurations of the wind lidar: winds obtained from a nominal "survey mode" operation, winds obtained with a reduced duty cycle of the lidar (designed for preserving the life of the instrument

  17. Objective evaluation of generic-specific drug information.

    PubMed

    Iijima, Hisashi; Koshimizu, Toshimasa; Shiragami, Makoto

    2007-03-01

    Growth in the use of generic drugs remains flat in Japan, and one of the reasons cited is information availability. We previously showed that the amount of information available on generic drugs differs greatly from one pharmaceutical industry to another, though, on average, it is inferior to that for original, brand name drugs. This report looks at information on individual generic drug products, rather than the active ingredients contained therein. In May 2004, we studied ingredients sold by at least 20 pharmaceutical industries. Here, for the same, particular ingredient, we evaluate current availability of generic-specific information (as of August 2005), as well as change over time. On the basis of ingredient, the amount of information provided for generic drugs is 31.1±17.5–57.3±11.7% that for the corresponding original drugs (Mean±S.D.), but in the company-by-company comparison, a large dispersion of 16.6±5.0–69.4±11.9% (Mean±S.D.) is observed. In terms of information content, generic drugs provided less than 50% as much information on "drug interactions", "clinical efficacy", and "outline of side effects", as that for original drugs. The difference between generic and original drugs was smaller in comparisons focusing on information specific to generics than on those including all drug information. Our study also revealed that, over time, some pharmaceutical industries have added to the amount of information provided. When information is a deciding factor, the quantity available at the current time is not the only relevant aspect; it is best to select a pharmaceutical industry that is proactive about supplementing information post-release. PMID:17329940

  18. Teaching Evaluation from an Experiential Framework: Connecting Theory and Organizational Development with Grant Making

    ERIC Educational Resources Information Center

    Kelly, Melissa A.; Kaczynski, Dan

    2008-01-01

    The authors present an approach for educating future evaluators by connecting evaluation theory and practice, organizational development, and grant making through experiential learning. They position this discussion on the conceptual framework of a newly developed graduate-level evaluation course, Advanced Program Development and Evaluation, which…

  19. Laboratory evaluation of dynamic traffic assignment systems: Requirements, framework, and system design

    SciTech Connect

    Miaou, S.-P.; Pillai, R.S.; Summers, M.S.; Rathi, A.K.; Lieu, H.C.

    1997-01-01

    The success of Advanced Traveler Information Systems (ATIS) and Advanced Traffic Management Systems (ATMS) depends on the availability and dissemination of timely and accurate estimates of current and emerging traffic network conditions. Real-time Dynamic Traffic Assignment (DTA) systems are being developed to provide the required timely information. The DTA systems will provide faithful and coherent real-time, pre-trip, and en-route guidance/information which includes routing, mode, and departure time suggestions for use by travelers, ATIS, and ATMS. To ensure the credibility and deployment potential of such DTA systems, an evaluation system supporting all phases of DTA system development has been designed and presented in this paper. This evaluation system is called the DTA System Laboratory (DSL). A major component of the DSL is a ground-truth simulator, the DTA Evaluation System (DES). The DES is envisioned to be a virtual representation of a transportation system in which ATMS and ATIS technologies are deployed. It simulates the driving and decision-making behavior of travelers in response to ATIS and ATMS guidance, information, and control. This paper presents the major evaluation requirements for DTA systems, a modular modeling framework for the DES, and a distributed DES design. The modeling framework for the DES is modular, meets the requirements, can be assembled using both legacy and independently developed modules, and can be implemented as either a single process or a distributed system. The distributed design is extendible, provides for the optimization of distributed performance, and supports object-oriented design within each distributed component. A status report on the development of the DES and other research applications is also provided.
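
    The abstract stresses that the DES modeling framework is modular, can absorb legacy and independently developed modules, and can run either as a single process or distributed. A minimal sketch of such a pluggable module interface is shown below; the module names (`TravelerBehavior`, `NetworkLoader`) and the toy update rules are hypothetical illustrations, not the actual DSL/DES design.

```python
# Illustrative sketch of a modular, pluggable simulation framework in the spirit of
# the DES design described above (module names and update rules are hypothetical).
from abc import ABC, abstractmethod

class SimModule(ABC):
    """A legacy or independently developed module plugged into the simulator."""
    @abstractmethod
    def step(self, state: dict, dt: float) -> dict:
        """Advance the module one time step and return the updated shared state."""

class TravelerBehavior(SimModule):
    def step(self, state, dt):
        # re-route a fraction of travelers in response to guidance (toy rule)
        state["diverted"] = 0.1 * state.get("guided", 0)
        return state

class NetworkLoader(SimModule):
    def step(self, state, dt):
        state["link_flow"] = state.get("demand", 0) - state.get("diverted", 0)
        return state

def run(modules, state, steps, dt=1.0):
    for _ in range(steps):
        for m in modules:              # single-process loop; a distributed version
            state = m.step(state, dt)  # would exchange `state` between processes
    return state

print(run([TravelerBehavior(), NetworkLoader()], {"demand": 100, "guided": 40}, steps=3))
```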

  20. Technology-Assisted Patient Access to Clinical Information: An Evaluation Framework for Blue Button

    PubMed Central

    Nazi, Kim M; Luger, Tana M; Amante, Daniel J; Smith, Bridget M; Barker, Anna; Shimada, Stephanie L; Volkman, Julie E; Garvin, Lynn; Simon, Steven R; Houston, Thomas K

    2014-01-01

    Background Patient access to clinical information represents a means to improve the transparency and delivery of health care as well as interactions between patients and health care providers. We examine the movement toward augmenting patient access to clinical information using technology. Our analysis focuses on “Blue Button,” a tool that many health care organizations are implementing as part of their Web-based patient portals. Objective We present a framework for evaluating the effects that technology-assisted access to clinical information may have on stakeholder experiences, processes of care, and health outcomes. Methods A case study of the United States Department of Veterans Affairs' (VA) efforts to make increasing amounts of clinical information available to patients through Blue Button. Drawing on established collaborative relationships with researchers, clinicians, and operational partners who are engaged in the VA’s ongoing implementation and evaluation efforts related to Blue Button, we assessed existing evidence and organizational practices through key informant interviews, review of documents and other available materials, and an environmental scan of published literature and the websites of other health care organizations. Results Technology-assisted access to clinical information represents a significant advance for VA patients and marks a significant change for the VA as an organization. Evaluations of Blue Button should (1) consider both processes of care and outcomes, (2) clearly define constructs of focus, (3) examine influencing factors related to the patient population and clinical context, and (4) identify potential unintended consequences. Conclusions The proposed framework can serve as a roadmap to guide subsequent research and evaluation of technology-assisted patient access to clinical information. To that end, we offer a series of related recommendations. PMID:24675395

  1. The Utility of the Memorable Messages Framework as an Intermediary Evaluation Tool for Fruit and Vegetable Consumption in a Nutrition Education Program

    ERIC Educational Resources Information Center

    Davis, LaShara A.; Morgan, Susan E.; Mobley, Amy R.

    2016-01-01

    Additional strategies to evaluate the impact of community nutrition education programs on low-income individuals are needed. The objective of this qualitative study was to examine the use of the Memorable Messages Framework as an intermediary nutrition education program evaluation tool to determine what fruit and vegetable messages were reported…

  2. Analyzing Electronic Question/Answer Services: Framework and Evaluations of Selected Services.

    ERIC Educational Resources Information Center

    White, Marilyn Domas, Ed.

    This report develops an analytical framework based on systems analysis for evaluating electronic question/answer or AskA services operated by a wide range of types of organizations, including libraries. Version 1.0 of this framework was applied in June 1999 to a selective sample of 11 electronic question/answer services, which cover a range of…

  3. Beyond the Rhetoric: A Framework for Evaluating Improvements to the Student Experience

    ERIC Educational Resources Information Center

    Baird, Jeanette; Gordon, George

    2009-01-01

    A framework is described to assist institutions in evaluating the extent to which activities described as "quality improvements" or "quality enhancements" are likely to directly improve the student experience. The framework classifies ways of improving the student experience into "coaching improvements", "umpiring improvements", and "facilities…

  4. Evaluating Public Spending: A Framework of Public Expenditure Reviews. World Bank Discussion Papers No. 323.

    ERIC Educational Resources Information Center

    Pradhan, Sanjay

    This paper presents a framework for evaluating the level and composition of public expenditures, illustrated by sectoral and country examples. The paper illustrates how this framework can be applied to analyzing broad allocations of spending within and across sectors, drawing upon some key findings and country examples from major sectors (health,…

  5. Comprehensive Framework for Evaluating e-Learning Systems: Using BSC Framework

    ERIC Educational Resources Information Center

    Momeni, Mansor; Jamporazmey, Mona; Mehrafrouz, Mohsen; Bahadori, Fatemeh

    2013-01-01

    The development of information and communication technology (ICT) is changing the way in which people work, communicate and learn. Recently, the development and implementation of e-learning solutions have increased dramatically. Given the heavy investment in this area, it is essential to evaluate its different aspects and understand measures, which…

  6. A framework for evaluating eHealth research

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Health care is in the midst of a consumer-oriented technology explosion. Individuals of all ages and backgrounds have discovered eHealth. But the challenges of implementing and evaluating eHealth are just beginning to surface, and, as technology changes, new challenges emerge. Evaluation is critical...

  7. Proposal of a linear rather than hierarchical evaluation of educational initiatives: the 7Is framework

    PubMed Central

    Roland, Damian

    2015-01-01

    Extensive resources are expended attempting to change clinical practice; however, determining the effects of these interventions can be challenging. Traditionally, frameworks to examine the impact of educational interventions have been hierarchical in their approach. In this article, existing frameworks to examine medical education initiatives are reviewed and a novel ‘7Is framework’ discussed. This framework contains seven linearly sequenced domains: interaction, interface, instruction, ideation, integration, implementation, and improvement. The 7Is framework enables the conceptualization of the various effects of an intervention, promoting the development of a set of valid and specific outcome measures, ultimately leading to more robust evaluation. PMID:26101403

  8. Examining Readers' Evaluations of Objectivity and Bias in News Discourse

    ERIC Educational Resources Information Center

    Cramer, Peter; Eisenhart, Christopher

    2014-01-01

    Readers' objectivity and bias evaluations of news texts were investigated in order to better understand the process by which readers make these kinds of judgments and the evidence on which they base them. Readers were primed to evaluate news texts for objectivity and bias, and their selections and metacommentary were analyzed. Readers…

  9. The Objective and Subjective Evaluation of Multichannel Expansion in Wide Dynamic Range Compression Hearing Instruments

    ERIC Educational Resources Information Center

    Plyler, Patrick N.; Lowery, Kristy J.; Hamby, Hilary M.; Trine, Timothy D.

    2007-01-01

    Purpose: The effects of multichannel expansion on the objective and subjective evaluation of 20 listeners fitted binaurally with 4-channel, digital in-the-ear hearing instruments were investigated. Method: Objective evaluations were conducted in quiet using the Connected Speech Test (CST) and in noise using the Hearing in Noise Test (HINT) at 40,…

  10. New model framework and structure and the commonality evaluation model. [concerning unmanned spacecraft projects

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The development of a framework and structure for shuttle era unmanned spacecraft projects and the development of a commonality evaluation model is documented. The methodology developed for model utilization in performing cost trades and comparative evaluations for commonality studies is discussed. The model framework consists of categories of activities associated with the spacecraft system's development process. The model structure describes the physical elements to be treated as separate identifiable entities. Cost estimating relationships for subsystem and program-level components were calculated.

  11. Beyond Effectiveness: A Pragmatic Evaluation Framework for Learning and Continuous Quality Improvement of e-Learning Interventions in Healthcare.

    PubMed

    Dafalla, Tarig Dafalla Mohamed; Kushniruk, Andre W; Borycki, Elizabeth M

    2015-01-01

    A pragmatic evaluation framework for evaluating the usability and usefulness of an e-learning intervention for a patient clinical information scheduling system is presented in this paper. The framework was conceptualized based on two different but related concepts (usability and usefulness) and selection of appropriate and valid methods of data collection and analysis that included: (1) Low-Cost Rapid Usability Engineering (LCRUE), (2) Cognitive Task Analysis (CTA), (3) Heuristic Evaluation (HE) criteria for web-based learning, and (4) Software Usability Measurement Inventory (SUMI). The results of the analysis showed that some aspects of usability related to General Interface Usability (GIU), instructional design, and content were problematic, some of which might account for the poorly rated aspects of usability when subjectively measured. This paper shows that using a pragmatic framework can be a useful way not only to measure usability and usefulness but also to provide practical, objective evidence for learning and continuous quality improvement of e-learning systems. The findings should be of interest to educators, developers, designers, researchers, and usability practitioners involved in the development of e-learning systems in healthcare. This framework could be an appropriate method for assessing the usability, usefulness and safety of health information systems both in the laboratory and in the clinical context. PMID:25676959

  12. The Climate Change Education Evidence Base: Lessons Learned from NOAA's Monitoring and Evaluation Framework Implementation

    NASA Astrophysics Data System (ADS)

    Baek, J.

    2012-12-01

    Federal science mission agencies are under increased pressure to ensure that their STEM education investments accomplish several objectives, including the identification and use of evidence-based approaches. Climate change education and climate literacy programs fall under these broader STEM initiatives. This paper is designed as a primer for climate change education evaluators and researchers to understand the policy context on the use of evidence. Recent initiatives, including those of the National Science Foundation (NSF), the National Aeronautics and Space Administration (NASA), and the National Oceanic and Atmospheric Administration (NOAA), point to a need for shared goals and measurements amongst the climate change education community. The Tri-agency Climate Change Education (CCE) collaboration, which includes NSF, NASA, and NOAA, developed the Tri-Agency Climate Change Education Common Evaluation Framework Initiative Stakeholder Statement (2012). An excerpt: From the perspective of the tri-agency collaboration, and its individual agency members, the goal of the common framework is not to build a required evaluation scheme or a set of new requirements for our funded climate change education initiatives. Rather, the collaboration would be strengthened by the development of a framework that includes tools, instruments, and/or documentation to: ● Help the agencies see and articulate the relationships between the individual pieces of the tri-agency CCE portfolio; ● Guide the agencies in reporting on the progress, lessons learned, and impacts of the collaboration between the three agencies in developing a coordinated portfolio of climate education initiatives; and ● Help the individual projects, as part of this broader portfolio, understand where they fit into a larger picture. The accomplishments of this initiative to date have been based on the collaborative nature of evaluators in the climate change education community within the tri-agency portfolio. While this

  13. A framework for inverse planning of beam-on times for 3D small animal radiotherapy using interactive multi-objective optimisation

    NASA Astrophysics Data System (ADS)

    Balvert, Marleen; van Hoof, Stefan J.; Granton, Patrick V.; Trani, Daniela; den Hertog, Dick; Hoffmann, Aswin L.; Verhaegen, Frank

    2015-07-01

    Advances in precision small animal radiotherapy hardware enable the delivery of increasingly complicated dose distributions on the millimeter scale. Manual creation and evaluation of treatment plans becomes difficult or even infeasible with an increasing number of degrees of freedom for dose delivery and available image data. The goal of this work is to develop an optimisation model that determines beam-on times for a given beam configuration, and to assess the feasibility and benefits of an automated treatment planning system for small animal radiotherapy. The developed model determines a Pareto optimal solution using operator-defined weights for a multiple-objective treatment planning problem. An interactive approach allows the planner to navigate towards, and to select the Pareto optimal treatment plan that yields the most preferred trade-off of the conflicting objectives. This model was evaluated using four small animal cases based on cone-beam computed tomography images. Resulting treatment plan quality was compared to the quality of manually optimised treatment plans using dose-volume histograms and metrics. Results show that the developed framework is well capable of optimising beam-on times for 3D dose distributions and offers several advantages over manual treatment plan optimisation. For all cases but the simple flank tumour case, a similar amount of time was needed for manual and automated beam-on time optimisation. In this time frame, manual optimisation generates a single treatment plan, while the inverse planning system yields a set of Pareto optimal solutions which provides quantitative insight on the sensitivity of conflicting objectives. Treatment planning automation decreases the dependence on operator experience and allows for the use of class solutions for similar treatment scenarios. This can shorten the time required for treatment planning and therefore increase animal throughput. In addition, this can improve treatment standardisation and

  14. A framework for inverse planning of beam-on times for 3D small animal radiotherapy using interactive multi-objective optimisation.

    PubMed

    Balvert, Marleen; van Hoof, Stefan J; Granton, Patrick V; Trani, Daniela; den Hertog, Dick; Hoffmann, Aswin L; Verhaegen, Frank

    2015-07-21

    Advances in precision small animal radiotherapy hardware enable the delivery of increasingly complicated dose distributions on the millimeter scale. Manual creation and evaluation of treatment plans becomes difficult or even infeasible with an increasing number of degrees of freedom for dose delivery and available image data. The goal of this work is to develop an optimisation model that determines beam-on times for a given beam configuration, and to assess the feasibility and benefits of an automated treatment planning system for small animal radiotherapy. The developed model determines a Pareto optimal solution using operator-defined weights for a multiple-objective treatment planning problem. An interactive approach allows the planner to navigate towards, and to select the Pareto optimal treatment plan that yields the most preferred trade-off of the conflicting objectives. This model was evaluated using four small animal cases based on cone-beam computed tomography images. Resulting treatment plan quality was compared to the quality of manually optimised treatment plans using dose-volume histograms and metrics. Results show that the developed framework is well capable of optimising beam-on times for 3D dose distributions and offers several advantages over manual treatment plan optimisation. For all cases but the simple flank tumour case, a similar amount of time was needed for manual and automated beam-on time optimisation. In this time frame, manual optimisation generates a single treatment plan, while the inverse planning system yields a set of Pareto optimal solutions which provides quantitative insight on the sensitivity of conflicting objectives. Treatment planning automation decreases the dependence on operator experience and allows for the use of class solutions for similar treatment scenarios. This can shorten the time required for treatment planning and therefore increase animal throughput. In addition, this can improve treatment standardisation and
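
    One common way to realize the operator-weighted, multi-objective planning described above is weighted-sum scalarization over the beam-on times. The sketch below illustrates that idea under simplifying assumptions: dose is taken to be linear in beam-on time, the dose-influence matrices and weights are invented, and the two objectives (tumour underdose and organ-at-risk dose) are stand-ins for the paper's actual objective set.

```python
# Weighted-sum scalarization sketch for beam-on time optimisation (illustrative only;
# the dose-influence matrices, objectives, and weights are invented assumptions,
# not the authors' published model).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_beams, n_tumor, n_oar = 6, 40, 30
D_tumor = rng.uniform(0.5, 1.0, (n_tumor, n_beams))   # dose per unit beam-on time
D_oar   = rng.uniform(0.0, 0.4, (n_oar, n_beams))
presc   = 2.0                                          # prescribed tumour dose (Gy)

def scalarized(t, w_tumor=1.0, w_oar=0.5):
    """Weighted sum of tumour underdose and organ-at-risk dose penalties."""
    under = np.maximum(presc - D_tumor @ t, 0.0)
    oar   = D_oar @ t
    return w_tumor * np.mean(under**2) + w_oar * np.mean(oar**2)

res = minimize(scalarized, x0=np.ones(n_beams),
               bounds=[(0, None)] * n_beams, method="L-BFGS-B")
print("beam-on times (a.u.):", np.round(res.x, 2))
```

    Re-solving with different weight pairs traces out different Pareto-optimal plans, which mirrors the interactive navigation of trade-offs that the authors describe.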

  15. Robust Operation of a System of Reservoir and Desalination Plant using a Multi-Objective Optimization Framework

    NASA Astrophysics Data System (ADS)

    Ng, T.; Bhushan, R.

    2013-12-01

    In many cities, the water supply system is under stress due to increased competition for reliable fresh water supplies from population growth and climate uncertainties resulting in water insecurity. One method to augment fresh water supplies is seawater desalination, which converts seawater to fresh water for industrial and domestic potable and non-potable uses. We propose to address this issue of water supply scarcity and uncertainty in coastal metropolitan cities by developing a robust operating policy for the joint operation of a desalination plant with a freshwater reservoir system using a multi-objective optimization framework. Due to the unlimited availability of seawater, desalination has a strong potential as a reliable source of water in coastal cities around the world. However, being an energy intensive and expensive process, its application is limited. Reservoir water, while cheaper due to its relatively small cost of transportation to the cities, is often limited and variable in its availability. We observe that combining the operation of a desalination plant with a water supply reservoir leads to more cost efficient and reliable water production than if both were to be operated separately. We model a joint reservoir-desalination system as a multi-objective optimization problem with risk, resilience, and vulnerability as the objective functions, and cost as a constraint. In our simulations, rule curves determine the release from the reservoir as a function of existing storage level, and the remaining demand that is unmet by the release from the reservoir determines the amount of water produced from desalination. The overall cost of the system is the sum of the cost of transporting reservoir water and the cost of energy of desalinating seawater. We employ a genetic algorithm to find the optimal values of the thresholds of the reservoir rule curves and the maximum operating capacity of the desalination plant. We will discuss the tradeoffs between water
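
    A toy illustration of the joint operating policy is sketched below: a one-parameter hedging rule determines the reservoir release as a function of storage, desalination fills the remaining gap up to a plant capacity, and a crude grid search over the two policy parameters stands in for the paper's genetic algorithm. The synthetic inflow series, the single shortfall-count objective, and all parameter values are assumptions made only for this example.

```python
# Toy sketch of a joint reservoir-desalination operating policy. The inflow series,
# rule-curve form, and single reliability objective are simplifying assumptions; the
# study uses risk/resilience/vulnerability objectives and a genetic algorithm.
import numpy as np

rng = np.random.default_rng(1)
inflow = rng.gamma(2.0, 5.0, size=365)     # synthetic daily inflow (Mm3)
demand, cap = 12.0, 100.0                  # daily demand and reservoir capacity (Mm3)

def simulate(threshold, desal_cap):
    """Simple hedging rule: release full demand above `threshold`, half demand below."""
    storage, shortfall_days = 50.0, 0
    for q in inflow:
        release = demand if storage > threshold else demand / 2
        release = min(release, storage + q)
        storage = min(cap, storage + q - release)
        desal = min(desal_cap, demand - release)       # desalination fills the gap
        if release + desal < demand - 1e-9:
            shortfall_days += 1
    return shortfall_days

# crude grid search over policy parameters (stand-in for the multi-objective GA)
best = min(((simulate(th, dc), th, dc)
            for th in np.linspace(10, 90, 9)
            for dc in np.linspace(0, 6, 7)), key=lambda x: (x[0], x[2]))
print("shortfall days, threshold, desal capacity:", best)
```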

  16. Evaluating the accuracy of size perception on screen-based displays: Displayed objects appear smaller than real objects.

    PubMed

    Stefanucci, Jeanine K; Creem-Regehr, Sarah H; Thompson, William B; Lessard, David A; Geuss, Michael N

    2015-09-01

    Accurate perception of the size of objects in computer-generated imagery is important for a growing number of applications that rely on absolute scale, such as medical visualization and architecture. Addressing this problem requires both the development of effective evaluation methods and an understanding of what visual information might contribute to differences between virtual displays and the real world. In the current study, we use 2 affordance judgments--perceived graspability of an object or reaching through an aperture--to compare size perception in high-fidelity graphical models presented on a large screen display to the real world. Our goals were to establish the use of perceived affordances within spaces near to the observer for evaluating computer graphics and to assess whether the graphical displays were perceived similarly to the real world. We varied the nature of the affordance task and whether or not the display enabled stereo presentation. We found that judgments of grasping and reaching through can be made effectively with screen-based displays. The affordance judgments revealed that sizes were perceived as smaller than in the real world. However, this difference was reduced when stereo viewing was enabled or when the virtual display was viewed before the real world. PMID:26121374

  17. Evaluation Framework and Tools for Distributed Energy Resources

    SciTech Connect

    Gumerman, Etan Z.; Bharvirkar, Ranjit R.; LaCommare, Kristina Hamachi; Marnay , Chris

    2003-02-01

    The Energy Information Administration's (EIA) 2002 Annual Energy Outlook (AEO) forecast anticipates the need for 375 MW of new generating capacity (or about one new power plant) per week for the next 20 years, most of which is forecast to be fueled by natural gas. The Distributed Energy and Electric Reliability Program (DEER) of the Department of Energy (DOE) has set a national goal for DER to capture 20 percent of new electric generation capacity additions by 2020 (Office of Energy Efficiency and Renewable Energy 2000). Cumulatively, this amounts to about 40 GW of DER capacity additions from 2000-2020. Figure ES-1 below compares the EIA forecast and DEER's assumed goal for new DER by 2020 while applying the same definition of DER to both. This figure illustrates that the EIA forecast is consistent with the overall DEER DER goal. For the purposes of this study, Berkeley Lab needed a target level of small-scale DER penetration upon which to hinge consideration of benefits and costs. Because the AEO2002 forecasted only 3.1 GW of cumulative additions from small-scale DER in the residential and commercial sectors, another approach was needed to estimate the small-scale DER target. The focus here is on small-scale DER technologies under 500 kW. The technology size limit is somewhat arbitrary, but the key results of interest are marginal additional costs and benefits around an assumed level of penetration that existing programs might achieve. Berkeley Lab assumes that small-scale DER has the same growth potential as large-scale DER in AEO2002, about 38 GW. This assumption makes the small-scale goal equivalent to 380,000 DER units of average size 100 kW. This report lays out a framework whereby the consequences of meeting this goal might be estimated and tallied up. The framework is built around a list of major benefits and a set of tools that might be applied to estimate them. This study lists some of the major effects of an emerging paradigm shift away from central
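
    As a quick arithmetic check of the penetration target quoted above (taking the 38 GW of small-scale growth potential and the 100 kW average unit size from the abstract as given):

```python
# Quick arithmetic check of the small-scale DER penetration target discussed above.
small_scale_der_gw = 38          # assumed cumulative small-scale DER additions, 2000-2020
avg_unit_kw = 100                # assumed average unit size
units = small_scale_der_gw * 1e6 / avg_unit_kw
print(f"{units:,.0f} units")     # 380,000 units, matching the figure in the abstract
```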

  18. Evaluating participatory research: Framework, methods and implementation results.

    PubMed

    Smajgl, Alex; Ward, John

    2015-07-01

    This paper describes a structured participatory process and associated evaluation protocol developed to detect systems learning by decision makers involved in the management of natural resources. A series of facilitated participatory workshops were conducted to investigate learning when decision makers and influencers were confronted with the multiple, complex interactions arising from decisions concerned with the nexus of water, food and energy security. The participatory process and evaluation of learning were trialled in the Greater Mekong Subregion (GMS), where integrated scientific evidence was systematically presented to challenge existing beliefs concerned with the effectiveness of proposed policy actions and development investments. Consistent with theoretical propositions, individually held values, beliefs and attitudes were deployed as the primary factors (and psychometrics) that underpin and influence environmental management decision making. Observed and statistically significant changes in the three psychometrics expressed by decision makers in response to the facilitated presentation of scientific evidence during the participatory process, provided supportive evidence of systems learning and the evaluation protocol. PMID:25929196

  19. Research and Evaluations of the Health Aspects of Disasters, Part VII: The Relief/Recovery Framework.

    PubMed

    Birnbaum, Marvin L; Daily, Elaine K; O'Rourke, Ann P

    2016-04-01

    The principal goal of research relative to disasters is to decrease the risk that a hazard will result in a disaster. Disaster studies pursue two distinct directions: (1) epidemiological (non-interventional); and (2) interventional. Both interventional and non-interventional studies require data/information obtained from assessments of function. Non-interventional studies examine the epidemiology of disasters. Interventional studies evaluate specific interventions/responses in terms of their effectiveness in meeting their respective objectives, their contribution to the overarching goal, other effects created, their respective costs, and the efficiency with which they achieved their objectives. The results of interventional studies should contribute to evidence that will be used to inform the decisions used to define standards of care and best practices for a given setting based on these standards. Interventional studies are based on the Disaster Logic Model (DLM) and are used to change or maintain levels of function (LOFs). Relief and Recovery interventional studies seek to determine the effects, outcomes, impacts, costs, and value of the intervention provided after the onset of a damaging event. The Relief/Recovery Framework provides the structure needed to systematically study the processes involved in providing relief or recovery interventions that result in a new LOF for a given Societal System and/or its component functions. It consists of the following transformational processes (steps): (1) identification of the functional state prior to the onset of the event (pre-event); (2) assessments of the current functional state; (3) comparison of the current functional state with the pre-event state and with the results of the last assessment; (4) needs identification; (5) strategic planning, including establishing the overall strategic goal(s), objectives, and priorities for interventions; (6) identification of options for interventions; (7) selection of the most

  20. Framework for springback compensation based on mechanical factor evaluation

    NASA Astrophysics Data System (ADS)

    Oya, Tetsuo; Doke, Naoyuki

    2013-05-01

    Springback is an inevitable phenomenon in sheet metal forming, and many studies on its prediction and compensation have been presented. The use of high-strength steels is now popular; therefore, the demand for an effective springback compensation system is increasing. In this study, a novel approach to springback compensation is presented. The proposed framework consists of a springback solver, a design system, and an optimization process. The springback solver is a finite element procedure in which a degenerated shell element is used instead of the typical shell element. This gives the designer direct access to the resultant stresses, such as the bending moment, which is a major cause of springback. With our system, mechanically reasonable springback compensation is possible, whereas the conventional compensation method uses only geometrical information, which may lead to non-realistic solutions. The authors have developed a system based on the proposed procedure to demonstrate the effectiveness of the presented strategy and applied it to several forming situations. In this paper, an overview of our approach and the latest progress is reported.

  1. Objective evaluation of ERCP procedures: a simple grading scale for evaluating technical difficulty

    PubMed Central

    Ragunath, K; Thomas, L; Cheung, W; Duane, P; Richards, D

    2003-01-01

    Background and objective: Endoscopic retrograde cholangiopancreatography (ERCP) is a technically demanding endoscopic procedure that varies from a simple diagnostic to a highly complex therapeutic procedure. Simple outcome measures such as success and complication rates do not reflect the competence of the operator or endoscopy unit, as case mix is not taken into account. A grading scale to assess the technical difficulty of ERCP can improve the objectivity of outcome data. Methods: A I to IV technical difficulty grading scale was constructed and applied prospectively to all ERCPs over a 12 month period at a single centre. The procedures were performed by two senior trainees and two experienced consultants (trainers). The grading scale was validated for construct validity and inter-rater reliability at the end of the study using the χ2 test and κ statistics. Results: There were 305 ERCPs in 259 patients over the 12 months study period (males: 112, females: 147, age range 17–97, mean 70.3 years). There was overall success in 244 (80%) procedures with complications in 13 (4%): bleeding in five (1.6%), cholangitis in one (0.3%), pancreatitis in five (1.6%), and perforation in two (0.7%). Success rate was highest for grade I, 49/55 (89%), compared with grade IV procedures, 8/11 (73%). There was a significant linear trend towards a lower success rate from grade I to IV (p=0.021) for trainees, but not for trainers. Complications were low in grade I, II, and III procedures, 12/295(4%), compared with grade IV procedures, 1/11(9%). The inter-rater reliability for the grading scale was good with a substantial agreement between the raters (κ=0.68, p<0.001). Conclusion: Success and complications of ERCP by trainees are influenced by the technical difficulty of the procedure. Outcome data incorporating a grading scale can give accurate information when auditing the qualitative outcomes. This can provide a platform for structured objective evaluation. PMID:12954961
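
    The inter-rater reliability reported above (κ=0.68) is a Cohen's kappa. A minimal sketch of the computation for two raters assigning grade I-IV difficulty scores follows; the grade data are invented for illustration and do not reproduce the study's ratings.

```python
# Minimal Cohen's kappa sketch for two raters assigning technical-difficulty grades I-IV
# (the grade data below are invented; the study reported kappa = 0.68).
from collections import Counter

def cohen_kappa(r1, r2):
    n = len(r1)
    p_o = sum(a == b for a, b in zip(r1, r2)) / n                 # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    p_e = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / n**2    # chance agreement
    return (p_o - p_e) / (1 - p_e)

rater_a = ["I", "I", "II", "III", "II", "IV", "I", "III"]
rater_b = ["I", "II", "II", "III", "II", "IV", "I", "II"]
print(round(cohen_kappa(rater_a, rater_b), 2))   # ~0.66 for these invented ratings
```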

  2. An Evaluation Framework for Sustaining the Impact of Educational Development

    ERIC Educational Resources Information Center

    Hashimoto, Kazuaki; Pillay, Hitendra; Hudson, Peter

    2010-01-01

    Notwithstanding significant efforts by international aid agencies, aid ineffectiveness became apparent in the 1990s, as continued development interventions did not sustain the expected outcomes. Conventional monitoring and evaluation by those agencies is critiqued for focusing on measuring project outcomes and giving little attention to…

  3. A Human Capabilities Framework for Evaluating Student Learning

    ERIC Educational Resources Information Center

    Walker, Melanie

    2008-01-01

    This paper proposes a human capabilities approach for evaluating student learning and the social and pedagogical arrangements that support equality in capabilities for all students. It outlines the focus on valuable beings and doings in the capability approach developed by Amartya Sen, and Martha Nussbaum's capabilities focus on human flourishing.…

  4. Teacher Language Competence Description: Towards a New Framework of Evaluation

    ERIC Educational Resources Information Center

    Sokolova, Nataliya

    2012-01-01

    The article is centred around the concept of "language competence of a foreign language (FL) teacher" and the ways it can be evaluated. Though the definition of teacher language competence might sound obvious, it has not yet been clearly structured and, therefore, no component has been thoroughly described. I use this fact as a starting point and…

  5. Contextual E-Learning Evaluation: A Preliminary Framework

    ERIC Educational Resources Information Center

    Voigt, Christian; Swatman, Paula M. C.

    2004-01-01

    The evaluation of solutions is a major unresolved issue for all those involved in e-learning. In this paper we illustrate the importance of context by means of a qualitative comparison of two e-learning prototype implementations--an action research case undertaken in conjunction with a major German insurance company; and a more experimental…

  6. School Self-Evaluation Instruments: An Assessment Framework

    ERIC Educational Resources Information Center

    Hofman, Roelande H.; Dukstra, Nynke J.; Hofman, W. H. Adriaan

    2005-01-01

    Many instruments for school self-evaluation have become available in primary education; however, they vary in focus, quality and type (e.g., questionnaires, tests, observations, classroom consultation, quality maps, quick scans, etc), creating problems for schools in selecting instruments fitting their specific situations. Research has been…

  7. A General Framework for the Evaluation of Clinical Trial Quality

    PubMed Central

    Berger, Vance W.; Alperson, Sunny Y.

    2009-01-01

    Flawed evaluation of clinical trial quality allows flawed trials to thrive (get funded, obtain IRB approval, get published, serve as the basis of regulatory approval, and set policy). A reasonable evaluation of clinical trial quality must recognize that any one of a large number of potential biases could by itself completely invalidate the trial results. In addition, clever new ways to distort trial results toward a favored outcome may be devised at any time. Finally, the vested financial and other interests of those conducting the experiments and publishing the reports must cast suspicion on any inadequately reported aspect of clinical trial quality. Putting these ideas together, we see that an adequate evaluation of clinical quality would need to enumerate all known biases, update this list periodically, score the trial with regard to each potential bias on a scale of 0% to 100%, offer partial credit for only that which can be substantiated, and then multiply (not add) the component scores to obtain an overall score between 0% and 100%. We will demonstrate that current evaluations fall well short of these ideals. PMID:19463104
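
    The multiplicative scoring rule proposed above is easy to make concrete: each known bias is scored between 0% and 100% (with partial credit only for what can be substantiated), and the component scores are multiplied rather than added, so a single fatal flaw drives the overall score toward zero. A minimal sketch follows; the bias list and component scores are invented for illustration.

```python
# Sketch of the multiplicative (not additive) trial quality score proposed above.
# The bias list and component scores are illustrative assumptions.
from math import prod

component_scores = {          # fraction of credit substantiated for each known bias
    "allocation concealment": 0.9,
    "blinding": 0.7,
    "selective reporting": 1.0,
    "baseline imbalance handling": 0.8,
}
overall = prod(component_scores.values())     # one fatal flaw (score 0) zeroes the trial
print(f"overall quality: {overall:.0%}")      # ~50%, well below the weakest component
```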

  8. Evaluation Framework and Analyses for Thermal Energy Storage Integrated with Packaged Air Conditioning

    SciTech Connect

    Kung, F.; Deru, M.; Bonnema, E.

    2013-10-01

    Few third-party guidance documents or tools are available for evaluating thermal energy storage (TES) integrated with packaged air conditioning (AC), as this type of TES is relatively new compared to TES integrated with chillers or hot water systems. To address this gap, researchers at the National Renewable Energy Laboratory conducted a project to improve the ability of potential technology adopters to evaluate TES technologies. Major project outcomes included: development of an evaluation framework to describe key metrics, methodologies, and issues to consider when assessing the performance of TES systems integrated with packaged AC; application of multiple concepts from the evaluation framework to analyze performance data from four demonstration sites; and production of a new simulation capability that enables modeling of TES integrated with packaged AC in EnergyPlus. This report includes the evaluation framework and analysis results from the project.

  9. Picasso Paintings, Moon Rocks, and Hand-Written Beatles Lyrics: Adults’ Evaluations of Authentic Objects

    PubMed Central

    Frazier, Brandy N.; Gelman, Susan A.; Wilson, Alice; Hood, Bruce

    2010-01-01

    Authentic objects are those that have an historical link to a person, event, time, or place of some significance (e.g., original Picasso painting; gown worn by Princess Diana; your favorite baby blanket). The current study examines everyday beliefs about authentic objects, with three primary goals: to determine the scope of adults’ evaluation of authentic objects, to examine such evaluation in two distinct cultural settings, and to determine whether a person’s attachment history (i.e., whether or not they owned an attachment object as a child) predicts evaluation of authentic objects. We found that college students in the U.K. (N = 125) and U.S. (N = 119) consistently evaluate a broad range of authentic items as more valuable than matched control (inauthentic) objects, more desirable to keep, and more desirable to touch, though only non-personal authentic items were judged to be more appropriate for display in a museum. These patterns were remarkably similar across the two cultural contexts. Additionally, those who had an attachment object as a child evaluated objects more favorably, and in particular judged authentic objects to be more valuable. Altogether, these results demonstrate broad endorsement of "positive contagion" among college-educated adults. PMID:20631919

  10. Federated Process Framework in a Virtual Enterprise Using an Object-Oriented Database and Extensible Markup Language.

    ERIC Educational Resources Information Center

    Bae, Kyoung-Il; Kim, Jung-Hyun; Huh, Soon-Young

    2003-01-01

    Discusses process information sharing among participating organizations in a virtual enterprise and proposes a federated process framework and system architecture that provide a conceptual design for effective implementation of process information sharing supporting the autonomy and agility of the organizations. Develops the framework using an…

  11. A Retrospective Evaluation of Remote Pharmacist Interventions in a Telepharmacy Service Model Using a Conceptual Framework

    PubMed Central

    Murante, Lori J.; Moffett, Lisa M.

    2014-01-01

    Abstract Objectives: This retrospective cross-sectional study evaluated a telepharmacy service model using a conceptual framework to compare documented remote pharmacist interventions by year, hospital, and remote pharmacist and across rural hospitals with or without an on-site rural hospital pharmacist. Materials and Methods: Documented remote pharmacist interventions for patients at eight rural hospitals in the Midwestern United States during prospective prescription order review/entry from 2008 to 2011 were extracted from RxFusion® database (a home-grown system, i.e., internally developed program at The Nebraska Medical Center (TNMC) for capturing remote pharmacist-documented intervention data). The study authors conceptualized an analytical framework, mapping the 37 classes of remote pharmacist interventions to three broader-level definitions: (a) intervention, eight categories (interaction/potential interaction, contraindication, adverse effects, anticoagulation monitoring, drug product selection, drug regimen, summary, and recommendation), (b) patient medication management, two categories (therapy review and action), and (c) health system-centered medication use process, four categories (prescribing, transcribing and documenting, administering, and monitoring). Frequencies of intervention levels were compared by year, hospital, remote pharmacist, and hospital pharmacy status (with a remote pharmacist and on-site pharmacist or with a remote pharmacist only) using chi-squared test and univariate logistic regression analyses, as appropriate. Results: For 450,000 prescription orders 19,222 remote pharmacist interventions were documented. Frequency of interventions significantly increased each year (36% in 2009, 55% in 2010, and 7% in 2011) versus the baseline year (2008, 3%) when service started. The frequency of interventions also differed significantly across the eight hospitals and 16 remote pharmacists for the three defined intervention levels and categories
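
    The frequency comparisons described above rely on chi-squared tests over counts of documented interventions. A minimal sketch with invented counts is shown below (the actual analysis compared frequencies by year, hospital, remote pharmacist, and hospital pharmacy status, and also used univariate logistic regression).

```python
# Illustrative chi-squared comparison of documented intervention counts (counts invented;
# not the study's data).
from scipy.stats import chi2_contingency

# rows: hospitals with vs. without an on-site pharmacist; cols: years 2008-2011
table = [[120, 1400, 2100, 300],
         [450, 5500, 8400, 1050]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.3g}")
```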

  12. Object detection in MOUT: evaluation of a hybrid approach for confirmation and rejection of object detection hypotheses

    NASA Astrophysics Data System (ADS)

    Manger, Daniel; Metzler, Jürgen

    2014-03-01

    Military Operations in Urban Terrain (MOUT) require the capability to perceive and to analyze the situation around a patrol in order to recognize potential threats. A permanent monitoring of the surrounding area is essential in order to appropriately react to the given situation, where one relevant task is the detection of objects that can pose a threat. Especially the robust detection of persons is important, as in MOUT scenarios threats usually arise from persons. This task can be supported by image processing systems. However, depending on the scenario, person detection in MOUT can be challenging, e.g. persons are often occluded in complex outdoor scenes and the person detection also suffers from low image resolution. Furthermore, there are several requirements on person detection systems for MOUT such as the detection of non-moving persons, as they can be a part of an ambush. Existing detectors therefore have to operate on single images with low thresholds for detection in order to not miss any person. This, in turn, leads to a comparatively high number of false positive detections which renders an automatic vision-based threat detection system ineffective. In this paper, a hybrid detection approach is presented. A combination of a discriminative and a generative model is examined. The objective is to increase the accuracy of existing detectors by integrating a separate hypotheses confirmation and rejection step which is built by a discriminative and generative model. This enables the overall detection system to make use of both the discriminative power and the capability to detect partly hidden objects with the models. The approach is evaluated on benchmark data sets generated from real-world image sequences captured during MOUT exercises. The extension shows a significant improvement of the false positive detection rate.
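
    The core idea above, a low-threshold detector followed by a separate confirmation/rejection step, can be sketched as a simple score-fusion stage. The fusion weights, thresholds, and candidate scores below are invented for illustration and are not the discriminative/generative models evaluated in the paper.

```python
# Illustrative sketch of a hypothesis confirmation/rejection stage: a low-threshold
# detector proposes candidates, and a second stage fuses a discriminative score with a
# (stand-in) generative likelihood to accept or reject them. All values are invented.
def confirm(candidates, w_disc=0.6, w_gen=0.4, accept=0.5):
    """Each candidate: dict with 'disc_score' and 'gen_loglik', both normalised to [0, 1]."""
    kept = []
    for c in candidates:
        fused = w_disc * c["disc_score"] + w_gen * c["gen_loglik"]
        if fused >= accept:                # reject likely false positives
            kept.append({**c, "fused": round(fused, 2)})
    return kept

hypotheses = [
    {"id": 1, "disc_score": 0.35, "gen_loglik": 0.20},   # clutter -> rejected
    {"id": 2, "disc_score": 0.55, "gen_loglik": 0.80},   # partly occluded person -> kept
]
print(confirm(hypotheses))
```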

  13. Applying a Framework to Evaluate Assignment Marking Software: A Case Study on Lightwork

    ERIC Educational Resources Information Center

    Heinrich, Eva; Milne, John

    2012-01-01

    This article presents the findings of a qualitative evaluation on the effect of a specialised software tool on the efficiency and quality of assignment marking. The software, Lightwork, combines with the Moodle learning management system and provides support through marking rubrics and marker allocations. To enable the evaluation a framework has…

  14. Validity Research on Teacher Evaluation Systems Based on the Framework for Teaching

    ERIC Educational Resources Information Center

    Milanowski, Anthony T.

    2011-01-01

    After decades of disinterest, evaluation of the performance of elementary and secondary teachers in the United States has become an important educational policy issue. As U.S. states and districts have tried to upgrade their evaluation processes, one of the models that has been increasingly used is the Framework for Teaching. This paper summarizes…

  15. Using the RE-AIM framework to evaluate physical activity public health programs in Mexico

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Physical activity (PA) public health programming has been widely used in Mexico; however, few studies have documented individual and organizational factors that might be used to evaluate their public health impact. The RE-AIM framework is an evaluation tool that examines individual and organizationa...

  16. The Social Outcomes of Older Adult Learning in Taiwan: Evaluation Framework and Indicators

    ERIC Educational Resources Information Center

    Lin, Li-Hui

    2015-01-01

    The purpose of this study is to explore the social outcomes of older adult learning in Taiwan. In light of our society's aging population structure, the task of establishing an evaluation framework and indicators for the social outcomes of learning (SOL) as applied to older adults is urgent. In order to construct evaluation indicators for older…

  17. A Reusable Framework for Regional Climate Model Evaluation

    NASA Astrophysics Data System (ADS)

    Hart, A. F.; Goodale, C. E.; Mattmann, C. A.; Lean, P.; Kim, J.; Zimdars, P.; Waliser, D. E.; Crichton, D. J.

    2011-12-01

    Climate observations are currently obtained through a diverse network of sensors and platforms that include space-based observatories, airborne and seaborne platforms, and distributed, networked, ground-based instruments. These global observational measurements are critical inputs to the efforts of the climate modeling community and can provide a corpus of data for use in analysis and validation of climate models. The Regional Climate Model Evaluation System (RCMES) is an effort currently being undertaken to address the challenges of integrating this vast array of observational climate data into a coherent resource suitable for performing model analysis at the regional level. Developed through a collaboration between the NASA Jet Propulsion Laboratory (JPL) and the UCLA Joint Institute for Regional Earth System Science and Engineering (JIFRESSE), the RCMES uses existing open source technologies (MySQL, Apache Hadoop, and Apache OODT), to construct a scalable, parametric, geospatial data store that incorporates decades of observational data from a variety of NASA Earth science missions, as well as other sources into a consistently annotated, highly available scientific resource. By eliminating arbitrary partitions in the data (individual file boundaries, differing file formats, etc), and instead treating each individual observational measurement as a unique, geospatially referenced data point, the RCMES is capable of transforming large, heterogeneous collections of disparate observational data into a unified resource suitable for comparison to climate model output. This facility is further enhanced by the availability of a model evaluation toolkit which consists of a set of Python libraries, a RESTful web service layer, and a browser-based graphical user interface that allows for orchestration of model-to-data comparisons by composing them visually through web forms. This combination of tools and interfaces dramatically simplifies the process of interacting with and
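
    At its simplest, the model-to-data comparison enabled by such a point-oriented store amounts to pairing each observational measurement with the co-located model value and computing summary metrics. The sketch below uses an in-memory stand-in for the data store with invented values; RCMES itself provides Python libraries and a RESTful service for this kind of orchestration.

```python
# Minimal sketch of a model-to-observation comparison over a parametric point store,
# in the spirit of the system described above (records and metrics are invented).
import math

obs = [  # (lat, lon, time_index, value) - individual observational measurements
    (34.0, -118.2, 0, 290.1), (34.0, -118.2, 1, 291.0), (35.0, -119.0, 0, 288.4),
]
model = {(34.0, -118.2, 0): 289.5, (34.0, -118.2, 1): 291.8, (35.0, -119.0, 0): 287.9}

pairs = [(v, model[(lat, lon, t)]) for lat, lon, t, v in obs if (lat, lon, t) in model]
bias = sum(m - o for o, m in pairs) / len(pairs)
rmse = math.sqrt(sum((m - o) ** 2 for o, m in pairs) / len(pairs))
print(f"bias={bias:+.2f}  rmse={rmse:.2f}")
```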

  18. Applying the clinical adoption framework to evaluate the impact of an ambulatory electronic medical record.

    PubMed

    Lau, Francis; Partridge, Colin; Randhawa, Gurprit; Bowen, Mike

    2013-01-01

    This paper describes the application of the Clinical Adoption (CA) Framework to evaluate the impact of a recently deployed electronic medical record (EMR) in a Canadian healthcare organization. The CA Framework dimensions evaluated were EMR quality, use and net benefits at the micro level; and people, organization and implementation at the meso level. The study involved clinical and support staff from two ambulatory care clinics, and managers and technical staff from the organization. A number of issues were identified at both levels of the CA Framework that had affected EMR adoption in the two clinics. Some perceived benefits in care coordination and efficiency were reported despite challenges that arose from early deployment decisions. There were five lessons that could be applied to other ambulatory care settings. The CA Framework has proved useful in making sense of ways that EMR can add value to the organization. PMID:23388247

  19. Performance evaluation of image enhancement methods for objects detection and recognition

    NASA Astrophysics Data System (ADS)

    Cai, Tiefeng; Zhu, Feng; Hao, Yingming; Fan, Xiaopeng

    2015-10-01

    The human eye cannot notice low-contrast objects in an image. Image contrast enhancement methods can make such unnoticed objects noticeable, so that humans can detect and recognize them. In order to guide the design of enhancement methods, the performance of enhancement methods for object detection and recognition (ODR) should be evaluated. Existing evaluation methods assess image enhancement by calculating the increase in contrast or in image information entropy. However, detecting and recognizing objects in an image is essentially an information transmission process, and image contrast enhancement can be viewed as a form of image coding. According to human visual properties, the transmission of ODR information is modeled in this paper, and a performance evaluation method is proposed based on Shannon's information theory.

  20. A Conceptual Framework for Graduate Teaching Assistant Professional Development Evaluation and Research.

    PubMed

    Reeves, Todd D; Marbach-Ad, Gili; Miller, Kristen R; Ridgway, Judith; Gardner, Grant E; Schussler, Elisabeth E; Wischusen, E William

    2016-01-01

    Biology graduate teaching assistants (GTAs) are significant contributors to the educational mission of universities, particularly in introductory courses, yet there is a lack of empirical data on how to best prepare them for their teaching roles. This essay proposes a conceptual framework for biology GTA teaching professional development (TPD) program evaluation and research with three overarching variable categories for consideration: outcome variables, contextual variables, and moderating variables. The framework's outcome variables go beyond GTA satisfaction and instead position GTA cognition, GTA teaching practice, and undergraduate learning outcomes as the foci of GTA TPD evaluation and research. For each GTA TPD outcome variable, key evaluation questions and example assessment instruments are introduced to demonstrate how the framework can be used to guide GTA TPD evaluation and research plans. A common conceptual framework is also essential to coordinating the collection and synthesis of empirical data on GTA TPD nationally. Thus, the proposed conceptual framework serves as both a guide for conducting GTA TPD evaluation at single institutions and as a means to coordinate research across institutions at a national level. PMID:27193291

  1. Cramer-Rao lower bound and object reconstruction performance evaluation for intensity interferometry

    NASA Astrophysics Data System (ADS)

    Dolne, Jean J.; Gerwe, David R.; Crabtree, Peter N.

    2014-07-01

    This paper addresses the fundamental performance limits of object reconstruction methods using intensity interferometry measurements. It shows examples of reconstructed objects obtained with the FIIRE (Forward-model Interferometry Image Reconstruction Estimator) code developed by Boeing for AFRL. It considers various issues when calculating the multidimensional Cramér-Rao lower bound (CRLB) when the Fisher information matrix (FIM) is singular. In particular, when comparing FIIRE performance, characterized as the root mean square difference between the estimated and pristine objects, with the CRLB, we found that FIIRE performance improved as the singularity became worse, an unexpected result. For an invertible FIM, we found that FIIRE yielded a lower root mean squared error than the square root of the CRLB (by a factor as large as 100). This may be due to the various regularization constraints (positivity, support, sharpness, and smoothness) included in FIIRE, rendering it a biased estimator, whereas the CRLB framework used assumes an unbiased estimator. Using the sieve technique to mitigate the false high-frequency content inherent in point-by-point object reconstruction methods, we also show further improved FIIRE performance on some generic objects. It is worth noting that, since FIIRE is an iterative algorithm that searches for an object estimate consistent with the collected data and the various constraints, an initial object estimate is required. In our case, we used a completely random initial object guess consisting of a 2-D array of uniformly distributed random numbers, sometimes multiplied by a 2-D Gaussian function.
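
    A minimal numpy sketch of the comparison described above, not the FIIRE code: the per-parameter CRLB is taken from the diagonal of the (pseudo-)inverse of the Fisher information matrix, with the Moore-Penrose pseudo-inverse standing in when the FIM is singular, and an estimator's RMSE is compared against the square root of that bound. The toy FIM, truth, and estimates are invented.

        # Minimal sketch (not the FIIRE code): compare an estimator's RMSE against
        # the Cramer-Rao lower bound derived from a possibly singular Fisher
        # information matrix (FIM) via the Moore-Penrose pseudo-inverse.
        import numpy as np

        def crlb_variances(fim):
            """Diagonal of the (pseudo-)inverse FIM = per-parameter CRLB."""
            return np.diag(np.linalg.pinv(fim))

        def rmse(estimates, truth):
            """Root mean squared error between estimated and pristine parameters."""
            return np.sqrt(np.mean((np.asarray(estimates) - np.asarray(truth)) ** 2))

        # toy 3-parameter example with a rank-deficient (singular) FIM
        fim = np.array([[4.0, 0.0, 0.0],
                        [0.0, 1.0, 1.0],
                        [0.0, 1.0, 1.0]])          # rows 2 and 3 are identical
        bound = np.sqrt(crlb_variances(fim))       # sqrt(CRLB) per parameter

        truth = np.array([1.0, 2.0, 3.0])
        estimates = np.array([1.1, 1.9, 3.2])      # e.g. from a constrained estimator
        print("sqrt(CRLB):", bound)
        print("RMSE      :", rmse(estimates, truth))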

  2. Nuclear waste management issues: a multidisciplinary evaluation framework.

    SciTech Connect

    Hoffman, M.

    1980-02-01

    Initially, this paper characterizes the nuclear waste problem that requires analysis to establish the rationale for an interdisciplinary approach to resolve it. The problem characterization also explains why the specific concern with contaminated groundwater and intrusion through drilling has been selected for the focus of the panel meeting. The Nominal Group Technique (NGT), the group process format chosen for the experts' deliberations, is explained in some detail and its value in facilitating the desired dialogue is described. The dialogue is organized around the various issue areas that would be of concern to a program manager dealing with the potential problem of radioactivity escaping to the biosphere through human intrusion into contaminated groundwater. The participants are identified by professional discipline so that the dialogue can be presented in a realistic fashion. Both the content of the dialogue and its format are evaluated. Particular attention is given to their usefulness in generating a cross-section of subissues and factors that should be addressed when analyzing the waste disposal system's adequacy to prevent contaminated groundwater escaping to the biosphere.

  3. A framework for outcome-level evaluation of in-service training of health care workers

    PubMed Central

    2013-01-01

    Background In-service training is a key strategic approach to addressing the severe shortage of health care workers in many countries. However, there is a lack of evidence linking these health care worker trainings to improved health outcomes. In response, the United States President’s Emergency Plan for AIDS Relief’s Human Resources for Health Technical Working Group initiated a project to develop an outcome-focused training evaluation framework. This paper presents the methods and results of that project. Methods A general inductive methodology was used for the conceptualization and development of the framework. Fifteen key informant interviews were conducted to explore contextual factors, perceived needs, barriers and facilitators affecting the evaluation of training outcomes. In addition, a thematic analysis of 70 published articles reporting health care worker training outcomes identified key themes and categories. These were integrated, synthesized and compared to several existing training evaluation models. This formed an overall typology which was used to draft a new framework. Finally, the framework was refined and validated through an iterative process of feedback, pilot testing and revision. Results The inductive process resulted in identification of themes and categories, as well as relationships among several levels and types of outcomes. The resulting framework includes nine distinct types of outcomes that can be evaluated, which are organized within three nested levels: individual, organizational and health system/population. The outcome types are: (1) individual knowledge, attitudes and skills; (2) individual performance; (3) individual patient health; (4) organizational systems; (5) organizational performance; (6) organizational-level patient health; (7) health systems; (8) population-level performance; and (9) population-level health. The framework also addresses contextual factors which may influence the outcomes of training, as well as the

  4. Evaluating an Objective Structured Clinical Examination (OSCE) Adapted for Social Work

    ERIC Educational Resources Information Center

    Bogo, Marion; Regehr, Cheryl; Katz, Ellen; Logie, Carmen; Tufford, Lea; Litvack, Andrea

    2012-01-01

    Objectives: To evaluate an objective structured clinical examination (OSCE) adapted for social work in a lab course and examine the degree to which it predicts competence in the practicum. Method: 125 Masters students participated in a one-scenario OSCE and wrote responses to standardized reflection questions. OSCE performance and reflections were…

  5. Framework for performance evaluation of face recognition algorithms

    NASA Astrophysics Data System (ADS)

    Black, John A., Jr.; Gargesha, Madhusudhana; Kahol, Kanav; Kuchi, Prem; Panchanathan, Sethuraman

    2002-07-01

    Face detection and recognition is becoming increasingly important in the contexts of surveillance, credit card fraud detection, assistive devices for the visually impaired, etc. A number of face recognition algorithms have been proposed in the literature. The availability of a comprehensive face database is crucial to test the performance of these face recognition algorithms. However, while existing publicly available face databases contain face images with a wide variety of pose angles, illumination angles, gestures, face occlusions, and illuminant colors, these images have not been adequately annotated, thus limiting their usefulness for evaluating the relative performance of face detection algorithms. For example, many of the images in existing databases are not annotated with the exact pose angles at which they were taken. In order to compare the performance of various face recognition algorithms presented in the literature, there is a need for a comprehensive, systematically annotated database populated with face images that have been captured (1) at a variety of pose angles (to permit testing of pose invariance), (2) with a wide variety of illumination angles (to permit testing of illumination invariance), and (3) under a variety of commonly encountered illumination color temperatures (to permit testing of illumination color invariance). In this paper, we present a methodology for creating such an annotated database that employs a novel set of apparatus for the rapid capture of face images from a wide variety of pose angles and illumination angles. Four different types of illumination are used, including daylight, skylight, incandescent, and fluorescent. The entire set of images, as well as the annotations and the experimental results, is being placed in the public domain and made available for download over the World Wide Web.

  6. Objective and automated protocols for the evaluation of biomedical search engines using No Title Evaluation protocols

    PubMed Central

    Campagne, Fabien

    2008-01-01

    Background The evaluation of information retrieval techniques has traditionally relied on human judges to determine which documents are relevant to a query and which are not. This protocol is used in the Text Retrieval Evaluation Conference (TREC), organized annually for the past 15 years, to support the unbiased evaluation of novel information retrieval approaches. The TREC Genomics Track has recently been introduced to measure the performance of information retrieval for biomedical applications. Results We describe two protocols for evaluating biomedical information retrieval techniques without human relevance judgments. We call these protocols No Title Evaluation (NT Evaluation). The first protocol measures performance for focused searches, where only one relevant document exists for each query. The second protocol measures performance for queries expected to have potentially many relevant documents per query (high-recall searches). Both protocols take advantage of the clear separation of titles and abstracts found in Medline. We compare the performance obtained with these evaluation protocols to results obtained by reusing the relevance judgments produced in the 2004 and 2005 TREC Genomics Track and observe significant correlations between performance rankings generated by our approach and TREC. Spearman's correlation coefficients in the range of 0.79–0.92 are observed comparing bpref measured with NT Evaluation or with TREC evaluations. For comparison, coefficients in the range 0.86–0.94 can be observed when evaluating the same set of methods with data from two independent TREC Genomics Track evaluations. We discuss the advantages of NT Evaluation over the TRels and the data fusion evaluation protocols introduced recently. Conclusion Our results suggest that the NT Evaluation protocols described here could be used to optimize some search engine parameters before human evaluation. Further research is needed to determine if NT Evaluation or variants of these
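
    As an illustration of the ranking comparison reported above (the bpref values below are invented, not taken from the paper), Spearman's correlation between two performance rankings of the same retrieval methods can be computed as follows.

        # Illustrative only: comparing two rankings of the same retrieval methods,
        # e.g. bpref measured with NT Evaluation vs. with TREC relevance judgments.
        from scipy.stats import spearmanr

        # hypothetical bpref scores for five search-engine configurations
        nt_eval_bpref = [0.41, 0.35, 0.52, 0.28, 0.47]
        trec_bpref    = [0.44, 0.33, 0.55, 0.30, 0.43]

        rho, p_value = spearmanr(nt_eval_bpref, trec_bpref)
        print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")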

  7. Using a Learning Progression Framework to Assess and Evaluate Student Growth

    ERIC Educational Resources Information Center

    Briggs, Derek C.; Diaz-Bilello, Elena; Peck, Fred; Alzen, Jessica; Chattergoon, Rajendra; Johnson, Raymond

    2015-01-01

    This report describes the use of a Learning Progression Framework (LPF) to support the Student Learning Objectives (SLO) process. The report highlights a few common threats we currently see in the SLO process implemented at various states and districts, and offers the LPF as a possible solution for addressing these threats. This report was…

  8. Pilot Program on Common Status Measures Objective-Referenced Tests. Colorado Evaluation Project, Report No. 1.

    ERIC Educational Resources Information Center

    Colorado State Dept. of Education, Denver.

    The purpose of the Colorado Evaluation Project was to field test the Common Status Measures at grades four and eleven in conjunction with a statewide assessment program based on objective-referenced testing instruments developed by the Colorado Department of Education for grades kindergarten, three, six, nine, and twelve. The evaluation was…

  9. Fast evaluation of Sommerfeld integrals for EM scattering and radiation by three-dimensional buried objects

    SciTech Connect

    Cui, T.J.; Chew, W.C.

    1999-03-01

    This paper presents a fast method for electromagnetic scattering and radiation problems pertinent to three-dimensional (3-D) buried objects. In this approach, a new symmetrical form of the Green's function is derived, which can reduce the number of Sommerfeld integrals involved in the buried objects problem. The integration along steepest descent paths and leading-order approximations are introduced to evaluate these Sommerfeld integrals, which can greatly accelerate the computation. Based on the fast evaluation of Sommerfeld integrals, the radiation of an arbitrarily oriented electric dipole buried in a half space is first analyzed and computed. Then, the scattering by buried dielectric objects and conducting objects is considered using the method of moments (MOM). Numerical results show that the fast method can save tremendous CPU time in radiation and scattering problems involving buried objects.

  10. Evaluation of Conceptual Framework for Recruitment of African American Patients With Breast Cancer

    PubMed Central

    Heiney, Sue P.; Adams, Swann Arp; Wells, Linda M.; Johnson, Hiluv

    2010-01-01

    Purpose/Objectives To describe the Heiney-Adams Recruitment Framework (H-ARF); to delineate a recruitment plan for a randomized, behavioral trial (RBT) based on H-ARF; and to provide evaluation data on its implementation. Data Sources All data for this investigation originated from a recruitment database created for an RBT designed to test the effectiveness of a therapeutic group convened via teleconference for African American women with breast cancer. Data Synthesis Major H-ARF concepts include social marketing and relationship building. The majority of social marketing strategies yielded 100% participant recruitment. Greater absolute numbers were recruited via Health Insurance Portability and Accountability Act waivers. Using H-ARF yielded a high recruitment rate (66%). Conclusions Application of H-ARF led to successful recruitment in an RBT. The findings highlight three areas that researchers should consider when devising recruitment plans: absolute numbers versus recruitment rate, cost, and efficiency with institutional review board–approved access to protected health information. Implications for Nursing H-ARF may be applied to any clinical or population-based research setting because it provides direction for researchers to develop a recruitment plan based on the target audience and cultural attributes that may hinder or help recruitment. PMID:20439201

  11. Decision framework for evaluating compliance with the Threshold Test Ban Treaty

    SciTech Connect

    Judd, B.R.; Younker, L.W.; Hannon, W.J. Jr.; Strait, R.S.; Meagher, P.C.; Sicherman, A.; Kamelgarn, M.B.

    1988-08-01

    We have developed a decision framework for evaluating Soviet compliance with the 150-kt limit on the yield of nuclear tests, as specified by the Threshold Test Ban Treaty. The framework is designed to help interpret available evidence of possible violations and respond appropriately to that evidence. The "evidence" consists of estimates of the yield of Soviet tests. Interpreting and responding to evidence of possible violations requires a series of technical determinations and policy judgments. The decision maker may wish to consider: the degree of uncertainty in the monitoring data; potential Soviet violation scenarios and their significance; the probability of Soviet violations; the relative values of correct or incorrect responses; and the role of US responses to evidence in deterring future violations. The decision framework provides methods for incorporating explicitly each of these factors when interpreting and responding to evidence. The framework is best viewed as an aid to decision making. The intent, of course, is not to replace the policy maker with an analytic process. Rather, the framework provides a systematic method for organizing and incorporating diverse inputs, exploring the implications of alternative technical and value judgments, and understanding complex trade-offs. By exercising the framework, technical analysts and policy makers can build new insights, which ultimately can lead to better compliance evaluation decisions. 31 refs., 32 figs.

  12. Evaluation of a new data staging framework for the ARC middleware

    NASA Astrophysics Data System (ADS)

    Cameron, D.; Filipčič, A.; Karpenko, D.; Konstantinov, A.

    2012-12-01

    Staging data to and from remote storage services on the Grid for users’ jobs is a vital component of the ARC computing element. A new data staging framework for the computing element has recently been developed to address issues with the present framework, which has essentially remained unchanged since its original implementation 10 years ago. This new framework consists of an intelligent data transfer scheduler which handles priorities and fair-share, a rapid caching system, and the ability to delegate data transfer over multiple nodes to increase network throughput. This paper uses data from real user jobs running on production ARC sites to present an evaluation of the new framework. It is shown to make more efficient use of the available resources, reduce the overall time to run jobs, and avoid the problems seen with the previous simplistic scheduling system. In addition, its simple design coupled with intelligent logic provides greatly increased flexibility for site administrators, end users and future development.
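
    A minimal fair-share scheduling sketch, under assumptions of our own (simple per-share byte accounting, priority tie-breaking); it is not the ARC implementation, but it illustrates the kind of logic a priority- and fair-share-aware transfer scheduler applies when picking the next transfer.

        # Minimal fair-share sketch (not the ARC implementation): pick the next
        # data transfer for the share (user group) that has consumed the least so
        # far, breaking ties by job priority.
        from collections import defaultdict

        usage = defaultdict(int)            # MB already transferred per share
        queue = []                          # (share, priority, description, size_mb)

        def submit(share, priority, description, size_mb):
            queue.append((share, priority, description, size_mb))

        def next_transfer():
            # lower accumulated usage wins; higher priority wins ties
            queue.sort(key=lambda t: (usage[t[0]], -t[1]))
            share, priority, description, size_mb = queue.pop(0)
            usage[share] += size_mb
            return description

        submit("atlas-prod", 5, "input.root -> worker node", 1200)
        submit("atlas-user", 3, "user dataset stage-in", 300)
        submit("atlas-prod", 8, "conditions data stage-in", 50)

        for _ in range(3):
            print(next_transfer())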

  13. Using a scalable modeling and simulation framework to evaluate the benefits of intelligent transportation systems.

    SciTech Connect

    Ewing, T.; Tentner, A.

    2000-03-21

    A scalable, distributed modeling and simulation framework has been developed at Argonne National Laboratory to study Intelligent Transportation Systems. The framework can run on a single-processor workstation, or run distributed on a multiprocessor computer or network of workstations. The framework is modular and supports plug-in models, hardware, and live data sources. The initial set of models currently includes road network and traffic flow, probe and smart vehicles, traffic management centers, communications between vehicles and centers, in-vehicle navigation systems, roadway traffic advisories. The modeling and simulation capability has been used to examine proposed ITS concepts. Results are presented from modeling scenarios from the Advanced Driver and Vehicle Advisory Navigation Concept (ADVANCE) experimental program to demonstrate how the framework can be used to evaluate the benefits of ITS and to plan future ITS operational tests and deployment initiatives.

  14. Operationalization of National Objectives of Ethiopia into Educational Objectives. African Studies in Curriculum Development & Evaluation. No. 60.

    ERIC Educational Resources Information Center

    Adaye, Abebe Alaro

    This paper reports on past educational objectives of the old political regime in Ethiopia and new educational objectives of revolutionary Ethiopia. It is reported that these new objectives focus on education for production, scientific research, and socialist consciousness, and that all subjects are based on Marxism-Leninism. Curricular objectives…

  15. Using the Australian and New Zealand Telehealth Committee framework to evaluate telehealth: identifying conceptual gaps.

    PubMed

    Hughes, Emma; King, Chris; Kitt, Sharon

    2002-12-01

    Telehealth is strongly supported in policy rhetoric as being economically significant to Australia, but evaluation standards have been insufficiently developed to ensure that this is the case. The use of one such evaluation standard, the Australian and New Zealand Telehealth Committee (ANZTC) framework, for telehealth evaluation in Australia makes good sense. However, that framework emphasizes economic and technical considerations at the expense of social contexts. Furthermore, there must be questions about the utility of a framework which, it appears, has been used to evaluate only a single telehealth project in Australia. The combination of the economic rationalism of health-care policy and the technological determinism of a tool model of information and communication technologies (ICTs) can result in evaluations that fail to match the complexities of the intersection of health-care and ICTs. Using the ANZTC framework while at the same time focusing on explaining, rather than just describing, the links between interventions and outcomes seems a reasonable compromise. This involves understanding complex socio-technical networks and relationships, and requires investigators to engage with the gulf between private opinions, public statements and actual behaviour. PMID:12537899

  16. Using the Australian and New Zealand Telehealth Committee framework to evaluate telehealth: identifying conceptual gaps.

    PubMed

    Hughes, Emma; King, Chris; Kitt, Sharon

    2002-01-01

    Telehealth is strongly supported in policy rhetoric as being economically significant to Australia, but evaluation standards have been insufficiently developed to ensure that this is the case. The use of one such evaluation standard, the Australian and New Zealand Telehealth Committee (ANZTC) framework, for telehealth evaluation in Australia makes good sense. However, that framework emphasizes economic and technical considerations at the expense of social contexts. Furthermore, there must be questions about the utility of a framework which, it appears, has been used to evaluate only a single telehealth project in Australia. The combination of the economic rationalism of health-care policy and the technological determinism of a tool model of information and communication technologies (ICTs) can result in evaluations that fail to match the complexities of the intersection of health-care and ICTs. Using the ANZTC framework while at the same time focusing on explaining, rather than just describing, the links between interventions and outcomes seems a reasonable compromise. This involves understanding complex socio-technical networks and relationships, and requires investigators to engage with the gulf between private opinions, public statements and actual behaviour. PMID:12661616

  17. A Conceptual Framework for Graduate Teaching Assistant Professional Development Evaluation and Research

    PubMed Central

    Reeves, Todd D.; Marbach-Ad, Gili; Miller, Kristen R.; Ridgway, Judith; Gardner, Grant E.; Schussler, Elisabeth E.; Wischusen, E. William

    2016-01-01

    Biology graduate teaching assistants (GTAs) are significant contributors to the educational mission of universities, particularly in introductory courses, yet there is a lack of empirical data on how to best prepare them for their teaching roles. This essay proposes a conceptual framework for biology GTA teaching professional development (TPD) program evaluation and research with three overarching variable categories for consideration: outcome variables, contextual variables, and moderating variables. The framework’s outcome variables go beyond GTA satisfaction and instead position GTA cognition, GTA teaching practice, and undergraduate learning outcomes as the foci of GTA TPD evaluation and research. For each GTA TPD outcome variable, key evaluation questions and example assessment instruments are introduced to demonstrate how the framework can be used to guide GTA TPD evaluation and research plans. A common conceptual framework is also essential to coordinating the collection and synthesis of empirical data on GTA TPD nationally. Thus, the proposed conceptual framework serves as both a guide for conducting GTA TPD evaluation at single institutions and as a means to coordinate research across institutions at a national level. PMID:27193291

  18. Setting Healthcare Priorities at the Macro and Meso Levels: A Framework for Evaluation

    PubMed Central

    Barasa, Edwine W.; Molyneux, Sassy; English, Mike; Cleary, Susan

    2015-01-01

    Background: Priority setting in healthcare is a key determinant of health system performance. However, there is no widely accepted priority setting evaluation framework. We reviewed literature with the aim of developing and proposing a framework for the evaluation of macro and meso level healthcare priority setting practices. Methods: We systematically searched Econlit, PubMed, CINAHL, and EBSCOhost databases and supplemented this with searches in Google Scholar, relevant websites and reference lists of relevant papers. A total of 31 papers on evaluation of priority setting were identified. These were supplemented by broader theoretical literature related to evaluation of priority setting. A conceptual review of selected papers was undertaken. Results: Based on a synthesis of the selected literature, we propose an evaluative framework that requires that priority setting practices at the macro and meso levels of the health system meet the following conditions: (1) Priority setting decisions should incorporate both efficiency and equity considerations as well as the following outcomes; (a) Stakeholder satisfaction, (b) Stakeholder understanding, (c) Shifted priorities (reallocation of resources), and (d) Implementation of decisions. (2) Priority setting processes should also meet the procedural conditions of (a) Stakeholder engagement, (b) Stakeholder empowerment, (c) Transparency, (d) Use of evidence, (e) Revisions, (f) Enforcement, and (g) Being grounded on community values. Conclusion: Available frameworks for the evaluation of priority setting are mostly grounded on procedural requirements, while few have included outcome requirements. There is, however, increasing recognition of the need to incorporate both consequential and procedural considerations in priority setting practices. In this review, we adapt an integrative approach to develop and propose a framework for the evaluation of priority setting practices at the macro and meso levels that draws from these

  19. Assessing Students' Understandings of Biological Models and their Use in Science to Evaluate a Theoretical Framework

    NASA Astrophysics Data System (ADS)

    Grünkorn, Juliane; Belzen, Annette Upmeier zu; Krüger, Dirk

    2014-07-01

    Research in the field of students' understandings of models and their use in science describes different frameworks concerning these understandings. Currently, there is no conjoint framework that combines these structures and so far, no investigation has focused on whether it reflects students' understandings sufficiently (empirical evaluation). Therefore, the purpose of this article is to present the results of an empirical evaluation of a conjoint theoretical framework. The theoretical framework integrates relevant research findings and comprises five aspects which are subdivided into three levels each: nature of models, multiple models, purpose of models, testing, and changing models. The study was conducted with a sample of 1,177 seventh to tenth graders (aged 11-19 years) using open-ended items. The data were analysed by identifying students' understandings of models (nature of models and multiple models) and their use in science (purpose of models, testing, and changing models), and comparing as well as assigning them to the content of the theoretical framework. A comprehensive category system of students' understandings was thus developed. Regarding the empirical evaluation, the students' understandings of the nature and the purpose of models were sufficiently described by the theoretical framework. Concerning the understandings of multiple, testing, and changing models, additional initial understandings (only one model possible, no testing of models, and no change of models) need to be considered. This conjoint and now empirically tested framework for students' understandings can provide a common basis for future science education research. Furthermore, evidence-based indications can be provided for teachers and their instructional practice.

  20. Proposed Framework for the Evaluation of Standalone Corpora Processing Systems: An Application to Arabic Corpora

    PubMed Central

    Al-Thubaity, Abdulmohsen; Alqifari, Reem

    2014-01-01

    Despite the accessibility of numerous online corpora, students and researchers engaged in the fields of Natural Language Processing (NLP), corpus linguistics, and language learning and teaching may encounter situations in which they need to develop their own corpora. Several commercial and free standalone corpora processing systems are available to process such corpora. In this study, we first propose a framework for the evaluation of standalone corpora processing systems and then use it to evaluate seven freely available systems. The proposed framework considers the usability, functionality, and performance of the evaluated systems while taking into consideration their suitability for Arabic corpora. While the results show that most of the evaluated systems exhibited comparable usability scores, the scores for functionality and performance were substantially different with respect to support for the Arabic language and N-grams profile generation. The results of our evaluation will help potential users of the evaluated systems to choose the system that best meets their needs. More importantly, the results will help the developers of the evaluated systems to enhance their systems and developers of new corpora processing systems by providing them with a reference framework. PMID:25610910
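
    One of the evaluated functions is N-gram profile generation; a minimal word-bigram profile in Python might look like the sketch below. Whitespace tokenization is a deliberate simplification here; real Arabic corpora require proper tokenization and normalization, which the evaluated systems handle in different ways.

        # Minimal word N-gram profile sketch; whitespace tokenization is a
        # simplification and the sample sentence is invented.
        from collections import Counter

        def ngram_profile(text, n=2):
            tokens = text.split()
            grams = zip(*(tokens[i:] for i in range(n)))
            return Counter(" ".join(g) for g in grams)

        sample = "the corpus the corpus tool counts the corpus"
        print(ngram_profile(sample, n=2).most_common(3))
        # [('the corpus', 3), ('corpus the', 1), ('corpus tool', 1)]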

  1. Collaborative Evaluation within a Framework of Stakeholder-Oriented Evaluation Approaches

    ERIC Educational Resources Information Center

    O'Sullivan, Rita G.

    2012-01-01

    Collaborative Evaluation systematically invites and engages stakeholders in program evaluation planning and implementation. Unlike "distanced" evaluation approaches, which reject stakeholder participation as evaluation team members, Collaborative Evaluation assumes that active, on-going engagement between evaluators and program staff results in…

  2. A Conceptual Framework to Help Evaluate the Quality of Institutional Performance

    ERIC Educational Resources Information Center

    Kettunen, Juha

    2008-01-01

    Purpose: This study aims to present a general conceptual framework which can be used to evaluate quality and institutional performance in higher education. Design/methodology/approach: The quality of higher education is at the heart of the setting up of the European Higher Education Area. Strategic management is widely used in higher education…

  3. Community health promotion: a framework to facilitate and evaluate supportive social environments for health.

    PubMed

    Wagemakers, Annemarie; Vaandrager, Lenneke; Koelen, Maria A; Saan, Hans; Leeuwis, Cees

    2010-11-01

    The evaluation of community health promotion designed to create supportive social environments for health is still in its infancy. There is a lack of consensus on concepts, a lack of information on interventions that bring about social change, and a lack of feasible methods and tools. Consequently, the effectiveness of community health promotion may not be evaluated under all relevant headings. Therefore, this study aims to contribute to the evaluation of change in the social environment by presenting a framework. On the basis of the relevant literature we describe the relation between social environment and health predicting mediators. We selected participation and collaboration as core concepts in moderating the social environment of health because these terms give insight into the actual dynamics of health promotion practice. We synthesize the results into a framework with operational variables and offer four guidelines on how to apply the framework: use the variables as a menu, set specific aims for social change processes, use an action research approach, and triangulate data. The framework and guidelines enable the start-up, facilitation and evaluation of social change and learning processes and provide common ground for researchers and practitioners to improve the practice of their professions. PMID:20106527

  4. Epistemologically Authentic Inquiry in Schools: A Theoretical Framework for Evaluating Inquiry Tasks.

    ERIC Educational Resources Information Center

    Chinn, Clark A.; Malhotra, Betina A.

    2002-01-01

    Presents a theoretical framework for evaluating inquiry tasks and how similar they are to authentic science. Suggests that inquiry tasks commonly used in schools evoke reasoning processes that are qualitatively different from the processes employed in real scientific inquiry, and school reasoning tasks appear to be based on epistemology that…

  5. Editor and Section Editor's Perspective Article: A Look at the Danielson Framework for Teacher Evaluation

    ERIC Educational Resources Information Center

    Evans, Brian R.; Wills, Fran; Moretti, Megan

    2015-01-01

    In this age of teacher accountability, school districts are increasingly interested in using the best possible methods in evaluating their teachers. This interest impacts new alternative certification teachers, as well as traditional teachers. An increasingly popular assessment is the Danielson Framework, which is a set of 22 components of…

  6. A Conceptual Framework for the Development, Implementation, and Evaluation of Formal Mentoring Programs.

    ERIC Educational Resources Information Center

    Gaskill, LuAnn Ricketts

    1993-01-01

    Data from a survey of executive development directors were the basis for this mentoring program framework, consisting of (1) program development (protege and mentor selection, training, and linkage); (2) implementation (career and psychosocial functions); and (3) evaluation (formal and informal outcomes assessment). (SK)

  7. Gradually Adaptive Frameworks: Reasonable Disagreement and the Evolution of Evaluative Systems in Music Education

    ERIC Educational Resources Information Center

    Haskins, Stanley

    2013-01-01

    The concept of "gradually adaptive frameworks" is introduced as a model with the potential to describe the evolution of belief evaluative systems through the consideration of reasonable arguments and evidence. This concept is demonstrated through an analysis of specific points of disagreement between David Elliott's praxial philosophy…

  8. Gradually Adaptive Frameworks: Reasonable Disagreement and the Evolution of Evaluative Systems in Music Education

    ERIC Educational Resources Information Center

    Haskins, Stanley

    2013-01-01

    The concept of "gradually adaptive frameworks" is introduced as a model with the potential to describe the evolution of belief evaluative systems through the consideration of reasonable arguments and evidence. This concept is demonstrated through an analysis of specific points of disagreement between David Elliott's praxial…

  9. Evaluating Action-Learning and Professional Networking as a Framework for Educational Leadership Capacity Development

    ERIC Educational Resources Information Center

    Gunn, Cathy; Lefoe, Geraldine

    2013-01-01

    This article describes the responsive evaluation component of an educational leadership capacity-building initiative developed at one Australian university and implemented by three others. The project aimed to develop, implement and disseminate an innovative framework to address the national strategic goal to increase the pool of qualified…

  10. Revisiting "What Works for Whom?": A Qualitative Framework for Evaluating Clinical Effectiveness in Child Psychotherapy

    ERIC Educational Resources Information Center

    Urwin, Cathy

    2007-01-01

    This paper describes a framework for evaluating the effectiveness of child psychotherapy used by child psychotherapists in an inner city Child and Adolescent Mental Health Service (CAMHS). The Hopes and Expectations for Treatment Approach (HETA) involves using the assessment for psychotherapy that normally precedes treatment to derive a baseline…

  11. MODELING FRAMEWORK FOR EVALUATING SEDIMENTATION IN STREAM NETWORKS: FOR USE IN SEDIMENT TMDL ANALYSIS

    EPA Science Inventory

    A modeling framework that can be used to evaluate sedimentation in stream networks is described. This methodology can be used to determine sediment Total Maximum Daily Loads (TMDLs) in sediment impaired waters, and provide the necessary hydrodynamic and sediment-related data t...

  12. An audience-channel-message-evaluation (ACME) framework for health communication campaigns.

    PubMed

    Noar, Seth M

    2012-07-01

    Recent reviews of the literature have indicated that a number of health communication campaigns continue to fail to adhere to principles of effective campaign design. The lack of an integrated, organizing framework for the design, implementation, and evaluation of health communication campaigns may contribute to this state of affairs. The current article introduces an audience-channel-message-evaluation (ACME) framework that organizes the major principles of health campaign design, implementation, and evaluation. ACME also explicates the relationships and linkages between the varying principles. Insights from ACME include the following: The choice of audience segment(s) to focus on in a campaign affects all other campaign design choices, including message strategy and channel/component options. Although channel selection influences options for message design, choice of message design also influences channel options. Evaluation should not be thought of as a separate activity, but rather should be infused and integrated throughout the campaign design and implementation process, including formative, process, and outcome evaluation activities. Overall, health communication campaigns that adhere to this integrated set of principles of effective campaign design will have a greater chance of success than those using principles idiosyncratically. These design, implementation, and evaluation principles are embodied in the ACME framework. PMID:21441207

  13. Application of Scientific Approaches for Evaluation of Quality of Learning Objects in eQNet Project

    NASA Astrophysics Data System (ADS)

    Kurilovas, Eugenijus; Serikoviene, Silvija

    The paper analyses the application of several scientific approaches, methods, and principles for evaluating the quality of learning objects for the Mathematics subject. The authors analyse the following approaches to minimise the level of subjectivity in expert evaluation of the quality of learning objects: (1) principles of multiple criteria decision analysis for identification of quality criteria, (2) the technological quality criteria classification principle, (3) fuzzy group decision making theory to obtain evaluation measures, (4) the normalisation requirement for criteria weights, and (5) a scalarisation method for learning object quality optimisation. Another aim of the paper is to outline the central role of social tagging in describing usage, attention, and other aspects of the context, and in helping to exploit context data to make learning object repositories more useful and thus enhance reuse. The applied approaches were used in practice for the evaluation of learning objects and metadata tagging while implementing the European eQNet and te@ch.us projects in Lithuanian comprehensive schools in 2010.
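
    A minimal sketch of the last two elements, the normalisation requirement on criteria weights and weighted-sum scalarisation; the criteria names and values are invented, and the actual evaluation model in the paper is richer.

        # Illustrative weighted-sum scalarisation sketch; criteria and values are
        # invented, and the weights satisfy the normalisation requirement (sum = 1).
        def quality_score(ratings, weights):
            assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
            return sum(weights[c] * ratings[c] for c in weights)

        weights = {"reusability": 0.4, "interactivity": 0.3, "metadata": 0.3}
        learning_object = {"reusability": 0.8, "interactivity": 0.6, "metadata": 0.9}
        print(round(quality_score(learning_object, weights), 2))  # 0.77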

  14. Formation of the system indicators analytic dependence during bisubject qualimetric evaluation of arbitrary objects

    NASA Astrophysics Data System (ADS)

    Morozova, A.

    2016-04-01

    The paper presents an analytical model for calculating the values of parametric clusters, a parameter-compliance matrix, and a model for calculating the analyzed parameter values, which together make it possible to form an analytical matrix of the indicator system during bisubject qualimetric evaluation of arbitrary objects and to identify the quantitative relationships among the parameter values. The results are useful for solving control problems in both technical and socio-economic systems when evaluating objects using parameter systems generated by different subjects, taking into account their performance and decision-making priorities.

  15. A fuzzy MCDM model with objective and subjective weights for evaluating service quality in hotel industries

    NASA Astrophysics Data System (ADS)

    Zoraghi, Nima; Amiri, Maghsoud; Talebi, Golnaz; Zowghi, Mahdi

    2013-12-01

    This paper presents a fuzzy multi-criteria decision-making (FMCDM) model by integrating both subjective and objective weights for ranking and evaluating the service quality in hotels. The objective method selects weights of criteria through mathematical calculation, while the subjective method uses judgments of decision makers. In this paper, we use a combination of weights obtained by both approaches in evaluating service quality in hotel industries. A real case study that considered ranking five hotels is illustrated. Examples are shown to indicate capabilities of the proposed method.
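
    The sketch below illustrates the general idea of combining objective and subjective criteria weights and ranking alternatives by a weighted sum. The entropy method is used here for the objective weights as an assumed, common choice; the paper's exact calculation and fuzzy aggregation may differ, and all numbers are invented.

        # Sketch only: combine objective (entropy-based -- an assumed choice) and
        # subjective criteria weights, then rank alternatives by weighted sum.
        import numpy as np

        X = np.array([[0.7, 0.8, 0.6],     # hotel A scores on 3 criteria
                      [0.9, 0.5, 0.7],     # hotel B
                      [0.6, 0.9, 0.8]])    # hotel C

        # objective weights via the entropy method
        P = X / X.sum(axis=0)
        entropy = -(P * np.log(P)).sum(axis=0) / np.log(len(X))
        w_obj = (1 - entropy) / (1 - entropy).sum()

        # subjective weights from decision makers (already normalised)
        w_subj = np.array([0.5, 0.3, 0.2])

        # combined weights and final ranking
        w = w_obj * w_subj
        w /= w.sum()
        scores = X @ w
        print("combined weights:", np.round(w, 3))
        print("ranking (best first):", np.argsort(-scores))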

  16. Assessing Learning, Quality and Engagement in Learning Objects: The Learning Object Evaluation Scale for Students (LOES-S)

    ERIC Educational Resources Information Center

    Kay, Robin H.; Knaack, Liesel

    2009-01-01

    Learning objects are interactive web-based tools that support the learning of specific concepts by enhancing, amplifying, and/or guiding the cognitive processes of learners. Research on the impact, effectiveness, and usefulness of learning objects is limited, partially because comprehensive, theoretically based, reliable, and valid evaluation…

  17. An object-oriented modeling and simulation framework for bearings-only multi-target tracking using an unattended acoustic sensor network

    NASA Astrophysics Data System (ADS)

    Aslan, Murat Šamil

    2013-10-01

    Tracking ground targets using low cost ground-based sensors is a challenging field because of the limited capabilities of such sensors. Among the several candidates, including seismic and magnetic sensors, acoustic sensors based on microphone arrays have the potential to be useful: they can provide a direction to the sound source, they can have a relatively better range, and the sound characteristics can provide a basis for target classification. However, there are still many problems. One is the difficulty of resolving multiple sound sources, another is that they do not provide distance, a third is the presence of background noise from wind, sea, rain, distant air and land traffic, people, etc., and a fourth is that the same target can sound very different depending on factors like terrain type, topography, speed, gear, distance, etc. The use of sophisticated signal processing and data fusion algorithms is the key to compensating (to an extent) for the limited capabilities and the problems mentioned above. It is hard, if not impossible, to evaluate the performance of such complex algorithms analytically. For an effective evaluation, before performing expensive field trials, well-designed laboratory experiments and computer simulations are necessary. Along this line, in this paper, we present an object-oriented modeling and simulation framework which can be used to generate simulated data for the data fusion algorithms for tracking multiple on-road targets in an unattended acoustic sensor network. Each sensor node in the network is a circular microphone array which produces direction of arrival (DOA) (or bearing) measurements of the targets and sends this information to a fusion center. We present the models for road networks, targets (motion and acoustic power) and acoustic sensors in an object-oriented fashion where different and possibly time-varying sampling periods for each sensor node are possible. Moreover, the sensor's signal processing and
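
    An object-oriented sketch in the spirit of the framework described (the class names, noise model, and values are illustrative, not the paper's): each sensor node carries its own sampling period and reports noisy bearing (DOA) measurements of the targets.

        # Object-oriented sketch; classes, noise model, and values are illustrative.
        import math
        import random

        class Target:
            def __init__(self, x, y):
                self.x, self.y = x, y

        class AcousticSensorNode:
            def __init__(self, x, y, sampling_period, bearing_noise_deg=2.0):
                self.x, self.y = x, y
                self.sampling_period = sampling_period      # may differ per node
                self.bearing_noise_deg = bearing_noise_deg

            def measure(self, targets):
                """Return one noisy DOA measurement (degrees) per target."""
                return [math.degrees(math.atan2(t.y - self.y, t.x - self.x))
                        + random.gauss(0.0, self.bearing_noise_deg)
                        for t in targets]

        node = AcousticSensorNode(x=0.0, y=0.0, sampling_period=0.5)
        targets = [Target(100.0, 50.0), Target(-30.0, 80.0)]
        print(node.measure(targets))   # e.g. roughly [26.6, 110.6] degrees plus noise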

  18. A new multiscale routing framework and its evaluation for land surface modeling applications

    NASA Astrophysics Data System (ADS)

    Wen, Zhiqun; Liang, Xu; Yang, Shengtian

    2012-08-01

    A new multiscale routing framework is developed and coupled with the Hydrologically based Three-layer Variable Infiltration Capacity (VIC-3L) land surface model (LSM). This new routing framework has a characteristic of reducing impacts of different scales (both in space and time) on the routing results. The new routing framework has been applied to three different river basins with six different spatial resolutions and two different temporal resolutions. Their results have also been compared to the D8-based (eight direction based) routing scheme, whose flow network is generated from the widely used eight direction (D8) method, to evaluate the new framework's capability of reducing the impacts of spatial and temporal resolutions on the routing results. Results from the new routing framework show that they are significantly less affected by the spatial resolutions than those from the D8-based routing scheme. Comparing the results at the basins' outlets to those obtained from the instantaneous unit hydrograph (IUH) method which has, in principle, the least spatial resolution impacts on the routing results, the new routing framework provides results similar to those by the IUH method. However, the new routing framework has an advantage over the IUH method of providing routing information within the interior locations of a basin and along the river channels, while the IUH method cannot. The new routing framework also reduces impacts of different temporal resolutions on the routing results. The problem of spiky hydrographs caused by a typical routing method, due to the impacts of different temporal resolutions, can be significantly reduced.

  19. Objective evaluation of reconstruction methods for quantitative SPECT imaging in the absence of ground truth

    PubMed Central

    Jha, Abhinav K.; Song, Na; Caffo, Brian; Frey, Eric C.

    2015-01-01

    Quantitative single-photon emission computed tomography (SPECT) imaging is emerging as an important tool in clinical studies and biomedical research. There is thus a need for optimization and evaluation of systems and algorithms that are being developed for quantitative SPECT imaging. An appropriate objective method to evaluate these systems is by comparing their performance in the end task that is required in quantitative SPECT imaging, such as estimating the mean activity concentration in a volume of interest (VOI) in a patient image. This objective evaluation can be performed if the true value of the estimated parameter is known, i.e. we have a gold standard. However, very rarely is this gold standard known in human studies. Thus, no-gold-standard techniques to optimize and evaluate systems and algorithms in the absence of gold standard are required. In this work, we developed a no-gold-standard technique to objectively evaluate reconstruction methods used in quantitative SPECT when the parameter to be estimated is the mean activity concentration in a VOI. We studied the performance of the technique with realistic simulated image data generated from an object database consisting of five phantom anatomies with all possible combinations of five sets of organ uptakes, where each anatomy consisted of eight different organ VOIs. Results indicate that the method provided accurate ranking of the reconstruction methods. We also demonstrated the application of consistency checks to test the no-gold-standard output. PMID:26430292

  20. Objective evaluation of reconstruction methods for quantitative SPECT imaging in the absence of ground truth

    NASA Astrophysics Data System (ADS)

    Jha, Abhinav K.; Song, Na; Caffo, Brian; Frey, Eric C.

    2015-03-01

    Quantitative single-photon emission computed tomography (SPECT) imaging is emerging as an important tool in clinical studies and biomedical research. There is thus a need for optimization and evaluation of systems and algorithms that are being developed for quantitative SPECT imaging. An appropriate objective method to evaluate these systems is by comparing their performance in the end task that is required in quantitative SPECT imaging, such as estimating the mean activity concentration in a volume of interest (VOI) in a patient image. This objective evaluation can be performed if the true value of the estimated parameter is known, i.e. we have a gold standard. However, very rarely is this gold standard known in human studies. Thus, no-gold-standard techniques to optimize and evaluate systems and algorithms in the absence of gold standard are required. In this work, we developed a no-gold-standard technique to objectively evaluate reconstruction methods used in quantitative SPECT when the parameter to be estimated is the mean activity concentration in a VOI. We studied the performance of the technique with realistic simulated image data generated from an object database consisting of five phantom anatomies with all possible combinations of five sets of organ uptakes, where each anatomy consisted of eight different organ VOIs. Results indicate that the method provided accurate ranking of the reconstruction methods. We also demonstrated the application of consistency checks to test the no-gold-standard output.

  1. The scholar role in the National Competence Based Catalogues of Learning Objectives for Undergraduate Medical Education (NKLM) compared to other international frameworks

    PubMed Central

    Hautz, Stefanie C.; Hautz, Wolf E.; Keller, Niklas; Feufel, Markus A.; Spies, Claudia

    2015-01-01

    Background: In Germany, a national competence based catalogue of learning objectives in medicine (NKLM) was developed by the Society for Medical Education and the Council of Medical Faculties. Like many of its international counterparts, the NKLM describes the qualifications of medical school graduates. The definition of such outcome frameworks intends to make medical education transparent to students, teachers and society. The NKLM aims to amend existing lists of medical topics for assessment with learnable competencies. All outcome frameworks are structured into chapters, domains or physician roles. The definition of the scholar role poses a number of questions, such as: What distinguishes the necessary qualifications of a scientifically qualified physician from those of a medical scientist? Methods: 13 outcome frameworks were identified through a systematic three-step literature review and their content compared to the scholar role in the NKLM by means of a qualitative text analysis. The three steps consist of (1) search for outcome frameworks, (2) in- and exclusion, and (3) data extraction, categorization, and validation. The results were then matched against the scholar role of the NKLM. Results: The extracted contents of all frameworks may be summarized into the components Common Basics, Clinical Application, Research, Teaching and Education, and Lifelong Learning. Compared to the included frameworks, the NKLM emphasises competencies necessary for research and teaching, while clinical application is less prominently mentioned. Conclusion: The scholar role of the NKLM differs from other international outcome frameworks. Discussing these results should increase the dissemination and understanding of the NKLM and thus contribute to the qualification of future medical graduates in Germany. PMID:26609287

  2. Evaluation of five non-rigid image registration algorithms using the NIREP framework

    NASA Astrophysics Data System (ADS)

    Wei, Ying; Christensen, Gary E.; Song, Joo Hyun; Rudrauf, David; Bruss, Joel; Kuhl, Jon G.; Grabowski, Thomas J.

    2010-03-01

    Evaluating non-rigid image registration algorithm performance is a difficult problem since there is rarely a "gold standard" (i.e., known) correspondence between two images. This paper reports the analysis and comparison of five non-rigid image registration algorithms using the Non-Rigid Image Registration Evaluation Project (NIREP) (www.nirep.org) framework. The NIREP framework evaluates registration performance using centralized databases of well-characterized images and standard evaluation statistics (methods) which are implemented in a software package. The performance of five non-rigid registration algorithms (Affine, AIR, Demons, SLE and SICLE) was evaluated using 22 images from two NIREP neuroanatomical evaluation databases. Six evaluation statistics (relative overlap, intensity variance, normalized ROI overlap, alignment of calcarine sulci, inverse consistency error and transitivity error) were used to evaluate and compare image registration performance. The results indicate that the Demons registration algorithm produced the best registration results with respect to the relative overlap statistic but produced nearly the worst registration results with respect to the inverse consistency statistic. The fact that one registration algorithm produced the best result for one criterion and nearly the worst for another illustrates the need to use multiple evaluation statistics to fully assess performance.
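
    For illustration, one of the evaluation statistics, relative overlap between corresponding labeled regions after registration, can be sketched as intersection-over-union of binary masks; this is one common definition, and the exact NIREP formula may differ in detail.

        # Minimal sketch of a region-overlap statistic for registration evaluation.
        # Intersection-over-union of binary region masks is used here; the exact
        # NIREP "relative overlap" formula may differ in detail.
        import numpy as np

        def relative_overlap(mask_a, mask_b):
            intersection = np.logical_and(mask_a, mask_b).sum()
            union = np.logical_or(mask_a, mask_b).sum()
            return intersection / union if union else 1.0

        a = np.zeros((10, 10), dtype=bool); a[2:7, 2:7] = True   # region in image A
        b = np.zeros((10, 10), dtype=bool); b[3:8, 3:8] = True   # region after registration
        print(round(relative_overlap(a, b), 3))   # 16/34, i.e. about 0.471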

  3. A Framework for Evaluating and Enhancing Alignment in Self-Regulated Learning Research

    PubMed Central

    Dent, Amy L.; Hoyle, Rick H.

    2015-01-01

    We discuss the articles of this special issue with reference to an important yet previously only implicit dimension of study quality: alignment across the theoretical and methodological decisions that collectively define an approach to self-regulated learning. Integrating and extending work by leaders in the field, we propose a framework for evaluating alignment in the way self-regulated learning research is both conducted and reported. Within this framework, the special issue articles provide a springboard for discussing methodological promises and pitfalls of increasingly sophisticated research on the dynamic, contingent, and contextualized features of self-regulated learning. PMID:25825589

  4. Advancing a Framework to Enable Characterization and Evaluation of Data Streams Useful for Biosurveillance

    PubMed Central

    Margevicius, Kristen J.; Generous, Nicholas; Taylor-McCabe, Kirsten J.; Brown, Mac; Daniel, W. Brent; Castro, Lauren; Hengartner, Andrea; Deshpande, Alina

    2014-01-01

    In recent years, biosurveillance has become the buzzword under which a diverse set of ideas and activities regarding detecting and mitigating biological threats are incorporated, depending on context and perspective. Increasingly, biosurveillance practice has become global and interdisciplinary, requiring information and resources across public health, One Health, and biothreat domains. Even within the scope of infectious disease surveillance, multiple systems, data sources, and tools are used with varying and often unknown effectiveness. Evaluating the impact and utility of state-of-the-art biosurveillance is, in part, confounded by the complexity of the systems and the information derived from them. We present a novel approach that conceptualizes biosurveillance from the perspective of the fundamental data streams that have been or could be used for biosurveillance, and that systematically structures a framework which can be applied universally to evaluating and understanding a wide range of biosurveillance activities. Moreover, the Biosurveillance Data Stream Framework and associated definitions are proposed as a starting point to facilitate the development of a standardized lexicon for biosurveillance and the characterization of currently used and newly emerging data streams. Criteria for building the data stream framework were developed from an examination of the literature, analysis of information on operational infectious disease biosurveillance systems, and consultation with experts in the area of biosurveillance. To demonstrate utility, the framework and definitions were used as the basis for a schema of a relational database for biosurveillance resources and in the development and use of a decision support tool for data stream evaluation. PMID:24392093

  5. GRAVITATIONALLY INDUCED DENSITY WAKE OF A CIRCULARLY ORBITING OBJECT AS AN INTERPRETATIVE FRAMEWORK OF UBIQUITOUS SPIRALS AND ARCS

    SciTech Connect

    Kim, Hyosun

    2011-10-01

    An orbiting object in a gas-rich environment creates a gravitational density wake containing information about the object and its orbit. Using linear perturbation theory, we analyze the observable properties of the gravitational wake due to the object circularly moving in a static homogeneous gaseous medium, in order to derive the Bondi accretion radius r_B, the orbital distance r_p, and the Mach number M_p of the object. Supersonic motion, producing a wake of spiral-onion shell structure, exhibits a single-armed Archimedes spiral and two-centered circular arcs with respect to the line of sight. The pitch angle, arm width, and spacing of the spiral pattern are entirely determined by the orbital distance r_p and Mach number M_p of the object. The arm-interarm density contrast is proportional to r_B, decreasing as a function of distance with a power index of -1. The background density distribution is globally changed from initially uniform to centrally concentrated. The vertical structure of the wake is manifested as circular arcs with the center at the object location. The angular extent of the arcs is determined by the Mach number M_p of the object motion. Diagnostic probes of nonlinear wakes such as a detached bow shock, the absence of the definite inner arm boundary, the presence of turbulent low-density eddies, and elongated shapes of arcs are explained in the extension of the linear analysis. The density enhancement at the center is always r_B/r_p independent of the nonlinearity, suggesting that massive objects can substantially modify the background distribution.
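
    As a rough back-of-envelope illustration of why the spiral spacing encodes r_p and M_p (an assumed estimate for orientation, not an equation quoted from the paper):

```latex
% If the wake pattern expands at the sound speed c_s while the perturber
% completes one orbit in T = 2\pi r_p / V_p, successive windings of the
% Archimedes spiral are separated radially by roughly
\Delta r \;\simeq\; c_s \, T \;=\; \frac{2\pi r_p}{\mathcal{M}_p},
% so a measured arm spacing and pitch constrain r_p and M_p, as stated above.
```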

  6. An Evaluation Framework for EU Research and Development e-Health Projects' Systems

    NASA Astrophysics Data System (ADS)

    Mavridis, Androklis; Katriou, Stamatia-Ann; Koumpis, Adamantios

    Over the past years it has become evident that an evaluation system is necessary for European Research and Competitive funded projects, which are large and complex structures needing constant monitoring. This is especially so for e-Health projects. The race to complete assignments means that this area is usually neglected. A proposed framework for the evaluation of R & D project systems using ATAM, ISO 14598 and ISO 9126 standards is presented. The evaluation framework covers a series of steps which ensure that the offered system satisfies quality attributes, such as operability, usability and maintainability, imposed by the end users. The main advantage of this step-by-step procedure is that faults in the architecture, software or prototype can be recognised early in the development phase and corrected more rapidly. The system has a common set of attributes against which the various projects' deliverables are assessed.

  7. Allocation of Capacitors and Voltage Regulators in Unbalanced Distribution Systems: A Multi-objective Problem in Probabilistic Frameworks

    NASA Astrophysics Data System (ADS)

    Carpinelli, Guido; Noce, Christian; Russo, Angela; Varilone, Pietro

    2014-12-01

    Capacitors and series voltage regulators are used extensively in distribution systems to reduce power losses and improve the voltage profile along the feeders. This paper deals with the problem of contemporaneously choosing optimal locations and sizes for both capacitors and series voltage regulators in three-phase, unbalanced distribution systems. This is a mixed, non-linear, constrained, multi-objective optimization problem that usually is solved in deterministic scenarios. However, distribution systems are stochastic in nature, which can lead to inaccurate deterministic solutions. To take into account the unavoidable uncertainties that affect the input data related to the problem, in this paper, we have formulated and solved the multi-objective optimization problem in probabilistic scenarios. To address the multi-objective optimization problem, algorithms were used in which all the objective functions were combined to form a single function. These algorithms allow us to transform the original multi-objective optimization problem into an equivalent, single-objective, optimization problem, an approach that appeared to be particularly suitable since computational time was an important issue. To further reduce the computational efforts, a linearized form of the equality constraints of the optimization model was used, and a micro-genetic algorithm-based procedure was applied in the solution method.
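
    The single-function combination described above can be as simple as a weighted sum of the normalized objectives; a minimal sketch (weights and values are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def scalarize(objectives: np.ndarray, weights: np.ndarray) -> float:
    """Collapse several minimization objectives into one fitness value.

    objectives: values such as power losses and voltage-profile deviation,
    pre-normalized to comparable scales; weights: non-negative, summing to 1.
    """
    return float(np.dot(weights, objectives))

# A candidate placement/sizing of capacitors and regulators is then ranked by
# a single number, so any single-objective (e.g. micro-genetic) solver applies.
losses_pu, voltage_dev_pu = 0.032, 0.045          # illustrative normalized values
print(scalarize(np.array([losses_pu, voltage_dev_pu]), np.array([0.6, 0.4])))
```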

  8. A Benchmark Data Set to Evaluate the Illumination Robustness of Image Processing Algorithms for Object Segmentation and Classification.

    PubMed

    Khan, Arif Ul Maula; Mikut, Ralf; Reischl, Markus

    2015-01-01

    Developers of image processing routines rely on benchmark data sets to give qualitative comparisons of new image analysis algorithms and pipelines. Such data sets need to include artifacts in order to occlude and distort the required information to be extracted from an image. Robustness, the quality of an algorithm in relation to the amount of distortion, is often important. However, with available benchmark data sets, an evaluation of illumination robustness is difficult or even impossible due to missing ground truth data about object margins and classes and missing information about the distortion. We present a new framework for robustness evaluation. The key aspect is an image benchmark containing 9 object classes and the required ground truth for segmentation and classification. Varying levels of shading and background noise are integrated to distort the data set. To quantify the illumination robustness, we provide measures for image quality, segmentation and classification success, and robustness. We set a high value on giving users easy access to the new benchmark; therefore, all routines are provided within a software package but can easily be replaced to emphasize other aspects. PMID:26191792

  9. A Benchmark Data Set to Evaluate the Illumination Robustness of Image Processing Algorithms for Object Segmentation and Classification

    PubMed Central

    Khan, Arif ul Maula; Mikut, Ralf; Reischl, Markus

    2015-01-01

    Developers of image processing routines rely on benchmark data sets to give qualitative comparisons of new image analysis algorithms and pipelines. Such data sets need to include artifacts in order to occlude and distort the required information to be extracted from an image. Robustness, the quality of an algorithm in relation to the amount of distortion, is often important. However, with available benchmark data sets, an evaluation of illumination robustness is difficult or even impossible due to missing ground truth data about object margins and classes and missing information about the distortion. We present a new framework for robustness evaluation. The key aspect is an image benchmark containing 9 object classes and the required ground truth for segmentation and classification. Varying levels of shading and background noise are integrated to distort the data set. To quantify the illumination robustness, we provide measures for image quality, segmentation and classification success, and robustness. We set a high value on giving users easy access to the new benchmark; therefore, all routines are provided within a software package but can easily be replaced to emphasize other aspects. PMID:26191792
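
    A generic way to quantify the kind of robustness this benchmark targets is to score a pipeline's quality at each distortion level and report how much of the undistorted quality is retained; a minimal sketch assuming such per-level scores are available (the paper's exact measures are not reproduced here):

```python
import numpy as np

def robustness(quality_by_level):
    """Summarize how a quality score degrades as distortion increases.

    quality_by_level maps a distortion strength (e.g. shading or noise level)
    to a quality score in [0, 1] (e.g. segmentation overlap with ground truth).
    Returns the mean quality retained relative to the undistorted case.
    """
    levels = sorted(quality_by_level)
    baseline = quality_by_level[levels[0]]      # smallest level = no distortion
    if baseline == 0:
        return 0.0
    return float(np.mean([quality_by_level[l] / baseline for l in levels[1:]]))

# Illustrative scores for one segmentation pipeline on the benchmark:
print(robustness({0.0: 0.95, 0.2: 0.93, 0.4: 0.88, 0.6: 0.71}))  # ~0.88
```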

  10. A Framework for Evaluation of Climate Science Professional Development Projects: A NICE NASA Example

    NASA Astrophysics Data System (ADS)

    Comfort, K.; Bleicher, R. E.

    2012-12-01

    Purpose of Presentation This research presents the overall logic model for the evaluation plan for a three-year NASA-funded project focused on teacher professional development. This session will highlight how we are using data to continually revise the evaluation plan, and we will also share insights about communication between the external evaluator and the PI. Objectives and Research Questions PEL leverages three NASA NICE projects with a high school district, providing professional development for teachers, learning opportunities for students, parental involvement and interaction with NASA scientists. PEL aims to increase Climate Science literacy in high school students, with a focus on Hispanic students, through scientific argumentation using authentic NASA data. Our research will concentrate on investigating the following questions: 1. What do we know about the alternative conceptions students hold about climate science, and what is challenging for students? 2. Are students developing climate science literacy, especially in the difficult concept areas, after PEL implementation? 3. How effective is PEL in nurturing scientific argumentation skills? 4. How effective are the resources we are providing in PEL? 5. Is there evidence that teachers are establishing stronger leadership capacity in their schools? Theoretical Framework for PEL Evaluation The expectancy-value theory of achievement motivation (E-V-C) (Fan, 2011; Wigfield & Eccles, 1994) provides a theoretical foundation for the research. Expectancy is the degree to which a teacher or student has reason to expect that they will be successful in school. Value indicates whether they think that performance at school will be worthwhile to them. Cost is the perceived sacrifices that must be undertaken, or factors that can inhibit a successful performance at school. For students, data from an embedded E-V-C investigation will help articulate how E-V-C factors relate to student interest in science, continuing to

  11. Selecting robust solutions from a trade-off surface through the evaluation of the distribution of parameter sets in objective space and parameter space

    NASA Astrophysics Data System (ADS)

    Dumedah, G.; Berg, A. A.; Wineberg, M.

    2009-12-01

    Hydrological models are increasingly being calibrated using multi-objective genetic algorithms (GAs). Multi-objective GAs facilitate the evaluation of several model evaluation objectives and the examination of massive combinations of parameter sets. Usually, the outcome is a set of several equally accurate parameter sets which make up a trade-off surface between the objective functions, often referred to as the Pareto set. The Pareto set describes a decision-front in a way that each solution has unique values in parameter space with competing accuracy in objective space. An automated framework for choosing a single solution from such a trade-off surface has not been thoroughly investigated in the model calibration literature. As a result, this presentation will demonstrate an automated selection of robust solutions from a trade-off surface using the distribution of solutions in both objective space and parameter space. The trade-off surface was generated using the Non-dominated Sorting Genetic Algorithm II (NSGA-II) to calibrate the Soil and Water Assessment Tool (SWAT) for streamflow simulation based on model bias and root mean square error. Our selection method generates solutions with unique properties including a representative pathway in parameter space, a basin of attraction or the center of mass in objective space, and a proximity to the origin in objective space. Additionally, our framework determines a robust solution as a balanced compromise for the distribution of solutions in objective space and parameter space. That is, the robust solution emphasizes stability in model parameter values and in objective function values in a way that similarity in parameter space implies similarity in objective space.
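
    The "proximity to the origin in objective space" criterion mentioned above can be sketched as follows; this is a simplified illustration of only the objective-space part of the selection, with illustrative names and numbers (the parameter-space criteria of the framework are not reproduced):

```python
import numpy as np

def compromise_solution(pareto_objs: np.ndarray) -> int:
    """Pick one solution from a Pareto front of minimization objectives.

    pareto_objs has shape (n_solutions, n_objectives), e.g. |bias| and RMSE.
    Objectives are min-max normalized, then the solution closest to the
    origin (the ideal point) in normalized objective space is returned.
    """
    mins, maxs = pareto_objs.min(axis=0), pareto_objs.max(axis=0)
    span = np.where(maxs > mins, maxs - mins, 1.0)
    normalized = (pareto_objs - mins) / span
    return int(np.argmin(np.linalg.norm(normalized, axis=1)))

front = np.array([[0.02, 35.0],   # low bias, high RMSE
                  [0.10, 22.0],   # balanced
                  [0.30, 18.0]])  # low RMSE, high bias
print(compromise_solution(front))  # -> 1, the balanced trade-off
```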

  12. Correlating objective and subjective evaluation of texture appearance with applications to camera phone imaging

    NASA Astrophysics Data System (ADS)

    Phillips, Jonathan B.; Coppola, Stephen M.; Jin, Elaine W.; Chen, Ying; Clark, James H.; Mauer, Timothy A.

    2009-01-01

    Texture appearance is an important component of photographic image quality as well as object recognition. Noise cleaning algorithms are used to decrease sensor noise of digital images, but can hinder texture elements in the process. The Camera Phone Image Quality (CPIQ) initiative of the International Imaging Industry Association (I3A) is developing metrics to quantify texture appearance. Objective and subjective experimental results of the texture metric development are presented in this paper. Eight levels of noise cleaning were applied to ten photographic scenes that included texture elements such as faces, landscapes, architecture, and foliage. Four companies (Aptina Imaging, LLC, Hewlett-Packard, Eastman Kodak Company, and Vista Point Technologies) have performed psychophysical evaluations of overall image quality using one of two methods of evaluation. Both methods presented paired comparisons of images on thin film transistor liquid crystal displays (TFT-LCD), but the display pixel pitch and viewing distance differed. CPIQ has also been developing objective texture metrics and targets that were used to analyze the same eight levels of noise cleaning. The correlation of the subjective and objective test results indicates that texture perception can be modeled with an objective metric. The two methods of psychophysical evaluation exhibited high correlation despite the differences in methodology.
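
    Checking that "texture perception can be modeled with an objective metric" usually comes down to correlating metric values with psychophysical scores across the noise-cleaning levels; a toy sketch with made-up numbers (not CPIQ data):

```python
import numpy as np

# Illustrative data: one objective texture-metric value and one mean subjective
# quality score per noise-cleaning level (values are invented for illustration).
objective_metric = np.array([0.92, 0.85, 0.78, 0.70, 0.61, 0.52, 0.44, 0.36])
subjective_score = np.array([0.0, -0.4, -0.9, -1.6, -2.4, -3.1, -3.9, -4.6])

r = np.corrcoef(objective_metric, subjective_score)[0, 1]
print(f"Pearson r = {r:.3f}")  # a high |r| supports modeling perception objectively
```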

  13. [Bio-objects and biological methods of space radiation effects evaluation].

    PubMed

    Kaminskaia, E V; Nevzgodina, L V; Platova, N G

    2009-01-01

    The unique conditions of space experiments place austere requirements to bio-objects and biological methods of radiation effects evaluation. The paper discusses suitability of a number of bio-objects varying in stage of evolution and metabolism for space researches aimed to state common patterns of the radiation damage caused by heavy ions (HI), and character of HI-cell interaction. Physical detectors in space experiments of the BIOBLOCK series make it possible to identify bio-objects hit by space HI and to set correlation between HI track topography and biological effect. The paper provides an all-round description of the bio-objects chosen for two BIOBLOCK experiments (population of hydrophyte Wolffia arrhiza (fam. duckweed) and Lactuca sativa seeds) and the method of evaluating effects from single space radiation HI. Direct effects of heavy ions on cells can be determined by the criteria of chromosomal aberrations and delayed morphologic abnormalities. The evaluation results are compared with the data about human blood lymphocytes. Consideration is being given to the procedures of test-objects' treatment and investigation. PMID:20120909

  14. Real Progress in Maryland: Student Learning Objectives and Teacher and Principal Evaluation

    ERIC Educational Resources Information Center

    Slotnik, William J.; Bugler, Daniel; Liang, Guodong

    2014-01-01

    The Maryland State Department of Education (MSDE) is making significant strides in guiding and supporting the implementation of Student Learning Objectives (SLOs) as well as a teacher and principal evaluation (TPE) system statewide. MSDE support focuses on helping districts prepare for full SLO implementation by providing technical assistance with…

  15. A Component Analysis of the Impact of Evaluative and Objective Feedback on Performance

    ERIC Educational Resources Information Center

    Johnson, Douglas A.

    2013-01-01

    Despite the frequency with which performance feedback interventions are used in organizational behavior management, component analyses of such feedback are rare. It has been suggested that evaluation of performance and objective details about performance are two necessary components for performance feedback. The present study was designed to help…

  16. An Evaluation of Learning Objects in Singapore Primary Education: A Case Study Approach

    ERIC Educational Resources Information Center

    Grace, Tay Pei Lyn; Suan, Ng Peck; Wanzhen, Liaw

    2008-01-01

    Purpose: The purpose of this paper is to evaluate the usability and interface design of e-learning portal developed for primary schools in Singapore. Design/methodology/approach: Using Singapore-based learning EDvantage (LEAD) portal as a case study, this paper reviews and analyses the usability and usefulness of embedded learning objects (LOs)…

  17. Evaluating the Use of Learning Objects in Australian and New Zealand Schools

    ERIC Educational Resources Information Center

    Schibeci, Renato; Lake, David; Phillips, Rob; Lowe, Kate; Cummings, Rick; Miller, Erica

    2008-01-01

    The Le@rning Federation, an agency funded by Australian and New Zealand governments, initiated a Field Review project as the start of a long-term research study to evaluate the impact, application and effectiveness of the online digital content developed according to the learning object model. In terms of content, the pilot Field Review found that…

  18. PharmD Drug Information Rotation Experience: Philosophy, Objectives and Evaluation.

    ERIC Educational Resources Information Center

    Evens, Ronald P.

    1979-01-01

    A drug information service clerkship is described including the philosophy, environment, objectives, methodology, and assessments of what is believed to constitute an idealized model for adaptation to other university settings. Appended are textbook references, indexing and abstracting resources, reading list, and two student evaluation forms.…

  19. A GIS-BASED METHOD FOR MULTI-OBJECTIVE EVALUATION OF PARK VEGETATION. (R824766)

    EPA Science Inventory

    Abstract

    In this paper we describe a method for evaluating the concordance between a set of mapped landscape attributes and a set of quantitatively expressed management priorities. The method has proved to be useful in planning urban green areas, allowing objectively d...

  20. Objective Evaluation of Muscle Strength in Infants with Hypotonia and Muscle Weakness

    ERIC Educational Resources Information Center

    Reus, Linda; van Vlimmeren, Leo A.; Staal, J. Bart; Janssen, Anjo J. W. M.; Otten, Barto J.; Pelzer, Ben J.; Nijhuis-van der Sanden, Maria W. G.

    2013-01-01

    The clinical evaluation of an infant with motor delay, muscle weakness, and/or hypotonia would improve considerably if muscle strength could be measured objectively and normal reference values were available. The authors developed a method to measure muscle strength in infants and tested 81 typically developing infants, 6-36 months of age, and 17…

  1. OBJECTIVE EVALUATION OF HYPERACTIVATED MOTILITY IN RAT SPERMATOZA USING COMPUTER-ASSISTED SPERM ANALYSIS (CASA)

    EPA Science Inventory

    Objective evaluation of hyperactivated motility in rat spermatozoa using computer-assisted sperm analysis.

    Cancel AM, Lobdell D, Mendola P, Perreault SD.

    Toxicology Program, University of North Carolina, Chapel Hill, NC 27599, USA.

    The aim of this study was t...

  2. An Integrative Modeling Framework to Evaluate the Productivity and Sustainability of Biofuel Crop Production Systems

    SciTech Connect

    Zhang, Xuesong; Izaurralde, Roberto C.; Manowitz, David H.; West, T. O.; Post, W. M.; Thomson, Allison M.; Bandaru, V. P.; Nichols, J.; Williams, J.R.

    2010-09-08

    The potential expansion of biofuel production raises food, energy, and environmental challenges that require careful assessment of the impact of biofuel production on greenhouse gas (GHG) emissions, soil erosion, nutrient loading, and water quality. In this study, we describe a spatially-explicit integrative modeling framework (SEIMF) to understand and quantify the environmental impacts of different biomass cropping systems. This SEIMF consists of three major components: 1) a geographic information system (GIS)-based data analysis system to define spatial modeling units with resolution of 56 m to address spatial variability, 2) the biophysical and biogeochemical model EPIC (Environmental Policy Integrated Climate) applied in a spatially-explicit way to predict biomass yield, GHG emissions, and other environmental impacts of different biofuel crops production systems, and 3) an evolutionary multi-objective optimization algorithm for exploring the trade-offs between biofuel energy production and unintended ecosystem-service responses. Simple examples illustrate the major functions of the SEIMF when applied to a 9-county Regional Intensive Modeling Area (RIMA) in SW Michigan to 1) simulate biofuel crop production, 2) compare impacts of management practices and local ecosystem settings, and 3) optimize the spatial configuration of different biofuel production systems by balancing energy production and other ecosystem-service variables. Potential applications of the SEIMF to support life cycle analysis and provide information on biodiversity evaluation and marginal-land identification are also discussed. The SEIMF developed in this study is expected to provide a useful tool for scientists and decision makers to understand sustainability issues associated with the production of biofuels at local, regional, and national scales.

  3. Evaluation of satellite-based precipitation estimates in winter season using an object-based approach

    NASA Astrophysics Data System (ADS)

    Li, J.; Hsu, K.; AghaKouchak, A.; Sorooshian, S.

    2012-12-01

    Verification has become an integral component of satellite precipitation algorithms and products. A number of object-based verification methods have been proposed to provide diagnostic information regarding the precipitation products' ability to capture the spatial pattern, intensity, and placement of precipitation. However, most object-based methods are not capable of investigating precipitation objects at the storm scale. In this study, an image processing approach known as watershed segmentation was adopted to detect storm-scale rainfall objects. Then, a fuzzy logic-based technique was utilized to diagnose and analyze storm-scale object attributes, including centroid distance, area ratio, intersection area ratio, and orientation angle difference. Three verification metrics (i.e., false alarm ratio, missing ratio, and overall membership score) were generated for validation and verification. Three satellite-based precipitation products, PERSIANN, CMORPH, and 3B42RT, were evaluated against the NOAA stage IV MPE multi-sensor composite rain analysis at 0.25° by 0.25° on a daily scale in the winter season of 2010 over the contiguous United States. The winter season is dominated by frontal systems, which usually have larger area coverage. All three products and the stage IV observation tend to find large storm objects. With respect to the evaluation attributes, PERSIANN tends to obtain a larger area ratio and consequently has a larger centroid distance to the stage IV observations, while 3B42RT is found to be closer to the stage IV observations in object size. All evaluated products give small orientation angle differences but vary significantly in missing ratio and false alarm ratio. This implies that satellite estimates can fail to detect storms in winter. The overall membership scores are close for all three products, which indicates that all three satellite-based precipitation products perform well for capturing the spatial and geometric characteristics of
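
    Storm-scale attributes such as centroid distance and area ratio can be computed directly from matched object masks; a minimal sketch (grid, masks, and function names are illustrative, not the study's code):

```python
import numpy as np

def object_attributes(obj_sat: np.ndarray, obj_ref: np.ndarray):
    """Comparison attributes for two matched binary rain objects on one grid.

    obj_sat: object detected in the satellite product; obj_ref: matching
    stage IV object. Returns (centroid distance in grid cells, area ratio).
    """
    c_sat = np.array(np.nonzero(obj_sat)).mean(axis=1)
    c_ref = np.array(np.nonzero(obj_ref)).mean(axis=1)
    centroid_dist = float(np.linalg.norm(c_sat - c_ref))
    area_ratio = float(obj_sat.sum() / obj_ref.sum())
    return centroid_dist, area_ratio

a = np.zeros((20, 20), dtype=bool); a[5:12, 4:10] = True   # satellite object
b = np.zeros((20, 20), dtype=bool); b[6:12, 5:10] = True   # stage IV object
print(object_attributes(a, b))  # ~ (0.71 cells, 1.4)
```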

  4. A novel objective sour taste evaluation method based on near-infrared spectroscopy.

    PubMed

    Hoshi, Ayaka; Aoki, Soichiro; Kouno, Emi; Ogasawara, Masashi; Onaka, Takashi; Miura, Yutaka; Mamiya, Kanji

    2014-05-01

    One of the most important themes in the development of foods and drinks is the accurate evaluation of taste properties. In general, a sensory evaluation system is frequently used for evaluating food and drink. This method, which is dependent on human senses, is highly sensitive but is influenced by the eating experience and food palatability of individuals, leading to subjective results. Therefore, a more effective method for objectively estimating taste properties is required. Here we show that salivary hemodynamic signals, as measured by near-infrared spectroscopy, are a useful objective indicator for evaluating sour taste stimulus. In addition, the hemodynamic responses of the parotid gland are closely correlated to the salivary secretion volume of the parotid gland in response to basic taste stimuli and respond to stimuli independently of the hedonic aspect. Moreover, we examined the hemodynamic responses to complex taste stimuli in food-based solutions and demonstrated for the first time that the complicated phenomenon of the "masking effect," which decreases taste intensity despite the additional taste components, can be successfully detected by near-infrared spectroscopy. In summary, this study is the first to demonstrate near-infrared spectroscopy as a novel tool for objectively evaluating complex sour taste properties in foods and drinks. PMID:24474216

  5. Linking Assessment to Decision Making in Water Resources Planning - Decision Making Frameworks and Case Study Evaluations

    NASA Astrophysics Data System (ADS)

    Broman, D.; Gangopadhyay, S.; Simes, J.

    2015-12-01

    Climate assessments have become an accepted and commonly used component of long-term water management and planning. There is substantial variation in the methods used in these assessments; however, managers and decision-makers have come to value their utility to identify future system limitations, and to evaluate future alternatives to ensure satisfactory system performance. A new set of decision-making frameworks has been proposed, including robust decision making (RDM) and decision scaling, that directly address the deep uncertainties found in both future climate and non-climatic factors. Promising results have been obtained using these new frameworks, offering a more comprehensive understanding of future conditions leading to failures, and identification of measures to address these failures. Data and resource constraints have limited the use of these frameworks within the Bureau of Reclamation. We present here a modified framework that captures the strengths of previously proposed methods while using a suite of analysis tools that allows a 'rapid climate assessment' to be performed. A scalable approach has been taken where more complex tools can be used if project resources allow. This 'rapid assessment' is demonstrated through two case studies on the Santa Ana and Colorado Rivers where previous climate assessments have been completed. Planning-level measures are used to compare how decision making is affected when using this new decision-making framework.

  6. Testing thermal comfort of trekking boots: an objective and subjective evaluation.

    PubMed

    Arezes, P M; Neves, M M; Teixeira, S F; Leão, C P; Cunha, J L

    2013-07-01

    The study of the thermal comfort of the feet when using a specific type of shoe is of paramount importance, in particular if the main goal of the study is to attend to the needs of users. The main aim of this study was to propose a test battery for thermal comfort analysis and to apply it to the analysis of trekking boots. Methodologically, the project involves both objective and subjective evaluations. An objective evaluation of the thermal properties of the fabrics used in the boots was developed and applied. In addition, the thermal comfort provided when using the boots was also assessed both subjectively and objectively. The evaluation of the thermal comfort during use, which was simulated in a laboratory environment, included the measurement of the temperature and moisture of the feet. The subjective assessment was performed using a questionnaire. From the results obtained, it was possible to define an optimal combination of fabrics to apply to trekking boots by considering the provided thermal insulation, air permeability and wicking. The results also revealed that the subjective perception of thermal comfort appears to be more related to the increase in temperature of the feet than to the moisture retention inside the boot. Although the evaluation of knits used in the boots indicated that a particular combination of fibres was optimal for use in the inner layer, the subjective and objective evaluation of thermal comfort revealed that the evaluation provided by users did not necessarily match the technical assessment data. No correlation was observed between the general comfort and specific thermal comfort assessments. Finally, the identification of thermal discomfort by specific foot areas would be useful in the process of designing and developing boots. PMID:23317756

  7. A fuel cycle framework for evaluating greenhouse gas emission reduction technology

    SciTech Connect

    Ashton, W.B.; Barns, D.W.; Bradley, R.A. (Office of Environmental Analysis)

    1990-05-01

    Energy-related greenhouse gas (GHG) emissions arise from a number of fossil fuels, processes, and equipment types throughout the full cycle from primary fuel production to end use. Many technology alternatives are available for reducing emissions based on efficiency improvements, fuel switching to low-emission fuels, GHG removal, and changes in end-use demand. To conduct systematic analysis of how new technologies can be used to alter current emission levels, a conceptual framework helps develop a comprehensive picture of both the primary and secondary impacts of a new technology. This paper describes a broad generic fuel cycle framework which is useful for this purpose. The framework is used for cataloging emission source technologies and for evaluating technology solutions to reduce GHG emissions. It is important to evaluate fuel mix tradeoffs when investigating various technology strategies for emission reductions. For instance, when substituting natural gas for coal or oil in end-use applications to reduce CO2 emissions, methane emissions from natural gas production elsewhere in the fuel cycle may increase. Example uses of the framework are given.
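
    The framework's central bookkeeping idea, summing emissions over every stage of the fuel cycle so that upstream and end-use changes are weighed together, can be sketched as follows; stage names, emission factors, and warming potentials below are assumptions for illustration, not values from the report:

```python
# Toy fuel-cycle accounting: total GHG emissions are summed over every stage
# from primary fuel production to end use, so a technology that cuts end-use
# CO2 but raises upstream CH4 is judged on the whole cycle.
GWP = {"CO2": 1.0, "CH4": 25.0}  # assumed 100-year global warming potentials

fuel_cycle = {
    "production": {"CH4": 4.0, "CO2": 1.0},   # kt per PJ delivered (illustrative)
    "transport":  {"CO2": 0.5},
    "end_use":    {"CO2": 56.0},
}

total_co2e = sum(GWP[gas] * amount
                 for stage in fuel_cycle.values()
                 for gas, amount in stage.items())
print(f"{total_co2e:.1f} kt CO2-equivalent per PJ delivered")
```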

  8. Analytics4Action Evaluation Framework: A Review of Evidence-Based Learning Analytics Interventions at the Open University UK

    ERIC Educational Resources Information Center

    Rienties, Bart; Boroowa, Avinash; Cross, Simon; Kubiak, Chris; Mayles, Kevin; Murphy, Sam

    2016-01-01

    There is an urgent need to develop an evidence-based framework for learning analytics whereby stakeholders can manage, evaluate, and make decisions about which types of interventions work well and under which conditions. In this article, we will work towards developing a foundation of an Analytics4Action Evaluation Framework (A4AEF) that is…

  9. Classroom Teacher's Performance-Based Evaluation Form (CTPBEF) for Public Education Schools in the State of Kuwait: A Framework

    ERIC Educational Resources Information Center

    Al-Shammari, Zaid; Yawkey, Thomas D.

    2008-01-01

    This investigation using Grounded Theory focuses on developing, designing and testing out an evaluation method used as a framework for this study. This framework evolved into the instrument entitled, "Classroom Teacher's Performance Based Evaluation Form (CTPBEF)". This study shows the processes and procedures used in CTPBEF's development and…

  10. Well-Being With Objects: Evaluating a Museum Object-Handling Intervention for Older Adults in Health Care Settings.

    PubMed

    Thomson, Linda J M; Chatterjee, Helen J

    2016-03-01

    The extent to which a museum object-handling intervention enhanced older adult well-being across three health care settings was examined. The program aimed to determine whether therapeutic benefits could be measured objectively using clinical scales. Facilitator-led, 30 to 40 min sessions handling and discussing museum objects were conducted in acute and elderly care (11 one-to-ones), residential (4 one-to-ones and 1 group of five), and psychiatric (4 groups of five) settings. Pre-post measures of psychological well-being (Positive Affect and Negative Affect Schedule) and subjective wellness and happiness (Visual Analogue Scales) were compared. Positive affect and wellness increased significantly in acute and elderly and residential care, though not in psychiatric care, whereas negative affect decreased and happiness increased in all settings. Examination of audio recordings revealed enhanced confidence, social interaction, and learning. The program allowed access to a museum activity for adults who, by virtue of age and ill health, would not otherwise have engaged with museum objects. PMID:25421749

  11. Object-oriented programming applied to the evaluation of reliability fault trees

    SciTech Connect

    Patterson-Hine, F.A.

    1988-01-01

    Object-oriented programming techniques are used to implement an algorithm for the direct evaluation of fault trees. A simple bottom-up procedure evaluates independent branches. The identification of dependencies within a branch results in the application of a top-down recursive procedure. A unique approach to modularization enables dynamic modularization of the tree at each step in the reduction process. The algorithm reduces the number of recursive calls required to solve trees with repeated events and calculates intermediate results as well as the solution of the top event. The intermediate results can be reused if part of the tree is modified. The algorithm is implemented on a Texas Instruments Explorer LISP workstation which offers an environment that incorporates an object-oriented system called Flavors with Common LISP. Several example fault trees from the literature are evaluated with the object-oriented algorithm, and the results are compared with conventional reduction techniques. The program includes a graphical tree editor to display the fault tree objects. The graphical display of the tree enables a visual check of the input tree structure.
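
    The bottom-up step for independent branches reduces to propagating probabilities through AND/OR gates; a simplified Python sketch (the paper's LISP/Flavors implementation, repeated-event handling, and dynamic modularization are not reproduced here):

```python
class Event:
    """Basic (leaf) event with a fixed failure probability."""
    def __init__(self, p):
        self.p = p
    def probability(self):
        return self.p

class Gate:
    """AND/OR gate over independent child branches, evaluated bottom-up."""
    def __init__(self, kind, children):
        self.kind, self.children = kind, children
    def probability(self):
        ps = [c.probability() for c in self.children]
        if self.kind == "AND":          # all children must fail
            out = 1.0
            for p in ps:
                out *= p
            return out
        out = 1.0                       # OR of independent events: 1 - prod(1 - p_i)
        for p in ps:
            out *= (1.0 - p)
        return 1.0 - out

# Top event occurs if (A AND B) OR C, with all basic events independent.
top = Gate("OR", [Gate("AND", [Event(0.01), Event(0.02)]), Event(0.001)])
print(top.probability())  # ~0.0012
```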

  12. Benefit-risk Evaluation for Diagnostics: A Framework (BED-FRAME).

    PubMed

    Evans, Scott R; Pennello, Gene; Pantoja-Galicia, Norberto; Jiang, Hongyu; Hujer, Andrea M; Hujer, Kristine M; Manca, Claudia; Hill, Carol; Jacobs, Michael R; Chen, Liang; Patel, Robin; Kreiswirth, Barry N; Bonomo, Robert A

    2016-09-15

    The medical community needs systematic and pragmatic approaches for evaluating the benefit-risk trade-offs of diagnostics that assist in medical decision making. Benefit-Risk Evaluation of Diagnostics: A Framework (BED-FRAME) is a strategy for pragmatic evaluation of diagnostics designed to supplement traditional approaches. BED-FRAME evaluates diagnostic yield and addresses 2 key issues: (1) that diagnostic yield depends on prevalence, and (2) that different diagnostic errors carry different clinical consequences. As such, evaluating and comparing diagnostics depends on prevalence and the relative importance of potential errors. BED-FRAME provides a tool for communicating the expected clinical impact of diagnostic application and the expected trade-offs of diagnostic alternatives. BED-FRAME is a useful fundamental supplement to the standard analysis of diagnostic studies that will aid in clinical decision making. PMID:27193750
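
    BED-FRAME's first key point, that diagnostic yield depends on prevalence, can be illustrated with a small calculation; sensitivity, specificity, and prevalence values below are assumptions for illustration, not data from the paper:

```python
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Positive/negative predictive values of a diagnostic at a given prevalence."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    fn = (1 - sensitivity) * prevalence
    tn = specificity * (1 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)

# The same test (illustrative 95% sensitivity, 90% specificity) carries very
# different error trade-offs at 2% versus 30% prevalence:
for prev in (0.02, 0.30):
    ppv, npv = predictive_values(0.95, 0.90, prev)
    print(f"prevalence {prev:.0%}: PPV {ppv:.2f}, NPV {npv:.3f}")
```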

  13. QUANTITATIVE EVALUATION OF THE HYPOTHESIS THAT BL LACERTAE OBJECTS ARE QSO REMNANTS

    SciTech Connect

    Borra, E. F.

    2014-11-20

    We evaluate with numerical simulations the hypothesis that BL Lacertae objects (BLLs) are the remnants of quasi-stellar objects. This hypothesis is based on their highly peculiar redshift evolution. They have a comoving space density that increases with decreasing redshift, contrary to all other active galactic nuclei. We assume that relativistic jets are below detection in young radio-quiet quasars and increase in strength with cosmic time so that they eventually are detected as BLLs. Our numerical simulations fit very well the observed redshift distributions of BLLs. There are strong indications that only the high-synchrotron-peaked BLLs could be QSO remnants.

  14. Land Surface Verification Toolkit (LVT) - A Generalized Framework for Land Surface Model Evaluation

    NASA Technical Reports Server (NTRS)

    Kumar, Sujay V.; Peters-Lidard, Christa D.; Santanello, Joseph; Harrison, Ken; Liu, Yuqiong; Shaw, Michael

    2011-01-01

    Model evaluation and verification are key in improving the usage and applicability of simulation models for real-world applications. In this article, the development and capabilities of a formal system for land surface model evaluation called the Land surface Verification Toolkit (LVT) are described. LVT is designed to provide an integrated environment for systematic land model evaluation and facilitates a range of verification approaches and analysis capabilities. LVT operates across multiple temporal and spatial scales and employs a large suite of in-situ, remotely sensed, and other model and reanalysis datasets in their native formats. In addition to the traditional accuracy-based measures, LVT also includes uncertainty and ensemble diagnostics, information theory measures, spatial similarity metrics, and scale decomposition techniques that provide novel ways for performing diagnostic model evaluations. Though LVT was originally designed to support the land surface modeling and data assimilation framework known as the Land Information System (LIS), it also supports hydrological data products from other, non-LIS environments. In addition, the analysis of diagnostics from various computational subsystems of LIS, including data assimilation, optimization, and uncertainty estimation, is supported within LVT. Together, LIS and LVT provide a robust end-to-end environment for enabling the concepts of model-data fusion for hydrological applications. The evolving capabilities of the LVT framework are expected to facilitate rapid model evaluation efforts and aid the definition and refinement of formal evaluation procedures for the land surface modeling community.

  15. Integration of electro-anatomical and imaging data of the left ventricle: An evaluation framework.

    PubMed

    Soto-Iglesias, David; Butakoff, Constantine; Andreu, David; Fernández-Armenta, Juan; Berruezo, Antonio; Camara, Oscar

    2016-08-01

    Integration of electrical and structural information for scar characterization in the left ventricle (LV) is a crucial step to better guide radio-frequency ablation therapies, which are usually performed in complex ventricular tachycardia (VT) cases. This integration requires finding a common representation where to map the electrical information from the electro-anatomical map (EAM) surfaces and tissue viability information from delay-enhancement magnetic resonance images (DE-MRI). However, the development of a consistent integration method is still an open problem due to the lack of a proper evaluation framework to assess its accuracy. In this paper we present both: (i) an evaluation framework to assess the accuracy of EAM and imaging integration strategies with simulated EAM data and a set of global and local measures; and (ii) a new integration methodology based on a planar disk representation where the LV surface meshes are quasi-conformally mapped (QCM) by flattening, allowing for simultaneous visualization and joint analysis of the multi-modal data. The developed evaluation framework was applied to estimate the accuracy of the QCM-based integration strategy on a benchmark dataset of 128 synthetically generated ground-truth cases presenting different scar configurations and EAM characteristics. The obtained results demonstrate a significant reduction in global overlap errors (50-100%) with respect to state-of-the-art integration techniques, also better preserving the local topology of small structures such as conduction channels in scars. Data from seventeen VT patients were also used to study the feasibility of the QCM technique in a clinical setting, consistently outperforming the alternative integration techniques in the presence of sparse and noisy clinical data. The proposed evaluation framework has allowed a rigorous comparison of different EAM and imaging data integration strategies, providing useful information to better guide clinical practice in

  16. Towards a Holistic Framework for the Evaluation of Emergency Plans in Indoor Environments

    PubMed Central

    Serrano, Emilio; Poveda, Geovanny; Garijo, Mercedes

    2014-01-01

    One of the most promising fields for ambient intelligence is the implementation of intelligent emergency plans. Because the use of drills and living labs cannot reproduce social behaviors, such as panic attacks, that strongly affect these plans, the use of agent-based social simulation provides an approach to evaluate these plans more thoroughly. (1) The hypothesis presented in this paper is that there has been little interest in describing the key modules that these simulators must include, such as formally represented knowledge and a realistic simulated sensor model, and especially in providing researchers with tools to reuse, extend and interconnect modules from different works. This lack of interest hinders researchers from achieving a holistic framework for evaluating emergency plans and forces them to reconsider and to implement the same components from scratch over and over. In addition to supporting this hypothesis by considering over 150 simulators, this paper: (2) defines the main modules identified and proposes the use of semantic web technologies as a cornerstone for the aforementioned holistic framework; (3) provides a basic methodology to achieve the framework; (4) identifies the main challenges; and (5) presents an open and free software tool to hint at the potential of such a holistic view of emergency plan evaluation in indoor environments. PMID:24662453

  17. Towards a holistic framework for the evaluation of emergency plans in indoor environments.

    PubMed

    Serrano, Emilio; Poveda, Geovanny; Garijo, Mercedes

    2014-01-01

    One of the most promising fields for ambient intelligence is the implementation of intelligent emergency plans. Because the use of drills and living labs cannot reproduce social behaviors, such as panic attacks, that strongly affect these plans, the use of agent-based social simulation provides an approach to evaluate these plans more thoroughly. (1) The hypothesis presented in this paper is that there has been little interest in describing the key modules that these simulators must include, such as formally represented knowledge and a realistic simulated sensor model, and especially in providing researchers with tools to reuse, extend and interconnect modules from different works. This lack of interest hinders researchers from achieving a holistic framework for evaluating emergency plans and forces them to reconsider and to implement the same components from scratch over and over. In addition to supporting this hypothesis by considering over 150 simulators, this paper: (2) defines the main modules identified and proposes the use of semantic web technologies as a cornerstone for the aforementioned holistic framework; (3) provides a basic methodology to achieve the framework; (4) identifies the main challenges; and (5) presents an open and free software tool to hint at the potential of such a holistic view of emergency plan evaluation in indoor environments. PMID:24662453

  18. Matching methods evaluation framework for stereoscopic breast x-ray images.

    PubMed

    Rousson, Johanna; Naudin, Mathieu; Marchessoux, Cédric

    2016-01-01

    Three-dimensional (3-D) imaging has been intensively studied in the past few decades. Depth information is an important added value of 3-D systems over two-dimensional systems. Special focus has been devoted to the development of stereo matching methods for the generation of disparity maps (i.e., depth information within a 3-D scene). Dedicated frameworks were designed to evaluate and rank the performance of different stereo matching methods, but never considering x-ray medical images. Yet, 3-D x-ray acquisition systems and 3-D medical displays have already been introduced into the diagnostic market. To access the depth information within x-ray stereoscopic images, computing accurate disparity maps is essential. We aimed at developing a framework dedicated to x-ray stereoscopic breast images used to evaluate and rank several stereo matching methods. A multiresolution pyramid optimization approach was integrated into the framework to increase the accuracy and the efficiency of the stereo matching techniques. Finally, a metric was designed to score the results of the stereo matching compared with the ground truth. Eight methods were evaluated, and four of them [locally scaled sum of absolute differences (LSAD), zero mean sum of absolute differences, zero mean sum of squared differences, and locally scaled mean sum of squared differences] appeared to perform equally well, with an average error score of 0.04 (0 being a perfect match). LSAD was selected for generating the disparity maps. PMID:26587552
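
    The sum-of-absolute-differences family of matching costs evaluated above can be sketched for a single pixel as follows; this is plain SAD without the local scaling of LSAD, and the images and parameters are illustrative:

```python
import numpy as np

def sad_disparity(left: np.ndarray, right: np.ndarray, y: int, x: int,
                  window: int = 3, max_disp: int = 16) -> int:
    """Disparity at (y, x) by minimizing the sum of absolute differences.

    left/right are rectified grayscale images; the right image is searched
    along the same row over disparities 0..max_disp.
    """
    h = window // 2
    ref = left[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    best, best_cost = 0, np.inf
    for d in range(max_disp + 1):
        if x - d - h < 0:
            break
        cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1].astype(float)
        cost = np.abs(ref - cand).sum()
        if cost < best_cost:
            best, best_cost = d, cost
    return best

# Toy example: the right image is the left image shifted by 4 pixels.
left = np.tile(np.arange(64, dtype=float) % 13, (32, 1))
right = np.roll(left, -4, axis=1)
print(sad_disparity(left, right, y=16, x=30))  # -> 4
```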

  19. Evaluating and Improving Cloud Processes in the Multi-Scale Modeling Framework

    SciTech Connect

    Ackerman, Thomas P.

    2015-03-01

    The research performed under this grant was intended to improve the embedded cloud model in the Multi-scale Modeling Framework (MMF) for convective clouds by using a 2-moment microphysics scheme rather than the single moment scheme used in all the MMF runs to date. The technical report and associated documents describe the results of testing the cloud resolving model with fixed boundary conditions and evaluation of model results with data. The overarching conclusion is that such model evaluations are problematic because errors in the forcing fields control the results so strongly that variations in parameterization values cannot be usefully constrained

  20. Design and evaluation of an ultra-slim objective for in-vivo deep optical biopsy

    PubMed Central

    Landau, Sara M.; Liang, Chen; Kester, Robert T.; Tkaczyk, Tomasz S.; Descour, Michael R.

    2010-01-01

    An estimated 1.6 million breast biopsies are performed in the US each year. In order to provide real-time, in-vivo imaging with sub-cellular resolution for optical biopsies, we have designed an ultra-slim objective to fit inside the 1-mm-diameter hypodermic needles currently used for breast biopsies to image tissue stained by the fluorescent probe proflavine. To ensure high-quality imaging performance, experimental tests were performed to characterize the fiber bundle’s light-coupling efficiency, and simulations were performed to evaluate the impact of candidate lens materials’ autofluorescence. A prototype of the NA = 0.4, 250-µm field of view, ultra-slim objective optics was built and tested, yielding diffraction-limited performance and an estimated resolution of 0.9 µm. When used in conjunction with a commercial coherent fiber bundle to relay the image formed by the objective, the measured resolution was 2.5 µm. PMID:20389489
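
    As a rough consistency check on the quoted resolution (the emission wavelength below is an assumption, not a value from the paper):

```latex
% Rayleigh resolution limit of an NA = 0.4 objective at an assumed proflavine
% emission wavelength of \lambda \approx 0.52\,\mu\mathrm{m}:
r \;=\; \frac{0.61\,\lambda}{\mathrm{NA}}
  \;=\; \frac{0.61 \times 0.52\,\mu\mathrm{m}}{0.4}
  \;\approx\; 0.8\,\mu\mathrm{m},
% on the order of the 0.9 \mu m estimate above; relaying the image through a
% coherent fiber bundle then limits resolution toward the bundle's core
% spacing, consistent with the measured 2.5 \mu m.
```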

  1. Evaluation of Content-Matched Range Monitoring Queries over Moving Objects in Mobile Computing Environments.

    PubMed

    Jung, HaRim; Song, MoonBae; Youn, Hee Yong; Kim, Ung Mo

    2015-01-01

    A content-matched (CM) range monitoring query over moving objects continually retrieves the moving objects (i) whose non-spatial attribute values are matched to given non-spatial query values; and (ii) that are currently located within a given spatial query range. In this paper, we propose a new query indexing structure, called the group-aware query region tree (GQR-tree) for efficient evaluation of CM range monitoring queries. The primary role of the GQR-tree is to help the server leverage the computational capabilities of moving objects in order to improve the system performance in terms of the wireless communication cost and server workload. Through a series of comprehensive simulations, we verify the superiority of the GQR-tree method over the existing methods. PMID:26393613

  2. Evaluation of Content-Matched Range Monitoring Queries over Moving Objects in Mobile Computing Environments

    PubMed Central

    Jung, HaRim; Song, MoonBae; Youn, Hee Yong; Kim, Ung Mo

    2015-01-01

    A content-matched (CM) range monitoring query over moving objects continually retrieves the moving objects (i) whose non-spatial attribute values are matched to given non-spatial query values; and (ii) that are currently located within a given spatial query range. In this paper, we propose a new query indexing structure, called the group-aware query region tree (GQR-tree) for efficient evaluation of CM range monitoring queries. The primary role of the GQR-tree is to help the server leverage the computational capabilities of moving objects in order to improve the system performance in terms of the wireless communication cost and server workload. Through a series of comprehensive simulations, we verify the superiority of the GQR-tree method over the existing methods. PMID:26393613
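
    The content-matched range predicate that such queries evaluate combines a non-spatial attribute match with a spatial containment test; a minimal sketch of the predicate only (illustrative structure, not the GQR-tree index itself):

```python
from dataclasses import dataclass

@dataclass
class CMRangeQuery:
    """A content-matched range monitoring query (illustrative, not from the paper)."""
    attribute_values: dict   # required non-spatial values, e.g. {"type": "taxi"}
    x_min: float
    x_max: float
    y_min: float
    y_max: float

    def matches(self, obj_attrs: dict, x: float, y: float) -> bool:
        # (i) non-spatial attributes match the query values
        content_ok = all(obj_attrs.get(k) == v for k, v in self.attribute_values.items())
        # (ii) current position lies inside the rectangular query range
        inside = self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max
        return content_ok and inside

q = CMRangeQuery({"type": "taxi", "vacant": True}, 0.0, 10.0, 0.0, 10.0)
print(q.matches({"type": "taxi", "vacant": True}, 3.5, 7.2))  # True
print(q.matches({"type": "bus"}, 3.5, 7.2))                   # False: content mismatch
```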

  3. Direct evaluation of fault trees using object-oriented programming techniques

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, F. A.; Koen, B. V.

    1989-01-01

    Object-oriented programming techniques are used in an algorithm for the direct evaluation of fault trees. The algorithm combines a simple bottom-up procedure for trees without repeated events with a top-down recursive procedure for trees with repeated events. The object-oriented approach results in a dynamic modularization of the tree at each step in the reduction process. The algorithm reduces the number of recursive calls required to solve trees with repeated events and calculates intermediate results as well as the solution of the top event. The intermediate results can be reused if part of the tree is modified. An example is presented in which the results of the algorithm implemented with conventional techniques are compared to those of the object-oriented approach.

  4. An evaluation framework for effective public participation in EIA in Pakistan

    SciTech Connect

    Nadeem, Obaidullah; Fischer, Thomas B.

    2011-01-15

    Evaluating the effectiveness of public participation in EIA-related decisions is of crucial importance for developing a better understanding of overall EIA effectiveness. This paper aims to contribute to the professional debate by establishing a country-specific evaluation framework for Pakistan, which, it is suggested, could also potentially be used in other developing countries. The framework is used to evaluate the performance of public participation in EIA in terms of 40 attributes for four selected projects from the province of Punjab. The evaluation is based on interviews with stakeholders, review of EIA reports as well as public hearing proceedings and environmental approval conditions. The evaluation of the selected projects revealed an overall weak influence of public participation on the substantive quality of EIA and on the final decision. Overall, EIA public participation has succeeded in providing a more egalitarian environment. Furthermore, it appears fair to say that sufficient time for submitting written comments on EIA reports as well as for raising concerns during public hearings had been given. Also, public consultation contributed significantly to educating participants. Despite some impediments, it is argued that public participation in EIA is gradually gaining ground in Pakistan. Recommendations to enhance EIA public participation effectiveness in Pakistan include applying a more proactive approach, in which participation takes place before the EIA is conducted and before site selection for development projects occurs.

  5. Evaluating social outcomes of HIV/AIDS interventions: a critical assessment of contemporary indicator frameworks

    PubMed Central

    Mannell, Jenevieve; Cornish, Flora; Russell, Jill

    2014-01-01

    Introduction Contemporary HIV-related theory and policy emphasize the importance of addressing the social drivers of HIV risk and vulnerability for a long-term response. Consequently, increasing attention is being given to social and structural interventions, and to social outcomes of HIV interventions. Appropriate indicators for social outcomes are needed in order to institutionalize the commitment to addressing social outcomes. This paper critically assesses the current state of social indicators within international HIV/AIDS monitoring and evaluation frameworks. Methods We analyzed the indicator frameworks of six international organizations involved in efforts to improve and synchronize the monitoring and evaluation of the HIV/AIDS response. Our analysis classifies the 328 unique indicators according to what they measure and assesses the degree to which they offer comprehensive measurement across three dimensions: domains of the social context, levels of change and organizational capacity. Results and discussion The majority of indicators focus on individual-level (clinical and behavioural) interventions and outcomes, neglecting structural interventions, community interventions and social outcomes (e.g. stigma reduction; community capacity building; policy-maker sensitization). The main tool used to address social aspects of HIV/AIDS is the disaggregation of data by social group. This raises three main limitations. Indicator frameworks do not provide comprehensive coverage of the diverse social drivers of the epidemic, particularly neglecting criminalization, stigma, discrimination and gender norms. There is a dearth of indicators for evaluating the social impacts of HIV interventions. Indicators of organizational capacity focus on capacity to effectively deliver and manage clinical services, neglecting capacity to respond appropriately and sustainably to complex social contexts. Conclusions Current indicator frameworks cannot adequately assess the social

  6. A preliminary sub-basin scale evaluation framework of site suitability for onshore aquifer-based CO2 storage in China

    SciTech Connect

    Wei, Ning; Li, Xiaochun; Wang, Ying; Dahowski, Robert T; Davidson, Casie L; Bromhal, Grant S

    2013-01-01

    Development of a reliable, broadly applicable framework for the identification and suitability evaluation of potential CO2 storage sites is essential before large-scale deployment of carbon dioxide capture and geological storage (CCS) can commence. In this study, a sub-basin scale evaluation framework was developed to assess the suitability of potential onshore deep saline aquifers for CO2 storage in China. The methodology, developed in consultation with experts from academia and the petroleum industry in China, is based on a multi-criteria analysis (MCA) framework that considers four objectives: (1) storage optimization, in terms of storage capacity and injectivity; (2) risk minimization and storage security; (3) environmental restrictions regarding surface and subsurface use; and (4) economic considerations. The framework is designed to provide insights into both the suitability of potential aquifer storage sites as well as the priority for early deployment of CCS with existing CO2 sources. Preliminary application of the framework, conducted using GIS-based evaluation tools, revealed that 18% of onshore aquifer sites, with a combined CO2 storage capacity of 746 gigatons, are considered to exhibit very high suitability, and 11% of onshore aquifer sites, with a total capacity of 290 gigatons, exhibit very high priority opportunities for implementation. These onshore aquifer sites may provide promising opportunities for early large-scale CCS deployment and contribute to CO2 mitigation in China for many decades.

  7. A preliminary sub-basin scale evaluation framework of site suitability for onshore aquifer-based CO2 storage in China

    SciTech Connect

    Wei, Ning; Li, Xiaochun; Wang, Ying; Dahowski, Robert T.; Davidson, Casie L.; Bromhal, Grant

    2013-01-30

    Development of a reliable, broadly applicable framework for the identification and suitability evaluation of potential CO2 storage sites is essential before large-scale deployment of carbon dioxide capture and geological storage (CCS) can commence. In this study, a sub-basin scale evaluation framework was developed to assess the suitability of potential onshore deep saline aquifers for CO2 storage in China. The methodology, developed in consultation with experts from academia and the petroleum industry in China, is based on a multi-criteria analysis (MCA) framework that considers four objectives: (1) storage optimization, in terms of storage capacity and injectivity; (2) risk minimization and storage security; (3) environmental restrictions regarding surface and subsurface use; and (4) economic considerations. The framework is designed to provide insights into both the suitability of potential aquifer storage sites and the priority for early deployment of CCS with existing CO2 sources. Preliminary application of the framework, conducted using GIS-based evaluation tools, revealed that 18% of onshore aquifer sites, with a combined CO2 storage capacity of 746 gigatons, exhibit very high suitability, and 11% of onshore aquifer sites, with a total capacity of 290 gigatons, exhibit very high priority opportunities for implementation. These onshore aquifer sites may provide promising opportunities for early large-scale CCS deployment and contribute to CO2 mitigation in China for many decades.

  8. Research and Evaluations of the Health Aspects of Disasters, Part IX: Risk-Reduction Framework.

    PubMed

    Birnbaum, Marvin L; Daily, Elaine K; O'Rourke, Ann P; Loretti, Alessandro

    2016-06-01

    A disaster is a failure of resilience to an event. Mitigating the risks that a hazard will progress into a destructive event, or increasing the resilience of a society-at-risk, requires careful analysis, planning, and execution. The Disaster Logic Model (DLM) is used to define the value (effects, costs, and outcome(s)), impacts, and benefits of interventions directed at risk reduction. A Risk-Reduction Framework, based on the DLM, details the processes involved in hazard mitigation and/or capacity-building interventions to augment the resilience of a community or to decrease the risk that a secondary event will develop. This Framework provides the structure to systematically undertake and evaluate risk-reduction interventions. It applies to all interventions aimed at hazard mitigation and/or increasing the absorbing, buffering, or response capacities of a community-at-risk for a primary or secondary event that could result in a disaster. The Framework utilizes the structure provided by the DLM and consists of 14 steps: (1) hazards and risks identification; (2) historical perspectives and predictions; (3) selection of hazard(s) to address; (4) selection of appropriate indicators; (5) identification of current resilience standards and benchmarks; (6) assessment of the current resilience status; (7) identification of resilience needs; (8) strategic planning; (9) selection of an appropriate intervention; (10) operational planning; (11) implementation; (12) assessments of outputs; (13) synthesis; and (14) feedback. Each of these steps is a transformation process that is described in detail. Emphasis is placed on the role of Coordination and Control during planning, implementation of risk-reduction/capacity building interventions, and evaluation. Birnbaum ML , Daily EK , O'Rourke AP , Loretti A . Research and evaluations of the health aspects of disasters, part IX: Risk-Reduction Framework. Prehosp Disaster Med. 2016;31(3):309-325. PMID:27033777

  9. Evaluating River Restoration Objectives As Research Hypotheses: A Case Study Of Engineered Log Jams

    NASA Astrophysics Data System (ADS)

    Hanrahan, T. P.; Vernon, C. R.

    2010-12-01

    Recent evaluations of river restoration monitoring efforts in the U.S. indicate the need for improved approaches to quantifying the effectiveness of restoration actions. As part of a river restoration project involving the installation of engineered log jams (ELJ), a monitoring framework was designed to quantify the effectiveness of ELJ installation at achieving stated restoration goals. During the ELJ planning and design phases, project managers were required to identify specific salmon habitat benefits expected to result from the restoration actions. The expected habitat benefits were restated as restoration hypotheses with quantifiable metrics focused on characteristics of the physical environment that were directly linked to the proposed restoration activities. A before-after sampling design was established to quantify metrics of channel planform and lateral profile; channel bedform and longitudinal profile; large woody debris; riverbed substrate; and hydrologic connectivity. The monitoring framework will quantify the cause-effect relationships between restoration activities and salmon habitat benefits, and will inform the planning and design of similar future restoration actions through the restoration program’s adaptive management process.

  10. What is healthy food? Objective nutrient profile scores and subjective lay evaluations in comparison.

    PubMed

    Bucher, T; Müller, B; Siegrist, M

    2015-12-01

    To date, it is unclear how consumers evaluate the healthiness of individual foods and meals and how consumers' perceptions are related to expert opinions. This knowledge is essential for efficient communication of nutrition information with the goal of promoting healthy eating. This study used the fake food buffet method to investigate health perceptions of selected meals and of 54 individual foods and beverages. Lay consumers' subjective healthiness evaluations of meals and foods were compared to objective nutrient profile scores, which were previously shown to correlate highly with expert opinions. The results show that nutrient profile scores and lay evaluations were highly correlated, which indicates that lay people used similar criteria to experts when evaluating the healthiness of foods. However, lay consumers tended to neglect the amount of saturated fat, protein and sodium in their judgments. Also, it was found that while lay consumers were quite able to evaluate single food products, they had difficulties in evaluating entire meals. Future interventions should focus particularly on educating the consumer about the negative effects of diets high in salt and saturated fat, and they should improve consumers' ability to evaluate entire meals. PMID:26256557

  11. Improving scalability with loop transformations and message aggregation in parallel object-oriented frameworks for scientific computing

    SciTech Connect

    Bassetti, F.; Davis, K.; Quinlan, D.

    1998-09-01

    Application codes reliably achieve performance far below the advertised capabilities of existing architectures, and this problem is worsening with increasingly parallel machines. For large-scale numerical applications, stencil operations often account for the greater part of the computational cost, and the primary sources of inefficiency are the costs of message passing and poor cache utilization. This paper proposes and demonstrates optimizations for stencil and stencil-like computations, for both serial and parallel environments, that ameliorate these sources of inefficiency. Achieving scalability, the authors believe, requires both algorithm design and compile-time support. The optimizations they present are automatable because the stencil-like computations are implemented at a high level of abstraction using object-oriented parallel array class libraries. These optimizations, which are beyond the capabilities of today's compilers, may be performed automatically by a preprocessor such as the one they are currently developing.

  12. A new framework for evaluating the impacts of drought on net primary productivity of grassland.

    PubMed

    Lei, Tianjie; Wu, Jianjun; Li, Xiaohan; Geng, Guangpo; Shao, Changliang; Zhou, Hongkui; Wang, Qianfeng; Liu, Leizhen

    2015-12-01

    This paper presents a framework for evaluating the impacts of drought (as a single factor) on grassland ecosystems. The framework quantifies the magnitude of drought impact as the unacceptable short-term and long-term effects on ecosystems relative to a reference standard. Net primary productivity (NPP) was selected as the response indicator of drought to assess the quantitative impact of drought on Inner Mongolia grassland based on the Standardized Precipitation Index (SPI) and the BIOME-BGC model. The framework consists of six main steps: 1) clearly defining drought scenarios, such as moderate, severe and extreme drought; 2) selecting an appropriate indicator of drought impact; 3) selecting an appropriate ecosystem model, verifying its capabilities, calibrating its bias and assessing its uncertainty; 4) assigning a level of unacceptable drought impact on the indicator; 5) determining the response of the indicator to drought and to normal weather under global change; and 6) investigating the unacceptable impact of drought at different spatial scales. We found that NPP losses assessed using the new framework were more sensitive to drought and had higher precision than those from the long-term average method. Moreover, the total and average losses of NPP differed among grassland types during the drought years from 1961 to 2009. NPP loss increased significantly along a gradient of increasing drought levels, and NPP loss under the same drought level differed among grassland types. The operational framework is particularly suited to integratively assessing the effects of individual drought events and long-term droughts at multiple spatial scales, providing essential insights for the sciences and societies that must develop coping strategies for ecosystems facing such events. PMID:26204052
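
    A short Python sketch of the loss-versus-reference comparison at the core of such a framework (steps 4-6); the NPP values and the impact threshold are synthetic assumptions, not results from the study:

      import numpy as np

      # Simulated NPP (g C m-2 yr-1) for a few grid cells under normal weather
      # (reference run) and under a drought scenario; values are made up.
      npp_reference = np.array([310.0, 295.0, 320.0, 305.0])
      npp_drought   = np.array([260.0, 240.0, 300.0, 190.0])

      relative_loss = (npp_reference - npp_drought) / npp_reference

      unacceptable_threshold = 0.25        # assumed: >25% NPP loss is "unacceptable"
      unacceptable = relative_loss > unacceptable_threshold

      print("Relative NPP loss per cell:", np.round(relative_loss, 2))
      print("Cells with unacceptable impact:", np.where(unacceptable)[0])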

  13. The Framework on Multi-Scale Landslide Hazard Evaluation in China

    NASA Astrophysics Data System (ADS)

    Li, W. Y.; Liu, C.; Gao, J.

    2016-06-01

    Landslides are among the most frequent and most widespread natural hazards worldwide. How landslides can be monitored and predicted is an urgent research topic for the international landslide research community. In particular, there is a lack of high-quality, up-to-date landslide risk maps and guidelines that could be employed to better mitigate and prevent landslide disasters in many emerging regions, including China. This paper considers the national and regional scales and introduces a framework that combines empirical and physical models for landslide evaluation. Firstly, landslide susceptibility at the national scale is mapped with an empirical model, indicating the hot-spot areas. Secondly, a physically based model describes the process of slope instability within these hot-spot areas. The results show that the framework provides a systematic method for landslide hazard monitoring and early warning.

  14. A comprehensive model to evaluate implementation of the world health organization framework convention of tobacco control

    PubMed Central

    Sarrafzadegan, Nizal; Kelishad, Roya; Rabiei, Katayoun; Abedi, Heidarali; Mohaseli, Khadijeh Fereydoun; Masooleh, Hasan Azaripour; Alavi, Mousa; Heidari, Gholamreza; Ghaffari, Mostafa; O’Loughlin, Jennifer

    2012-01-01

    Background: Iran is one of the countries that have ratified the World Health Organization Framework Convention on Tobacco Control (WHO-FCTC) and has implemented a series of tobacco control interventions, including the Comprehensive Tobacco Control Law. Enforcement of this legislation and assessment of its outcomes require a dedicated evaluation system. This study aimed to develop a generic model, based on the WHO-FCTC articles, to evaluate the implementation of the Comprehensive Tobacco Control Law in Iran. Materials and Methods: Using a grounded theory approach, qualitative data were collected from 265 subjects in individual interviews and focus group discussions with the policymakers who designed the legislation, key stakeholders, and members of the target community. In addition, field observation data were collected in supermarkets/shops, restaurants, teahouses and coffee shops. Data were analyzed in two stages through conceptual theoretical coding. Findings: Overall, 617 open codes were extracted from the data into tables; 72 level-3 codes were retained from the level-2 code series. Using a Model Met paradigm, the relationships between the components of each paradigm were depicted graphically. The evaluation model entailed three levels, namely short-term results, process evaluation and long-term results. Conclusions: The central concept of the evaluation process is that enforcing the law influences a variety of internal and environmental factors, including legislative changes. These factors are examined during the process evaluation and context evaluation. The current model can be applied to develop FCTC evaluation tools in other jurisdictions. PMID:23833621

  15. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis :

    SciTech Connect

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.; Jakeman, John Davis; Swiler, Laura Painton; Stephens, John Adam; Vigil, Dena M.; Wildey, Timothy Michael; Bohnhoff, William J.; Eddy, John P.; Hu, Kenneth T.; Dalbey, Keith R.; Bauman, Lara E; Hough, Patricia Diane

    2014-05-01

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  16. DAKOTA : a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis.

    SciTech Connect

    Eldred, Michael Scott; Vigil, Dena M.; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Lefantzi, Sophia; Hough, Patricia Diane; Eddy, John P.

    2011-12-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the DAKOTA software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of DAKOTA-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of DAKOTA's iterative analysis capabilities.

  17. A process-based rejectionist framework for evaluating catchment runoff model structure

    NASA Astrophysics Data System (ADS)

    Vaché, Kellie B.; McDonnell, Jeffrey J.

    2006-02-01

    Complex hydrological descriptions at the hillslope scale have been difficult to incorporate within a catchment modeling framework because of the disparity between the scale of measurements and the scale of model subunits. As a result, parameters represented in many conceptual models are often not related to physical properties and therefore cannot be established prior to a model calibration. While tolerable for predictions involving water quantity, water quality simulations require additional attention to transport processes, flow path sources, and water age. This paper examines how isotopic estimates of residence time may be used to subsume flow path process complexity and to provide a simple, scalable evaluative data source for water quantity- and quality-based conceptual models. We test a set of distributed hydrologic models, ranging from simple to more complex, against measured discharge and residence time and employ a simple Monte Carlo framework to evaluate the identifiability of parameters and how the inclusion of residence time contributes to the evaluative process. Results indicate that of the models evaluated, only the most complex, which includes an explicit unsaturated zone volume and an effective porosity, successfully reproduced both discharge dynamics and residence time. In addition, the inclusion of residence time in the evaluation of the accepted models results in a reduction of the a posteriori parameter uncertainty. Results from this study support the conclusion that the incorporation of soft data, in this case isotopically estimated residence times, in model evaluation is a useful mechanism to bring experimental evidence into the process of model evaluation and selection, thereby providing one mechanism to further reconcile hillslope-scale complexity with catchment-scale simplicity.

  18. A standardized framework for evaluating the skill of regional climate downscaling techniques

    NASA Astrophysics Data System (ADS)

    Hayhoe, Katharine Anne

    Regional climate impact assessments require high-resolution projections to resolve local factors that modify the impact of global-scale forcing. To generate these projections, global climate model simulations are commonly downscaled using a variety of statistical and dynamical techniques. Despite the essential role of downscaling in regional assessments, there is no standard approach to evaluating various downscaling methods. Hence, impact communities often have little awareness of limitations and uncertainties associated with downscaled projections. To develop a standardized framework for evaluating and comparing downscaling approaches, I first identify three primary characteristics of a distribution directly relevant to impact analyses that can be used to evaluate a simulated variable such as temperature or precipitation at a given location: (1) annual, seasonal, and monthly mean values; (2) thresholds, extreme values, and accumulated quantities such as 24h precipitation or degree-days; and (3) persistence, reflecting multi-day events such as heat waves, cold spells, and wet periods. Based on a survey of the literature and solicitation of expert opinion, I select a set of ten statistical tests to evaluate these characteristics, including measures of error, skill, and correlation. I apply this framework to evaluate the skill of four downscaling methods, from a simple delta approach to a complex asynchronous quantile regression, in simulating daily temperature at twenty stations across North America. Identical global model fields force each downscaling method, and the historical observational record at each location is randomly divided by year into two equal parts, such that each statistical method is trained on one set of historical observations, and evaluated on an entirely independent set of observations. Biases relative to observations are calculated for the historical evaluation period, and differences between projections for the future. Application of the
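
    A brief Python sketch of the evaluation idea described above: the observed record is split by year into two halves, and a downscaled series is scored against the held-out half with a few error and skill measures (the data and the particular metrics shown are illustrative, not the dissertation's):

      import numpy as np

      rng = np.random.default_rng(0)
      years = np.arange(1980, 2010)
      observed = 15 + 10 * np.sin(2 * np.pi * (years - 1980) / 30) + rng.normal(0, 1, years.size)
      downscaled = observed + rng.normal(0.5, 1.5, years.size)   # a biased, noisy "method"

      # Randomly divide the years into a training half and an evaluation half.
      shuffled = rng.permutation(years.size)
      eval_idx = shuffled[: years.size // 2]

      bias = np.mean(downscaled[eval_idx] - observed[eval_idx])
      rmse = np.sqrt(np.mean((downscaled[eval_idx] - observed[eval_idx]) ** 2))
      corr = np.corrcoef(downscaled[eval_idx], observed[eval_idx])[0, 1]
      print(f"bias = {bias:.2f}, RMSE = {rmse:.2f}, r = {corr:.2f}")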

  19. A Framework for Evaluating the Effects of Degraded Digital I and C Systems on Human Performance

    SciTech Connect

    O'Hara, J.; Gunther, B.; Hughes, N.; Barnes, V.

    2009-04-09

    New and advanced reactors will use integrated digital instrumentation and control (I&C) systems to support operators in their monitoring and control functions. Even though digital systems are typically highly reliable, their potential for degradation or failure could significantly affect operator situation awareness and performance and, consequently, impact plant safety. The U.S. Nuclear Regulatory Commission has initiated a research project to investigate the effects of degraded I&C systems on human performance and plant operations. The ultimate objective of this project is to develop the technical basis for human factors review guidance for conditions of degraded I&C, including complete failure. Based on the results of this effort, NRC will determine the need for developing new guidance or revising NUREG-0800, NUREG-0711, NUREG-0700 and other pertinent NRC review guidance. This paper reports on the first phase of the research, the development of a framework for linking degraded I&C system conditions to human performance. The framework consists of three levels: I&C subsystems, human-system interfaces, and human performance. Each level is composed of a number of discrete elements. This paper will describe the elements at each level and their integration. In the next phase of the research, the framework will be used to systematically investigate the human performance consequences of various classes of failures.

  20. i-SERF: An Integrated Self-Evaluated and Regulated Framework for Deploying Web 2.0 Technologies in the Educational Process

    ERIC Educational Resources Information Center

    Karvounidis, Theodoros; Himos, Konstantinos; Bersimis, Sotirios; Douligeris, Christos

    2015-01-01

    In this paper we propose i-SERF (integrated Self-Evaluated and Regulated Framework), an integrated self-evaluated and regulated framework which facilitates synchronous and asynchronous education, focusing on teaching and learning in higher education. The i-SERF framework is a two-layered framework that takes into account various elements of…

  1. Risk Assessment of Physical Hazards in Greek Hospitals Combining Staff's Perception, Experts' Evaluation and Objective Measurements

    PubMed Central

    Sourtzi, Panayiota; Kalokairinou, Athina; Sgourou, Evi; Koumoulas, Emmanouel; Velonakis, Emmanouel

    2011-01-01

    Objectives The promotion of health and safety (H&S) awareness among hospital staff can be applied through various methods. The aim of this study was to assess the risk level of physical hazards in the hospital sector by combining workers' perception, experts' evaluation and objective measurements. Methods A cross-sectional study was designed using multiple triangulation. Hospital staff (n = 447) filled in an H&S questionnaire in a general hospital in Athens and an oncology hospital in Thessaloniki. Experts observed and filled in a checklist on H&S in the various departments of the two hospitals. Lighting, noise and microclimate measurements were performed. Results The staff's perception of risk was higher than that of the experts in many cases. The measured risk levels were low to medium. In cases of high-risk noise and lighting, staff and experts agreed. Staff's perception of risk was influenced by hospital department, hospital service, years of working experience and level of education. Therefore, these factors should be taken into account in future studies aimed at increasing the participation of hospital workers. Conclusion This study confirmed the usefulness of staff participation in the risk assessment process, despite the tendency for staff to overestimate the risk level of physical hazards. Combining staff perception, experts' evaluation and objective measures in the risk assessment process increases the efficiency of risk management in the hospital environment and the enforcement of relevant legislation. PMID:22953210

  2. A Comprehensive Framework for Quantitative Evaluation of Downscaled Climate Predictions and Projections

    NASA Astrophysics Data System (ADS)

    Barsugli, J. J.; Guentchev, G.

    2012-12-01

    The variety of methods used for downscaling climate predictions and projections is large and growing larger. Comparative studies of downscaling techniques to date are often initiated in relation to specific projects, are focused on limited sets of downscaling techniques, and hence do not allow for easy comparison of outcomes. In addition, existing information about the quality of downscaled datasets is not available in digital form. There is a strong need for systematic evaluation of downscaling methods using standard protocols which will allow for a fair comparison of their advantages and disadvantages with respect to specific user needs. The National Climate Predictions and Projections platform, with the contributions of NCPP's Climate Science Advisory Team, is developing community-based standards and a prototype framework for the quantitative evaluation of downscaling techniques and datasets. Certain principles guide the development of this framework. We want the evaluation procedures to be reproducible and transparent, simple to understand, and straightforward to implement. To this end we propose a set of open standards that will include the use of specific data sets, time periods of analysis, evaluation protocols, evaluation tests and metrics. Secondly, we want the framework to be flexible and extensible to downscaling techniques which may be developed in the future, to high-resolution global models, and to evaluations that are meaningful for additional applications and sectors. Collaboration among practitioners who will be using the downscaled data and climate scientists who develop downscaling methods will therefore be essential to the development of this framework. The proposed framework consists of three analysis protocols, along with two tiers of specific metrics and indices that are to be calculated. The protocols describe the following types of evaluation that can be performed: 1) comparison to observations, 2) comparison to a "perfect model" simulation

  3. Evaluation of multi-algorithm optimization approach in multi-objective rainfall-runoff calibration

    NASA Astrophysics Data System (ADS)

    Shafii, M.; de Smedt, F.

    2009-04-01

    Calibration of rainfall-runoff models is one of the issues in which hydrologists have been interested over the past decades. Because of the multi-objective nature of rainfall-runoff calibration, and due to advances in computational power, population-based optimization techniques are becoming increasingly popular for multi-objective calibration schemes. In recent years, such methods have been shown to be powerful search methods for this purpose, especially when there are a large number of calibration parameters. However, the application of these methods is often criticised on the grounds that it is not possible to develop a single algorithm which is always efficient for different problems. Therefore, more recent efforts have focused on the development of simultaneous multiple optimization algorithms to overcome this drawback. This paper applies one of the most recent population-based multi-algorithm approaches, named AMALGAM, to multi-objective rainfall-runoff calibration of a distributed hydrological model, WetSpa. This algorithm merges the strengths of different optimization algorithms and has thus proven to be more efficient than other methods. To evaluate this, comparison between the results of this paper and those previously reported using a standard multi-objective evolutionary algorithm will be the next step of this study.
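
    The multi-objective aspect of such a calibration can be illustrated with a small Python sketch that filters a population of candidate parameter sets down to its non-dominated (Pareto-optimal) members; the scores are made up, and AMALGAM's blending of multiple search algorithms is not shown:

      import numpy as np

      # Rows = candidate parameter sets; columns = two calibration objectives
      # (both to be minimized, e.g. errors on high flows and on low flows).
      scores = np.array([
          [0.20, 0.90],
          [0.35, 0.40],
          [0.50, 0.30],
          [0.25, 0.80],
          [0.60, 0.60],
      ])

      def is_dominated(i, scores):
          """True if another candidate is at least as good on both objectives and better on one."""
          others = np.delete(scores, i, axis=0)
          return np.any(np.all(others <= scores[i], axis=1) & np.any(others < scores[i], axis=1))

      pareto_front = [i for i in range(len(scores)) if not is_dominated(i, scores)]
      print("Non-dominated candidates:", pareto_front)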

  4. [Study on objectively evaluating skin aging according to areas of skin texture].

    PubMed

    Shan, Gaixin; Gan, Ping; He, Ling; Sun, Lu; Li, Qiannan; Jiang, Zheng; He, Xiangqian

    2015-02-01

    Skin aging principles play important roles in skin disease diagnosis, the evaluation of skin cosmetic effects, forensic identification and age identification in sports competition, etc. This paper proposes a new method to evaluate skin aging objectively and quantitatively from the area of skin texture. Firstly, an enlarged skin image was acquired. Then, the skin texture image was segmented using the iterative threshold method, and the skin ridge image was extracted according to the watershed algorithm. Finally, the skin ridge areas of the skin texture were extracted. The experimental data showed that the average areas of skin ridges, for both men and women, had a good correlation with age (the correlation coefficient r was 0.938 for males and 0.922 for females), and the regression curve of skin texture area against age showed that the skin texture area increases with age. Therefore, the new method presented in this paper is effective for evaluating skin aging objectively. PMID:25997282
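
    The reported correlation and regression step can be sketched in a few lines of Python; the age and ridge-area values below are synthetic placeholders, not the study's measurements:

      import numpy as np

      age = np.array([22.0, 30.0, 38.0, 45.0, 53.0, 61.0, 70.0])
      ridge_area = np.array([0.41, 0.45, 0.52, 0.58, 0.66, 0.71, 0.80])   # mean ridge area, made up

      r = np.corrcoef(age, ridge_area)[0, 1]
      slope, intercept = np.polyfit(age, ridge_area, 1)   # simple linear regression
      print(f"correlation r = {r:.3f}; area = {slope:.4f} * age + {intercept:.3f}")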

  5. Objective evaluation of surgical competency for minimally invasive surgery with a collection of simple tests

    PubMed Central

    Gonzalez-Neira, Eliana Maria; Jimenez-Mendoza, Claudia Patricia; Rugeles-Quintero, Saul

    2016-01-01

    Objective: This study aims to determine whether a collection of 16 motor tests on a physical simulator can objectively discriminate and evaluate practitioners' competency level, i.e. novice, resident, and expert. Methods: An experimental design with three study groups (novice, resident, and expert) was developed to test the evaluation power of each of the 16 simple tests. An ANOVA and a Student-Newman-Keuls (SNK) test were used to analyze the results of each test to determine which of them can discriminate participants' competency level. Results: Four of the 16 tests discriminated all three competency levels, and 15 discriminated at least two of the three groups (α = 0.05). Moreover, two other tests differentiated the beginner level from the intermediate level, and seven other tests differentiated the intermediate level from the expert level. Conclusion: The competency level of a practitioner of minimally invasive surgery can be evaluated by a specific collection of basic tests on a physical surgical simulator. Reducing the number of tests needed to discriminate the competency level of surgeons can be the aim of future research. PMID:27226664
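
    The per-test analysis can be sketched as a one-way ANOVA across the three competency groups (the scores are synthetic, and the SNK post-hoc comparison is omitted):

      import numpy as np
      from scipy.stats import f_oneway

      novice   = np.array([62.0, 58.0, 70.0, 65.0, 60.0])
      resident = np.array([75.0, 80.0, 72.0, 78.0, 74.0])
      expert   = np.array([90.0, 88.0, 93.0, 85.0, 91.0])

      f_stat, p_value = f_oneway(novice, resident, expert)
      print(f"F = {f_stat:.1f}, p = {p_value:.4f}")   # p < 0.05 suggests the test discriminates groups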

  6. A Framework for Evaluating Science and Technology Electronic Reference Books: A Comparison of Five Platforms in Chemistry

    ERIC Educational Resources Information Center

    Lafferty, Meghan

    2009-01-01

    This article examines what is desirable in online reference books in science and technology and outlines a framework for evaluating their interfaces. The framework considers factors unique to these subject areas like chemical structures and numerical data. Criteria in three categories, navigability, searchability, and results, were applied to five…

  7. Performance Evaluation of RTLS Based on Active RFID Power Measurement for Dense Moving Objects

    NASA Astrophysics Data System (ADS)

    Kim, Taekyu; Lee, Jin; Lee, Seungbeom; Park, Sin-Chong

    Tracking a large quantity of moving target tags simultaneously is essential for the localization and guidance of people in welfare facilities such as hospitals and sanatoriums for the aged. The locating system using active RFID technology consists of a number of fixed RFID readers and tags carried by the target objects, i.e. senior people. We compare the performances of several determination algorithms which use the power measurement of received signals emitted by the moving active RFID tags. This letter presents a study on the effect of collision in tracking large quantities of objects based on an active RFID real-time location system (RTLS). Traditional trilateration, fingerprinting, and the well-known LANDMARC algorithm are evaluated and compared with a varying number of moving tags through SystemC-based computer simulation. From the simulation, we show the tradeoff relationship between the number of moving tags and estimation accuracy.
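
    The plain trilateration baseline mentioned above can be sketched as a small least-squares problem in Python; the reader positions and RSSI-derived distances are made up, and the fingerprinting and LANDMARC methods are not shown:

      import numpy as np

      readers = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # fixed reader positions (m)
      distances = np.array([7.07, 7.07, 7.07])                     # distances inferred from RSSI (m)

      # Subtract the first circle equation from the others to obtain a linear system A x = b.
      x0, y0 = readers[0]
      d0 = distances[0]
      A, b = [], []
      for (xi, yi), di in zip(readers[1:], distances[1:]):
          A.append([2 * (xi - x0), 2 * (yi - y0)])
          b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)

      position, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
      print("Estimated tag position:", np.round(position, 2))      # approximately [5. 5.]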

  8. Climate services for society: origins, institutional arrangements, and design elements for an evaluation framework

    PubMed Central

    Vaughan, Catherine; Dessai, Suraje

    2014-01-01

    Climate services involve the generation, provision, and contextualization of information and knowledge derived from climate research for decision making at all levels of society. These services are mainly targeted at informing adaptation to climate variability and change, widely recognized as an important challenge for sustainable development. This paper reviews the development of climate services, beginning with a historical overview, a short summary of improvements in climate information, and a description of the recent surge of interest in climate service development including, for example, the Global Framework for Climate Services, implemented by the World Meteorological Organization in October 2012. It also reviews institutional arrangements of selected emerging climate services across local, national, regional, and international scales. By synthesizing existing literature, the paper proposes four design elements of a climate services evaluation framework. These design elements include: problem identification and the decision-making context; the characteristics, tailoring, and dissemination of the climate information; the governance and structure of the service, including the process by which it is developed; and the socioeconomic value of the service. The design elements are intended to serve as a guide to organize future work regarding the evaluation of when and whether climate services are more or less successful. The paper concludes by identifying future research questions regarding the institutional arrangements that support climate services and nascent efforts to evaluate them. PMID:25798197

  9. A probabilistic model framework for evaluating year-to-year variation in crop productivity

    NASA Astrophysics Data System (ADS)

    Yokozawa, M.; Iizumi, T.; Tao, F.

    2008-12-01

    Most models describing the relation between crop productivity and weather conditions have so far focused on mean changes in crop yield. For keeping food supply stable under abnormal weather as well as climate change, evaluating the year-to-year variations in crop productivity, rather than the mean changes, is more essential. We here propose a new probabilistic modelling framework based on Bayesian inference and Monte Carlo simulation. As an example, we first introduce a model of paddy rice production in Japan, called PRYSBI (Process-based Regional rice Yield Simulator with Bayesian Inference; Iizumi et al., 2008). The model structure is the same as that of SIMRIW, which was developed and used widely in Japan. The model includes three sub-models describing phenological development, biomass accumulation and maturing of the rice crop. These processes are formulated to capture the response of the rice plant to weather conditions. The model was originally developed to predict rice growth and yield at the paddy plot scale. We applied it to evaluate large-scale rice production while keeping the same model structure. Instead, we treated the parameters as stochastic variables. In order to let the model reproduce actual yields at the larger scale, model parameters were determined based on agricultural statistical data for each prefecture of Japan together with weather data averaged over the region. The posterior probability distribution functions (PDFs) of the parameters included in the model were obtained using Bayesian inference, with an MCMC (Markov chain Monte Carlo) algorithm used to solve the Bayesian theorem numerically. For evaluating the year-to-year changes in rice growth and yield under this framework, we first iterate simulations with sets of parameter values sampled from the estimated posterior PDF of each parameter and then take the ensemble mean weighted by the posterior PDFs. We will also present another example for maize productivity in China. The
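
    The Bayesian/Monte Carlo machinery can be illustrated with a toy Metropolis sampler for a single parameter of a made-up yield model; this is not the PRYSBI or SIMRIW formulation, and all data below are synthetic:

      import numpy as np

      rng = np.random.default_rng(1)
      temps = np.array([20.5, 21.0, 22.3, 19.8, 23.1, 21.7])    # seasonal mean temperature (deg C)
      yields = np.array([5.1, 5.3, 5.9, 4.9, 6.1, 5.6])         # observed yields (t/ha), synthetic

      def model(theta, t):
          return theta * (t - 15.0) / 5.0                       # toy linear yield response

      def log_posterior(theta):
          if not (0.0 < theta < 20.0):                          # flat prior on (0, 20)
              return -np.inf
          resid = yields - model(theta, temps)
          return -0.5 * np.sum((resid / 0.3) ** 2)              # Gaussian likelihood, sigma = 0.3 t/ha

      samples, theta = [], 5.0
      for _ in range(5000):                                     # Metropolis random walk
          proposal = theta + rng.normal(0, 0.2)
          if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
              theta = proposal
          samples.append(theta)

      posterior = np.array(samples[1000:])                      # discard burn-in
      print(f"posterior mean = {posterior.mean():.2f}, "
            f"95% interval = ({np.percentile(posterior, 2.5):.2f}, {np.percentile(posterior, 97.5):.2f})")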

  10. Social Studies Objectives, K-12.

    ERIC Educational Resources Information Center

    Dependents Schools (DOD), Washington, DC.

    Six objectives which form the framework of a K-12 social studies program of Department of Defense Dependents Schools are outlined. The objectives are to evaluate the relationship between human beings and their social, natural, and man-made environment; analyze the origins and interrelationships of beliefs, values, and behavior patterns; solve…

  11. Using the Theory of Planned Behavior as a Framework for the Evaluation of a Professional Development Workshop

    PubMed Central

    PATTERSON, ROBIN R.

    2001-01-01

    The purpose of this study was to use a theoretical framework based on several decades of attitudinal research to assess the intentions of Microbial Discovery Workshop participants to incorporate the inquiry activities presented at the workshop into their curricula, to evaluate the participants' actual use of these activities after the workshop, and to uncover the barriers and enablers the participants faced in doing so. The theory of planned behavior was ascertained to be an appropriate framework for this assessment, and it was revealed that participants' intention to use the workshop activities significantly correlated with their actual use. The participants' attitudes toward using the activities influenced their use more than the participants' perceptions of the social pressures that would influence their decision to use the activities or their belief as to how easy or difficult it would be to incorporate a given activity. The participants were found to be highly self-efficacious regarding their ability to implement the activities, but perceived self-efficacy was not a significant predictor of the participants' intentions to incorporate the activities into their teaching-learning repertoire. The study also uncovered other behaviors that the participants displayed as a result of attending the workshop, consistent with the goals and objectives of the workshop organizers. PMID:23653542

  12. A Catchment-Based Land Surface Model for GCMs and the Framework for its Evaluation

    NASA Technical Reports Server (NTRS)

    Ducharen, A.; Koster, R. D.; Suarez, M. J.; Kumar, P.

    1998-01-01

    A new GCM-scale land surface modeling strategy that explicitly accounts for subgrid soil moisture variability and its effects on evaporation and runoff is now being explored. In a break from traditional modeling strategies, the continental surface is disaggregated into a mosaic of hydrological catchments, with boundaries that are not dictated by a regular grid but by topography. Within each catchment, the variability of soil moisture is deduced from TOPMODEL equations with a special treatment of the unsaturated zone. This paper gives an overview of this new approach and presents the general framework for its off-line evaluation over North America.

  13. Framework for evaluating the effectiveness of nuclear-safeguards systems. [Aggregated Systems Model (ASM)

    SciTech Connect

    Al-Ayat, R.A.; Judd, B.R.

    1981-10-20

    This paper describes an analytical tool for evaluating the effectiveness of safeguards that protect special nuclear material (SNM). The tool quantifies the effectiveness using several measures, including probabilities and expected times to detect and respond to malevolent attempts against the facility. These measures are computed for a spectrum of threats involving outsiders, insiders, collusion, falsification, and deceit. Overall system effectiveness is judged using performance indices aggregated over all threats. These indices can be used by designers and regulators when comparing costs and benefits of various safeguards. The framework is demonstrated with an example in which we assess vulnerabilities of a safeguards system and identify cost-effective design modifications.

  14. Objective assessment methodology and evaluation of low-rate digital voice processors

    NASA Astrophysics Data System (ADS)

    Dimolitsas, Spiros; Corcoran, Franklin L.; Baraniecki, Marion; Phipps, John G., Jr.

    Methods and results are presented for an objective-instrumentation evaluation of 16 kbit/sec-operation source-encoding algorithms, which aimed to ascertain whether 'toll quality' performance with nonvoice signals is possible at such an operating rate. It is found that while 16-kbit/sec source encoding is unable to accommodate voiceband data rates in excess of 2.4 kbit/sec with acceptable quality, satisfactory performance is obtainable with signaling. The 16 kbit/sec voice coding technology is therefore judged to be suitable for public-switched telephone network applications where alternative facilities for voiceband data traffic are provided.

  15. Evaluating ecommerce websites cognitive efficiency: an integrative framework based on data envelopment analysis.

    PubMed

    Lo Storto, Corrado

    2013-11-01

    This paper presents an integrative framework to evaluate ecommerce website efficiency from the user viewpoint using Data Envelopment Analysis (DEA). This framework is inspired by concepts driven from theories of information processing and cognition and considers the website efficiency as a measure of its quality and performance. When the users interact with the website interfaces to perform a task, they are involved in a cognitive effort, sustaining a cognitive cost to search, interpret and process information, and experiencing either a sense of satisfaction or dissatisfaction for that. The amount of ambiguity and uncertainty, and the search (over-)time during navigation that they perceive determine the effort size - and, as a consequence, the cognitive cost amount - they have to bear to perform their task. On the contrary, task performing and result achievement provide the users with cognitive benefits, making interaction with the website potentially attractive, satisfying, and useful. In total, 9 variables are measured, classified in a set of 3 website macro-dimensions (user experience, site navigability and structure). The framework is implemented to compare 52 ecommerce websites that sell products in the information technology and media market. A stepwise regression is performed to assess the influence of cognitive costs and benefits that mostly affect website efficiency. PMID:23697624
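
    The DEA step can be sketched in Python as an input-oriented CCR efficiency score computed by linear programming; the four hypothetical websites, two cognitive-cost inputs, and two benefit outputs below are assumptions, not the paper's 52 sites and 9 variables:

      import numpy as np
      from scipy.optimize import linprog

      X = np.array([[4.0, 3.0], [6.0, 2.0], [5.0, 5.0], [8.0, 6.0]])   # inputs: cognitive costs (lower is better)
      Y = np.array([[0.9, 0.8], [0.8, 0.9], [0.7, 0.6], [0.9, 0.7]])   # outputs: cognitive benefits (higher is better)

      def ccr_efficiency(k, X, Y):
          """Input-oriented CCR score of unit k: minimize theta s.t. X'lam <= theta*x_k, Y'lam >= y_k."""
          n = X.shape[0]
          c = np.r_[1.0, np.zeros(n)]                      # decision variables: [theta, lam_1..lam_n]
          A_ub = np.vstack([
              np.c_[-X[k], X.T],                           #  X'lam - theta*x_k <= 0
              np.c_[np.zeros(Y.shape[1]), -Y.T],           # -Y'lam <= -y_k
          ])
          b_ub = np.r_[np.zeros(X.shape[1]), -Y[k]]
          bounds = [(None, None)] + [(0.0, None)] * n
          return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).x[0]

      for k in range(X.shape[0]):
          print(f"website {k}: efficiency = {ccr_efficiency(k, X, Y):.2f}")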

  16. Applying a resources framework to analysis of the Force and Motion Conceptual Evaluation

    NASA Astrophysics Data System (ADS)

    Smith, Trevor I.; Wittmann, Michael C.

    2008-12-01

    We suggest one redefinition of common clusters of questions used to analyze student responses on the Force and Motion Conceptual Evaluation. Our goal is to propose a methodology that moves beyond an analysis of student learning defined by correct responses, either on the overall test or on clusters of questions defined solely by content. We use the resources framework theory of learning to define clusters within this experimental test that was designed without the resources framework in mind. We take special note of the contextual and representational dependence of questions with seemingly similar physics content. We analyze clusters in ways that allow the most common incorrect answers to give as much, or more, information as the correctness of responses in that cluster. We show that false positives can be found, especially on questions dealing with Newton’s third law. We apply our clustering to a small set of data to illustrate the value of comparing students’ incorrect responses which are otherwise identical on a correct or incorrect analysis. Our work provides a connection between theory and experiment in the area of survey design and the resources framework.

  17. Subjective and objective evaluations of a scattered sound field in a scale model opera house.

    PubMed

    Ryu, Jong Kwan; Jeon, Jin Yong

    2008-09-01

    Scattered sound fields in an opera house were objectively and subjectively evaluated through acoustical measurements in a 1:10 scale model and through auditory preference tests. Acoustical characteristics were measured in the stalls area with and without diffusers, both on the sidewalls close to the proscenium and in the soffit of the side balcony. Installed diffusers reduced the initial time delay gap and amplitude of the first reflected sound, and decreased sound pressure level (SPL), reverberation time (RT), and early decay time (EDT) at most seats due to the increased scattering and absorption. After diffuser installation, C(80) and 1-IACC(E3) increased at the front seats and decreased at the rear seats. Subjective evaluations showed that the preference of scattered sound fields correlates highly with loudness and reverberance. It was also found that EDT and SPL are dominant parameters describing subjective preference for scattered sounds in this experimental condition. PMID:19045645

  18. Experimental Study on Event-Related Potential for Objective Evaluation of Food

    NASA Astrophysics Data System (ADS)

    Tanaka, Motoshi; Honma, Tomohiro; Inoue, Hiroshi; Niiyama, Yoshitsugu; Takahashi, Toru; Kumagai, Masanori; Akiyama, Yoshinobu

    In order to study the application of the event-related potential (ERP) to the objective evaluation of food, the ERP was measured while subjects judged the appearance of food on a three-grade scale with the opinions “like”, “favorite” and “more favorite”. Sushi and cooked rice were selected as typical foods. Five pictures of each food that the subjects liked were chosen before the measurements and then used in the opinion tests. As a result, the P300 component of the ERP was detected, and the P300 area (the area under the ERP waveform from 250 to 500 ms latency) became larger when the subjects judged a food as “more favorite”, which indicates the feasibility of evaluating food using the ERP.

  19. Evaluation of Genetic Algorithm Concepts using Model Problems. Part 1; Single-Objective Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.

    2003-01-01

    A genetic-algorithm-based optimization approach is described and evaluated using a simple hill-climbing model problem. The model problem utilized herein allows for the broad specification of a large number of search spaces, including spaces with an arbitrary number of genes or decision variables and an arbitrary number of hills or modes. In the present study, only single-objective problems are considered. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all problems attempted. The most difficult problems - those with large hyper-volumes and multi-mode search spaces containing a large number of genes - require a large number of function evaluations for GA convergence, but they always converge.
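
    A compact Python sketch of a genetic algorithm on an assumed two-gene, multi-mode test function (not the paper's actual model problem or GA settings):

      import numpy as np

      rng = np.random.default_rng(42)

      def fitness(x):
          # Several "hills"; the global optimum is near (0.5, 0.5) with fitness 2.
          return (np.sin(3 * np.pi * x[:, 0]) ** 2 + np.sin(3 * np.pi * x[:, 1]) ** 2
                  - ((x[:, 0] - 0.5) ** 2 + (x[:, 1] - 0.5) ** 2))

      pop = rng.uniform(0, 1, size=(40, 2))                  # initial population of 40 individuals
      for generation in range(100):
          f = fitness(pop)
          # Binary tournament selection.
          idx = rng.integers(0, len(pop), size=(len(pop), 2))
          parents = pop[np.where(f[idx[:, 0]] > f[idx[:, 1]], idx[:, 0], idx[:, 1])]
          # Uniform crossover with a shuffled set of mates, then Gaussian mutation.
          mates = parents[rng.permutation(len(parents))]
          mask = rng.uniform(size=parents.shape) < 0.5
          children = np.where(mask, parents, mates) + rng.normal(0, 0.05, size=parents.shape)
          pop = np.clip(children, 0, 1)

      best = pop[np.argmax(fitness(pop))]
      print("best individual:", np.round(best, 3), "fitness:", float(fitness(best[None])[0]))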

  20. An Ex Post Facto Evaluation Framework for Place-Based Police Interventions

    ERIC Educational Resources Information Center

    Braga, Anthony A.; Hureau, David M.; Papachristos, Andrew V.

    2011-01-01

    Background: A small but growing body of research evidence suggests that place-based police interventions generate significant crime control gains. While place-based policing strategies have been adopted by a majority of U.S. police departments, very few agencies make a priori commitments to rigorous evaluations. Objective: Recent methodological…

  1. Framework for Optimizing the Evaluation of Data From Contaminated Soil in Sweden

    EPA Science Inventory

    The Swedish guidelines for the evaluation of data for the purpose of a risk assessment at contaminated sites are of a qualitative character, as opposed to the USEPA’s Data Quality Objective Process. In Sweden, this can sometimes be a problem because the demands on data quality ar...

  2. Implementing accountability for reasonableness framework at district level in Tanzania: a realist evaluation

    PubMed Central

    2011-01-01

    Background Despite the growing importance of the Accountability for Reasonableness (A4R) framework in priority setting worldwide, there is still an inadequate understanding of the processes and mechanisms underlying its influence on legitimacy and fairness, as conceived and reflected in service management processes and outcomes. As a result, the ability to draw scientifically sound lessons for the application of the framework to services and interventions is limited. This paper evaluates the experiences of implementing the A4R approach in Mbarali District, Tanzania, in order to find out how the innovation was shaped, enabled, and constrained by the interaction between contexts, mechanisms and outcomes. Methods This study draws on the principles of realist evaluation -- a largely qualitative approach, chiefly concerned with testing and refining programme theories by exploring the complex interactions of contexts, mechanisms, and outcomes. Mixed methods were used in data collection, including individual interviews, non-participant observation, and document reviews. A thematic framework approach was adopted for the data analysis. Results The study found that while the A4R approach to priority setting was helpful in strengthening transparency, accountability, stakeholder engagement, and fairness, the efforts at integrating it into the current district health system were challenging. Participatory structures under the decentralisation framework, central government's call for partnership in district-level planning and priority setting, perceived needs of stakeholders, as well as active engagement between researchers and decision makers all facilitated the adoption and implementation of the innovation. In contrast, however, limited local autonomy, low level of public awareness, unreliable and untimely funding, inadequate accountability mechanisms, and limited local resources were the major contextual factors that hampered the full implementation. Conclusion This study

  3. Relationships between objective acoustic indices and acoustic comfort evaluation in nonacoustic spaces

    NASA Astrophysics Data System (ADS)

    Kang, Jian

    2001-05-01

    Much attention has been paid to acoustic spaces such as concert halls and recording studios, whereas research on nonacoustic buildings/spaces has been rather limited, especially from the viewpoint of acoustic comfort. In this research a series of case studies has been carried out on this topic, considering various spaces including shopping mall atrium spaces, library reading rooms, football stadia, swimming spaces, churches, dining spaces, as well as urban open public spaces. The studies focus on the relationships between objective acoustic indices such as sound pressure level and reverberation time and perceptions of acoustic comfort. The results show that the acoustic atmosphere is an important consideration in such spaces and the evaluation of acoustic comfort may vary considerably even if the objective acoustic indices are the same. It is suggested that current guidelines and technical regulations are insufficient in terms of acoustic design of these spaces, and the relationships established from the case studies between objective and subjective aspects would be useful for developing further design guidelines. [Work supported partly by the British Academy.]

  4. Operationalizing the RE-AIM framework to evaluate the impact of multi-sector partnerships

    PubMed Central

    2014-01-01

    Background The RE-AIM (Reach, Effectiveness, Adoption, Implementation, and Maintenance) framework is a reliable tool for the translation of research to practice. This framework has been widely applied to assess the impact of individual interventions. However, RE-AIM has rarely been used to evaluate implementation interventions, especially those arising from multi-sector partnerships. The primary purpose of this paper is to operationalize the RE-AIM approach to evaluate large, multi-sector partnerships. SCI Action Canada, a community-university partnership that aims to promote physical activity among adults with spinal cord injury, is used as an example. A secondary purpose is to provide initial data from SCI Action Canada by using this conceptualization of RE-AIM. Methods Each RE-AIM element is operationalized for multi-sector partnerships. Specific to SCI Action Canada, seven reach calculations, four adoption rates, four effectiveness outcomes, one implementation outcome, one organizational maintenance outcome, and two individual maintenance outcomes are defined. The specific numerators based on SCI Action Canada activities are also listed for each of these calculations. Results The results are derived from SCI Action Canada activities. SCI Action Canada’s reach ranged from 3% (end-user direct national reach) to 37% (total regional reach). Adoption rates ranged from 15% (provincial-level adoption) to 76% (regional-level adoption). Implementation and organizational maintenance rates were 92% and 100%, respectively. Conclusions We have operationalized the RE-AIM framework for larger multi-sectoral partnerships and demonstrated its applicability to such partnerships with SCI Action Canada. Future partnerships could use RE-AIM to assess their public health impact. PMID:24923331

  5. Developing a clinical utility framework to evaluate prediction models in radiogenomics

    PubMed Central

    Wu, Yirong; Liu, Jie; del Rio, Alejandro Munoz; Page, David C.; Alagoz, Oguzhan; Peissig, Peggy; Onitilo, Adedayo A.; Burnside, Elizabeth S.

    2016-01-01

    Combining imaging and genetic information to predict disease presence and behavior is being codified into an emerging discipline called “radiogenomics.” Optimal evaluation methodologies for radiogenomics techniques have not been established. We aim to develop a clinical decision framework based on utility analysis to assess prediction models for breast cancer. Our data comes from a retrospective case-control study, collecting Gail model risk factors, genetic variants (single nucleotide polymorphisms-SNPs), and mammographic features in Breast Imaging Reporting and Data System (BI-RADS) lexicon. We first constructed three logistic regression models built on different sets of predictive features: (1) Gail, (2) Gail+SNP, and (3) Gail+SNP+BI-RADS. Then, we generated ROC curves for three models. After we assigned utility values for each category of findings (true negative, false positive, false negative and true positive), we pursued optimal operating points on ROC curves to achieve maximum expected utility (MEU) of breast cancer diagnosis. We used McNemar’s test to compare the predictive performance of the three models. We found that SNPs and BI-RADS features augmented the baseline Gail model in terms of the area under ROC curve (AUC) and MEU. SNPs improved sensitivity of the Gail model (0.276 vs. 0.147) and reduced specificity (0.855 vs. 0.912). When additional mammographic features were added, sensitivity increased to 0.457 and specificity to 0.872. SNPs and mammographic features played a significant role in breast cancer risk estimation (p-value < 0.001). Our decision framework comprising utility analysis and McNemar’s test provides a novel framework to evaluate prediction models in the realm of radiogenomics. PMID:27095854
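
    The maximum-expected-utility step can be sketched in Python by scanning candidate ROC operating points; the prevalence and utility values below are assumptions for illustration, while the sensitivity/specificity pairs echo those quoted in the abstract:

      import numpy as np

      # Candidate operating points (sensitivity, specificity) along a ROC curve.
      roc_points = np.array([[0.147, 0.912], [0.276, 0.855], [0.457, 0.872], [0.700, 0.600]])

      prevalence = 0.01                       # assumed prior probability of cancer
      U_TP, U_FN = 0.80, -1.00                # assumed utilities for diseased cases
      U_TN, U_FP = 1.00, 0.95                 # assumed utilities for non-diseased cases

      def expected_utility(sens, spec):
          diseased = prevalence * (sens * U_TP + (1 - sens) * U_FN)
          healthy = (1 - prevalence) * (spec * U_TN + (1 - spec) * U_FP)
          return diseased + healthy

      eu = np.array([expected_utility(s, sp) for s, sp in roc_points])
      best = int(np.argmax(eu))
      print(f"MEU = {eu[best]:.4f} at sensitivity {roc_points[best, 0]}, specificity {roc_points[best, 1]}")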

  6. Developing a clinical utility framework to evaluate prediction models in radiogenomics

    NASA Astrophysics Data System (ADS)

    Wu, Yirong; Liu, Jie; Munoz del Rio, Alejandro; Page, David C.; Alagoz, Oguzhan; Peissig, Peggy; Onitilo, Adedayo A.; Burnside, Elizabeth S.

    2015-03-01

    Combining imaging and genetic information to predict disease presence and behavior is being codified into an emerging discipline called "radiogenomics." Optimal evaluation methodologies for radiogenomics techniques have not been established. We aim to develop a clinical decision framework based on utility analysis to assess prediction models for breast cancer. Our data come from a retrospective case-control study, collecting Gail model risk factors, genetic variants (single nucleotide polymorphisms, SNPs), and mammographic features in the Breast Imaging Reporting and Data System (BI-RADS) lexicon. We first constructed three logistic regression models built on different sets of predictive features: (1) Gail, (2) Gail+SNP, and (3) Gail+SNP+BI-RADS. Then, we generated ROC curves for the three models. After we assigned utility values for each category of findings (true negative, false positive, false negative, and true positive), we pursued optimal operating points on the ROC curves to achieve maximum expected utility (MEU) of breast cancer diagnosis. We used McNemar's test to compare the predictive performance of the three models. We found that SNPs and BI-RADS features augmented the baseline Gail model in terms of the area under the ROC curve (AUC) and MEU. SNPs improved sensitivity of the Gail model (0.276 vs. 0.147) and reduced specificity (0.855 vs. 0.912). When additional mammographic features were added, sensitivity increased to 0.457 and specificity to 0.872. SNPs and mammographic features played a significant role in breast cancer risk estimation (p-value < 0.001). Our decision framework, comprising utility analysis and McNemar's test, provides a novel way to evaluate prediction models in the realm of radiogenomics.

  7. A Framework to Evaluate Wildlife Feeding in Research, Wildlife Management, Tourism and Recreation

    PubMed Central

    Dubois, Sara; Fraser, David

    2013-01-01

    Simple Summary Human feeding of wildlife is a world-wide phenomenon with very diverse effects on conservation, animal welfare and public safety. From a review of the motivations, types and consequences of wildlife feeding, an evaluative framework is presented to assist policy-makers, educators and managers in making ethically and biologically based decisions about the appropriateness of feeding wildlife in the context of research, wildlife management, tourism and recreation. Abstract Feeding of wildlife occurs in the context of research, wildlife management, tourism and in opportunistic ways. A review of examples shows that although feeding is often motivated by good intentions, it can lead to problems of public safety and conservation and be detrimental to the welfare of the animals. Examples from British Columbia illustrate the problems (nuisance animal activity, public safety risk) and consequences (culling, translocation) that often arise from uncontrolled feeding. Three features of wildlife feeding can be distinguished: the feasibility of control, the effects on conservation and the effects on animal welfare. An evaluative framework incorporating these three features was applied to examples of feeding from the literature. The cases of feeding for research and management purposes were generally found to be acceptable, while cases of feeding for tourism or opportunistic feeding were generally unacceptable. The framework should allow managers and policy-makers to distinguish acceptable from unacceptable forms of wildlife feeding as a basis for policy, public education and enforcement. Many harmful forms of wildlife feeding seem unlikely to change until they come to be seen as socially unacceptable. PMID:26479747

  8. Maslow's needs hierarchy as a framework for evaluating hospitality houses' resources and services.

    PubMed

    Duncan, Mary Katherine Waibel; Blugis, Ann

    2011-08-01

    As hospitality houses welcome greater numbers of families and families requiring longer stays, they do so in the absence of a widely accepted theory to guide their understanding of guests' needs and evaluations of how well they meet those needs. We propose A. Maslow's (1970) Hierarchy of Needs as a conceptual framework for understanding what makes a hospitality house a home for families of pediatric patients and for guiding the activities of hospitality houses' boards of directors, staff, volunteers, and donors. This article presents findings from a theory-driven evaluation of one hospitality house's ability to meet guests' needs, describes the house's best practice standards for addressing guests' needs, and suggests areas for future research. PMID:21726782

  9. A systematic framework for evaluating standard cell middle-of-line (MOL) robustness for multiple patterning

    NASA Astrophysics Data System (ADS)

    Xu, Xiaoqing; Cline, Brian; Yeric, Greg; Yu, Bei; Pan, David Z.

    2015-03-01

    Multiple patterning (triple and quadruple patterning) is being considered for use on the Middle-Of-Line (MOL) layers at the 10nm technology node and beyond. For robust standard cell design, designers need to improve the inter-cell compatibility for all combinations of cells and cell placements. Multiple patterning colorability checks break the locality of traditional rule checking, and N-wise checks are strongly needed to verify the multiple patterning colorability for layout interaction across cell boundaries. In this work, a systematic framework is proposed to evaluate the library-level robustness over multiple patterning from two perspectives: illegal cell combinations and full chip interactions. With efficient N-wise checks, the vertical and horizontal boundary checks are explored to predict illegal cell combinations. For full chip interactions, random benchmarks are generated by cell shifting and tested to evaluate the placement-level effort needed to reduce quadruple patterning to triple patterning for the MOL layer.

  10. Evaluation for coastal reclamation feasibility using a comprehensive hydrodynamic framework: A case study in Haizhou Bay.

    PubMed

    Feng, Lan; He, Jia; Ai, Junyong; Sun, Xiang; Bian, Fangyuan; Zhu, Xiaodong

    2015-11-15

    Coastal reclamation (CR) is a prevailing approach to resolving the contradiction between land shortage and the growing demand for living space. In general, environmental impact assessment (EIA) focuses on evaluating the feasibility of an individual coastal reclamation project (CRP). However, few studies have investigated the cumulative effect of multiple CRPs on the surrounding environment. In this study, an integrated framework based on coastal hydrodynamics was established, and then applied to the feasibility evaluation of multiple CRPs for future coastal management in Haizhou Bay, China. The results indicated that three out of five reclamation projects were feasible and the remaining two should be forbidden in the study area, whereas EIA would approve all of the CRPs. The framework provides a scientific reference for effective management of coastal reclamation and for future environmental impact research when new CRPs are proposed. PMID:26364204

  11. A framework for the damage evaluation of acoustic emission signals through Hilbert-Huang transform

    NASA Astrophysics Data System (ADS)

    Siracusano, Giulio; Lamonaca, Francesco; Tomasello, Riccardo; Garescì, Francesca; Corte, Aurelio La; Carnì, Domenico Luca; Carpentieri, Mario; Grimaldi, Domenico; Finocchio, Giovanni

    2016-06-01

    Acoustic emission (AE) is a powerful and promising nondestructive testing method for structural monitoring in civil engineering. Here, we show how systematic investigation of crack phenomena based on AE data can be significantly improved by the use of advanced signal processing techniques. Such data are a fundamental source of information that can be used as the basis for evaluating the status of the material, thereby paving the way for a new frontier of innovation made possible by data-enabled analytics. In this article, we propose a framework based on the Hilbert-Huang Transform for the evaluation of material damage that (i) facilitates the systematic employment of both established and promising analysis criteria, and (ii) provides unsupervised tools for accurate classification of the fracture type and discrimination between longitudinal (P-) and transversal (S-) waves related to an AE event. The experimental validation shows promising results for reliable assessment of structural health through the monitoring of civil infrastructure.
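
    As a rough illustration of the Hilbert spectral step that underlies such analyses, the sketch below computes the instantaneous amplitude and frequency of a synthetic AE-like burst via the analytic signal (scipy.signal.hilbert). The empirical mode decomposition stage of the full Hilbert-Huang Transform is omitted here, and the sampling rate and burst parameters are assumed values, not settings from the paper.

```python
# Minimal sketch of the Hilbert spectral step of Hilbert-Huang analysis:
# instantaneous amplitude and frequency of one mode via the analytic signal.
# The empirical mode decomposition that would normally precede this step is
# omitted; the AE-like burst and sampling rate are synthetic assumptions.
import numpy as np
from scipy.signal import hilbert

fs = 1_000_000                                                 # assumed sampling rate, Hz
t = np.arange(0, 0.001, 1 / fs)
burst = np.exp(-5000 * t) * np.sin(2 * np.pi * 150_000 * t)    # synthetic AE burst

analytic = hilbert(burst)
inst_amplitude = np.abs(analytic)
inst_phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(inst_phase) / (2 * np.pi) * fs             # instantaneous frequency, Hz

print(inst_amplitude.max(), inst_freq[:5])
```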

  12. Evaluation of activity images in dynamics speckle in search of objective estimators

    NASA Astrophysics Data System (ADS)

    Avendaño Montecinos, Marcos; Mora Canales, Victor; Cap, Nelly; Grumel, Eduardo; Rabal, Hector; Trivi, Marcelo; Baradit, Erik

    2015-08-01

    We explore the performance of two algorithms for screening loci of equal activity in dynamic speckle images. Dynamic speckle imaging is currently used in several applications in medicine, biology, agriculture and other disciplines. Nevertheless, no objective standard has been proposed so far to evaluate the performance of the algorithms, so their assessment has relied on subjective appreciation. We test two methods, "Generalized Differences" and "Fujii", on case studies of activity that do not exhibit the inherent variability of biological samples, looking for a standard to evaluate their performance in an objective way. As study cases, we use the drying of paint on an (assumed) unknown topography, namely the surface of a coin, and the activity due to preheating a piece of paper that conceals writing on the surface beneath it. A known object of simple topography, a paint pool whose depth is a linear function of position, is included in the image alongside the painted coin. Both algorithms are applied to the images, and the intensity profile of the results along the linear region of the pool activity is used to estimate the depth of the geometric topography hidden under the paint on the coin. The accuracy of the result is used as a figure of merit for the corresponding algorithm. In the other experiment, a hidden dark bar printed on paper is covered with one or two sheets of paper, slightly preheated with a lamp, and the activity images are registered and processed with both algorithms. The intensity profile of the activity images is used to estimate which method gives a better description of the bar edges and their deterioration. Experimental results are shown.
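
    The two descriptors named above have standard pixel-wise definitions, sketched below for a stack of speckle frames; the frame stack here is random data for illustration, not imagery from the experiments.

```python
# Minimal sketch of the two activity descriptors discussed above, computed
# pixel-wise over a stack of speckle frames (shape: frames x rows x cols).
# The random stack is a placeholder, not experimental data.
import numpy as np

def generalized_differences(stack):
    # GD(x, y) = sum over all frame pairs of |I_i - I_j|.
    n = stack.shape[0]
    gd = np.zeros(stack.shape[1:], dtype=float)
    for i in range(n):
        for j in range(i + 1, n):
            gd += np.abs(stack[i] - stack[j])
    return gd

def fujii(stack, eps=1e-9):
    # Fujii(x, y) = sum over consecutive frames of |I_i - I_{i+1}| / (I_i + I_{i+1}).
    diff = np.abs(np.diff(stack, axis=0))
    denom = stack[:-1] + stack[1:] + eps    # eps avoids division by zero
    return np.sum(diff / denom, axis=0)

frames = np.random.rand(50, 64, 64)         # assumed stack of speckle frames
print(generalized_differences(frames).shape, fujii(frames).shape)
```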

  13. The design of a contextualized responsive evaluation framework for fishery management in Benin.

    PubMed

    Kouévi, A T; Van Mierlo, B; Leeuwis, C; Vodouhê, S D

    2013-02-01

    The main question addressed by this article is how to adapt the responsive evaluation (RE) approach to an intervention context characterized by repetition of ineffective interventions, ambiguous intervention action theories among stakeholders, and high complexity. The context is Grand-Popo, a fishing municipality located on Benin's southwest Atlantic coast. The fishery management interventionists and the fishing communities in the municipality all espoused concern for the sustainable improvement of fishing actors' livelihood conditions, but differed about the reasons for this livelihood impairment, and about what should be done, when, where, and by whom. Given this ambiguity, we identified RE as a promising action research approach to facilitate dialogue and mutual learning, and consequently to improve stakeholders' ability to resolve problems. However, this approach seems to have some shortcomings in the Grand-Popo context, regarding the repetitive ineffectiveness of interventions, high complexity, and uncertainty. Therefore, based on our empirical study, we add three dimensions to the existing RE framework: historical analysis to deal with routine interventions, exploration and discussion of incongruities of action theories to trigger double-loop learning, and system analysis to deal with complexity and uncertainty. This article does not intend to address the implications or impact of this adapted RE framework. Instead, we suggest some criteria and indicators for evaluating whether the proposed amended RE approach has assisted in resolving the fishery problems in Grand-Popo after the approach has been applied. PMID:22634798

  14. Evaluating a Web-Based Educational Module on Oral Cancer Examination Based on a Behavioral Framework.

    PubMed

    Wee, Alvin G; Zimmerman, Lani M; Pullen, Carol H; Allen, Carl M; Lambert, Paul M; Paskett, Electra D

    2016-03-01

    Patients at risk of developing oral and/or oropharyngeal cancer (OPC) are more likely to see primary care providers (PCPs) than a dentist. Many PCPs do not regularly perform oral cancer examination (OCE). The purpose of this study was to design a web-based educational program based on a behavioral framework to encourage PCPs to conduct OCE. PCPs were solicited to provide feedback on the program and to evaluate their short-term knowledge. The integrated behavioral model was used to design the program. Fifteen PCPs (five in each group: physicians, physician assistants, and nurse practitioners) reviewed the program and took a posttest: (1) index of knowledge of risk factors for oral cancer (RiskOC) and (2) index of knowledge of diagnostic procedures for oral cancer (DiagOC). Findings from the process evaluation were mainly positive, with comments on the length of the program comprising the ten negative comments. No significant difference among groups of PCPs (physicians, physician assistants, and nurse practitioners) was detected for DiagOC (p = 0.43) or RiskOC (p = 0.201). A program on OPC for PCPs should be less than 40 min. Postviewing knowledge outcomes were similar for all PCPs. The web-based program on OPC based on a behavioral framework could have similar short-term knowledge outcomes for all PCPs and may increase the number of PCPs performing OCEs. PMID:25572460

  15. Attachment Based Treatments for Adolescents: The Secure Cycle as a Framework for Assessment, Treatment and Evaluation

    PubMed Central

    Kobak, Roger; Zajac, Kristyn; Herres, Joanna; Krauthamer Ewing, E. Stephanie

    2016-01-01

    The emergence of attachment-based treatments (ABTs) for adolescents highlights the need to more clearly define and evaluate these treatments in the context of other attachment based treatments for young children and adults. We propose a general framework for defining and evaluating ABTs that describes the cyclical processes that are required to maintain a secure attachment bond. This secure cycle incorporates three components: 1) the child or adult’s internal working model (IWM) of the caregiver; 2) emotionally attuned communication; and 3) the caregiver’s IWM of the child or adult. We briefly review Bowlby, Ainsworth, and Main’s contributions to defining the components of the secure cycle and discuss how this framework can be adapted for understanding the process of change in ABTs. For clinicians working with adolescents, our model can be used to identify how deviations from the secure cycle (attachment injuries, empathic failures and mistuned communication) contribute to family distress and psychopathology. The secure cycle also provides a way of describing the ABT elements that have been used to revise IWMs or improve emotionally attuned communication. For researchers, our model provides a guide for conceptualizing and measuring change in attachment constructs and how change in one component of the interpersonal cycle should generalize to other components. PMID:25744572

  16. An interdisciplinary framework to evaluate bioshield plantations: Insights from peninsular India

    NASA Astrophysics Data System (ADS)

    Mukherjee, Nibedita; Dahdouh-Guebas, Farid; Koedam, Nico; Shanker, Kartik

    2015-02-01

    Bioshields or coastal vegetation structures are currently amongst the most important coastal habitat modification activities in south-east Asia, particularly after the December 2004 tsunami. Coastal plantations have been promoted at a large scale as protection against severe natural disasters despite considerable debate over their efficacy as protection measures. In this paper, we provide an interdisciplinary framework for evaluating and monitoring coastal plantations. We then use this framework in a case study in peninsular India. We conducted a socio-ecological questionnaire-based survey of government and non-government organizations directly involved in coastal plantation efforts in three states of mainland India affected by the 2004 Indian Ocean tsunami. We found that though coastal protection was stated to be the primary purpose, socio-economic factors such as providing rural employment were strong drivers of plantation activities. Local communities were engaged primarily as daily wage labour for planting rather than in the planning or monitoring phases. The application of ecological criteria was undermined during the establishment and maintenance of plantations, and there was a general lack of awareness of conservation laws relating to coastal forests. While an ample flow of international aid has fuelled the planting of exotic species in the study area, particularly after the 2004 Indian Ocean tsunami, the long-term ecological consequences need further evaluation and rigorous monitoring.

  17. Attachment based treatments for adolescents: the secure cycle as a framework for assessment, treatment and evaluation.

    PubMed

    Kobak, Roger; Zajac, Kristyn; Herres, Joanna; Krauthamer Ewing, E Stephanie

    2015-01-01

    The emergence of attachment-based treatments (ABTs) for adolescents highlights the need to more clearly define and evaluate these treatments in the context of other attachment based treatments for young children and adults. We propose a general framework for defining and evaluating ABTs that describes the cyclical processes that are required to maintain a secure attachment bond. This secure cycle incorporates three components: (1) the child or adult's IWM of the caregiver; (2) emotionally attuned communication; and (3) the caregiver's IWM of the child or adult. We briefly review Bowlby, Ainsworth, and Main's contributions to defining the components of the secure cycle and discuss how this framework can be adapted for understanding the process of change in ABTs. For clinicians working with adolescents, our model can be used to identify how deviations from the secure cycle (attachment injuries, empathic failures and mistuned communication) contribute to family distress and psychopathology. The secure cycle also provides a way of describing the ABT elements that have been used to revise IWMs or improve emotionally attuned communication. For researchers, our model provides a guide for conceptualizing and measuring change in attachment constructs and how change in one component of the interpersonal cycle should generalize to other components. PMID:25744572

  18. Framework for Evaluating Water Quality of the New England Crystalline Rock Aquifers

    USGS Publications Warehouse

    Harte, Philip T.; Robinson, Gilpin R., Jr.; Ayotte, Joseph D.; Flanagan, Sarah M.

    2008-01-01

    Little information exists on regional ground-water-quality patterns for the New England crystalline rock aquifers (NECRA). A systematic approach to facilitate regional evaluation is needed for several reasons. First, the NECRA are vulnerable to anthropogenic and natural contaminants such as methyl tert-butyl ether (MTBE), arsenic, and radon gas. Second, the physical characteristics of the aquifers, termed 'intrinsic susceptibility', can lead to variable and degraded water quality. A framework approach for characterizing the aquifer region into areas of similar hydrogeology is described in this report and is based on hypothesized relevant physical features and chemical conditions (collectively termed 'variables') that affect regional patterns of ground-water quality. A framework for comparison of water quality across the NECRA consists of a group of spatial variables related to aquifer properties, hydrologic conditions, and contaminant sources. These spatial variables are grouped under four general categories (features) that can be mapped across the aquifers: (1) geologic, (2) hydrophysiographic, (3) land-use land-cover, and (4) geochemical. On a regional scale, these variables represent indicators of natural and anthropogenic sources of contaminants, as well as generalized physical and chemical characteristics of the aquifer system that influence ground-water chemistry and flow. These variables can be used in varying combinations (depending on the contaminant) to categorize the aquifer into areas of similar hydrogeologic characteristics to evaluate variation in regional water quality through statistical testing.

  19. Remedy Evaluation Framework for Inorganic, Non-Volatile Contaminants in the Vadose Zone

    SciTech Connect

    Truex, Michael J.; Carroll, Kenneth C.

    2013-05-01

    Contaminants in the vadose zone may act as a potential long-term source of groundwater contamination and need to be considered in remedy evaluations. In many cases, remediation decisions for the vadose zone will need to be made all or in part based on projected impacts to groundwater. Because there are significant natural attenuation processes inherent in vadose zone contaminant transport, remediation in the vadose zone to protect groundwater is functionally a combination of natural attenuation and use of other remediation techniques, as needed, to mitigate contaminant flux to groundwater. Attenuation processes include both hydrobiogeochemical processes that serve to retain contaminants within porous media and physical processes that mitigate the rate of water flux. In particular, the physical processes controlling fluid flow in the vadose zone are quite different and generally have a more significant attenuation impact on contaminant transport relative to those within the groundwater system. A remedy evaluation framework is presented herein that uses an adaptation of the established EPA Monitored Natural Attenuation (MNA) evaluation approach and a conceptual model based approach focused on identifying and quantifying features and processes that control contaminant flux through the vadose zone. A key concept for this framework is to recognize that MNA will comprise some portion of all remedies in the vadose zone. Thus, structuring evaluation of vadose zone waste sites to use an MNA-based approach provides information necessary to either select MNA as the remedy, if appropriate, or to quantify how much additional attenuation would need to be induced by a remedial action (e.g., technologies considered in a feasibility study) to augment the natural attenuation processes and meet groundwater protection goals.

  20. Evaluating the public health impact of health promotion interventions: the RE-AIM framework.

    PubMed Central

    Glasgow, R E; Vogt, T M; Boles, S M

    1999-01-01

    Progress in public health and community-based interventions has been hampered by the lack of a comprehensive evaluation framework appropriate to such programs. Multilevel interventions that incorporate policy, environmental, and individual components should be evaluated with measurements suited to their settings, goals, and purpose. In this commentary, the authors propose a model (termed the RE-AIM model) for evaluating public health interventions that assesses 5 dimensions: reach, efficacy, adoption, implementation, and maintenance. These dimensions occur at multiple levels (e.g., individual, clinic or organization, community) and interact to determine the public health or population-based impact of a program or policy. The authors discuss issues in evaluating each of these dimensions and combining them to determine overall public health impact. Failure to adequately evaluate programs on all 5 dimensions can lead to a waste of resources, discontinuities between stages of research, and failure to improve public health to the limits of our capacity. The authors summarize strengths and limitations of the RE-AIM model and recommend areas for future research and application. PMID:10474547

  1. Evaluating Academic Scientists Collaborating in Team-Based Research: A Proposed Framework.

    PubMed

    Mazumdar, Madhu; Messinger, Shari; Finkelstein, Dianne M; Goldberg, Judith D; Lindsell, Christopher J; Morton, Sally C; Pollock, Brad H; Rahbar, Mohammad H; Welty, Leah J; Parker, Robert A

    2015-10-01

    Criteria for evaluating faculty are traditionally based on a triad of scholarship, teaching, and service. Research scholarship is often measured by first or senior authorship on peer-reviewed scientific publications and being principal investigator on extramural grants. Yet scientific innovation increasingly requires collective rather than individual creativity, which traditional measures of achievement were not designed to capture and, thus, devalue. The authors propose a simple, flexible framework for evaluating team scientists that includes both quantitative and qualitative assessments. An approach for documenting contributions of team scientists in team-based scholarship, nontraditional education, and specialized service activities is also outlined. Although biostatisticians are used for illustration, the approach is generalizable to team scientists in other disciplines. The authors offer three key recommendations to members of institutional promotion committees, department chairs, and others evaluating team scientists. First, contributions to team-based scholarship and specialized contributions to education and service need to be assessed and given appropriate and substantial weight. Second, evaluations must be founded on well-articulated criteria for assessing the stature and accomplishments of team scientists. Finally, mechanisms for collecting evaluative data must be developed and implemented at the institutional level. Without these three essentials, contributions of team scientists will continue to be undervalued in the academic environment. PMID:25993282

  2. A conceptual framework for Lake Michigan coastal/nearshore ecosystems, with application to Lake Michigan Lakewide Management Plan (LaMP) objectives

    USGS Publications Warehouse

    Seelbach, Paul W.; Fogarty, Lisa R.; Bunnell, David Bo; Haack, Sheridan K.; Rogers, Mark W.

    2013-01-01

    The Lakewide Management Plans (LaMPs) within the Great Lakes region are examples of broad-scale, collaborative resource-management efforts that require a sound ecosystems approach. Yet, the LaMP process lacks a holistic framework that allows these individual actions to be planned and understood within the broader context of the Great Lakes ecosystem. In this paper we (1) introduce a conceptual framework that unifies ideas and language among Great Lakes managers and scientists, whose focus areas range from tributary watersheds to open-lake waters, and (2) illustrate how the framework can be used to outline the geomorphic, hydrologic, biological, and societal processes that underlie several goals of the Lake Michigan LaMP, thus providing a holistic and fairly comprehensive roadmap for tackling these challenges. For each selected goal, we developed a matrix that identifies the key ecosystem processes within the cell for each lake zone and each discipline; we then provide one example where a process is poorly understood and a second where a process is understood, but its impact or importance is unclear. Implicit in these objectives was our intention to highlight the importance of the Great Lakes coastal/nearshore zone. Although the coastal/nearshore zone is the important linkage zone between the watershed and open-lake zones—and is the zone where most LaMP issues are focused—scientists and managers have a relatively poor understanding of how the coastal/nearshore zone functions. We envision follow-up steps including (1) collaborative development of a more detailed and more complete conceptual model of how (and where) identified processes are thought to function, and (2) a subsequent gap analysis of science and monitoring priorities.

  3. How Much Are Harry Potter's Glasses Worth? Children's Monetary Evaluation of Authentic Objects

    ERIC Educational Resources Information Center

    Gelman, Susan A.; Frazier, Brandy N.; Noles, Nicholaus S.; Manczak, Erika M.; Stilwell, Sarah M.

    2015-01-01

    Adults attach special value to objects that link to notable people or events--authentic objects. We examined children's monetary evaluation of authentic objects, focusing on four kinds: celebrity possessions (e.g., Harry Potter's glasses), original creations (e.g., the very first teddy bear), personal possessions (e.g., your…

  4. Heart rate responses provide an objective evaluation of human disturbance stimuli in breeding birds

    PubMed Central

    Ellenberg, Ursula; Mattern, Thomas; Seddon, Philip J.

    2013-01-01

    Intuition is a poor guide for evaluating the effects of human disturbance on wildlife. Using the endangered Yellow-eyed penguin, Megadyptes antipodes, as an example, we show that heart rate responses provide an objective tool to evaluate human disturbance stimuli and encourage the wider use of this simple and low-impact approach. Yellow-eyed penguins are a flagship species for New Zealand's wildlife tourism; however, unregulated visitor access has recently been associated with reduced breeding success and lower first year survival. We measured heart rate responses of Yellow-eyed penguins via artificial eggs to evaluate a range of human stimuli regularly occurring at their breeding sites. We found the duration of a stimulus to be the most important factor, with elevated heart rate being sustained while a person remained within sight. Human activity was the next important component; a simulated wildlife photographer, crawling slowly around during his stay, elicited a significantly higher heart rate response than an entirely motionless human spending the same time at the same distance. Stimuli we subjectively might perceive as low impact, such as the careful approach of a ‘wildlife photographer’, resulted in a stronger response than a routine nest-check that involved lifting a bird up to view nest contents. A single, slow-moving human spending 20 min within 2 m from the nest may provoke a response comparable to that of 10 min handling a bird for logger deployment. To reduce cumulative impact of disturbance, any human presence in the proximity of Yellow-eyed penguins needs to be kept at a minimum. Our results highlight the need for objective quantification of the effects of human disturbance in order to provide a sound basis for guidelines to manage human activity around breeding birds. PMID:27293597

  5. Heart rate responses provide an objective evaluation of human disturbance stimuli in breeding birds.

    PubMed

    Ellenberg, Ursula; Mattern, Thomas; Seddon, Philip J

    2013-01-01

    Intuition is a poor guide for evaluating the effects of human disturbance on wildlife. Using the endangered Yellow-eyed penguin, Megadyptes antipodes, as an example, we show that heart rate responses provide an objective tool to evaluate human disturbance stimuli and encourage the wider use of this simple and low-impact approach. Yellow-eyed penguins are a flagship species for New Zealand's wildlife tourism; however, unregulated visitor access has recently been associated with reduced breeding success and lower first year survival. We measured heart rate responses of Yellow-eyed penguins via artificial eggs to evaluate a range of human stimuli regularly occurring at their breeding sites. We found the duration of a stimulus to be the most important factor, with elevated heart rate being sustained while a person remained within sight. Human activity was the next important component; a simulated wildlife photographer, crawling slowly around during his stay, elicited a significantly higher heart rate response than an entirely motionless human spending the same time at the same distance. Stimuli we subjectively might perceive as low impact, such as the careful approach of a 'wildlife photographer', resulted in a stronger response than a routine nest-check that involved lifting a bird up to view nest contents. A single, slow-moving human spending 20 min within 2 m from the nest may provoke a response comparable to that of 10 min handling a bird for logger deployment. To reduce cumulative impact of disturbance, any human presence in the proximity of Yellow-eyed penguins needs to be kept at a minimum. Our results highlight the need for objective quantification of the effects of human disturbance in order to provide a sound basis for guidelines to manage human activity around breeding birds. PMID:27293597

  6. Public involvement in multi-objective water level regulation development projects-evaluating the applicability of public involvement methods

    SciTech Connect

    Vaentaenen, Ari, E-mail: armiva@utu.fi; Marttunen, Mika, E-mail: Mika.Marttunen@ymparisto.fi

    2005-04-15

    Public involvement is a process that involves the public in the decision making of an organization, for example a municipality or a corporation. It has developed into a widely accepted and recommended policy in environment-altering projects. The EU Water Framework Directive (WFD) came into force in 2000 and stresses the importance of public involvement in composing river basin management plans. Therefore, the need to develop public involvement methods for different situations and circumstances is evident. This paper describes how various public involvement methods have been applied in a development project involving the most heavily regulated lake in Finland. The objective of the project was to assess the positive and negative impacts of regulation and to find possibilities for alleviating the adverse impacts on recreational use and the aquatic ecosystem. An exceptional effort was made towards public involvement, which was closely connected to planning and decision making. The applied methods were (1) steering group work, (2) survey, (3) dialogue, (4) theme interviews, (5) public meeting and (6) workshops. The information gathered using these methods was utilized in different stages of the project, e.g., in identifying the regulation impacts, comparing alternatives and compiling the recommendations for regulation development. After describing our case and the results from the applied public involvement methods, we will discuss our experiences and the feedback from the public. We will also critically evaluate our own success in coping with public involvement challenges. In addition to that, we present general recommendations for dealing with these problematic issues based on our experiences, which provide new insights for applying various public involvement methods in multi-objective decision making projects.

  7. Spatial Spectrum Analyzer (SSA): A tool for calculations of spatial distribution of fast Fourier transform spectrum from Object Oriented Micromagnetic Framework output data

    NASA Astrophysics Data System (ADS)

    Frankowski, Marek; Chęciński, Jakub; Czapkiewicz, Maciej

    2015-04-01

    We present a tool for calculating the spatial distribution of the fast Fourier transform spectrum of magnetization dynamics simulated in the Object Oriented Micromagnetic Framework (OOMMF). In OOMMF, as well as in other popular micromagnetic software, the output data are organized as magnetization vectors from each simulation cell, written to a separate file for each simulation step. Therefore, we use parallel computations to reorganize the data into files containing the time evolution of each cell. The fast Fourier transform is then obtained for a selected time period by parallel computations using Matlab. The output is a spatial distribution of the spectral magnitude at the selected frequency over the sample cross-section. It allows analysis of spin-wave localization and therefore helps to understand the origin of spin waves in the investigated sample.
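
    A minimal, serial sketch of the core computation (a per-cell FFT over time, followed by the magnitude map at a chosen frequency) is shown below in Python rather than Matlab; the synthetic array stands in for data that would in practice be assembled from OOMMF output files, and the time step and target frequency are assumed values.

```python
# Minimal sketch: per-cell FFT over time of one magnetization component and
# the spatial map of spectral magnitude at a chosen frequency. The input
# array, time step, and frequency are synthetic assumptions; real data would
# be read from OOMMF output files.
import numpy as np

def spatial_spectrum(m_t, dt, target_freq_hz):
    """m_t: array of shape (n_steps, ny, nx) holding one magnetization component."""
    spectrum = np.fft.rfft(m_t - m_t.mean(axis=0), axis=0)
    freqs = np.fft.rfftfreq(m_t.shape[0], d=dt)
    idx = np.argmin(np.abs(freqs - target_freq_hz))
    return np.abs(spectrum[idx]), freqs[idx]

dt = 5e-12                                   # assumed simulation time step, s
steps, ny, nx = 2048, 32, 64
t = np.arange(steps) * dt
# Synthetic ~5 GHz precession with a random spatial envelope.
m = np.sin(2 * np.pi * 5e9 * t)[:, None, None] * np.random.rand(ny, nx)
amp_map, f = spatial_spectrum(m, dt, 5e9)
print(amp_map.shape, f)
```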

  8. A strategy to objectively evaluate the necessity of correcting detected target deviations in image guided radiotherapy

    SciTech Connect

    Yue, Ning J.; Kim, Sung; Jabbour, Salma; Narra, Venkat; Haffty, Bruce G.

    2007-11-15

    Image guided radiotherapy technologies are being increasingly utilized in the treatment of various cancers. These technologies have enhanced the ability to detect temporal and spatial deviations of the target volume relative to planned radiation beams. Correcting these detected deviations may, in principle, improve the accuracy of dose delivery to the target. However, in many situations, a clinical decision has to be made as to whether it is necessary to correct some of the deviations since the relevant dosimetric impact may or may not be significant, and the corresponding corrective action may be either impractical or time consuming. Ideally this decision should be based on objective and reproducible criteria rather than subjective judgment. In this study, a strategy is proposed for the objective evaluation of the necessity of deviation correction during the treatment verification process. At the treatment stage, without any alteration from the planned beams, the treatment beams should provide the desired dose coverage to the geometric volume identical to the planning target volume (PTV). Given this fact, the planned dose distribution and PTV geometry were used to compute the dose coverage and PTV enclosure of the clinical target volume (CTV) that was detected from imaging during the treatment setup verification. The spatial differences between the detected CTV and the planning CTV are essentially the target deviations. The extent of the PTV enclosure of the detected CTV as well as its dose coverage were used as criteria to evaluate the necessity of correcting any of the target deviations. This strategy, in principle, should be applicable to any type of target deviations, including both deformations and positional changes of the target, and should be independent of how the deviations are detected. The proposed strategy was used on two clinical prostate cancer cases. In both cases, gold markers were implanted inside the prostate for the purpose of treatment setup
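
    A simplified voxel-mask sketch of the two criteria described above (PTV enclosure of the detected CTV and its dose coverage) is given below; the masks, dose values, and prescription level are synthetic assumptions, not clinical data.

```python
# Minimal sketch: fraction of the detected CTV enclosed by the planning PTV,
# and fraction of detected-CTV voxels receiving at least the prescription
# dose under the planned dose distribution. All arrays are synthetic.
import numpy as np

def enclosure_and_coverage(ctv_mask, ptv_mask, dose, prescription):
    enclosure = (ctv_mask & ptv_mask).sum() / ctv_mask.sum()
    coverage = (dose[ctv_mask] >= prescription).mean()
    return enclosure, coverage

shape = (64, 64, 32)
ptv = np.zeros(shape, bool); ptv[20:44, 20:44, 8:24] = True     # planning PTV
ctv = np.zeros(shape, bool); ctv[24:42, 22:42, 10:22] = True    # shifted "detected" CTV
dose = np.where(ptv, 76.0, 40.0) + np.random.normal(0.0, 1.0, shape)  # assumed dose, Gy
print(enclosure_and_coverage(ctv, ptv, dose, prescription=74.0))
```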

  9. Futurism: Framework for Composition.

    ERIC Educational Resources Information Center

    Keroack, Elizabeth Carros; Marquis, Leah Keating

    Noting that the study of the future has been neglected within the language arts framework, this paper proposes a curriculum unit that uses such study as a vehicle to develop composition skills. The paper provides the following information: the general objectives of the unit; evaluation methods; general humanistic themes to be studied; materials;…

  10. Objective Error Criterion for Evaluation of Mapping Accuracy Based on Sensor Time-of-Flight Measurements

    PubMed Central

    Barshan, Billur

    2008-01-01

    An objective error criterion is proposed for evaluating the accuracy of maps of unknown environments acquired by making range measurements with different sensing modalities and processing them with different techniques. The criterion can also be used for the assessment of goodness of fit of curves or shapes fitted to map points. A demonstrative example from ultrasonic mapping is given based on experimentally acquired time-of-flight measurements and compared with a very accurate laser map, considered as absolute reference. The results of the proposed criterion are compared with the Hausdorff metric and the median error criterion results. The error criterion is sufficiently general and flexible that it can be applied to discrete point maps acquired with other mapping techniques and sensing modalities as well.
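
    For comparison, the two baseline measures mentioned above can be computed directly with SciPy, as in the following sketch; the reference and test point sets are synthetic 2-D examples, not the ultrasonic and laser maps of the study.

```python
# Minimal sketch: symmetric Hausdorff distance and median nearest-neighbour
# error between a test map and a reference map (synthetic point sets).
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a, b):
    # Symmetric Hausdorff distance between two 2-D point sets.
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

def median_error(test_pts, ref_pts):
    # Median distance from each test point to its nearest reference point.
    dists, _ = cKDTree(ref_pts).query(test_pts)
    return np.median(dists)

ref = np.random.rand(200, 2)                               # stand-in reference map
test = ref + np.random.normal(scale=0.01, size=ref.shape)  # noisy stand-in test map
print(hausdorff(test, ref), median_error(test, ref))
```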

  11. Evaluation of Interpolation Strategies for the Morphing of Musical Sound Objects

    NASA Astrophysics Data System (ADS)

    O'Reilly Regueiro, Federico

    Audio morphing is a timbre-transformation technique that produces timbres which lie in between those of two or more given tones. It can thus be seen as the interpolation of timbre descriptors or features. Morphing is most convincing when the features are perceptually relevant and the interpolation is perceived to be smooth and linear. Our research aims at producing practical guidelines for morphing musical sound objects. We define a set of features aimed at representing timbre in a quantifiable fashion, as completely and with as little redundancy as possible. We then report the interpolation of each single feature imposed on an otherwise neutral synthetic sound, exploring strategies to obtain smooth-sounding interpolations. The chosen strategies are then evaluated by morphing recorded acoustic instrumental sounds. All of the scripts and the resulting sounds are available to the reader on the web.

  12. Multi-attribute subjective evaluations of manual tracking tasks vs. objective performance of the human operator

    NASA Technical Reports Server (NTRS)

    Siapkaras, A.

    1977-01-01

    A computational method to deal with the multidimensional nature of tracking and/or monitoring tasks is developed. Operator-centered variables, including the operator's perception of the task, are considered. Matrix ratings are defined based on multidimensional scaling techniques and multivariate analysis. The method consists of two distinct steps: (1) to determine the mathematical space of subjective judgements of a certain individual (or group of evaluators) for a given set of tasks and experimental conditions; and (2) to relate this space to both the task variables and the objective performance criteria used. Results for a variety of second-order tracking tasks with smoothed noise-driven inputs indicate that: (1) many of the internally perceived task variables form a nonorthogonal set; and (2) the structure of the subjective space varies among groups of individuals according to the degree of familiarity they have with such tasks.

  13. Modification and evaluation of a Barnes-type objective analysis scheme for surface meteorological data

    NASA Technical Reports Server (NTRS)

    Smith, D. R.

    1982-01-01

    The Purdue Regional Objective Analysis of the Mesoscale (PROAM) is a Barnes-type scheme for the analysis of surface meteorological data. Modifications are introduced to the original version in order to increase its flexibility and to permit greater ease of use. The code was rewritten for an interactive computer environment. Furthermore, a multiple iteration technique suggested by Barnes was implemented for greater accuracy. PROAM was subjected to a series of experiments in order to evaluate its performance under a variety of analysis conditions. The tests include use of a known analytic temperature distribution in order to quantify error bounds for the scheme. Similar experiments were conducted using actual atmospheric data. Results indicate that the multiple iteration technique increases the accuracy of the analysis. Furthermore, the tests verify appropriate values for the analysis parameters in resolving meso-beta scale phenomena.
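
    A minimal sketch of a Barnes-type successive-correction analysis is shown below: observations are spread to a regular grid with Gaussian weights, and a second pass adds back weighted residuals at a reduced length scale. The parameter values and synthetic observations are illustrative assumptions and do not reproduce PROAM itself.

```python
# Minimal sketch of a two-pass Barnes-type objective analysis on synthetic
# surface observations. kappa and gamma are illustrative assumptions.
import numpy as np

def barnes_pass(grid_x, grid_y, obs_x, obs_y, obs_val, kappa):
    # Gaussian-weighted mean of the observations at each grid point.
    dx = grid_x[..., None] - obs_x
    dy = grid_y[..., None] - obs_y
    w = np.exp(-(dx**2 + dy**2) / kappa)
    return (w * obs_val).sum(axis=-1) / w.sum(axis=-1)

def barnes_analysis(grid_x, grid_y, obs_x, obs_y, obs_val, kappa=2.0, gamma=0.3):
    first = barnes_pass(grid_x, grid_y, obs_x, obs_y, obs_val, kappa)
    # Residuals at the observation sites drive the correction pass.
    resid = obs_val - barnes_pass(obs_x, obs_y, obs_x, obs_y, obs_val, kappa)
    return first + barnes_pass(grid_x, grid_y, obs_x, obs_y, resid, gamma * kappa)

gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
ox, oy = np.random.rand(2, 40) * 10          # assumed station locations
ov = np.sin(ox) + 0.1 * oy                   # assumed "surface temperature" values
print(barnes_analysis(gx, gy, ox, oy, ov).shape)
```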

  14. Evaluating Snow Data Assimilation Framework for Streamflow Forecasting Applications Using Hindcast Verification

    NASA Astrophysics Data System (ADS)

    Barik, M. G.; Hogue, T. S.; Franz, K. J.; He, M.

    2012-12-01

    Snow water equivalent (SWE) estimation is a key factor in producing reliable streamflow simulations and forecasts in snow dominated areas. However, measuring or predicting SWE has significant uncertainty. Sequential data assimilation, which updates states using both observed and modeled data based on error estimation, has been shown to reduce streamflow simulation errors but has had limited testing for forecasting applications. In the current study, a snow data assimilation framework integrated with the National Weather Service River Forecast System (NWSRFS) is evaluated for use in ensemble streamflow prediction (ESP). Seasonal water supply ESP hindcasts are generated for the North Fork of the American River Basin (NFARB) in northern California. Parameter sets from the California Nevada River Forecast Center (CNRFC), the Differential Evolution Adaptive Metropolis (DREAM) algorithm and the Multistep Automated Calibration Scheme (MACS) are tested both with and without sequential data assimilation. The traditional ESP method considers uncertainty in future climate conditions using historical temperature and precipitation time series to generate future streamflow scenarios conditioned on the current basin state. We include data uncertainty analysis in the forecasting framework through the DREAM-based parameter set which is part of a recently developed Integrated Uncertainty and Ensemble-based data Assimilation framework (ICEA). Extensive verification of all tested approaches is undertaken using traditional forecast verification measures, including root mean square error (RMSE), Nash-Sutcliffe efficiency coefficient (NSE), volumetric bias, joint distribution, rank probability score (RPS), and discrimination and reliability plots. In comparison to the RFC parameters, the DREAM and MACS sets show significant improvement in volumetric bias in flow. Use of assimilation improves hindcasts of higher flows but does not significantly improve performance in the mid flow and
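
    Three of the verification measures listed above have simple closed forms, sketched below for a small set of simulated versus observed seasonal volumes; the numbers are invented for illustration and are unrelated to the NFARB hindcasts.

```python
# Minimal sketch of three verification measures named above: RMSE,
# Nash-Sutcliffe efficiency, and volumetric bias (invented volumes).
import numpy as np

def rmse(sim, obs):
    return np.sqrt(np.mean((sim - obs) ** 2))

def nse(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def volumetric_bias(sim, obs):
    # Fractional bias of total simulated volume relative to observed volume.
    return (sim.sum() - obs.sum()) / obs.sum()

obs = np.array([120.0, 95.0, 140.0, 80.0, 110.0])   # assumed observed volumes
sim = np.array([112.0, 101.0, 150.0, 75.0, 108.0])  # assumed hindcast volumes
print(rmse(sim, obs), nse(sim, obs), volumetric_bias(sim, obs))
```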

  15. Objective evaluation of situation awareness for dynamic decision makers in teleoperations

    NASA Technical Reports Server (NTRS)

    Endsley, Mica R.

    1991-01-01

    Situation awareness, a current mental model of the environment, is critical to the ability of operators to perform complex and dynamic tasks. This should be particularly true for teleoperators, who are separated from the situation they need to be aware of. The design of the man-machine interface must be guided by the goal of maintaining and enhancing situation awareness. The objective of this work has been to build a foundation upon which research in the area can proceed. A model of dynamic human decision making which is inclusive of situation awareness will be presented, along with a definition of situation awareness. A method for measuring situation awareness will also be presented as a tool for evaluating design concepts. The Situation Awareness Global Assessment Technique (SAGAT) is an objective measure of situation awareness originally developed for the fighter cockpit environment. The results of SAGAT validation efforts will be presented. Implications of this research for teleoperators and other operators of dynamic systems will be discussed.

  16. The Application of Visual Saliency Models in Objective Image Quality Assessment: A Statistical Evaluation.

    PubMed

    Zhang, Wei; Borji, Ali; Wang, Zhou; Le Callet, Patrick; Liu, Hantao

    2016-06-01

    Advances in image quality assessment have shown the potential added value of including visual attention aspects in its objective assessment. Numerous models of visual saliency are implemented and integrated in different image quality metrics (IQMs), but the gain in reliability of the resulting IQMs varies to a large extent. Understanding the causes and trends of this variation would be highly beneficial for the further improvement of IQMs, but they are not fully understood. In this paper, an exhaustive statistical evaluation is conducted to justify the added value of computational saliency in objective image quality assessment, using 20 state-of-the-art saliency models and 12 best-known IQMs. Quantitative results show that the difference in predicting human fixations between saliency models is sufficient to yield a significant difference in performance gain when adding these saliency models to IQMs. However, surprisingly, the extent to which an IQM can profit from adding a saliency model does not appear to have direct relevance to how well this saliency model can predict human fixations. Our statistical analysis provides useful guidance for applying saliency models in IQMs, in terms of the effect of saliency model dependence, IQM dependence, and image distortion dependence. The testbed and software are made publicly available to the research community. PMID:26277009
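
    The basic integration strategy examined in such studies, weighting a local distortion map by a normalized saliency map before pooling, can be sketched as follows; the images and the saliency map here are random placeholders rather than outputs of any of the saliency models or IQMs tested.

```python
# Minimal sketch: saliency-weighted pooling of a local distortion map.
# The reference image, distorted image, and saliency map are placeholders.
import numpy as np

def saliency_weighted_mse(ref, dist, saliency):
    w = saliency / (saliency.sum() + 1e-12)   # normalize weights to sum to 1
    return np.sum(w * (ref - dist) ** 2)

ref = np.random.rand(128, 128)
dist = ref + np.random.normal(scale=0.05, size=ref.shape)
saliency = np.random.rand(128, 128)           # stand-in for a saliency model output
print(saliency_weighted_mse(ref, dist, saliency))
```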

  17. The social-devaluation effect: interactive evaluation deteriorates likeability of objects based on daily relationship

    PubMed Central

    Ariga, Atsunori

    2015-01-01

    Although previous research has explored the effects of discussion on optimal and collective group outcomes, it is unclear how an individual’s preference for an object is modulated by discussion with others. This study investigated the determinants of likeability ratings under two conditions. In Experiment 1, pairs of participants consisting of friends evaluated various photographic images. Under the interactive condition, the participants discussed their impressions of each image for 30 s and then independently rated how much they liked it. Under the non-interactive condition, the participants did not interact with each other but instead only thought about their impressions of each image for 30 s before rating its likeability. The results indicate that the exchange of impressions between the participants affected the individual likeability ratings of objects. More specifically, the interactive participants generally rated the images as less likeable than did the non-interactive participants (social-devaluation effect). However, in Experiment 2, the effect was eliminated when the pairs consisted of strangers. These findings suggest that shared information modulates individual preferences but only when a daily relationship exists within a group. PMID:25620947

  18. Domestic Water Service Delivery Indicators and Frameworks for Monitoring, Evaluation, Policy and Planning: A Review

    PubMed Central

    Kayser, Georgia L.; Moriarty, Patrick; Fonseca, Catarina; Bartram, Jamie

    2013-01-01

    Monitoring of water services informs policy and planning for national governments and the international community. Currently, the international monitoring system measures the type of drinking water source that households use. There have been calls for improved monitoring systems over several decades, some advocating use of multiple indicators. We review the literature on water service indicators and frameworks with a view to informing debate on their relevance to national and international monitoring. We describe the evidence concerning the relevance of each identified indicator to public health, economic development and human rights. We analyze the benefits and challenges of using these indicators separately and combined in an index as tools for planning, monitoring, and evaluating water services. We find substantial evidence on the importance of each commonly recommended indicator—service type, safety, quantity, accessibility, reliability or continuity of service, equity, and affordability. Several frameworks have been proposed that give structure to the relationships among individual indicators and some combine multiple indicator scores into a single index but few have been rigorously tested. More research is needed to understand if employing a composite metric of indicators is advantageous and how each indicator might be scored and scaled. PMID:24157507

  19. Evaluation of the 29-km Eta Model. Part 1; Objective Verification at Three Selected Stations

    NASA Technical Reports Server (NTRS)

    Nutter, Paul A.; Manobianco, John; Merceret, Francis J. (Technical Monitor)

    1998-01-01

    This paper describes an objective verification of the National Centers for Environmental Prediction (NCEP) 29-km eta model from May 1996 through January 1998. The evaluation was designed to assess the model's surface and upper-air point forecast accuracy at three selected locations during separate warm (May - August) and cool (October - January) season periods. In order to enhance sample sizes available for statistical calculations, the objective verification includes two consecutive warm and cool season periods. Systematic model deficiencies comprise the larger portion of the total error in most of the surface forecast variables that were evaluated. The error characteristics for both surface and upper-air forecasts vary widely by parameter, season, and station location. At upper levels, a few characteristic biases are identified. Overall however, the upper-level errors are more nonsystematic in nature and could be explained partly by observational measurement uncertainty. With a few exceptions, the upper-air results also indicate that 24-h model error growth is not statistically significant. In February and August 1997, NCEP implemented upgrades to the eta model's physical parameterizations that were designed to change some of the model's error characteristics near the surface. The results shown in this paper indicate that these upgrades led to identifiable and statistically significant changes in forecast accuracy for selected surface parameters. While some of the changes were expected, others were not consistent with the intent of the model updates and further emphasize the need for ongoing sensitivity studies and localized statistical verification efforts. Objective verification of point forecasts is a stringent measure of model performance, but when used alone, is not enough to quantify the overall value that model guidance may add to the forecast process. Therefore, results from a subjective verification of the meso-eta model over the Florida peninsula are

  20. Involving stakeholders in the evaluation of community alcohol projects: finding a balance between subjective insight and objective facts.

    PubMed

    Boots, Kevin; Midford, Richard

    2007-01-01

    The role played by key community representatives in the evaluation of community alcohol projects differs according to the evaluation paradigm adopted. In evaluations that adopt a positivist, experimental design they are cast in the role of independent informants. In post-positivist evaluations they are seen as having an interest in the evaluation and accordingly are considered active stakeholders. However, the degree to which stakeholders can be actively engaged in an evaluation varies considerably along a number of dimensions. Four dimensions of the stakeholder role--stakeholder inclusiveness, participation mode, participation frequency, and evaluation role--are examined in the context of eight evaluation theories. This is integrated into a model that links these dimensions to an object-subject continuum of stakeholder involvement. The model facilitates systematic consideration of these dimensions and will assist evaluators in achieving their desired balance of subjective insight and objective fact. PMID:18075920

  1. Angiogenesis in tissue-engineered nerves evaluated objectively using MICROFIL perfusion and micro-CT scanning

    PubMed Central

    Wang, Hong-kui; Wang, Ya-xian; Xue, Cheng-bin; Li, Zhen-mei-yu; Huang, Jing; Zhao, Ya-hong; Yang, Yu-min; Gu, Xiao-song

    2016-01-01

    Angiogenesis is a key process in regenerative medicine generally, as well as in the specific field of nerve regeneration. However, no convenient and objective method for evaluating the angiogenesis of tissue-engineered nerves has been reported. In this study, tissue-engineered nerves were constructed in vitro using Schwann cells differentiated from rat skin-derived precursors as supporting cells and chitosan nerve conduits combined with silk fibroin fibers as scaffolds to bridge 10-mm sciatic nerve defects in rats. Four weeks after surgery, three-dimensional blood vessel reconstructions were made through MICROFIL perfusion and micro-CT scanning, and parameter analysis of the tissue-engineered nerves was performed. New blood vessels grew into the tissue-engineered nerves from three main directions: the proximal end, the distal end, and the middle. The parameter analysis of the three-dimensional blood vessel images yielded several parameters, including the number, diameter, connection, and spatial distribution of blood vessels. The new blood vessels were mainly capillaries and microvessels, with diameters ranging from 9 to 301 μm. The blood vessels with diameters from 27 to 155 μm accounted for 82.84% of the new vessels. The microvessels in the tissue-engineered nerves implanted in vivo were relatively well-identified using the MICROFIL perfusion and micro-CT scanning method, which allows the evaluation and comparison of differences and changes of angiogenesis in tissue-engineered nerves implanted in vivo. PMID:26981108

  2. Experimental Evaluation of Processing Time for the Synchronization of XML-Based Business Objects

    NASA Astrophysics Data System (ADS)

    Ameling, Michael; Wolf, Bernhard; Springer, Thomas; Schill, Alexander

    Business objects (BOs) are data containers for complex data structures used in business applications such as Supply Chain Management and Customer Relationship Management. Due to the replication of application logic, multiple copies of BOs are created which have to be synchronized and updated. This is a complex and time-consuming task because BOs vary widely in their structure according to the distribution, number and size of elements. Since BOs are internally represented as XML documents, the parsing of XML is one major cost factor which has to be considered for minimizing the processing time during synchronization. Predicting the parsing time of a BO is therefore important for selecting an efficient synchronization mechanism. In this paper, we present a method to evaluate the influence of the structure of BOs on their parsing time. Our experimental evaluation, incorporating four different XML parsers, examines the dependencies between the distribution of elements and the parsing time. Finally, a general cost model is validated and simplified according to the results of the experimental setup.
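
    As an illustration only (not the paper's setup): the sketch below, in Python, times the parsing of a synthetic, BO-like XML document as the number of elements grows, using two standard-library parsers as stand-ins for the four parsers evaluated in the paper. The document generator and element counts are assumptions.

```python
# Illustrative sketch only: the paper's BO structures and parser set are not
# reproduced here. This measures how XML parsing time grows with the number of
# elements in a synthetic "business object" document, using two standard
# Python parsers as stand-ins.
import time
import xml.etree.ElementTree as etree
import xml.dom.minidom as minidom


def make_bo_xml(num_elements: int) -> str:
    """Build a flat, BO-like XML document with num_elements child nodes (hypothetical structure)."""
    items = "".join(f"<item id='{i}'><value>{i}</value></item>" for i in range(num_elements))
    return f"<businessObject>{items}</businessObject>"


def time_parser(parse, doc: str, repeats: int = 20) -> float:
    """Return the mean wall-clock parsing time in seconds."""
    start = time.perf_counter()
    for _ in range(repeats):
        parse(doc)
    return (time.perf_counter() - start) / repeats


if __name__ == "__main__":
    for n in (100, 1_000, 10_000):
        doc = make_bo_xml(n)
        t_etree = time_parser(etree.fromstring, doc)
        t_dom = time_parser(minidom.parseString, doc)
        print(f"{n:>6} elements: ElementTree {t_etree*1e3:.2f} ms, minidom {t_dom*1e3:.2f} ms")
```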

  3. Evaluation of Genetic Algorithm Concepts Using Model Problems. Part 2; Multi-Objective Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.

    2003-01-01

    A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of simple model problems. Several new features including a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all optimization problems attempted. The binning algorithm generally provides Pareto front quality enhancements and moderate convergence efficiency improvements for most of the model problems. The gene-space transformation procedure provides a large convergence efficiency enhancement for problems with non-convoluted Pareto fronts and a degradation in efficiency for problems with convoluted Pareto fronts. The most difficult problems--multi-mode search spaces with a large number of genes and convoluted Pareto fronts--require a large number of function evaluations for GA convergence, but always converge.
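
    The paper's binning selection and gene-space transformation are not reproduced here; as a minimal illustration of the bookkeeping any multi-objective GA relies on, the Python sketch below extracts the non-dominated (Pareto-optimal) members of a randomly generated two-objective population.

```python
# Generic sketch of Pareto-front extraction for a two-objective minimization
# problem. The paper's binning selection and gene-space transformation are not
# shown; this only illustrates the non-dominance test a multi-objective GA uses.
import random


def dominates(a, b):
    """True if objective vector a dominates b (minimization): no worse in all, better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))


def pareto_front(population):
    """Return the non-dominated members of a list of objective vectors."""
    return [p for p in population if not any(dominates(q, p) for q in population if q is not p)]


if __name__ == "__main__":
    random.seed(0)
    # Hypothetical population: each individual scored on two conflicting objectives.
    pop = [(random.random(), random.random()) for _ in range(50)]
    front = sorted(pareto_front(pop))
    print(f"{len(front)} non-dominated points out of {len(pop)}")
    for f1, f2 in front:
        print(f"  f1={f1:.3f}  f2={f2:.3f}")
```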

  4. Promoting ethical and objective practice in the medicolegal arena of disability evaluation.

    PubMed

    Martelli, M F; Zasler, N D; Johnson-Greene, D

    2001-08-01

    As providers of medical information and testimony, clinicians have ultimate responsibility for ethical conduct as it relates to this information. The authors offer the following recommendations for enhancing ethical relationships between expert clinicians and the courts. 1. Avoid or resist attorney efforts at enticement into joining the attorney-client team. Such compromises of scientific boundaries and ethical principles exist on a continuum ranging from standard attorney-client advocacy at the beginning of the expert consultation phase (e.g., promotional information at the forefront of retaining an expert, with either provision of selective or incomplete records or less than enthusiastic efforts to produce all records) and extending to completion of evaluation, when requests for changes in reports and documentation might be made. 2. Respect role boundaries and do not mix conflicting roles. Remember that the treating doctor possesses a bond with the patient but does not as a rule obtain complete preinjury and postinjury information in the context of assessing causality and apportionment. In contrast, the expert witness must conduct a thorough and multifaceted case analysis sans the physician-patient relationship in order to facilitate objectivity and allow optimum diagnostic formulations. Finally, the trial consultant's function in this adversarial process is to assist with critically scrutinizing and attacking positions of experts for the opposing side. These roles all represent inherently different interests, and mixing them can only reduce objectivity. 3. Insist on adequate time for thorough record review, evaluation, and report generation. Also insist on sufficient time and preparation for deposition and court appearances. 4. Work at building a reputation for general objectivity, reliance on multiple data sources, reaching opinions only after reviewing complete information from both sides, and completing the evaluation. 5. Spend a good amount of time actually

  5. A framework for the analysis and evaluation of optical imaging systems with arbitrary response functions

    NASA Astrophysics Data System (ADS)

    Wang, Zhipeng

    The scientific applications and engineering aspects of multispectral and hyperspectral imaging systems have been studied extensively. The traditional geometric spectral imaging system model was developed specifically for spectral sensors with spectrally non-overlapping bands. Spectral imaging systems with overlapping bands also exist. For example, quantum-dot infrared photodetectors (QDIPs) for midwave- and longwave-infrared (IR) imaging systems exhibit highly overlapping spectral responses that are tunable through the applied bias voltages. This makes it possible to build a spectrally tunable IR imaging system based on a single QDIP. Furthermore, a QDIP-based system can be operated adaptively with respect to the scene. Other optical imaging systems, such as the human eye and some polarimetric sensing systems, also have overlapping bands. To analyze such sensors, a functional analysis-based framework is provided in this dissertation. The framework starts from a mathematical description of the interaction between the sensor and the radiation from the scene reaching it. A geometric model of the spectral imaging process is provided based on the framework. The spectral response functions and the scene spectra are considered as vectors within a finite-dimensional spectral space. The spectral imaging process is abstracted as a projection of the scene spectrum onto the sensor. The projected spectrum, which is the least-squares-error reconstruction of the scene vectors, contains the useful information for image processing. Spectral sensors with arbitrary spectral response functions can be analyzed with this model. The framework leads directly to an image pre-processing algorithm to remove the data correlation between bands. Further discussion shows that this model can also serve the purpose of sensor evaluation, and thus facilitates comparison between different sensors. The spectral shapes and the Signal-to-Noise Ratios (SNR) of different bands are seen to influence the sensor
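
    A minimal numerical sketch of the projection idea described above, assuming a hypothetical wavelength grid, Gaussian band shapes and scene spectrum: the least-squares reconstruction of the scene spectrum from overlapping band responses is its projection onto the subspace those responses span.

```python
# Minimal sketch of the projection idea: band response functions R (rows) span a
# subspace of "spectral space"; the part of a scene spectrum s the sensor can
# recover is its least-squares projection onto that subspace. Wavelength grid,
# Gaussian band shapes and the example spectrum are assumptions for illustration.
import numpy as np

wavelengths = np.linspace(400.0, 1000.0, 301)          # nm, hypothetical grid


def gaussian_band(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)


# Three strongly overlapping band responses (rows of R), as a QDIP-like sensor might have.
R = np.vstack([gaussian_band(c, 120.0) for c in (550.0, 700.0, 850.0)])

# Hypothetical scene spectrum.
s = 1.0 + 0.5 * np.sin(wavelengths / 80.0)

# Band measurements and least-squares reconstruction of s from the band responses.
measurements = R @ s
coeffs, *_ = np.linalg.lstsq(R.T, s, rcond=None)       # s is approximated by R.T @ coeffs
s_proj = R.T @ coeffs

residual = np.linalg.norm(s - s_proj) / np.linalg.norm(s)
print("band measurements:", np.round(measurements, 2))
print(f"relative part of the spectrum outside the sensor subspace: {residual:.3f}")
```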

  6. A framework for evaluating the impact of obesity prevention strategies on socioeconomic inequalities in weight.

    PubMed

    Backholer, Kathryn; Beauchamp, Alison; Ball, Kylie; Turrell, Gavin; Martin, Jane; Woods, Julie; Peeters, Anna

    2014-10-01

    We developed a theoretical framework to organize obesity prevention interventions by their likely impact on the socioeconomic gradient of weight. The degree to which an intervention involves individual agency versus structural change influences socioeconomic inequalities in weight. Agentic interventions, such as standalone social marketing, increase socioeconomic inequalities. Structural interventions, such as food procurement policies and restrictions on unhealthy foods in schools, show equal or greater benefit for lower socioeconomic groups. Many obesity prevention interventions, such as workplace design to encourage exercise and fiscal regulation of unhealthy foods or beverages, are agento-structural: they account for the environment in which health behaviors occur but still require a degree of individual agency for behavioral change. Obesity prevention interventions differ in their effectiveness across socioeconomic groups. Limiting further increases in socioeconomic inequalities in obesity requires implementation of structural interventions. Further empirical evaluation, especially of agento-structural type interventions, remains crucial. PMID:25121810

  7. An Evaluation of the High Level Architecture (HLA) as a Framework for NASA Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reid, Michael R.; Powers, Edward I. (Technical Monitor)

    2000-01-01

    The High Level Architecture (HLA) is a current US Department of Defense and an industry (IEEE-1516) standard architecture for modeling and simulations. It provides a framework and set of functional rules and common interfaces for integrating separate and disparate simulators into a larger simulation. The goal of the HLA is to reduce software costs by facilitating the reuse of simulation components and by providing a runtime infrastructure to manage the simulations. In order to evaluate the applicability of the HLA as a technology for NASA space mission simulations, a Simulations Group at Goddard Space Flight Center (GSFC) conducted a study of the HLA and developed a simple prototype HLA-compliant space mission simulator. This paper summarizes the prototyping effort and discusses the potential usefulness of the HLA in the design and planning of future NASA space missions with a focus on risk mitigation and cost reduction.

  8. Building an Evaluation Framework for a Competency-Based Graduate Program at the University Of Southern Mississippi

    ERIC Educational Resources Information Center

    Gaudet, Cyndi H.; Annulis, Heather M.; Kmiec, John J., Jr.

    2008-01-01

    This article describes an ongoing project to build a comprehensive evaluation framework for the competency-based Master of Science in Workforce Training and Development (MSWTD) program at The University of Southern Mississippi (USM). First, it discusses some trends and issues in evaluating the performance of higher education programs in the United…

  9. A spatial assessment framework for evaluating flood risk under extreme climates.

    PubMed

    Chen, Yun; Liu, Rui; Barrett, Damian; Gao, Lei; Zhou, Mingwei; Renzullo, Luigi; Emelyanova, Irina

    2015-12-15

    Australian coal mines have been facing a major challenge of increasing risk of flooding caused by intensive rainfall events in recent years. In light of growing climate change concerns and the predicted escalation of flooding, estimating flood inundation risk becomes essential for understanding sustainable mine water management in the Australian mining sector. This research develops a spatial multi-criteria decision making prototype for the evaluation of flooding risk at a regional scale, using the Bowen Basin and its surroundings in Queensland as a case study. Spatial gridded data, including climate, hydrology, topography, vegetation and soils, were collected and processed in ArcGIS. Several indices were derived based on time series of observations and spatial modeling, taking account of extreme rainfall, evapotranspiration, stream flow, potential soil water retention, elevation and slope generated from a digital elevation model (DEM), as well as drainage density and proximity extracted from a river network. These spatial indices were weighted using the analytical hierarchy process (AHP) and integrated in an AHP-based suitability assessment (AHP-SA) model under the spatial risk evaluation framework. A regional flooding risk map was delineated to represent likely impacts of criterion indices at different risk levels, and was verified using the maximum inundation extent detectable in a time series of remote sensing imagery. The result provides baseline information to help Bowen Basin coal mines identify and assess flooding risk when developing adaptation strategies and implementing mitigation measures in the future. The framework and methodology developed in this research offer the Australian mining industry, and social and environmental studies around the world, an effective way to produce reliable assessments of flood risk for managing uncertainty in water availability under climate change. PMID:26318687
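
    As a hedged illustration of the AHP weighting step only (the criteria, pairwise judgments and toy rasters below are hypothetical, not the study's), the Python sketch derives criterion weights from the principal eigenvector of a pairwise comparison matrix, checks the consistency ratio, and applies the weights in a simple overlay.

```python
# Sketch of the AHP weighting step only; the pairwise judgments and the four
# criteria below are hypothetical, not the study's. Weights come from the
# principal eigenvector of the pairwise comparison matrix, and a consistency
# ratio is checked against the commonly cited Saaty random indices.
import numpy as np

criteria = ["extreme rainfall", "slope", "drainage proximity", "soil retention"]

# Hypothetical pairwise comparison matrix (Saaty 1-9 scale), reciprocal by construction.
A = np.array([
    [1.0, 3.0, 5.0, 4.0],
    [1/3, 1.0, 2.0, 2.0],
    [1/5, 1/2, 1.0, 1.0],
    [1/4, 1/2, 1.0, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]          # commonly cited Saaty random index
cr = ci / ri
print(dict(zip(criteria, np.round(weights, 3))), f"CR = {cr:.3f} (should be < 0.1)")

# Weighted overlay of normalized criterion rasters (toy 2x2 grids standing in for GIS layers).
layers = np.random.default_rng(1).random((n, 2, 2))
risk_map = np.tensordot(weights, layers, axes=1)
print(risk_map.round(3))
```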

  10. Evaluation of a subject-specific musculoskeletal modelling framework for load prediction in total knee arthroplasty.

    PubMed

    Chen, Zhenxian; Zhang, Zhifeng; Wang, Ling; Li, Dichen; Zhang, Yuanzhi; Jin, Zhongmin

    2016-08-01

    Musculoskeletal (MSK) multibody dynamics (MBD) models have been used to predict in vivo biomechanics in total knee arthroplasty (TKA). However, a full lower limb MSK MBD modelling approach for TKA that combines subject-specific skeletal and prosthetic knee geometry has not yet been applied and evaluated over a range of patients. This study evaluated a subject-specific MSK MBD modelling framework for TKA using force-dependent kinematics (FDK) and applied it to predict knee contact forces during gait trials for three patients implanted with instrumented prosthetic knees. The prediction accuracy was quantified in terms of the mean absolute deviation (MAD), root mean square error (RMSE), Pearson correlation coefficient (ρ), and Sprague and Geers metrics of magnitude (M), phase (P) and combined error (C). Generally good agreements were found between the predictions and the experimental measurements from all patients for the medial contact forces (150 N < MAD < 178 N, 174 N < RMSE < 224 N, 0.87 < ρ < 0.95, -0.04 < M < 0.20, 0.06 < P < 0.09, 0.08 < C < 0.22) and the lateral contact force (113 N < MAD < 195 N, 131 N < RMSE < 240 N, 0.41 < ρ < 0.82, -0.25 < M < 0.34, 0.08 < P < 0.22, 0.13 < C < 0.36). The results suggest that the subject-specific MSK MBD modelling framework for TKA using FDK has potential as a powerful tool for investigating the functional outcomes of knee implants. PMID:27245748
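
    The sketch below computes the agreement metrics named above for synthetic predicted and measured contact-force traces (the real gait data are not reproduced); the magnitude, phase and combined errors follow the commonly cited Sprague and Geers (2004) definitions, which is an assumption about the exact formulation used in the study.

```python
# Sketch of the agreement metrics named in the abstract, applied to synthetic
# predicted vs. measured knee contact-force traces (the real gait data are not
# reproduced). The magnitude/phase/combined errors follow the commonly cited
# Sprague & Geers (2004) definitions.
import numpy as np

t = np.linspace(0.0, 1.0, 200)                        # one gait cycle, arbitrary time units
measured = 1200.0 * np.sin(np.pi * t) ** 2            # hypothetical medial contact force [N]
predicted = 1100.0 * np.sin(np.pi * (t - 0.02)) ** 2  # hypothetical model output [N]

mad = np.mean(np.abs(predicted - measured))
rmse = np.sqrt(np.mean((predicted - measured) ** 2))
rho = np.corrcoef(predicted, measured)[0, 1]

# Sprague & Geers comparison metrics.
pp = np.trapz(predicted * predicted, t)
mm = np.trapz(measured * measured, t)
pm = np.trapz(predicted * measured, t)
M = np.sqrt(pp / mm) - 1.0                            # magnitude error
P = np.arccos(pm / np.sqrt(pp * mm)) / np.pi          # phase error
C = np.sqrt(M ** 2 + P ** 2)                          # combined error

print(f"MAD={mad:.0f} N  RMSE={rmse:.0f} N  rho={rho:.2f}  M={M:.2f}  P={P:.2f}  C={C:.2f}")
```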

  11. They're Happy, But Did They Make a Difference? Applying Kirkpatrick's Framework to the Evaluation of a National Leadership Program.

    ERIC Educational Resources Information Center

    McLean, Scott; Moss, Gwenna

    2003-01-01

    Examined the Kirkpatrick evaluation framework (D. Kirkpatrick, 1959) through a case study of a national leadership development program, the Canadian Agriculture Lifetime Leadership Program. Draws conclusions about using the Kirkpatrick framework to evaluate noncredit educational programs, and shows how the framework enabled productive formative…

  12. Development and evaluation of an active instructional framework for undergraduate biology education

    NASA Astrophysics Data System (ADS)

    Lysne, Steven John

    my student interviews suggested, I found that engaging students by way of innovative instructional approaches is a major theme in science education. I conclude by arguing for the development of collaborative learning communities and the use of cognitive apprenticeships in science classrooms. In Chapter 4 I presented the development and initial evaluation of an instructional framework for undergraduate biology classrooms. I found that student satisfaction as measured by end-of-course evaluations increased compared to my previous instructional model. I concluded that the instructional framework was efficacious and proceeded to evaluate the model in the context of knowledge acquisition and retention. Chapter 5 is the culmination of the work I conducted for the research presented in Chapters 2 through 4. In Chapter 5 I formally test the hypotheses that my instructional framework presented in Chapter 4 results in no greater knowledge acquisition or retention compared to a more traditional lecture model of instruction. I failed to reject these hypotheses, which runs contrary to much of the published literature; the implications of my findings are discussed.

  13. Evaluation of midwifery students’ competency in providing intrauterine device services using objective structured clinical examination

    PubMed Central

    Erfanian, Fatemeh; Khadivzadeh, Talaat

    2011-01-01

    BACKGROUND: Delivering IUD services is one of the important competencies that midwifery students must obtain during their academic period. As the Objective Structured Clinical Examination (OSCE) can be a reasonably reliable, valid and objective method for clinical skills assessment, this study was conducted to assess midwifery students’ skill in delivering intrauterine device (IUD) services using a clinical examination, and their satisfaction with the OSCE. METHODS: All of the 62 eligible Bachelor of Science midwifery students of Mashhad University of Medical Sciences participated in a ten-station, 50-minute OSCE on delivering IUD services in 2006. Students performed technical skills or interacted with standard patients in 6 stations, and in 4 stations they answered related questions. Students’ performance in the 6 stations was rated by an observer or standard patients using validated checklists. Students’ level of satisfaction and their experience of participating in the OSCE were also gathered. RESULTS: Performance of 98.2% of students was poor. On average, the students gained 49% of the total score in counseling and screening, 35.7% in inserting the IUD, 40% in IUD removal and 24.4% in management of IUD side effects. Eighty percent of students rated their satisfaction with the OSCE as high or very high. Students reported the OSCE as an enjoyable examination experience. CONCLUSIONS: Students’ skill in delivering IUD services was lower than the expected level, which shows the need to change the current teaching methods. The OSCE is a valid evaluation method that provides valuable information which cannot be obtained by more traditional assessment modalities. Based on the findings of this study, a workshop program on providing IUD services should be prepared for midwifery students and family planning providers. PMID:22224105

  14. Objective and Subjective Evaluation of Reflecting and Diffusing Surfaces in Auditoria

    NASA Astrophysics Data System (ADS)

    Cox, Trevor John

    Available from UMI in association with The British Library. Requires signed TDF. The performance of reflectors and diffusers used in auditoria has been evaluated both objectively and subjectively. Two accurate systems have been developed to measure the scattering from surfaces via the cross-correlation function. These have been used to measure the scattering from plane panels, curved panels and quadratic residue diffusers (QRDs). The scattering measurements have been used to test theoretical prediction methods based on the Helmholtz-Kirchhoff integral equation. Accurate prediction methods were found for all surfaces tested. The limitations of the more approximate methods have been defined. The assumptions behind Schroeder's design of the QRD have been tested and the locally reacting admittance assumption found to be valid over a wide frequency range. It was found that the QRD only produces uniform scattering at low frequencies. For an on-axis source the scattering from a curved panel was as good as that from a QRD. For an oblique source the QRD produced much more uniform scattering than the curved panel. The subjective measurements evaluated the smallest perceivable change in the early sound field, the part most influenced by reflectors and diffusers. A natural-sounding simulation of a concert hall field within an anechoic chamber was used. Standard objective parameters took reasonable values when compared to values found in real halls and to subjective preference measurements. A difference limen was measured for early lateral energy fraction (.048 +/- .005); interaural cross correlation (.075 +/- .008); clarity index (.67 +/- .13 dB); and centre time (8.6 +/- 1.6 ms). It was found that: (i) when changes are made to diffusers and reflectors, changes in spatial impression will usually be larger than those in clarity; and (ii) acousticians can gain most by paying attention to lateral sound in auditoria. It was also found that: (i) diffuse reflections in the early sound field

  15. Evaluation of Capacity-Building Program of District Health Managers in India: A Contextualized Theoretical Framework

    PubMed Central

    Prashanth, N. S.; Marchal, Bruno; Kegels, Guy; Criel, Bart

    2014-01-01

    Performance of local health services managers at district level is crucial to ensure that health services are of good quality and cater to the health needs of the population in the area. In many low- and middle-income countries, health services managers are poorly equipped with public health management capacities needed for planning and managing their local health system. In the south Indian Tumkur district, a consortium of five non-governmental organizations partnered with the state government to organize a capacity-building program for health managers. The program consisted of a mix of periodic contact classes, mentoring and assignments and was spread over 30 months. In this paper, we develop a theoretical framework in the form of a refined program theory to understand how such a capacity-building program could bring about organizational change. A well-formulated program theory enables an understanding of how interventions could bring about improvements and an evaluation of the intervention. In the refined program theory of the intervention, we identified various factors at individual, institutional, and environmental levels that could interact with the hypothesized mechanisms of organizational change, such as staff’s perceived self-efficacy and commitment to their organizations. Based on this program theory, we formulated context–mechanism–outcome configurations that can be used to evaluate the intervention and, more specifically, to understand what worked, for whom and under what conditions. We discuss the application of program theory development in conducting a realist evaluation. Realist evaluation embraces principles of systems thinking by providing a method for understanding how elements of the system interact with one another in producing a given outcome. PMID:25121081

  16. Objective evaluation of two deworming regimens in young Thoroughbreds using parasitological and performance parameters.

    PubMed

    Bellaw, Jennifer L; Pagan, Joe; Cadell, Steve; Phethean, Eileen; Donecker, John M; Nielsen, Martin K

    2016-05-15

    Parasitic helminths of equids are capable of causing ill-thrift, clinical disease, and death. Although young horses are the most susceptible to parasitic disease and are the most intensively treated cohort, deworming regimens are rarely evaluated within this age group. This study objectively evaluated the impact of deworming regimen on fecal egg counts (FECs), growth rates, and body-condition scores in young Thoroughbreds. Forty-eight Thoroughbred foals from three central Kentucky farms were randomly allocated to two treatment groups: an interval dose program receiving bi-monthly rotations of pyrantel pamoate and ivermectin and a daily deworming group receiving daily rations of pyrantel tartrate feed additive throughout the study, oxibendazole at two months of age, and moxidectin treatments at 9.5 and 16.5 months of age. Pre- and post-treatment eggs per gram of feces (EPGs) of Parascaris spp. and strongyle family parasites, gel/paste dewormer efficacies, and monthly weights and body condition scores were collected. Ascarid and strongyle FECs were not significantly different between groups but were significantly influenced by horse age with strongyle counts continually increasing and ascarid counts peaking at 4.5 months of age. Reduced strongyle efficacies of ivermectin and moxidectin were observed on two farms with consistently low pyrantel pamoate efficacies on all three farms. Ivermectin also exhibited reduced ascarid efficacy. Average daily gain did not differ significantly between groups and was only significantly influenced by age, mirroring average daily gain reference data for Kentucky Thoroughbreds born in 2013. Body condition scores also did not differ between groups, remaining in the optimal range (5-6) for the duration of the study. Management practices resulting in growth rates matching the reference data and in optimal body condition scores compensate for the negative impacts of parasitism even in cases of reduced drug efficacy. Performance parameters

  17. Objective assessment of aesthetic outcome after breast conserving therapy: subjective third party panel rating and objective BCCT.core software evaluation.

    PubMed

    Heil, Joerg; Carolus, Anne; Dahlkamp, Julia; Golatta, Michael; Domschke, Christoph; Schuetz, Florian; Blumenstein, Maria; Rauch, Geraldine; Sohn, Christof

    2012-02-01

    We analysed intra- and inter-rater agreement of subjective third-party assessment and its agreement with a semi-automated objective software evaluation tool (BCCT.core). We presented standardized photographs of 50 patients, taken shortly after and one year after surgery, to a panel of five breast surgeons, six breast nurses, seven members of a breast cancer support group, five medical and seven non-medical students. In two rounds they rated aesthetic outcome on a four-point scale. Moreover, the same photographs were evaluated by the BCCT.core software. Intra-rater agreement among the panel members was moderate to substantial (k = 0.4-0.5; wk = 0.6-0.7, according to the different subgroups and times of assessment). In contrast, inter-rater agreement was only slight to fair (mk = 0.1-0.3). Agreement between the panel participants and the software was fair (wk = 0.24-0.45). Subjective third-party assessment agrees only fairly with the objective BCCT.core evaluation, just as the third-party participants do not agree well among each other. PMID:21852135
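
    As an illustration of how the quoted kappa statistics can be computed (the ratings below are synthetic, and quadratic weighting is an assumption about the scheme actually used), a short Python sketch for Cohen's kappa and weighted kappa on a four-point scale:

```python
# Sketch of the agreement statistics quoted in the abstract (kappa and weighted
# kappa) for two ratings of the same photographs on a four-point scale. The
# ratings below are synthetic, and quadratic weights are an assumption; the
# study's exact weighting scheme is not specified here.
import numpy as np


def kappa(r1, r2, n_categories=4, weighted=False):
    """Cohen's kappa (optionally quadratic-weighted) for two integer rating vectors in [0, n_categories)."""
    observed = np.zeros((n_categories, n_categories))
    for a, b in zip(r1, r2):
        observed[a, b] += 1
    observed /= observed.sum()
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    i, j = np.indices((n_categories, n_categories))
    w = ((i - j) / (n_categories - 1)) ** 2 if weighted else (i != j).astype(float)
    return 1.0 - (w * observed).sum() / (w * expected).sum()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    first_pass = rng.integers(0, 4, size=50)                            # a rater's first assessment
    second_pass = np.clip(first_pass + rng.integers(-1, 2, 50), 0, 3)   # mostly similar second assessment
    print(f"kappa   = {kappa(first_pass, second_pass):.2f}")
    print(f"w-kappa = {kappa(first_pass, second_pass, weighted=True):.2f}")
```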

  18. Freva - Freie Univ Evaluation System Framework for Scientific Infrastructures in Earth System Modeling

    NASA Astrophysics Data System (ADS)

    Kadow, Christopher; Illing, Sebastian; Kunst, Oliver; Schartner, Thomas; Kirchner, Ingo; Rust, Henning W.; Cubasch, Ulrich; Ulbrich, Uwe

    2016-04-01

    The Freie Univ Evaluation System Framework (Freva - freva.met.fu-berlin.de) is a software infrastructure for standardized data and tool solutions in Earth system science. Freva runs on high performance computers to handle customizable evaluation systems of research projects, institutes or universities. It combines different software technologies into one common hybrid infrastructure, including all features present in the shell and web environment. The database interface satisfies the international standards provided by the Earth System Grid Federation (ESGF). Freva indexes different data projects into one common search environment by storing the metadata information of the self-describing model, reanalysis and observational data sets in a database. This metadata system, with its advanced but easy-to-handle search tool, supports users, developers and their plugins in retrieving the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools with the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without any need to understand the different scripting languages. Facilitating the provision and usage of tools and climate data automatically increases the number of scientists working with the data sets and identifying discrepancies. The integrated web shell (shellinabox) adds a degree of freedom in the choice of the working environment and can be used as a gate to the research project's HPC. Plugins are able to integrate their results, e.g. post-processed data, into the user's database. This allows, for example, post-processing plugins to feed statistical analysis plugins, which fosters an active exchange between plugin developers of a research project. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a database

  19. A framework for improving the cost-effectiveness of DSM program evaluations

    SciTech Connect

    Sonnenblick, R.; Eto, J.

    1995-09-01

    The prudence of utility demand-side management (DSM) investments hinges on their performance, yet evaluating performance is complicated because the energy saved by DSM programs can never be observed directly but only inferred. This study frames and begins to answer the following questions: (1) how well do current evaluation methods perform in improving confidence in the measurement of energy savings produced by DSM programs; (2) in view of this performance, how can limited evaluation resources be best allocated to maximize the value of the information they provide? The authors review three major classes of methods for estimating annual energy savings: tracking database estimates (sometimes called engineering estimates), end-use metering, and billing analysis, and examine them in light of the uncertainties in current estimates of DSM program measure lifetimes. The authors assess the accuracy and precision of each method and construct trade-off curves to examine the costs of increases in accuracy or precision. Several approaches for improving evaluations for the purpose of assessing program cost-effectiveness are demonstrated. The methods can be easily generalized to other evaluation objectives, such as shared savings incentive payments.

  20. Evaluation of COSMO-ART in the Framework of the Air Quality Model Evaluation International Initiative (AQMEII)

    NASA Astrophysics Data System (ADS)

    Giordano, Lea; Brunner, Dominik; Im, Ulas; Galmarini, Stefano

    2014-05-01

    The Air Quality Model Evaluation International Initiative (AQMEII), coordinated by the EC-JRC and US-EPA, has promoted since 2008 research on regional air quality model evaluation across the atmospheric modelling communities of Europe and North America. AQMEII has now reached its Phase 2, which is dedicated to the evaluation of on-line coupled chemistry-meteorology models, as opposed to Phase 1, where only off-line models were considered. At the European level, AQMEII collaborates with the COST Action "European framework for on-line integrated air quality and meteorology modelling" (EuMetChem). All European groups participating in AQMEII performed simulations over the same spatial domain (Europe at a resolution of about 20 km), using the same simulation strategy (e.g. no nudging allowed) and, as far as possible, the same input data. The initial and boundary conditions (IC/BC) were shared between all groups. Emissions were provided by the TNO-MACC database for anthropogenic emissions and the FMI database for biomass burning emissions. Chemical IC/BC data were taken from IFS-MOZART output, and meteorological IC/BC from the ECMWF global model. Evaluation data sets were collected by the Joint Research Center (JRC) and include measurements from surface in situ networks (AirBase and EMEP), vertical profiles from ozone sondes and aircraft (MOZAIC), and remote sensing (AERONET, satellites). Since Phase 2 focuses on on-line coupled models, a special effort is devoted to the detailed speciation of particulate matter components, with the goal of studying feedback processes. For the AQMEII exercise, COSMO-ART has been run with 40 vertical levels and a chemical scheme that includes the SCAV module of Knote and Brunner (ACP 2013) for wet-phase chemistry and SOA treatment according to the VBS (volatility basis set) approach (Athanasopoulou et al., ACP 2013). The COSMO-ART evaluation shows that, next to a good performance in the meteorology, the gas phase chemistry is well